Trying to get the Elasticsearch dashboard to run

Hey,

I tried the Elasticsearch dashboard 878 with Grafana. I added a datasource for my cluster, which is running Elasticsearch 5.4, and the Save & Test button shows green so far. I’m running Grafana with the default SQLite DB. After importing the dashboard, I get the error below on every panel in the board. Neither my cluster name nor my node names contain any “-” characters, and I’m kinda stuck on where to look next. Does anyone have an idea what may cause this problem?

{
    "root_cause": [
        {
            "type": "parse_exception",
            "reason": "parse_exception: Encountered \"\" at line 1, column 13.
Was expecting one of:
     ...
    \"(\" ...
    \"*\" ...
     ...
     ...
     ...
     ...
     ...
    \"[\" ...
    \"{\" ...
     ...
    "
        }
    ],
    "type": "search_phase_execution_exception",
    "reason": "all shards failed",
    "phase": "query",
    "grouped": true,
    "failed_shards": [
        {
            "shard": 0,
            "index": "bla.blub-1",
            "node": "l_XKqxlMQ2tXMk4vUocANw",
            "reason": {
                "type": "query_shard_exception",
                "reason": "Failed to parse query [cluster_name:]",
                "index_uuid": "okvqdJHqWi-Uesnd6k0hTw",
                "index": "bla.blub-1",
                "caused_by": {
                    "type": "parse_exception",
                    "reason": "parse_exception: Cannot parse 'cluster_name:': Encountered \"\" at line 1, column 13.
Was expecting one of:
     ...
    \"(\" ...
    \"*\" ...
     ...
     ...
     ...
     ...
     ...
    \"[\" ...
    \"{\" ...
     ...
    ",
                    "caused_by": {
                        "type": "parse_exception",
                        "reason": "parse_exception: Encountered \"\" at line 1, column 13.
Was expecting one of:
     ...
    \"(\" ...
    \"*\" ...
     ...
     ...
     ...
     ...
     ...
    \"[\" ...
    \"{\" ...
     ...
    "
                    }
                }
            }
        }
    ]
}

Another question concerns the Python script mentioned in the dashboard description. As far as I understand, Grafana gets the Elasticsearch metrics from the saved datasource. The description is:

“The below python or powershell script, will collect the metrics from the Elasticsearch API based on the interval set and publish the data to Elasticsearch”.

Does this mean I have to run this script in addition to Grafana? And why is it storing the metrics in Elasticsearch at all? Or is it the other way around: the Python script does the collecting, and the datasource is where the data for Grafana is supposed to be stored?

If I want to run this script, is there an option in Grafana to import it, or do I have to create a cron job to have it executed periodically?

Thanks for any input 🙂

Hi,

If you read the description at the bottom of the dashboard page, it states the following:

Elasticsearch Index

  • You no longer need to do this; instead, set the template variable to name.raw (v2) or name.keyword (v5). The same applies to cluster_name.
  • If your cluster_name or node names contain “-”, you will have to load a custom index template to set the “name” field to not_analyzed (see the examples after this list).
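
For reference, a terms-style template variable query against the keyword sub-field would look roughly like this (the exact field name depends on how the script indexes the data, so treat it as an assumption):

    {"find": "terms", "field": "cluster_name.keyword"}

And if your names do contain “-”, a minimal Elasticsearch 5.x index template along these lines should keep the name fields unanalyzed (the elasticsearch_metrics* index pattern is an assumption; adjust it to the index the script writes to):

    PUT _template/elasticsearch_metrics
    {
      "template": "elasticsearch_metrics*",
      "mappings": {
        "_default_": {
          "properties": {
            "name":         { "type": "keyword" },
            "cluster_name": { "type": "keyword" }
          }
        }
      }
    }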

Regarding your other question: Grafana does not store any time series data itself; it is only responsible for visualizing time series data by integrating with other databases, e.g. Graphite, InfluxDB, Elasticsearch, etc. So it is the latter: the script does the collecting and writes into Elasticsearch, and your Elasticsearch datasource is what Grafana queries for visualization. Regarding what the script does, the documentation states:

The below python or powershell script, will collect the metrics from the Elasticsearch API based on the interval set and publish the data to Elasticsearch.
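
To make that concrete, here is a minimal sketch of what such a collector does (not the actual script): it pulls one stats document from the Elasticsearch API and indexes it back into a metrics index. The URL, index name, and interval are all assumptions.

    import time
    from datetime import datetime

    import requests  # assumes the requests library is installed

    ES_URL = "http://localhost:9200"          # assumption: local cluster
    METRICS_INDEX = "elasticsearch_metrics"   # assumption: target index name
    INTERVAL_SECONDS = 60                     # assumption: collection interval

    while True:
        # Collect cluster-level metrics from the Elasticsearch API
        doc = requests.get(ES_URL + "/_cluster/health").json()
        doc["@timestamp"] = datetime.utcnow().isoformat()  # timestamp for Grafana

        # Publish the metrics document back into Elasticsearch
        requests.post(ES_URL + "/" + METRICS_INDEX + "/doc", json=doc)

        time.sleep(INTERVAL_SECONDS)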

You cannot use Grafana to run your script; Grafana is only an application responsible for visualizing data. A cron job may work, but it seems like the script runs forever once started, so running it as a service with systemd/init is probably the better fit (see the unit sketch below). I suggest you file an issue on GitHub for the dashboard in question, since this is not an official dashboard developed by Grafana Labs; see Issues · trevorndodds/elasticsearch-metrics · GitHub.
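
Something along these lines should work as a systemd unit; the path and script name are placeholders, so point ExecStart at wherever you put the script:

    [Unit]
    Description=Elasticsearch metrics collector for Grafana
    After=network.target

    [Service]
    # Placeholder path: point this at the actual collector script
    ExecStart=/usr/bin/python /opt/elasticsearch-metrics/collector.py
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

After dropping the file into /etc/systemd/system/, enable and start it with systemctl so it survives reboots.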

On another note, there are lots of collectors out there, e.g. collectd, Telegraf, Prometheus, etc., that usually have support/plugins for fetching Elasticsearch metrics and storing them in a time series database. I would suggest you take a look at those; a Telegraf example follows below.
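
For example, Telegraf ships with an elasticsearch input plugin. A minimal configuration that scrapes a local cluster and writes to a local InfluxDB looks roughly like this (both URLs and the database name are assumptions):

    [[inputs.elasticsearch]]
      servers = ["http://localhost:9200"]   # assumption: local cluster
      cluster_health = true                 # also collect cluster health metrics

    [[outputs.influxdb]]
      urls = ["http://localhost:8086"]      # assumption: local InfluxDB
      database = "telegraf"

You would then point a Grafana InfluxDB datasource at that database instead of Elasticsearch.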

Hope this helps

Marcus