Can only display dashboards with time range of 1hr or less. Any range above 1hr fails

  • What Grafana version and what operating system are you using?
    Using v8.5.3 in a kubernetes setup with kube-prometheus stack

  • What are you trying to achieve?
    New install. Trying to load any dashboards with a time range greater than 1 hour

  • How are you trying to achieve it?
    By selecting any time range over 1 hour in the time picker

  • What happened?
    An error pops up in the top-right corner:
    Template variable service failed block: 01G5M5FBD9Z0DSZSGK5R7WBTPZ: decode postings: invalid size

  • Did you receive any errors in the Grafana UI or in related logs? If so, please tell us exactly what they were.
    Running `kubectl logs` against the Grafana pod produced the following:
    logger=context traceID=00000000000000000000000000000000 userId=1 orgId=1 uname=admin t=2022-06-15T20:23:27.23+0000 lvl=info msg="Request Completed" method=POST path=/api/ds/query status=400 remote_addr=10.26.0.135 time_ms=3 duration=3.646746ms size=234 referer="https://y1-grafana.authmetrics.com/d/hcl_ts_app/transaction-servers?orgId=1&var-DS_PROMETHEUS=Prometheus&var-namespace=commerce&var-job=authts-app&var-pod=All&var-resource=All&var-http_status=All&var-cluster=&from=now-6h&to=now" traceID=00000000000000000000000000000000
    logger=tsdb.prometheus t=2022-06-15T20:23:27.33+0000 lvl=eror msg="Range query failed" query="sum(rate(request_rest_status_total{ namespace=~"commerce", job=~"authmetrics-app", pod=~"All", resource=~"All", http_status=~"All" }[2m])) by (http_status)" err="execution: expanding series: block: 01G5M5FBD9Z0DSZSGK5R7WBTPZ: decode postings: invalid size"

Upon further inspection, I found the Prometheus logs full of the following error:

ts=2022-06-16T00:43:41.246Z caller=manager.go:636 level=warn component="rule manager" file=/etc/prometheus/rules/prometheus-prometheus-kube-prometheus-prometheus-rulefiles-0/monitoring-prometheus-kube-prometheus-node-exporter-25618cb9-18d3-4795-8fba-84b0503a9da5.yaml group=node-exporter name=NodeFilesystemSpaceFillingUp index=1 msg="Evaluating rule failed" rule="alert: NodeFilesystemSpaceFillingUp\nexpr: (node_filesystem_avail_bytes{fstype!="",job="node-exporter"} / node_filesystem_size_bytes{fstype!="",job="node-exporter"}\n * 100 < 10 and predict_linear(node_filesystem_avail_bytes{fstype!="",job="node-exporter"}[6h],\n 4 * 60 * 60) < 0 and node_filesystem_readonly{fstype!="",job="node-exporter"} ==\n 0)\nfor: 1h\nlabels:\n severity: critical\nannotations:\n description: Filesystem on {{ $labels.device }} at {{ $labels.instance }} has only\n {{ printf "%.2f" $value }}% available space left and is filling up fast.\n runbook_url: https://runbooks.prometheus-operator.dev/runbooks/node/nodefilesystemspacefillingup\n summary: Filesystem is predicted to run out of space within the next 4 hours.\n" err="expanding series: block: 01G5MMAYX9H653GA6Y1MFWGTYE: decode postings: invalid size"

Any help available for a noob?

Any updates? I'm currently running into the same issue.

I have Prometheus deployed via the Bitnami Helm chart on my on-site k8s cluster. The error has appeared ever since I enabled data persistence (otherwise the charts disappear whenever the pod restarts). We use CIFS as the storage backend.

This was a pain, but it turned out to be a filesystem issue.

The short version: Prometheus doesn't support NFS-like mounts, and the Azure Files share I was using isn't POSIX-compliant.
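If you want to confirm what your Prometheus data directory is actually sitting on, a quick check like this reports the filesystem type (run it inside the Prometheus container via `kubectl exec`; the `/prometheus` path is only the usual default, so `PROM_DATA` here is an assumption you should adjust to your own mount):

```shell
# Report the filesystem type backing the Prometheus data directory and warn
# on network filesystems, which the Prometheus TSDB does not support.
# PROM_DATA falls back to the current directory; inside the pod it is
# typically /prometheus (adjust for your deployment).
PROM_DATA="${PROM_DATA:-.}"
fstype=$(stat -f -c %T "$PROM_DATA")
echo "filesystem type: $fstype"
case "$fstype" in
  nfs*|cifs|smb*)
    echo "WARNING: network filesystem; Prometheus TSDB is not supported here" ;;
  *)
    echo "looks like a local filesystem" ;;
esac
```

If this prints a CIFS/SMB/NFS type, that matches the `decode postings: invalid size` corruption described above.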

This thread summarizes the issue:

https://github.com/prometheus/prometheus/issues/10679
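For anyone hitting this on Kubernetes: the fix amounts to backing the Prometheus PVC with block storage instead of a file share. A rough sketch of such a claim, where the storage class name is a placeholder for whatever block-backed provisioner your cluster offers (e.g. Azure Disk rather than Azure Files), not something from the thread above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data
spec:
  accessModes:
    - ReadWriteOnce        # block volumes attach to one node; fine for a single Prometheus pod
  storageClassName: my-block-storage-class   # placeholder: use a block-backed class, not CIFS/NFS/Azure Files
  resources:
    requests:
      storage: 50Gi
```

With a Helm-based install you would point the chart's persistence values at that storage class instead of creating the claim by hand; the exact value keys vary by chart version.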


Yes indeed, thank you very much for the response. For me it was filesystem-related too.

Sorry for inconveniencing you with the question earlier; my account was still on hold, so the question didn't come through until I had already resolved the issue.