Dashboard time range changes data shown

I apologize in advance this is probably an obvious newbie question, but I’ve spent most of the day trying to get my head around it.

As part of the Python build script for a project, I report some data (the size of the build, how long it took, whether it succeeded or failed) to Graphite. This happens any time there's a new commit, so some days nothing is submitted and other days it's every 30 minutes for a few hours.
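For context, submitting a datapoint like this is just Carbon's plaintext protocol: one line of `<metric path> <value> <unix timestamp>` sent to port 2003. A minimal sketch of the reporting side (the metric name, host, and port here are placeholders, not my actual setup):

```python
import socket
import time

def format_metric_line(metric, value, timestamp):
    """One datapoint in Carbon's plaintext format: "path value unix_ts\n"."""
    return f"{metric} {value} {int(timestamp)}\n"

def send_metric(metric, value, host="localhost", port=2003):
    """Send a single datapoint to Carbon's plaintext listener."""
    line = format_metric_line(metric, value, time.time())
    with socket.create_connection((host, port)) as sock:
        sock.sendall(line.encode("ascii"))

# e.g. after a build finishes:
# send_metric("fabricator.builds.totalWarnings", 24)
```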

In either Graphite or Grafana, everything looks normal when viewing the 'Last 6 hours' time range. However, as soon as I switch to even 'Last 12 hours' (let alone the 7 or even 14 days I want this dashboard to show), the data simply disappears. This is even the case for a singlestat panel using the keepLastValue function.

The query inspector shows that only null values are returned. There’s 60 seconds between each datapoint.

I also tried increasing maxDataPoints to 99,999, but every datapoint returned is null. If I switch back to 'Last 6 hours' it does return a value (along with mostly nulls, but that's fine).

So I'm assuming the larger time range triggers a form of aggregation: Graphite only returns a value for every (x) seconds, and my one datapoint (or at least small number of datapoints) falls through the cracks.
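That suspicion matches how Whisper rollups work: when points are aggregated into a coarser archive, a bucket only keeps a value if enough of its input points are non-null (xFilesFactor, default 0.5); otherwise the whole bucket becomes null. A toy model of that behaviour (not Graphite's actual code):

```python
def rollup(points, bucket_size, x_files_factor=0.5):
    """Aggregate fixed-interval points (None = no data) into coarser buckets.

    Mimics Whisper's rollup rule: a bucket only gets a value when the
    fraction of non-null input points reaches x_files_factor; otherwise
    the whole bucket becomes None. Aggregation here is 'average'
    (Whisper's default).
    """
    out = []
    for i in range(0, len(points), bucket_size):
        bucket = points[i:i + bucket_size]
        known = [p for p in bucket if p is not None]
        if len(known) / len(bucket) >= x_files_factor:
            out.append(sum(known) / len(known))
        else:
            out.append(None)  # too sparse: the data "disappears"
    return out

# One build per hour, stored at 10-minute resolution, rolled up to 30 min:
fine = [24, None, None, None, None, None] * 3   # 3 hours of data
print(rollup(fine, 3))   # [None, None, None, None, None, None]
```

With only one known point per three-point bucket, every bucket falls below the 0.5 threshold and the whole series nulls out, which is exactly the symptom described.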

What’s the correct way to set up Graphite for my use case? I just want a line chart showing each build (however far apart they might be) over the last two weeks showing various info. Ideally I want one point per build (and not every point between each build) with a line showing the trend between them.

I hope that’s enough information, thanks so much for any help you can provide!

Look at your Graphite storage-schemas config; it defines data rollup (aggregation).


Thanks for the reply, torkel.

This is my storage-schema. I’ve tried a few different configurations but decided on this because there will never be more than one build in a 10min period.

[carbon]
pattern = ^carbon.
retentions = 30s:7d,5m:30d,1h:720d

[builds]
pattern = ^fabricator.builds
retentions = 10m:30d,30m:90d,1h:360d

[default]
pattern = .*
retentions = 30s:6h,5m:30d,10m:360d
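One thing worth double-checking is which section a metric actually matches: Graphite walks storage-schemas.conf top to bottom and uses the first section whose regex matches the metric name. The check can be reproduced with Python's re module (a sanity-check helper I'm sketching here, not part of Graphite):

```python
import re

# The sections above, in file order (first match wins).
schemas = [
    ("carbon", "^carbon."),
    ("builds", "^fabricator.builds"),
    ("default", ".*"),
]

def match_schema(metric):
    """Return the name of the first section whose pattern matches the metric."""
    for name, pattern in schemas:
        if re.search(pattern, metric):
            return name

print(match_schema("fabricator.builds.totalWarnings"))  # builds
```

Note that storage-schemas only apply when a .wsp file is first created; an existing file keeps whatever retentions it was created with until it is rebuilt with whisper-resize, so a file created under the `[default]` section's 30s:6h first archive would keep that behaviour even after the schema changed.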

This might be a clearer example.

Using whisper-fetch for the last three hours (filtering out the 'None' entries) I correctly see the last three legit datapoints.

root@bpr-guiltyspark:~# whisper-fetch --from=1535126400 /apps/docker/graphite/data/storage/whisper/fabricator/builds/totalWarnings.wsp | grep -v None
1535132730      24.000000
1535133930      24.000000
1535135970      24.000000

However, when I expand it to 7 hours, nothing is returned. I even tried changing the Whisper aggregation method to 'last'.

So how do I ensure that, regardless of the size of the time range, it includes all non-null datapoints within that range?
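For what it's worth, the keepLastValue() behaviour from my singlestat attempt amounts to a forward fill. A simplified Python model (not Graphite's implementation) of what it does to a series:

```python
def keep_last_value(series, limit=None):
    """Forward-fill None gaps, like Graphite's keepLastValue(series, limit).

    limit caps how many consecutive Nones are filled (None = unlimited),
    matching keepLastValue's optional 'limit' argument.
    """
    out, last, gap = [], None, 0
    for v in series:
        if v is None and last is not None and (limit is None or gap < limit):
            out.append(last)   # repeat the last known value across the gap
            gap += 1
        else:
            out.append(v)
            if v is not None:
                last, gap = v, 0
    return out

print(keep_last_value([24, None, None, 30, None]))  # [24, 24, 24, 30, 30]
```

The catch is that this runs on whatever the storage layer hands back, so if the rollup has already turned everything into nulls there is no "last value" left to keep.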

I have no storage-aggregation file, so taking defaults according to the documentation.

Having a very similar issue. Would love to know what the deal is. Given that data is populated when scoped to the last 6 hours and nothing at all appears when the range is expanded, it seems like a Grafana bug. My storage schema is configured to retain data for well over 6 hours.