I found that when the time range in a chart is large enough, Grafana reduces the granularity of the response by aggregating over a larger interval. That sounds reasonable, since you wouldn't be able to see that level of detail anyway.
Some of my 1-minute sample metrics are written to InfluxDB as rates (e.g. IOPS) to avoid having to compute them in each panel. To have a chart show the total over all devices, the panel query uses sum(iops).
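For context, the panel query looks roughly like this (the measurement name "disk" and field name "iops" are illustrative, not my actual schema):

```sql
-- "iops" is a per-minute rate precomputed at write time;
-- sum() adds it up across all devices per Grafana interval
SELECT sum("iops")
FROM "disk"
WHERE $timeFilter
GROUP BY time($__interval) fill(null)
```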
Now when the time range gets larger, one Grafana interval holds more than one of my samples, so the chart shows two or three times the actual value. I can compensate using $__interval_ms in the calculation, but it gets ugly.
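The workaround I mean would divide the sum back down by the number of 1-minute samples that land in each interval, something like this (names again illustrative):

```sql
-- $__interval_ms / 60000 = number of 1-minute samples per Grafana interval
SELECT sum("iops") / ($__interval_ms / 60000)
FROM "disk"
WHERE $timeFilter
GROUP BY time($__interval) fill(null)
```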
A nested query can also do the trick: first aggregate with sum() over the original 1-minute interval, then apply another aggregate to reduce the data for display (and now we can argue about whether I want mean() or max() for that outer step).
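Sketched as an InfluxQL subquery (again with hypothetical names), that would be:

```sql
-- inner query: sum across devices at the original 1-minute resolution
-- outer query: reduce to the display interval (mean() vs max() is the open question)
SELECT mean("total_iops")
FROM (
  SELECT sum("iops") AS "total_iops"
  FROM "disk"
  WHERE $timeFilter
  GROUP BY time(1m)
)
WHERE $timeFilter
GROUP BY time($__interval)
```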
Should I get rid of the rates and differences that I write to InfluxDB (mostly to keep the numbers smaller) and leave all of it to aggregation in Grafana and InfluxDB?