Are you getting any error with this query? Seems reasonably written…
One thing: you don't need the 'r with …' spread when you map. I would ditch all columns except the ones needed rather than add even more columns to my table, so more along the lines of 'map(fn: (r) => ({_time: r._time, _value: r._value / 3600.0, _field: "WattHour"}))'
If you specifically want 10s, you should set it as a fixed window rather than let the dashboard pick one dynamically, so: 'every: 10s'
And finally, I am doubtful about your maths: you won't get Wh as a unit by simply summing your data and dividing by 3600. It depends entirely on how your data is sampled. Firstly, if you have gaps in your data, your cumulative sum will fall well short. Secondly, dividing by 3600 without taking the aggregate window into account is simply wrong: dividing by 3600 is only correct if your aggregate window is exactly 1 second.
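To make the unit argument concrete, here is a small Python sketch (not Flux) of the arithmetic; the 100 W constant load and the 10-minute window are made-up numbers for illustration:

```python
# Why the aggregate window matters when converting W to Wh.
# Assumed scenario: a constant 100 W load over exactly one hour.
window_s = 600.0                       # 10-minute aggregate windows
n_windows = int(3600 / window_s)       # 6 windows per hour

# Each window reports the MEAN power, as aggregateWindow(fn: mean) would.
window_means_w = [100.0] * n_windows

# Correct: energy per window = mean W * window seconds / 3600 s-per-hour.
energy_wh = sum(w * window_s / 3600.0 for w in window_means_w)
print(energy_wh)   # ~100 Wh, as expected for 100 W sustained for 1 h

# Naive: sum the means and divide by 3600 -- only valid for 1 s windows.
naive_wh = sum(window_means_w) / 3600.0
print(naive_wh)    # ~0.167 Wh, wrong by a factor of window_s
```

With a 1 second window the two formulas coincide, which is the only case where a bare "divide by 3600" is right.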
Try this instead:
myWindow = 600.0 //unit is seconds !!
myDuration = duration(v:(string(v:myWindow)+"s"))
from(bucket: "homeassistant")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "W")
|> filter(fn: (r) => r["entity_id"] == "efergy815686")
|> filter(fn: (r) => r["_field"] == "value")
|> aggregateWindow(every: myDuration, fn: mean)
|> map(fn: (r) => ({_value: r._value * myWindow / 3600.0, _time: r._time, _field: "WattHour"}))
|> cumulativeSum(columns: ["_value"])
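If it helps to see what the pipeline computes end to end, here is a rough Python sketch (not Flux) of the same three steps on made-up data; all names and the 200 W / 30 s sampling are illustrative only:

```python
# Mimic aggregateWindow(fn: mean) |> map(Wh) |> cumulativeSum on fake data.
my_window = 600.0  # seconds, mirrors the myWindow variable in the query

# Fake input: 200 W reported every 30 s for one hour.
samples = [(t, 200.0) for t in range(0, 3600, 30)]

# aggregateWindow(every: myDuration, fn: mean): mean power per window.
buckets = {}
for t, w in samples:
    buckets.setdefault(int(t // my_window), []).append(w)
means = [sum(v) / len(v) for _, v in sorted(buckets.items())]

# map: mean W * window seconds / 3600 -> Wh per window.
wh_per_window = [m * my_window / 3600.0 for m in means]

# cumulativeSum: running total of energy.
cum, total = [], 0.0
for e in wh_per_window:
    total += e
    cum.append(total)

print(cum[-1])  # ~200 Wh for a constant 200 W over one hour
```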
You can set the aggregate window in the 'myWindow' variable. Make sure it is larger than the maximum interval between your samples (your guaranteed minimum sample rate) if you don't want errors due to gaps in your data. At the same time, if you make it too large, you will lose accuracy in your cumulative sum due to reduced granularity. Plot the data with points so you can see the effects of the chosen window size.
An alternative to a fixed window size is to use v.windowPeriod, but then you will need:
- some scripting to verify what window Grafana set automatically, and to use that to adjust your calculation
- to still make sure the minimum window size is larger than or equal to your sampling interval
I am making a big fuss about the sampling interval and gaps because my data is sampled 'on change', with a long minimum interval between reports to preserve bandwidth. Flux at this stage does not offer a function to fill in the gaps (i.e. interpolate: Proposal – Time Interpolation · Issue #2428 · influxdata/flux · GitHub), so you have to be cautious if you sample the way I do.
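Here is a small Python sketch (not Flux) of what gaps do to this kind of query; the "100 W reported twice in an hour" scenario and all names are made up for illustration:

```python
# 'On change' sampling: the load sits at a constant 100 W, but the meter
# only reports twice in the hour, so most 10-minute windows are empty.
window_s = 600.0
samples = {0.0: 100.0, 3000.0: 100.0}   # report time (s) -> watts

# Bucket the raw samples into the six 10-minute windows.
buckets = {k: [] for k in range(6)}
for t, w in samples.items():
    buckets[int(t // window_s)].append(w)

# Naive: empty windows contribute nothing to the cumulative sum.
naive_wh = sum(sum(v) / len(v) * window_s / 3600.0
               for v in buckets.values() if v)

# Gap-filled: carry the last known value into empty windows (roughly what
# the linked interpolation proposal would let Flux do natively).
last, filled_wh = 0.0, 0.0
for k in range(6):
    if buckets[k]:
        last = sum(buckets[k]) / len(buckets[k])
    filled_wh += last * window_s / 3600.0

print(naive_wh)    # ~33.3 Wh: four of the six windows silently dropped
print(filled_wh)   # ~100 Wh: the correct total for a constant 100 W hour
```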
Good luck!
10 min and 1 h windows: similar results but different resolution.