Hi,
I’m currently exploring Grafana and InfluxDB 2.x in a Docker setup.
The DB content comes from some energy/current/voltage meters, two inverters, a BMS, and a power wall.
I normally log each “unique” variable (derived values are calculated with Flux in Grafana, for example) at a 5 s interval, and I generally use Grafana and Flux to extract useful information.
This is an example of my database (I’ve cut out the time-related columns; there are no other columns besides what you see):
What I would like to know:
- Is the database structured correctly (e.g. regarding cardinality)? I normally have a “measurement” representing the entity that “samples” the data (inverter1, power wall, inverter2, etc.), with the “fields” being the sampled values (current, voltage, frequency).
- Since I generally need queries that “reduce” the datapoints to a single value (e.g. kWh/day or kWh/month), is there a simple way to trigger, say daily, a query that populates a separate, permanent bucket or measurement, so that I can fetch those results directly instead of running the calculation query every time?
- Is it also reasonable to trigger some kind of downsampling, e.g. for data older than 1 yr? If so, how can I achieve that?
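Regarding the second point, an InfluxDB task scheduled inside the database itself might be what I’m after — here is a minimal sketch of what I have in mind, where the task name, the offset, and the destination bucket "eMgr_daily" are placeholders I made up:

```flux
// Hypothetical daily rollup task: integrate power into a kWh value per day
// and write it to a separate long-lived bucket ("eMgr_daily" is an assumption).
option task = {name: "daily_kwh", every: 1d, offset: 5m}

from(bucket: "eMgr")
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "sdm230" and r._field == "L1_P")
    |> integral(unit: 1h)                                    // power (W) over hours -> Wh
    |> map(fn: (r) => ({r with _value: r._value / 1000.0}))  // Wh -> kWh
    |> set(key: "_field", value: "kWh_daily")
    |> duplicate(column: "_stop", as: "_time")               // to() requires a _time column
    |> to(bucket: "eMgr_daily")
```

Dashboards could then read the small "eMgr_daily" bucket directly instead of re-integrating the raw 5 s samples on every refresh.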
For reference, this is an example query:
import "date"

// Midnight of the day seven days ago — the partial first day in the range
weekAgo = date.truncate(t: -7d, unit: 1d)
weekdays = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]

from(bucket: "eMgr")
    |> range(start: -7d)
    |> filter(fn: (r) => r["_measurement"] == "sdm230")
    |> filter(fn: (r) => r["_field"] == "L1_P")
    |> filter(fn: (r) => date.truncate(t: r._time, unit: 1d) != weekAgo)  // drop the partial day at the start of the range
    |> window(every: 1d)
    |> integral(unit: 1h)  // integrate power (W) over time -> Wh per day
    |> map(fn: (r) => ({r with _name: weekdays[date.weekDay(t: r._start)]}))
    |> group()
    |> drop(columns: ["_field", "_measurement"])
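For the downsampling question, my current idea is a second task that periodically aggregates the raw 5 s samples into a lower-resolution bucket with unlimited retention, while the raw "eMgr" bucket gets a ~1 yr retention policy so old raw data expires on its own. A sketch, where "eMgr_lowres" and the 1 h resolution are assumptions:

```flux
// Hypothetical downsampling task: hourly means of all series into a
// long-retention bucket ("eMgr_lowres" is a made-up name).
option task = {name: "downsample_1h", every: 1h}

from(bucket: "eMgr")
    |> range(start: -task.every)
    |> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
    |> to(bucket: "eMgr_lowres")
```

Does that match how downsampling is usually done, or is there a better built-in mechanism?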
Regards,