I am still not able to get the desired outcome.
Problem statement:
InfluxDB 2.0 with Flux: I have a bucket "ShellyInfo" that logs information about my IoT devices (Shelly Cloud):
ID
Model
IP
Firmware_version
New_Version
Wifi_Connected
Wifi_IP
Wifi_rssi
Etc.
The data flow is as follows:
Shelly device >> MQTT >> Node-RED >> InfluxDB
I just can't get the table display right. What I am trying to get is a simple table of single values per device, i.e. as if it were an Excel spreadsheet: a single row per device showing its latest values.
The best view I have been able to get with the query and transformations is:
Influx is powerful but hard to learn. I think what you are looking for is the pivot() function:
from(bucket: "ShellyInfo")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_field"] == "value")
|> filter(fn: (r) => r["_measurement"] == "ID" or r["_measurement"] == "fw_ver" or r["_measurement"] == "ip" or r["_measurement"] == "model" or r["_measurement"] == "new_fw" or r["_measurement"] == "wifi_sta.connected" or r["_measurement"] == "wifi_sta.ip" or r["_measurement"] == "wifi_sta.rssi" or r["_measurement"] == "wifi_sta.ssid")
|> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
|> yield(name: "last")
This will change your data format into a kind of classic SQL table; after this you can apply new filters.
This comes from the key-value pair paradigm: before this transformation you have to consider your data as one big table with columns like this:
time,field,value
0,wifi,100kb
1,wifi,120kb
2,ram,1GB
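For illustration, after pivot() that sample becomes one row per timestamp, with one column per field and nulls where a field has no value at that time (hypothetical data, just to show the shape):
time,ram,wifi
0,,100kb
1,,120kb
2,1GB,
Once the data is in that shape you can filter on the new columns directly, e.g. |> filter(fn: (r) => exists r.wifi).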
Yes, I think you want one line for each timestamp. What is happening here is a grouping issue, I think:
before your pivot you have to group your data by time, which will result in one table per unique timestamp.
After this you will be able to apply the pivot to transform each (grouped) table into one line:
from(bucket: "ShellyInfo")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_field"] == "value")
|> filter(fn: (r) => r["_measurement"] == "ID" or r["_measurement"] == "fw_ver" or r["_measurement"] == "ip" or r["_measurement"] == "model" or r["_measurement"] == "new_fw" or r["_measurement"] == "wifi_sta.connected" or r["_measurement"] == "wifi_sta.ip" or r["_measurement"] == "wifi_sta.rssi" or r["_measurement"] == "wifi_sta.ssid")
|> group(columns: ["_time"])
|> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
|> group()
|> yield(name: "query_name")
After the pivot you will have the same number of lines (one table per timestamp).
Finally, you can regroup these into one table with the group() function without parameters (just to help Grafana display it).
PS: once you have the right data displayed in Grafana, you can use the "Labels to fields" transformation for better column name formatting.
OK, understood: we first group the data by time.
Then we pivot the (multiple) tables and consolidate them into one table (for the view), one line per timestamp.
Then we group again to allow Grafana to display the data.
(Thank you, I am now seeing/learning how to use the group and pivot functions.)
I understand what you did. Now, with this query:
from(bucket: "ShellyInfo")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_field"] == "value")
|> filter(fn: (r) => r["_measurement"] == "ID" or r["_measurement"] == "fw_ver" or r["_measurement"] == "ip" or r["_measurement"] == "model" or r["_measurement"] == "new_fw" or r["_measurement"] == "wifi_sta.connected" or r["_measurement"] == "wifi_sta.ip" or r["_measurement"] == "wifi_sta.rssi" or r["_measurement"] == "wifi_sta.ssid")
|> group(columns: ["_time"])
|> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
|> group()
|> yield(name: "query_name")
I think I might have a data type issue in how I store the data (Boolean as string), because when I apply the query I get the following error. I just want to confirm whether it is a data type issue or a query issue:
**invalid: runtime error @5:4-5:29: group: schema collision detected: column "_value" is both of type bool and string**
Just want to confirm: data type or query, so I can focus on figuring it out.
Yes, you can add a toString() function before your first grouping.
This will convert every value to a string and allow you to group it and pivot.
It's a bit hard without testing, but I think you are close to the final debug.
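A minimal sketch of where toString() would slot into your query above (the long _measurement filter is omitted here for brevity):
from(bucket: "ShellyInfo")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_field"] == "value")
// cast every _value to string so bool and string values no longer collide when grouped
|> toString()
|> group(columns: ["_time"])
|> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")
|> group()
|> yield(name: "query_name")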
Great thread! Just my $0.02…you should be able to use Node-RED (change node with JSONata expression) and have the Boolean go into InfluxDB (instead of a string).
Thank you @grant2. Yes, you're right, I can do that at the Node-RED level. Once I get this right I think I will go back and set the data type correctly. Thank you.
Oh, I see it now, I was blind!
You created a single _field for all your values, and you use _measurement as if it were a _field.
It's not a standard structure; you should consider changing it.
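For reference, a more conventional structure would be one measurement per device, with each property written as its own field; in line protocol it could look roughly like this (the measurement, tag and values below are made up, just to show the shape):
shelly_status,device=shelly-kitchen model="SHSW-25",fw_ver="20230913-112003",ip="192.168.1.50",wifi_rssi=-61i
With that kind of structure, the first query I shared (pivoting on _field) would work as-is.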
You can still make this work by changing your pivot like this:
from(bucket: "ShellyInfo")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_field"] == "value")
|> filter(fn: (r) => r["_measurement"] == "ID" or r["_measurement"] == "fw_ver" or r["_measurement"] == "ip" or r["_measurement"] == "model" or r["_measurement"] == "new_fw" or r["_measurement"] == "wifi_sta.connected" or r["_measurement"] == "wifi_sta.ip" or r["_measurement"] == "wifi_sta.rssi" or r["_measurement"] == "wifi_sta.ssid")
|> group(columns: ["_time"])
|> pivot(rowKey:["_time"], columnKey: ["_measurement"], valueColumn: "_value")
|> group()
|> yield(name: "query_name")
Making the changes, I see that the second pivot uses columnKey _measurement (rather than _field), which I wasn't sure was going to work. Having said that, it is displaying data, but still in a "heat map" format; I'm not sure whether it's Grafana, the query, or both that is struggling to pull the values into one row here.
This is the query:
from(bucket: "ShellyInfo")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_field"] == "value")
|> filter(fn: (r) => r["_measurement"] == "ID" or r["_measurement"] == "fw_ver" or r["_measurement"] == "ip" or r["_measurement"] == "model" or r["_measurement"] == "new_fw" or r["_measurement"] == "wifi_sta.connected" or r["_measurement"] == "wifi_sta.ip" or r["_measurement"] == "wifi_sta.rssi" or r["_measurement"] == "wifi_sta.ssid")
|> group(columns: ["_time"])
|> pivot(rowKey:["_time"], columnKey: ["_measurement"], valueColumn: "_value")
|> group()
|> yield(name: "query_name")
Arf, not sure; I think it's a grouping issue before the pivot. Try grouping by measurement in the first group, or maybe by your "field"; it's hard to imagine without seeing the data.
Could you extract your data, anonymise it and post a sample here? That way I could insert it into my InfluxDB and debug it. Or you could re-insert new data with a better structure, and the first query I shared with you should work.
Nice! Now it will be easy to filter. To get your last entry you can group by your "shelly devices" column to get one table per unique Shelly device name.
After this you can apply the last() function to get the latest row for each Shelly device (a minimal sketch with last() follows the query below).
The same way, you could use any of max(), mean(), first() or sum() to get that aggregate for each table.
from(bucket: "ShellyInfo")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_field"] == "value")
|> filter(fn: (r) => r["_measurement"] == "ID" or r["_measurement"] == "fw_ver" or r["_measurement"] == "ip" or r["_measurement"] == "model" or r["_measurement"] == "new_fw" or r["_measurement"] == "wifi_sta.connected" or r["_measurement"] == "wifi_sta.ip" or r["_measurement"] == "wifi_sta.rssi" or r["_measurement"] == "wifi_sta.ssid")
|> toString()
|> group(columns: ["_ID"])
|> pivot(rowKey:["_time"], columnKey: ["_measurement"], valueColumn: "_value")
|> group()
|> yield(name: "query_name")
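A rough, untested sketch of the last()-per-device idea described above, reusing the grouping from this query (the column name "_ID" is taken from your query; last(column: "_time") keeps only the most recent row of each device's table):
from(bucket: "ShellyInfo")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_field"] == "value")
|> filter(fn: (r) => r["_measurement"] == "ID" or r["_measurement"] == "fw_ver" or r["_measurement"] == "ip" or r["_measurement"] == "model" or r["_measurement"] == "new_fw" or r["_measurement"] == "wifi_sta.connected" or r["_measurement"] == "wifi_sta.ip" or r["_measurement"] == "wifi_sta.rssi" or r["_measurement"] == "wifi_sta.ssid")
|> toString()
|> group(columns: ["_ID"])
|> pivot(rowKey:["_time"], columnKey: ["_measurement"], valueColumn: "_value")
// one table per device: keep only its most recent row
|> last(column: "_time")
|> group()
|> yield(name: "latest_per_device")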
Man, almost there, thank you so much!
The query above groups nicely but it still does not give unique results. If I used the last() function it returned just one row, so I used the unique() function and we now have the right view!
from(bucket: "ShellyInfo")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_field"] == "value")
|> filter(fn: (r) => r["_measurement"] == "ID" or r["_measurement"] == "fw_ver" or r["_measurement"] == "ip" or r["_measurement"] == "model" or r["_measurement"] == "new_fw" or r["_measurement"] == "wifi_sta.connected" or r["_measurement"] == "wifi_sta.ip" or r["_measurement"] == "wifi_sta.rssi" or r["_measurement"] == "wifi_sta.ssid")
|> toString()
|> unique()
|> group(columns: ["_ID"])
|> pivot(rowKey:["_time"], columnKey: ["_measurement"], valueColumn: "_value")
|> group()
|> yield(name: "query_name")
Thank you for the learning journey here! One would think it would be easier to display a simple table out of Influx 2.0
Now, what would be nice (but I am sure a billion times more complex, given this is Influx 2.0) is this: where there is no value in a row for a column, use the last value, so it is always a complete row. Example:
Not really complex if you think about it.
The trick is to get rid of empty strings first; this way you will only get the last non-empty string in the final result.
To achieve this you can use a classic filter function like this:
|> filter(fn: (r) => r["_ID"] != "")
But maybe you can filter your measurements earlier, or look at the fill() function to fill and replace null values.
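If you would rather carry the last known value forward than drop incomplete rows, Flux's fill() function with usePrevious can do that per column after the pivot. A rough sketch (the "ip" column is just an example; fill() works on one column at a time, so repeat it for each column you want filled):
from(bucket: "ShellyInfo")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_field"] == "value")
|> toString()
|> group(columns: ["_ID"])
|> pivot(rowKey:["_time"], columnKey: ["_measurement"], valueColumn: "_value")
// replace nulls in the "ip" column with the previous non-null value within each device table
|> fill(column: "ip", usePrevious: true)
|> group()
|> yield(name: "filled")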