Transform function with dashboard variables

Hi All,

I’m trying to do some math with 3 signals in Flux. Obviously the easiest way would be to use the Transform function. But because the dashboard has a variable to switch to another device, the field names in the transform get the device serial baked in when the transform is created. This is a problem, because when I change the device (the boat in the top left corner) the transform function no longer works.
The screenshot shows how I want it to work. It works until I change the device. What is the way to make this work?

Kind regards,

Yannick

Welcome @yannick9

Is there any reason why you could not use a map() function in your Flux query and do the math there? Kind of like having a table with degrees Celsius and creating a second column with degrees Fahrenheit.
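For example, a minimal sketch of that Celsius-to-Fahrenheit idea (the field name temperature and the new column fahrenheit are just illustrative, not from your setup):

from(bucket: v.defaultBucket)
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_field"] == "temperature")
  // add a second column with the same value converted to Fahrenheit
  |> map(fn: (r) => ({ r with fahrenheit: r._value * 1.8 + 32.0 }))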

No, I’m just not experienced and have no idea how to do that.
Can you point me in the right direction, or do you have a link to a tutorial?

BTW, this is my Flux query now:

from(bucket: v.defaultBucket)
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> aggregateWindow(every: v.windowPeriod, fn: mean)
  |> filter(fn: (r) => r["_measurement"] =~ /${DEVICE:regex}/)
  |> filter(fn: (r) => r["_field"] == "Current" or r["_field"] == "Voltage" or r["_field"] == "Speed")
  |> yield(name: "mean")

I do not have a tutorial that deals with join() functions (there are probably many if you search), but below is what I am thinking. Obviously I cannot check my syntax. Perhaps @ldrascic or @yosiasz can review and offer up their input. You might want to put this in Influx Data Explorer first and see how the tables look before moving it to Grafana.

t1 = from(bucket: v.defaultBucket)
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> aggregateWindow(every: v.windowPeriod, fn: mean)
  |> filter(fn: (r) => r["_measurement"] =~ /${DEVICE:regex}/)
  |> filter(fn: (r) => r["_field"] == "Current")
  |> yield(name: "Current")

t2 = from(bucket: v.defaultBucket)
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> aggregateWindow(every: v.windowPeriod, fn: mean)
  |> filter(fn: (r) => r["_measurement"] =~ /${DEVICE:regex}/)
  |> filter(fn: (r) => r["_field"] == "Voltage")
  |> yield(name: "Voltage")

t3 = from(bucket: v.defaultBucket)
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> aggregateWindow(every: v.windowPeriod, fn: mean)
  |> filter(fn: (r) => r["_measurement"] =~ /${DEVICE:regex}/)
  |> filter(fn: (r) => r["_field"] == "Speed")
  |> map(fn: (r) => ({r with MS_to_Kmh: r._value * 3.6}))
  |> yield(name: "Speed")

t4 = join(tables: {Current: t1, Voltage: t2}, on: ["_time"])
  // join() suffixes the overlapping _value columns with the table names,
  // so the joined values end up in _value_Current and _value_Voltage.
  // Power (W) = current * voltage.
  |> map(fn: (r) => ({ r with Watthour: float(v: r._value_Current) * float(v: r._value_Voltage) }))
  |> yield(name: "Watthour")

t5 = join(tables: {Speed: t3, Watthour: t4}, on: ["_time"])
  // Watthour and MS_to_Kmh each exist in only one input, so their names are kept as-is.
  |> map(fn: (r) => ({ r with Wh_KM: float(v: r.Watthour) / float(v: r.MS_to_Kmh) }))
  |> yield(name: "Wh_KM")

Hi @yannick9,
Here is my query:

from(bucket: "test")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "cpu") // instead of cpu you will have to use variable
  |> filter(fn: (r) => r["cpu"] == "cpu-total") // you don't need this filter
  |> filter(fn: (r) => r["_field"] == "usage_user" or r["_field"] == "usage_system" or r["_field"] == "usage_idle")
  |> pivot(rowKey:["_time"], columnKey: ["_field"], valueColumn: "_value")  
  |> rename(columns: {usage_user: "current", usage_system: "voltage", usage_idle: "speed"})  // renaming columns to make more clear
  |> map(fn: (r) => ({ r with _value: (r.current * r.voltage) / (r.speed * 3.6) }))  
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
  |> yield(name: "result")

Since all your _fields come from the same _measurement, you can use the pivot() function to pivot the table by _time. After that you can use the map() function to calculate (r.current * r.voltage) / (r.speed * 3.6).
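Adapted to your dashboard, a sketch of the same idea with your DEVICE variable and your field names (assuming they are exactly Current, Voltage and Speed, as in your earlier query) could look like this:

from(bucket: v.defaultBucket)
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] =~ /${DEVICE:regex}/)
  |> filter(fn: (r) => r["_field"] == "Current" or r["_field"] == "Voltage" or r["_field"] == "Speed")
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
  // power (W) divided by speed (km/h) gives consumption in Wh/km
  |> map(fn: (r) => ({ r with _value: (r.Current * r.Voltage) / (r.Speed * 3.6) }))
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
  |> yield(name: "Wh_per_km")

If the three fields are not written at exactly the same timestamps, the pivoted rows will contain nulls; in that case an aggregateWindow() before the pivot can line the rows up first.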

(Screenshots of the table before and after pivoting are omitted here.)

Thanks @grant2 for tagging me here. I really enjoyed solving this task. 🙂

Best regards,
ldrascic

@ldrascic That’s awesome and obviously way more efficient. Thank you for solving & sharing!

@yannick9 Usually I learn best by quickly trying different things in Influx Data Explorer and seeing how the table(s) look, but in this case I just did it off the top of my head.

Thanks guys! I will try it next week; right now I’m very busy.
I will let you know how it turns out!
