I am getting some odd results in Datadog when I define custom metrics in k6.
I have a custom counter metric (success_counter) that is incremented every time a specific HTTP call succeeds.
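Roughly how the metric is defined in my script (the endpoint and check below are simplified placeholders, not the actual test):

import http from 'k6/http';
import { check } from 'k6';
import { Counter } from 'k6/metrics';

// Custom counter, incremented only when the call succeeds
const successCounter = new Counter('success_counter');

export default function () {
  const res = http.get('https://example.com/endpoint'); // placeholder URL
  if (check(res, { 'status is 200': (r) => r.status === 200 })) {
    successCounter.add(1);
  }
}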
From the k6 results, I can see that it was successful 311 times, but in Datadog (DD) it shows 1.71K.
The DD formula is:
cumsum(sum:k6.success_counter{*}.as_count())
The resampling is set to 1.
The same formula for the built-in http_reqs metric gives a fairly accurate result (1.21K in DD vs. 1244 in k6).
Does anybody know why I get such a large number for my custom metric? I have tried multiple times, and the value in DD is always roughly 5.4 times higher than in k6.
Upon further investigation, I found out that we’ve had this issue before, but haven’t had time to fix it. It’ll be fixed along with this feature or maybe sooner.
Update:
The fix is being tested. It’ll be released soon.
The fix is released to production. http_reqs is now reported as a rate (RPS) and the rest of the counter metrics as sums.
Sorry @mostafa, I am still having an issue with it. The custom metric only shows correctly if I use this formula:
cumsum(max:k6.my_metric{*})
and only if I select the exact time range of the test. If I include too much time before/after the test (in the Datadog timeframe selection), the numbers don't match anymore.
The resolution of the graphs in Datadog is really tricky, because we send metrics at different resample rates (1~10s), while the graphs try to compensate and re-aggregate them to show accurate data points.
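One thing that may help (an untested sketch, assuming a 10-second flush interval and a placeholder metric name) is pinning the rollup in the query so Datadog doesn't re-aggregate based on the selected timeframe:
cumsum(sum:k6.my_metric{*}.as_count().rollup(sum, 10))
The rollup(sum, 10) part tells Datadog to sum each 10-second bucket explicitly, although very long timeframes may still force a coarser interval.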