Missing data with statsd output

I’m running a fairly small performance test and using the statsd output. I’m running the statsd Prometheus exporter so that I can view the metrics in Prometheus. Using the default statsd settings, it looks like I lose around 50% of the statsd data, so iteration counts and http_reqs counts are much lower than they should be. Is this expected behaviour? Or is there something I need to be doing differently?

This is the K6 script:

import http from "k6/http";
import { check, group, sleep } from "k6";
import { Rate } from "k6/metrics";

// Options
export let options = {
    stages: [
        // Linearly ramp up from 1 to 50 VUs over the first 10 seconds
        { target: 50, duration: "10s" },
        // Hold at 50 VUs for the next 40 seconds
        { target: 50, duration: "40s" },
        // Linearly ramp down from 50 to 0 VUs over the last 10 seconds
        { target: 0, duration: "10s" }
        // Total execution time will be ~1 minute
    ],
};

// Main function
export default function () {
    let response = http.get("https://test.k6.io/");
}

This is the docker-compose.yaml I’m using to run the test:

version: '3.7'
services:
  statsdex:
    image: "quay.io/prometheus/statsd-exporter:v0.20.2"
    ports:
      - 9102:9102
  k6:
    image: "loadimpact/k6:0.32.0"
    command: ["run", "/scripts/k6.js","-q", "-o","statsd"]
    depends_on:
      - statsdex
    environment:
      K6_STATSD_ADDR: "statsdex:9125"
      K6_STATSD_ENABLE_TAGS: "true"
      K6_STATSD_PUSH_INTERVAL: 1s
      #K6_STATSD_BUFFER_SIZE: 7000
    volumes:
      - "./scripts:/scripts"

This is the K6 output:

k6_1        |   execution: local
k6_1        |      script: /scripts/k6.js
k6_1        |      output: statsd (statsdex:9125)
k6_1        | 
k6_1        |   scenarios: (100.00%) 1 scenario, 50 max VUs, 1m30s max duration (incl. graceful stop):
k6_1        |            * default: Up to 50 looping VUs for 1m0s over 3 stages (gracefulRampDown: 30s, gracefulStop: 30s)
k6_1        | 
k6_1        | 
k6_1        |      data_received..................: 271 MB 4.5 MB/s
k6_1        |      data_sent......................: 3.1 MB 51 kB/s
k6_1        |      http_req_blocked...............: avg=679.56µs min=0s           med=25.7µs   max=404.04ms p(90)=82.2µs   p(95)=121µs
k6_1        |      http_req_connecting............: avg=201.86µs min=0s           med=0s       max=130.11ms p(90)=0s       p(95)=0s
k6_1        |      http_req_duration..............: avg=103.59ms min=56.47ms      med=100.78ms max=749.61ms p(90)=112.28ms p(95)=118.72ms
k6_1        |        { expected_response:true }...: avg=103.59ms min=56.47ms      med=100.78ms max=749.61ms p(90)=112.28ms p(95)=118.72ms
k6_1        |      http_req_failed................: 0.00%  ✓ 0    ✗ 23946
k6_1        |      http_req_receiving.............: avg=759.13µs min=-32.840899ms med=217.9µs  max=549.74ms p(90)=681.45µs p(95)=1.3ms
k6_1        |      http_req_sending...............: avg=164.99µs min=41.1µs       med=86µs     max=40.98ms  p(90)=224.85µs p(95)=322.3µs
k6_1        |      http_req_tls_handshaking.......: avg=421.52µs min=0s           med=0s       max=296.29ms p(90)=0s       p(95)=0s
k6_1        |      http_req_waiting...............: avg=102.67ms min=56.14ms      med=100.22ms max=749.3ms  p(90)=111.35ms p(95)=117.3ms
k6_1        |      http_reqs......................: 23946  399.161598/s
k6_1        |      iteration_duration.............: avg=104.66ms min=88.45ms      med=101.12ms max=749.97ms p(90)=112.75ms p(95)=119.55ms
k6_1        |      iterations.....................: 23946  399.161598/s
k6_1        |      vus............................: 1      min=1  max=50
k6_1        |      vus_max........................: 50     min=50 max=50

Then I’ll run curl -s http://localhost:9102/metrics | grep 'k6_http_reqs{'

Actual results:

k6_http_reqs{expected_response="true",method="GET",name="https://test.k6.io/",proto="HTTP/2.0",scenario="default",status="200",tls_version="tls1.2",url="https://test.k6.io/"} 10924

Expected results:

k6_http_reqs{expected_response="true",method="GET",name="https://test.k6.io/",proto="HTTP/2.0",scenario="default",status="200",tls_version="tls1.2",url="https://test.k6.io/"} 23946

Hi @paulgrav, welcome to the forum.

Is this expected behaviour?

Obviously not :slight_smile:

From what I know about Prometheus and its instrumentation, the thing reporting the metrics clears its metrics each time it gets pulled, so when Prometheus pulls the next time it gets only the metrics for the time since the last pull. I would expect the exporter to do the same.

Can you make certain that:

  1. Nothing pulls from the exporter besides you.
  2. You only pull after k6 has run to the end.

Also, you can run k6 with -v to see write statistics. Technically statsd goes over UDP, which might drop packets, but that is unlikely, especially on a local network or the loopback interface. Then again, maybe some other error happens in between :man_shrugging:
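
For reference, the only change that would be needed in the docker-compose.yaml from the first post is adding -v to the k6 command (just a sketch of that one change; everything else stays the same):

  k6:
    image: "loadimpact/k6:0.32.0"
    # Same command as before, with -v added to enable verbose/debug logging
    command: ["run", "-v", "/scripts/k6.js", "-q", "-o", "statsd"]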

Hope this helps

Prometheus will pull from the exporter (via HTTP request to /metrics) but the exporter manages the TTL of metrics, not Prometheus. Making an HTTP request to the exporter has no effect on the availability of the metrics.
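
One quick sanity check for that (my addition, reusing the same curl command as above): scrape the exporter a couple of times after the test has finished and compare the values; since scraping does not clear the exporter's state, the counter should be identical each time.

# Run after the k6 test has finished; both scrapes should report
# the same k6_http_reqs value, showing that an HTTP scrape does not
# reset or clear the exporter's metrics.
curl -s http://localhost:9102/metrics | grep 'k6_http_reqs{'
sleep 5
curl -s http://localhost:9102/metrics | grep 'k6_http_reqs{'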

I tried running with -v; I just see additional debug info, but nothing to indicate dropped packets.

N.B. If I lower K6_STATSD_PUSH_INTERVAL, I can improve the situation, but I’m not sure I should have to lower it, and I’m not sure why it improves things.

N.B. If I lower K6_STATSD_PUSH_INTERVAL, I can improve the situation, but I’m not sure I should have to lower it, and I’m not sure why it improves things.

This looks like packets are getting dropped, and lowering the push interval makes for smaller packets, which possibly get dropped less :man_shrugging:

Is the exporter on a different machine? Can you move it to the same one and try again?

but nothing to indicate dropped packets.

You won’t see anything, because UDP doesn’t report dropped packets … they just get dropped :wink:
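
That said (my addition, not something k6 itself reports): the Linux kernel does keep counters for UDP datagrams it had to discard, for example when a socket's receive buffer is full. If the exporter's image or the host has the usual networking tools, something like this on the receiving side can at least hint at where datagrams are being lost:

# Kernel-level UDP statistics; look for "packet receive errors"
# and "receive buffer errors" (wording varies between distros).
netstat -su

# Or read the raw counters directly (InErrors / RcvbufErrors columns):
grep Udp: /proc/net/snmp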

The statsd exporter and K6 are running side-by-side in Docker on the same machine. I’m pretty sure you could replicate this if you were to run the docker-compose.yaml in my original post.

There’s no documentation to indicate that K6_STATSD_PUSH_INTERVAL should be smaller than the default 1s. Why is the default 1s? On what basis should I set the push interval?

I did play around with your docker-compose. I do get less data loss, but still some (around 25%, for example), which dropped to around 5% after I set the MTU in docker-compose.

I also ran both of them completely locally, which has the same effect; running the exporter in Docker but k6 outside of it works better as well.
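
For anyone who wants to try the MTU change: this is roughly what I mean, as a sketch added at the top level of the docker-compose.yaml from the first post (1450 is only an example value; it should be at or below the MTU of the host's network interface):

# Overrides the MTU of the default bridge network shared by the
# k6 and statsdex containers.
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1450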

There’s no documentation to indicate that K6_STATSD_PUSH_INTERVAL should be smaller than the default 1s. Why is the default 1s? On what basis should I set the push interval?

To be honest, it is 1s because that seemed like a good value in most cases. But given that k6’s statsd output is UDP only (a library restriction) and that the library should not send very big UDP datagrams, I see no problem with setting it lower. I can definitely report that in all the variants I tried (or at least the ones I wasn’t too tired to check), setting the push interval to 100ms works perfectly, and at least I don’t detect any dropped packets.

So you can definitely lower it, and we likely should lower the default as well, since 1500 bytes isn’t all that much given the format … I would guess no more than 50-80 metrics per datagram, and given that k6 in general produces a lot more than that, it shouldn’t really matter.
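
Concretely, that would be a change like this in the environment block of the k6 service from the first post (just a sketch; 100ms is simply the value that worked in my testing):

    environment:
      K6_STATSD_ADDR: "statsdex:9125"
      K6_STATSD_ENABLE_TAGS: "true"
      # Push every 100ms instead of the default 1s, so each UDP
      # datagram carries fewer metrics
      K6_STATSD_PUSH_INTERVAL: 100ms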

It is still strange that we drop packets :(. I did a test with the latest version of the library and this is still a problem, an even bigger one in my testing :man_facepalming: . There are some open bugs about this, but it seems like it hasn’t been fixed, nor have the related problems :man_shrugging:
Not using a buffered client also doesn’t help, and running with v0.29.0 … it is even worse :man_facepalming:

edit: I have opened an issue