I’m running a fairly small performance test and using the statsd output. I’m running the statsd Prometheus exporter so that I can view the metrics in Prometheus. Using the default statsd settings, it looks like I lose around 50% of the statsd data, so iteration counts and http_reqs counts are much lower than they should be. Is this expected behaviour? Or is there something I need to be doing differently?
This is the K6 script:
import http from "k6/http";

// Options
export let options = {
  stages: [
    // Linearly ramp up from 1 to 50 VUs during the first 10 seconds
    { target: 50, duration: "10s" },
    // Hold at 50 VUs for the next 40 seconds
    { target: 50, duration: "40s" },
    // Linearly ramp down from 50 to 0 VUs over the last 10 seconds
    { target: 0, duration: "10s" }
    // Total execution time will be ~1 minute
  ],
};

// Main function
export default function () {
  http.get("https://test.k6.io/");
}
This is the docker-compose.yaml I’m using to run the test:
version: '3.7'
services:
  statsdex:
    image: "quay.io/prometheus/statsd-exporter:v0.20.2"
    ports:
      - 9102:9102
  k6:
    image: "loadimpact/k6:0.32.0"
    command: ["run", "/scripts/k6.js", "-q", "-o", "statsd"]
    depends_on:
      - statsdex
    environment:
      K6_STATSD_ADDR: "statsdex:9125"
      K6_STATSD_ENABLE_TAGS: "true"
      K6_STATSD_PUSH_INTERVAL: 1s
      #K6_STATSD_BUFFER_SIZE: 7000
    volumes:
      - "./scripts:/scripts"
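Since k6 ships statsd metrics over UDP, losses like this often come down to buffering on one side or the other. This is a hypothetical tuning sketch, not a verified fix: `--statsd.read-buffer` is statsd_exporter's flag for enlarging its UDP read buffer, and the 7000 client-side buffer size (the commented-out value above) is just an illustrative number.

```yaml
# Hypothetical tuning sketch -- values are illustrative, not verified.
services:
  statsdex:
    image: "quay.io/prometheus/statsd-exporter:v0.20.2"
    # Enlarge the exporter's UDP read buffer so bursts of k6 samples
    # are less likely to be dropped at the socket.
    command: ["--statsd.read-buffer=2097152"]
    ports:
      - 9102:9102
  k6:
    image: "loadimpact/k6:0.32.0"
    command: ["run", "/scripts/k6.js", "-q", "-o", "statsd"]
    environment:
      K6_STATSD_ADDR: "statsdex:9125"
      K6_STATSD_ENABLE_TAGS: "true"
      K6_STATSD_PUSH_INTERVAL: 1s
      # Larger client-side buffer => fewer, bigger datagrams per flush.
      K6_STATSD_BUFFER_SIZE: 7000
```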
k6 outputs:

k6_1 | execution: local
k6_1 | script: /scripts/k6.js
k6_1 | output: statsd (statsdex:9125)
k6_1 |
k6_1 | scenarios: (100.00%) 1 scenario, 50 max VUs, 1m30s max duration (incl. graceful stop):
k6_1 | * default: Up to 50 looping VUs for 1m0s over 3 stages (gracefulRampDown: 30s, gracefulStop: 30s)
k6_1 |
k6_1 | data_received..................: 271 MB 4.5 MB/s
k6_1 | data_sent......................: 3.1 MB 51 kB/s
k6_1 | http_req_blocked...............: avg=679.56µs min=0s med=25.7µs max=404.04ms p(90)=82.2µs p(95)=121µs
k6_1 | http_req_connecting............: avg=201.86µs min=0s med=0s max=130.11ms p(90)=0s p(95)=0s
k6_1 | http_req_duration..............: avg=103.59ms min=56.47ms med=100.78ms max=749.61ms p(90)=112.28ms p(95)=118.72ms
k6_1 | { expected_response:true }...: avg=103.59ms min=56.47ms med=100.78ms max=749.61ms p(90)=112.28ms p(95)=118.72ms
k6_1 | http_req_failed................: 0.00% ✓ 0 ✗ 23946
k6_1 | http_req_receiving.............: avg=759.13µs min=-32.840899ms med=217.9µs max=549.74ms p(90)=681.45µs p(95)=1.3ms
k6_1 | http_req_sending...............: avg=164.99µs min=41.1µs med=86µs max=40.98ms p(90)=224.85µs p(95)=322.3µs
k6_1 | http_req_tls_handshaking.......: avg=421.52µs min=0s med=0s max=296.29ms p(90)=0s p(95)=0s
k6_1 | http_req_waiting...............: avg=102.67ms min=56.14ms med=100.22ms max=749.3ms p(90)=111.35ms p(95)=117.3ms
k6_1 | http_reqs......................: 23946 399.161598/s
k6_1 | iteration_duration.............: avg=104.66ms min=88.45ms med=101.12ms max=749.97ms p(90)=112.75ms p(95)=119.55ms
k6_1 | iterations.....................: 23946 399.161598/s
k6_1 | vus............................: 1 min=1 max=50
k6_1 | vus_max........................: 50 min=50 max=50
Then I’ll run curl -s http://localhost:9102/metrics | grep 'k6_http_reqs{'
Actual results:
k6_http_reqs{expected_response="true",method="GET",name="https://test.k6.io/",proto="HTTP/2.0",scenario="default",status="200",tls_version="tls1.2",url="https://test.k6.io/"} 10924
Expected results:
k6_http_reqs{expected_response="true",method="GET",name="https://test.k6.io/",proto="HTTP/2.0",scenario="default",status="200",tls_version="tls1.2",url="https://test.k6.io/"} 23946
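For what it's worth, the two counters above work out to a loss of roughly 54%, which matches the "around 50%" estimate. A quick shell check, with both counts hardcoded from the run above:

```shell
# Counts taken from the run above: k6's own http_reqs total
# vs. the counter the statsd exporter ended up with.
expected=23946
actual=10924
awk -v a="$actual" -v e="$expected" \
  'BEGIN { printf "lost: %.0f%%\n", (e - a) / e * 100 }'
# -> lost: 54%
```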