Hi folks. Apologies in advance since my gut tells me this is a Grafana dashboard issue rather than a purely k6 issue. However, I've had success here before, so I'll take the risk of breaking the rules…
Metrics are forwarded to Prometheus via the remote-write output, with the docker-compose bundle running locally on my workstation.
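For reference, the run is kicked off roughly like this (a sketch rather than my exact invocation; it assumes the built-in experimental-prometheus-rw output, and the endpoint and testid value are placeholders):

```shell
# point k6 at the Prometheus remote-write endpoint exposed by the compose bundle
export K6_PROMETHEUS_RW_SERVER_URL=http://localhost:9090/api/v1/write

# run with the remote-write output and a testid tag for the dashboard's testid filter
k6 run -o experimental-prometheus-rw --tag testid=local-run-01 script.js
```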
Output Summary from my CLI
checks...................................: 100.00% ✓ 246726 ✗ 0
data_received............................: 127 MB 88 kB/s
data_sent................................: 352 MB 243 kB/s
dropped_iterations.......................: 1669 1.154816/s
group_duration...........................: avg=348.38ms min=48.5ms med=365.27ms max=1.47s p(75)=453.73ms p(90)=524.12ms p(99)=711.19ms
http_req_blocked.........................: avg=6.04ms min=1µs med=4µs max=1.47s p(75)=4µs p(90)=5µs p(99)=276.75ms
http_req_connecting......................: avg=6.02ms min=0s med=0s max=1.47s p(75)=0s p(90)=0s p(99)=276.16ms
✗ http_req_duration........................: avg=350.1ms min=42.42ms med=368.85ms max=1.47s p(75)=455.88ms p(90)=523.54ms p(99)=730.67ms
{ expected_response:true }.............: avg=350.1ms min=48.27ms med=368.86ms max=1.47s p(75)=455.88ms p(90)=523.54ms p(99)=730.67ms
✗ { test_type:reserve_gate_rush }........: avg=407.51ms min=53.54ms med=427.88ms max=1.29s p(75)=494.46ms p(90)=600.41ms p(99)=774.95ms
✗ { test_type:reserve_peak }.............: avg=325.26ms min=52.19ms med=339.15ms max=1.47s p(75)=442.72ms p(90)=479.52ms p(99)=686.22ms
✗ { test_type:resolve_gate_rush }........: avg=377.44ms min=42.42ms med=398.5ms max=1.46s p(75)=469.67ms p(90)=560.94ms p(99)=742.88ms
✗ { test_type:resolve_peak }.............: avg=303.07ms min=48.27ms med=312.61ms max=1.02s p(75)=429.89ms p(90)=465.39ms p(99)=633.24ms
✗ { test_type:session_get_gate_rush }....: avg=372.6ms min=48.6ms med=393.25ms max=1.13s p(75)=466.5ms p(90)=553.88ms p(99)=753.07ms
✗ { test_type:session_get_peak }.........: avg=299.32ms min=48.63ms med=307.27ms max=970.61ms p(75)=428.2ms p(90)=464.22ms p(99)=603.37ms
✗ { test_type:session_post_gate_rush }...: avg=396.03ms min=53.96ms med=420.37ms max=1.46s p(75)=483.66ms p(90)=586.05ms p(99)=766.4ms
✗ { test_type:session_post_peak }........: avg=325.41ms min=53.64ms med=339.59ms max=1.47s p(75)=442.98ms p(90)=480.38ms p(99)=677.1ms
✓ http_req_failed..........................: 0.00% ✓ 2 ✗ 257409
✓ { test_type:reserve_gate_rush }........: 0.00% ✓ 1 ✗ 54544
✓ { test_type:reserve_peak }.............: 0.00% ✓ 0 ✗ 62954
✓ { test_type:resolve_gate_rush }........: 0.00% ✓ 1 ✗ 54540
✓ { test_type:resolve_peak }.............: 0.00% ✓ 0 ✗ 62955
✓ { test_type:session_get_gate_rush }....: 0.00% ✓ 0 ✗ 5208
✓ { test_type:session_get_peak }.........: 0.00% ✓ 0 ✗ 6010
✓ { test_type:session_post_gate_rush }...: 0.00% ✓ 0 ✗ 5195
✓ { test_type:session_post_peak }........: 0.00% ✓ 0 ✗ 6003
http_req_receiving.......................: avg=170.05µs min=0s med=31µs max=452.26ms p(75)=37µs p(90)=48µs p(99)=115µs
http_req_sending.........................: avg=25.21µs min=7µs med=22µs max=5.4ms p(75)=27µs p(90)=35µs p(99)=61µs
http_req_tls_handshaking.................: avg=0s min=0s med=0s max=0s p(75)=0s p(90)=0s p(99)=0s
http_req_waiting.........................: avg=349.91ms min=42.4ms med=368.7ms max=1.47s p(75)=455.76ms p(90)=523.26ms p(99)=729.17ms
http_reqs................................: 257411 178.108065/s
iteration_duration.......................: avg=3.1s min=43.02ms med=3.24s max=7.26s p(75)=4.42s p(90)=5.35s p(99)=5.65s
iterations...............................: 246728 170.716273/s
transaction_time.........................: avg=347.010022 min=48.276 med=364.3525 max=1472.994 p(75)=452.593 p(90)=521.02 p(99)=708.79825
vus......................................: 479 min=0 max=1041
vus_max..................................: 1709 min=40 max=1709
running (24m05.3s), 0000/1709 VUs, 246728 complete and 0 interrupted iterations
peak_reserve ✓ [======================================] 000/319 VUs 14m0s 00.84 iters/s
peak_resolve ✓ [======================================] 000/320 VUs 14m0s 00.84 iters/s
peak_session_get ✓ [======================================] 000/034 VUs 14m0s 0.26 iters/s
peak_session_post ✓ [======================================] 000/035 VUs 14m0s 0.26 iters/s
gate_rush_reserve ✓ [======================================] 000/479 VUs 9m0s 001.02 iters/s
gate_rush_resolve ✓ [======================================] 000/465 VUs 9m0s 001.02 iters/s
gate_rush_session_get ✓ [======================================] 000/049 VUs 9m0s 00.32 iters/s
gate_rush_session_post ✓ [======================================] 000/048 VUs 9m0s 00.32 iters/s
thresholds on metrics 'http_req_duration, http_req_duration{test_type:reserve_gate_rush}, http_req_duration{test_type:reserve_peak}, http_req_duration{test_type:resolve_gate_rush}, http_req_duration{test_type:resolve_peak}, http_req_duration{test_type:session_get_gate_rush}, http_req_duration{test_type:session_get_peak}, http_req_duration{test_type:session_post_gate_rush}, http_req_duration{test_type:session_post_peak}' have been breached
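For context, those breached thresholds are keyed on the test_type tag in the script's options block; a minimal sketch of the shape (the 500ms and 1% limits and the tag values shown here are illustrative, not my real settings):

```javascript
export const options = {
  thresholds: {
    // overall latency threshold plus one per test_type tag,
    // matching the breached entries in the summary above
    'http_req_duration': ['p(99)<500'],
    'http_req_duration{test_type:reserve_peak}': ['p(99)<500'],
    'http_req_duration{test_type:reserve_gate_rush}': ['p(99)<500'],
    // error-rate thresholds per test_type (these passed)
    'http_req_failed{test_type:reserve_peak}': ['rate<0.01'],
    'http_req_failed{test_type:reserve_gate_rush}': ['rate<0.01'],
  },
};
```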
Filter selections set to ALL
**URL filter selection set to any value**
The question I have is whether or not this is normal/expected given my testId, scenario, and URL filter items. Each of my group names is unique, and each request has a tag:name pair set alongside its headers, which I presume maps to the URL filter in the dashboard (see the sketch below the bullets).
- when All is selected for each filter (testId, scenario, URL), the HTTP breakdown panel is null
- when I leave testId and scenario set to All and select a specific URL, then and only then will the HTTP breakdown panel visualize the metrics captured from the run.
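Here is roughly how the requests are grouped and tagged (a sketch with placeholder URL, group name, and tag values, not my exact script):

```javascript
import http from 'k6/http';
import { group, check } from 'k6';

export default function () {
  // each scenario wraps its requests in a uniquely named group
  group('reserve_peak', function () {
    const res = http.post(
      'http://localhost:8080/reserve', // placeholder URL
      JSON.stringify({ seat: 'A1' }),
      {
        // the name tag is what I expect the dashboard's URL filter to key on;
        // test_type is the tag the thresholds are keyed on
        tags: { name: 'reserve', test_type: 'reserve_peak' },
        headers: { 'Content-Type': 'application/json' },
      }
    );
    check(res, { 'status is 200': (r) => r.status === 200 });
  });
}
```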
Thoughts?
Thanks in advance,
PlayStay