So, the better approach to this issue is to take advantage of a few k6 features:
- The metrics generated by each scenario are automatically tagged with a scenario: <scenario-name> tag, without you having to specify anything (see the Advanced Examples in the scenarios docs).
- k6 can have sub-metrics based on tags; unfortunately, this currently happens only when there's a threshold defined on the sub-metric (see the Thresholds docs), so we have to define some bogus thresholds, as shown in the sketch after this list. We plan to improve the situation in the next few k6 versions ("Add explicit tracking and ignoring of metrics and sub-metrics", issue #1321 in grafana/k6 on GitHub).
- The end-of-test summary shows the sub-metric values by default.
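To make the second point concrete, here is the trick in isolation: a minimal sketch where my_scenario is just a placeholder name, and the bogus threshold exists solely to make k6 track and report the tagged sub-metric:

import http from 'k6/http';

export let options = {
  scenarios: {
    my_scenario: { executor: 'constant-vus', vus: 1, duration: '10s' },
  },
  thresholds: {
    // 'max>=0' always passes; it exists only so that k6 creates (and shows
    // in the end-of-test summary) this tagged sub-metric
    'http_req_duration{scenario:my_scenario}': ['max>=0'],
  },
};

export default function () {
  http.get('https://httpbin.test.k6.io/anything');
}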
Combining these facts, we can have a much nicer script like this:
import http from 'k6/http';
import { check, sleep } from 'k6';
export let options = {
  scenarios: {
    GET_1506_Bookings: {
      executor: 'constant-vus',
      vus: 10,
      duration: '30s',
      gracefulStop: '20s',
      exec: 'liveResSearchGetBookings',
      env: { DATE_SEARCH: '2020-10-29' },
    },
    GET_603_Bookings: {
      executor: 'constant-vus',
      vus: 30,
      duration: '30s',
      gracefulStop: '20s',
      exec: 'liveResSearchGetBookings',
      env: { DATE_SEARCH: '2020-11-25' },
    },
    GET_83_Bookings: {
      executor: 'constant-vus',
      vus: 60,
      duration: '30s',
      gracefulStop: '20s',
      exec: 'liveResSearchGetBookings',
      env: { DATE_SEARCH: '2020-05-20' },
    },
  },
  // So we get count in the summary, to demonstrate that the per-scenario
  // sub-metrics really are separate metrics
  summaryTrendStats: ['avg', 'min', 'med', 'max', 'p(90)', 'p(95)', 'p(99)', 'count'],
  thresholds: {
    // Intentionally empty. We'll programmatically define our bogus
    // thresholds (to generate the sub-metrics) below. In your real-world
    // load test, you can add any real thresholds you want here.
  },
};
for (let key in options.scenarios) {
  // Each scenario automatically tags the metrics it generates with its own name
  let thresholdName = `http_req_duration{scenario:${key}}`;
  // Check to prevent us from overwriting a threshold that already exists
  if (!options.thresholds[thresholdName]) {
    options.thresholds[thresholdName] = [];
  }
  // 'max>=0' is a bogus condition that will always be fulfilled
  options.thresholds[thresholdName].push('max>=0');
}
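// After this loop runs, options.thresholds holds one bogus entry per
// scenario, equivalent to writing this by hand:
//
//   thresholds: {
//     'http_req_duration{scenario:GET_1506_Bookings}': ['max>=0'],
//     'http_req_duration{scenario:GET_603_Bookings}': ['max>=0'],
//     'http_req_duration{scenario:GET_83_Bookings}': ['max>=0'],
//   },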
export function liveResSearchGetBookings(data) {
  // Request parameters (named params to avoid shadowing the global options above)
  const params = { headers: { Authorization: `Bearer foobar` } };
  let hostUrl = 'https://httpbin.test.k6.io/anything';
  let searchUrl = `/api/v1/events/search?page=1&pageSize=100&arrivalMin=${__ENV.DATE_SEARCH}`;
  let theCall = http.get(hostUrl + searchUrl, params);
  check(theCall, {
    'status is 200': (r) => r.status === 200,
  });
  // A constant sleep() is usually not ideal; arrival-rate executors
  // might be an even better fit than constant-vus here.
  sleep(1 + Math.random()); // sleep between 1s and 2s
}
This will result in a summary like this:
running (31.9s), 000/100 VUs, 1772 complete and 0 interrupted iterations
GET_1506_Bookings ✓ [======================================] 10 VUs 30s
GET_603_Bookings ✓ [======================================] 30 VUs 30s
GET_83_Bookings ✓ [======================================] 60 VUs 30s
✓ status is 200
checks.............................: 100.00% ✓ 1772 ✗ 0
data_received......................: 1.8 MB 57 kB/s
data_sent..........................: 217 kB 6.8 kB/s
http_req_blocked...................: avg=58.34ms min=1.8µs med=2.56µs max=1.48s p(90)=4.83µs p(95)=755.54ms p(99)=1.4s count=1772
http_req_connecting................: avg=15.64ms min=0s med=0s max=517.29ms p(90)=0s p(95)=228.04ms p(99)=430.51ms count=1772
http_req_duration..................: avg=184.79ms min=132.73ms med=138.22ms max=1.88s p(90)=364.67ms p(95)=441.73ms p(99)=468.98ms count=1772
✓ { scenario:GET_1506_Bookings }...: avg=184.14ms min=134ms med=139.18ms max=484.23ms p(90)=370.54ms p(95)=453.01ms p(99)=469.81ms count=171
✓ { scenario:GET_603_Bookings }....: avg=158.44ms min=133.63ms med=137.44ms max=1.72s p(90)=147.4ms p(95)=299.66ms p(99)=463.83ms count=540
✓ { scenario:GET_83_Bookings }.....: avg=198.31ms min=132.73ms med=138.61ms max=1.88s p(90)=392.43ms p(95)=452.25ms p(99)=470.7ms count=1061
http_req_receiving.................: avg=3.2ms min=38.2µs med=244.76µs max=1.43s p(90)=416.17µs p(95)=507.14µs p(99)=1.15ms count=1772
http_req_sending...................: avg=75.6µs min=27.77µs med=69.88µs max=1.08ms p(90)=108.13µs p(95)=121.69µs p(99)=186.6µs count=1772
http_req_tls_handshaking...........: avg=41.76ms min=0s med=0s max=1.23s p(90)=0s p(95)=509.12ms p(99)=1.15s count=1772
http_req_waiting...................: avg=181.51ms min=132.37ms med=137.83ms max=1.05s p(90)=363.66ms p(95)=441.54ms p(99)=467.49ms count=1772
http_reqs..........................: 1772 55.514138/s
iteration_duration.................: avg=1.73s min=1.13s med=1.7s max=3.79s p(90)=2.12s p(95)=2.36s p(99)=3.19s count=1772
iterations.........................: 1772 55.514138/s
vus................................: 38 min=38 max=100
vus_max............................: 100 min=100 max=100
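As the comment in the script hints, pacing iterations with a constant sleep() is usually not ideal. If the goal is a steady request rate, you could swap constant-vus for an arrival-rate executor instead. Here is a sketch of what the first scenario might look like rewritten that way (the rate and preAllocatedVUs values are illustrative, not tuned; this entry would replace GET_1506_Bookings in options.scenarios):

GET_1506_Bookings: {
  executor: 'constant-arrival-rate',
  rate: 5,              // start 5 iterations per timeUnit (illustrative value)
  timeUnit: '1s',
  duration: '30s',
  preAllocatedVUs: 10,  // VUs k6 initializes up front to sustain the rate
  gracefulStop: '20s',
  exec: 'liveResSearchGetBookings',
  env: { DATE_SEARCH: '2020-10-29' },
},

With an arrival-rate executor, k6 itself paces the iteration starts, so the sleep() call at the end of the iteration can usually be dropped.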
For more details, read the advanced examples in the scenarios docs and my previous forum responses to similar questions: "Ignore http calls made in Setup or Teardown in results?" (#2 by nedyalko) and "Separate metrics summary for each request in default function" (#2 by nedyalko).