Unfortunately, that’s currently not possible in a straightforward way. "Add explicit tracking and ignoring of metrics and sub-metrics" (grafana/k6#1321) is a proposal that attempts to tackle this issue, but I can’t give any guarantees about when it, or something like it, will be implemented.
That said, depending on your specific use case, there are some workarounds. You might not know that k6 tags the HTTP metric values it measures, so you can differentiate between metrics from HTTP requests that returned a 200 status and those that returned a 30x (i.e. redirect) status. The easiest way to do that is to use an external output and explore the raw data k6 produces.
You can also differentiate between requests in the end-of-test summary by setting thresholds on sub-metrics, based on the response status tag. Here’s a simple example:
import http from "k6/http";
import { sleep } from "k6";

export let options = {
  vus: 20,
  duration: "30s",
  thresholds: {
    "http_req_duration{status:302}": ["p(99)<300"], // Measure only redirects
    "http_req_duration{status:200}": ["p(99)<2500"], // Measure only non-redirects
  },
};

export default function () {
  // This will redirect to a page that will take 2 seconds to return
  http.get("https://httpbin.test.loadimpact.com/redirect-to?url=https%3A%2F%2Fhttpbin.test.loadimpact.com%2Fdelay%2F2&status_code=302");
  sleep(Math.random() + 1); // Random sleep between 1s and 2s
}
This will result in an end-of-test summary somewhat like this:
data_received..............: 231 kB 7.7 kB/s
data_sent..................: 33 kB 1.1 kB/s
http_req_blocked...........: avg=69.35ms min=2.73µs med=3.35µs max=1.07s p(90)=5.55µs p(95)=1.07s
http_req_connecting........: avg=21.26ms min=0s med=0s max=329.37ms p(90)=0s p(95)=328.2ms
http_req_duration..........: avg=1.12s min=135.11ms med=251.2ms max=2.37s p(90)=2.18s p(95)=2.19s
✓ { status:200 }...........: avg=2.16s min=2.13s med=2.15s max=2.37s p(90)=2.19s p(95)=2.2s
✗ { status:302 }...........: avg=165.44ms min=135.11ms med=149.9ms max=360.92ms p(90)=206.07ms p(95)=212.72ms
http_req_receiving.........: avg=171.23µs min=29.97µs med=95.88µs max=3.19ms p(90)=163.07µs p(95)=190.5µs
http_req_sending...........: avg=89.55µs min=33.33µs med=77.28µs max=1.68ms p(90)=127.29µs p(95)=140.28µs
http_req_tls_handshaking...: avg=39.87ms min=0s med=0s max=617.06ms p(90)=0s p(95)=615.41ms
http_req_waiting...........: avg=1.12s min=134.96ms med=250.99ms max=2.37s p(90)=2.18s p(95)=2.19s
http_reqs..................: 309 10.299796/s
iteration_duration.........: avg=3.97s min=3.31s med=3.89s max=5.43s p(90)=4.6s p(95)=5.14s
iterations.................: 140 4.666574/s
vus........................: 20 min=20 max=20
vus_max....................: 20 min=20 max=20
Another potential workaround would be to use custom metrics together with the timings values in the HTTP Response object. For a request chain, timings should contain only the metrics for the last request in the chain.
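For example, here’s a minimal sketch of that approach, assuming you only want to track the duration of the final, non-redirected response; the non_redirect_duration metric name is just an illustration:

import http from "k6/http";
import { sleep } from "k6";
import { Trend } from "k6/metrics";

// Hypothetical custom metric, named here only for illustration
let nonRedirectDuration = new Trend("non_redirect_duration", true);

export default function () {
  let res = http.get("https://httpbin.test.loadimpact.com/redirect-to?url=https%3A%2F%2Fhttpbin.test.loadimpact.com%2Fdelay%2F2&status_code=302");
  // res.timings should reflect only the last request in the redirect chain
  if (res.status === 200) {
    nonRedirectDuration.add(res.timings.duration);
  }
  sleep(Math.random() + 1);
}

That would give you a separate non_redirect_duration entry in the end-of-test summary, which you can also use in thresholds.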
Or, you can set the maxRedirects option to 0, so that k6 doesn’t follow HTTP redirects automatically, and build a simple wrapper around the k6/http module that handles redirects manually and again uses custom metrics to track precisely what interests you.
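Here’s a rough sketch of what such a wrapper could look like; the getWithManualRedirects() helper and the two custom metric names are made up for this example, they’re not part of the k6 API:

import http from "k6/http";
import { sleep } from "k6";
import { Trend } from "k6/metrics";

export let options = {
  maxRedirects: 0, // don't let k6 follow redirects automatically
};

// Hypothetical custom metrics, purely for illustration
let redirectDuration = new Trend("redirect_req_duration", true);
let finalDuration = new Trend("final_req_duration", true);

// Simple wrapper that follows redirects manually and tracks
// the redirect hops and the final response separately
function getWithManualRedirects(url, maxHops) {
  let res = http.get(url);
  let hops = 0;
  while (res.status >= 300 && res.status < 400 && hops < maxHops) {
    redirectDuration.add(res.timings.duration);
    res = http.get(res.headers["Location"]);
    hops++;
  }
  finalDuration.add(res.timings.duration);
  return res;
}

export default function () {
  getWithManualRedirects("https://httpbin.test.loadimpact.com/redirect-to?url=https%3A%2F%2Fhttpbin.test.loadimpact.com%2Fdelay%2F2&status_code=302", 5);
  sleep(Math.random() + 1);
}

This way every individual request in the chain gets its own metric values, instead of only the last one.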
In summary, k6 is pretty flexible, and, as seen in #1321, we’d like to make it even more so, but there are plenty of workarounds available even now.