Understanding response time (http_req_duration)

I have a configuration with 250 and 300 rps and I'm running a stress test with the ECS task min/max set to 1 (just to see how many requests one task can handle). I'm testing against a mock API where I intentionally set a 500ms delay to mimic a real API's response duration.

import http from 'k6/http';

export const options = {
  scenarios: {
    'test-api': {
      executor: 'constant-arrival-rate',
      duration: '15m',
      rate: 250, // and later 300  <------- See here
      timeUnit: '1s',
      preAllocatedVUs: 50,
      maxVUs: 5500,
    },
  },
};

export default function () {
  http.get('https://some-ecs-service-host-behind-alb/test');
}

Below are the summaries for 250 and 300 rps.

  1. With 250 rps: a single ECS task was able to handle all requests without crashing, with task min/max always at 1:
data_received..................: 304 MB 337 kB/s
data_sent......................: 11 MB  12 kB/s
dropped_iterations.............: 169    0.187659/s
http_req_blocked...............: avg=192.25µs min=174ns    med=279ns    max=277.79ms p(90)=348ns    p(95)=377ns
http_req_connecting............: avg=62.54µs  min=0s       med=0s       max=78.55ms  p(90)=0s       p(95)=0s
http_req_duration..............: avg=580.3ms  min=561.56ms med=571.09ms max=1s       p(90)=595.1ms  p(95)=630.07ms
{ expected_response:true }...: avg=580.3ms  min=561.56ms med=571.09ms max=1s       p(90)=595.1ms  p(95)=630.07ms
http_req_failed................: 0.00%  ✓ 0          ✗ 224831
http_req_receiving.............: avg=38.15µs  min=11.9µs   med=30.31µs  max=386.53ms p(90)=39.9µs   p(95)=43.36µs
http_req_sending...............: avg=23.67µs  min=13.68µs  med=23.22µs  max=434.87µs p(90)=28.42µs  p(95)=30.33µs
http_req_tls_handshaking.......: avg=125.84µs min=0s       med=0s       max=155.14ms p(90)=0s       p(95)=0s
http_req_waiting...............: avg=580.24ms min=561.51ms med=571.03ms max=1s       p(90)=595.03ms p(95)=630ms
http_reqs......................: 224831 249.653932/s
✓ { status:200 }...............: 224831 249.653932/s
✗ { status:401 }...............: 0      0/s
✗ { status:502 }...............: 0      0/s
✗ { status:503 }...............: 0      0/s
✗ { status:504 }...............: 0      0/s
iteration_duration.............: avg=580.54ms min=8.44µs   med=571.15ms max=1.07s    p(90)=595.3ms  p(95)=631.05ms
iterations.....................: 224831 249.653932/s
vus............................: 144    min=117      max=200
vus_max........................: 216    min=126      max=216
running (15m00.6s), 0000/0216 VUs, 224831 complete and 0 interrupted iterations
test-api ✓ [======================================] 0000/0216 VUs  15m0s  250.00 iters/s
  2. With 300 rps: single ECS tasks came up intermittently to handle the requests, with task min/max still at 1:
.................
WARN[0757] Request Failed       error="Get \"https://some-ecs-service-host-behind-alb/test\": request timeout"
WARN[0757] Request Failed       error="Get \"https://some-ecs-service-host-behind-alb/test\": request timeout"
WARN[0757] Request Failed       error="Get \"https://some-ecs-service-host-behind-alb/test\": request timeout"
WARN[0757] Request Failed       error="Get \"https://some-ecs-service-host-behind-alb/test\": request timeout"

data_received..................: 347 MB 382 kB/s
data_sent......................: 15 MB  17 kB/s
dropped_iterations.............: 14897  16.360598/s
http_req_blocked...............: avg=4.26ms  min=162ns    med=261ns   max=495.65ms p(90)=355ns   p(95)=414ns
http_req_connecting............: avg=1.41ms  min=0s       med=0s      max=171.79ms p(90)=0s      p(95)=0s
http_req_duration..............: avg=10.6s   min=58.85ms  med=10.18s  max=1m0s     p(90)=18.53s  p(95)=20.46s
{ expected_response:true }...:   avg=11.15s  min=567.46ms med=11.04s  max=51.8s    p(90)=18.78s  p(95)=20.73s
http_req_failed................: 9.48%  ✓ 24208      ✗ 230895
http_req_receiving.............: avg=39.58µs min=0s       med=27.37µs max=380.58ms p(90)=37.14µs p(95)=41.29µs
http_req_sending...............: avg=25.13µs min=12.49µs  med=23.24µs max=1.83ms   p(90)=30.65µs p(95)=35.01µs
http_req_tls_handshaking.......: avg=2.84ms  min=0s       med=0s      max=431.48ms p(90)=0s      p(95)=0s
http_req_waiting...............: avg=10.6s   min=58.8ms   med=10.18s  max=1m0s     p(90)=18.53s  p(95)=20.46s
http_reqs......................: 255103 280.166325/s
✓ { status:200 }...............: 230895 253.579941/s
✗ { status:401 }...............: 0      0/s
✗ { status:502 }...............: 0      0/s
✓ { status:503 }...............: 11468  12.594707/s
✓ { status:504 }...............: 12717  13.966418/s
iteration_duration.............: avg=10.6s   min=9.8µs    med=10.22s  max=1m0s     p(90)=18.53s  p(95)=20.46s
iterations.....................: 255103 280.166325/s
vus............................: 13     min=13       max=5500
vus_max........................: 5500   min=151      max=5500

running (15m10.5s), 0000/5500 VUs, 255103 complete and 0 interrupted iterations
test-api ✓ [======================================] 0000/5500 VUs  15m0s  300.00 iters/s

In the first summary with 250 rps, p(95)=630.07ms seems acceptable. But:

  1. Why is http_req_duration in the second summary with 300 rps so high (p(95)=20.73s for expected responses) compared to the first summary?
  2. Also, how should I interpret http_req_blocked and http_req_connecting in summary 2 (i.e. 300 rps), and why are they so high compared to summary 1?
  3. Is there any way I can print the individual response time (and other parameters) of every API hit within the run?
  1. Maybe you hit the tipping point of your system's performance and/or a potential bottleneck.
  2. Please refer to the official metrics documentation.
  3. Not sure how helpful that would be: in your case you have more than 230k API hits, and navigating them manually is hard. However, if you use e.g. InfluxDB to store metrics, k6 reports back a "line" (data point) per single hit.
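For question 3, a minimal sketch of logging per-request timings from the script itself, using the `res.timings` object on each k6 `Response` (be aware this gets very verbose at 250+ rps):

```javascript
import http from 'k6/http';

export default function () {
  const res = http.get('https://some-ecs-service-host-behind-alb/test');
  // Each Response exposes per-request timings in milliseconds.
  console.log(
    `status=${res.status} ` +
    `blocked=${res.timings.blocked}ms ` +
    `connecting=${res.timings.connecting}ms ` +
    `waiting=${res.timings.waiting}ms ` +
    `duration=${res.timings.duration}ms`
  );
}
```

Alternatively, running with `k6 run --out json=results.json script.js` writes every metric sample as a JSON line to a file, which is easier to post-process than console output.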