Export RPS and response time for each timestamp (second) to CSV

Hello everyone,

I might have a stupid question, but after looking through the docs and this forum, I couldn't find an answer.

We don't use Grafana/Datadog/New Relic in our company, but Dynatrace, so I can't export per-second metrics to those tools.

So I would like to export:

  • Requests per second (for each second of the load test)
  • Response time (for each second of the load test)

During our load tests, we have sometimes seen the RPS drop for 3 or 4 seconds, but the summary result doesn't give us enough data to debug it.
With this export I would be able to graph the RPS per second like other tools do (JMeter/NeoLoad/…).

Is it possible to get this in a CSV or JSON export with k6, and how do I do it?

Big thanks.
Alex

Hi, welcome to the forum :slight_smile:

If you’re using the k6 CLI tool, you can export the generated metrics with the JSON or CSV outputs. Then you can aggregate them per second using any tool you prefer, such as jq for JSON. You can see some examples on the docs page.
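
For example, here is a minimal sketch of the per-second aggregation in Python (not from the docs, just one way to do it; it assumes a CSV file written by something like `k6 run --out csv=results.csv script.js` with the default `metric_name`, `timestamp` and `metric_value` columns, where timestamps are Unix seconds):

```python
import csv
from collections import defaultdict

# Aggregate k6's CSV output into requests per second and
# average http_req_duration (ms) per second.
rps = defaultdict(int)         # second -> number of requests
durations = defaultdict(list)  # second -> http_req_duration samples (ms)

with open("results.csv", newline="") as f:
    for row in csv.DictReader(f):
        second = int(float(row["timestamp"]))
        if row["metric_name"] == "http_reqs":
            rps[second] += int(float(row["metric_value"]))
        elif row["metric_name"] == "http_req_duration":
            durations[second].append(float(row["metric_value"]))

print("second,rps,avg_response_time_ms")
for second in sorted(rps):
    samples = durations[second]
    avg = sum(samples) / len(samples) if samples else 0.0
    print(f"{second},{rps[second]},{avg:.2f}")
```

The same kind of grouping works on the JSON output with jq, since every sample there also carries its own timestamp.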

If you need this in the k6 Cloud service, then you can download a CSV with the raw metrics from the test results menu. See the documentation.

Good luck!

Hello Imiric,

It's perfectly clear! I didn't know we get one line per metric per request. That's perfect!

Thanks a lot,
Alex

Hi,
I have executed several basic tests (for HTTP and, separately, for gRPC) and exported CSV files.
I am going to rebuild the original CSV file from k6 so that 1 line corresponds to 1 response, and then group by timestamps (roughly as in the sketch after the CSV fragment below).

Could you please confirm or correct me: for every request in the CSV file, will we always have a block of 12 lines (1 line per metric)? (Here it is 12, but the exact number doesn't matter.)
If so, this would confirm that each block cannot contain information about several responses.
In the CSV produced by k6 for my tests I always see iterations=1, as shown in the fragment below:

|metric_name|timestamp|metric_value|
|---|---|---|
|http_reqs|1615152451|1.000000|
|http_req_duration|1615152451|1.474500|
|http_req_blocked|1615152451|0.329700|
|http_req_connecting|1615152451|0.249500|
|http_req_tls_handshaking|1615152451|0.000000|
|http_req_sending|1615152451|0.069700|
|http_req_waiting|1615152451|1.268100|
|http_req_receiving|1615152451|0.136700|
|data_sent|1615152451|82.000000|
|data_received|1615152451|1070.000000|
|iteration_duration|1615152451|2.022600|
|iterations|1615152451|1.000000|
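
For reference, this is roughly how I plan to do the rebuild (a sketch in Python; it assumes the file is named results.csv, and that the metrics of one request are written as a contiguous block starting with an http_reqs line, as in the fragment above):

```python
import csv

# Rebuild "1 line per response" from k6's long-format CSV: start a new
# record at each http_reqs line and attach the http_req_* values that follow.
requests = []
current = None

with open("results.csv", newline="") as src:
    for row in csv.DictReader(src):
        name = row["metric_name"]
        if name == "http_reqs":              # a new request block begins here
            current = {"timestamp": row["timestamp"]}
            requests.append(current)
        elif current is not None and name.startswith("http_req_"):
            current[name] = row["metric_value"]

columns = ["timestamp", "http_req_duration", "http_req_blocked",
           "http_req_connecting", "http_req_tls_handshaking",
           "http_req_sending", "http_req_waiting", "http_req_receiving"]

with open("requests_wide.csv", "w", newline="") as dst:
    writer = csv.DictWriter(dst, fieldnames=columns, restval="",
                            extrasaction="ignore")
    writer.writeheader()
    writer.writerows(requests)
```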

Hi @nsavelyeva,

k6 does not aggregate anything on its own. The only place where metrics are aggregated at all is the cloud output, and even there it is only for some of them.

So you will see a line in the CSV file for each metric sample emitted by k6.

The http_req_* metrics you see above are emitted for each request (we will actually add one more in the next release) and will be grouped together, very likely in this order.
The data_* and iteration* metrics are emitted at the end of an iteration, i.e. one execution of the default function (or of the exec one if you're using scenarios).

And because there is no aggregation, both iterations and http_reqs are 1.

So in your case, I would expect that you do only 1 request in that default function, which is why you get these 12 metric lines per block.

I hope this answers your question.