Using scenarios: no scenario-specific info in the summary or Datadog

I am running 2 scenarios in a script (I combined two scripts into one). The test ran fine; see the result below. According to the documentation I should see scenario-specific info in it, but I don’t. I don’t see any measurements in Datadog either.
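
For reference, the scenario setup looks roughly like this (a simplified sketch; the executor is a guess based on the 1 VU / 1m0s progress bars below, and the actual requests and checks are omitted):

export const options = {
  scenarios: {
    assetCrud: {
      executor: 'constant-vus', // guess; the summary shows 1 VU for 1m0s
      vus: 1,
      duration: '1m',
      exec: 'assetCrud',
    },
    assetInstances: {
      executor: 'constant-vus',
      vus: 1,
      duration: '1m',
      exec: 'assetInstances',
    },
  },
};

export function assetCrud() {
  // create/read/update/delete asset requests and checks go here
}

export function assetInstances() {
  // create/read asset instance requests and checks go here
}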

running (1m16.4s), 0/2 VUs, 20 complete and 0 interrupted iterations
assetCrud      ✓ [======================================] 0/1 VUs  1m0s
assetInstances ✓ [======================================] 0/1 VUs  1m0s

     ✓ Create Asset: is status 202
     ✓ Create Asset Instance: is status 202
     ✓ Read Asset Instance: is status 200
     ✓ Update Asset: is status 200
     ✓ Read Asset: status is 200
     ✓ Read Asset: Response is not null
     ✓ Delete Asset: is status 200

     █ setup

     █ teardown

     checks.........................: 100.00% ✓ 61       ✗ 0
     data_received..................: 291 kB  3.8 kB/s
     data_sent......................: 326 kB  4.3 kB/s
     http_req_blocked...............: min=4µs      max=1.12s    p(50)=6µs      p(75)=7µs      p(90)=9.2µs    p(95)=20.29µs  p(99)=897.97ms
     http_req_connecting............: min=0s       max=246.46ms p(50)=0s       p(75)=0s       p(90)=0s       p(95)=0s       p(99)=230.88ms
     http_req_duration..............: min=250.06ms max=365.84ms p(50)=268.84ms p(75)=283.23ms p(90)=298.81ms p(95)=324.32ms p(99)=357.88ms
       { expected_response:true }...: min=250.06ms max=365.84ms p(50)=264.66ms p(75)=283.23ms p(90)=299.98ms p(95)=338.82ms p(99)=359.58ms
     http_req_failed................: 21.05%  ✓ 20       ✗ 75
     http_req_receiving.............: min=68µs     max=262µs    p(50)=91µs     p(75)=101.5µs  p(90)=140µs    p(95)=187.99µs p(99)=253.54µs
     http_req_sending...............: min=29µs     max=369µs    p(50)=44µs     p(75)=109.5µs  p(90)=141.8µs  p(95)=172.09µs p(99)=281.58µs
     http_req_tls_handshaking.......: min=0s       max=636.73ms p(50)=0s       p(75)=0s       p(90)=0s       p(95)=0s       p(99)=459.58ms
     http_req_waiting...............: min=249.86ms max=365.62ms p(50)=268.71ms p(75)=283.02ms p(90)=298.68ms p(95)=324.16ms p(99)=357.69ms
     http_reqs......................: 95      1.242883/s
     iteration_duration.............: min=2.53s    max=10.68s   p(50)=5.82s    p(75)=6.83s    p(90)=9.97s    p(95)=10.16s   p(99)=10.58s
     iterations.....................: 20      0.26166/s
     vus............................: 0       min=0      max=2
     vus_max........................: 2       min=2      max=2

Hi @aakash.gupta,

The metrics emitted by k6 will have a tag called scenario, with the name of the scenario that emitted the sample as its value. Whether you can see that in Datadog depends on whether you have tags enabled and haven’t added scenario to the blocklist.
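
For example, with the statsd output (which is how the Datadog integration works in recent k6 versions; script.js is a placeholder for your script), tags are off by default and have to be enabled explicitly:

K6_STATSD_ENABLE_TAGS=true k6 run --out statsd script.js

If your k6 version supports a tag blocklist for this output (K6_STATSD_TAG_BLOCKLIST), also check that scenario isn’t listed in it.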

Can you point out which part of the documentation says the summary will have “scenario-specific info”?

The only part I can see is “The key for each scenario can be an arbitrary, but unique, scenario name. It will appear in the result summary,” which is “mostly” true in the sense that you see the names in the progress bars, but it definitely could’ve been worded better.

@mstoykov you mean to say I don’t have to specify a tag for a scenario, it does that automatically?

Yes, it will automatically add the tag scenario, with the value being the name of the scenario. You can also add extra tags per scenario with the scenario option tags.
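
For example (a sketch; team: 'assets' is just an illustrative extra tag):

export const options = {
  scenarios: {
    assetCrud: {
      executor: 'constant-vus',
      vus: 1,
      duration: '1m',
      // samples from this scenario automatically get scenario=assetCrud;
      // the tags option adds extra tags on top of that:
      tags: { team: 'assets' },
    },
  },
};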

This will not tag VU-level metrics such as vus and vus_max though, as those aren’t emitted by any particular scenario, so look at http_req_duration or something like that.

@mstoykov I didn’t change much in my script, but the metrics are now coming through.