Response times in LoadImpact are different from those in the server logs

Hi. I'm currently comparing the results of the test with the logs from the server, and I can't figure out why the values are so different in the two places; either LoadImpact is lying or I'm misunderstanding something. Here is an example:

We can see that LoadImpact shows a response time of 42 s, yet in the server logs for this period there isn't a single request with such a response time.

How can it be explained?
Thanks in advance!

Hi there,

I took a look at the logs for this test, and was able to confirm that those metrics are correct from the k6 Cloud side. Notice that the response time increases with the number of VUs, which typically means the server is being overloaded and can't keep up with the traffic.

Are you sure that you’re looking at the correct server logs and not just metrics from the backend? I can imagine Samsung’s infrastructure runs on a variety of CDNs, load balancers and proxies. Depending on the path the request took, the response time will be different when measured from different points in the network. The purpose of k6 Cloud is to measure the response as an external user would experience it (well, from AWS servers anyway, which is likely the best-case scenario), so I can assure you there’s no lying involved. :slight_smile:
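To illustrate why the two numbers can differ, here's a minimal k6 script sketch (the URL is just a placeholder) that prints the timing breakdown k6 records for each request. The server log typically only covers the time spent inside your application, while k6's http_req_duration (sending + waiting + receiving) also includes the time the request and response spend travelling across CDNs, load balancers and the public internet:

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 1,
  duration: '30s',
};

export default function () {
  // Placeholder URL - replace with the endpoint under test
  const res = http.get('https://example.com/');

  // k6 measures the full round trip from the load generator's point of view.
  // http_req_duration = sending + waiting + receiving; connection setup
  // (blocked, connecting, TLS handshake) is reported separately.
  console.log(
    `blocked=${res.timings.blocked}ms ` +
    `connecting=${res.timings.connecting}ms ` +
    `tls=${res.timings.tls_handshaking}ms ` +
    `sending=${res.timings.sending}ms ` +
    `waiting=${res.timings.waiting}ms ` +
    `receiving=${res.timings.receiving}ms ` +
    `duration=${res.timings.duration}ms`
  );

  sleep(1);
}
```

The "waiting" portion (time to first byte) is the one closest to what your server logs report, but it still includes the network hops between the load generator and your application.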

One test you could do is run the test script from your own infrastructure and compare the results. If there’s a major difference, it might be an internal network issue we can look into.
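For reference, running the same script from your own infrastructure only requires the k6 CLI; a sketch, assuming the script is saved as script.js (adjust the VUs and duration to match your cloud test):

```
k6 run --vus 50 --duration 5m script.js
```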


Hi, thanks for the reply :slight_smile:
Regarding whether the logs are taken from the right place: I'm sure they are. They're queried from the Azure > Logs tool for that particular server. Your idea to run the test from my own infrastructure sounds relevant, though it might turn out to be redundant; we'll see.

Thanks!