I have a question about how to performance-test a feature in our product.
The feature adds images to a queue; before an image can be downloaded, it first has to be stored and processed, after which it becomes available at an S3 URL. The flow involves three requests (rough sketch below):
1. First request: add the image to the download queue.
2. Second request: check whether the S3 URL is ready or still pending.
3. Third request: download from the S3 URL once it has resolved and is ready for download.
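To make this concrete, here is roughly what those three calls look like in a k6 script; the endpoint paths and response fields are just placeholders, not our real API:

```javascript
import http from 'k6/http';

export default function () {
  // 1. First request: add the image to the download queue (placeholder endpoint)
  const enqueueRes = http.post(
    'https://example.com/api/images',
    JSON.stringify({ image: 'cat.png' }),
    { headers: { 'Content-Type': 'application/json' } }
  );
  const imageId = enqueueRes.json().id; // assuming the API returns an id

  // 2. Second request: check whether the S3 URL is ready or still pending
  const statusRes = http.get(`https://example.com/api/images/${imageId}/status`);

  // 3. Third request: download from the S3 URL once it has resolved
  if (statusRes.json().status === 'ready') {
    http.get(statusRes.json().url);
  }
}
```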
Currently k6 is synchronous, and to my knowledge it's not possible to use async/await.
So I was thinking of writing a poll function that repeatedly checks whether the S3 URL is ready.
Would that be an approach you'd recommend? If not, what would you consider best practice in this scenario?
Indeed, k6 is presently synchronous and does not expose the async/await syntax. We are working towards offering more asynchronous APIs in the future (HTTP, for instance), but that's not there yet, and we don't have an ETA for it either.
In your scenario, I believe polling might indeed be the only option. It does come with a potential downside: assuming you have to perform HTTP calls to check whether your S3 URL is ready, those calls will be included in the final performance analysis numbers. If you're only interested in the metrics for the execution of your whole function (i.e., the whole process of uploading, waiting, and downloading, if I understood correctly), this approach should work.
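For illustration, a minimal poll helper could look something like the sketch below; the status endpoint and the `{ status, url }` response shape are my assumptions, not your actual API. Note that every `http.get()` inside the loop is a real request and will be counted in `http_req_duration`:

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

// Repeatedly hit the (assumed) status endpoint until the S3 URL is ready,
// or give up after maxAttempts. Each iteration issues a real HTTP request,
// so all of these calls show up in the built-in HTTP metrics.
export function pollForS3Url(statusUrl, maxAttempts = 10, intervalSeconds = 1) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = http.get(statusUrl);
    const body = res.json(); // assumed shape: { status: 'ready' | 'pending', url: '...' }
    if (body.status === 'ready') {
      return body.url;
    }
    sleep(intervalSeconds);
  }
  return null; // the URL never became ready within the allotted time
}
```

You would then call `pollForS3Url()` between the enqueue request and the final download.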
Let me know if that was helpful, and if you have any more questions.
I wrote the poll function and it works, but because it polls the same request at a fixed interval, the final summary only shows the duration (in ms) of the individual poll requests.
Is there also a way to measure the time between the first poll and the moment the URL resolves, and have that show up in the summary? (Usually this can take up to 5 seconds.) Or is that not possible?
I believe what you would want to do then is define a custom metric, most likely a Trend, in which you track the duration of your polling: record a timestamp when it starts and another when it stops, and add the difference to your custom metric. That way, the custom metric will be reported in the summary and give you an overview of the waiting time.
The other good thing about custom metrics is that you can also set thresholds on them directly, to tell k6 that the test should fail if the waiting time ends up greater than some limit T.
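As a sketch, building on the hypothetical `pollForS3Url()` helper from before, tracking the waiting time in a Trend and putting a threshold on it could look like this:

```javascript
import { Trend } from 'k6/metrics';
import { pollForS3Url } from './poll.js'; // the hypothetical helper sketched earlier

// Custom metric recording how long we wait for the S3 URL to resolve.
const s3WaitTime = new Trend('s3_url_wait_time', true); // true = values are durations in ms

export const options = {
  thresholds: {
    // Fail the test if the 95th percentile of the waiting time exceeds 5 seconds.
    s3_url_wait_time: ['p(95)<5000'],
  },
};

export default function () {
  const statusUrl = 'https://example.com/api/images/123/status'; // placeholder

  const start = Date.now();
  const s3Url = pollForS3Url(statusUrl);
  s3WaitTime.add(Date.now() - start); // ms between the first poll and resolution

  // ...then download s3Url here, as in your existing script.
}
```

The `s3_url_wait_time` trend will then appear in the end-of-test summary alongside the built-in metrics.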
Hope that's helpful! Let me know if that solves your issue.