Hi, I am trying k6 for the first time. I tested a very basic HTTP endpoint whose response is:
{"Result":true,"Message":"You are awesome"}
Now, with k6 and with the help of htop, I got the following results for a 1-minute test with 100 users:
http_reqs: 32k/s
cpu%: 331.5
mem%: 1.6
virtual memory: 390GB
memory: 517M
Load average: 17.54 15.90 14.29
From these figures, 100 users needed 517M of memory, and it increases over time; so if I extend the test from 1 minute to 1 hour it could go higher (maybe more than a GB), and even further if the application keeps serving for several days or months. Meanwhile, we can see that 32k requests per second are being made.
So my question is: how do I calculate how much CPU, memory, etc. I need to host my application if it has to serve 32k requests per second? I am confused because the memory keeps increasing over time.
I tested this on a machine with a 9-core CPU and 32GB of RAM.
Furthermore, I can test it successfully with 100 users, but when I try more than that, I start getting errors:
connection reset by peer
WARN[0000] Request Failed error="Get \"http://localhost:8086/welcome\": read tcp 127.0.0.1:55835
So my next question is: how can we test with more than 100 users, 1000 users, a million users, or even a billion users? It fails for 2000 users, with the error shown above, when I run:
k6 run --vus 2000 --duration 10m script.js
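For reference, I know the load can also be defined inside script.js with ramping stages instead of the --vus/--duration flags. Would something along these lines be the right direction for scaling the test beyond 100 users? (The stage durations and targets below are just made-up examples.)

```javascript
// Sketch only: defining the load with ramping stages in script.js
// instead of the --vus/--duration flags. The durations and targets
// below are made-up examples, not values I have verified.
export const options = {
  stages: [
    { duration: '1m', target: 100 },  // ramp up to 100 VUs
    { duration: '5m', target: 2000 }, // then ramp towards 2000 VUs
    { duration: '1m', target: 0 },    // ramp back down
  ],
};
```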