Hello,
Is it possible to set a desired throughput for gRPC calls?
I tried to apply arrival_rate to gRPC calls in the same way as shown for HTTP requests in "Is K6 safe from the coordinated omission problem?", but it seems this trick does not work for gRPC (details below).
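For reference, the HTTP pattern I was trying to adapt looks roughly like this (a minimal sketch from memory; the endpoint, rate, and VU numbers are just placeholders):

```javascript
import http from "k6/http";

export const options = {
  scenarios: {
    constant_rps: {
      executor: "constant-arrival-rate",
      rate: 10,             // target iterations per second
      timeUnit: "1s",
      duration: "10s",
      preAllocatedVUs: 5,   // VUs reserved up front to sustain the rate
    },
  },
};

export default () => {
  http.get("https://test.k6.io/"); // placeholder endpoint
};
```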
Also, from the documentation links below I see that batch is supported only for HTTP; is there a plan to have it for gRPC as well? (A sketch of what I mean is right after the link.)
- k6/net/grpc (and k6-http)
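What I mean by batch, roughly (a minimal HTTP sketch; the URLs are just examples):

```javascript
import http from "k6/http";

export default () => {
  // http.batch() fires several requests in parallel from one VU;
  // I would like an equivalent for grpc client.invoke() calls.
  const responses = http.batch([
    ["GET", "https://test.k6.io/"],
    ["GET", "https://test.k6.io/news.php"],
  ]);
};
```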
Thank you in advance, k6 is great!
Details:
I got 357.47731 iterations/s when I specified --env arrival_rate=10. My script is:
```javascript
import grpc from "k6/net/grpc";

export let options = {
  scenarios: {
    http_scenario: {
      executor: "constant-arrival-rate",
      rate: __ENV.arrival_rate,
      timeUnit: "1s",
    },
  },
};

const client = new grpc.Client();
client.load([], "echo.proto");

export default () => {
  client.connect("localhost:8080", {
    plaintext: true,
  });
  const data = { "message": "Hi" };
  const response = client.invoke("go_grpc_echo_pb.Echo/Send", data);
  client.close();
};
```
The output is:
```
$ /usr/local/bin/k6 run --duration 10s --env arrival_rate=10 --out csv=k6_grpc_kpi.csv test_grpc.js
  execution: local
     script: test_grpc.js
     output: csv=k6_grpc_kpi.csv (k6_grpc_kpi.csv)

  scenarios: (100.00%) 1 scenario, 1 max VUs, 40s max duration (incl. graceful stop):
           * default: 1 looping VUs for 10s (gracefulStop: 30s)

running (10.0s), 0/1 VUs, 3581 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs  10s

    data_received........: 1.1 MB  107 kB/s
    data_sent............: 677 kB  68 kB/s
    grpc_req_duration....: avg=1.19ms  min=797.75µs  med=1.16ms  max=5.22ms   p(90)=1.4ms   p(95)=1.49ms
    iteration_duration...: avg=2.78ms  min=2.05ms    med=2.72ms  max=12.14ms  p(90)=3.14ms  p(95)=3.37ms
    iterations...........: 3581    357.47731/s
    vus..................: 1       min=1  max=1
    vus_max..............: 1       min=1  max=1
```