Hi @MattK6,
the requests flushing metrics to InfluxDB are taking longer (1.6s in your reported case) than the configured flush interval (1s by default), so k6 is probably flushing more metrics than InfluxDB can really handle.
Do you see high CPU and/or RAM usage on your InfluxDB instance? Are you running k6 and InfluxDB on the same machine, or on separate ones?
Here are some steps you can try for getting a better response time from InfluxDB v1:
- You could set only the system tags (`--system-tags`) that you really need for your analysis. This helps reduce the amount of data that k6 generates and that InfluxDB has to ingest (see the first sketch after this list).
- Tune the options of the k6 output for InfluxDB:
  - You could exclude some tags from being indexed with `K6_INFLUXDB_TAGS_AS_FIELDS`, which should speed up the ingestion phase for InfluxDB. Note that querying non-indexed tags can be slow.
  - Reduce `K6_INFLUXDB_PUSH_INTERVAL` and increase `K6_INFLUXDB_CONCURRENT_WRITES` to flush batches with a smaller number of metrics. You will probably need to check the logs with the `--verbose` option to find the right values (see the second sketch below).
- Try to use Telegraf for aggregating some metrics before sending them to InfluxDB (see the third sketch below).
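For the system tags, a minimal sketch from the command line (the tag list, URL and database name are just placeholders, adjust them to whatever you actually query in your dashboards):

```bash
# Keep only the tags you really use; everything else is dropped from the metrics
k6 run \
  --system-tags "status,method,name,check" \
  --out influxdb=http://localhost:8086/k6 \
  script.js
```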
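A sketch of the output tuning options mentioned above; the values here are only a starting point, the `--verbose` logs should tell you how long each flush actually takes:

```bash
# Store some tags as (non-indexed) fields instead of indexed tags -- example list
export K6_INFLUXDB_TAGS_AS_FIELDS="vu:int,iter:int,url"
# Flush smaller batches more often, with more parallel writes
export K6_INFLUXDB_PUSH_INTERVAL="500ms"
export K6_INFLUXDB_CONCURRENT_WRITES=8

k6 run --verbose --out influxdb=http://localhost:8086/k6 script.js
```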
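For the Telegraf option, k6 can write to Telegraf's `influxdb_listener` input (it accepts the same line protocol as InfluxDB), and Telegraf then aggregates and forwards the data to the real database. A sketch of the k6 side only, assuming Telegraf is listening on its default port 8186:

```bash
# Point the k6 InfluxDB output at Telegraf instead of InfluxDB itself
k6 run --out influxdb=http://localhost:8186/k6 script.js
```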
If none of the previous suggestions works for you, then you could try the new InfluxDB v2 extension and see if it's able to ingest your metrics faster.
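The v2 output ships as an extension, so it needs a k6 binary built with xk6. A rough sketch (organization, bucket and token are placeholders, and it's worth double-checking the exact option names in the xk6-output-influxdb README):

```bash
# Build a k6 binary that includes the InfluxDB v2 output extension
xk6 build --with github.com/grafana/xk6-output-influxdb

# Placeholder credentials for your InfluxDB v2 instance
export K6_INFLUXDB_ORGANIZATION="my-org"
export K6_INFLUXDB_BUCKET="k6"
export K6_INFLUXDB_TOKEN="<your-api-token>"

./k6 run -o xk6-influxdb=http://localhost:8086 script.js
```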