Large Loki Logs

I am using syslog-ng to ingest my logs into Grafana Loki.
I tried adding my firewall's logs, which are quite large: 3-5 GB per day.
When I run queries over a 3-hour range, I get a timeout.

What is the best way to parse large firewall logs into Loki?
I have read that I can parse them with syslog-ng; would that be more efficient?
Or do I need to adjust my settings on Loki or Grafana?

Hello @jprouty

How are you running Loki? I’m guessing self-managed of some kind? Kubernetes or not? Single node or “distributed”?

Depending on that, you can scale it in different ways.

That said, timing out on that amount of data over just a 3-hour range is not great.

What exactly is the timeout error? Does it come from Grafana or Loki?

Could you also share the LogQL query? I have noticed that the order in which you put things in a query makes a significant difference.
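
For example (an illustrative sketch only; it assumes the firewall logs are in key=value/logfmt format, and the `action` label is hypothetical), keeping cheap line filters ahead of any parser stage lets Loki discard non-matching lines before doing the expensive work:

```logql
# Faster: the |= line filter drops non-matching lines
# before the logfmt parser ever runs.
{host="officefw"} |= "dstport=8465" | logfmt | action="denied"

# Slower: every line is parsed first, then filtered
# on the extracted label.
{host="officefw"} | logfmt | dstport="8465"
```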

Thank you for the response.
Currently, I am self-hosting Loki in a Docker container on a single server:
4 CPUs, 16 GB RAM, 1 TB disk.

The error I get back in Grafana is
Query error
Bad Gateway

LogQL:

```logql
{host="officefw"} |= "dstport=8465"
```

I would look at the Loki container's resource usage metrics while you execute the query. I used to see similar errors when querying larger amounts of data, and in my case it was usually the querier(s) running out of memory. I would also look at the Loki container logs.
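
If you are on plain Docker, something like this is a quick way to check (a minimal sketch; it assumes the container is named `loki`, so adjust to your container name):

```sh
# Watch CPU and memory live while the query runs; a querier
# running out of memory shows up as usage pinned at the limit.
docker stats loki

# Scan recent container logs for errors such as OOM kills,
# "context canceled", or timeout messages.
docker logs --tail 200 loki 2>&1 | grep -iE "error|timeout|oom"
```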
