Hardware requirements for running Grafana & Loki

Hi Grafana Loki Team,

First of all, thanks for providing such an awesome open-source tool. I have tried it on a few experimental projects, and it looks good for centralized logging purposes.

Our main goal is centralized logging (and later tracing with either Zipkin or Tempo). Our project is architected as many separate components (microservices), and we want logs in one place where we can look everything up. Grafana + Loki + AWS S3 fits us well, but a few questions remain.

Their answers will help us decide whether to set up locally (a separate set of servers) or go with Grafana Cloud. We expect 200-400 MB of logs per day across all components. I went through almost all the docs and community pages but couldn't find clear answers to the questions below (I might have missed something).

  1. Our application runs in production via Docker; we are NOT using Kubernetes or Docker Swarm. We plan to run Grafana + Loki as Docker containers on our servers. Is this stack and approach right for us for tracking logs and metrics?
  2. What are the hardware requirements for Loki: processor, storage, and RAM? (For Grafana we found a link answering this, but not for Loki.)
  3. Is one Loki instance fine, or do we need to run multiple instances?
  4. How much S3 storage is required for chunks? We want at least one month of data retention.
  5. For indexes, how much disk storage is required for the BoltDB key-value store?
  6. If we go with Grafana Cloud, can the data be stored on our end, i.e., in our own AWS S3 bucket?

Thanks in advance for your help.

Answers are highly appreciated.

  1. Yes. I think the main challenge you are going to have is pushing logs to Loki, but using something like Promtail with a host mount should solve that (there is a Docker Compose sketch after this list).
  2. It depends on the way you run Loki, but it is super lightweight. For example, running the whole of Loki in distributed mode, you can get by with about 1 core and about 300 MB of RAM. Storage depends on how many logs you push to Loki and how you configure chunks and indexes (see the config sketch below).
  3. One instance should be fine for that load. When querying, though, you might need more workers depending on what queries you run.
  4. You configure how large chunks should be and when to flush them to storage. You could make chunks tiny and flush frequently, but that will make querying slow. (There is a back-of-envelope storage estimate below.)
  5. Again, it depends on how you ingest your logs and index them. It could be a few MB or a few GB depending on how you use it.
  6. No idea.
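
To make answer 1 concrete, here is a minimal Docker Compose sketch of the stack. This is just to show the shape of the setup; the image tags, ports, and the `/var/log` host path are assumptions you should adapt to your own servers:

```yaml
version: "3"

services:
  loki:
    image: grafana/loki:2.4.2          # pin whatever version you test against
    command: -config.file=/etc/loki/local-config.yaml
    ports:
      - "3100:3100"

  promtail:
    image: grafana/promtail:2.4.2
    command: -config.file=/etc/promtail/config.yml
    volumes:
      # Host mount: lets the Promtail container tail log files written on the host.
      - /var/log:/var/log:ro
      # Hypothetical config file, sketched below.
      - ./promtail-config.yml:/etc/promtail/config.yml:ro

  grafana:
    image: grafana/grafana:8.3.4
    ports:
      - "3000:3000"
```

And a minimal `promtail-config.yml` that tails the mounted files and pushes them to the Loki container:

```yaml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml   # where Promtail remembers how far it has read

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log   # matches the host mount above
```

In Grafana you would then add a Loki data source pointing at http://loki:3100.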
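
For answers 2, 4, and 5, the relevant knobs live in the Loki config. A sketch of the sections involved, assuming boltdb-shipper for the index and S3 for chunks; the bucket name, paths, and dates are placeholders, and retention handling differs between Loki versions (Table Manager vs. Compactor), so check the docs for yours:

```yaml
ingester:
  chunk_idle_period: 30m       # flush a chunk that has stopped receiving logs
  max_chunk_age: 1h            # flush chunks at this age even if still active
  chunk_target_size: 1572864   # aim for ~1.5 MB compressed chunks

schema_config:
  configs:
    - from: 2021-01-01          # placeholder start date
      store: boltdb-shipper
      object_store: aws
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index     # the local disk question 5 asks about
    cache_location: /loki/index-cache
    shared_store: s3
  aws:
    s3: s3://us-east-1/my-loki-bucket       # hypothetical bucket

table_manager:
  retention_deletes_enabled: true
  retention_period: 744h        # 31 days; newer versions do this via the compactor
```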
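
As for the sizing in question 4: at 200-400 MB of logs per day, a month of retention is roughly 400 MB × 31 ≈ 12 GB of raw log data in the worst case. Loki compresses chunks (gzip by default), and text logs often compress several-fold, so the actual S3 usage is likely in the low single-digit GBs, with the BoltDB index typically a small fraction of that. Treat those ratios as rough assumptions and measure with your real logs.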

I’ve found that the easiest way to answer your questions is to run it and see what it does in your environment…
