Want to ingest 2TB of logs a day

I want to ingest 2 TB of logs a day. Is there a way to decide on the number of distributors, ingesters, queriers, query-frontends, index gateways, and gateways, and how much CPU and RAM should be assigned to each pod of these components?
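For context, here is a rough back-of-envelope conversion of 2 TB/day into a sustained ingest rate, assuming the volume is spread evenly across the day (real traffic will be spikier than this):

```python
# Back-of-envelope: 2 TB/day as an average ingest rate.
# Assumes 2 TB means 2 * 10**12 bytes of log volume per day.
bytes_per_day = 2 * 10**12
seconds_per_day = 24 * 60 * 60            # 86,400 seconds
avg_rate_mb_s = bytes_per_day / seconds_per_day / 10**6
print(f"average ingest rate: {avg_rate_mb_s:.1f} MB/s")   # ~23.1 MB/s
```

So my test below at 12 MB/s is roughly half of the average rate I eventually need to handle.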
I have a test setup with 3 nodes generating logs at 12 MB/s, and a Loki deployment with 4 ingesters and 4 queriers, each with 8 Gi of RAM and 4 CPUs. The ingesters are hitting their memory limit and getting restarted.
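As a rough sanity check on why the ingesters might be running out of memory, here is a sketch of how I understand ingester memory to scale: roughly with the number of active streams times the in-memory chunk size, multiplied by the replication factor and divided across ingesters. All numbers below are assumptions for illustration, not measurements from my setup:

```python
# Hypothetical ingester memory estimate (assumed figures, not measured).
active_streams = 10_000            # assumed total number of active log streams
chunk_bytes = 1.5 * 2**20          # Loki's default chunk_target_size (1.5 MB)
replication_factor = 3             # commonly used in distributed deployments
num_ingesters = 4

per_ingester_gib = (active_streams * chunk_bytes * replication_factor
                    / num_ingesters / 2**30)
print(f"~{per_ingester_gib:.1f} GiB of chunk memory per ingester")  # ~11.0 GiB
```

If numbers like these are anywhere near realistic, 8 Gi per ingester would not be enough, which may explain the OOM restarts.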
