Still working on setting up Loki. Logs are coming in and are browsable through Grafana. Now I'm working on the storage backend.
I’m trying to make Loki store its data on S3. So far, it stores its indexes on S3, but the chunks are only on the local filesystem.
On S3, I'm expecting my data to be stored like this:

```
bucket-name
  /index_
  /fake
```

Only `/index_` exists, but `/fake` does not. `/fake` is the tenant ID when running in single-tenant mode; it should contain all the chunks.
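This is how I check what actually lands in the bucket (bucket name and credentials are placeholders; assumes the AWS CLI is configured):

```shell
# List the top-level prefixes in the bucket.
aws s3 ls s3://bucket-name/
# If chunks were being shipped too, I'd expect to see both
#   PRE fake/
#   PRE index_.../
# but only the index_ prefixes show up.
```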
Relevant config:

```yaml
ingester:
  # How long chunks should sit in-memory with no updates before
  # being flushed if they don't hit the max block size. This means
  # that half-empty chunks will still be flushed after a certain
  # period as long as they receive no further activity.
  chunk_idle_period: 15m

schema_config:
  configs:
    - from: 2022-01-25
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/boltdb-shipper-active
    cache_location: /loki/boltdb-shipper-cache
    cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: s3
  aws:
    s3: s3://<api-key>:<api-secret-key>@aws-region/bucket-name
    s3forcepathstyle: true
```
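One thing I've tried to rule out: with `chunk_idle_period: 15m`, chunks can sit in memory for a while before being flushed, so `/fake` might simply not exist yet. Loki's ingester exposes a `/flush` endpoint to force a flush (assuming Loki listens on its default HTTP port 3100):

```shell
# Force the ingester to flush its in-memory chunks to the store.
curl -XPOST http://localhost:3100/flush
```

Even after forcing a flush, the chunks only appear on the local filesystem, not in S3.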
I'm running Loki in a Docker container, and the log file doesn't show any errors.

Which setting am I missing here?