Hi, I have the configuration below for Grafana Loki. I expect the chunks to be deleted every 24 hours, but it is not working as expected.
auth_enabled: false
server:
  http_listen_port: 3100
ingester:
  lifecycler:
    address:
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s
schema_config:
  configs:
    - from: 2022-01-11
      store: boltdb
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
storage_config:
  boltdb:
    directory: /u01/app/grafana/loki/index
  filesystem:
    directory: /u01/app/grafana/loki/chunks
compactor:
  working_directory: /u01/app/grafana/loki
  shared_store: filesystem
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 1h
  retention_delete_worker_count: 100
limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 24h
  retention_period: 24h
chunk_store_config:
  max_look_back_period: 24h
table_manager:
  retention_deletes_enabled: true
  retention_period: 24h
I need help fixing the log retention issue.
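For reference, the Loki retention documentation describes compactor-based retention in terms of the boltdb-shipper index store, while the config above uses plain boltdb. A minimal sketch of the retention-relevant sections under that assumption (single-binary Loki 2.3+, filesystem storage, directories reused from the config above; the cache and compactor paths are purely illustrative) might look like this. Treat it as a starting point to compare against, not a verified fix:

schema_config:
  configs:
    - from: 2022-01-11    # on an existing setup, a store change is normally added as a new entry with a future date
      store: boltdb-shipper          # compactor retention is documented for boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h                  # retention requires a 24h index period
storage_config:
  boltdb_shipper:
    active_index_directory: /u01/app/grafana/loki/index
    cache_location: /u01/app/grafana/loki/index_cache   # assumed path
    shared_store: filesystem
  filesystem:
    directory: /u01/app/grafana/loki/chunks
compactor:
  working_directory: /u01/app/grafana/loki/compactor    # assumed path
  shared_store: filesystem
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 1h
  retention_delete_worker_count: 100
limits_config:
  retention_period: 24h

When the compactor handles retention, the table_manager retention settings are typically left disabled, as in the second config later in this thread.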
"But it is not working as expected."
Are they not being deleted at all, too early, or too late?
They are not being deleted at all.
gao963 (January 31, 2022):
Mine, too. Does anyone know why?
I’m experiencing the opposite issue… I just tried to configure log retention and my logs are disappearing after a couple of hours.
My understanding from the documentation is that the compactor and limits_config are relevant:
limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  max_cache_freshness_per_query: 10m
  retention_period: 336h
compactor:
  working_directory: /data/retention
  shared_store: filesystem
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 2h
  retention_delete_worker_count: 150
This is my complete Loki config:
apiVersion: v1
data:
  config.yaml: |
    auth_enabled: false
    server:
      http_listen_port: 3100
    distributor:
      ring:
        kvstore:
          store: memberlist
    memberlist:
      join_members:
        - obs-loki-memberlist
    ingester:
      lifecycler:
        ring:
          kvstore:
            store: memberlist
          replication_factor: 1
      chunk_idle_period: 30m
      chunk_block_size: 262144
      chunk_encoding: snappy
      chunk_retain_period: 1m
      max_transfer_retries: 0
      wal:
        dir: /var/loki/wal
    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
      max_cache_freshness_per_query: 10m
      retention_period: 336h
    schema_config:
      configs:
        - from: "2020-09-07"
          index:
            period: 24h
            prefix: loki_index_
          object_store: filesystem
          schema: v11
          store: boltdb-shipper
    storage_config:
      boltdb_shipper:
        active_index_directory: /var/loki/index
        cache_location: /var/loki/cache
        cache_ttl: 168h
        shared_store: filesystem
        index_gateway_client:
          server_address: dns:///obs-loki-index-gateway:9095
      filesystem:
        directory: /var/loki/chunks
    chunk_store_config:
      max_look_back_period: 0s
    table_manager:
      retention_deletes_enabled: false
      retention_period: 0s
    query_range:
      align_queries_with_step: true
      max_retries: 5
      split_queries_by_interval: 15m
      cache_results: true
      results_cache:
        cache:
          enable_fifocache: true
          fifocache:
            max_size_items: 1024
            validity: 24h
    frontend_worker:
      frontend_address: obs-loki-query-frontend:9095
    frontend:
      log_queries_longer_than: 5s
      compress_responses: true
      tail_proxy_url: http://obs-loki-querier:3100
    compactor:
      working_directory: /data/retention
      shared_store: filesystem
      compaction_interval: 10m
      retention_enabled: true
      retention_delete_delay: 2h
      retention_delete_worker_count: 150
    ruler:
      storage:
        type: local
        local:
          directory: /etc/loki/rules
      ring:
        kvstore:
          store: memberlist
      rule_path: /tmp/loki/scratch
      alertmanager_url: https://alertmanager.xx
      external_url: https://alertmanager.xx
Update: I just realized I had a bug in my Helm chart values file where I was using the term ingestor instead of ingester so that may be part of my problem… My persistence settings were not applied so I’ll find out shortly if this makes the difference.
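In case it helps anyone else, the corrected values block would look roughly like this (assuming the loki-distributed Helm chart; the size value is just a placeholder):

ingester:                  # previously misspelled as "ingestor", so the block was ignored
  persistence:
    enabled: true
    size: 10Gi             # placeholder value, adjust to your needs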
priyankapande:
store: inmemory
Could it be that the use of the inmemory store is the cause here? If you restart your ingester, do the logs all disappear?
I am not sure how it actually works, but it would not surprise me if the log retention settings didn't apply to the inmemory storage, since every time your ingester restarts it would wipe the logs. Just a thought…
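For what it's worth, if losing not-yet-flushed data on ingester restarts is part of the picture, newer Loki versions support a write-ahead log for the ingester (it also appears in the second config in this thread). A minimal sketch, with the directory being an assumption that should point at persistent storage:

ingester:
  wal:
    enabled: true
    dir: /u01/app/grafana/loki/wal   # assumed path on persistent storage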
Check the file modification times in the /var/loki/chunks directory. Are you sure the chunk files in that directory are actually being deleted? I suspect you can't find the logs because the index was deleted while the chunk files are still there.