Hello Loki Team,
I have a setup with Promtail, Loki, and Grafana: 2 VMs and one 300 GB S3 bucket on Dell ECS.
VM 1: 10.4.17.19
VM 2: 10.4.17.18
I’m currently running Loki version 2.8.2 in standalone mode on both VMs. Below is my Loki config:
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  grpc_server_max_concurrent_streams: 1000
  grpc_server_max_recv_msg_size: 104857600
  grpc_server_max_send_msg_size: 104857600
  http_server_idle_timeout: 120s
  http_server_write_timeout: 1m

common:
  path_prefix: /etc/loki/
  replication_factor: 1
  ring:
    kvstore:
      store: memberlist
  storage:
    s3:
      endpoint: https://ecs.test.net
      insecure: false
      bucketnames: bucket
      access_key_id: clickhouse_test_user
      secret_access_key: secret-key

memberlist:
  abort_if_cluster_join_fails: false
  bind_port: 7946
  join_members:
    - 10.4.17.19:7946
    - 10.4.17.18:7946

schema_config:
  configs:
    - from: "2023-01-03"
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h
      chunks:
        prefix: chunk_
        period: 24h
    - from: "2023-06-24"
      store: tsdb
      object_store: s3
      schema: v12
      index:
        prefix: index_
        period: 24h
      chunks:
        prefix: chunk_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /etc/loki/index
    build_per_tenant_index: true
    cache_location: /etc/loki/index_cache
    shared_store: s3
  tsdb_shipper:
    active_index_directory: /etc/loki/tsdb-index
    cache_location: /etc/loki/tsdb-cache
    shared_store: s3
  aws:
    endpoint: https://ecs.test.net
    insecure: false
    bucketnames: bucket
    access_key_id: clickhouse_test_user
    secret_access_key: secret-key
    s3forcepathstyle: true

frontend_worker:
  frontend_address: 10.4.17.19:9096
  grpc_client_config:
    max_send_msg_size: 104857600
  parallelism: 9
  match_max_concurrent: true

compactor:
  working_directory: /etc/loki/
  shared_store: s3
  compaction_interval: 5m

ruler:
  storage:
    s3:
      bucketnames: bucket

limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 48h
  max_global_streams_per_user: 100000
  max_entries_limit_per_query: 50000
  ingestion_rate_mb: 8190000000
  ingestion_rate_strategy: global
  ingestion_burst_size_mb: 81900000000
  max_line_size: 50000000
  per_stream_rate_limit: 100000000000
  per_stream_rate_limit_burst: 1000000000
  query_timeout: 10m
  split_queries_by_interval: 30m
  max_cache_freshness_per_query: '10m'

query_range:
  align_queries_with_step: true
  max_retries: 5
  cache_results: true
  results_cache:
    cache:
      enable_fifocache: true
      fifocache:
        size: 10024
        validity: 24h

frontend:
  log_queries_longer_than: 3s
  max_body_size: 10485760
  query_stats_enabled: false
  max_outstanding_per_tenant: 1024
  querier_forget_delay: 0s
  scheduler_address: ""
  scheduler_dns_lookup_period: 10s
  scheduler_worker_concurrency: 5
  compress_responses: true

distributor:
  ring:
    kvstore:
      store: memberlist

ingester:
  lifecycler:
    ring:
      kvstore:
        store: memberlist
      replication_factor: 1
  chunk_idle_period: 30m
  chunk_block_size: 262144
  chunk_target_size: 1572864
  chunk_retain_period: 0s
  max_transfer_retries: 0
  chunk_encoding: gzip
  max_chunk_age: 3h

chunk_store_config:
  max_look_back_period: 0s

querier:
  engine:
    timeout: 5m
  max_concurrent: 16

ingester_client:
  grpc_client_config:
    max_recv_msg_size: 67108864
  remote_timeout: 1s

query_scheduler:
  max_outstanding_requests_per_tenant: 32768
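For reference, here is a rough Python sketch of how I check where the bucket usage goes, grouped by top-level key prefix. It assumes boto3 is installed and reuses the endpoint, bucket name, and credentials from the config above.

from collections import defaultdict

import boto3

# Same endpoint/credentials as in the Loki config above.
s3 = boto3.client(
    "s3",
    endpoint_url="https://ecs.test.net",
    aws_access_key_id="clickhouse_test_user",
    aws_secret_access_key="secret-key",
)

# Sum object sizes per top-level key prefix (tenant dirs, index dirs, ...).
totals = defaultdict(int)
for page in s3.get_paginator("list_objects_v2").paginate(Bucket="bucket"):
    for obj in page.get("Contents", []):
        totals[obj["Key"].split("/", 1)[0]] += obj["Size"]

for prefix, size in sorted(totals.items()):
    print(f"{prefix:<20} {size / 1024**3:.2f} GiB")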
I have a problem that I’m trying to understand. I have a 5 GB file containing 55,521,300 log lines, generated with flog; Promtail reads the file and ships it to Loki. However, whether I set chunk_encoding to none, snappy, or gzip, I always end up with the same total size (about 2 GB) in my S3 bucket. The stored size does not change regardless of the compression setting.
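To sanity-check what sizes I should even expect, here is a rough sketch that compresses a sample of the flog output outside of Loki. gzip is in the Python standard library; snappy needs the third-party python-snappy package; "flog.log" is a placeholder path for my 5 GB file. Loki compresses per block (chunk_block_size: 262144 above), so the real ratios will be somewhat lower than whole-sample compression, but the relative ordering should hold.

import gzip

import snappy  # third-party: pip install python-snappy

SAMPLE_BYTES = 64 * 1024 * 1024  # a 64 MiB sample is enough for a ratio estimate

# "flog.log" is a placeholder for the 5 GB flog-generated file.
with open("flog.log", "rb") as f:
    sample = f.read(SAMPLE_BYTES)

raw = len(sample)
gz = len(gzip.compress(sample))
sn = len(snappy.compress(sample))

print(f"raw:    {raw / 1024**2:7.1f} MiB")
print(f"gzip:   {gz / 1024**2:7.1f} MiB ({raw / gz:.1f}x smaller)")
print(f"snappy: {sn / 1024**2:7.1f} MiB ({raw / sn:.1f}x smaller)")

Given how repetitive flog output is, I would expect none, snappy, and gzip to produce noticeably different stored sizes from one another, which is why the constant ~2 GB surprises me.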
I’m trying to understand why this is the case. Any help would be greatly appreciated.