- What Grafana version and what operating system are you using?
  8.5.3 / Linux
- What are you trying to achieve?
  I want to know whether there are configs available to make the promtail process stop by itself, or restart, after a particular amount of time.
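  As far as I can tell promtail itself doesn't expose such an option, so the closest thing I've found is handling it at the service-manager level. A minimal sketch, assuming promtail runs as a systemd service named promtail.service (the unit name and the durations below are my assumptions), placed in a drop-in such as /etc/systemd/system/promtail.service.d/override.conf:

  [Service]
  # terminate promtail after 4 hours of runtime (placeholder value)
  RuntimeMaxSec=4h
  # let systemd bring it back up after the timeout kill
  Restart=always
  RestartSec=30s

  After editing the drop-in, systemctl daemon-reload and systemctl restart promtail would apply it. I'd still prefer a native promtail setting if one exists.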
- How are you trying to achieve it?
  I'm trying to ingest around 10 GB of data in a single run, and I was getting "too many open files" and "error sending batch" errors. Those were fixed by increasing the limits in loki.yaml, but now I'm seeing "inotify resources exhausted, inotify cannot be used, reverting to polling" errors and data is not reaching Loki.
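  The inotify message suggests promtail is hitting the kernel's per-user inotify limits because of how many files and directories it has to watch. One workaround I'm considering is raising those limits with sysctl; this is only a sketch and the values are guesses, not tuned for this workload, e.g. in /etc/sysctl.d/99-inotify.conf:

  # raise per-user inotify limits (example values, not tuned)
  fs.inotify.max_user_instances = 1024
  fs.inotify.max_user_watches = 1048576

  applied with sysctl --system. I'm not sure whether that is the right fix here or whether promtail should simply be stopped/restarted periodically instead, hence the question above.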
- Can you copy/paste the configuration(s) that you are having problems with?
Loki config (loki.yaml):
server:
  http_listen_port: 3100
  grpc_listen_port: 9096

common:
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /home/oracle/elasticsearch/loki/chunks
      rules_directory: /home/oracle/elasticsearch/loki/rules
  replication_factor: 1
  ring:
    instance_addr: <ip>
    kvstore:
      store: inmemory

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s
  max_transfer_retries: 0

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

ruler:
  alertmanager_url: http://<ip>:9093

limits_config:
  enforce_metric_name: false
  reject_old_samples: false
  #reject_old_samples_max_age:
  ingestion_rate_mb: 10240
  ingestion_burst_size_mb: 5120
  per_stream_rate_limit: 2560MB

chunk_store_config:
  max_look_back_period: 0s

table_manager:
  retention_deletes_enabled: false
  retention_period: 0s
Promtail config:
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://<ip>:3100/loki/api/v1/push

scrape_configs:
  # APPLICATION LOGS
  - job_name: APPLICATION
    static_configs:
      - targets:
          - <ip>
        labels:
          log_type: APPLICATION
          component_type: all_server_logs
          __path__: "/home/oracle/catt-logs-week23/**/{*.log,*.log*}/*"