Loki not writing to S3, no errors

Hey there,

We are moving our Loki environment from Docker to Kubernetes and I am having trouble getting any data to show up.

Our logging pipeline goes from fluent-bit to fluentd to Loki, which is installed with the Helm chart and runs as a single service. There are no errors in any of the logs as far as I can see. However, nothing is being written to our S3 bucket, and in the Loki logs I am seeing 'msg="Get - deadline exceeded" key=collectors/ring'. A slightly larger log excerpt for reference:

    level=debug ts=2021-09-30T17:52:11.34601498Z caller=mock.go:159 msg=Get key=collectors/ring modify_index=280 value="\"\xa5\x05\xf4\xa4\x02\n\xa2\x05\n\x06loki-0\x12\x97\x05\n\x0f10.0.31.32:9095\x10\xc6\xf2\u05ca\""
    level=debug ts=2021-09-30T17:52:11.34606237Z caller=mock.go:86 msg=CAS key=collectors/ring modify_index=280 value="\"\xa5\x05\xf4\xa4\x02\n\xa2\x05\n\x06loki-0\x12\x97\x05\n\x0f10.0.31.32:9095\x10\xcb\xf2\u05ca\""
    level=debug ts=2021-09-30T17:52:11.346094395Z caller=mock.go:159 msg=Get key=collectors/ring modify_index=281 value="\"\xa5\x05\xf4\xa4\x02\n\xa2\x05\n\x06loki-0\x12\x97\x05\n\x0f10.0.31.32:9095\x10\xcb\xf2\u05ca\""
    level=debug ts=2021-09-30T17:52:11.34612015Z caller=mock.go:113 msg=Get key=collectors/ring wait_index=281
    level=debug ts=2021-09-30T17:52:13.342450631Z caller=mock.go:149 msg="Get - deadline exceeded" key=collectors/ring
    level=debug ts=2021-09-30T17:52:13.342503547Z caller=mock.go:113 msg=Get key=collectors/ring wait_index=281
    level=debug ts=2021-09-30T17:52:14.342626282Z caller=mock.go:149 msg="Get - deadline exceeded" key=collectors/ring
    level=debug ts=2021-09-30T17:52:14.342676166Z caller=mock.go:113 msg=Get key=collectors/ring wait_index=281
    level=debug ts=2021-09-30T17:52:15.342852695Z caller=mock.go:149 msg="Get - deadline exceeded" key=collectors/ring
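
For context on the front of the pipeline, the fluentd output that ships to Loki is along these lines (a minimal sketch assuming the fluent-plugin-grafana-loki output plugin; the URL and label values are placeholders, not our exact config):

    <match **>
      @type loki
      url "http://loki:3100"
      extra_labels {"env":"kubernetes"}
    </match>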

We have an IAM policy with s3:* attached to the service account, and a bucket policy set up on our S3 bucket.
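
For reference, the policy on that role looks roughly like this (a sketch; the bucket name is a placeholder):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:*",
          "Resource": [
            "arn:aws:s3:::<bucket-name>",
            "arn:aws:s3:::<bucket-name>/*"
          ]
        }
      ]
    }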

We are running the Loki Helm chart, version 2.6.0.

Here are the values for our Helm chart:

    serviceAccount:
      annotations:
        eks.amazonaws.com/role-arn: arn:aws:iam::<Our account id>:role/<s3 role id>
    extraArgs:
      log.level: debug
    config:
      auth_enabled: false   
      server:
        http_listen_port: 3100
      ingester:
        lifecycler:
          ring:
            kvstore:
              store: inmemory
            replication_factor: 1
          final_sleep: 0s
        chunk_idle_period: 1h
        chunk_retain_period: 30s
        max_transfer_retries: 0
        max_chunk_age: 1h
        chunk_target_size: 1048576        
      schema_config:
        configs:
          - from: 2018-04-15
            store: boltdb-shipper
            object_store: aws
            schema: v11
            index:
              prefix: loki_index
              period: 24h
      storage_config:
        aws:
          s3: s3://ca-central-1/<bucket-name>
        boltdb_shipper:
          active_index_directory:  /data/loki/index
          shared_store: s3
          cache_location: /data/loki/boltdb-chache
          cache_ttl: 24h
      compactor:
        working_directory: /data/loki/boltdb-shipper-compactor
        shared_store: aws      
      limits_config:
        enforce_metric_name: false
        reject_old_samples: true
        reject_old_samples_max_age: 168h        
      chunk_store_config:
        max_look_back_period: 0s    
      table_manager:
        retention_deletes_enabled: true
        retention_period: 720h
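
As I understand the storage_config docs, the s3 URL form above should be equivalent to spelling the region and bucket out explicitly, something like the snippet below (a sketch I have not actually tried, so treat the exact keys as my assumption):

    storage_config:
      aws:
        region: ca-central-1
        bucketnames: <bucket-name>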

Any insights would be GREATLY appreciated. I have been spinning my wheels on this for a while.

Did you figure out the problem? We seem to be having a similar issue.

I'm having the same issue. I've been trying different config combinations but no luck.

Any updates on this? Having the same issue.
