Hi,
We installed Tempo distributed on AWS EKS successfully through the Helm chart. On EKS we also have Promtail, Loki, and Prometheus (AMP). Grafana is installed on a separate EC2 server. We have exposed the Tempo distributed-query-frontend-discovery service through an Ingress of the alb-ingress-controller on port 16687.
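For reference, the Ingress we created looks roughly like the sketch below (the service name, namespace, and host are placeholders for our actual values):

```yaml
# Rough sketch of our Ingress; name, namespace, and host are placeholders,
# and the annotations assume the alb-ingress-controller defaults we use.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tempo-query-frontend
  namespace: tempo
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
    - host: tempo.example.internal   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tempo-distributed-query-frontend-discovery
                port:
                  number: 16687
```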
We were able to create the data source in Grafana successfully, but now we are facing an issue viewing the traces linked from Loki logs.
Below is the error.
failed to query data: failed to convert tempo response to Otlp: unexpected EOF
Regards,
Shashi
Hi,
Following is the version we are using…
That is a strange error to receive. Can you post any relevant logs from Grafana or the tempo query-frontend?
Oh, I will also note that I don’t think Tempo will accept trace IDs in that form. Can you try removing the dashes and see if it works? For example, a dashed ID like 0af76519-16cd-43dd-8448-eb211c80319c would need to be sent as 0af7651916cd43dd8448eb211c80319c.
Hi,
Please find the log details below.
Hi Joeelliott,
Please find the application log details below. So you are saying we need to remove the dashes from the trace IDs in the events. I think Mulesoft Runtime Fabric is creating the IDs in this format, but I will still check with the dev team about changing this.
Regards,
Shashi
The query-frontend will normally log all requests at info level, but I’m not seeing any lines indicating it’s receiving requests. Can you double-check connectivity between Grafana and Tempo?
distributed-query-frontend-discovery service through an Ingress of the alb-ingress-controller on port 16687

Hi, I believe 16687 is the jaeger-query port. Can you try pointing the Grafana datasource to the Tempo API port instead, which defaults to :3200? This sounds related to the OTLP conversion error.
Also try enabling the log_received_traces setting so we can verify what traffic Tempo is receiving.
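In the Tempo configuration that would look something like the sketch below (note: newer Tempo versions replace this option with the log_received_spans block, so check the docs for the version you are running):

```yaml
# Sketch: enable logging of received traces on the distributor.
# Newer Tempo releases replace this flag with the log_received_spans block.
distributor:
  log_received_traces: true
```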
Hi Joeelliott,
It is showing the data source as active and working.
Regards,
Shashi
Hi,
Please find below all the services for Tempo; no service is exposing port 3200.
Regards,
Shashi
Hi Martin,
Even when we try to create the Ingress with port 3200, it gives the error below.
Regards,
Shashi
Hi Martin,
We have enabled log_received_traces = true.
We are using the https://github.com/grafana/tempo/blob/main/example/helm/microservices-tempo-values.yaml file to configure Tempo.
Regards,
Shashi
My apologies. Tempo defaults to 3200, but I forgot that the tempo-distributed chart exposes it on 3100. Please try pointing the datasource to port 3100.
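If you provision the datasource from a file, it would look roughly like this sketch (the url is a placeholder for whatever host reaches the query-frontend on port 3100 in your setup):

```yaml
# Sketch of a file-provisioned Tempo datasource in Grafana.
# The url is a placeholder; point it at whatever host your ALB
# uses to forward to the query-frontend on port 3100.
apiVersion: 1
datasources:
  - name: Tempo
    type: tempo
    access: proxy
    url: http://tempo.example.internal:3100
```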
Hi Martin,
We checked with port 3100 and there it shows a 404 page not found.
Regards,
Shashi
Hi Martin,
It is working when we use the 16686/16687 ports; both work, but log search still gives the error.
Regards,
Shashi
Hi @sbhushankr,
I think a 404 there is actually what we are expecting. It says that Grafana was able to reach the service and just gets back a 404. This was an issue for a while, but it has been fixed in more recent versions of Grafana I believe, and if I’m reading your helm list output there, the version is a little old.
Hi Zachleslie,
Grafana is not installed on the same EKS cluster… We have Grafana on a separate EC2 server, and there we have installed the latest version, 9.0.2.
Regards,
Shashi
Hi,
We need help to resolve this issue. If anyone has faced this issue, please share the resolution.
Regards,
Shashi
Perhaps I misremember the version. Can you confirm that the http://localhost:3100/status endpoint responds from within the pod (e.g. kubectl exec into the query-frontend pod and curl it)? If the container is working, then I suppose we’re just talking about load balancing, correct? What endpoint is the health check hitting to confirm the target is healthy? I’m not familiar with the Helm chart load balancer setup for AWS. Can you give more detail?
Thanks for suggesting to check the status URL. We set the health check URL to /status and it worked well; the service is now showing as healthy.
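For anyone else hitting this, the change on our side was just the health check path annotation on the Ingress (sketch with placeholder values):

```yaml
# The ALB health check defaulted to "/", which returns 404 from the Tempo
# query-frontend; pointing it at /status makes the target report healthy.
metadata:
  annotations:
    alb.ingress.kubernetes.io/healthcheck-path: /status
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
```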
Regards,
Shashi