Tempo does not show traces

Hi, I’m trying to install tempo-distributed into Kubernetes.
I’ve connected it to Grafana as a data source successfully, but I can’t find any traces.
If I use Tempo search, the list of traces is shown:

But if I click on any of them, Tempo shows a query error:

Any suggestions as to what the problem might be? (Traces are sent to Tempo from Knative.)

Hi @vlukanichev, can you share the logs you’re getting on the read path, i.e. the query-frontend and querier containers? Thanks!

Hi @mariorodriguez, sure.
Last lines of the query-frontend container:

level=info ts=2022-07-06T10:47:15.041645612Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T10:43:13.3593436Z metas=13 compactedMetas=3
level=info ts=2022-07-06T10:52:15.047725486Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T10:48:13.367784153Z metas=13 compactedMetas=3
level=info ts=2022-07-06T10:57:15.04900637Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T10:53:13.374770436Z metas=13 compactedMetas=3
level=info ts=2022-07-06T11:02:15.044866201Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T10:58:13.372013639Z metas=13 compactedMetas=3
level=info ts=2022-07-06T11:07:15.04340013Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T11:03:13.359397787Z metas=13 compactedMetas=3
level=info ts=2022-07-06T11:12:15.050220904Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T11:08:13.413994888Z metas=15 compactedMetas=3
level=info ts=2022-07-06T11:12:21.063046188Z caller=handler.go:119 tenant=single-tenant method=GET traceID=58b839f0a578f8a4 url="/api/search?tags=%20name%3D%22%2Fg%22&limit=20&start=1657102343&end=1657105943" duration=135.422357ms response_size=7718 status=200
level=info ts=2022-07-06T11:12:22.742023514Z caller=handler.go:119 tenant=single-tenant method=GET traceID=061641bb7daae2a7 url="/api/search?tags=%20name%3D%22%2Fg%22&limit=20&start=1657102345&end=1657105945" duration=66.577351ms response_size=7702 status=200
level=info ts=2022-07-06T11:17:15.041969302Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T11:13:13.547834644Z metas=15 compactedMetas=5
level=info ts=2022-07-06T11:22:15.048128658Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T11:18:13.395423324Z metas=15 compactedMetas=5
level=info ts=2022-07-06T11:27:15.057360186Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T11:23:13.362477672Z metas=15 compactedMetas=2
level=info ts=2022-07-06T11:32:15.037696187Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T11:28:13.347102193Z metas=15 compactedMetas=2

Last lines of the querier container:

level=info ts=2022-07-06T11:02:13.20013454Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T10:58:13.372013639Z metas=13 compactedMetas=3
level=info ts=2022-07-06T11:07:13.215220169Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T11:03:13.359397787Z metas=13 compactedMetas=3
level=info ts=2022-07-06T11:12:13.206201657Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T11:08:13.413994888Z metas=15 compactedMetas=3
level=info ts=2022-07-06T11:17:13.192878577Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T11:13:13.547834644Z metas=15 compactedMetas=5
level=info ts=2022-07-06T11:22:13.195541762Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T11:18:13.395423324Z metas=15 compactedMetas=5
level=info ts=2022-07-06T11:27:13.180708769Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T11:23:13.362477672Z metas=15 compactedMetas=2
level=info ts=2022-07-06T11:32:13.202457378Z caller=poller.go:144 msg="successfully pulled tenant index" tenant=single-tenant createdAt=2022-07-06T11:28:13.347102193Z metas=15 compactedMetas=2

I see no errors here.

There should be an error log line for the query by ID request somewhere. Are you running a gateway or a similar component between Tempo and Grafana?

No, the data source settings look like this, so I assume there are no components between Tempo and Grafana:
(screenshot of the Tempo data source settings)

I noticed that the HTTP response when I try to find a trace looks like this:

{"message":"Query data error","traceID":"00000000000000000000000000000000"}

And it is returned for any traceID I try to look up. Is it possible that the UI doesn’t send the right ID value to the search backend?
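
One way to rule Grafana out is to query the query-frontend directly for one of the trace IDs returned by search. This is just a rough sketch of how I’d check that; the service name and port are the ones from my data source settings, and the trace ID is simply one taken from the search results:

# Port-forward the query-frontend service locally (add -n <namespace> if needed).
kubectl port-forward svc/tempo-distributed-query-frontend 3100:3100 &

# Fetch a trace by ID straight from Tempo's HTTP API, bypassing Grafana.
curl -s "http://localhost:3100/api/traces/62452e4e3ee12cce89a3e76cd6b4aa7d"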

The only error I found in the container (ingester) logs is here:

level=info ts=2022-07-06T08:07:15.721223415Z caller=ingester.go:328 msg="beginning wal replay"
level=warn ts=2022-07-06T08:07:15.721279219Z caller=rescan_blocks.go:25 msg="failed to open search wal directory" err="open /var/tempo/wal/search: no such file or directory"
level=info ts=2022-07-06T08:07:15.721304721Z caller=ingester.go:413 msg="wal replay complete"
level=info ts=2022-07-06T08:07:15.721326923Z caller=ingester.go:427 msg="reloading local blocks" tenants=0
level=info ts=2022-07-06T08:07:15.721429831Z caller=app.go:284 msg="Tempo started"
level=info ts=2022-07-06T08:07:15.722415807Z caller=lifecycler.go:570 msg="instance not found in ring, adding with no tokens" ring=ingester
level=info ts=2022-07-06T08:07:15.72323497Z caller=lifecycler.go:417 msg="auto-joining cluster after timeout" ring=ingester
ts=2022-07-06T08:07:15.72945265Z caller=memberlist_logger.go:74 level=warn msg="Failed to resolve tempo-distributed-gossip-ring: lookup tempo-distributed-gossip-ring on 10.3.16.10:53: no such host"
ts=2022-07-06T08:07:17.201685608Z caller=memberlist_logger.go:74 level=warn msg="Failed to resolve tempo-distributed-gossip-ring: lookup tempo-distributed-gossip-ring on 10.3.16.10:53: no such host"
level=info ts=2022-07-06T08:07:19.794702316Z caller=memberlist_client.go:542 msg="joined memberlist cluster" reached_nodes=2
level=info ts=2022-07-06T09:08:05.722393136Z caller=flush.go:168 msg="head block cut. enqueueing flush op" userid=single-tenant block=7994d290-b4a0-4e77-a1e3-1c44ac88a024
level=info ts=2022-07-06T09:08:09.606030067Z caller=flush.go:244 msg="completing block" userid=single-tenant blockID=7994d290-b4a0-4e77-a1e3-1c44ac88a024
level=info ts=2022-07-06T09:08:09.776975692Z caller=flush.go:251 msg="block completed" userid=single-tenant blockID=7994d290-b4a0-4e77-a1e3-1c44ac88a024 duration=170.946425ms
level=info ts=2022-07-06T09:08:09.778102379Z caller=flush.go:300 msg="flushing block" userid=single-tenant block=7994d290-b4a0-4e77-a1e3-1c44ac88a024

But as far as I can see, it eventually resolves the hostname, so I hope everything is fine there, since this error does not appear again.

I found an error message in the Grafana container; it appears when I try to find a trace in Tempo.
@mariorodriguez Do you have any idea what I should check to fix this?

logger=context traceID=00000000000000000000000000000000 t=2022-07-06T18:35:16.66+0000 lvl=eror msg="Failed to look up user based on cookie" error="user token not found"
logger=context traceID=00000000000000000000000000000000 userId=0 orgId=1 uname= t=2022-07-06T18:35:16.66+0000 lvl=eror msg="Query data error" error="failed to query data: failed get to tempo: Get \"tempo-distributed-query-frontend:3100/api/traces/62452e4e3ee12cce89a3e76cd6b4aa7d\": unsupported protocol scheme \"tempo-distributed-query-frontend\"" remote_addr=127.0.0.1 traceID=00000000000000000000000000000000
logger=context traceID=00000000000000000000000000000000 userId=0 orgId=1 uname= t=2022-07-06T18:35:16.66+0000 lvl=eror msg="Request Completed" method=POST path=/api/ds/query status=500 remote_addr=127.0.0.1 time_ms=1 duration=1.181092ms size=75 referer="http://localhost:50276/explore?orgId=1&left=%7B%22datasource%22:%22Tempo2%22,%22queries%22:%5B%7B%22query%22:%2262452e4e3ee12cce89a3e76cd6b4aa7d%22,%22refId%22:%22A%22%7D%5D,%22range%22:%7B%22from%22:%221657128694556%22,%22to%22:%221657132294556%22%7D%7D" traceID=00000000000000000000000000000000

Resolved. I had missed the “http://” prefix when configuring the Tempo data source. The issue was fixed once I added it.
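
For reference, the data source that works for me now looks roughly like this when provisioned from YAML (just a sketch of my setup, not the exact file; the important part is the explicit http:// scheme in the url field):

apiVersion: 1
datasources:
  - name: Tempo2
    type: tempo
    access: proxy
    # Without the scheme, Grafana fails with "unsupported protocol scheme".
    url: http://tempo-distributed-query-frontend:3100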


This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.