Search and group by

Hi
How can I search for these keywords "WFLYCTL0183|WFLYEJB0043|WFLYEJB0048" and count them over time?

I tried this, but I don't know how to group them by search keyword:


{job="varlogs"} |~ `WFLYCTL0183|WFLYEJB0043|WFLYEJB0048` | pattern `<timestamp> <time> <hostname> <level> <id> <msg>`

Any idea?
Thanks

It’s not clear which part of your pattern your keyword would reside in. I am gonna guess it’s id. Then the query should look something like this:

sum by (id) (
  count_over_time(
    {job="varlogs"}
      | pattern `<timestamp> <time> <hostname> <level> <id> <msg>`
      | __error__=""
      | id =~ `WFLYCTL0183|WFLYEJB0043|WFLYEJB0048`
    [$__interval]
  )
)

@tonyswumac id is a different field.

The keywords are not located at a constant position; I need to find them in the log and group by them.
Any other ideas?

Thanks,

Does your log have any structure, or is it completely unstructured? If you at least have some patterns to follow you can craft a separate query for each keyword, but if your logs are completely unstructured then you will probably need to do some pre-processing before ingesting the logs.
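
For example, something like this per keyword (just a sketch, reusing the varlogs job and the keywords from your first post), repeated for WFLYEJB0043 and WFLYEJB0048:

count_over_time({job="varlogs"} |= "WFLYCTL0183" [$__interval])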

@tonyswumac the log belongs to WildFly and has no specific pattern; I just need to find these errors, count them, and group by them.

Any idea?

If your log has no specific pattern then you need some pre-processing. Assuming your keywords are bounded (there is a limited number of them and you know what they are), you can set up some sort of filter in your logging agent, something like this if you are using promtail:

pipeline_stages:
  - match:
      selector: '{name="<SOME_LABEL>"} |= "WFLYCTL0183"'
      stages:
        - labels:
            keyword: "WFLYCTL0183"
  - match:
      selector: '{name="<SOME_LABEL>"} |= "<KEYWORD2>"'
      stages:
        - labels:
            keyword: "<KEYWORD2>"

The idea is to match each keyword and slap a label on it before forwarding the logs to Loki, which gives you the label you need for aggregation.
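
Then in Grafana you can aggregate on that label, something like this (a sketch, assuming the label is called keyword and your logs are still under job=varlogs):

sum by (keyword) (
  count_over_time({job="varlogs", keyword=~"WFLY.+"} [$__interval])
)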

@tonyswumac is it possible to use a wildcard instead of defining them one by one?

something like this:

pipeline_stages:
  - match:
      selector: '{name="WILDFLY-ERRORS"} |= "WFLY*"'
      stages:
        - labels:
            keyword: "WFLY*"

What you posted here is going to do this:

  1. Filter logs for the label WILDFLY-ERRORS and grep for the literal string WFLY* (the |= filter matches an exact substring; it does not treat * as a wildcard).
  2. Insert the label "WFLY*" into the log line.

Essentially you are just adding a static label WFLY* to every log line that matches your selector, because you can't use the actual value of the keyword unless you capture it somehow.

I tried this in promtail-local-config.yaml:

pipeline_stages:
  - match:
      selector: '{job="varlogs"}'
      stages:
        # capture the WildFly error code into the extracted map
        - regex:
            expression: "(?P<wildflyerrors>WFLY[^:]+):"
        # promote the captured value to a label
        - labels:
            wildflyerrors:

but it doesn't extract the error code from all of the lines that contain errors.

any idea?
Thanks,

Judging from your config it should work. I'd check the labels on the lines that aren't parsed and make sure those are from job=varlogs too. Maybe they just weren't part of that pipeline.

What do you mean? Would you please explain more?

Your configuration is obviously working, at least for some logs. The log lines also look similar, so there should be no reason it wouldn't work for the rest. The logical conclusion is that the log lines that aren't parsed perhaps weren't selected to be parsed (not part of the selector job="varlogs").

You can verify that fairly easily, too. Expand the logs that are correctly parsed, and you should see the label job=varlogs (unless you are dropping it in your config). But the log lines that aren't parsed don't have any label. If that is the case then it's clear that the logs that aren't parsed aren't part of job=varlogs.
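
A quick way to check from Explore (just a sketch, assuming your original labels): compare a search across all jobs with one restricted to varlogs. Any line that shows up in the first query but not the second never went through your varlogs pipeline:

{job=~".+"} |= "WFLY"
{job="varlogs"} |= "WFLY"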
