Does your log have any structure, or is it completely unstructured? If you at least have some patterns to follow, you can craft multiple queries, one per pattern; but if your logs are completely unstructured, then you will probably need to do some pre-processing before ingesting the logs.
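For instance, if each pattern is marked by a known keyword, one LogQL line-filter query per pattern would do. These are hypothetical examples assuming a WFLY-style keyword and a job=varlogs selector:

```
{job="varlogs"} |= "WFLY"
{job="varlogs"} |~ "WFLY[0-9]+"
```

The first filter matches lines containing the literal string, the second matches a regular expression.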
If your logs have no specific pattern, then you need some pre-processing. Assuming your keywords are bounded (there is a limited number of them and you know what they are), you can set up some sort of filter in your logging agent. If you are using promtail, it would look something like this:
1. Filter logs for the label WILDFLY-ERRORS and grep for the string WFLY*.
2. Add the label "WFLY*" to the matching log lines.
Essentially you are just adding a static label WFLY* to every log line that matches your selector, because you can't use the actual value of the keyword unless you capture it somehow.
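As a rough sketch (assuming a promtail scrape job named varlogs reading from /var/log; the label names wildfly_error and wfly_code are made up for illustration), the pipeline could look like this:

```yaml
scrape_configs:
  - job_name: varlogs
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log   # assumed path, adjust to your setup
    pipeline_stages:
      # Run the nested stages only on lines that contain the WFLY marker.
      - match:
          selector: '{job="varlogs"} |= "WFLY"'
          stages:
            # Option 1: attach a fixed label to every matching line.
            - static_labels:
                wildfly_error: "WFLY"
            # Option 2: capture the actual error code and promote it
            # to a label (instead of option 1).
            # - regex:
            #     expression: '(?P<wfly_code>WFLY[0-9]+)'
            # - labels:
            #     wfly_code:
```

Keep in mind that promoting captured values to labels (option 2) increases label cardinality, which Loki is sensitive to, so the static label is usually the safer choice.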
Judging from your config it should work. I'd check the labels and make sure those log lines are coming from job=varlogs too. Maybe they just weren't part of that pipeline.
Your configuration is obviously working, at least for some logs. The log lines also look similar, so there should be no reason it wouldn't work. The logical conclusion is that the log lines that aren't parsed were never selected to be parsed in the first place (they aren't part of the selector job="varlogs").
You can verify that fairly easily, too. Expand the logs that are correctly parsed, and you should see the label job=varlogs (unless you are dropping it in your config). If the log lines that aren't parsed don't have that label, then it's clear that they aren't part of job=varlogs.
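To make that check concrete, you could compare two queries in Grafana's Explore view (the /var/log path is an assumption based on the job name; promtail adds the filename label automatically for file targets):

```
{filename=~"/var/log/.*"}
{job="varlogs"}
```

If a line shows up under the first selector but not the second, it was scraped but isn't part of job=varlogs, so it never went through that pipeline.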