#support

Slow Log Messages in SigNoz with Logrus and OpenTelemetry

TLDR: Harald reported slow log messages in SigNoz with logrus and OpenTelemetry. Restricting collection to the relevant k8s namespaces fixed it and brought log delivery back to near real-time.

Jun 21, 2023
Harald
05:54 PM
Hello everybody. I still have big trouble with slow log messages in SigNoz (k8s cluster, Go application, logrus to console). Is there anything I can do to speed things up?

Should I send the data from logrus directly to OpenTelemetry? Is there an example for this? Where/how do I get the data into the SigNoz UI?
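For reference, one way to send logrus output straight to OpenTelemetry (and on to SigNoz via OTLP) is the otellogrus bridge from opentelemetry-go-contrib combined with the OTLP log exporter. A minimal sketch, assuming current versions of those packages and a collector listening on localhost:4317; the service name "foo-service" and the endpoint are placeholders, not values from this thread:

```go
package main

import (
	"context"

	"github.com/sirupsen/logrus"
	"go.opentelemetry.io/contrib/bridges/otellogrus"
	"go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc"
	sdklog "go.opentelemetry.io/otel/sdk/log"
)

func main() {
	ctx := context.Background()

	// OTLP/gRPC log exporter pointed at the otel collector (assumed address).
	exporter, err := otlploggrpc.New(ctx,
		otlploggrpc.WithEndpoint("localhost:4317"),
		otlploggrpc.WithInsecure(),
	)
	if err != nil {
		logrus.Fatal(err)
	}

	// Logger provider that batches records before export.
	provider := sdklog.NewLoggerProvider(
		sdklog.WithProcessor(sdklog.NewBatchProcessor(exporter)),
	)
	defer provider.Shutdown(ctx)

	// The bridge re-emits every logrus entry as an OTel log record.
	logrus.AddHook(otellogrus.NewHook("foo-service",
		otellogrus.WithLoggerProvider(provider),
	))

	logrus.WithField("user", "demo").Info("hello from logrus via OTLP")
}
```

The other route, which is what the rest of this thread relies on, is to keep writing JSON to the console and let the collector's k8s log collection pick it up from the container logs.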
Pranay
06:36 PM
Hey Harald, can you share what you mean by slow log messages? Are you finding the logs query to be slower than usual?

Can you give some data points on
1. Machine size used to run SigNoz (RAM/CPU allocated)
2. Logs data size on which query is being run
3. What type of query are you writing?
Harald
06:37 PM
k8s cluster - nearly zero log volume - I even reduced the logs from the readiness /ping requests to the webserver.
Harald
06:38 PM
k8s_namespace_name IN ('the-app') AND k8s_pod_name CONTAINS 'foo-service'
Harald
06:39 PM
The machine is a 12-core i7 or i9 with 128 GB RAM.
Harald
06:39 PM
3500 MB/s SSD
Harald
06:42 PM
Pranay I have an idea - but I can't prove it
Harald
06:42 PM
I am logging in JSON format with logrus (from Go).
Harald
06:42 PM
It sorts the JSON keys into alphabetical order.
Harald
06:43 PM
Maybe that's an issue if "level" isn't the first key in the message body?
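For context on the key-ordering theory: logrus's JSONFormatter marshals the entry fields with Go's encoding/json, which always writes map keys in sorted order, so the alphabetical output is expected behaviour rather than a misconfiguration, and a JSON log parser should not normally care where "level" sits. A small sketch showing that ordering (the field names are just examples):

```go
package main

import "github.com/sirupsen/logrus"

func main() {
	// JSONFormatter hands the fields to encoding/json, which sorts map keys,
	// so the output is alphabetical regardless of the order fields are added.
	logrus.SetFormatter(&logrus.JSONFormatter{})

	logrus.WithFields(logrus.Fields{
		"zebra":   1,
		"alpha":   2,
		"service": "foo-service", // example field
	}).Warn("key order demo")
	// Output (one line):
	// {"alpha":2,"level":"warning","msg":"key order demo","service":"foo-service","time":"...","zebra":1}
}
```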
Harald
06:50 PM
I have the feeling I haven't tried everything. But I have been trying to fix this for months and it simply doesn't work.
Harald
06:51 PM
I added patches to the metrics examples - but it's just a random number generator 🙂 Whatever I try, I can't get stable metrics running (like the ones I had in Datadog).
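On the metrics side, which this thread doesn't resolve, the usual pattern for getting real application metrics instead of the random-number demo is to wire a meter provider to an OTLP exporter and record instruments from application code. A hedged sketch, assuming an OTLP/gRPC collector endpoint on localhost:4317; the instrument and attribute names are examples only:

```go
package main

import (
	"context"
	"log"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
	"go.opentelemetry.io/otel/metric"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	ctx := context.Background()

	// OTLP/gRPC metrics exporter pointed at the collector (assumed address).
	exporter, err := otlpmetricgrpc.New(ctx,
		otlpmetricgrpc.WithEndpoint("localhost:4317"),
		otlpmetricgrpc.WithInsecure(),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Push metrics every 15 seconds via a periodic reader.
	provider := sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exporter,
			sdkmetric.WithInterval(15*time.Second))),
	)
	defer provider.Shutdown(ctx)
	otel.SetMeterProvider(provider)

	// A real counter instead of a random-number generator.
	requests, err := otel.Meter("foo-service").Int64Counter("http.server.requests")
	if err != nil {
		log.Fatal(err)
	}
	requests.Add(ctx, 1, metric.WithAttributes(attribute.String("route", "/ping")))
}
```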
Harald
06:51 PM
Willing to do whatever it takes and to help - at the moment I'm getting the feeling that it's maybe not just me.
Harald
07:08 PM
Pranay, do you guys have 100% validation that k3s works with SigNoz?
Harald
07:52 PM
Now the messages arrived 😒
Harald
07:52 PM
How can I debug this?
Harald
07:55 PM
2023-06-21T19:38:38.358Z    warn    [email protected]/batch_processor.go:190    Sender failed    {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-06-21T19:38:41.360Z    warn    [email protected]/batch_processor.go:190    Sender failed    {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-06-21T19:38:44.362Z    warn    [email protected]/batch_processor.go:190    Sender failed    {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-06-21T19:38:47.363Z    warn    [email protected]/batch_processor.go:190    Sender failed    {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-06-21T19:38:50.366Z    error    exporterhelper/queued_retry.go:317    Dropping data because sending_queue is full. Try increasing queue_size.    {"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "dropped_items": 3}

    /go/pkg/mod/go.opentelemetry.io/collector/[email protected]/exporterhelper/queued_retry.go:317
Harald
07:55 PM
Got this from the otel collector 🥳🥳🥳
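Those warnings mean the ClickHouse logs exporter is not draining as fast as logs arrive, so the queue in front of it overflows and records get dropped. One knob is the exporter's sending_queue in the collector config; a sketch of what that stanza can look like (the numbers are illustrative only, and the defaults depend on the SigNoz otel-collector version in use):

```yaml
exporters:
  clickhouselogsexporter:
    dsn: tcp://clickhouse:9000   # placeholder - keep your existing DSN
    sending_queue:
      enabled: true
      num_consumers: 10    # parallel workers draining the queue
      queue_size: 10000    # example value - larger buffer for bursts
```

Raising the queue only buys headroom, though; if ingest consistently outpaces the ClickHouse inserts, the queue fills up again, which is why reducing the collected volume (the namespace filtering further down) ended up being the actual fix here.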
Harald
08:03 PM
After restarting the pod - everything is working fine
Harald
08:04 PM
Can I set SigNoz to collect data from only specific k8s namespaces?
Harald
08:32 PM
OK, that did the trick - just focus on the namespaces that are relevant - now it's almost real-time.
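For anyone landing here later: one way to restrict log collection to specific namespaces is a filter processor in the collector's logs pipeline. A sketch under the assumption that the collector sets the k8s.namespace.name resource attribute (the SigNoz k8s-infra chart also has its own include/exclude settings, which is the other place to do this); "the-app" is the example namespace from the query above:

```yaml
processors:
  filter/relevant-namespaces:
    error_mode: ignore
    logs:
      log_record:
        # drop every log record that is NOT from the namespace we care about
        - resource.attributes["k8s.namespace.name"] != "the-app"

service:
  pipelines:
    logs:
      receivers: [otlp]   # keep whatever receivers the pipeline already has
      processors: [filter/relevant-namespaces, batch]
      exporters: [clickhouselogsexporter]
```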
