Slow Log Messages in SigNoz with Logrus and OpenTelemetry
TLDR: Harald reported that log messages sent with logrus and OpenTelemetry were showing up slowly in SigNoz. They narrowed the problem down by restricting log collection to the relevant k8s namespaces, which improved performance to near real-time.
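The fix summarized above (collecting only the relevant k8s namespaces) can be expressed, for example, with the collector's filter processor. The sketch below assumes the OpenTelemetry Collector Contrib filter processor with OTTL conditions; the attribute key, namespace value, and pipeline contents are placeholders rather than the configuration actually used in this thread:

```yaml
processors:
  # Drop every log record whose resource is not in the namespace of interest.
  filter/keep-namespace:
    error_mode: ignore
    logs:
      log_record:
        - 'resource.attributes["k8s.namespace.name"] != "the-app"'

service:
  pipelines:
    logs:
      receivers: [otlp]                            # keep your existing receivers
      processors: [filter/keep-namespace, batch]   # filter before batching
      exporters: [clickhouselogsexporter]
```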
Jun 21, 2023 (5 months ago)
Harald
05:54 PM Should I send the data from logrus directly to OpenTelemetry? Is there an example for this? Where/how do I get the data to show up in the web UI?
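For reference, here is a minimal sketch of sending logrus records directly to OpenTelemetry, assuming the otellogrus bridge from opentelemetry-go-contrib and the OTLP/gRPC log exporter from the Go SDK; the endpoint and service name are placeholders, and this illustrates one possible wiring rather than the setup used in this thread:

```go
package main

import (
	"context"

	"github.com/sirupsen/logrus"
	"go.opentelemetry.io/contrib/bridges/otellogrus"
	"go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc"
	sdklog "go.opentelemetry.io/otel/sdk/log"
)

func main() {
	ctx := context.Background()

	// Export log records over OTLP/gRPC to a collector.
	// "otel-collector:4317" is a placeholder endpoint.
	exporter, err := otlploggrpc.New(ctx,
		otlploggrpc.WithEndpoint("otel-collector:4317"),
		otlploggrpc.WithInsecure(),
	)
	if err != nil {
		logrus.Fatalf("failed to create OTLP log exporter: %v", err)
	}

	// Batch records before they are exported.
	provider := sdklog.NewLoggerProvider(
		sdklog.WithProcessor(sdklog.NewBatchProcessor(exporter)),
	)
	defer func() { _ = provider.Shutdown(ctx) }()

	// Bridge logrus into OpenTelemetry: every logrus entry is also
	// forwarded to the logger provider above.
	logrus.AddHook(otellogrus.NewHook("my-service",
		otellogrus.WithLoggerProvider(provider),
	))

	logrus.WithField("component", "demo").Info("hello from logrus via OTLP")
}
```

Another common route is to keep writing JSON logs to stdout and let the collector's filelog receiver pick them up from the node.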
Pranay
06:36 PM Can you give some data points on:
1. Machine size used to run SigNoz (RAM/CPU allocated)
2. Logs data size on which query is being run
3. What type of query are you writing?
Harald
06:37 PM (message content not captured in this archive)
Harald
06:38 PM k8s_namespace_name IN ('the-app') AND k8s_pod_name CONTAINS 'foo-service'
Harald
06:39 PM to 07:52 PM (a series of messages whose content was not captured in this archive)
Harald
07:55 PM 2023-06-21T19:38:38.358Z warn [email protected]/batch_processor.go:190 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-06-21T19:38:41.360Z warn [email protected]/batch_processor.go:190 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-06-21T19:38:44.362Z warn [email protected]/batch_processor.go:190 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-06-21T19:38:47.363Z warn [email protected]/batch_processor.go:190 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-06-21T19:38:50.366Z error exporterhelper/queued_retry.go:317 Dropping data because sending_queue is full. Try increasing queue_size. {"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "dropped_items": 3}
/go/pkg/mod/go.opentelemetry.io/collector/[email protected]/exporterhelper/queued_retry.go:317
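The repeated "sending_queue is full" warnings above mean the clickhouselogsexporter cannot drain records as fast as they arrive, so the exporter queue overflows and data is dropped. If ClickHouse can absorb more throughput, one mitigation is to enlarge that queue. This is a hedged sketch, assuming the exporter exposes the standard exporterhelper sending_queue settings; the numbers are placeholders, and reducing log volume (as was eventually done here by filtering namespaces) is usually the better fix:

```yaml
exporters:
  clickhouselogsexporter:
    # ... existing DSN and other settings unchanged ...
    sending_queue:
      enabled: true
      num_consumers: 10   # parallel workers draining the queue
      queue_size: 5000    # batches that may wait before new data is dropped
```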
Harald
07:55 PM to 08:32 PM (several further messages whose content was not captured in this archive)
Similar Threads
Troubleshooting SigNoz Auto-Instrumentation Configuration
igor is having trouble configuring auto-instrumentation for Java applications using SigNoz, with traces not appearing in the SigNoz UI. Prashant advises to check logs of the otel sidecar, use service name for endpoint, verify supported libraries, and test with telemetrygen. However, the issue still persists.
Issue Accessing Pod Logs in SigNoz UI on AKS
prashant is facing an issue accessing pod logs of their application in SigNoz UI on AKS. nitya-signoz and Prashant provide suggestions related to log file paths and potential issues, but the problem remains unresolved.
Resolving SigNoz Query Service Error
Einav encountered an error related to a missing table in the SigNoz query service, which was preventing data from being visible in the UI. Srikanth guided them to restart specific components and drop a database table, which resolved the issue.
Issues with SigNoz Setup and Data Persistence in AKS
Vaibhavi experienced issues setting up SigNoz in AKS, and faced data persistence issues after installation. Srikanth provided guidance on ClickHouse version compatibility and resource requirements, helping Vaibhavi troubleshoot and resolve the issue.
SigNoz Deployment Error: Unsupported Metric Type
Einav is having trouble with a SigNoz deployment, receiving an error about unsupported metric types. Srikanth suggests they may be hitting a known error and provides a link for reference.