Deleting Data from ClickHouse Directly
TLDR Shreyash is facing issues deleting data from ClickHouse directly because the disk has run out of space. vishal-signoz confirms the issue is insufficient disk space.
Feb 24, 2023 (9 months ago)
Shreyash
06:03 AM

Shreyash
06:06 AM
<pre>
/src/exporter/clickhousetracesexporter/writer.go:130
/src/exporter/clickhousetracesexporter/writer.go:109
2023-02-24T05:49:34.993Z error clickhousetracesexporter/writer.go:110 Could not write a batch of spans {"kind": "exporter", "data_type": "traces", "name": "clickhousetraces", "error": "code: 243, message: Cannot reserve 1.00 MiB, not enough space"}
/src/exporter/clickhousetracesexporter/writer.go:110
2023-02-24T05:49:35.344Z warn [email protected]/batch_processor.go:178 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-02-24T05:49:36.345Z warn [email protected]/batch_processor.go:178 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-02-24T05:49:37.346Z warn [email protected]/batch_processor.go:178 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-02-24T05:49:38.347Z warn [email protected]/batch_processor.go:178 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-02-24T05:49:39.347Z warn [email protected]/batch_processor.go:178 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-02-24T05:49:40.347Z error exporterhelper/queued_retry.go:310 Dropping data because sending_queue is full. Try increasing queue_size. {"kind": "exporter", "data_type": "logs", "name": "clickhouselogsexporter", "dropped_items": 861}
/go/pkg/mod/go.opentelemetry.io/[email protected]/exporter/exporterhelper/queued_retry.go:310
/go/pkg/mod/go.opentelemetry.io/[email protected]/exporter/exporterhelper/logs.go:114
/go/pkg/mod/go.opentelemetry.io/collector/[email protected]/logs.go:36
/go/pkg/mod/go.opentelemetry.io/collector/processor/[email protected]/batch_processor.go:339
/go/pkg/mod/go.opentelemetry.io/collector/processor/[email protected]/batch_processor.go:176
/go/pkg/mod/go.opentelemetry.io/collector/processor/[email protected]/batch_processor.go:144
2023-02-24T05:49:40.348Z warn [email protected]/batch_processor.go:178 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
</pre>
Getting above error
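Error code 243 in the log above is ClickHouse reporting "not enough space": the data disk is full, so the trace and log exporters cannot write and the collector's sending_queue backs up until data is dropped. Before deleting anything directly, it helps to see which tables are actually consuming the disk. A minimal sketch of the kind of queries that could be run in clickhouse-client (the system tables are standard ClickHouse; database and table names in a given SigNoz install may differ):
<pre>
-- Which tables are using the most disk (active parts only)
SELECT
    database,
    table,
    formatReadableSize(sum(bytes_on_disk)) AS size_on_disk,
    sum(rows) AS total_rows
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY sum(bytes_on_disk) DESC
LIMIT 20;

-- Free vs. total space on the ClickHouse disks
SELECT
    name,
    formatReadableSize(free_space) AS free,
    formatReadableSize(total_space) AS total
FROM system.disks;
</pre>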
vishal-signoz
10:25 AM
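vishal-signoz confirms the root cause is simply insufficient disk space, so some data has to be removed from ClickHouse directly before the exporters can recover. A sketch of the common options, assuming the default SigNoz trace schema (the signoz_traces.signoz_index_v2 table name and its date-based partitioning are assumptions here; verify against the actual schema first). Dropping whole partitions frees space immediately, whereas an ALTER TABLE ... DELETE mutation needs extra free space of its own and can fail on an already-full disk:
<pre>
-- List partitions for the (assumed) trace index table, oldest first
SELECT
    partition,
    formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE database = 'signoz_traces' AND table = 'signoz_index_v2' AND active
GROUP BY partition
ORDER BY partition ASC;

-- Drop the oldest partition(s) to reclaim space right away
-- (use a partition value returned by the query above)
ALTER TABLE signoz_traces.signoz_index_v2 DROP PARTITION '2023-01-01';

-- Alternatively, shorten retention so ClickHouse expires old rows via TTL
-- (SigNoz normally manages this from its retention settings)
ALTER TABLE signoz_traces.signoz_index_v2
    MODIFY TTL toDateTime(timestamp) + INTERVAL 3 DAY;
</pre>
Once space is freed, the collector's sending_queue should drain and the "Sender failed" / "Dropping data" warnings should stop on their own; lowering the retention period in SigNoz afterwards helps keep the disk from filling up again.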