Issue Accessing Pod Logs in SigNoz UI on AKS
TLDR prashant is not seeing their application's pod logs in the SigNoz UI on AKS, even though SigNoz component logs show up. nitya-signoz and Prashant suggest checking log file paths, collector permissions, and exporter errors, but the problem remains unresolved.
May 22, 2023 (4 months ago)
prashant
10:16 AM
We have installed SigNoz in k8s recently. We are facing an issue where we are not getting the pod logs of our application in the SigNoz UI, but we are getting the logs of the SigNoz components. Can you please help me with this?
prashant
10:17 AM
vishal-signoz
10:17 AM
prashant
10:31 AM
nitya-signoz
11:37 AM
prashant
12:14 PM
prashant
12:15 PM
nitya-signoz
12:18 PM
Can you manually check if you can access the files present in
/var/log/pods/*/*/*.log
(https://github.com/SigNoz/charts/blob/5d322fbf515a3af07b4ea8102be6ca4b8f16654b/charts/k8s-infra/values.yaml#L86)
through the collector pod?
cc Prashant
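For context, the linked values.yaml line points at the include path that the agent's filelog receiver (named filelog/k8s in the collector logs further down) tails on each node; a quick manual check is to exec into a k8s-infra otel-agent pod and list /var/log/pods/*/*/*.log. Below is a minimal, illustrative sketch of the receiver settings involved, assuming a standard filelog receiver setup rather than the chart's exact key layout, which can differ by version:

receivers:
  filelog/k8s:
    include:
      - /var/log/pods/*/*/*.log              # container log files on the node
    exclude:
      - /var/log/pods/kube-system_*/*/*.log  # example only: blacklisted namespaces/pods show up here as glob patterns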
prashant
12:24 PM
nitya-signoz
12:25 PM
Or maybe more permissions are required.
prashant
12:26 PM
prashant
12:26 PM
nitya-signoz
12:27 PM
Prashant
12:27 PM
Prashant
12:28 PM
prashant
12:28 PM
prashant
12:28 PM
1. zp-rara-backend
2. zp-rara-frontend
3. zp-devops-tools
Prashant
12:29 PM
otel-agent pods?
prashant
12:31 PM
prashant
12:32 PM
{
"timestamp": 1684758022447026700,
"id": "2PyEEZGV7J0yMixQCHwdjkijkSk",
"trace_id": "",
"span_id": "",
"trace_flags": 0,
"severity_text": "",
"severity_number": 0,
"body": "10.20.193.90 - - [22/May/2023:12:20:22 +0000] \"GET /api/5QVY36-0856-7c2867 HTTP/1.1\" 400 30 \"-\" \"okhttp/2.7.5\" 147 0.003 [zp-rara-backend-zp-rara-ms-oms-brs-80] [] 10.20.192.192:8181 30 0.003 400 089ecf4890d6aebe1caec59720a10cf9",
"resources_string": {
"host_name": "aks-ondemand2c-23288688-vmss000007",
"k8s_cluster_name": "",
"k8s_container_name": "controller",
"k8s_container_restart_count": "2",
"k8s_namespace_name": "zp-devops-ingress",
"k8s_node_name": "aks-ondemand2c-23288688-vmss000007",
"k8s_pod_ip":
"k8s_pod_name": "main-public-controller-74d9bc46-hhrcw",
"k8s_pod_start_time": "2023-04-25 03:13:09 +0000 UTC",
"k8s_pod_uid": "1f5b75c1-eddb-4fe6-a121-f2293f23fcdd",
"os_type": "linux",
"signoz_component": "otel-agent"
},
"attributes_string": {
"log_file_path": "/var/log/pods/zp-devops-ingress_main-public-controller-74d9bc46-hhrcw_a80102e1-ff36-496d-9308-7ac18f6edd1b/controller/2.log",
"log_iostream": "stdout",
"logtag": "F",
"time": "2023-05-22T12:20:22.447026621Z"
},
"attributes_int": {},
"attributes_float": {}
}
prashant
12:37 PM
prashant
12:40 PM
2023-05-18T14:41:05.902Z info exporterhelper/queued_retry.go:426 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "otlp", "error": "rpc error: code = DeadlineExceeded desc = context deadline exceeded", "interval": "26.412791317s"}
2023-05-18T14:41:05.906Z info exporterhelper/queued_retry.go:426 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "metrics", "name": "otlp", "error": "rpc error: code = DeadlineExceeded desc = context deadline exceeded", "interval": "23.285570238s"}
2023-05-18T14:41:06.808Z error exporterhelper/queued_retry.go:310 Dropping data because sending_queue is full. Try increasing queue_size. {"kind": "exporter", "data_type": "logs", "name": "otlp", "dropped_items": 395}
2023-05-18T14:41:06.813Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:07.024Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:07.225Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:07.429Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:07.630Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:07.830Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:07.913Z info fileconsumer/file.go:171 Started watching file {"kind": "receiver", "name": "filelog/k8s", "pipeline": "logs", "component": "fileconsumer", "path": "/var/log/pods/zp-devops-tools_zp-devops-cert-manager-webhook-7fd5b5c95b-zhdc7_e9eada80-182a-4d77-8a76-d90afb6fb822/cert-manager-webhook/14.log"}
2023-05-18T14:41:08.033Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:08.235Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:08.434Z error exporterhelper/queued_retry.go:175 Exporting failed. No more retries left. Dropping data. {"kind": "exporter", "data_type": "metrics", "name": "otlp", "error": "max elapsed time expired rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 100.64.246.181:4317: connect: connection refused\"", "dropped_items": 855}
2023-05-18T14:41:08.443Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:08.644Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:08.849Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:09.053Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:09.256Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:09.459Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:09.661Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:09.862Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:10.065Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:10.269Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:10.472Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:10.673Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:10.874Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:11.076Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:11.276Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:11.477Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:11.680Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-18T14:41:11.881Z warn [email protected]/batch_processor.go:177 Sender failed {"kind": "processor", "name": "batch", "pipeline": "logs", "error": "sending_queue is full"}
2023-05-22T04:38:24.713Z info fileconsumer/file.go:171 Started watching file {"kind": "receiver", "name": "filelog/k8s", "pipeline": "logs", "component": "fileconsumer", "path": "/var/log/pods/zp-devops-tools_zp-devops-signoz-k8s-infra-otel-deployment-54cc957dd7-b26fj_42123d85-2ef3-4677-9fdd-73fa45712f69/zp-devops-signoz-k8s-infra-otel-deployment/0.log"}
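Reading the output above: on 2023-05-18 the agent's OTLP exporter could not reach the SigNoz collector (DeadlineExceeded and connection refused on 100.64.246.181:4317), its retry queue filled, and log batches were dropped ("sending_queue is full"); by 2023-05-22 the agent was watching files again. If drops persist even when the collector is reachable, the exporter queue and retry window can be enlarged. A minimal sketch of the standard OpenTelemetry Collector exporter settings involved (the endpoint and numbers are placeholders, not values from this cluster):

exporters:
  otlp:
    endpoint: signoz-otel-collector:4317   # placeholder; use the address of your SigNoz otel collector service
    tls:
      insecure: true
    retry_on_failure:
      enabled: true
      max_elapsed_time: 300s               # how long to keep retrying before dropping a batch
    sending_queue:
      enabled: true
      queue_size: 5000                     # raise if "sending_queue is full" keeps appearing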
May 23, 2023 (4 months ago)
prashant
03:30 AM
Prashant
05:33 AM
To know more about this, the logs of signoz-otel-collector or clickhouse would be helpful.
But it seems to be resolved as of 2023-05-22T04:38, though only a few log files are being watched. It should work fine as long as you do not blacklist any pods or namespaces.
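For reference, the k8s-infra chart's log collection preset can exclude pods and namespaces; if an application namespace ends up in that list, its logs never reach SigNoz even though SigNoz component logs still do. A rough sketch of what such a blacklist looks like in values.yaml (key names can vary between chart versions, and the entries shown are examples, not this cluster's configuration):

presets:
  logsCollection:
    enabled: true
    blacklist:
      enabled: true
      signozLogs: false        # set to true to drop SigNoz's own component logs
      namespaces:
        - kube-system          # example entry only
      pods:
        - some-noisy-pod       # example entry only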