Issue with Filters on Logs and Duplicate Fields
TLDR: Patrick reported that log filters were not working and that fields appeared duplicated in SigNoz. Srikanth investigated the problem, and nitya-signoz offered a solution involving an update and a guide.
Mar 09, 2023 (9 months ago)
Patrick
10:12 PM

Mar 10, 2023 (9 months ago)
Srikanth
06:06 AM
Patrick
06:09 PM
Patrick
06:10 PM
Patrick
06:10 PM
Patrick
06:31 PM
Patrick
06:31 PM
SELECT k8s_container_name
FROM logs
WHERE k8s_container_name != ''
LIMIT 10
Query id: 16ad9816-9627-4161-9da9-69ace8e23398
Ok.
0 rows in set. Elapsed: 0.462 sec. Processed 171.96 million rows, 1.55 GB (372.60 million rows/s., 3.35 GB/s.)
that's probably the cause...
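One way to narrow this down is to read the raw key/value arrays directly instead of the materialized column. A sketch, using the array column names from the schema below:

SELECT
    -- non-empty if the key was ingested as a log attribute
    attributes_string_value[indexOf(attributes_string_key, 'k8s_container_name')] AS from_attributes,
    -- non-empty if the key was ingested as a resource attribute
    resources_string_value[indexOf(resources_string_key, 'k8s_container_name')] AS from_resources
FROM signoz_logs.logs
LIMIT 10;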
Patrick
06:32 PM
CREATE TABLE signoz_logs.logs
(
`timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`observed_timestamp` UInt64 CODEC(DoubleDelta, LZ4),
`id` String CODEC(ZSTD(1)),
`trace_id` String CODEC(ZSTD(1)),
`span_id` String CODEC(ZSTD(1)),
`trace_flags` UInt32,
`severity_text` LowCardinality(String) CODEC(ZSTD(1)),
`severity_number` UInt8,
`body` String CODEC(ZSTD(2)),
`resources_string_key` Array(String) CODEC(ZSTD(1)),
`resources_string_value` Array(String) CODEC(ZSTD(1)),
`attributes_string_key` Array(String) CODEC(ZSTD(1)),
`attributes_string_value` Array(String) CODEC(ZSTD(1)),
`attributes_int64_key` Array(String) CODEC(ZSTD(1)),
`attributes_int64_value` Array(Int64) CODEC(ZSTD(1)),
`attributes_float64_key` Array(String) CODEC(ZSTD(1)),
`attributes_float64_value` Array(Float64) CODEC(ZSTD(1)),
`k8s_pod_name` String MATERIALIZED attributes_string_value[indexOf(attributes_string_key, 'k8s_pod_name')] CODEC(LZ4),
`k8s_container_name` String MATERIALIZED attributes_string_value[indexOf(attributes_string_key, 'k8s_container_name')] CODEC(LZ4),
`service_name` String MATERIALIZED resources_string_value[indexOf(resources_string_key, 'service_name')] CODEC(LZ4),
`event_domain` String MATERIALIZED attributes_string_value[indexOf(attributes_string_key, 'event_domain')] CODEC(LZ4),
`event_name` String MATERIALIZED attributes_string_value[indexOf(attributes_string_key, 'event_name')] CODEC(LZ4),
`k8s_namespace_name` String MATERIALIZED attributes_string_value[indexOf(attributes_string_key, 'k8s_namespace_name')] CODEC(LZ4),
`k8s_cluster_name` String MATERIALIZED resources_string_value[indexOf(resources_string_key, 'k8s_cluster_name')] CODEC(LZ4),
`k8s_node_name` String MATERIALIZED resources_string_value[indexOf(resources_string_key, 'k8s_node_name')] CODEC(LZ4),
`os_type` String MATERIALIZED resources_string_value[indexOf(resources_string_key, 'os_type')] CODEC(LZ4),
`k8s_deployment_name` String MATERIALIZED resources_string_value[indexOf(resources_string_key, 'k8s_deployment_name')] CODEC(LZ4),
`enduser_id` String MATERIALIZED attributes_string_value[indexOf(attributes_string_key, 'enduser_id')] CODEC(LZ4),
INDEX body_idx body TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4,
INDEX id_minmax id TYPE minmax GRANULARITY 1,
INDEX k8s_container_name_idx k8s_container_name TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX service_name_idx service_name TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX event_name_idx event_name TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX event_domain_idx event_domain TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX trace_id_idx trace_id TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX k8s_pod_name_idx k8s_pod_name TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX k8s_deployment_name_idx k8s_deployment_name TYPE bloom_filter(0.01) GRANULARITY 64,
INDEX enduser_id_idx enduser_id TYPE bloom_filter(0.01) GRANULARITY 64
)
ENGINE = MergeTree
PARTITION BY toDate(timestamp / 1000000000)
ORDER BY (timestamp, id)
TTL toDateTime(timestamp / 1000000000) + toIntervalSecond(1209600)
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1
1 row in set. Elapsed: 0.009 sec.
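The MATERIALIZED definitions above also show why the earlier query returned zero rows rather than erroring: indexOf() returns 0 when the key is missing from the array, and an out-of-range subscript on an Array(String) yields the type's default value, the empty string. A sketch to count where the key actually lives, against the same table:

SELECT
    countIf(has(attributes_string_key, 'k8s_container_name')) AS in_attributes,
    countIf(has(resources_string_key, 'k8s_container_name')) AS in_resources
FROM signoz_logs.logs;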
Patrick
06:33 PM
resources_string.k8s_pod_name in signoz, but the index has it as attributes_string.k8s_pod_name
Srikanth
06:42 PM
k8s_* should be part of resource attributes only. Looks like there is some issue. Did you make any changes to charts and otel collector config?
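For context, Kubernetes metadata such as k8s_pod_name is normally attached to logs as resource attributes by the collector's k8sattributes processor, which would explain why the schema's materialized columns reading from attributes_string_* come back empty. A minimal sketch of such a processor block (field names taken from the upstream OpenTelemetry collector, not from this thread):

processors:
  k8sattributes:
    extract:
      metadata:
        - k8s.pod.name
        - k8s.namespace.name
        - k8s.node.name
        - k8s.deployment.name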
Patrick
06:50 PM
Patrick
06:54 PM
clickhouse:
  layout:
    shardsCount: 3
  resources:
    limits:
      cpu: 4
      memory: 6Gi
k8s-infra:
  otelDeployment:
    resources:
      limits:
        cpu: 1
        memory: 2Gi
  presets:
    hostMetrics:
      enabled: true
    kubeletMetrics:
      enabled: true
    logsCollection:
      blacklist:
        namespaces:
          - kube-system
          - openshift*
          - grafana-stack
          - platform
          - istio-system
otelCollector:
  replicaCount: 3
  resources:
    limits:
      cpu: 4
      memory: 8Gi
Patrick
06:58 PM
Srikanth
07:03 PM
Patrick
07:03 PM
Patrick
07:32 PM
Patrick
07:32 PM
Patrick
07:32 PM
Patrick
07:33 PM

Mar 14, 2023 (9 months ago)
nitya-signoz
05:17 AM