Troubleshooting and Adding Log Files to SigNoz POC
TLDR Noor requested help incorporating log files into their SigNoz POC. Working with vishal-signoz and nitya-signoz, they successfully set up log collection and resolved their issues.
Aug 18, 2023
Noor
02:59 AM
Noor
03:40 AM
nitya-signoz
04:19 AM
If you are running it in docker, then you will have to mount these files to otel-collector.
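For example, in SigNoz's deploy/docker/clickhouse-setup/docker-compose.yaml, a bind mount on the otel-collector service could look roughly like this (a minimal sketch; the host path is illustrative):

  otel-collector:
    volumes:
      # existing mount for the collector config
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
      # hypothetical extra mount: host log file -> path inside the container
      - "/path/on/host/access.log:/tmp/access.log"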
Noor
12:22 PM
Noor
12:26 PM
Noor
12:30 PM
Noor
01:03 PM
receivers:
  filelog:
    include: [ /Untiled/Users/nooramin-ali/Documents/Log file/access.log ]
    operators:
      - type: regex_parser
        regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)$'
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%d %H:%M:%S'
        severity:
          parse_from: attributes.sev
We need these logs to show up in SigNoz; we are doing the POC without installing the agent.
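For reference, the regex_parser above expects plain-text lines of the form "timestamp severity message"; a hypothetical line that would match:

  2023-08-18 02:59:01 ERROR upstream connection refused

Here time, sev, and msg are captured as attributes, and the timestamp and severity operators promote them onto the log record.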
Noor
01:07 PM
nitya-signoz
01:09 PM
Then you have to add the path of the mounted file in include: and save the receiver here: https://github.com/SigNoz/signoz/blob/6fb071cf377a9662d063b082ca20c73db65cbec3/deploy/docker/clickhouse-setup/otel-collector-config.yaml
Once done, restart your otel-collector.
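A minimal sketch of that edit, assuming the host file is mounted at /tmp/access.log inside the collector container:

receivers:
  filelog:
    include: [ /tmp/access.log ]  # container-side path of the mounted file
    start_at: beginning           # also read lines written before the collector started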
nitya-signoz
01:09 PM
Noor
01:14 PM
Noor
01:37 PM
Noor
04:34 PM
Noor
06:14 PM
Noor
07:44 PM
Noor
07:53 PM
Aug 19, 2023
Noor
12:32 PM
nitya-signoz
12:39 PM
Aug 21, 2023
Noor
11:29 AM
nitya-signoz
11:31 AM
Noor
11:40 AM
Noor
11:41 AM
Noor
12:19 PM
receivers:
  filelog:
    include: [ /Untiled/Users/nooramin-ali/Documents/Log file/access.log ]
  tcplog/docker:
    listen_address: "0.0.0.0:2255"
    operators:
      - type: regex_parser
        regex: '^<([0-9]+)>[0-9]+ (?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?([zZ]|([\+-])([01]\d|2[0-3]):?([0-5]\d)?)?) (?P<container_id>\S+) (?P<container_name>\S+) [0-9]+ - -( (?P<body>.*))?'
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
      - type: move
        from: attributes["body"]
        to: body
      - type: remove
        field: attributes.timestamp
      # please remove names from below if you want to collect logs from them
      - type: filter
        id: signoz_logs_filter
        expr: 'attributes.container_name matches "^signoz-(logspout|frontend|alertmanager|query-service|otel-collector|otel-collector-metrics|clickhouse|zookeeper)"'
  opencensus:
    endpoint: 0.0.0.0:55678
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: localhost:12345
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_http:
        endpoint: 0.0.0.0:14268
      # thrift_compact:
      #   endpoint: 0.0.0.0:6831
      # thrift_binary:
      #   endpoint: 0.0.0.0:6832
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu: {}
      load: {}
      memory: {}
      disk: {}
      filesystem: {}
      network: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        # otel-collector internal metrics
        - job_name: otel-collector
          static_configs:
            - targets:
                - localhost:8888
              labels:
                job_name: otel-collector
processors:
  logstransform/internal:
    operators:
      - type: trace_parser
        if: '"trace_id" in attributes or "span_id" in attributes'
        trace_id:
          parse_from: attributes.trace_id
        span_id:
          parse_from: attributes.span_id
        output: remove_trace_id
      - type: trace_parser
        if: '"traceId" in attributes or "spanId" in attributes'
        trace_id:
          parse_from: attributes.traceId
        span_id:
          parse_from: attributes.spanId
        output: remove_traceId
      - id: remove_traceId
        type: remove
        if: '"traceId" in attributes'
        field: attributes.traceId
        output: remove_spanId
      - id: remove_spanId
        type: remove
        if: '"spanId" in attributes'
        field: attributes.spanId
      - id: remove_trace_id
        type: remove
        if: '"trace_id" in attributes'
        field: attributes.trace_id
        output: remove_span_id
      - id: remove_span_id
        type: remove
        if: '"span_id" in attributes'
        field: attributes.span_id
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  signozspanmetrics/prometheus:
    metrics_exporter: prometheus
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s]
    dimensions_cache_size: 100000
    dimensions:
      - name: service.namespace
        default: default
      - name: deployment.environment
        default: default
      # This is added to ensure the uniqueness of the timeseries
      # Otherwise, identical timeseries produced by multiple replicas of
      # collectors result in incorrect APM metrics
      - name: 'signoz.collector.id'
  # memory_limiter:
  #   # 80% of maximum memory up to 2G
  #   limit_mib: 1500
  #   # 25% of limit up to 2G
  #   spike_limit_mib: 512
  #   check_interval: 5s
  #
  #   # 50% of the maximum memory
  #   limit_percentage: 50
  #   # 20% of max memory usage spike expected
  #   spike_limit_percentage: 20
  # queued_retry:
  #   num_workers: 4
  #   queue_size: 100
  #   retry_on_failure: true
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system] # include ec2 for AWS, gcp for GCP and azure for Azure.
    timeout: 2s
extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  zpages:
    endpoint: 0.0.0.0:55679
  pprof:
    endpoint: 0.0.0.0:1777
exporters:
  clickhousetraces:
    datasource:
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    low_cardinal_exception_grouping: ${LOW_CARDINAL_EXCEPTION_GROUPING}
  clickhousemetricswrite:
    endpoint:
    resource_to_telemetry_conversion:
      enabled: true
  clickhousemetricswrite/prometheus:
    endpoint:
  prometheus:
    endpoint: 0.0.0.0:8889
  # logging: {}
  clickhouselogsexporter:
    dsn:
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    timeout: 5s
    sending_queue:
      queue_size: 100
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions:
    - health_check
    - zpages
    - pprof
  pipelines:
    traces:
      receivers: [jaeger, otlp]
      processors: [signozspanmetrics/prometheus, batch]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhousemetricswrite]
    metrics/generic:
      receivers: [hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [clickhousemetricswrite]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [clickhousemetricswrite/prometheus]
    metrics/spanmetrics:
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
    logs:
      receivers: [otlp, tcplog/docker]
      processors: [logstransform/internal, batch]
      exporters: [clickhouselogsexporter]
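(Note: in the config above, the filelog receiver is defined but never listed under service -> pipelines -> logs, which appears to be why the file's contents never reach SigNoz. The revised config that follows wires it in, roughly:)

    logs:
      receivers: [otlp, filelog]
      processors: [logstransform/internal, batch]
      exporters: [clickhouselogsexporter]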
nitya-signoz
12:22 PM
receivers:
  filelog:
    include: [ /tmp/access.log, /tmp/error.log ]
    start_at: beginning
  tcplog/docker:
    listen_address: "0.0.0.0:2255"
    operators:
      - type: regex_parser
        regex: '^<([0-9]+)>[0-9]+ (?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?([zZ]|([\+-])([01]\d|2[0-3]):?([0-5]\d)?)?) (?P<container_id>\S+) (?P<container_name>\S+) [0-9]+ - -( (?P<body>.*))?'
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
      - type: move
        from: attributes["body"]
        to: body
      - type: remove
        field: attributes.timestamp
      # please remove names from below if you want to collect logs from them
      - type: filter
        id: signoz_logs_filter
        expr: 'attributes.container_name matches "^signoz-(logspout|frontend|alertmanager|query-service|otel-collector|otel-collector-metrics|clickhouse|zookeeper)"'
  opencensus:
    endpoint: 0.0.0.0:55678
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: localhost:12345
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_http:
        endpoint: 0.0.0.0:14268
      # thrift_compact:
      #   endpoint: 0.0.0.0:6831
      # thrift_binary:
      #   endpoint: 0.0.0.0:6832
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu: {}
      load: {}
      memory: {}
      disk: {}
      filesystem: {}
      network: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        # otel-collector internal metrics
        - job_name: otel-collector
          static_configs:
            - targets:
                - localhost:8888
              labels:
                job_name: otel-collector
processors:
  logstransform/internal:
    operators:
      - type: trace_parser
        if: '"trace_id" in attributes or "span_id" in attributes'
        trace_id:
          parse_from: attributes.trace_id
        span_id:
          parse_from: attributes.span_id
        output: remove_trace_id
      - type: trace_parser
        if: '"traceId" in attributes or "spanId" in attributes'
        trace_id:
          parse_from: attributes.traceId
        span_id:
          parse_from: attributes.spanId
        output: remove_traceId
      - id: remove_traceId
        type: remove
        if: '"traceId" in attributes'
        field: attributes.traceId
        output: remove_spanId
      - id: remove_spanId
        type: remove
        if: '"spanId" in attributes'
        field: attributes.spanId
      - id: remove_trace_id
        type: remove
        if: '"trace_id" in attributes'
        field: attributes.trace_id
        output: remove_span_id
      - id: remove_span_id
        type: remove
        if: '"span_id" in attributes'
        field: attributes.span_id
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  signozspanmetrics/prometheus:
    metrics_exporter: prometheus
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s]
    dimensions_cache_size: 100000
    dimensions:
      - name: service.namespace
        default: default
      - name: deployment.environment
        default: default
      # This is added to ensure the uniqueness of the timeseries
      # Otherwise, identical timeseries produced by multiple replicas of
      # collectors result in incorrect APM metrics
      - name: 'signoz.collector.id'
  # memory_limiter:
  #   # 80% of maximum memory up to 2G
  #   limit_mib: 1500
  #   # 25% of limit up to 2G
  #   spike_limit_mib: 512
  #   check_interval: 5s
  #
  #   # 50% of the maximum memory
  #   limit_percentage: 50
  #   # 20% of max memory usage spike expected
  #   spike_limit_percentage: 20
  # queued_retry:
  #   num_workers: 4
  #   queue_size: 100
  #   retry_on_failure: true
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system] # include ec2 for AWS, gcp for GCP and azure for Azure.
    timeout: 2s
extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  zpages:
    endpoint: 0.0.0.0:55679
  pprof:
    endpoint: 0.0.0.0:1777
exporters:
  clickhousetraces:
    datasource:
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    low_cardinal_exception_grouping: ${LOW_CARDINAL_EXCEPTION_GROUPING}
  clickhousemetricswrite:
    endpoint:
    resource_to_telemetry_conversion:
      enabled: true
  clickhousemetricswrite/prometheus:
    endpoint:
  prometheus:
    endpoint: 0.0.0.0:8889
  # logging: {}
  clickhouselogsexporter:
    dsn:
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    timeout: 5s
    sending_queue:
      queue_size: 100
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions:
    - health_check
    - zpages
    - pprof
  pipelines:
    traces:
      receivers: [jaeger, otlp]
      processors: [signozspanmetrics/prometheus, batch]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhousemetricswrite]
    metrics/generic:
      receivers: [hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [clickhousemetricswrite]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [clickhousemetricswrite/prometheus]
    metrics/spanmetrics:
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
    logs:
      receivers: [otlp, filelog]
      processors: [logstransform/internal, batch]
      exporters: [clickhouselogsexporter]
nitya-signoz
12:29 PM
Mount the file:
/Untiled/Users/nooramin-ali/Documents/Log file/access.log:/tmp/access.log
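In docker-compose terms this mapping would go under the otel-collector service's volumes, roughly (a sketch based on the paths above):

  otel-collector:
    volumes:
      - /Untiled/Users/nooramin-ali/Documents/Log file/access.log:/tmp/access.log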
Noor
12:46 PM
Noor
01:30 PM
Noor
03:04 PM
Noor
03:25 PM
Noor
04:25 PM
Noor
04:25 PM
Noor
04:59 PM
Noor
06:37 PM
Noor
06:49 PM
Noor
06:50 PM
Noor
07:21 PM
Noor
08:41 PM
Noor
08:43 PM
Noor
08:44 PM
Noor
09:22 PM
Noor
09:23 PM
Noor
09:39 PM
I have this now:
  include: [ /tmp/access.log, /tmp/error.log ]
  start_at: beginning
and this mount: /Untiled/Users/nooramin-ali/Documents/Log file/access.log:/tmp/access.log. What else do I need to do for the logs to show up on the front end? It is very important for this POC to be successful. Thanks.
Aug 22, 2023
nitya-signoz
04:16 AM
Please add a - at the beginning, as seen in the other mounts, and also reconfirm the path of the file once again. There might be some issues because there are spaces in your folder names.
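A sketch of the corrected volume entry: the leading - makes it a list item like the other mounts, and quoting protects the space in "Log file":

    volumes:
      # before (missing the list-item dash):
      # /Untiled/Users/nooramin-ali/Documents/Log file/access.log:/tmp/access.log
      # after:
      - "/Untiled/Users/nooramin-ali/Documents/Log file/access.log:/tmp/access.log"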
Noor
11:24 AM
Noor
11:28 AM
Noor
11:37 AM
Noor
12:44 PM
nitya-signoz
12:45 PM
Noor
12:45 PM
Noor
12:46 PM
Noor
12:46 PM
nitya-signoz
12:46 PM
nitya-signoz
12:47 PM
docker container restart signoz-otel-collector
Noor
12:48 PM
nitya-signoz
12:50 PM
Noor
12:54 PM
Noor
12:55 PM
Noor
12:55 PM
Noor
12:57 PM
nitya-signoz
01:02 PM
Noor
01:03 PM
Noor
01:06 PM
Noor
02:35 PM
nitya-signoz
02:40 PM
Noor
02:43 PM
nitya-signoz
02:48 PM
nitya-signoz
02:49 PM
Noor
02:49 PM
Noor
03:05 PM
nitya-signoz
03:06 PM
Noor
03:06 PM
Noor
03:09 PM
Noor
03:10 PM
Noor
03:11 PM
Noor
03:12 PM
nitya-signoz
03:15 PM
Noor
03:15 PM
Noor
03:22 PM
Noor
03:24 PM
Noor
03:28 PM
Noor
05:13 PM
Aug 23, 2023
Noor
11:23 AM
nitya-signoz
11:38 AM
testaccess.rtf
logs in signoz right?
Noor
11:39 AM
Noor
11:41 AM
nitya-signoz
11:43 AM
Noor
11:43 AM
Noor
11:46 AM
nitya-signoz
11:51 AM
nitya-signoz
11:52 AM
It varies from person to person, depending on how much domain knowledge they have.
Noor
11:55 AM
nitya-signoz
11:55 AM
Noor
12:34 PM
Noor
12:55 PM
nitya-signoz
01:01 PM
Noor
01:07 PM
Aug 24, 2023
Noor
12:05 PM