#support

Troubleshooting and Adding Log Files to SigNoz POC

TLDR: Noor requested help with incorporating log files into their SigNoz POC. In collaboration with vishal-signoz and nitya-signoz, they successfully completed the setup and resolved their issues.

Aug 18, 2023 (1 month ago)
Noor
02:59 AM
I did not get a reply back from your team member. Can you reply to my Slack by Friday, please? Thanks
03:40
Noor
03:40 AM
Do you know why I am unable to log in to SigNoz? I get the following error: something_went_wrong
nitya-signoz
04:19 AM
How are you running your otel collector?

If you are running it in Docker, then you will have to mount these files into the otel-collector container.
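For context, this is roughly what such a bind mount can look like in the SigNoz docker-compose file; the host path below is a placeholder and the surrounding service definition is abbreviated, so treat it as a sketch rather than the exact file:

services:
  otel-collector:
    # ...image, command, and other existing settings unchanged...
    volumes:
      # ...keep any volume entries that are already listed here...
      # Bind-mount the host log file so the collector container can read it;
      # the host path on the left is a placeholder, adjust it to your file.
      - /path/on/host/access.log:/tmp/access.log:ro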
Noor
12:22 PM
I had to uninstall and reinstall everything Nitya
12:26
Noor
12:26 PM
I am using localhost to log in; it will not let me log in any longer
12:30
Noor
12:30 PM
Nitya, where do I mount the files in Docker?
[screenshot]
Noor
01:03 PM
where do I save this
receivers:
  filelog:
    include: [ /Untiled/Users/nooramin-ali/Documents/Log file/access.log ]
    operators:
      - type: regex_parser
        regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)$'
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%d %H:%M:%S'
        severity:
          parse_from: attributes.sev

in order for the logs to show up in SigNoz? We are doing the POC without installing the agent.
01:07
Noor
01:07 PM
I am also unable to open SigNoz running on my Mac via its IP address; I get the following error
[screenshot]
nitya-signoz
01:09 PM
you will have to mount your file to otel-collector.

then you have to add the path to the mounted file in include: and save the receiver here https://github.com/SigNoz/signoz/blob/6fb071cf377a9662d063b082ca20c73db65cbec3/deploy/docker/clickhouse-setup/otel-collector-config.yaml

Once done, restart your otel collector.
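For reference, the receiver being described would look roughly like this once the file is mounted into the container; the /tmp path is only an example of a mount target inside the container:

receivers:
  filelog:
    # Path as seen inside the otel-collector container, i.e. the mount target
    include: [ /tmp/access.log ]
    # Read the file from the beginning so existing lines are also collected
    start_at: beginning

The filelog receiver then also has to be listed under service.pipelines.logs.receivers in the same file, as the full config shared later in this thread shows.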
01:09
nitya-signoz
01:09 PM
I am not sure about the IP; isn’t localhost working?
Noor
01:14 PM
I will add the file path first, then I will try to troubleshoot the localhost/IP address issue. I had to reboot my Mac yesterday evening, and after that localhost via the IP address stopped working; I am not able to log in to the SigNoz front end at all. Thanks Nitya
01:37
Noor
01:37 PM
Hello Nitya, please let me know if I did this correctly.
[screenshot]
04:34
Noor
04:34 PM
Also, can you please send the best way to restart the collector? I also restarted my Mac to fix the localhost issue where I can no longer log in to the SigNoz front end to view the logs. Any idea why?
06:14
Noor
06:14 PM
Now I am able to see localhost and the front end is fine, all is good; I can see the alerts, logs, and dashboards. When you say restart your otel collector, what is the command for it, or how do I do it, please? Thanks for your help, I am so close to my POC now. Thanks
07:44
Noor
07:44 PM
Is this the command to restart it: systemctl restart otelcol-contrib?
07:53
Noor
07:53 PM
Or is this the right command: sudo service otelcol restart? Where do I run this command, please?
Aug 19, 2023 (1 month ago)
Noor
12:32 PM
I hope someone on your team can really make my POC successful. Every time I make a change in SigNoz on my Mac, my front end crashes; now I got this. It is an old version. Can someone please delete my login ID (Noor-Ul-Amin.Ali) from your end? I want to recreate my login ID again and see if it works. If the POC does not work, they might not consider getting the product. Thanks
[screenshot]
nitya-signoz
12:39 PM
Please select a slot here, we can get on a call: https://calendly.com/nityananda-1/30min
Aug 21, 2023 (1 month ago)
Noor
11:29 AM
Let me know whether or not this time works for you today: 7 am this morning. Thanks
nitya-signoz
11:31 AM
Let's connect at the agreed time.
Noor
11:40 AM
ok I did not see phone # in meeting invite
11:41
Noor
11:41 AM
I just saw it thanks
12:19
Noor
12:19 PM
receivers:
  include: [ /Untiled/Users/nooramin-ali/Documents/Log file/access.log ]
  tcplog/docker:
    listen_address: "0.0.0.0:2255"
    operators:
      - type: regex_parser
        regex: '^<([0-9]+)>[0-9]+ (?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?([zZ]|([\+-])([01]\d|2[0-3]):?([0-5]\d)?)?) (?P<container_id>\S+) (?P<container_name>\S+) [0-9]+ - -( (?P<body>.*))?'
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
      - type: move
        from: attributes["body"]
        to: body
      - type: remove
        field: attributes.timestamp
        # please remove names from below if you want to collect logs from them
      - type: filter
        id: signoz_logs_filter
        expr: 'attributes.container_name matches "^signoz-(logspout|frontend|alertmanager|query-service|otel-collector|otel-collector-metrics|clickhouse|zookeeper)"'
  opencensus:
    endpoint: 0.0.0.0:55678
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: localhost:12345
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_http:
        endpoint: 0.0.0.0:14268
      # thrift_compact:
      #   endpoint: 0.0.0.0:6831
      # thrift_binary:
      #   endpoint: 0.0.0.0:6832
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu: {}
      load: {}
      memory: {}
      disk: {}
      filesystem: {}
      network: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        # otel-collector internal metrics
        - job_name: otel-collector
          static_configs:
          - targets:
              - localhost:8888
            labels:
              job_name: otel-collector

processors:
  logstransform/internal:
    operators:
      - type: trace_parser
        if: '"trace_id" in attributes or "span_id" in attributes'
        trace_id:
          parse_from: attributes.trace_id
        span_id:
          parse_from: attributes.span_id
        output: remove_trace_id
      - type: trace_parser
        if: '"traceId" in attributes or "spanId" in attributes'
        trace_id:
          parse_from: attributes.traceId
        span_id:
          parse_from: attributes.spanId
        output: remove_traceId
      - id: remove_traceId
        type: remove
        if: '"traceId" in attributes'
        field: attributes.traceId
        output: remove_spanId
      - id: remove_spanId
        type: remove
        if: '"spanId" in attributes'
        field: attributes.spanId
      - id: remove_trace_id
        type: remove
        if: '"trace_id" in attributes'
        field: attributes.trace_id
        output: remove_span_id
      - id: remove_span_id
        type: remove
        if: '"span_id" in attributes'
        field: attributes.span_id
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  signozspanmetrics/prometheus:
    metrics_exporter: prometheus
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s ]
    dimensions_cache_size: 100000
    dimensions:
      - name: service.namespace
        default: default
      - name: deployment.environment
        default: default
      # This is added to ensure the uniqueness of the timeseries
      # Otherwise, identical timeseries produced by multiple replicas of
      # collectors result in incorrect APM metrics
      - name: 'signoz.collector.id'
  # memory_limiter:
  #   # 80% of maximum memory up to 2G
  #   limit_mib: 1500
  #   # 25% of limit up to 2G
  #   spike_limit_mib: 512
  #   check_interval: 5s
  #
  #   # 50% of the maximum memory
  #   limit_percentage: 50
  #   # 20% of max memory usage spike expected
  #   spike_limit_percentage: 20
  # queued_retry:
  #   num_workers: 4
  #   queue_size: 100
  #   retry_on_failure: true
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system] # include ec2 for AWS, gcp for GCP and azure for Azure.
    timeout: 2s

extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  zpages:
    endpoint: 0.0.0.0:55679
  pprof:
    endpoint: 0.0.0.0:1777

exporters:
  clickhousetraces:
    datasource:
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    low_cardinal_exception_grouping: ${LOW_CARDINAL_EXCEPTION_GROUPING}
  clickhousemetricswrite:
    endpoint:
    resource_to_telemetry_conversion:
      enabled: true
  clickhousemetricswrite/prometheus:
    endpoint:
  prometheus:
    endpoint: 0.0.0.0:8889
  # logging: {}

  clickhouselogsexporter:
    dsn:
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    timeout: 5s
    sending_queue:
      queue_size: 100
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s

service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions:
    - health_check
    - zpages
    - pprof
  pipelines:
    traces:
      receivers: [jaeger, otlp]
      processors: [signozspanmetrics/prometheus, batch]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhousemetricswrite]
    metrics/generic:
      receivers: [hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [clickhousemetricswrite]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [clickhousemetricswrite/prometheus]
    metrics/spanmetrics:
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
    logs:
      receivers: [otlp, tcplog/docker]
      processors: [logstransform/internal, batch]
      exporters: [clickhouselogsexporter]
nitya-signoz
12:22 PM
receivers:
  filelog:
    include: [ /tmp/access.log, /tmp/error.log ]
    start_at: beginning

  tcplog/docker:
    listen_address: "0.0.0.0:2255"
    operators:
      - type: regex_parser
        regex: '^<([0-9]+)>[0-9]+ (?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?([zZ]|([\+-])([01]\d|2[0-3]):?([0-5]\d)?)?) (?P<container_id>\S+) (?P<container_name>\S+) [0-9]+ - -( (?P<body>.*))?'
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
      - type: move
        from: attributes["body"]
        to: body
      - type: remove
        field: attributes.timestamp
        # please remove names from below if you want to collect logs from them
      - type: filter
        id: signoz_logs_filter
        expr: 'attributes.container_name matches "^signoz-(logspout|frontend|alertmanager|query-service|otel-collector|otel-collector-metrics|clickhouse|zookeeper)"'
  opencensus:
    endpoint: 0.0.0.0:55678
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: localhost:12345
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_http:
        endpoint: 0.0.0.0:14268
      # thrift_compact:
      #   endpoint: 0.0.0.0:6831
      # thrift_binary:
      #   endpoint: 0.0.0.0:6832
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu: {}
      load: {}
      memory: {}
      disk: {}
      filesystem: {}
      network: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        # otel-collector internal metrics
        - job_name: otel-collector
          static_configs:
          - targets:
              - localhost:8888
            labels:
              job_name: otel-collector
processors:
  logstransform/internal:
    operators:
      - type: trace_parser
        if: '"trace_id" in attributes or "span_id" in attributes'
        trace_id:
          parse_from: attributes.trace_id
        span_id:
          parse_from: attributes.span_id
        output: remove_trace_id
      - type: trace_parser
        if: '"traceId" in attributes or "spanId" in attributes'
        trace_id:
          parse_from: attributes.traceId
        span_id:
          parse_from: attributes.spanId
        output: remove_traceId
      - id: remove_traceId
        type: remove
        if: '"traceId" in attributes'
        field: attributes.traceId
        output: remove_spanId
      - id: remove_spanId
        type: remove
        if: '"spanId" in attributes'
        field: attributes.spanId
      - id: remove_trace_id
        type: remove
        if: '"trace_id" in attributes'
        field: attributes.trace_id
        output: remove_span_id
      - id: remove_span_id
        type: remove
        if: '"span_id" in attributes'
        field: attributes.span_id
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  signozspanmetrics/prometheus:
    metrics_exporter: prometheus
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s ]
    dimensions_cache_size: 100000
    dimensions:
      - name: service.namespace
        default: default
      - name: deployment.environment
        default: default
      # This is added to ensure the uniqueness of the timeseries
      # Otherwise, identical timeseries produced by multiple replicas of
      # collectors result in incorrect APM metrics
      - name: 'signoz.collector.id'
  # memory_limiter:
  #   # 80% of maximum memory up to 2G
  #   limit_mib: 1500
  #   # 25% of limit up to 2G
  #   spike_limit_mib: 512
  #   check_interval: 5s
  #
  #   # 50% of the maximum memory
  #   limit_percentage: 50
  #   # 20% of max memory usage spike expected
  #   spike_limit_percentage: 20
  # queued_retry:
  #   num_workers: 4
  #   queue_size: 100
  #   retry_on_failure: true
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system] # include ec2 for AWS, gcp for GCP and azure for Azure.
    timeout: 2s
extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  zpages:
    endpoint: 0.0.0.0:55679
  pprof:
    endpoint: 0.0.0.0:1777
exporters:
  clickhousetraces:
    datasource: 
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    low_cardinal_exception_grouping: ${LOW_CARDINAL_EXCEPTION_GROUPING}
  clickhousemetricswrite:
    endpoint: 
    resource_to_telemetry_conversion:
      enabled: true
  clickhousemetricswrite/prometheus:
    endpoint: 
  prometheus:
    endpoint: 0.0.0.0:8889
  # logging: {}
  clickhouselogsexporter:
    dsn: 
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    timeout: 5s
    sending_queue:
      queue_size: 100
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions:
    - health_check
    - zpages
    - pprof
  pipelines:
    traces:
      receivers: [jaeger, otlp]
      processors: [signozspanmetrics/prometheus, batch]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhousemetricswrite]
    metrics/generic:
      receivers: [hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [clickhousemetricswrite]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [clickhousemetricswrite/prometheus]
    metrics/spanmetrics:
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
    logs:
      receivers: [otlp, filelog]
      processors: [logstransform/internal, batch]
      exporters: [clickhouselogsexporter]

12:29
nitya-signoz
12:29 PM
/Untiled/Users/nooramin-ali/Documents/Log file/access.log:/tmp/access.log

mount the file
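In docker-compose terms, that mapping goes in as one extra entry under the volumes: list of the otel-collector service, roughly like this (a sketch only; the rest of the service definition is abbreviated):

services:
  otel-collector:
    # ...existing settings for the service unchanged...
    volumes:
      # ...keep the volume entries that are already listed here...
      # New entry: host path on the left, path inside the container on the right.
      # If the host path contains spaces, quote the whole entry.
      - /Untiled/Users/nooramin-ali/Documents/Log file/access.log:/tmp/access.log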
Noor
12:46 PM
I am getting ready to mount the files now. Please give me the line # and I will add it.
[screenshot]
01:30
Noor
01:30 PM
Is this correct
[screenshot]
03:04
Noor
03:04 PM
I see this now. How long will it take for the logs to show up in SigNoz?
[screenshot]
03:25
Noor
03:25 PM
Let me know if I need to move this to a different location.
[screenshot]
04:25
Noor
04:25 PM
Let me know what else I need to do now
[screenshot]
04:25
Noor
04:25 PM
I have not seen any alerts show up yet
04:59
Noor
04:59 PM
So now I need to do this: sudo systemctl restart otelcol-contrib.service. What is the best command to restart otelcol, and where should it be run from? Thanks
06:37
Noor
06:37 PM
I got this error: * error decoding 'volumes[1]': invalid spec: /var/lib/docker/containers:/var/lib/docker/containers:ro /Untiled/Users/nooramin-ali/Documents/Log file/access.log:/tmp/access.log: too many colons
06:49
Noor
06:49 PM
I need to tweak this a little; if you can guide me on why I am getting this error now, it will be helpful. I am very close now to getting this POC moving. Thanks
06:50
Noor
06:50 PM
I am getting this error when I try to run docker-compose restart
07:21
Noor
07:21 PM
I need to request one more meeting, please, to make this work right for my POC. I am getting close, but I am new to the SigNoz platform and I don't have much time left; I want to get this POC done by Thursday. Thanks
08:41
Noor
08:41 PM
Well, I had to redo it again to get the logs showing up. Now I want to get some input on why it keeps doing this.
[screenshot]
08:43
Noor
08:43 PM
I still don't know why it is not working correctly for me. I need to make sure these location edits are correct, please.
[screenshots]
08:44
Noor
08:44 PM
This is what I really, really want it to pull logs from and show to me. Thanks
09:22
Noor
09:22 PM
Well, I had to delete and reinstall it; it looks good now
09:23
Noor
09:23 PM
So far so good. I don't know why it has given me so many issues in the last few days on my Mac.
[screenshot]
09:39
Noor
09:39 PM
Please connect with me so I can get this going. All I want to make sure of is: after adding this part
 include: [ /tmp/access.log, /tmp/error.log ]
 start_at: beginning
and now this /Untiled/Users/nooramin-ali/Documents/Log file/access.log:/tmp/access.log, what else do I need to do in order for the logs to show up on the front end, please? It is very important for this POC to become successful. Thanks
Aug 22, 2023 (1 month ago)
nitya-signoz
04:16 AM
The mounting path doesn’t seem to be correct.
Please add a - at the beginning, as seen in the other mounts, and also reconfirm the path of the file once again. There might need to be some changes, as there are spaces in between your folder names.
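Concretely: each mount has to be its own list item with a space after the leading -, and an entry whose host path contains a space is safest quoted (or the folder renamed). The earlier "too many colons" error is what docker-compose reports when two mappings end up folded into a single volumes entry. A sketch of the corrected shape, using the paths from this thread:

    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      # One mapping per list item, with a space after the leading "-";
      # the entry is quoted because "Log file" contains a space.
      - "/Untiled/Users/nooramin-ali/Documents/Log file/access.log:/tmp/access.log"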
Noor
11:24 AM
ok will do
11:28
Noor
11:28 AM
-/Untiled/Users/nooramin-ali/Documents/Logfile/access.log:/tmp/access.log this look ok now
11:37
Noor
11:37 AM
I still don't see logs showing up on the front end side
12:44
Noor
12:44 PM
Let me know what else I need to do to make sure these logs show up
nitya-signoz
12:45 PM
It looks okay, did you restart the signoz-otel-collector container?
Noor
12:45 PM
this is what it looks like now
[screenshot]
12:46
Noor
12:46 PM
I did run docker ps twice
12:46
Noor
12:46 PM
what other commands can I run
nitya-signoz
12:46 PM
please format it properly, there should be a space after -
12:47
nitya-signoz
12:47 PM
docker container restart signoz-otel-collector
Noor
12:48 PM
here you go
[screenshot]
nitya-signoz
12:50 PM
Here you mentioned that you had renamed the folder and removed the space, but the screenshot above still has a space: https://signoz-community.slack.com/archives/C01HWQ1R0BC/p1692703731583299?thread_ts=1692222584.184529&cid=C01HWQ1R0BC
Noor
12:54 PM
You mean the space between Log and file?
[screenshot]
12:55
Noor
12:55 PM
I have removed the space now
12:55
Noor
12:55 PM
run the command also now
12:57
Noor
12:57 PM
Where do I run this command (in the signoz, deploy, or docker folder): docker container restart signoz-otel-collector?
nitya-signoz
01:02 PM
anywhere
Noor
01:03 PM
I did
[screenshot]
01:06
Noor
01:06 PM
How long should it take to pull the log file from that path location?
02:35
Noor
02:35 PM
Also, what is the best way to find out if logs are showing up after the collector gets restarted?
nitya-signoz
02:40 PM
Share the logs of signoz-otel-collector
Noor
02:43 PM
What I mean, Nitya, is how do I find out here, after I have restarted the collector a few times?
[screenshot]
nitya-signoz
02:48 PM
Noor please share a private github repository with me, I will send you the setup which you can run directly
02:49
nitya-signoz
02:49 PM
Also add the log files there
Noor
02:49 PM
ok
03:05
Noor
03:05 PM
I just added you here; let me know if you have access to it.
[screenshot]
nitya-signoz
03:06 PM
Got it, please upload the log file there.
Noor
03:06 PM
ok
03:09
Noor
03:09 PM
I am getting this error Yowza, that’s a big file. Try again with a file smaller than 25MB.
03:10
Noor
03:10 PM
my log file size is Log File - 741.9 MB
03:11
Noor
03:11 PM
other size is Log File - 217.6 MB
03:12
Noor
03:12 PM
so will it not work if the size of the file is too large?
nitya-signoz
03:15 PM
Can you get a trimmed down version of your log file ?
Noor
03:15 PM
ok I will try
03:22
Noor
03:22 PM
Done. For POC demo purposes this will do.
[screenshot]
03:24
Noor
03:24 PM
Thank you very much for your time and effort on this; I am learning SigNoz in a very short time
03:28
Noor
03:28 PM
Please let me know if I need to do anything on my end
05:13
Noor
05:13 PM
Hello Nitya, my POC is on Thursday. Do you think I can have it done by that time? Thanks
Aug 23, 2023 (1 month ago)
Noor
11:23 AM
Hello Nitya, I hope and pray all is well with you and your team. Please let me know what else I can do on my side for this POC. Thanks
nitya-signoz
11:38 AM
Okay, you want the testaccess.rtf logs in SigNoz, right?
Noor
11:39 AM
Yes for this POC I just want to show how Signoz will work for log correlation
11:41
Noor
11:41 AM
I have already created the rest of the docs for SigNoz; now all I want to do is a live demo as the POC with my team and our Sr. Architect
nitya-signoz
11:43 AM
I will send you the setup on github
Noor
11:43 AM
ok
11:46
Noor
11:46 AM
Do you know how long it will take someone to learn SigNoz, and what the common commands are to work with alerts, dashboards, traces, and logs? Does your company have docs like that or not? Just wondering, that is all. Thanks
nitya-signoz
11:51 AM
Check your repo, and follow the commands in readme.
11:52
nitya-signoz
11:52 AM
Noor
11:55 AM
Do you mean here?
[screenshot]
nitya-signoz
11:55 AM
Yes please refresh
Noor
12:23 PM
I did refresh. My localhost is this: http://192.168.1.229:3301/logs?q=&order=desc
12:29
Noor
12:29 PM
or do you mean this one: http://localhost:3301/
12:34
Noor
12:34 PM
just got this error
[screenshot]
12:55
Noor
12:55 PM
Do you know why we get the error?
nitya-signoz
01:01 PM
Please follow the instructions in the error message, you will have to make changes to your docker
Noor
01:07 PM
ok I will do that thanks
Aug 24, 2023 (1 month ago)
Noor
12:05 PM
Hello Nitya, it never worked for me. I am having a POC/demo on SigNoz with my logs showing up on the front end side. My POC/demo will be on Friday. Meanwhile, please find out an answer as to why I was not able to move the logs from my . Right now I am not going to touch anything till my demo and POC are done. I will share a few screenshots with you so you can share them with your team members. Thanks