TLDR Riccardo reported empty dashboard panels after installing SigNoz v0.19.0. Prashant and Srikanth offered troubleshooting suggestions, including upgrading the otel-collector binary to 0.76.1, but the issue remains unresolved.
can you share a screenshot of the empty dashboard?
Can you open a widget and try plotting any of the metrics below?
Also, do check the logs of the otel-collectors for any errors.
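For instance, on the source VM something like this should surface startup or export errors (a sketch; the unit name assumes the otelcol-contrib .deb defaults):
```
# Check the source-VM collector binary (unit name assumed from the .deb install)
systemctl status otelcol-contrib
journalctl -u otelcol-contrib --since "1 hour ago" | grep -iE "error|refused"
```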
Hi Prashant, thank you for your help.
Here is the screenshot of the empty dashboard:
And the screenshot of the Traces section.
I also attached the log of the otel-collector on the source VM; it shows a "connection refused" that seems to be returned by the application itself, because there is no firewall between the VMs, and if I telnet from the source to the destination I get the following:
```Trying 213.152.204.19...
Connected to```
Riccardo, which version of SigNoz are you using? Also, did you select the hostname from the top selection bar?
Signoz v0.19.0
I can't see the top selection bar
From logs, it seems like SigNoz-Otel-Collector is either not accessible or not healthy.
logs of signoz-otel-collector would be helpful.
also, look for errors in the clickhouse container.
I have already attached the log from the otel-collector on the source VM
do you need some other log?
The one you shared would be the logs from the otel-collector binary.
yes
But the SigNoz cluster consists of two collector components: signoz-otel-collector and signoz-otel-collector-metrics.
signoz-otel-collector has the OTLP receiver and is responsible for writing data to the ClickHouse DB.
logs of the signoz-otel-collector container would be helpful
could you please tell me where to find those logs?
how did you install signoz? where is it deployed?
I've installed it as Docker Standalone, following the tutorial on
with ```docker ps``` all containers seem to be working:
```
CONTAINER ID   IMAGE                                          COMMAND                  CREATED        STATUS                  PORTS                                                                        NAMES
095aa079cd0c   signoz/frontend:0.19.0                         "nginx -g 'daemon of…"   23 hours ago   Up 21 hours             80/tcp, 0.0.0.0:3301->3301/tcp, :::3301->3301/tcp                            frontend
e0d39167b1b5   signoz/alertmanager:0.23.1                     "/bin/alertmanager -…"   23 hours ago   Up 21 hours             9093/tcp                                                                     clickhouse-setup_alertmanager_1
f07b23de2200   signoz/signoz-otel-collector:0.76.1            "/signoz-collector -…"   23 hours ago   Up 21 hours             0.0.0.0:4317-4318->4317-4318/tcp, :::4317-4318->4317-4318/tcp                clickhouse-setup_otel-collector_1
e254a8502327   signoz/signoz-otel-collector:0.76.1            "/signoz-collector -…"   23 hours ago   Up 21 hours             4317-4318/tcp                                                                clickhouse-setup_otel-collector-metrics_1
80ed2daaa665   signoz/query-service:0.19.0                    "./query-service -co…"   23 hours ago   Up 21 hours (healthy)   8080/tcp                                                                     query-service
9ac491ef7bee   clickhouse/clickhouse-server:22.8.8-alpine     "/entrypoint.sh"         23 hours ago   Up 21 hours (healthy)   0.0.0.0:8123->8123/tcp, :::8123->8123/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 0.0.0.0:9181->9181/tcp, :::9181->9181/tcp, 9009/tcp   clickhouse
ac8aed417d9f   bitnami/zookeeper:3.7.1                        "/opt/bitnami/script…"   23 hours ago   Up 21 hours             0.0.0.0:2181->2181/tcp, :::2181->2181/tcp, 0.0.0.0:2888->2888/tcp, :::2888->2888/tcp, 0.0.0.0:3888->3888/tcp, :::3888->3888/tcp, 8080/tcp   zookeeper-1
79a1737c0e82   jaegertracing/example-hotrod:1.30              "/go/bin/hotrod-linu…"   23 hours ago   Up 21 hours             8080-8083/tcp                                                                hotrod
07ab59b1e368   grubykarol/locust:1.2.3-python3.9-alpine3.12   "/docker-entrypoint.…"   23 hours ago   Up 21 hours             5557-5558/tcp, 8089/tcp
```
Logs of signoz otel-collector: ```docker logs clickhouse-setup_otel-collector_1```
Logs of `clickhouse`: ```docker logs --tail 200 clickhouse```
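If the output is long, a quick filter like this (assuming the container names above) can surface only the problems:
```
# Show only warnings/errors from the SigNoz collector container
docker logs --tail 500 clickhouse-setup_otel-collector_1 2>&1 | grep -iE "error|warn"
```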
> 2023-06-06T12:52:26.041Z	error	clickhousetracesexporter/writer.go:416	Could not append span to batch:
> "name": "TestingSpan", "startTimeUnixNano": 11651379494838206464, "serviceName": "<nil-service-name>",
> ...
> "error": "clickhouse: dateTime overflow. timestamp must be between 1925-01-01 00:00:00 and 2283-11-11 00:00:00"}

I see dummy trace data with `2339` year as start time. Can you make sure you have the correct time set on the instance or the environment? cc <@4K143e> Srikanth
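A quick way to verify the clock on both VMs (a sketch; assumes systemd, so `timedatectl` is available):
```
# Compare wall-clock time and NTP sync state on the source and dest VMs
date -u          # should match the actual UTC time
timedatectl      # look for "System clock synchronized: yes"
```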
the date should be correct:
```
root@signoz:~# date
Wed 07 Jun 2023 10:35:53 AM CEST
```
Srikanth could you please look into this?
On a side note, can you upgrade the binary from `0.66.0` to `0.76.x`?
`0.76.1` to be exact
Yes, I could, but I have to tell you that I tried the 0.78.0 binary and it was not working (a lot of errors on startup), so I had to fall back to the version named in the tutorial, because I'm not an expert in SigNoz.
could you please paste the exact link to download the right binary version?
The source VM is a Debian 10 x64
You can find all the release assets here:
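For example, something along these lines on the source VM (a sketch; the URL assumes the usual asset naming in the opentelemetry-collector-releases repo):
```
# Download and install the 0.76.1 contrib collector (asset name assumed)
wget https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.76.1/otelcol-contrib_0.76.1_linux_amd64.deb
sudo dpkg -i otelcol-contrib_0.76.1_linux_amd64.deb
sudo systemctl restart otelcol-contrib
```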
I have installed the 0.76.1 binary, but now the binary log shows the following:
```Jun 07 12:12:00```
What’s the SDK you are using?
Currently no SDK, just the otel-collector binary installed on a Debian 10. The next step will be to evaluate instrumenting WordPress (PHP).
Can you share what is the main problem here?
The main problem: the dashboard is empty:
Share the SigNoz version and the dashboard JSON you are using.
signoz v0.19.0
Riccardo
Tue, 06 Jun 2023 13:01:59 UTC
Hi all, I'm new to the SigNoz software. I just installed a Docker Standalone instance on a Debian 11 VM (dest). The purpose is to collect data from another Debian 10 VM (source) on the same subnet. On the source VM I've installed the otelcol-contrib_0.66.0_linux_amd64.deb package, then used the latest docker/standalone/config.yaml, where the otlp endpoint has been set to the dest VM. The setup seems to be working correctly: if I run on the source the command
```telemetrygen traces --traces 1 --otlp-endpoint localhost:4317 --otlp-insecure```
then I can see in the Traces section the telemetrygen service data (lets-go and okey-dokey). So I've generated HostMetrics dashboards for the source VM and imported the resulting JSON into the Dashboards section on the dest VM. The HostMetrics Dashboard - <source-VM-Name> appeared, but all the panels are empty. I've also tried troubleshooting using:
```troubleshoot checkEndpoint --endpoint=dest-IP:4317```
and the answer was:
```INFO workspace/main.go:28 STARTING!
INFO checkEndpoint/checkEndpoint.go:41 checking reachability of SigNoz endpoint
INFO workspace/main.go:46 Successfully sent sample data to signoz
...```
There is no firewall between the two VMs and tcpdump shows traffic packets flowing from source to dest and back. Where am I wrong? Any help is very appreciated, thanks.
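A rough checklist on the source VM for empty HostMetrics panels (a sketch; paths and the unit name assume the otelcol-contrib .deb defaults, and the HostMetrics dashboard expects the hostmetrics receiver to be wired into a metrics pipeline):
```
# Source-VM checks for missing host metrics (paths/unit name assumed from the .deb install)
systemctl status otelcol-contrib                         # collector service running?
grep -n "hostmetrics" /etc/otelcol-contrib/config.yaml   # receiver defined and used in a metrics pipeline?
journalctl -u otelcol-contrib -n 200 | grep -iE "error|refused"   # export failures towards the dest VM?
```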