Monitoring Apache Server with otel-collector-config.yaml

TL;DR: Josu could not see Apache server metrics using otel-collector-config.yaml. Srikanth suggested adding the apache receiver to the metrics pipeline, which resolved the issue.

Josu
Sun, 26 Feb 2023 14:11:03 UTC

Hello, I want to monitor Apache. For this, I added these lines to otel-collector-config.yaml:

```yaml
receivers:
  apache:
    endpoint: ""
```

Where can I see the metrics of my Apache server? I can't see the metrics in Alerts.

Srikanth
Mon, 27 Feb 2023 00:42:21 UTC

Did you configure the server to enable the status report?
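(For reference: enabling the status report means loading `mod_status` and exposing a `/server-status` location in `httpd.conf`. A minimal sketch; the module path and access rule are assumptions for a stock build and vary by distribution:)

```apacheconf
# Load the status module (path may differ per distribution)
LoadModule status_module modules/mod_status.so

# Expose the status page that the collector's apache receiver scrapes
<Location "/server-status">
    SetHandler server-status
    # Restrict access; the collector must still be able to reach this URL
    Require local
</Location>
```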

Josu
Mon, 27 Feb 2023 11:33:53 UTC

Yes, I opened it in the browser and I can see the status page.

Josu
Mon, 27 Feb 2023 11:33:56 UTC

But I don't know where I can see the Apache metrics.

Srikanth
Mon, 27 Feb 2023 12:04:34 UTC

If the collector can scrape successfully, the metrics will show up when you type the metric name in dashboards or alerts. You may want to look at the logs to see if there are any issues while getting metrics from Apache.

Josu
Mon, 27 Feb 2023 12:40:31 UTC

So I don't have to do anything in Apache, right? Where can the logs be seen?

Srikanth
Mon, 27 Feb 2023 12:52:39 UTC

> so in apache I don't have to do anything, right?

You need to configure `httpd.conf` to let the collector scrape the metrics.

> Where can the logs be seen?

In the pod/container for the collector where you configured the apache receiver. Remember, just adding the receiver doesn't create metrics on its own. You need to add it to the pipeline.
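(As a sketch, the apache receiver is pointed at the machine-readable status URL. The host and port below are placeholders, and the `?auto` suffix asks Apache for the parseable format; `collection_interval` is optional:)

```yaml
receivers:
  apache:
    # Placeholder URL: replace host/port with wherever httpd serves /server-status
    endpoint: "http://localhost:8080/server-status?auto"
    collection_interval: 30s
```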

Josu
Mon, 27 Feb 2023 14:40:23 UTC

Apache is configured correctly. When I open the status page I see this:

```
localhost
ServerVersion: Apache/2.4.53 (Unix)
ServerMPM: event
Server Built: Mar 29 2022 10:03:39
CurrentTime: Monday, 27-Feb-2023 14:38:14 UTC
RestartTime: Monday, 27-Feb-2023 14:37:34 UTC
ParentServerConfigGeneration: 1
ParentServerMPMGeneration: 0
ServerUptimeSeconds: 40
ServerUptime: 40 seconds
Load1: 4.33
Load5: 3.14
Load15: 1.67
Total Accesses: 1
Total kBytes: 0
Total Duration: 68
CPUUser: .03
CPUSystem: .02
CPUChildrenUser: 0
CPUChildrenSystem: 0
CPULoad: .125
Uptime: 40
ReqPerSec: .025
BytesPerSec: 0
BytesPerReq: 0
DurationPerReq: 68
BusyWorkers: 1
IdleWorkers: 74
Processes: 3
Stopping: 0
BusyWorkers: 1
IdleWorkers: 74
ConnsTotal: 0
ConnsAsyncWriting: 0
ConnsAsyncKeepAlive: 0
ConnsAsyncClosing: 0
Scoreboard:
```

Josu
Mon, 27 Feb 2023 14:42:43 UTC

And, on the other hand, in otel-collector-config.yaml I have configured this:

```yaml
receivers:
  apache:
    endpoint: ""
```

Josu
Mon, 27 Feb 2023 14:43:58 UTC

But, as was said, I can't see the metrics from the Dashboard or Alerts.

Josu
Mon, 27 Feb 2023 14:45:34 UTC

The Apache log shows no problems. I see the calls to the status page:

```
172.17.0.1 - - [26/Feb/2023:13:04:29 +0000] "GET /server-status?auto%22 HTTP/1.1" 200 1168
172.17.0.1 - - [26/Feb/2023:13:04:29 +0000] "GET /server-status?auto%22 HTTP/1.1" 200 1169
172.17.0.1 - - [26/Feb/2023:13:04:29 +0000] "GET /server-status?auto%22 HTTP/1.1" 200 1169
172.17.0.1 - - [26/Feb/2023:13:04:30 +0000] "GET /server-status?auto%22 HTTP/1.1" 200 1169
172.17.0.1 - - [26/Feb/2023:13:04:32 +0000] "GET /server-status?auto HTTP/1.1" 200 1169
172.17.0.1 - - [26/Feb/2023:13:04:34 +0000] "GET /server-status?auto HTTP/1.1" 200 1168
172.17.0.1 - - [26/Feb/2023:13:04:35 +0000] "GET /server-status?auto HTTP/1.1" 200 1169
172.17.0.1 - - [26/Feb/2023:13:04:35 +0000] "GET /server-status?auto HTTP/1.1" 200 1169
172.17.0.1 - - [26/Feb/2023:13:04:35 +0000] "GET /server-status?auto HTTP/1.1" 200 1168
```

Srikanth
Mon, 27 Feb 2023 16:38:22 UTC

Share your full collector configuration.

Josu
Mon, 27 Feb 2023 16:49:00 UTC

```yaml
receivers:
  filelog/dockercontainers:
    include: [ "/var/lib/docker/containers/*/*.log" ]
    start_at: end
    include_file_path: true
    include_file_name: false
    operators:
      - type: json_parser
        id: parser-docker
        output: extract_metadata_from_filepath
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
      - type: regex_parser
        id: extract_metadata_from_filepath
        regex: '^.*containers/(?P<container_id>[^_]+)/.*log$'
        parse_from: attributes["log.file.path"]
        output: parse_body
      - type: move
        id: parse_body
        from: attributes.log
        to: body
        output: time
      - type: remove
        id: time
        field: attributes.time
  opencensus:
    endpoint: 0.0.0.0:55678
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: localhost:12345
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_http:
        endpoint: 0.0.0.0:14268
      # thrift_compact:
      #   endpoint: 0.0.0.0:6831
      # thrift_binary:
      #   endpoint: 0.0.0.0:6832
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu: {}
      load: {}
      memory: {}
      disk: {}
      filesystem: {}
      network: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        # otel-collector internal metrics
        - job_name: otel-collector
          static_configs:
            - targets:
                - localhost:8888
              labels:
                job_name: otel-collector
  apache:
    endpoint: ""

processors:
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  signozspanmetrics/prometheus:
    metrics_exporter: prometheus
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s]
    dimensions_cache_size: 100000
    dimensions:
      - name: service.namespace
        default: default
      - name: deployment.environment
        default: default
      # This is added to ensure the uniqueness of the timeseries
      # Otherwise, identical timeseries produced by multiple replicas of
      # collectors result in incorrect APM metrics
      - name: 'signoz.collector.id'
  # memory_limiter:
  #   # 80% of maximum memory up to 2G
  #   limit_mib: 1500
  #   # 25% of limit up to 2G
  #   spike_limit_mib: 512
  #   check_interval: 5s
  #   # # 50% of the maximum memory
  #   limit_percentage: 50
  #   # 20% of max memory usage spike expected
  #   spike_limit_percentage: 20
  # queued_retry:
  #   num_workers: 4
  #   queue_size: 100
  #   retry_on_failure: true
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system] # include ec2 for AWS, gce for GCP and azure for Azure.
    timeout: 2s

extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  zpages:
    endpoint: 0.0.0.0:55679
  pprof:
    endpoint: 0.0.0.0:1777

exporters:
  clickhousetraces:
    datasource:
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    low_cardinal_exception_grouping: ${LOW_CARDINAL_EXCEPTION_GROUPING}
  clickhousemetricswrite:
    endpoint:
    resource_to_telemetry_conversion:
      enabled: true
  clickhousemetricswrite/prometheus:
    endpoint:
  prometheus:
    endpoint: 0.0.0.0:8889
  # logging: {}
  clickhouselogsexporter:
    dsn:
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    timeout: 5s
    sending_queue:
      queue_size: 100
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s

service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions:
    - health_check
    - zpages
    - pprof
  pipelines:
    traces:
      receivers: [jaeger, otlp]
      processors: [signozspanmetrics/prometheus, batch]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhousemetricswrite]
    metrics/generic:
      receivers: [hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [clickhousemetricswrite]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [clickhousemetricswrite/prometheus]
    metrics/spanmetrics:
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
    logs:
      receivers: [otlp, filelog/dockercontainers]
      processors: [batch]
      exporters: [clickhouselogsexporter]
```

Josu
Mon, 27 Feb 2023 16:49:46 UTC

On line 70 you will see:

```yaml
apache:
  endpoint: ""
```

Srikanth
Mon, 27 Feb 2023 17:31:57 UTC

You only added the receiver to the receivers section but didn't add it to the pipeline. As I mentioned earlier, just adding the receiver alone doesn't enable it; you need to add it to the metrics pipeline.

Josu
Mon, 27 Feb 2023 18:06:45 UTC

Would it also need to be added in otel-collector-metrics-config.yaml, and how would that be done? I don't see anything in the documentation.

Srikanth
Mon, 27 Feb 2023 18:12:46 UTC

Add apache to the metrics pipeline here, changing

```yaml
metrics:
  receivers: [otlp]
  processors: [batch]
  exporters: [clickhousemetricswrite]
```

to

```yaml
metrics:
  receivers: [otlp, apache]
  processors: [batch]
  exporters: [clickhousemetricswrite]
```
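(Putting the two pieces together, a minimal sketch of the relevant fragments; the endpoint URL is a placeholder for wherever httpd serves its status page:)

```yaml
receivers:
  apache:
    # Placeholder: point at your Apache status URL
    endpoint: "http://localhost:80/server-status?auto"

service:
  pipelines:
    metrics:
      # apache added alongside the existing receivers
      receivers: [otlp, apache]
      processors: [batch]
      exporters: [clickhousemetricswrite]
```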

Josu
Mon, 27 Feb 2023 19:31:48 UTC

Thank you very much for the help. I can see the Apache metrics now.