Configuration Changes for OpenTelemetry with ISD

To configure OpenTelemetry in ISD, you need to set up the OpenTelemetry Collector service and integrate Prometheus with the OpenTelemetry Collector.

Setting Up OpenTelemetry Collector Service

Follow the steps below to set up the OpenTelemetry Collector service.

  1. Create the OpenTelemetry Collector Config (otel-config.yaml)

  2. Deploy the OpenTelemetry Collector

  3. Expose the Collector as a service

  4. Deploy the OpenTelemetry service

  5. Add Secrets

Create the OpenTelemetry Collector Config (otel-config.yaml)

Add the following code to create the OpenTelemetry collector config.

apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-config
data:
  otel-config.yaml: |
    receivers:
      otlp:
        protocols:
          http:
            endpoint: "0.0.0.0:4318"

    processors:
      batch:
        send_batch_max_size: 100
        send_batch_size: 10
        timeout: 10s

    exporters:
      debug:
        verbosity: detailed
      prometheus:
        endpoint: "0.0.0.0:9464"
        send_timestamps: true
        metric_expiration: 10m
        resource_to_telemetry_conversion:
          enabled: true

    service:
      pipelines:
        metrics:
          receivers: [ otlp ]
          processors: [ batch ]
          exporters: [ prometheus, debug ]
        traces:
          receivers: [ otlp ]
          processors: [ batch ]
          exporters: [ debug ]
        logs:
          receivers: [ otlp ]
          processors: [ batch ]
          exporters: [ debug ]

Apply otel-config.yaml into the namespace using the below command:

kubectl apply -f otel-config.yaml -n <namespace>
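To confirm the ConfigMap was created correctly, you can inspect it before moving on (replace the namespace placeholder with your actual namespace):

```shell
# Verify that the collector ConfigMap exists in the namespace.
kubectl get configmap otel-collector-config -n <namespace>

# Inspect its contents to confirm the pipelines were applied as written.
kubectl get configmap otel-collector-config -n <namespace> -o yaml
```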

Deploy the OpenTelemetry Collector

Run the following command to generate the deployment manifest for the OpenTelemetry collector.

kubectl create deployment otel-collector \
  --image=otel/opentelemetry-collector-contrib:0.119.0 -n <namespace> --dry-run=client \
  -o yaml > otel-deployment.yaml

Apply the otel-deployment.yaml into the namespace using the below command:

kubectl apply -f otel-deployment.yaml -n <namespace>

Add the below section to the otel-deployment.yaml file (the command, volumeMounts, and volumes entries):

spec:
      containers:
      - image: otel/opentelemetry-collector-contrib:0.119.0
        name: opentelemetry-collector-contrib
        command:
            - /otelcol-contrib
            - '--config=/conf/otel-config.yaml'
        resources: {}
        volumeMounts:
        - mountPath: /conf
          name: otel-agent-config-vol
        - mountPath: /var/log/pods
          name: varlogpods
          readOnly: true
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: otel-collector-config
        name: otel-agent-config-vol
      - emptyDir: {}
        name: varlogpods
      - emptyDir: {}
        name: varlibdockercontainers

Apply otel-deployment.yaml into the namespace using the below command:

kubectl apply -f otel-deployment.yaml -n <namespace>
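After re-applying the deployment, a quick way to verify that the collector picked up the mounted config is to check the rollout status and the pod logs:

```shell
# Wait for the collector pods to become ready.
kubectl rollout status deployment/otel-collector -n <namespace>

# Inspect startup logs; a healthy collector reports its pipelines starting
# without config errors.
kubectl logs deployment/otel-collector -n <namespace> --tail=50
```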

Expose the Collector as a Service

To expose the collector as a service, create an otel-service.yaml file and add the below configuration:

apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  labels:
    app: otel-collector
spec:
  selector:
    app: otel-collector
  ports:
    - name: otlp-http
      port: 4318          
      targetPort: 4318
    - name: prometheus-metrics  # Prometheus metrics endpoint
      port: 9464
      targetPort: 9464
    - name: metrics       
      port: 8888
      targetPort: 8888
  type: ClusterIP

Deploy the OpenTelemetry service

Run the following command to deploy the OpenTelemetry service.

kubectl apply -f otel-service.yaml -n <namespace>
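As a smoke test, you can port-forward the collector's Prometheus metrics port locally and scrape it (assuming the service deployed as above):

```shell
# Forward the collector's Prometheus exporter port to localhost.
kubectl port-forward svc/otel-collector 9464:9464 -n <namespace> &

# The /metrics endpoint returns Prometheus-format text once metrics flow.
curl -s http://localhost:9464/metrics | head
```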

Add Secrets

In the oes-audit-service-config, add the otel block shown below above, and at the same level as, the open-telemetry key.

otel:
  exporter:
    otlp:
      metric-prefix:  # defaults to empty for Prometheus; set to a namespace for mosaic
      metrics:
        endpoint: http://otel-collector:4318/v1/metrics
        temporality.preference: CUMULATIVE  # optional, as the default is cumulative
      traces:
        endpoint: http://otel-collector:4318/v1/traces
open-telemetry:
  enabled: true

To Debug

If you find any connection issues after deployment, export the following environment variables for the otel-collector:

export OTEL_METRICS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318
export OTEL_EXPORTER_OTLP_METRICS_PROTOCOL=http
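Connectivity to the collector's OTLP/HTTP endpoint can also be smoke-tested with a minimal POST; an empty resourceMetrics payload is enough to confirm the endpoint accepts and parses OTLP requests (endpoint as configured above):

```shell
# POST a minimal (empty) OTLP JSON payload to the metrics endpoint.
# An HTTP 200 response confirms the collector is reachable and speaking OTLP.
curl -i -X POST http://otel-collector:4318/v1/metrics \
  -H "Content-Type: application/json" \
  -d '{"resourceMetrics":[]}'
```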

Integrating Prometheus with OpenTelemetry Collector

To integrate Prometheus with the OpenTelemetry collector, follow the steps given below:

Update the ConfigMap (otel-config.yaml) for the OpenTelemetry Collector:

Ensure that the following configuration is present under the exporters section in otel-config.yaml:

prometheus:
  endpoint: "0.0.0.0:9464"
  send_timestamps: true
  metric_expiration: 10m
  resource_to_telemetry_conversion:
    enabled: true

Modify Metrics Pipeline:

Modify the metrics pipeline by ensuring that prometheus is added to the list of exporters under the metrics pipeline in otel-config.yaml:

metrics:
  receivers: [ otlp ]
  processors: [ batch ]
  exporters: [ prometheus, debug ]

Configure Prometheus Scraping

Update the Prometheus ConfigMap (opsmx-prometheus-server) under the scrape_configs section, replacing oes-audit-service:19090 with the OpenTelemetry Collector port so that Prometheus scrapes metrics from the collector, as shown below:

static_configs:
  - targets:
    - otel-collector:9464

  • Restart the Prometheus service to apply the configuration changes.

  • View the metrics in Prometheus.

  • Remove the 19090 port from the audit service deployment.
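For reference, a complete scrape job entry in the Prometheus ConfigMap might look like the following sketch (the job_name and scrape_interval values are illustrative, not mandated by ISD):

```yaml
scrape_configs:
  - job_name: otel-collector      # illustrative job name
    scrape_interval: 30s          # illustrative interval
    static_configs:
      - targets:
          - otel-collector:9464   # the collector's Prometheus exporter port
```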
