This guide explains how to configure ingress-nginx with OpenTelemetry (OTel) for traffic monitoring in Kubernetes environments such as EKS, GKE, and AKS. It also covers troubleshooting common issues encountered during configuration.

Nginx integration

This section details the steps required to install and configure the ingress-nginx controller so that it instruments incoming HTTP requests.

  1. If ingress-nginx is not present, install it via Helm as shown below. If it is already installed, proceed to step 3.
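A minimal install sketch using the upstream ingress-nginx Helm chart; the release name and namespace are assumed to match the Helm commands used later in this guide (adjust for your environment):

sudo helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
sudo helm repo update
sudo helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace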

  2. To verify the ingress-nginx installation, run sudo kubectl get all -n ingress-nginx. The output should look similar to:

NAME                                                 READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-ingress-nginx-controller-xxx-xx   1/1     Running   0          1m

NAME                                                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/nginx-ingress-ingress-nginx-controller             LoadBalancer   10.100.58.201                 80:32404/TCP,443:31635/TCP   1m
service/nginx-ingress-ingress-nginx-controller-admission   ClusterIP      10.100.66.225                 443/TCP                      1m

NAME                                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-ingress-ingress-nginx-controller   1/1     1            1           1m

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-ingress-ingress-nginx-controller-xxx   1         1         1       1m
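If the controller pod is not yet Running, you can optionally wait for it to become ready before continuing; the selector below is the component label the ingress-nginx chart applies to the controller pod:

sudo kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s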
  3. **Create a working directory and change into it:**

mkdir -p ~/getastra/ingress_nginx_instrumentation
cd ~/getastra/ingress_nginx_instrumentation

  4. Take a backup of the existing ingress-nginx Helm values if they have already been customized:

sudo helm get values ingress-nginx -n ingress-nginx -o yaml | sudo tee ingress-nginx-original-values.yaml

**Note:** Custom configuration must be injected into the ingress-nginx deployment so that it can instrument incoming requests and responses. Although these customizations do not alter the functionality of ingress-nginx, we strongly recommend backing up the existing Helm values. If the ingress-nginx values have not been customized, the chart's default values are in effect; you can inspect them with the command shown below.
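To dump the chart defaults for reference (helm show values is standard Helm; this assumes the upstream ingress-nginx chart repository has been added as above):

sudo helm show values ingress-nginx/ingress-nginx > ingress-nginx-default-values.yaml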

  5. Fetch nginx.tmpl from the running controller pod and edit it to instrument HTTP response headers:

sudo helm upgrade ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx
POD_NAME=$(sudo kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
sudo kubectl cp -n ingress-nginx $POD_NAME:/etc/nginx/template/nginx.tmpl ori_nginx.tmpl
sudo cp ori_nginx.tmpl nginx.tmpl
Then edit nginx.tmpl to declare the $resp_headers variable and extend the header_filter_by_lua_block so the response headers are captured (the ... marks existing template content that should be kept):

set $resp_headers '';
header_filter_by_lua_block {
    ...
    local cjson = require "cjson"
    ngx.var.resp_headers = cjson.encode(ngx.resp.get_headers())
}
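After editing, a quick check that the new lines made it into the copied template (just a grep, assuming the edit above):

grep -n "resp_headers" nginx.tmpl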
  6. **Create the ConfigMap resource for the instrumented template:**

sudo kubectl create configmap nginx-template-astra-instrumented --from-file=nginx.tmpl -n ingress-nginx
  7. Create ga-obs-config.yaml with the following content:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ga-obs-config
  namespace: ingress-nginx
data:
  ga_obs_nginx.toml: |
    exporter = "otlp"
    processor = "batch"
    [exporters.otlp]
    host = "astra-traffic-collector.astra-collector.svc.cluster.local"
    port = 4317
    # Optional: enable SSL, for endpoints that support it
    # use_ssl = true
    # Optional: set a filesystem path to a pem file to be used for SSL encryption
    # (when use_ssl = true)
    # ssl_cert_path = "/path/to/cacert.pem"
    [processors.batch]
    max_queue_size = 2048
    schedule_delay_millis = 5000
    max_export_batch_size = 512
    [service]
    # Can also be set by the OTEL_SERVICE_NAME environment variable.
    name = "nginx-proxy" # Opentelemetry resource name
    [sampler]
    name = "AlwaysOn" # Also: AlwaysOff, TraceIdRatioBased
    ratio = 1.0
    parent_based = false
  8. **Create the ConfigMap resource from the manifest:**

sudo kubectl apply -f ga-obs-config.yaml
  9. **Verify that the ConfigMaps were created in the same namespace as ingress-nginx:**

sudo kubectl get cm -n ingress-nginx
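Both ConfigMaps created above should appear in the listing; you can also query them by name (standard kubectl, using the names from the earlier steps):

sudo kubectl get cm nginx-template-astra-instrumented ga-obs-config -n ingress-nginx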
  10. Create values.yaml with the following content:

controller:
  extraVolumeMounts:
    - mountPath: /etc/nginx/template/nginx.tmpl
      subPath: nginx.tmpl
      name: nginx-template-volume
      readOnly: true
    - mountPath: /etc/nginx/ga-obs
      name: ga-obs
      readOnly: true

  extraVolumes:
    - name: nginx-template-volume
      configMap:
        name: nginx-template-astra-instrumented
        defaultMode: 420
    - name: ga-obs
      configMap:
        name: ga-obs-config
        defaultMode: 420

  config:
    allow-snippet-annotations: "true"
    http-snippet: |
      opentelemetry_config /etc/nginx/ga-obs/ga_obs_nginx.toml;
      opentelemetry_ignore_paths /is-dynamic-lb-initialized|/health|/metric;
    location-snippet: |
      opentelemetry_operation_name ga-otel-nginx;
      opentelemetry_attribute sensor.version $nginx_version;
      opentelemetry_attribute sensor.id ;
      opentelemetry_attribute http.request.body $request_body;
      opentelemetry_attribute net.sock.peer.addr $remote_addr;
      opentelemetry_attribute net.sock.peer.port $remote_port;
      access_by_lua_block {
        local cjson = require "cjson"
        ngx.var.req_headers = cjson.encode(ngx.req.get_headers())
      }
      opentelemetry_attribute http.request.headers $req_headers;
      opentelemetry_attribute http.response.headers $resp_headers;
    main-snippet: |
      load_module /etc/nginx/modules/otel_ngx_module.so;
    server-snippet: |
      set $req_headers '';
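Optionally, before upgrading, you can render the chart locally with the new values and confirm that the OpenTelemetry snippets appear in the generated controller configuration (a sanity check only; helm template and grep are standard tooling):

sudo helm template ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx -f values.yaml | grep -n "opentelemetry"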
  11. **Upgrade ingress-nginx with the instrumentation** (the first command requires the helm-diff plugin and only previews the changes):

sudo helm diff upgrade ingress-nginx -n ingress-nginx ingress-nginx/ingress-nginx -f values.yaml --debug
sudo helm upgrade ingress-nginx -n ingress-nginx ingress-nginx/ingress-nginx -f values.yaml --debug
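After the upgrade, it is worth confirming that the controller pod restarted cleanly and that the OpenTelemetry module was picked up. A rough check, reusing the label selector from the earlier steps (the grep is only a heuristic):

sudo kubectl get pods -n ingress-nginx
POD_NAME=$(sudo kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
sudo kubectl logs -n ingress-nginx $POD_NAME | grep -i opentelemetry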

Troubleshooting

  1. Unable to send traces from nginx to the traffic collector

Symptoms

[Error] File: /tmp/build/opentelemetry-cpp/exporters/otlp/src/otlp_grpc_exporter.cc:66 [OTLP TRACE GRPC Exporter] Export() failed with status_code: "UNAVAILABLE" error_message: "DNS resolution failed for ...

Cause

The collector host configured in ga_obs_nginx.toml could not be resolved by cluster DNS. Make sure the namespace segment of the host matches the namespace in which the traffic collector is actually deployed:

host = "astra-traffic-collector.{namespace}.svc.cluster.local"
port = 4317
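A quick way to check that the collector Service actually exists with that name and namespace (standard kubectl; adjust the namespace to wherever your traffic collector is deployed):

sudo kubectl get svc astra-traffic-collector -n astra-collector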
  2. Unable to see entries in the inventory

Symptoms

FAQ (Frequently Asked Questions)

  1. Can I see which traces are sent from my environment?

Yes, you can view the traces sent by the traffic collector. For a non-Kubernetes deployment, run:

docker logs astra-traffic-collector

For a Kubernetes deployment, run:

kubectl logs astra-traffic-collector-0 -n astra-collector

  2. How do I get the IP address of the traffic collector in a non-Kubernetes environment?

Since the container runs on the host network, the container's IP address is the same as the virtual machine's IP address.
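For example, on the VM itself you can list its addresses with standard Linux tooling and use the address of the interface that is reachable from your workloads:

hostname -I
# or, for more detail:
ip -4 addr show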