How to set up Astra Traffic Monitoring with Nginx in Kubernetes

This article explains how to configure NGINX with OpenTelemetry (OTel) for traffic monitoring in Kubernetes environments such as EKS, GKE, and AKS. It also covers troubleshooting common issues encountered during configuration.

Nginx integration



This section details the steps required to install and configure the ingress-nginx controller so that incoming HTTP requests are instrumented.

If ingress-nginx is not present, follow the instructions below to install it via Helm. If it is already installed, skip the installation step and proceed directly to configuration.
Install ingress-nginx
Configure k8s ingress resource
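The Helm installation can be sketched as follows; the release name nginx-ingress and the ingress-nginx namespace are assumptions chosen to match the resource names shown in the verification output below:

```shell
# Add the official ingress-nginx chart repository and refresh the index
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
# Install the controller into its own namespace
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```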

To verify the ingress-nginx installation, run kubectl get all -n ingress-nginx. The output should look similar to:

    NAME                                                    READY   STATUS    RESTARTS   AGE
    pod/nginx-ingress-ingress-nginx-controller-xxx-xx       1/1     Running   0          1m

    NAME                                                       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
    service/nginx-ingress-ingress-nginx-controller             LoadBalancer   10.100.58.201   <ip>            80:32404/TCP,443:31635/TCP   1m
    service/nginx-ingress-ingress-nginx-controller-admission   ClusterIP      10.100.66.225   <none>          443/TCP                      1m

    NAME                                                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/nginx-ingress-ingress-nginx-controller   1/1     1            1           1m

    NAME                                                         DESIRED   CURRENT   READY   AGE
    replicaset.apps/nginx-ingress-ingress-nginx-controller-xxx   1         1         1       1m


Create a working directory and change into it: mkdir -p /getastra/ingress_nginx && cd /getastra/ingress_nginx

Create a ConfigMap manifest ga-obs-config.yaml with the following content:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ga-obs-config
  namespace: ingress-nginx
data:
  ga_obs_nginx.toml: |
    exporter = "otlp"
    processor = "batch"
    [exporters.otlp]
    # Alternatively the OTEL_EXPORTER_OTLP_ENDPOINT environment variable can also be used.
    host = "astra-traffic-collector.obs-client.svc.cluster.local" # Astra collector service
    port = 4317
    # Optional: enable SSL, for endpoints that support it
    # use_ssl = true
    # Optional: set a filesystem path to a pem file to be used for SSL encryption
    # (when use_ssl = true)
    # ssl_cert_path = "/path/to/cacert.pem"
    [processors.batch]
    max_queue_size = 2048
    schedule_delay_millis = 5000
    max_export_batch_size = 512
    [service]
    # Can also be set by the OTEL_SERVICE_NAME environment variable.
    name = "nginx-proxy" # Opentelemetry resource name
    [sampler]
    name = "AlwaysOn" # Also: AlwaysOff, TraceIdRatioBased
    ratio = 1.0
    parent_based = false


Create the ConfigMap resource: kubectl apply -f ga-obs-config.yaml
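A quick check that the ConfigMap landed in the right namespace:

```shell
# Print the ConfigMap; an error here means it was not created in ingress-nginx
kubectl get configmap ga-obs-config -n ingress-nginx
```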

Fetch and edit nginx.tmpl to instrument HTTP response headers

Fetch nginx.tmpl from the ingress-nginx pod (note: omit -it here, since allocating a TTY can corrupt redirected output): kubectl exec $(kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}') -n ingress-nginx -- cat /etc/nginx/template/nginx.tmpl > ori_nginx.tmpl

Copy the file: cp ori_nginx.tmpl nginx.tmpl

Open nginx.tmpl in an editor (vi nginx.tmpl) and search for header_filter_by_lua_block. Add one line above this block and two lines at the end of the block, as shown below.
set $resp_headers '';
header_filter_by_lua_block {
    ...
    local cjson = require "cjson"
    ngx.var.resp_headers = cjson.encode(ngx.resp.get_headers())
}
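Before building the ConfigMap, it can help to diff the edited template against the original copy to confirm that only the intended lines changed:

```shell
# Unified diff; only the set $resp_headers line and the two lua lines should appear
diff -u ori_nginx.tmpl nginx.tmpl
```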


Create a ConfigMap from the edited template: kubectl create configmap nginx-template-astra-instrumented --from-file=nginx.tmpl -n ingress-nginx

Verify that the ConfigMap was created in the same namespace as ingress-nginx: kubectl get cm -n ingress-nginx

Create values.yaml with the following content:
Replace <your_sensor_id> with the integration ID displayed for your nginx integration on the Integrations page in the UI.

controller:
  extraVolumeMounts:
    - mountPath: /etc/nginx/template/nginx.tmpl
      subPath: nginx.tmpl
      name: nginx-template-volume
      readOnly: true
    - mountPath: /etc/nginx/ga-obs
      name: ga-obs
      readOnly: true

  extraVolumes:
    - name: nginx-template-volume
      configMap:
        name: nginx-template-astra-instrumented
        defaultMode: 420
    - name: ga-obs
      configMap:
        name: ga-obs-config
        defaultMode: 420

  config:
    allow-snippet-annotations: "true"
    http-snippet: |
      opentelemetry_config /etc/nginx/ga-obs/ga_obs_nginx.toml;
      opentelemetry_ignore_paths /is-dynamic-lb-initialized|/health|/metric;
    location-snippet: |
      opentelemetry_operation_name ga-otel-nginx;
      opentelemetry_attribute sensor.version $nginx_version;
      opentelemetry_attribute sensor.id <your_sensor_id>;
      opentelemetry_attribute url.query $query_string;
      opentelemetry_attribute http.request.body $request_body;
      opentelemetry_attribute net.sock.peer.addr $remote_addr;
      opentelemetry_attribute net.sock.peer.port $remote_port;
      access_by_lua_block {
          local cjson = require "cjson"
          ngx.var.req_headers = cjson.encode(ngx.req.get_headers())
      }
      opentelemetry_attribute http.request.headers $req_headers;
      opentelemetry_attribute http.response.headers $resp_headers;
    main-snippet: |
      load_module /etc/nginx/modules/otel_ngx_module.so;
    server-snippet: |
      set $req_headers '';


Add the ingress-nginx Helm repository: helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

Run helm upgrade ingress-nginx -n ingress-nginx ingress-nginx/ingress-nginx -f values.yaml --debug
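After the upgrade, one way to confirm that the snippets reached the rendered NGINX configuration is to dump it from the controller pod and filter for the OpenTelemetry directives (the pod label selector is assumed to match the one used in the earlier steps):

```shell
# Name of the controller pod
POD=$(kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx \
  -o jsonpath='{.items[0].metadata.name}')
# Dump the full rendered configuration and look for the OTEL directives
kubectl exec -n ingress-nginx "$POD" -- nginx -T | grep -i opentelemetry
```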


Traffic Collector integration



Fresh Installation



Create the obs-client namespace for the traffic collector installation: kubectl create ns obs-client

Add the Astra traffic collector Helm repository: helm repo add getastra https://raw.githubusercontent.com/getastra/obs-deployments/gh-pages/

Create values.yaml with the following content:
Replace <collectorId> with the integrationId displayed for your traffic collector integration on the Integrations page in the UI.
Replace <clientId> with the clientId displayed during the creation of the traffic collector integration.
Replace <clientSecret> with the clientSecret displayed during the creation of the traffic collector integration.

secret:
  name: astra-collector-secrets
  collectorId: <collectorId>
  clientId: <clientId>
  clientSecret: <clientSecret>
  tokenUrl: https://auth.getastra.com/realms/astra_api_scanner/protocol/openid-connect/token


Install the helm chart by running helm upgrade --install traffic-collector getastra/traffic-collector-chart --namespace obs-client --debug --values values.yaml
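To confirm the collector came up, check the pod status and tail its log (the pod name astra-traffic-collector-0 assumes the chart's default StatefulSet naming, as used in the FAQ below):

```shell
# Pod should show Running
kubectl get pods -n obs-client
# Recent log lines; authentication or export errors would surface here
kubectl logs astra-traffic-collector-0 -n obs-client --tail=20
```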

To use a customized config, create and edit config_custom.yaml (refer to the collector configuration documentation for the available options).

If you created config_custom.yaml in the previous step, create a ConfigMap from it: kubectl create configmap astra-collector-custom-config --from-file=./config_custom.yaml -n obs-client

To mount the customized configuration, create values.yaml with the following content:
Replace <collectorId> with the integrationId displayed for your traffic collector integration on the Integrations page in the UI.
Replace <clientId> with the clientId displayed during the creation of the traffic collector integration.
Replace <clientSecret> with the clientSecret displayed during the creation of the traffic collector integration.

secret:
  name: astra-collector-secrets
  collectorId: <collectorId>
  clientId: <clientId>
  clientSecret: <clientSecret>
  tokenUrl: https://auth.getastra.com/realms/astra_api_scanner/protocol/openid-connect/token

volumes:
  - name: custom-config
    configMap:
      name: astra-collector-custom-config
      defaultMode: 444

volumeMounts:
  - name: collector-message
    mountPath: /var/lib/otelcol/file_storage
  - name: custom-config
    mountPath: /etc/otelcol-contrib/config_custom.yaml
    subPath: config_custom.yaml


Upgrade the traffic collector with the updated values.yaml: helm upgrade --install traffic-collector getastra/traffic-collector-chart --namespace obs-client --debug --values values.yaml


Upgrade



To upgrade the traffic collector Helm chart to the latest version:

helm repo update
helm upgrade --install traffic-collector getastra/traffic-collector-chart --namespace obs-client --debug --values values.yaml
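To see which chart version is now deployed:

```shell
# Shows the release, its revision, and the deployed chart version
helm list -n obs-client
```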



Troubleshooting



Unable to send trace from nginx to traffic collector

Symptoms

No entries in the inventory, or the inventory is not getting updated

The following (or a similar) error is seen in the nginx log:

[Error] File: /tmp/build/opentelemetry-cpp/exporters/otlp/src/otlp_grpc_exporter.cc:66 [OTLP TRACE GRPC Exporter] Export() failed with status_code: "UNAVAILABLE" error_message: "DNS resolution failed for ...


Cause

NGINX is unable to resolve the traffic-collector address

Solution

Non-Kubernetes environment

If the traffic collector address given in nginx.conf is incorrect: locate the http block in your NGINX configuration file, find the otel_exporter {} block, update the endpoint with your FQDN or public IP and port, then restart NGINX.

How do you get the right address? It should be the public IP of the collector server instance, or a configured Fully Qualified Domain Name (FQDN).

Kubernetes environment

If the traffic collector address given in the ga-obs-config ConfigMap is incorrect, assign the following values in it:

host = "astra-traffic-collector.obs-client.svc.cluster.local"
port = 4317
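To verify that this address resolves from inside the cluster, a throwaway pod can be used (the busybox image and the dns-test pod name are arbitrary choices):

```shell
# One-off pod that runs nslookup against cluster DNS and is deleted afterwards
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup astra-traffic-collector.obs-client.svc.cluster.local
```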



Unable to send traces from the traffic collector to the GA collector


Symptoms

No entries in the inventory, or the inventory is not getting updated

The following error is seen in the astra-traffic-collector container log:

error	exporterhelper/queue_sender.go:92	Exporting failed. Dropping data.	{"kind": "exporter", "data_type": "traces", "name": "otlp", "error": "not retryable error: Permanent error: rpc error: code = Unauthenticated desc = transport: per-RPC creds failed due to error: failed to get security token from token endpoint (endpoint \"https://auth.getastra.com/realms/astra_api_scanner/protocol/openid-connect/token\"); oauth2: \"unauthorized_client\" \"Invalid client or Invalid client credentials\"", "dropped_items": 1}


Cause

Authentication fails with the IAM server

Solution

Non-Kubernetes environment

Edit /opt/astra-traffic-collector/.env and update it with the right credentials.

Restart the traffic-collector service.
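Assuming the collector runs as a Docker container named astra-traffic-collector (as in the FAQ below), the restart can look like:

```shell
# Restart the container so it picks up the updated .env
docker restart astra-traffic-collector
# Confirm it authenticates cleanly after the restart
docker logs --tail 20 astra-traffic-collector
```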

Kubernetes environment

Update values.yaml with the right credentials and then run helm upgrade:

helm upgrade --install traffic-collector getastra/traffic-collector-chart --namespace obs-client --debug --values values.yaml



Unable to see entries in inventory

Symptoms

No entries in the inventory, or the inventory is not getting updated

No errors in the nginx or traffic-collector logs

Cause

Unregistered hostname

Solution

Double-check that the hostname is registered under "target to be scanned".

If it is not registered, add the hostname under "extra hosts to be scanned".



FAQ (Frequently Asked Questions)



Can I see which traces are sent from my environment?

Yes, you can see the traces sent by the traffic collector. For a non-Kubernetes deployment, run docker logs astra-traffic-collector. For a Kubernetes deployment, run kubectl logs astra-traffic-collector-0 -n obs-client

How do I get the IP address of the traffic collector in a non-Kubernetes environment?

Since the container uses the host network, the IP of the container is the same as the virtual machine's IP.

Updated on: 18/10/2024
