Override Nginx configuration in the Traffic container

🎯 Goal: Customize the Nginx configuration used by the Traffic container in a Kubernetes scope, enabling advanced features such as WebSocket support and custom headers while preserving nullplatform metrics compatibility.

Introduction

By default, the Kubernetes scope provided by nullplatform ships with a preconfigured Nginx setup inside the Traffic container. In some advanced scenarios, you may need to override this configuration, for example:

  • Enabling WebSocket support
  • Adding custom headers
  • Tuning Nginx behavior for specific workloads

This tutorial walks you through creating a scope override that replaces the default nginx.conf using a ConfigMap. You’ll then wire that configuration into the Traffic container in a safe, supported way.

Important

Metrics shown in nullplatform are derived from the Traffic container logs. Do not modify the logging block in the Nginx configuration, or metrics may break.


What you’ll set up

By the end of this guide, you’ll have:

  • A custom Nginx configuration stored in a ConfigMap
  • The Traffic container mounting that configuration
  • A modified deployment workflow that renders and applies the ConfigMap
  • An agent-backed scope using your customized Traffic container

Prerequisites

1. Start from the main scope branch

Clone the scopes repository and make sure you are working from the main branch:

git clone https://github.com/nullplatform/scopes.git
cd scopes
git checkout main

All changes in this tutorial will be applied on top of the base Kubernetes scope.
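
Note that in step 7 the agent pulls the scope definition from your own copy of this repository, so if you haven't done so yet, point your clone at that remote. A minimal sketch, assuming your copy lives at https://github.com/your-org/scopes.git (the same placeholder used in step 7):

# Point the clone at your own copy of the scopes repository
# (replace the URL with your actual repository)
git remote set-url origin https://github.com/your-org/scopes.git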

2. Create a ConfigMap template for Nginx

Navigate to the following directory:

k8s/deployment/templates/

Create a new file with the .yaml.tpl extension, for example:

config-maps-traffic.yaml.tpl

This template defines a ConfigMap that contains your custom nginx.conf.

Preserve the logging configuration

Make sure your nginx.conf keeps the original logging block, as shown below:

http {
  include       /etc/nginx/mime.types;
  default_type  application/octet-stream;

  log_format main '$remote_addr [$time_local] "$request" '
                  '$status $request_time $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for" "$quality"';

  map $status $quality {
    ~^[23]  "OK (2XX, 3XX)";
    ~^4     "DENIED (4XXs)";
    default "ERROR (5XXs)";
  }
  ...
}

You can extend the configuration with WebSocket support or custom headers, as long as this block remains unchanged.

🤓 Here's an example of what your file should look like: configmaps.yaml.tpl
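
For orientation, below is a minimal sketch of what such a template could contain. The ConfigMap name matches the one referenced by the deployment template in the next step; the server block, ports, and custom header are illustrative placeholders rather than the scope's stock values, so adapt them to your workload:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-{{ .scope.id }}-{{ .deployment.id }}
data:
  nginx.conf: |
    worker_processes auto;

    events {}

    http {
      include       /etc/nginx/mime.types;
      default_type  application/octet-stream;

      # Keep this logging block unchanged so nullplatform metrics keep working
      log_format main '$remote_addr [$time_local] "$request" '
                      '$status $request_time $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" "$quality"';
      map $status $quality {
        ~^[23]  "OK (2XX, 3XX)";
        ~^4     "DENIED (4XXs)";
        default "ERROR (5XXs)";
      }

      # Access logs must reach the container output, since metrics are derived from them
      access_log /dev/stdout main;

      include /etc/nginx/conf.d/*.conf;
    }
  default.conf: |
    server {
      listen 8080;  # illustrative listen port

      location / {
        proxy_pass http://127.0.0.1:8081;      # illustrative application port
        add_header X-Custom-Header "example";  # example custom response header

        # Example WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
      }
    }

Step 3 mounts both nginx.conf and default.conf from this ConfigMap, which is why the sketch defines both keys.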

3. Mount the ConfigMap into the Traffic container

Open the deployment template located at:

k8s/deployment/templates/deployment.yaml.tpl

This is where you'll reference the ConfigMap so its contents can be mounted as the Nginx configuration files.

Add the volume definition

Inside the pod spec section (spec.template.spec), add:

volumes:
  - name: nginx-config
    configMap:
      name: nginx-config-{{ .scope.id }}-{{ .deployment.id }}

Add volume mounts to the Traffic container

In the Traffic container definition, add:

volumeMounts:
  - name: nginx-config
    mountPath: /etc/nginx/nginx.conf
    subPath: nginx.conf
  - name: nginx-config
    mountPath: /etc/nginx/conf.d/default.conf
    subPath: default.conf

This ensures Nginx uses your custom configuration files at runtime.
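
Put together, the relevant part of the rendered Deployment looks roughly like this (the container name and image are placeholders for this sketch; the real Traffic container definition comes from the scope's template):

spec:
  template:
    spec:
      containers:
        - name: traffic        # placeholder name for the Traffic container
          image: nginx:alpine  # placeholder image
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
            - name: nginx-config
              mountPath: /etc/nginx/conf.d/default.conf
              subPath: default.conf
      volumes:
        - name: nginx-config
          configMap:
            name: nginx-config-{{ .scope.id }}-{{ .deployment.id }}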

🤓 Here's an example of what your file should look like: deployment.yaml.tpl

4. Render the ConfigMap during deployment

Open the build script:

k8s/build_deployment

Define the ConfigMap output path

Add the following line:

CONFIGMAP_PATH="$OUTPUT_DIR/configmap-$SCOPE_ID-$DEPLOYMENT_ID.yaml"

Render the ConfigMap template

Append this rendering block:

echo "Building Template: $CONFIGMAP_TEMPLATE to $CONFIGMAP_PATH"

gomplate -c .="$CONTEXT_PATH" \
--file "$CONFIGMAP_TEMPLATE" \
--out "$CONFIGMAP_PATH"

TEMPLATE_GENERATION_STATUS=$?

if [[ $TEMPLATE_GENERATION_STATUS -ne 0 ]]; then
echo "Error building ConfigMap template"
exit 1
fi

rm "$CONTEXT_PATH"

This step ensures the ConfigMap is generated alongside other deployment manifests.
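
If you want to validate the template before running a full deployment, you can render it locally with the same gomplate invocation the script uses. A quick sketch, where context.json is a hypothetical stand-in for the file $CONTEXT_PATH points to:

# Render the ConfigMap template against a sample context and inspect the result
# (template path as registered in values.yaml in the next step; adjust if your layout differs)
gomplate -c .="context.json" \
  --file "k8s/deployment/templates/config-maps-traffic.yaml.tpl" \
  --out "configmap-preview.yaml"

cat configmap-preview.yaml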

5. Register the ConfigMap template

Open the values file:

k8s/values.yaml

Add the following entry:

configuration:
  CONFIGMAP_TEMPLATE: "$SERVICE_PATH/deployment/templates/config-maps-traffic.yaml.tpl"

This tells the deployment workflow where to find the ConfigMap template.

6. Include the ConfigMap in the workflow

Edit the initial deployment workflow:

k8s/deployment/workflows/initial.yaml

Add the following file input:

- name: CONFIGMAP_PATH
  type: file
  file: "$OUTPUT_DIR/configmap-$SCOPE_ID-$DEPLOYMENT_ID.yaml"

This ensures the ConfigMap is applied as part of the deployment lifecycle.

7. Deploy using an agent-backed scope

Once all changes are committed and pushed to your scope repository, update the agent configuration to point to it:

helm upgrade nullplatform-agent nullplatform/nullplatform-agent \
  --namespace nullplatform-tools \
  --set configuration.values.AGENT_REPOS="https://github.com/your-org/scopes.git#main" \
  --reuse-values

Your agent will now deploy scopes using the customized Nginx Traffic container.
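
To confirm the override is in place, you can inspect the rendered ConfigMap and the file mounted inside the Traffic container after a deployment. A rough sketch, where the namespace, pod, and container names are placeholders for your environment:

# Check that the ConfigMap was created for this scope and deployment
kubectl get configmap -n <scope-namespace> | grep nginx-config

# Inspect the nginx.conf the Traffic container is actually serving with
kubectl exec -n <scope-namespace> <pod-name> -c <traffic-container> -- cat /etc/nginx/nginx.conf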

Wrap-up 🎉

You’ve successfully:

  • Overridden the default Nginx configuration for the Traffic container
  • Preserved metric compatibility with nullplatform
  • Extended the Kubernetes scope using a clean, maintainable override

This pattern can be reused for other advanced scope customizations while keeping full compatibility with the platform.