Kubernetes

Use nullplatform to seamlessly deploy and manage your containerized assets within Kubernetes. This offers the same experience as using EC2 instances, including Progressive Blue-Green Deployments, parameter handling, metrics, and more.

Currently, nullplatform supports AWS's Elastic Kubernetes Service (EKS) as a Kubernetes provider. We're actively working on integrating with other Kubernetes solutions. Please reach out to nullplatform support for inquiries about providers not yet listed.

EKS

Configuration

Nullplatform Setup

To configure your EKS cluster, add the following keys to your NRN:

k8s.clusterId
k8s.namespace

Optionally, if you want to use your application's namespace as the Kubernetes namespace:

  k8s.useNullNamespace: true

Warning: Using the application namespace will create separate load balancers for private and public traffic in each namespace.
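One way to set these keys is through the runtime configuration API, mirroring the examples later on this page. This is a hedged sketch: the NRN and the cluster id value are placeholders, and your setup may use a different mechanism to manage NRN keys.

```json
POST /runtime_configuration
{
  "nrn": "organization=1",
  "values": {
    "k8s": {
      "clusterId": "my-eks-cluster",
      "namespace": "nullplatform"
    }
  }
}
```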

Cluster

For nullplatform to interact with your EKS cluster, you need to grant it the appropriate permissions.

Modify the aws-auth ConfigMap to include:

- groups:
    - system:masters
  rolearn: arn:aws:iam::<your account>:role/null-scope-and-deploy-manager
  username: null-scope-and-deploy-manager
- groups:
    - eks:k8s-metrics
  rolearn: arn:aws:iam::<your account>:role/null-telemetry-manager
  username: null_scope_manager_role
- groups:
    - np:pod-reader
  rolearn: arn:aws:iam::<your account>:role/null-telemetry-manager
  username: null_scope_manager_role

The role ARNs above are the same roles you configured in nullplatform for AWS access.
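If you prefer not to edit the ConfigMap by hand, the same mapping can be added with eksctl. The cluster name and region below are placeholders; adjust them to your environment and repeat the command for the null-telemetry-manager role with the eks:k8s-metrics and np:pod-reader groups.

```shell
eksctl create iamidentitymapping \
  --cluster my-cluster --region us-east-1 \
  --arn "arn:aws:iam::<your account>:role/null-scope-and-deploy-manager" \
  --group system:masters \
  --username null-scope-and-deploy-manager
```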

Install ALB Ingress controller (for EKS)

Install the ALB ingress controller so that nullplatform can direct traffic to your scopes: https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html

tip

Remember to tag your subnets so they're eligible for the ingress controller to create load balancers:

kubernetes.io/role/internal-elb: 1  # For internal load balancers
kubernetes.io/role/elb: 1           # For public load balancers
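These tags can be applied with the AWS CLI. The subnet IDs below are placeholders:

```shell
# Private subnets: allow internal load balancers
aws ec2 create-tags --resources subnet-0123abcd \
  --tags Key=kubernetes.io/role/internal-elb,Value=1

# Public subnets: allow internet-facing load balancers
aws ec2 create-tags --resources subnet-4567efgh \
  --tags Key=kubernetes.io/role/elb,Value=1
```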

Metrics

First, install the metrics server:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

Then, to enable metrics, apply the following configuration in your cluster:

cat << EOF > null-metrics.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: nullplatform-tools
  labels:
    name: nullplatform-tools
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nullplatform-pod-metadata-reader-sa
  namespace: nullplatform-tools
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nullplatform-pod-metadata-reader
rules:
  - apiGroups: ["", "metrics.k8s.io"]
    resources: ["pods", "nodes", "configmaps"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nullplatform-leases
rules:
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "list", "create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: np:pod-reader-role
  annotations:
    description: "Allows read access to pods"
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["list", "get", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]
---
# clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nullplatform-pod-metadata-reader-binding
subjects:
  - kind: ServiceAccount
    name: nullplatform-pod-metadata-reader-sa
    namespace: nullplatform-tools
roleRef:
  kind: ClusterRole
  name: nullplatform-pod-metadata-reader
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nullplatform-leases-pod-metadata-reader-binding
subjects:
  - kind: ServiceAccount
    name: nullplatform-pod-metadata-reader-sa
    namespace: nullplatform-tools
roleRef:
  kind: ClusterRole
  name: nullplatform-leases
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: np:pod-reader
  annotations:
    description: "Binds the pod reader role to a group"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: np:pod-reader-role
subjects:
  - kind: Group
    name: np:pod-reader
    apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nullplatform-log-controller
  namespace: nullplatform-tools
spec:
  selector:
    matchLabels:
      name: nullplatform-log-controller
  template:
    metadata:
      labels:
        name: nullplatform-log-controller
    spec:
      serviceAccountName: nullplatform-pod-metadata-reader-sa
      containers:
        - name: nullplatform-log-controller
          image: public.ecr.aws/nullplatform/k8s-logs-controller:latest
          volumeMounts:
            - mountPath: /var/log
              name: host-logs
          env:
            - name: CLOUDWATCH_RETENTION_DAYS
              value: '7'
            - name: CLOUDWATCH_REGION
              value: us-east-1
            - name: CLOUDWATCH_LOGS_ENABLED
              value: 'true'
            - name: CLOUDWATCH_PERFORMANCE_METRICS_ENABLED
              value: 'true'
            - name: CLOUDWATCH_CUSTOM_METRICS_ENABLED
              value: 'true'
      volumes:
        - name: host-logs
          hostPath:
            path: /var/log
EOF

kubectl apply -f null-metrics.yaml

In this configuration, you can set the following environment variables to adjust metrics behavior:

  • CLOUDWATCH_RETENTION_DAYS: log retention, in days
  • CLOUDWATCH_REGION: the CloudWatch region to use
  • CLOUDWATCH_LOGS_ENABLED: logs are enabled only when set to "true"
  • CLOUDWATCH_PERFORMANCE_METRICS_ENABLED: performance metrics are enabled only when set to "true"
  • CLOUDWATCH_CUSTOM_METRICS_ENABLED: custom metrics are enabled only when set to "true"
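The enabled flags are compared against the exact string "true". A minimal sketch of that check (the helper name is hypothetical, for illustration only):

```python
import os

def cloudwatch_flag_enabled(name: str) -> bool:
    # Per the list above: any value other than the exact string "true"
    # (including "True", "1", or an unset variable) disables the feature.
    return os.environ.get(name) == "true"

os.environ["CLOUDWATCH_LOGS_ENABLED"] = "true"
print(cloudwatch_flag_enabled("CLOUDWATCH_LOGS_ENABLED"))   # True

os.environ["CLOUDWATCH_LOGS_ENABLED"] = "True"  # wrong case -> disabled
print(cloudwatch_flag_enabled("CLOUDWATCH_LOGS_ENABLED"))   # False
```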

Logs

The Logs Controller allows you to send logs to various log providers, such as CloudWatch and Datadog, according to your preferences. Additionally, when running on Kubernetes, you can integrate logs directly with your cluster: the controller streams logs from all your pods straight to nullplatform (nullplatform will not store your logs), which helps avoid the costs associated with log integration services.

Advantages of Direct Kubernetes Logging

  • Cost Efficiency: Reduces expenses by bypassing third-party log integrators.
  • Raw Log Access: Provides unfiltered access to raw logs, ensuring that no data is pre-filtered, limited, or discarded.

To enable direct logging through Kubernetes, update your runtime configuration or nrn with the following key:

global.logProvider: "k8slogs"
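For instance, mirroring the runtime configuration examples elsewhere on this page (the NRN is a placeholder):

```json
POST /runtime_configuration
{
  "nrn": "organization=1",
  "values": {
    "global": {
      "logProvider": "k8slogs"
    }
  }
}
```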

It can take up to 5 minutes for the configuration change to take effect.

Cluster scaling, CPU & memory limits

To make sure your cluster has the resources it needs to scale, we recommend installing Karpenter: https://karpenter.sh/

In case you want to fine tune resource allocation for a scope, it is possible to configure the amount of CPU and memory that's reserved and / or requested for the underlying Kubernetes pods.

CPU

You can configure the minimum CPU needed to schedule the pod on a node ("request" in K8s terminology) and the maximum CPU it can use ("limit") through a configuration that assigns CPU as a multiplier of the pod's GB of RAM. The default multiplier is 3, so your pods can use some additional CPU when under heavy workloads.

info

Why a multiplier? We assume there's a correlation between memory usage and CPU usage under large workloads.

You can configure this multiplier through Runtime Configurations using the maxCoresMultiplier key under the k8s namespace (the default value is 3).

For example:

POST /runtime_configuration
{
  "nrn": "organization=1",
  "dimensions": {
    "environment": "development"
  },
  "values": {
    "k8s": {
      "maxCoresMultiplier": 2 // Set the CPU limit to double the request (limit = 2 * request)
    }
  }
}

Setting a cap for reserved CPU cores. It's also possible to cap the number of CPU cores that Kubernetes assigns to the pods. This is useful to reserve fewer CPU cores for scopes that are configured with a lot of RAM. To do so, set the maxMiliCores key to the desired value.
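To make the arithmetic concrete, here is a minimal sketch. The limit-as-a-multiple-of-the-request relation and the maxMiliCores cap come from this page; the baseline of one core (1000 millicores) requested per GB of RAM is an assumption for illustration, and the helper name is hypothetical:

```python
def cpu_request_and_limit(memory_gb, max_cores_multiplier=3, max_mili_cores=None):
    """Sketch of the CPU sizing described above.

    Assumption: one core (1000 millicores) is requested per GB of RAM.
    The limit is maxCoresMultiplier times the request, and maxMiliCores
    caps the reserved (requested) cores for memory-heavy scopes.
    """
    request = int(memory_gb * 1000)
    if max_mili_cores is not None:
        request = min(request, max_mili_cores)
    limit = request * max_cores_multiplier
    return request, limit

# A 2 GB scope with the default multiplier of 3:
print(cpu_request_and_limit(2))                        # (2000, 6000)
# An 8 GB scope with reserved CPU capped at 4000 millicores:
print(cpu_request_and_limit(8, max_mili_cores=4000))   # (4000, 12000)
```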

Memory: soft and hard limits

warning

We recommend against changing this configuration as it will over-commit your nodes and can have unwanted consequences, like pods being killed.

While the default behavior is for scopes to reserve memory beforehand, it's possible to configure nullplatform to create pods whose requested memory is lower than their actual memory limit.

info

What's being configured when I create a scope? The value configured by the developer when the scope is created is normally both the requested and the limit memory; if the default values are overridden, it becomes only the requested memory (i.e., the actual hard limit will be higher).

You can configure this multiplier through Runtime Configurations using the memoryRequestToLimitRatio key under the k8s namespace (the default value is 1).

For example:

POST /runtime_configuration
{
  "nrn": "organization=1",
  "dimensions": {
    "environment": "development"
  },
  "values": {
    "k8s": {
      "memoryRequestToLimitRatio": 2 // Set the memory limit to double the request (limit = 2 * request)
    }
  }
}
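As a sketch of this relation, here is the ratio semantics in code. The limit = ratio × request rule comes from this page; the helper name is hypothetical:

```python
def memory_request_and_limit(configured_mb, request_to_limit_ratio=1):
    """Sketch of memoryRequestToLimitRatio.

    With the default ratio of 1, the value configured at scope creation is
    both the request and the limit. With a higher ratio, the configured
    value becomes the request, and the hard limit is ratio * request
    (over-committing the node).
    """
    request = configured_mb
    limit = configured_mb * request_to_limit_ratio
    return request, limit

# Default: request == limit
print(memory_request_and_limit(1024))       # (1024, 1024)
# ratio = 2: the hard limit is double the reservation
print(memory_request_and_limit(1024, 2))    # (1024, 2048)
```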