Run a custom scope locally

This guide helps you prepare your local environment to work with nullplatform. You’ll learn how to build, run, and test a custom scope on your local machine.

Before you begin

Make sure you have the following tools installed:

  • The nullplatform CLI: curl https://cli.nullplatform.com/install.sh | sh
  • Docker: brew install --cask docker
  • Gomplate: brew install gomplate
  • Minikube (or other Kubernetes local runtime): brew install minikube
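
Before continuing, you can sanity-check that everything is on your PATH. The loop below assumes the nullplatform CLI installs a binary named np; adjust the list if your setup differs:

# Report any prerequisite that is missing from your PATH
for tool in np docker gomplate minikube; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done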

1. Create an application

We recommend creating a new application for this guide to avoid conflicts with existing scopes or configuration.

To create a new application:

  1. In the platform UI, go to your home dashboard and click + Create application.
  2. Fill in the application details and click Create application.

2. Clone the scopes repository

Open a terminal and clone the example scopes repository:

git clone https://github.com/nullplatform/examples-custom-scope-workshop.git
cd examples-custom-scope-workshop

This repo includes:

  • A base implementation of a Kubernetes scope
  • Utility scripts for configuration and cluster setup
  • The agent deployment flow

You’ll use this as your working directory throughout the guide.
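
If you want to orient yourself first, the scripts this guide invokes live at the repo root and under agent/:

ls create_asset configure agent/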

3. Create your API key

Create an API key that gives your agent the permissions it needs to connect to nullplatform.

  1. From the platform, go to Platform settings > API keys, and click + New API key.
  2. Select the account resource where the API key's roles will be assigned.

⚠️ Important: Roles must be assigned at the account level or the setup won’t work.

  3. Select the required roles:
    • agent, developer, ops, secops, secrets-reader
    • ci (only for this guide)
  4. Save your API key somewhere safe; you'll need it later.

4. Set the environment variables

Set these environment variables to configure your local environment:

export NP_API_KEY=<your_api_key_here>
export NRN=<your_application_nrn>
export SERVICE_PATH=k8s

What each variable does:

  • NP_API_KEY: The API key you created in step 3, with its roles assigned at the account level.
  • NRN: The NRN of your application (you can retrieve it from the platform UI).
  • SERVICE_PATH: Path to the scope implementation. For this guide, use k8s.
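
Before moving on, it's worth confirming that all three variables are actually set in your shell:

# Report whether each required variable is non-empty (bash)
for var in NP_API_KEY NRN SERVICE_PATH; do
  [ -n "${!var}" ] && echo "$var is set" || echo "$var is NOT set"
done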

5. Create a release asset

In the root of the repo, run:

./create_asset

This creates a release using a public Docker image defined in the repo.

You can confirm the release was created by going to Development > Releases in your application. You should see a new release named "custom scope demo".

6. Configure the custom scope

Time to set up the core components of your custom scope. The configuration script:

  • Registers the JSON schema that defines the scope’s parameters.
  • Creates action specs such as create-scope and delete-scope.
  • Sets up a notification channel so your agent can receive events.

Run the command:

./configure

7. Set up your local Kubernetes cluster

Provision a local cluster to run the agent and supporting components. This guide uses Minikube:

./agent/setup_minikube --enable-mount

Then:

./agent/configure_cluster

This creates the required Kubernetes resources, including the nullplatform and nullplatform-tools namespaces.
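
You can verify that the namespaces exist with kubectl:

kubectl get namespace nullplatform nullplatform-tools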

8. Deploy the nullplatform agent

Deploy the agent on your local machine:

./agent/deploy_agent

From the menu:

  1. Choose option 1 to deploy the agent.
  2. Choose option 3 to tail the logs.

You should see the following message when the agent is connected:

{"level":"info","message":"Commands Executor connected"}

Check that it works

To test the setup, create a new scope:

  1. In the platform UI, go to your application and open the Scopes view.
  2. Click + New scope.
  3. Choose your new custom scope (look under target).

    Note: The default name is NKS-<your-username>.

  4. Fill in the config details and click Create scope.

That’s it! Your custom scope is running and your app is live.

Test the deployment

Let’s confirm that the scope was deployed correctly.

In a new terminal, forward the service port:

SERVICE=$(kubectl get service -n nullplatform -o json | jq -r '.items[0].metadata.name // empty')
kubectl port-forward service/"$SERVICE" 8080:8080 -n nullplatform

Then check the health endpoint:

curl http://localhost:8080/health

You should see a response like:

{"status":"ok"}

You can also test the application routes:

curl http://localhost:8080/api/users
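
If you prefer a scripted check, a one-liner like this works (it assumes the port-forward above is still running):

# Succeeds only if the health endpoint returns {"status":"ok"}
curl -fsS http://localhost:8080/health | grep -q '"status":"ok"' \
  && echo "scope is healthy" \
  || echo "health check failed"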

Clean up your environment

Now that you’ve finished testing your custom scope locally, clean up your environment to free up system resources.

  1. Delete the scopes:

    • In the platform UI, go to your application's Scopes view.
    • Locate and delete the scopes you created during this guide.
  2. Shut down the agent:

    • Run the agent script again:

      ./agent/deploy_agent

    • Choose option 2 to shut down the agent.
    • Then choose option 5 to exit.

  3. Stop Minikube to shut down your local Kubernetes cluster:

    minikube stop
  4. (Optional) Delete the application:

    If you created a temporary application for this guide, feel free to delete it from the UI.

What's next?

Now that you're ready to deploy a scope to a real cluster, go to the next guide to set up a production-ready Kubernetes environment.


Troubleshooting

Agent image not found (Minikube)

If your local Docker image isn’t available in the cluster:

eval "$(minikube docker-env)"
docker build -t agent-local:latest .

If the image still isn't picked up, recreate the cluster with DNS proxying disabled:

minikube stop && minikube delete
minikube start --dns-proxy=false --dns-domain="cluster.local"

This disables problematic DNS behavior that can break agent discovery.
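
To rule out DNS problems directly, you can run a throwaway pod inside the cluster and try resolving an external name (nullplatform.com here is just an example):

# One-off busybox pod; it is removed automatically after nslookup exits
kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup nullplatform.com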

Error: ErrImageNeverPull

If your agent pod logs show an ErrImageNeverPull error, your Kubernetes runtime cannot access your local Docker image.

Fix:

Make sure the image is locally available to your Kubernetes environment. Then rebuild and redeploy:

eval "$(minikube docker-env)"
docker build -t agent-local:latest .
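
Then confirm the image is now visible to Minikube's Docker daemon before redeploying:

docker images agent-local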

Agent log tail shows no activity

If your agent is deployed but not reacting to events:

  1. Verify your application NRN is correct.
  2. Confirm NP_API_KEY includes the required roles and was assigned at the account level.
  3. Manually trigger a scope action and tail logs:
./agent/deploy_agent  # Option 3: Tail logs