Stream telemetry from your systems
You can stream logs and metrics from your internal tools, dashboards, or telemetry pipelines using the agent and telemetry API.
This guide shows you how to:
- Connect your own telemetry backend
- Configure the agent
- Return structured responses for each telemetry operation
You stay in full control of the data—everything runs inside your environment. The agent securely bridges nullplatform and your service: it receives requests, forwards them, and returns results to the UI.
How it works
When telemetry is requested—either from the UI or triggered by a workflow—the agent will forward the request to your configured telemetry service. Your system handles the logic, returns structured results, and the agent passes those results to the platform for visualization, alerts, or audit trails.
To make this work, you'll configure:
- A telemetry provider (logs, metrics, or both)
- A notification channel with "source": "telemetry" to route telemetry actions to your system
You get full control over what’s returned, and everything stays local.
1. Point telemetry to your system
Configure the telemetry provider at the service level to route requests for a specific service to your custom handler.
This setup ensures that logs and metrics for that service are handled by your own telemetry backend.
np nrn patch \
  --nrn "$NRN" \
  --body "{
    \"global.${SERVICE_SPEC_SLUG}_metric_provider\": \"externalmetrics\",
    \"global.${SERVICE_SPEC_SLUG}_log_provider\": \"external\"
  }"
Replace:
- $NRN with your application’s NRN
- $SERVICE_SPEC_SLUG with your service identifier (e.g. billing-api)
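For example, with billing-api as the service spec slug, the patch body expands to:
{
  "global.billing-api_metric_provider": "externalmetrics",
  "global.billing-api_log_provider": "external"
}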
2. Set up a telemetry channel
Once the provider is configured, the agent needs to know where to send telemetry actions.
To do that, you'll create a notification channel with the source set to "telemetry". This tells the agent to forward telemetry requests, like logs or metrics, to your own handler.
Here’s a minimal example:
{
  "source": ["telemetry"]
}
This connects platform telemetry operations to a local script, binary, or service—just like any other agent-triggered workflow.
The agent will automatically route the following telemetry actions to your handler:
- log:read
- metric:list
- metric:data
- instance:list
👉 You can find a full example and setup instructions in the agent channel docs.
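How the agent hands the request to your handler depends on how you set up the channel. The sketch below assumes a command that reads the request as JSON from stdin, with the action name and its snake_case arguments as top-level fields (an assumption of this example, not a documented payload shape). It simply routes each supported action to a function that returns the structures described in the next section:
#!/usr/bin/env python3
# Illustrative dispatcher: routes the telemetry actions listed above to handlers.
# The payload shape (an "action" name plus snake_case "arguments" read from stdin)
# is an assumption of this sketch; adapt it to what your channel actually delivers.
import json
import sys

def handle_log_read(args):      return {"results": []}   # see the log:read schema below
def handle_metric_list(args):   return {"results": []}   # see the metric:list example below
def handle_metric_data(args):   return {"results": []}   # see the metric:data schema below
def handle_instance_list(args): return {"results": []}   # see the instance:list schema below

HANDLERS = {
    "log:read": handle_log_read,
    "metric:list": handle_metric_list,
    "metric:data": handle_metric_data,
    "instance:list": handle_instance_list,
}

def main():
    request = json.load(sys.stdin)
    action = request.get("action")
    handler = HANDLERS.get(action)
    if handler is None:
        raise SystemExit(f"unsupported telemetry action: {action}")
    json.dump(handler(request.get("arguments", {})), sys.stdout)

if __name__ == "__main__":
    main()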
What your system should return
Each telemetry action expects a structured JSON response. Below are the supported operations, expected schemas, and example outputs.
Log read operation (log:read)
When this action is requested, your service should return log entries matching the provided filters.
- Action: log:read
Response schema your service must return:
{
  "type": "object",
  "required": ["results"],
  "properties": {
    "results": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["message", "datetime"],
        "properties": {
          "message": {
            "type": "string",
            "description": "The log message content"
          },
          "datetime": {
            "type": "string",
            "format": "date-time",
            "description": "ISO 8601 formatted timestamp of when the log was generated"
          }
        }
      }
    },
    "next_page_token": {
      "type": "string",
      "description": "Token for pagination to retrieve the next set of results (optional)"
    }
  }
}
Filters your service will receive (all optional):
Filter name | Type | Description |
---|---|---|
scope_id | integer | Filter by scope ID |
deploy_id | string | Filter by deployment ID |
instance_id | string | Filter by instance ID |
container_id | string | Filter by container ID |
limit | integer | Maximum number of results |
start_time | timestamp/ISO8601 | Start time for log search |
end_time | timestamp/ISO8601 | End time for log search |
log_type | string | Type of log |
filter_pattern | string | Search pattern for log content |
application_id | integer | Application identifier |
next_page_token | string | Token for pagination continuation |
Your service may receive different subsets of these filters depending on the log type being requested.
Example response your service should return:
{
  "results": [
    {
      "message": "Application started successfully",
      "datetime": "2024-01-15T10:30:45.123Z"
    },
    {
      "message": "Processing request from user 12345",
      "datetime": "2024-01-15T10:30:46.456Z"
    }
  ],
  "next_page_token": "eyJzdGFydCI6MTY4OTIzNzQwMTAwMH0="
}
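As a rough illustration (not a required implementation), a handler for log:read could apply the documented filters to whatever log store you use and shape the entries to the schema above. The log_store iterable and its entry layout in this sketch are hypothetical:
from datetime import datetime

def _parse_time(value):
    # start_time / end_time may arrive as ISO 8601 strings; adjust if your agent
    # sends epoch timestamps instead.
    return datetime.fromisoformat(value.replace("Z", "+00:00")) if value else None

def handle_log_read(args, log_store):
    # log_store is a hypothetical iterable of {"message": str, "datetime": datetime} entries.
    start = _parse_time(args.get("start_time"))
    end = _parse_time(args.get("end_time"))
    pattern = args.get("filter_pattern")
    limit = args.get("limit") or 100
    results = []
    for entry in log_store:
        if start and entry["datetime"] < start:
            continue
        if end and entry["datetime"] > end:
            continue
        if pattern and pattern not in entry["message"]:
            continue
        results.append({
            "message": entry["message"],
            "datetime": entry["datetime"].isoformat().replace("+00:00", "Z"),
        })
        if len(results) >= limit:
            break
    # next_page_token is optional; include it only when more results are available.
    return {"results": results}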
Metric list operation (metric:list)
Return available metrics and the dimensions your system supports for filtering or grouping.
- Action: metric:list
Example response your service should return:
{
  "results": [
    {
      "name": "cpu_utilization",
      "title": "CPU Utilization",
      "unit": "percent",
      "available_filters": ["instance_id", "scope_id"],
      "available_group_by": ["instance_id"]
    }
  ]
}
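A minimal way to serve metric:list (purely illustrative) is a static catalog of the metrics and the dimensions your backend supports for filtering or grouping:
# Hypothetical catalog; list whichever metrics your backend can actually serve.
METRIC_CATALOG = [
    {
        "name": "cpu_utilization",
        "title": "CPU Utilization",
        "unit": "percent",
        "available_filters": ["instance_id", "scope_id"],
        "available_group_by": ["instance_id"],
    },
]

def handle_metric_list(args):
    return {"results": METRIC_CATALOG}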
Metric data operation (metric:data)
When this action is requested, your service should return time-series data for the specified metric.
- Action: metric:data
Response schema your service must return:
{
  "type": "object",
  "required": ["results"],
  "properties": {
    "metric": {
      "type": "string",
      "description": "Name of the metric"
    },
    "type": {
      "type": "string",
      "description": "Type of metric aggregation"
    },
    "period_in_seconds": {
      "type": "number",
      "description": "Resolution of data points in seconds"
    },
    "unit": {
      "type": "string",
      "description": "Unit of measurement"
    },
    "results": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["selector", "data"],
        "properties": {
          "selector": {
            "type": "object",
            "description": "Dimensions identifying this data series"
          },
          "data": {
            "type": "array",
            "items": {
              "type": "object",
              "required": ["timestamp", "value"],
              "properties": {
                "timestamp": {
                  "type": "string",
                  "description": "ISO 8601 timestamp"
                },
                "value": {
                  "type": "number",
                  "description": "Metric value at this timestamp"
                }
              }
            }
          }
        }
      }
    }
  }
}
Filters your service will receive (all optional):
Filter name | Type | Description |
---|---|---|
metric | string | Name of the metric to retrieve |
start_time | timestamp/ISO8601 | Start time for data |
end_time | timestamp/ISO8601 | End time for data |
filters | object | Key-value pairs for filtering (e.g., {"instance_id": "i-123"} ) |
period | integer | Data resolution in seconds |
group_by | array | List of dimensions to group by |
scope_id | integer | Filter by scope |
application_id | integer | Application identifier |
Example response your service should return:
{
  "metric": "cpu_utilization",
  "type": "Average",
  "period_in_seconds": 300,
  "unit": "percent",
  "results": [
    {
      "selector": {
        "instance_id": "i-1234567890",
        "instance_type": "t3.medium"
      },
      "data": [
        {
          "timestamp": "2025-01-15T10:00:00.000Z",
          "value": 45.2
        },
        {
          "timestamp": "2025-01-15T10:05:00.000Z",
          "value": 48.7
        }
      ]
    }
  ]
}
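A metric:data handler typically translates the documented filters into a query against your metrics store and reshapes the rows into the series format above. In the sketch below, query_datapoints is a placeholder for whatever backend call you actually make, and the hard-coded type and unit values are illustrative:
def handle_metric_data(args, query_datapoints):
    # query_datapoints is a hypothetical callable against your metrics store that yields
    # (selector, [(iso_timestamp, value), ...]) pairs for the requested metric.
    metric = args.get("metric")
    period = args.get("period") or 300
    series = []
    for selector, points in query_datapoints(
        metric=metric,
        start_time=args.get("start_time"),
        end_time=args.get("end_time"),
        filters=args.get("filters") or {},
        group_by=args.get("group_by") or [],
        period=period,
    ):
        series.append({
            "selector": selector,
            "data": [{"timestamp": ts, "value": value} for ts, value in points],
        })
    return {
        "metric": metric,
        "type": "Average",            # whatever aggregation your backend applied
        "period_in_seconds": period,
        "unit": "percent",            # unit of the requested metric, per your catalog
        "results": series,
    }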
Custom instances operation (instance:list)
When this action is requested, your service should return a list of available instances.
- Action: instance:list
Response schema your service must return:
{
  "type": "object",
  "required": ["results"],
  "properties": {
    "results": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["selector", "state", "launch_time"],
        "properties": {
          "id": {
            "type": "string",
            "description": "Instance identifier (optional)"
          },
          "selector": {
            "type": "object",
            "description": "Instance selection criteria"
          },
          "details": {
            "type": "object",
            "description": "Additional instance details (optional)"
          },
          "state": {
            "type": "string",
            "description": "Current instance state"
          },
          "launch_time": {
            "type": "string",
            "description": "When the instance was launched"
          },
          "spot": {
            "type": "boolean",
            "description": "Whether this is a spot instance (optional)"
          }
        }
      }
    }
  }
}
Filters your service will receive (all optional):
Filter name | Type | Description |
---|---|---|
instance_id | string | Filter by instance ID |
instance_type | string | Filter by instance type |
status | string | Filter by instance status |
scope_id | integer | Filter by scope |
application_id | integer | Application identifier |
cluster | string | Filter by cluster |
limit | integer | Maximum number of results |
Example response your service should return:
{
  "results": [
    {
      "id": "i-1234567890",
      "selector": {
        "instance_id": "i-1234567890",
        "region": "us-east-1"
      },
      "details": {
        "instance_type": "t3.medium",
        "availability_zone": "us-east-1a"
      },
      "state": "running",
      "launch_time": "2025-01-15T09:00:00.000Z",
      "spot": false
    }
  ]
}
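For instance:list, the handler mainly maps whatever your infrastructure reports into the schema above. The describe_instances helper and the raw record fields in this sketch are hypothetical:
def handle_instance_list(args, describe_instances):
    # describe_instances is a hypothetical callable returning raw instance records
    # from your infrastructure provider.
    wanted_status = args.get("status")
    limit = args.get("limit") or 100
    results = []
    for raw in describe_instances(cluster=args.get("cluster")):
        if wanted_status and raw["state"] != wanted_status:
            continue
        results.append({
            "id": raw["id"],
            "selector": {"instance_id": raw["id"], "region": raw["region"]},
            "details": {"instance_type": raw["type"]},
            "state": raw["state"],
            "launch_time": raw["launch_time"],
            "spot": raw.get("spot", False),
        })
        if len(results) >= limit:
            break
    return {"results": results}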
Integration details
When a user views telemetry in the UI or a workflow requests telemetry data:
- The platform sends a request to the agent
- The agent forwards it to your handler (via the telemetry channel)
- Your handler responds using the required schema
- The platform displays the results in real time
Each request includes a context object with:
- arguments: Filter parameters (in snake_case)
- output_schema: The JSON schema your response must follow
- Other context, e.g. scope type and provider details
Filters are automatically converted from camelCase to snake_case. For example:
From | To |
---|---|
deployId | deploy_id |
startTime | start_time |
Your service must return a response that matches the output_schema. If it doesn’t, the platform will return a validation error.
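Since the platform validates your response against the output_schema it sends, it can help to run the same check locally before returning. Here is a minimal sketch using the jsonschema package (the package choice and the respond wrapper are assumptions of this example; any JSON Schema validator works):
from jsonschema import ValidationError, validate

def respond(context, build_response):
    # build_response is your handler for the requested action; context comes from the request.
    response = build_response(context["arguments"])
    try:
        validate(instance=response, schema=context["output_schema"])
    except ValidationError as err:
        # Catch schema mismatches in your own logs before the platform rejects the response.
        raise SystemExit(f"telemetry response does not match output_schema: {err.message}")
    return response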
What’s next
You now have full control over telemetry flows. From here, you can:
- Explore custom scopes to use this data during task execution.