Scope management
Creating a scope
On the sidebar of your application's dashboard, choose the Scopes section and click on New scope.
Target
Whenever you create a scope, you have to declare whether it will be used for server-based or serverless assets. Once the scope is created, you cannot change its target.
Name and memory
Every scope requires a name, from which a slug is derived. This slug is combined with the namespace and application slugs to generate the URL where the scope will be accessible (the exact shape of this URL varies between server-based and serverless scopes).
Public vs. private scopes
Nullplatform supports both public and private applications, that is, applications that are open to the outside world on the internet (public) or applications that can only be reached by other applications inside your account (private).
The rule of thumb is that you most likely want to make your applications private unless you need to receive direct traffic from the internet.
Advanced configuration
Scopes can be extensively customized; here are some examples of what can be configured.
Any scope:
- Continuous Delivery
- Logs
- Metrics
- Storage
Serverless only:
- Language & Runtime
- Function Handler
- Function Timeout
Server-based only:
- Processor
- Instance Size & Scaling Policy
- Health Check
- Spot Instances
Please refer to the Capabilities section for more details on advanced scope configuration.
Exposing additional ports on your instance
This section applies to instance-based scopes only.
In certain scenarios, it may be necessary to expose additional ports from a container to the host. This is often the case for consensus algorithms or similar use cases. To expose extra ports, you must create an environment variable parameter delivered to your container as `NP_APP_CONTAINER_EXTRA_PORTS`.
Format. The parameter value must be a comma-separated list in the format `HOST_PORT:CONTAINER_PORT`, for example: `HOST_PORT_1:CONTAINER_PORT_1,HOST_PORT_N:CONTAINER_PORT_N`.
Example. If you need to expose port 2022 on the host to port 8080 in the container, and port 9090 on the host to port 90 in the container, your configuration will be:
`NP_APP_CONTAINER_EXTRA_PORTS=2022:8080,9090:90`
Ports 8080, 80 and 2020 are reserved for nullplatform use.
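To make the format concrete, here is an illustration-only sketch of how a `HOST_PORT:CONTAINER_PORT` list like the one above maps onto Docker-style `-p` flags. The variable name comes from this doc; the `docker run` invocation is a hypothetical example, not nullplatform's actual runtime behavior.

```shell
#!/bin/sh
# Each comma-separated entry is already in docker's HOST:CONTAINER shape.
NP_APP_CONTAINER_EXTRA_PORTS="2022:8080,9090:90"

flags=""
for mapping in $(printf '%s' "$NP_APP_CONTAINER_EXTRA_PORTS" | tr ',' ' '); do
  flags="$flags -p $mapping"   # accumulate one -p flag per mapping
done

echo "docker run$flags my-image"
# -> docker run -p 2022:8080 -p 9090:90 my-image
```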
AWS: scaling beyond 100 scopes and deployments
Why you should care about the number of scopes and deployments
AWS imposes a hard limit of 100 deployments per balancer (specifically, the AWS `ELBListener` component used by the Application Load Balancer has a hard limit of 100 rules), so you have to add new application load balancers as you increase the number of scopes. Each scope normally has one active deployment, plus another one for each active blue-green deployment.
The scopes that DO count towards this limit are:
- Deployed scopes. These are instance-based, K8S or private lambda scopes that are currently deployed / running.
- Blue-Green deployments for instances or K8S. A blue-green deployment creates a rule to balance traffic, therefore consumes this quota.
The scopes that DO NOT count towards this limit are:
- Public lambdas.
- Created but not deployed scopes.
Here's an example:
| Created but not deployed scopes | Public lambdas | Deployed scopes | Concurrent blue-green deployments | Total rules on ALB (100 is the hard limit) |
|---|---|---|---|---|
| 10 | 15 | 40 | 5 | 45 (there's room for 40 more concurrent blue-green deployments) |
| 10 | 15 | 80 | 20 ⚠️ | 100 ⚠️ There's no room for a blue-green deployment on instances / K8S |
| 10 | 20 | 100 | 0 ⚠️ | 100 ⚠️ There's no room for a blue-green deployment on instances / K8S |
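The quota arithmetic in the first table row can be sketched as follows; only deployed scopes and concurrent blue-green deployments consume listener rules, while created-but-not-deployed scopes and public lambdas do not.

```shell
#!/bin/sh
created_not_deployed=10  # does not count toward the limit
public_lambdas=15        # does not count toward the limit
deployed=40              # one rule per deployed scope
blue_green=5             # one extra rule per concurrent blue-green deployment
limit=100                # ALB listener-rule hard limit

rules=$((deployed + blue_green))
headroom=$((limit - rules))
echo "rules=$rules headroom=$headroom"
# -> rules=45 headroom=55
```

If you want to check the actual rule count on a listener, `aws elbv2 describe-rules --listener-arn <arn>` returns the rules currently consuming this quota.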
How we prevent you reaching this limit
Nullplatform does two things to help you manage this restriction:
- Notification when reaching 80% of the quota. Refer to the next section to set up your notification channel.
- Scope creation / deployment will fail when 99% of the quota is reached. We will prevent scope / deployment creation, leaving a minimum room to allow for at least a single concurrent blue-green deployment.
Setting the thresholds. The allowed quota before alerting or preventing new scope / deployments can be set through a runtime configuration using the `listenerRuleAlertThreshold` key in the `aws` namespace.
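The section names the key but not the payload shape; as a hedged sketch (the surrounding structure is an assumption, only the `aws.listenerRuleAlertThreshold` key is from this doc), a runtime-configuration value raising the alert threshold to 85% might look like:

```json
{
  "aws": {
    "listenerRuleAlertThreshold": 85
  }
}
```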
Get notified when your ALBs are reaching capacity
You can receive Slack notifications when one of your ALBs is reaching its capacity by creating a channel with `source = 'alert'`.
Please refer to our notifications section for instructions to set up the notifications.
Adding a new balancer
These are the steps to add a new balancer:
1. Create a new ALB and corresponding `ELBListener` in your AWS account.
2. Create a runtime configuration establishing the `aws.privateListenerARN` and `aws.privateAlbARN` for private scopes, or `aws.publicAlbARN` and `aws.publicListenerARN` for public scopes. Make sure to apply this configuration consistently across all your accounts.
3. You're done: existing scopes will remain associated with the original ALB, while new scopes will be created under the new ALB.
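Step 1 can be sketched with the AWS CLI (`elbv2 create-load-balancer` and `elbv2 create-listener` are real subcommands; all names, subnets, and ARNs below are placeholders you must replace, and the internal scheme / fixed-response default shown here are assumptions):

```shell
# Create the new ALB (internal scheme shown, as for private scopes).
aws elbv2 create-load-balancer \
  --name scopes-alb-2 \
  --type application \
  --scheme internal \
  --subnets subnet-aaaa1111 subnet-bbbb2222

# Create its listener, using the LoadBalancerArn returned above.
aws elbv2 create-listener \
  --load-balancer-arn <load-balancer-arn> \
  --protocol HTTPS \
  --port 443 \
  --certificates CertificateArn=<acm-certificate-arn> \
  --default-actions Type=fixed-response,FixedResponseConfig='{StatusCode=404}'
```

The `LoadBalancerArn` and `ListenerArn` in these responses are the values to set in `aws.privateAlbARN` / `aws.privateListenerARN` (or their public equivalents) in step 2.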
Mind how you assign scopes to balancers. While creating a new balancer adds capacity for another 100 deployments, any scope that is re-created will be migrated to the new balancer, crowding the new one while the old one empties out. To mitigate this, you might want to assign balancers using dimensions (e.g., by environment) or by namespaces, so each balancer handles something that can be further (and sustainably) partitioned.
Scopes with assets on other AWS accounts
When you maintain a centralized Amazon Elastic Container Registry (ECR) repository in one AWS account and operate your clusters in different AWS accounts, you need to grant the necessary permissions to the instances that will be retrieving the assets by following these steps:
1. Configure nullplatform to create the required AWS policy on application creation.
Create a runtime configuration at the Organization, Account, or Namespace level setting the `ecrPolicy` key in the `aws` namespace (from now on we'll use the dot notation: `aws.ecrPolicy`). The content of `aws.ecrPolicy` has to be similar to the following:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPull",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:DescribeImages",
        "ecr:DescribeRepositories",
        "ecr:GetDownloadUrlForLayer"
      ],
      "Condition": {
        "ForAnyValue:StringLike": {
          "aws:PrincipalOrgPaths": "o-xxxxxxxxxx/*/ou-xxxx-xxxxxxxx/*"
        }
      }
    }
  ]
}
```
Where you have to replace `o-xxxxxxxxxx/*/ou-xxxx-xxxxxxxx/*` with your AWS Organization's path to restrict access to specific organizational units. Note that `"Principal": "*"` allows any principal within the specified condition to perform the actions.
2. Attach the policy to the application workflow role
In your IAM configuration, ensure that the `null-application-manager` role has been granted the `ecr:SetRepositoryPolicy` action (this allows the policy to be attached automatically when the ECR registry is created).
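For illustration, a minimal IAM statement granting this action could look like the following (the resource ARN is a hypothetical placeholder; scope it to your actual registry and region):

```json
{
  "Effect": "Allow",
  "Action": "ecr:SetRepositoryPolicy",
  "Resource": "arn:aws:ecr:us-east-1:111111111111:repository/*"
}
```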
Since the repository policy is applied at the time of registry creation, it is recommended to define the policy at the organizational unit (OU) level. This approach ensures that new accounts added to the OU inherit the necessary permissions automatically, eliminating the need for manual updates to repository policies.
For more details on sharing Amazon ECR repositories across multiple AWS accounts using AWS Organizations, refer to the AWS Containers Blog.
This policy is enforced when an application is created and does not affect existing applications that have been previously created or imported.