Starting today, you can deploy applications that use IPv6 address space on Amazon Elastic Kubernetes Service (EKS).
Many of our customers are standardizing on Kubernetes as their compute infrastructure platform for cloud and on-premises applications. Amazon EKS makes it easy to deploy containerized workloads. It provides highly available clusters and automates tasks such as patching, node provisioning, and updates.
Kubernetes uses a flat networking model that requires each pod to receive an IP address. This simplified approach enables low-friction porting of applications from virtual machines to containers, but it requires a significant number of IP addresses that many private VPC IPv4 networks are not equipped to handle. Some cluster administrators work around this IPv4 space limitation by installing container network interface (CNI) plugins that virtualize IP addresses a layer above the VPC, but this architecture limits an administrator’s ability to effectively observe and troubleshoot applications and has a negative impact on network performance at scale. Further, to communicate with internet services outside the VPC, traffic from IPv4 pods is routed through multiple network hops before reaching its destination, which adds latency and puts a strain on network engineering teams who need to maintain complex routing setups.
To avoid IP address exhaustion, minimize latency at scale, and simplify routing configuration, the solution is to use IPv6 address space.
IPv6 is not new. In 1996, I bought my first book on “IPng, Internet Protocol Next Generation”, as it was called 25 years ago. It provides a 128-bit address space, allowing 3.4 x 10^38 possible IP addresses for our devices, servers, or containers. We could assign an IPv6 address to every atom on the surface of the planet and still have enough addresses left to do another 100-plus Earths.
There are a few advantages to using Amazon EKS clusters with an IPv6 network. First, you can run more pods on a single host or subnet without the risk of exhausting the IPv4 addresses available in your VPC. Second, it allows for lower-latency communications with other IPv6 services, running on-premises, on AWS, or on the internet, by avoiding an extra NAT hop. Third, it relieves network engineers of the burden of maintaining complex routing configurations.
Kubernetes cluster administrators can focus on migrating and scaling applications without spending effort working around IPv4 limits. Finally, pod networking is configured so that the pods can communicate with IPv4-based applications outside the cluster, allowing you to adopt the benefits of IPv6 on Amazon EKS without requiring that all dependent services deployed across your organization are first migrated to IPv6.
As usual, I built a short demo to show you how it works.
How It Works
Before I get started, I create an IPv6 VPC. I use this CDK script to create an IPv6-enabled VPC in a few minutes (thank you Angus Lees for the code). Just install CDK v2 (npm install -g aws-cdk@next) and deploy the stack (cdk bootstrap && cdk deploy).
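If you prefer to check the result from the command line, you can confirm that the VPC received an Amazon-provided IPv6 CIDR block. This is just a quick sanity check, and the VPC ID below is a placeholder for the one created by the CDK script:
aws ec2 describe-vpcs --vpc-ids vpc-0123456789abcdef0 \
    --query "Vpcs[].Ipv6CidrBlockAssociationSet[].Ipv6CidrBlock" \
    --output text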
When the VPC with IPv6 is created, I use the console to configure auto-assignment of IPv6 addresses to resources deployed in the public subnets (I do this for each public subnet).
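This console step can also be scripted. Here is the CLI equivalent of that setting, using a placeholder subnet ID (repeat the command for each public subnet):
aws ec2 modify-subnet-attribute \
    --subnet-id subnet-0123456789abcdef0 \
    --assign-ipv6-address-on-creation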
I take note of the subnet IDs created by the CDK script above (they are listed in the output of the script) and define a couple of variables I’ll use throughout the demo. I also create a cluster IAM role and a node IAM role, as described in the Amazon EKS documentation. If you already have clusters deployed, these two roles already exist.
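In case you do not have these roles yet, here is a minimal sketch for the cluster role, assuming the role name EKSClusterRole used later in this demo (the node role follows the same pattern, with the ec2.amazonaws.com service principal and the node policies listed in the EKS documentation):
aws iam create-role --role-name EKSClusterRole \
    --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": "eks.amazonaws.com" },
        "Action": "sts:AssumeRole"
      }]
    }'
aws iam attach-role-policy --role-name EKSClusterRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy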
I open a Terminal and type:
CLUSTER_ROLE_ARN="arn:aws:iam::0123456789:role/EKSClusterRole"
NODE_ROLE_ARN="arn:aws:iam::0123456789:role/EKSNodeRole"
SUBNET1="subnet-06000a8"
SUBNET2="subnet-03000cc"
CLUSTER_NAME="AWSNewsBlog"
KEYPAIR_NAME="my-key-pair-name"
Next, I create an Amazon EKS IPv6 cluster. In a terminal, I type:
aws eks create-cluster --cli-input-json "{
\"name\": \"${CLUSTER_NAME}\",
\"version\": \"1.21\",
\"roleArn\": \"${CLUSTER_ROLE_ARN}\",
\"resourcesVpcConfig\": {
\"subnetIds\": [
\"${SUBNET1}\", \"${SUBNET2}\"
],
\"endpointPublicAccess\": true,
\"endpointPrivateAccess\": true
},
\"kubernetesNetworkConfig\": {
\"ipFamily\": \"ipv6\"
}
}"
{
"cluster": {
"name": "AWSNewsBlog",
"arn": "arn:aws:eks:us-west-2:486652066693:cluster/AWSNewsBlog",
"createdAt": "2021-11-02T17:29:32.989000+01:00",
"version": "1.21",
...redacted for brevity...
"status": "CREATING",
"certificateAuthority": {},
"platformVersion": "eks.4",
"tags": {}
}
}
I use the describe-cluster command while waiting for the cluster to be created. When the cluster is ready, it has "status": "ACTIVE":
aws eks describe-cluster --name "${CLUSTER_NAME}"
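Alternatively, I can let the CLI poll for me with its built-in waiter, which returns once the cluster reaches the ACTIVE status:
aws eks wait cluster-active --name "${CLUSTER_NAME}"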
Then I create a node group:
aws eks create-nodegroup \
--cluster-name ${CLUSTER_NAME} \
--nodegroup-name AWSNewsBlog-nodegroup \
--node-role ${NODE_ROLE_ARN} \
--subnets "${SUBNET1}" "${SUBNET2}" \
--remote-access ec2SshKey=${KEYPAIR_NAME}
{
"nodegroup": {
"nodegroupName": "AWSNewsBlog-nodegroup",
"nodegroupArn": "arn:aws:eks:us-west-2:0123456789:nodegroup/AWSNewsBlog/AWSNewsBlog-nodegroup/3ebe70c7-6c45-d498-6d42-4001f70e7833",
"clusterName": "AWSNewsBlog",
"version": "1.21",
"releaseVersion": "1.21.4-20211101",
"status": "CREATING",
"capacityType": "ON_DEMAND",
... redacted for brevity ...
}
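As with the cluster, I can either poll with describe-nodegroup or use the CLI waiter that returns once the node group is active:
aws eks wait nodegroup-active \
    --cluster-name ${CLUSTER_NAME} \
    --nodegroup-name AWSNewsBlog-nodegroup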
Once the node group is created, I see two EC2 instances in the console. I use the AWS Command Line Interface (CLI) to verify that the instances received an IPv6 address:
aws ec2 describe-instances --query "Reservations[].Instances[? State.Name == 'running' ][].NetworkInterfaces[].Ipv6Addresses" --output text
2600:1f13:812:0000:0000:0000:0000:71eb
2600:1f13:812:0000:0000:0000:0000:3c07
I use the kubectl command to verify the cluster from a Kubernetes point of view.
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-0-108.us-west-2.compute.internal Ready <none> 2d13h v1.21.4-eks-033ce7e 2600:1f13:812:0000:0000:0000:0000:2263 18.0.0.205 Amazon Linux 2 5.4.149-73.259.amzn2.x86_64 docker://20.10.7
ip-10-0-1-217.us-west-2.compute.internal Ready <none> 2d13h v1.21.4-eks-033ce7e 2600:1f13:812:0000:0000:0000:0000:7f3e 52.0.0.122 Amazon Linux 2 5.4.149-73.259.amzn2.x86_64 docker://20.10.7
Then I deploy a Pod. I follow these steps in the EKS documentation. It deploys a sample nginx web server.
kubectl create namespace aws-news-blog
namespace/aws-news-blog created
# sample-service.yml is available at https://docs.aws.amazon.com/eks/latest/userguide/sample-deployment.html
kubectl apply -f sample-service.yml
service/my-service created
deployment.apps/my-deployment created
kubectl get pods -n aws-news-blog -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-deployment-5dd5dfd6b9-7rllg 1/1 Running 0 17m 2600:0000:0000:0000:405b::2 ip-10-0-1-217.us-west-2.compute.internal <none> <none>
my-deployment-5dd5dfd6b9-h6mrt 1/1 Running 0 17m 2600:0000:0000:0000:46f9:: ip-10-0-0-108.us-west-2.compute.internal <none> <none>
my-deployment-5dd5dfd6b9-mrkfv 1/1 Running 0 17m 2600:0000:0000:0000:46f9::1 ip-10-0-0-108.us-west-2.compute.internal <none> <none>
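Out of curiosity, I also look at the Service created by the manifest; because the cluster’s ipFamily is ipv6, its cluster IP should be an IPv6 address (the exact value will differ in your cluster):
kubectl get services -n aws-news-blog -o wide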
I take note of the IPv6 address of my pods and try to connect to it from my laptop. As my awesome service provider doesn’t provide me with an IPv6 address at home yet, the connection fails. This is expected, as the pods do not have an IPv4 address at all. Notice the -g option telling curl not to consider : in the IP address as the separator for the port number, and -6 to tell curl to connect through IPv6 only (required when you provide curl with a DNS hostname).
curl -g -6 http://\[2600:0000:0000:0000:46f9::1\]
curl: (7) Couldn't connect to server
To test IPv6 connectivity, I start a dual-stack (IPv4 and IPv6) EC2 instance in the same VPC as the cluster. I connect to the instance with SSH and try the curl command again. I receive the default HTML page served by nginx. IPv6 connectivity to the pod works!
curl -g -6 http://\[2600:0000:0000:0000:46f9::1\]
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
... redacted for brevity ...
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
If it does not work for you, verify the three parameters that enable internet access for a subnet: does your VPC have an Internet Gateway? Does the routing table attached to the subnet have a default route to the Internet Gateway? Does the security group for the cluster EC2 nodes have a rule allowing incoming connections on TCP port 80 from ::/0? The Internet Gateway and the routing table are automatically configured by the CDK script I provided as part of this demo.
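If you prefer to verify these three points from the command line rather than in the console, here is a quick sketch; the VPC, subnet, and security group IDs are placeholders for your own values:
# Is an Internet Gateway attached to the VPC?
aws ec2 describe-internet-gateways \
    --filters "Name=attachment.vpc-id,Values=vpc-0123456789abcdef0"
# Does the subnet's route table have a default IPv6 route (::/0)?
aws ec2 describe-route-tables \
    --filters "Name=association.subnet-id,Values=subnet-0123456789abcdef0" \
    --query "RouteTables[].Routes[?DestinationIpv6CidrBlock=='::/0']"
# Allow inbound TCP 80 over IPv6 on the nodes' security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --ip-permissions 'IpProtocol=tcp,FromPort=80,ToPort=80,Ipv6Ranges=[{CidrIpv6=::/0}]'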
A Few Things to Remember
Before I wrap up, I’d like to answer some frequent questions received from customers who have already experimented with this new capability:
- IPv6 is enabled by the same VPC CNI Kubernetes plugin as the one you are using for IPv4 today. The plugin is automatically configured for IPv4 or IPv6, depending on the pod networking choice you make when you create your cluster.
- You can also install the VPC CNI plugin configured for IPv6 in your self-managed clusters. However, as you manage the cluster, it is your responsibility to configure the control plane to support IPv6.
- IPv6 support is turned on only at cluster creation time. As of today, you cannot migrate a cluster from IPv4 to IPv6. If you have an existing cluster you want to migrate, you may have to redeploy the workload to a new IPv6-based cluster and progressively shift traffic from the IPv4 cluster to the IPv6 cluster, as described in this technical note (you can check a cluster's IP family with the command shown after this list).
- You can bring your own IPv6 range. Follow these instructions to bring your own IP address range on EC2.
- You can deploy Linux nodes in your IPv6 cluster. Windows nodes are not supported at the moment.
- Be sure to install the latest version of the AWS Load Balancer Controller in your cluster to route external traffic to IPv6 pods using Application Load Balancers and/or Network Load Balancers.
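As mentioned above, the IP family is fixed at cluster creation time. To check which family an existing cluster uses, query its network configuration; the ipFamily field in the output is either ipv4 or ipv6:
aws eks describe-cluster --name "${CLUSTER_NAME}" \
    --query "cluster.kubernetesNetworkConfig"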
Pricing and Availability
IPv6 support for your Amazon Elastic Kubernetes Service (EKS) cluster is available today in all AWS Regions where Amazon EKS is available, at no additional cost.