Migrating your EKS Cluster from Cluster Autoscaler to EKS Auto Mode

Adefemi Afuwpe
Jan 26, 2025


Amazon released a new feature for EKS called EKS Auto Mode, and I have been playing around with it. I will be documenting my learnings, and below is what we will look at in this article:

  • What is Cluster Autoscaler?
  • What is EKS Auto Mode?
  • Migrating from Cluster Autoscaler to EKS Auto Mode
  • Creating a custom NodePool and NodeClass for EKS Auto Mode

What is Cluster Autoscaler?

In short, Cluster Autoscaler automatically adjusts the number of nodes in a cluster (in our case, EKS) up and down (horizontal scaling) based on scheduling demands: it adds nodes when pods fail to schedule and removes nodes that sit underutilized. This tool can be installed on your cluster via Helm; a sample installation (using Terraform) can be found here.
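For reference, a minimal Helm installation outside Terraform looks roughly like this; the cluster name, region, and IRSA role ARN are placeholders you need to replace:

```bash
# Add the official autoscaler chart repository
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update

# Install Cluster Autoscaler; auto-discovery finds the node groups
# tagged for this cluster
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --set autoDiscovery.clusterName=<your-cluster-name> \
  --set awsRegion=<your-region> \
  --set-string 'rbac.serviceAccount.annotations.eks\.amazonaws\.com/role-arn'=<your-irsa-role-arn>
```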

EKS Auto Mode

EKS Auto Mode is a new feature that fully automates the management of compute, storage, and networking for a Kubernetes cluster. It simplifies cluster operations by offloading infrastructure management to AWS while maintaining Kubernetes conformance. It builds in several AWS capabilities as core components that would otherwise have to be managed as add-ons, including support for Pod IP address assignment, Pod network policies, local DNS services, GPU plug-ins, health checkers, EBS CSI storage, and Karpenter to manage compute.

EKS Auto Mode manages your cluster infrastructure by automatically selecting optimal EC2 instances for your workloads, dynamically scaling nodes based on application demand, managing the node lifecycle and updates, handling OS patches and security updates, integrating core add-ons and AWS services, and providing ephemeral compute for enhanced security. In turn, it reduces operational overhead and manual node management.

Migrating from Cluster Autoscaler to EKS Auto Mode

Below are the prerequisites for this migration:

  • An EKS cluster running Kubernetes version 1.29 or later
  • A supported Region: EKS Auto Mode is available in all AWS Regions except AWS GovCloud (US) and the China Regions

I have created a cluster using Terraform that installs the required add-ons (Cluster Autoscaler, the EBS CSI driver, the AWS Load Balancer Controller, and the VPC CNI), which are available as a module in this repository.

The cluster was created with the aws-auth ConfigMap as its authentication mode for authentication and authorization.

EKS Auto Mode requires the EKS API and ConfigMap authentication mode for authorization, and based on our setup we will also need to attach more required policies to the cluster role, as shown in the screenshot below.
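If you prefer the CLI to the console, the same changes look roughly like this; replace <your-cluster-name> and <cluster-role-name> with your values. The four extra managed policies are the ones Auto Mode requires on top of AmazonEKSClusterPolicy:

```bash
# Switch the authentication mode to EKS API and ConfigMap
aws eks update-cluster-config --name <your-cluster-name> \
  --access-config authenticationMode=API_AND_CONFIG_MAP

# Attach the additional managed policies Auto Mode needs on the cluster role
for POLICY in AmazonEKSComputePolicy AmazonEKSBlockStoragePolicy \
              AmazonEKSLoadBalancingPolicy AmazonEKSNetworkingPolicy; do
  aws iam attach-role-policy \
    --role-name <cluster-role-name> \
    --policy-arn "arn:aws:iam::aws:policy/${POLICY}"
done
```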

The last step is to add sts:TagSession to the trust policy of the cluster role. This allows AWS Security Token Service (STS) to tag sessions, which is crucial for certain EKS operations. The new trust policy should look like the below (this matches what AWS documents for Auto Mode):
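```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
```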

Once this is updated and saved, you can go back to the EKS console and enable Auto Mode.

The above screenshot shows EKS Auto Mode being enabled along with its built-in node pools. Lastly, I created the default node role, which requires just two permission policies, as illustrated in the screenshot below.
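For CLI users, here is roughly the same flow. The role name AmazonEKSAutoNodeRole is just a placeholder I chose, and the update-cluster-config call reflects the fact that Auto Mode's compute, block storage, and load balancing settings are enabled together:

```bash
# Trust policy letting EC2 instances assume the node role
cat > node-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name AmazonEKSAutoNodeRole \
  --assume-role-policy-document file://node-trust-policy.json

# The two permission policies the node role needs
aws iam attach-role-policy --role-name AmazonEKSAutoNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodeMinimalPolicy
aws iam attach-role-policy --role-name AmazonEKSAutoNodeRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly

# Enable Auto Mode with the built-in node pools
aws eks update-cluster-config --name <your-cluster-name> \
  --compute-config '{"enabled":true,"nodeRoleArn":"arn:aws:iam::<account-id>:role/AmazonEKSAutoNodeRole","nodePools":["general-purpose","system"]}' \
  --kubernetes-network-config '{"elasticLoadBalancing":{"enabled":true}}' \
  --storage-config '{"blockStorage":{"enabled":true}}'
```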

Once this is done, you can delete the default node group we created from the Terraform configuration. After it is completely deleted along with its nodes, new nodes will be created with their compute type shown as Auto Mode.

From the screenshot above, EKS Auto Mode is now managing the nodes in our cluster, so we can delete Cluster Autoscaler from the cluster, as well as the other components Auto Mode replaces: the EBS CSI driver, the AWS Load Balancer Controller, and the VPC CNI.
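How you remove them depends on how they were installed. Assuming Cluster Autoscaler and the AWS Load Balancer Controller were Helm releases and the EBS CSI driver and VPC CNI were EKS managed add-ons (the release and add-on names below are the common defaults, so adjust them to however your Terraform module installed them), cleanup looks roughly like:

```bash
# Remove the Helm-managed components Auto Mode replaces
helm uninstall cluster-autoscaler --namespace kube-system
helm uninstall aws-load-balancer-controller --namespace kube-system

# Remove the managed add-ons Auto Mode replaces
aws eks delete-addon --cluster-name <your-cluster-name> --addon-name aws-ebs-csi-driver
aws eks delete-addon --cluster-name <your-cluster-name> --addon-name vpc-cni
```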

With this, we have successfully moved from Cluster Autoscaler to EKS Auto Mode. The next step is to create a custom NodePool and NodeClass.

Creating a Custom NodePool and NodeClass for EKS Auto Mode

Creating a custom NodePool and NodeClass is very similar to how Karpenter defines them, with slight changes to some of the keys. We need a custom NodePool and NodeClass to control the instance category, instance family, instance generation, capacity type (Spot/On-Demand), and instance hypervisor (Xen or Nitro). The example below creates a custom NodePool and NodeClass. Don't forget to update the ${CLUSTER_NAME} variable with your value.
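Since the gist isn't embedded here, below is a sketch of the manifests. The resource names (custom-nodepool, custom-nodeclass) are mine, and the subnet and security group selector tags are assumptions based on the standard kubernetes.io/cluster/<name> tags, so point them at whatever tags your VPC actually uses:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: custom-nodepool
spec:
  template:
    spec:
      # Auto Mode NodeClasses live in the eks.amazonaws.com API group,
      # not karpenter.k8s.aws as with self-managed Karpenter
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: custom-nodeclass
      requirements:
        - key: "karpenter.sh/capacity-type"
          operator: In
          values: ["spot"]
        - key: "kubernetes.io/os"
          operator: In
          values: ["linux"]
        - key: "eks.amazonaws.com/instance-category"
          operator: In
          values: ["r"]
        - key: "eks.amazonaws.com/instance-hypervisor"
          operator: In
          values: ["nitro"]
---
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: custom-nodeclass
spec:
  # Tag selectors for the subnets and security groups the new nodes should use
  subnetSelectorTerms:
    - tags:
        kubernetes.io/cluster/${CLUSTER_NAME}: "shared"
  securityGroupSelectorTerms:
    - tags:
        kubernetes.io/cluster/${CLUSTER_NAME}: "owned"
```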

The example above creates a custom NodePool that uses Spot instances, a Linux-based operating system, the r instance category, and a Nitro-based hypervisor. We are also creating a NodeClass, and this is where the ${CLUSTER_NAME} variable needs to be updated with your cluster name.
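Save the manifests to a file (I'm calling it custom-nodepool.yaml here for illustration) and apply them:

```bash
# Create the custom NodePool and NodeClass
kubectl apply -f custom-nodepool.yaml

# Confirm both resources are registered and Ready
kubectl get nodepools,nodeclasses
```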

Once these manifests are applied, go back to the EKS console and, under Manage EKS Auto Mode, unselect general-purpose and system under the built-in node pools, then save the changes. Your new EKS Auto Mode configuration should look like the below:

Once these changes are saved, the nodes from the built-in node pools will be drained, marked unschedulable, and finally deleted, and the custom NodePool and NodeClass we created should spin up new nodes in our cluster, like below.
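If you want to follow the rollover from the terminal rather than the console, a couple of quick checks:

```bash
# Watch the built-in pools' nodes drain away and the new Spot nodes appear
kubectl get nodes -w

# NodeClaims show the instances Auto Mode is provisioning for the custom pool
kubectl get nodeclaims
```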

From the above, we have covered migrating from Cluster Autoscaler to EKS Auto Mode. If you are wondering why you would want to migrate to EKS Auto Mode, this article by nOps covers it well; although they refer to Karpenter, in Auto Mode the compute is also managed by Karpenter.

A quick recap of what has been learned:

  1. We discussed what Cluster Autoscaler is and its purpose
  2. We looked into what EKS Auto Mode is and why you might use it
  3. We looked into how to properly migrate to EKS Auto Mode
  4. Finally, we looked into creating a custom NodePool and NodeClass for EKS Auto Mode.

And with that, friends, see you in the next article 😉.
