Setting up Karpenter Nodes with Multus on Amazon EKS


Container-based Telco workloads rely on Multus CNI for network segmentation, and Amazon Elastic Kubernetes Service (Amazon EKS) supports Multus CNI for advanced network configurations on AWS. Karpenter, a Kubernetes cluster autoscaler, provides node elasticity by automatically launching compute resources in response to application demand. This post demonstrates a deployment model designed for Telco workloads on AWS: an EKS cluster whose Karpenter-provisioned node pools carry Multus interfaces.

In this model, Karpenter manages group-less node pools that host the application pods requiring Multus CNI networking, while Amazon EKS managed node group workers run the pods that do not need Multus. This decouples application workers from specific instance types and sizes, giving more flexibility when scaling out applications. Because Karpenter provisions compute quickly and without node group membership, the nodes it launches are group-less and sit outside of Amazon EKS managed node groups. A sketch of such a node pool follows.
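As an illustration of how such a node pool could be declared (not the manifest from the original post), the following sketch shows a Karpenter NodePool and EC2NodeClass dedicated to Multus workloads. The label, taint, instance families, discovery tags, and role name are assumptions, the API versions depend on the installed Karpenter release, and attaching the additional ENIs that Multus uses is handled separately in the post's workflow.

```yaml
# Hypothetical sketch: a group-less Karpenter node pool reserved for Multus workloads.
# Label/taint names, instance families, tags, and the node role are illustrative assumptions.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: multus-nodepool
spec:
  template:
    metadata:
      labels:
        workload-type: multus            # pods needing Multus select this label
    spec:
      taints:
        - key: multus
          value: "true"
          effect: NoSchedule             # keep non-Multus pods on the managed node group
      requirements:
        - key: karpenter.k8s.aws/instance-family
          operator: In
          values: ["c5", "m5"]           # example instance families
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
      nodeClassRef:
        apiVersion: karpenter.k8s.aws/v1beta1
        kind: EC2NodeClass
        name: multus-nodeclass
  limits:
    cpu: "200"                           # cap the total compute this pool can launch
---
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: multus-nodeclass
spec:
  amiFamily: AL2
  role: "KarpenterNodeRole-<cluster-name>"       # node IAM role created for Karpenter
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "<cluster-name>"
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "<cluster-name>"
```

The taint and label keep the two classes of workers cleanly separated: only pods that tolerate the taint and select the label land on Karpenter-provisioned nodes, while everything else stays on the managed node group.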

Key steps in the deployment include setting up the environment; creating the VPC, the EKS cluster, and the managed node groups; and installing the Multus CNI and Whereabouts plugins. Setting up the IAM roles for Karpenter and installing and configuring the Karpenter autoscaler are also crucial parts of the deployment, which additionally installs the Node-Latency-For-K8s tool to analyze node launch latency. A sample application deployment demonstrates how Karpenter provisions nodes based on pod demand; a sketch of a Whereabouts-backed network attachment is shown below.
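For illustration only, a Multus network attachment backed by the Whereabouts IPAM plugin might look like the sketch below; the ipvlan type, master interface, CIDR range, and exclusions are assumptions rather than values from the original post.

```yaml
# Hypothetical sketch: a Multus NetworkAttachmentDefinition using Whereabouts for IP assignment.
# The master interface, subnet range, and excluded addresses are illustrative assumptions.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: multus-ipvlan-1
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "ipvlan",
      "master": "eth1",
      "mode": "l2",
      "ipam": {
        "type": "whereabouts",
        "range": "10.0.4.0/24",
        "exclude": ["10.0.4.1/32"]
      }
    }
```

Whereabouts hands out cluster-wide unique addresses from the declared range, which removes the need for static per-node IPAM configuration.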

Monitoring the Karpenter nodes and pods and validating that the application pods communicate over their Multus interfaces are essential steps in the process. Optional steps, such as automatic assignment of Multus pod IPs and testing scaling actions with Karpenter, are also outlined, and the post closes with instructions for cleaning up the deployment resources and deleting the CloudFormation stacks. A sketch of a pod that attaches to a Multus network is shown after this paragraph.
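A pod requests a Multus interface through the standard k8s.v1.cni.cncf.io/networks annotation. The sketch below ties together the hypothetical node pool label and taint and the network attachment from the earlier examples, so every name in it is an assumption.

```yaml
# Hypothetical sketch: a Deployment whose pods receive a Multus secondary interface
# and schedule onto the Karpenter-provisioned nodes from the earlier example.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multus-sample-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: multus-sample-app
  template:
    metadata:
      labels:
        app: multus-sample-app
      annotations:
        k8s.v1.cni.cncf.io/networks: multus-ipvlan-1   # request the secondary interface
    spec:
      nodeSelector:
        workload-type: multus                          # land on the Multus node pool
      tolerations:
        - key: multus
          value: "true"
          effect: NoSchedule
      containers:
        - name: app
          image: public.ecr.aws/docker/library/busybox:stable
          command: ["sleep", "3600"]                   # keep the pod running for inspection
          resources:
            requests:
              cpu: "1"                                 # sized so extra replicas force new nodes
              memory: 1Gi
```

Scaling this Deployment beyond the capacity of the existing nodes should prompt Karpenter to launch additional group-less nodes, and scaling it back down lets Karpenter remove the now-empty nodes, which is the behavior the scaling test in the post exercises.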

In conclusion, the deployment model shows how Karpenter, in conjunction with Multus CNI, efficiently manages ENIs and provides autoscaling for Kubernetes clusters. Karpenter improves the efficiency and cost-effectiveness of running workloads by dynamically provisioning and removing nodes based on pod requirements, and it offers considerable flexibility as an autoscaling solution. For further information and best practices, readers are directed to the AWS GitHub repository.

Article Source
https://aws.amazon.com/blogs/containers/deploying-karpenter-nodes-with-multus-on-amazon-eks/