Before proceeding, it's recommended to check your organization's preferred networking and security settings for EKS clusters. Flyte doesn't require anything beyond what the Kubernetes nodes require to operate:
- Connection to the EKS control plane (also called the API Server)
In this tutorial we make use of the following configuration:
- API server endpoint access: Public and Private
- Node group using public subnets
- VPC already configured with both Public and Private subnets
- Public subnets enabled to auto-assign public IPv4 addresses
Learn more about EKS networking
- Make sure that your system has the tools used throughout this tutorial installed: awscli, eksctl, kubectl, and Helm
- If you haven't done so previously, complete the awscli quick configuration
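The following sketch shows one way to confirm the tooling is in place; the version-check commands are standard for each tool, and `aws configure` is the entry point for the awscli quick configuration:

```bash
# Verify each tool is installed and on the PATH
aws --version
eksctl version
kubectl version --client
helm version --short

# One-time awscli setup: prompts for access key, secret key,
# default region, and output format
aws configure
```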
- Go to the AWS Management Console > Virtual Private Cloud > Subnets and take note of both the Public and Private subnet IDs available in the supported Availability Zones of the VPC you plan to use to deploy Flyte:
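If you prefer the CLI, a query like the following lists the subnets in a given VPC along with whether they auto-assign public IPv4 addresses; the VPC ID is a placeholder you'd replace with your own:

```bash
# List subnet IDs, AZs, and the auto-assign-public-IP flag for one VPC
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=<your-vpc-id>" \
  --query "Subnets[].{ID:SubnetId,AZ:AvailabilityZone,PublicIPOnLaunch:MapPublicIpOnLaunch}" \
  --output table
```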
- Go to the terminal and submit the command to create the EKS control plane, following these recommendations:
- Pick a Kubernetes version >= 1.19
- Give the cluster an informative name (e.g. flyte-eks-cluster)
```
eksctl create cluster --name my-cluster --region region-code --version 1.25 --vpc-private-subnets private-subnet-ID1,private-subnet-ID2 --vpc-public-subnets public-subnetID1,public-subnetID2 --without-nodegroup
```
After a few minutes, the deployment should finish with an output similar to:
```
EKS cluster "my-cluster" in "region-code" region is ready
```
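You can also confirm the control plane is up from the CLI; `my-cluster` and `region-code` are the same placeholders used above:

```bash
# Should print ACTIVE once the control plane is ready
aws eks describe-cluster --name my-cluster --region region-code \
  --query "cluster.status" --output text
```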
- From the AWS Management Console, go to EKS
- Click on your cluster name and select the Networking tab
- Click on Manage networking
- Select Public and private as the Cluster endpoint access mode and Save changes
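The same change can be made from the terminal; this is a sketch using the standard `aws eks update-cluster-config` command with the placeholder names from earlier:

```bash
# Enable both public and private API server endpoint access
aws eks update-cluster-config --name my-cluster --region region-code \
  --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true
```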
In this step, we'll deploy the Kubernetes worker nodes where the actual workloads will run (a CLI alternative to the console steps is sketched after this list).
- From the AWS Management Console, go to EKS
- Select the cluster you created in the previous section
- Go to Compute and click on Add node group
- Assign a name to the node group
- In the Node IAM role menu, select the <eks-node-role> created in the Roles and Service accounts section
- For learning purposes, do not use launch templates, labels, or taints
- Choose the default Amazon Linux 2 (AL2_x86_64) AMI type
- Use the On-demand Capacity type
- Choose the Instance type and size based on your DevOps requirements
NOTE: Previous testing suggests that t3.xlarge provides the required resources to run the example workflows referenced in this tutorial.
- Leave all other options at their defaults
- Review the configuration and click Create
Learn more about AWS managed node groups
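As an alternative to the console flow above, eksctl can create a managed node group in one command. This is a sketch with the same placeholder names; note that, unlike the console steps, eksctl creates a new node IAM role by default rather than reusing the <eks-node-role>:

```bash
# Create a managed node group of on-demand t3.xlarge instances
eksctl create nodegroup \
  --cluster my-cluster \
  --region region-code \
  --name flyte-node-group \
  --node-type t3.xlarge \
  --nodes 2 \
  --managed
```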
Now you have to "let your terminal know" where the Kubernetes cluster you're trying to connect to is.
Use your AWS account access keys to run the following commands, updating the kubectl config and switching to the new EKS cluster context:
- Store your AWS login information in environment variables so they can be picked up by awscli:
```
export AWS_ACCESS_KEY_ID=<YOUR-AWS-ACCOUNT-ACCESS-KEY-ID>
export AWS_SECRET_ACCESS_KEY=<YOUR-AWS-SECRET-ACCESS-KEY>
export AWS_SESSION_TOKEN=<YOUR-AWS-SESSION-TOKEN>
```
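Before switching contexts, you can sanity-check that the credentials were exported correctly; `aws sts get-caller-identity` is a standard call that returns the identity the CLI is authenticating as:

```bash
# Prints the UserId, Account, and Arn for the active credentials
aws sts get-caller-identity
```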
- Switch to the new EKS cluster’s context:
```
aws eks update-kubeconfig --name <my-cluster> --region <region>
```
- Verify the context has switched:
```
$ kubectl config current-context
arn:aws:eks:<region>:<AWS_ACCOUNT_ID>:cluster/<Name-EKS-Cluster>
```
- Verify there are no Pod resources created (this step also validates connectivity to the EKS API Server)
```
$ kubectl get pods
No resources found in default namespace.
```
This is the blob storage resource that will be used by FlytePropeller and DataCatalog to store metadata for versioning and other use cases.
- From the AWS Management Console, go to S3
- Create a bucket leaving Block all public access enabled
- Choose the same region as your EKS cluster
- Take note of the name, as it will be used in your Helm values file
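For reference, the equivalent CLI calls look like this. The bucket name is a placeholder, and the public-access-block call mirrors the "Block all public access" console setting (for us-east-1, omit the --create-bucket-configuration flag):

```bash
# Create the bucket in the same region as the EKS cluster
aws s3api create-bucket \
  --bucket <my-flyte-bucket> \
  --region region-code \
  --create-bucket-configuration LocationConstraint=region-code

# Keep "Block all public access" enabled
aws s3api put-public-access-block \
  --bucket <my-flyte-bucket> \
  --public-access-block-configuration \
  "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
```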