How to Set Up a Kubernetes Cluster on AWS Using Kops

Kops is a command-line tool that brings up a Kubernetes cluster in the easiest possible way. Kops officially supports AWS, while GCP, DigitalOcean, and OpenStack support is in beta. Kops can also generate Terraform files for the required cluster configuration. With Kops you can not only create a cluster easily, but also modify it, delete it, and upgrade the Kubernetes version running in it.

In this article, we will walk through the steps to create a Kubernetes cluster with 1 master and 1 worker node on AWS. Before we proceed, it is assumed that you are already familiar with Kubernetes.

Pre-requisites

  1. AWS Account (create one if you don't have one).
  2. EC2 Ubuntu 18.04 Instance (click here to learn how to create an EC2 instance on AWS).
  3. S3 Bucket (click here to learn how to create an S3 bucket on AWS).
  4. Domain Name (search for "How to buy a Domain Name on AWS?" to understand the steps to create a domain on AWS).
  5. IAM Role with sufficient/admin permissions (click here to learn how to create an IAM role on AWS).

What will we do?

  1. Login to AWS.
  2. Check the S3 Bucket, IAM Role.
  3. Attach the IAM Role to the instance.
  4. Install Kubectl and Kops on the EC2 instance.
  5. Validate record sets and the hosted zone.
  6. Create a Kubernetes Cluster using Kops.
  7. Delete the cluster.

Login to AWS

Click here to go to the login page where you can enter your credentials to get into the account.

Login page

Once you successfully login to your AWS account, you will see the main AWS Management Console as follows.

AWS Main Console

Check the S3 Bucket, IAM Role

To create a cluster using Kops, we need an S3 bucket where Kops will store all the cluster configuration.

Check for the bucket that you want to be used to store Kops configurations.

S3 Bucket

Verify that the IAM role you are going to use has sufficient/admin permissions. Kops does not strictly need admin permissions, but if you are not very familiar with AWS IAM and don't want to run into access issues, you can use admin permissions.
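If you prefer the command line, both checks can be done from any machine with the AWS CLI configured. A quick sketch; the bucket name below is the one used throughout this tutorial, so substitute your own.

```shell
# Confirm the state bucket exists and is reachable
# (kops.devopslee.com is this tutorial's example bucket)
aws s3 ls s3://kops.devopslee.com

# Confirm which IAM identity your credentials resolve to
aws sts get-caller-identity
```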

IAM Role

Attach the IAM Role to the instance

Once you have the role, attach it to the EC2 instance you will use to execute the Kops commands. Go to EC2 --> select the EC2 instance --> click on Actions --> Security --> Modify IAM role.

Update EC2 instance

Select the IAM role and save the changes.
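The same attachment can also be done with the AWS CLI. In the sketch below, the instance ID and instance-profile name are hypothetical placeholders for illustration, not values from this tutorial.

```shell
# Attach an instance profile to a running instance
# (instance ID and profile name are hypothetical placeholders)
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=my-admin-instance-profile
```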

Attach the IAM role to the EC2 instance

Install Kubectl and Kops on the EC2 instance

At this point, you have an S3 bucket and an EC2 instance with the required role attached to it. Now log in to the EC2 instance you will use to create the cluster using Kops.

The next step is to install Kubectl on the EC2 instance.

Execute the following commands to install kubectl on the Ubuntu server.

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"

echo "$(<kubectl.sha256) kubectl" | sha256sum --check

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

If you do not have root access on the server, you can instead install kubectl to your user's bin directory and make sure it is on your PATH.

chmod +x kubectl

mkdir -p ~/.local/bin

mv ./kubectl ~/.local/bin/kubectl

Check the kubectl version using the following command.

kubectl version --client

Install Kops on the EC2 instance

Now you are ready to install Kops on the same EC2 instance.

Check whether kops already exists; if not, install it using the following commands on the Ubuntu server.

kops

curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64

chmod +x kops-linux-amd64

sudo mv kops-linux-amd64 /usr/local/bin/kops

Now you should have kops on the server.

kops


Validate record sets and the hosted zone

Kops needs the required DNS records to build a cluster. 

Here I have a second hosted zone in Route53.

Create a Hosted Zone

Also, I have copied the NS records of my SUBDOMAIN into the PARENT domain in Route53.

Go to Route53 --> Hosted zones --> go to the main default hosted zone --> check the record set and verify its values.

Create a Record set in the main Hosted Zone
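As a sanity check, you can confirm the delegation from the instance with dig (part of the dnsutils package on Ubuntu); the domain below is this tutorial's example, so substitute your own.

```shell
# The NS records of the subdomain should resolve; Kops relies on this
# hosted zone when it creates the cluster's API DNS names
dig ns kops.devopslee.com +short
```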

Create a Kubernetes Cluster using Kops

Now we are all set to create a cluster. Before creating a cluster, let's see what we get when we try to list the clusters.

kops get clusters

The above command will fail as it needs an S3 bucket as a parameter.

kops get clusters --state s3://kops.devopslee.com

Since there are no existing clusters, the command will not list anything.

If you don't want to pass the S3 bucket name as a parameter to every command, you can export its value in the terminal to the "KOPS_STATE_STORE" variable.

export KOPS_STATE_STORE=s3://kops.devopslee.com

This time you don't need to specify the S3 bucket in the command.

kops get clusters

Get cluster

Now, let's try to create a cluster with:

  1. 1 master node of instance type t2.medium
  2. 1 worker node of instance type t2.micro
  3. Availability zones us-east-1a, us-east-1b, and us-east-1c

kops create cluster --name kops.devopslee.com --state s3://kops.devopslee.com --cloud aws --master-size t2.medium --master-count 1 --master-zones us-east-1a --node-size t2.micro --node-count 1 --zones us-east-1a,us-east-1b,us-east-1c

The above command will fail because we have not specified any SSH key.

Check if you have a key-pair in your instance.

ls -l ~/.ssh/

If you don't have any key-pair, you can create it using the following command.

ssh-keygen

Generate ssh keys

If you now execute the create command again, this time with the SSH key, it will still fail: the previous attempt already wrote a partial cluster configuration to the S3 bucket, even though it failed due to the missing SSH key.

kops create cluster --name kops.devopslee.com --state s3://kops.devopslee.com --cloud aws --master-size t2.medium --master-count 1 --master-zones us-east-1a --node-size t2.micro --node-count 1 --zones us-east-1a,us-east-1b,us-east-1c  --ssh-public-key ~/.ssh/id_rsa.pub

So, let's delete the stale cluster configuration and recreate the cluster with the SSH key.

kops delete cluster --name kops.devopslee.com --state s3://kops.devopslee.com --yes

Recreation fails, delete the cluster configuration

This time we are passing ssh public key while creating the cluster.

kops create cluster --name kops.devopslee.com --state s3://kops.devopslee.com --cloud aws --master-size t2.medium --master-count 1 --master-zones us-east-1a --node-size t2.micro --node-count 1 --zones us-east-1a,us-east-1b,us-east-1c  --ssh-public-key ~/.ssh/id_rsa.pub

Create a cluster configuration with a private key

First, the cluster configuration will be created.

Cluster creations details

Now we have the cluster configuration. If we want to make any changes to the configuration, we can do so now; otherwise, we can proceed with creating the cluster. You can go to the S3 bucket and see the cluster configuration stored in it.
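If you do want to adjust anything before building the cluster, Kops can open the stored configuration in an editor. A sketch, assuming the default instance-group name "nodes"; list your actual instance groups with "kops get ig" if they differ.

```shell
# Edit the cluster spec stored in the state bucket
kops edit cluster --name kops.devopslee.com --state s3://kops.devopslee.com

# Edit the worker instance group (name may differ; check with "kops get ig")
kops edit ig nodes --name kops.devopslee.com --state s3://kops.devopslee.com
```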

Configuration updated in the S3 Bucket

This time you will get to see that the cluster is available.

kops get cluster

But the resources are not yet created.

To create the resources immediately, we need to update the cluster with --yes as an option to the command.

kops update cluster --name kops.devopslee.com --yes

Update the cluster to create cloud resources

Cluster creation will take some time. You can validate the state of the cluster using the following "validate" command.

kops validate cluster --wait 10m

Validate the cluster; it may take around 10 minutes to become active

Once all the cluster resources are created, the cluster will be ready to use. 

Cluster in Ready state

Once the EC2 instances are ready, Kops updates the hosted zone with A records containing the IP of the master node.

Records are updated with Master node's IP
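If kubectl complains about missing credentials at this point, note that newer Kops releases (1.19 and later) no longer write an admin kubeconfig automatically; in that case you can export it explicitly.

```shell
# Write admin credentials for this cluster into ~/.kube/config
# (--admin exists in kops 1.19+; older versions export credentials by default)
kops export kubecfg --name kops.devopslee.com --state s3://kops.devopslee.com --admin
```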

You are now ready to use the cluster. To check the existing pods in the default namespace, execute the following command.

kubectl get pods

You can also check pods from all the namespaces.

kubectl get pods -A

Check nodes in the cluster.

kubectl get nodes

To fetch more details of the nodes, use -o wide in the command.

kubectl get nodes -o wide

Check the system pods in the cluster
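As an optional smoke test, you can run a throwaway deployment and confirm it gets scheduled on the worker node; nginx here is just an example image.

```shell
# Create a test deployment and watch where the pod lands
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide

# Clean up the test deployment when done
kubectl delete deployment nginx
```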

Delete the cluster

If you no longer need the cluster, you can delete it easily using Kops.

kops get cluster

You just need to execute a single command.

kops delete cluster --name kops.devopslee.com --state s3://kops.devopslee.com --yes

Delete the cluster

Kops will delete all the resources it had created to make the cluster fully functional.

Cluster deletion successful

Conclusion

In this article, we went through all the steps to create a Kubernetes cluster using Kops. We saw that Kops needs a domain to create a fully functional cluster, and how easy it is to create and delete a cluster using Kops.


Comments

By: Andrew Newman at: 2021-02-26 12:41:47

 Hi, following this tutorial I've successfully created a kops cluster, but when I go "kops update cluster [my_domain_name] --yes" it comes back with the error "error reading cluster configuration: Cluster.kops [my_domain_name] not found", but it liked it when I created the cluster... can anybody help?

 

By: Rahul Shivalkar at: 2021-02-27 17:33:32

Hi Andrew,

Try to export your bucket name used to store the state.

E.g.

export KOPS_STATE_STORE=s3://bucket-name