
Running Kubernetes on AWS EC2

This page describes how to install a Kubernetes cluster on AWS.

Before you begin

To create a Kubernetes cluster on AWS, you will need an Access Key ID and a Secret Access Key from AWS.
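
Most of the installers below pick these credentials up from environment variables or from the AWS CLI configuration. A minimal sketch, assuming you use environment variables (the values are placeholders):

# Make the AWS credentials available to the installer
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>

# Alternatively, store them with the AWS CLI (writes ~/.aws/credentials)
aws configure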

Supported Production Grade Tools

  • conjure-up is an open-source installer for Kubernetes that creates Kubernetes clusters with native AWS integrations on Ubuntu.

  • Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management. Supports running Debian, Ubuntu, CentOS, and RHEL in AWS; see the sketch after this list for a typical workflow.

  • CoreOS Tectonic includes the open-source Tectonic Installer that creates Kubernetes clusters with Container Linux nodes on AWS.

  • kube-aws is a CLI tool, originated by CoreOS and maintained by the Kubernetes Incubator, that creates and manages Kubernetes clusters with Container Linux nodes, using AWS services: EC2, CloudFormation, and Auto Scaling.

  • KubeOne is an open source cluster lifecycle management tool that creates, upgrades, and manages highly available Kubernetes clusters.
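
For example, a minimal kops workflow might look like the following. This is a sketch, assuming kops and the AWS CLI are installed, your AWS credentials are configured, and DNS for the cluster name is set up; the S3 bucket and cluster name are placeholders:

# Tell kops where to keep its cluster state (placeholder bucket)
export KOPS_STATE_STORE=s3://example-kops-state-store

# Create the cluster definition and apply it immediately (placeholder name)
kops create cluster \
  --name=cluster.example.com \
  --zones=us-east-1a \
  --yes

# Later, delete the cluster and the AWS resources it created
kops delete cluster --name=cluster.example.com --yes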

Getting started with your cluster

Command line administration tool: kubectl

The cluster startup script will leave you with a kubernetes directory on your workstation. Alternatively, you can download the latest Kubernetes release from this page.

Next, add the appropriate binary folder to your PATH to access kubectl:

# macOS
export PATH=<path/to/kubernetes-directory>/platforms/darwin/amd64:$PATH

# Linux
export PATH=<path/to/kubernetes-directory>/platforms/linux/amd64:$PATH

An up-to-date documentation page for this tool is available here: kubectl manual

By default, kubectl uses the kubeconfig file generated during cluster startup to authenticate against the API server. For more information, please read kubeconfig files.
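
To confirm that kubectl can reach the new cluster, run a couple of standard read-only commands:

# Show the address of the API server from the active kubeconfig
kubectl cluster-info

# List the nodes that have joined the cluster
kubectl get nodes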

Examples

See a simple nginx example to try out your new cluster.
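
If you want to try something immediately from the command line, the following sketch creates a small nginx Deployment and exposes it; on a cluster with the AWS cloud provider enabled, the LoadBalancer Service provisions an Elastic Load Balancer (the name nginx is just an example):

# Run nginx as a Deployment with two replicas
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=2

# Expose it outside the cluster through a load balancer
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# Look up the external hostname assigned to the Service
kubectl get service nginx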

The “Guestbook” application is another popular example to get started with Kubernetes: guestbook example

For more complete applications, please look in the examples directory.

Scaling the cluster

Adding and removing nodes through kubectl is not supported. You can still scale the number of nodes manually by adjusting the ‘Desired’ and ‘Max’ properties of the Auto Scaling Group that was created during installation.
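
You can make that adjustment in the AWS console or, as a sketch, with the AWS CLI; the Auto Scaling Group name below is a placeholder, so substitute the name created by your installer:

# Grow the node group to five instances (placeholder group name)
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name kubernetes-node-group \
  --desired-capacity 5 \
  --max-size 5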

Tearing down the cluster

Make sure the environment variables you used to provision your cluster are still exported, then call the following script inside the kubernetes directory:

cluster/kube-down.sh
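
For example, if the cluster was provisioned with the bundled scripts and AWS environment variables, the teardown might look like this (a sketch; re-export whichever variables you actually used, with placeholder values shown here):

# Re-export the variables used at provisioning time
export KUBERNETES_PROVIDER=aws
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>

# Delete the cluster and the AWS resources it created
cluster/kube-down.sh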

Support Level

IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
AWS | kops | Debian | k8s (VPC) | docs | | Community (@justinsb)
AWS | CoreOS | CoreOS | flannel | docs | | Community
AWS | Juju | Ubuntu | flannel, calico, canal | docs | 100% | Commercial, Community
AWS | KubeOne | Ubuntu, CoreOS, CentOS | canal, weavenet | docs | 100% | Commercial, Community

Further reading

Please see the Kubernetes docs for more details on administering and using a Kubernetes cluster.
