MULTI-CLOUD KUBERNETES CLUSTER

Umesh Tyagi
Jul 25, 2021

Hello everyone, I hope you are all doing well. In this blog, I'm going to set up a multi-cloud Kubernetes cluster. So stay focused and learn how we can create it. Before starting, let me tell you about all the pieces involved: the clouds (AWS, GCP, Azure), Kubernetes, and the Kubernetes cluster.

What is Cloud Computing?

The traditional approach is to deploy applications on-premises, but managing your own data center is too complex for a small business. Cloud computing provides resources such as storage, compute, and networking on demand, and it ensures high security and high availability for your application. There are basically three types of cloud computing:

  • Public Cloud
  • Private Cloud
  • Hybrid Cloud

What are AWS, GCP, and Azure?

All three come under the public cloud. They are platforms that provide flexible, reliable, and easy-to-use cloud computing solutions. We can take compute, storage, and networking resources on demand, and the good thing about these clouds is that they charge you only for what you use. AWS stands for Amazon Web Services and is managed by Amazon, GCP stands for Google Cloud Platform and is owned by Google, and the last one is Microsoft Azure.

What is Kubernetes?

Basically, we can build and ship our application with the help of any container platform. But when it comes to managing, scaling, monitoring, and upgrading the application and its complete infrastructure, Docker alone is not enough. Docker does provide its own management solution, but Kubernetes is the master of monitoring, scaling, rollouts, and rollbacks, and it provides high availability for our application by running it on a cluster.
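
As a quick, hypothetical illustration of that orchestration (the deployment name myapp and the nginx image are just placeholders, not part of the setup below):

# create a deployment and let Kubernetes keep it running
kubectl create deployment myapp --image=nginx

# scale it out to three replicas across the cluster nodes
kubectl scale deployment myapp --replicas=3

# roll out a new image version, then roll back if something breaks
kubectl set image deployment/myapp nginx=nginx:1.21
kubectl rollout undo deployment/myapp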

Now I’m going to start…

Requirements

You should have an account on each of the clouds (AWS, GCP, Azure).

What will we do?

We will launch one instance/VM in each cloud:

  • Kubernetes Master — AWS EC2 Instance
  • Kubernetes Slave-1 — GCP VM
  • Kubernetes Slave-2 — AZURE VM
  • Kubernetes Slave-3 — Local System (RHEL 8)

Kubernetes Master Node

First, create an EC2 instance in AWS for the Kubernetes master node, then use the following commands to configure it.

# install and configure Docker with the systemd cgroup driver
yum install docker -y
systemctl enable --now docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker

# configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# install kubeadm, kubectl, and kubelet, then pre-pull the control-plane images
yum install kubeadm kubectl kubelet -y
systemctl enable kubelet --now
kubeadm config images pull

# networking prerequisites
yum install iproute-tc -y
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# initialize the control plane and set up kubectl for the current user
kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# install the Flannel pod network and print the join command for the worker nodes
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubeadm token create --print-join-command
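
Once kubeadm init finishes and Flannel is applied, it is worth confirming on the master that the control plane is healthy before joining any workers; a minimal check could look like this:

# the master should report Ready once the Flannel network is up
kubectl get nodes

# the control-plane and Flannel pods should all reach the Running state
kubectl get pods -n kube-system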

Kubernetes Slave-1 over GCP

Use the following commands to create the slave node on the GCP cloud. Print the join token on the master node; after the slave node is configured, the last command to run is that join command, which connects the slave to the master.

# install tc (needed by the kubeadm preflight checks) and remove any old Docker packages
yum install iproute-tc -y
sudo yum remove docker docker-common docker-selinux docker-engine-selinux docker-engine docker-ce

# configure the Docker CE repository and install Docker with the systemd cgroup driver
cat <<EOF > /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable
baseurl=https://download.docker.com/linux/centos/7/x86_64/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
EOF
yum install docker-ce --nobest -y
systemctl enable --now docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker

# configure the Kubernetes yum repository and install kubeadm, kubectl, and kubelet
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum repolist -v
yum install kubeadm kubectl kubelet -y
systemctl enable kubelet --now

# networking prerequisites
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
echo 1 > /proc/sys/net/ipv4/ip_forward
The last command is the join command (with the token) that we copied from the master.
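
For reference, the join command printed by the master looks roughly like the example below; the address, token, and hash are placeholders, so always paste the exact output of kubeadm token create --print-join-command from your own master:

# example only: substitute the real values printed by your master node
kubeadm join <MASTER_PUBLIC_IP>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>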

Kubernetes Slave-2 over Azure

# install tc and remove any old Docker packages
yum install iproute-tc -y
sudo yum remove docker docker-common docker-selinux docker-engine-selinux docker-engine docker-ce

# configure the Docker CE repository and install Docker with the systemd cgroup driver
cat <<EOF > /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable
baseurl=https://download.docker.com/linux/centos/7/x86_64/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
EOF
yum install docker-ce --nobest -y
systemctl enable --now docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker

# configure the Kubernetes yum repository and install kubeadm, kubectl, and kubelet
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum repolist -v
yum install kubeadm kubectl kubelet -y
systemctl enable kubelet --now

# networking prerequisites
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
The last command is the join command that we copied from the master (the same one shown above).

Kubernetes Slave-3 over Local System (RHEL 8)

# install tc and remove any old Docker packages
yum install iproute-tc -y
sudo yum remove docker docker-common docker-selinux docker-engine-selinux docker-engine docker-ce

# configure the Docker CE repository and install Docker with the systemd cgroup driver
cat <<EOF > /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable
baseurl=https://download.docker.com/linux/centos/7/x86_64/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
EOF
yum install docker-ce --nobest -y
systemctl enable --now docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker

# configure the Kubernetes yum repository and install kubeadm, kubectl, and kubelet
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum repolist -v
yum install kubeadm kubectl kubelet -y
systemctl enable kubelet --now

# networking prerequisites and swap off (required by the kubelet)
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
swapoff -a

The last command is the join command that we copied from the master.
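
Note that swapoff -a only disables swap until the next reboot. To keep swap off on the local RHEL 8 machine permanently, one common approach (assuming a standard /etc/fstab with a swap entry) is to comment that entry out:

# keep swap disabled across reboots by commenting out the swap entry in /etc/fstab
sed -i '/\sswap\s/ s/^#*/#/' /etc/fstab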

After running all the commands successfully, check the number of nodes available in the cluster from the master node:

kubectl get nodes
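
All four nodes (the AWS master plus the GCP, Azure, and local RHEL 8 slaves) should show up as Ready. As an optional sanity check that the multi-cloud cluster really schedules work across clouds, you could run a small test deployment; the name web and the nginx image here are only illustrative:

# launch a small test deployment and scale it across the worker nodes
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3

# -o wide shows which cloud's node each pod landed on
kubectl get pods -o wide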

Thanks for reading…..
