In this tutorial we will install a Kubernetes cluster using the Calico plugin. If you are interested, there is a long list of Container Network Interface (CNI) plugins available to configure network interfaces in Linux containers.
Overview of Calico CNI
The Calico project attempts to solve the speed and efficiency problems caused by virtual LANs, bridging, and tunneling. It achieves this by connecting your containers to a vRouter, which then routes traffic directly over the L3 network. This gives huge advantages when you are sending data between multiple data centers, as there is no reliance on NAT and the smaller packet sizes reduce CPU utilization.
The Calico architecture contains four important components in order to provide a better networking solution:
- Felix, the Calico worker process, is the heart of Calico networking. It primarily programs routes and provides the desired connectivity to and from the workloads on a host, and it also provides the interface to the kernel for outgoing endpoint traffic.
- BIRD, an open source BGP route distribution daemon, exchanges routing information between hosts. The kernel endpoints picked up by BIRD are distributed to BGP peers in order to provide inter-host routing. Two BIRD processes run in the calico-node container, one for IPv4 (bird) and one for IPv6 (bird6).
- Confd, a templating process that auto-generates configuration for BIRD, monitors the etcd store for any changes to BGP configuration such as log levels and IPAM information. Confd dynamically generates BIRD configuration files based on the data in etcd and triggers BIRD to load the new files whenever a configuration file changes.
- calicoctl, the command-line tool used to configure and start the Calico service, also lets you define and apply security policy through the datastore (etcd).
Bring up Kubernetes Cluster
Lab Environment
I am using Oracle VirtualBox to create multiple virtual machines running Linux. I will use these individual VMs to create my Kubernetes cluster using kubeadm and the Calico CNI. The VMs are installed with CentOS 8 and use bridged networking.
Following are the specs of each VM:
| Resources | controller | worker-1 | worker-2 |
|---|---|---|---|
| OS | CentOS 8 | CentOS 8 | CentOS 8 |
| hostname | controller | worker-1 | worker-2 |
| FQDN | controller.example.com | worker-1.example.com | worker-2.example.com |
| Storage | 20GB | 20GB | 20GB |
| vCPU | 2 | 2 | 2 |
| RAM | 6GB | 6GB | 6GB |
| Adapter-1 (Bridged) | 192.168.0.150 | 192.168.0.151 | 192.168.0.152 |
The following sections are already covered in detail, so you can follow the respective hyperlinks (they all point to different sections of the same article):
- Pre-requisites
- Installing container runtime
- Install Kubernetes components (kubelet, kubectl and kubeadm)
Initialize control node
At the end of this section your controller node should be initialized.
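The actual initialization is covered in the linked article; as a rough sketch only, a typical invocation for this lab would look like the command below (the advertise address matches the controller IP from the table above; adjust for your environment):

[root@controller ~]# kubeadm init --apiserver-advertise-address=192.168.0.150
# Optionally add --pod-network-cidr=10.142.0.0/24 to match the Calico pod subnet
# configured later in this tutorial.
# At the end of the output kubeadm prints a 'kubeadm join ...' command for the
# worker nodes; save it, it is needed in the "Join worker nodes" section.

Once initialization completes, verify that the control plane is reachable: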
[root@controller ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.0.150:6443
KubeDNS is running at https://192.168.0.150:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Following is the list of pods available at this stage:
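You can list them with a plain pod listing (pod names and ages will differ in your environment; until a CNI plugin is installed the CoreDNS pods normally stay in Pending):

[root@controller ~]# kubectl get pods -n kube-system -o wide
# Lists the control-plane pods (etcd, kube-apiserver, kube-controller-manager,
# kube-scheduler, kube-proxy) and the CoreDNS pods.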
The output of kubectl get nodes should be something like the following:
[root@controller ~]# kubectl get nodes
NAME                     STATUS     ROLES                  AGE   VERSION
controller.example.com   NotReady   control-plane,master   77s   v1.20.5
The controller node is in the NotReady state, so next we must install our Container Network Interface plugin.
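You can confirm that the missing network plugin is the reason for the NotReady state by describing the node (an optional check; the exact wording of the message varies with the container runtime and Kubernetes version):

[root@controller ~]# kubectl describe node controller.example.com | grep -i networkready
# Typically reports "NetworkReady=false reason:NetworkPluginNotReady" until a
# CNI plugin has been installed.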
Install Calico network on Kubernetes
In this section we will install the Calico CNI on our Kubernetes cluster nodes:
Configure Firewall
In addition to the ports which you may have already added to your firewall following the pre-requisites link above, you also need to enable port 179 for Calico networking (BGP) on all the cluster nodes.
You can check the network requirements page of the official documentation for the full list of ports that need to be enabled in your environment.
~]# firewall-cmd --add-port=179/tcp --permanent
success
~]# firewall-cmd --reload
success
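To confirm the port is open on each node, you can list the currently allowed ports (an optional sanity check):

~]# firewall-cmd --list-ports
# The output should include 179/tcp along with the Kubernetes ports added earlier.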
Download Calico CNI plugin
We will download the Calico networking manifest and use it to install the plugin for the Kubernetes API datastore.
[root@controller ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
This will download the calico.yaml file into your current working directory:
[root@controller ~]# ls -l calico.yaml
-rw-r--r-- 1 root root 189190 Mar 24 00:19 calico.yaml
Modify pod CIDR (Optional)
Next you can assign a pod CIDR subnet. CIDR stands for Classless Inter-Domain Routing, also known as supernetting. By default Calico assumes that you wish to use the 192.168.0.0/16 subnet for the pod network, but if you wish to use any other subnet you can set it in the calico.yaml file.
I am already using 192.168.0.0/24 for my Kubernetes cluster and I don't want to use the same range for my Pods, so I will assign a different subnet, 10.142.0.0/24, as my pod CIDR.
We will open calico.yaml in the vim editor, modify the CALICO_IPV4POOL_CIDR variable in the manifest, and set it to 10.142.0.0/24 as shown below:
- name: CALICO_IPV4POOL_CIDR
  value: "10.142.0.0/24"
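In the downloaded manifest this variable is usually present but commented out, so make sure both lines end up uncommented. You can verify the change before applying the manifest (an optional check):

[root@controller ~]# grep -A1 CALICO_IPV4POOL_CIDR calico.yaml
# Should print the variable name followed by: value: "10.142.0.0/24"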
Install Calico Plugin
Next we can go ahead and install the Calico network by applying the manifest with the kubectl command:
[root@controller ~]# kubectl apply -f calico.yaml
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
Check the status of the newly created pods under the kube-system namespace:
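You can watch them come up with a plain pod listing (repeat the command or add -w to watch; pod names and timings will differ in your setup):

[root@controller ~]# kubectl get pods -n kube-system
# Right after applying the manifest, the calico-node and calico-kube-controllers
# pods typically show Init or ContainerCreating before they reach Running.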
So we have new calico pods coming up, and they are still at the init-container stage.
Check the status of the pods again after some time; now the calico pods should be in the Running state and the containers should be READY. For any issues, follow the troubleshooting section on projectcalico.org.
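If you prefer not to poll manually, kubectl can block until the Calico node pods report Ready (the label selector below assumes the k8s-app=calico-node label used by the manifest; adjust the timeout to taste):

[root@controller ~]# kubectl wait --namespace kube-system --for=condition=Ready pods -l k8s-app=calico-node --timeout=180s
# Returns once every calico-node pod is Ready, or exits non-zero on timeout.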
Install calicoctl
The calicoctl tool also provides a simple interface for general management of Calico configuration, irrespective of whether Calico is running on VMs, containers, or bare metal, although the usage of this tool is beyond the scope of this tutorial.
You can follow the official guide to install calicoctl tool on your controller node.
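Once calicoctl is in place, a couple of quick checks can confirm that Calico is healthy (run as root on a cluster node; when using the Kubernetes API datastore, calicoctl needs DATASTORE_TYPE and KUBECONFIG set as described in the official install guide):

[root@controller ~]# calicoctl node status
# Shows the Calico process status and the BGP peer table for this node.
[root@controller ~]# DATASTORE_TYPE=kubernetes KUBECONFIG=/etc/kubernetes/admin.conf calicoctl get ippool -o wide
# Lists the configured pod IP pool(s); the CIDR should match 10.142.0.0/24.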
Join worker nodes
Now we can join our worker nodes. I hope you have saved the kubeadm join command from the kubeadm init stage which we executed earlier. If you have lost the kubeadm join command with the token id, you can generate a new one using:
kubeadm token create --print-join-command
Since we had stored the kubeadm join command, I will execute the same on my worker nodes to join the Kubernetes cluster:
Run the saved join command on worker-1.example.com and then on worker-2.example.com.
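The command has the general form below; the token and CA cert hash are placeholders here, so substitute the values printed by your own kubeadm init (or by kubeadm token create --print-join-command):

[root@worker-1 ~]# kubeadm join 192.168.0.150:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# Repeat the same command on worker-2; it registers the node with the control
# plane at 192.168.0.150 and starts the kubelet service on the worker.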
The above command will only start the kubelet service, so we must manually enable it to auto-start after every reboot on all the worker nodes:
[root@worker-1 ~]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.

[root@worker-2 ~]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
Now check the status of the Kubernetes cluster on the controller node:
[root@controller ~]# kubectl get nodes
NAME                     STATUS   ROLES                  AGE   VERSION
controller.example.com   Ready    control-plane,master   9h    v1.20.5
worker-1.example.com     Ready    <none>                 9h    v1.20.5
worker-2.example.com     Ready    <none>                 9h    v1.20.5
The status of the controller node and all the worker nodes is Ready, so all seems good.
Additionally, if you check the list of pods under kube-system, you will see that we have new calico-node and kube-proxy pods for each worker node:
[root@controller ~]# kubectl get pods -n kube-system
NAME                                             READY   STATUS    RESTARTS   AGE
calico-kube-controllers-69496d8b75-nbdpq         1/1     Running   0          11m
calico-node-2ppgj                                1/1     Running   0          4m8s
calico-node-hhz9s                                1/1     Running   0          4m26s
calico-node-q9t7r                                1/1     Running   0          11m
coredns-74ff55c5b-mr4zv                          1/1     Running   0          17m
coredns-74ff55c5b-zvsqz                          1/1     Running   0          17m
etcd-controller.example.com                      1/1     Running   0          17m
kube-apiserver-controller.example.com            1/1     Running   0          17m
kube-controller-manager-controller.example.com   1/1     Running   0          17m
kube-proxy-mcqxb                                 1/1     Running   0          17m
kube-proxy-nkqh9                                 1/1     Running   0          4m8s
kube-proxy-rs4ct                                 1/1     Running   0          4m26s
kube-scheduler-controller.example.com            1/1     Running   0          17m
Create a Pod (Verify Calico network)
Now let's try to create a Pod to make sure it gets an IP address from the pod CIDR which we assigned in the Calico manifest. Here I have a YAML file for a simple nginx pod:
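A minimal manifest along these lines will do; the pod name nginx matches the commands and output below, while the label and container port are just illustrative additions (save it as nginx.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx            # single nginx container serving on port 80
    image: nginx
    ports:
    - containerPort: 80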
Let's create this pod:
[root@controller ~]# kubectl create -f nginx.yaml
pod/nginx created
Check the IP assigned to this Pod via Calico network:
[root@controller ~]# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE                   NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          25s   10.142.0.1   worker-2.example.com   <none>           <none>
So the Pod has received its IP from the 10.142.0.0/24 subnet which we assigned while installing the Calico network in our Kubernetes cluster.
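As an extra check, the pod should be reachable on its pod IP from any cluster node, since Calico distributes the pod routes over BGP (this assumes no NetworkPolicy is blocking the traffic; use the pod IP reported in your own environment):

[root@worker-1 ~]# curl -s -o /dev/null -w '%{http_code}\n' http://10.142.0.1
# An HTTP 200 here means the nginx pod is serving and inter-node pod routing works.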
Summary
Calico provides a scalable networking solution for connecting containers, VMs, or bare metal. It provides connectivity using the scalable IP networking principle as a layer 3 approach and can be deployed without overlays or encapsulation. It also handles all the necessary IP routing, security policy rules, and distribution of routes across a cluster of nodes. We can further use calicoctl to configure the networking and policies to be used by the Pod containers.
Hi,
I have a server installed with a single-node Kubernetes cluster. The server has two interfaces with IPs assigned (ens01 and ens02). All installation operations were done through PuTTY using the IP assigned to ens01. Now I need to access the cluster (kubectl get nodes/pods) by logging in with the IP from ens02. Is it possible, and how do I make it work that way?
You have the following options to provide ingress to your pod:
1. A load balancer listening on ens02 and forwarding traffic to the pod
2. CITM (or any other ingress controller) listening on ens02 and forwarding traffic to the pod
3. Kubernetes port forwarding from ens02 to the pod
4. A NodePort service