Beginner's guide to Kubernetes Services with examples



You have learned so far about pods and different ways to deploy them using Deployments, ReplicaSets, etc. Now you may have a requirement to access a Pod from an external network. With everything we have learned so far, pods can only communicate internally, but what if we need to access a Pod from outside the Kubernetes cluster?

 

Why do we need Kubernetes Services?

In the non-Kubernetes world, a sysadmin would configure each client app by specifying the exact IP address or hostname of the server providing the service in the client's configuration files. Doing the same in Kubernetes wouldn't work, because:

  • Pods are ephemeral: They may come and go at any time, whether it’s because a pod is removed from a node to make room for other pods, because someone scaled down the number of pods, or because a cluster node has failed.
  • Kubernetes assigns an IP address to a pod after the pod has been scheduled to a node and before it’s started. Clients thus can’t know the IP address of the server pod up front.
  • Horizontal scaling means multiple pods may provide the same service. Each of those pods has its own IP address. Clients shouldn’t care how many pods are backing the service and what their IPs are. They shouldn’t have to keep a list of all the individual IPs of pods. Instead, all those pods should be accessible through a single IP address.

 

Overview of Kubernetes Services

  • A Kubernetes Service is a resource you create to make a single, constant point of entry to a group of pods providing the same service.
  • Each service has an IP address and port that never change while the service exists.
  • Clients can open connections to that IP and port, and those connections are then routed to one of the pods backing that service.
  • This way, clients of a service don’t need to know the location of individual pods providing the service, allowing those pods to be moved around the cluster at any time.
  • The kube-proxy agent running on each node watches the Kubernetes API for new Services and Endpoints.
  • After a Service is created, kube-proxy configures forwarding rules on the node so that traffic sent to the Service's ClusterIP and port is redirected to one of the Service's endpoints (the backing pods). For NodePort Services, a port from the node port range is also allocated, and kube-proxy forwards traffic arriving on that port to the same endpoints.

[Diagram: external clients connect to three pods through a single Kubernetes Service]

This diagram shows the basic idea behind a Kubernetes Service. Here we create a Service which can be used to access all three Pods from outside the cluster. This way the different Pods are exposed through a single IP address which external clients can connect to, and the Service address doesn't change even if a pod's IP address changes.

 

Understanding different Kubernetes Service Types

There are different Service types available which you can choose from depending on your environment:

  • ClusterIP: the default type; the Service gets a cluster-internal IP and is reachable only from inside the cluster.
  • NodePort: allocates a specific port on every node, which needs to be opened on the firewall. As long as external users can reach the nodes' IP addresses, they can reach the Service through that node port.
  • LoadBalancer: currently implemented mainly by public cloud providers. So if you're running Kubernetes in Azure or AWS, you will get an external load balancer provisioned for the Service.
  • ExternalName: a relatively new type that works on DNS names; the redirection happens at the DNS level instead of via proxying.
  • Service without selector: used for direct connections based on an IP and port combination without an automatically managed endpoint. This is useful for connections to an external database or between namespaces (a minimal sketch of these last two types follows this list).
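
For illustration, here is a minimal sketch of the last two types. The names external-db, db.example.com, legacy-db and the IP 192.0.2.10 are placeholders used only for this example and are not part of the lab we build later:

apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName          # DNS-level redirection, no proxying
  externalName: db.example.com
---
apiVersion: v1
kind: Service
metadata:
  name: legacy-db             # Service without a selector
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-db             # must match the Service name
subsets:
- addresses:
  - ip: 192.0.2.10            # external database IP (placeholder)
  ports:
  - port: 5432

Because the second Service has no selector, Kubernetes does not create its Endpoints automatically; the manually created Endpoints object above tells the Service where to send the traffic.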

 

Create Kubernetes Service

In the previous diagram you saw that a service can be backed by more than one pod. Connections to the service are load-balanced across all the backing pods. But how exactly do you define which pods are part of the service and which aren’t?

Recall the labels and selectors we learned about with ReplicaSets and ReplicationControllers; the same logic is used here to identify the pods. We can create a service either by exposing an existing object or by creating a new Service object. We will explore both of these options:

 

Using kubectl expose

The easiest way to create a service is through kubectl expose. We will use the type NodePort so that the allocated node port can be used to access the application from outside the cluster, for example from the controller node.

First let me create a deployment with nginx image:

[root@controller ~]# kubectl create deployment nginx-lab-1 --image=nginx --replicas=3 --dry-run=client -o yaml > nginx-lab-1.yml

This command will not create the deployment; instead it gives us a deployment template in YAML format which we can modify as per our requirements and then use to create the deployment. I find this approach easier than writing a new template file from scratch.

Next let me modify a few sections; the following is my final template file to create a new deployment nginx-lab-1 with the label app=dev and 3 replicas.

[root@controller ~]# cat nginx-lab-1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dev
  name: nginx-lab-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: dev
  template:
    metadata:
      labels:
        app: dev
    spec:
      containers:
      - image: nginx
        name: nginx

Let us create our deployment using this YAML file:

[root@controller ~]# kubectl create -f nginx-lab-1.yml
deployment.apps/nginx-lab-1 created

The command executed successfully and the three pods have started creating their containers:

[root@controller ~]# kubectl get pods
NAME                           READY   STATUS              RESTARTS   AGE
nginx-lab-1-58f9bf94f7-hq9nv   0/1     ContainerCreating   0          3s
nginx-lab-1-58f9bf94f7-jk85s   0/1     ContainerCreating   0          3s
nginx-lab-1-58f9bf94f7-l2slb   0/1     ContainerCreating   0          3s

We can check the status again after some time and the containers should be in the Running state:

[root@controller ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
nginx-lab-1-58f9bf94f7-hq9nv   1/1     Running   0          16s
nginx-lab-1-58f9bf94f7-jk85s   1/1     Running   0          16s
nginx-lab-1-58f9bf94f7-l2slb   1/1     Running   0          16s
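
Optionally, before exposing the deployment, you can confirm the label that the Service selector will match; the LABELS column should show app=dev on all three pods (along with a pod-template-hash label added automatically by the Deployment):

[root@controller ~]# kubectl get pods --show-labels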

To create the service, you'll tell Kubernetes to expose the Deployment you created earlier; here port 80 is the default port on which our nginx application is listening. This command will randomly assign a NodePort to the service:

[root@controller ~]# kubectl expose deployment nginx-lab-1 --type=NodePort --port=80
service/nginx-lab-1 exposed

To check the list of available services:

[root@controller ~]# kubectl get service
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP        35d
nginx-lab-1   NodePort    10.108.252.53   <none>        80:32481/TCP   6s

So now we have a new service nginx-lab-1. The list shows that the IP address assigned to the service is 10.108.252.53. Because this is the cluster IP, it’s only accessible from inside the cluster.
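
If you only need the allocated node port value, you can also extract it directly with a JSONPath query; for this service it prints 32481:

[root@controller ~]# kubectl get svc nginx-lab-1 -o jsonpath='{.spec.ports[0].nodePort}'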

To get more details about the service you can use:

[root@controller ~]# kubectl describe svc nginx-lab-1
Name:                     nginx-lab-1
Namespace:                default
Labels:                   app=dev
Annotations:              <none>
Selector:                 app=dev
Type:                     NodePort
IP:                       10.108.252.53
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32481/TCP
Endpoints:                10.36.0.1:80,10.44.0.1:80,10.44.0.2:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
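
The Endpoints field above lists the IP:port pairs of the individual pods backing the service. Kubernetes maintains these in a separate Endpoints object which you can inspect directly; it should show the same three pod IPs:

[root@controller ~]# kubectl get endpoints nginx-lab-1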

 

Access container inside the cluster

The kubectl exec command allows you to remotely run arbitrary commands inside an existing container of a pod. This comes in handy when you want to examine the contents, state, and/or environment of a container. Here I am verifying connectivity to the ClusterIP from one of the pods that is part of the deployment:

[root@controller ~]# kubectl exec nginx-lab-1-58f9bf94f7-jk85s -- curl -s http://10.108.252.53
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
NOTE:
It is possible you may get "command terminated with exit code 7", which most likely means the required port is blocked by the firewall. We had already allowed the port range 30000-32767/tcp, which is the range used for NodePorts, so if this is not allowed in your environment you must allow the respective NodePort on the worker nodes.
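
Assuming the worker nodes use firewalld (as in my lab), the NodePort range can be opened with something like the following; adjust the commands for whatever firewall tooling your nodes use:

[root@worker-1 ~]# firewall-cmd --permanent --add-port=30000-32767/tcp
[root@worker-1 ~]# firewall-cmd --reload

Repeat this on every worker node that should accept NodePort traffic.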

 

Access container outside the cluster

Now to access the container externally from the outside network, we can use the public IP of an individual worker node along with the NodePort in the following format:

curl http://<PUBLIC-IP>:<NODE-PORT>

You can check my Lab Environment: the public IPs of my worker nodes are 192.168.43.49 and 192.168.43.50, so depending on which worker node an individual Pod is running on, I can use the respective public IP. To get the worker node details of the individual pods:

[root@controller ~]# kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP              NODE                   NOMINATED NODE   READINESS GATES
nginx-lab-1-58f9bf94f7-hq9nv   1/1     Running   0          47m   10.44.0.2       worker-2.example.com   <none>           <none>
nginx-lab-1-58f9bf94f7-jk85s   1/1     Running   0          52m   10.44.0.1       worker-2.example.com   <none>           <none>
nginx-lab-1-58f9bf94f7-l2slb   1/1     Running   1          52m   10.36.0.1       worker-1.example.com   <none>           <none>

For example, to access the nginx-lab-1-58f9bf94f7-jk85s pod running on the worker-2 node, I would use the public IP of the worker-2 node, i.e. 192.168.43.50, with NodePort 32481:

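The request can be made from any machine that can reach the worker nodes' IP addresses, for example:

curl http://192.168.43.50:32481

Assuming the NodePort is reachable, this should return the same "Welcome to nginx!" page we saw earlier.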

 

Creating a service through a YAML descriptor

In this section we will create a service using YAML descriptor file. A Service in Kubernetes is a REST object, similar to a Pod. Like all of the REST objects, you can POST a Service definition to the API server to create a new instance.

First of all we need a Deployment with a number of pods carrying a certain label which the Service object can use as its selector. I had already created a deployment in the previous example, but for the sake of demonstration I will delete it and re-create another deployment from a new YAML file.
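
The cleanup of the earlier deployment and service is not shown elsewhere in this tutorial; assuming nothing else was changed, it would look like this:

[root@controller ~]# kubectl delete service nginx-lab-1
service "nginx-lab-1" deleted

[root@controller ~]# kubectl delete deployment nginx-lab-1
deployment.apps "nginx-lab-1" deleted

Here is the YAML file for the new deployment: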

[root@controller ~]# cat nginx-deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dev
  name: nginx-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: dev
  template:
    metadata:
      labels:
        app: dev
    spec:
      containers:
      - image: nginx
        name: nginx

To create this deployment with 2 replicas:

[root@controller ~]# kubectl create -f nginx-deploy.yml
deployment.apps/nginx-deploy created

Verify the status of the newly created pods and deployment:

[root@controller ~]# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE                   NOMINATED NODE   READINESS GATES
nginx-deploy-58f9bf94f7-4cwlr   1/1     Running   0          43s   10.36.0.1       worker-1.example.com   <none>           <none>
nginx-deploy-58f9bf94f7-98jr8   1/1     Running   0          43s   10.44.0.1       worker-2.example.com   <none>           <none>

[root@controller ~]# kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   2/2     2            2           65s

Next we will create our Service object. We will need the KIND and VERSION values to create the Service object. To get the KIND value we can list the api-resources and look for the matching KIND value:

[root@controller ~]# kubectl api-resources | grep -iE 'KIND|service'
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
serviceaccounts                   sa                                          true         ServiceAccount
services                          svc                                         true         Service
apiservices                                    apiregistration.k8s.io         false        APIService

Now that we know our KIND value is Service, we can check for the VERSION value using the following command:

[root@controller ~]# kubectl explain Service | head -n 2
KIND:     Service
VERSION:  v1

So the KIND value is Service and the VERSION would be v1 to create a Service object. Here is a sample service file which we will use to create our object, with a selector matching the label on our pods, i.e. app=dev:

[root@controller ~]# cat nginx-service.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy
  labels:
    app: dev
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: dev
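
Since targetPort is not specified in this file, it defaults to the same value as port (80), and the node port will be picked automatically from the 30000-32767 range. If you wanted to pin both explicitly, the ports section could look like this instead (31500 is just an illustrative value from the allowed range, not the port used later in this tutorial):

  ports:
  - port: 80          # port exposed on the Service's ClusterIP
    targetPort: 80    # container port the traffic is forwarded to
    nodePort: 31500   # fixed node port, must be within 30000-32767
    protocol: TCP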

Let's create our service:

[root@controller ~]# kubectl create -f nginx-service.yml
service/nginx-deploy created

Check the status of the service along with the mapped labels:

[root@controller ~]# kubectl get svc --show-labels
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   LABELS
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP        35d   component=apiserver,provider=kubernetes
nginx-deploy   NodePort    10.110.95.181   <none>        80:31499/TCP   13m   app=dev

To get more details of the service:

[root@controller ~]# kubectl describe service nginx-deploy
Name:                     nginx-deploy
Namespace:                default
Labels:                   app=dev
Annotations:              <none>
Selector:                 app=dev
Type:                     NodePort
IP:                       10.110.95.181
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31499/TCP
Endpoints:                10.36.0.1:80,10.44.0.1:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Now, as we did earlier in this tutorial, we can connect to the containers using the ClusterIP from within the cluster and the public IP of a worker node from the external network.

 

Access container inside the cluster

To connect to the container from within the cluster network:

[root@controller ~]# kubectl exec nginx-deploy-58f9bf94f7-4cwlr -- curl -s http://10.110.95.181
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

 

Access container outside the cluster

We can use the public IP of the worker node to connect to the container using the NodePort, which can be checked with the following command:

[root@controller ~]# kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP        35d
nginx-deploy   NodePort    10.110.95.181   <none>        80:31499/TCP   5s

Then try to access the pod using the public IP of the respective worker node:

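Again, this can be run from any machine that can reach the worker nodes' IP addresses, for example:

curl http://192.168.43.49:31499

Assuming the NodePort is open on the firewall, this should return the same nginx welcome page.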

Delete Kubernetes Service

We already removed the service created in the first example, but let me walk through it again for the newly created service. First of all you will need the name of the service to be deleted, which you can get from the following command:

[root@controller ~]# kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP        35d
nginx-deploy   NodePort    10.110.95.181   <none>        80:31499/TCP   22m

Here we want to delete nginx-deploy service, so to delete a service we can use:

[root@controller ~]# kubectl delete service nginx-deploy
service "nginx-deploy" deleted

Verify if the service is actually deleted:

[root@controller ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   35d
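
If you are done with the lab, you can clean up the deployment as well; note that deleting a service does not delete the pods or the deployment behind it:

[root@controller ~]# kubectl delete deployment nginx-deploy
deployment.apps "nginx-deploy" deleted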

 

Conclusion

In this Kubernetes Tutorial we learned how to create Kubernetes Service resources to expose the services available in your application, regardless of how many pod instances are providing each service. You've learned how Kubernetes:

  • Exposes multiple pods that match a certain label selector under a single, stable IP address and port.
  • Makes services accessible from inside the cluster by default, but allows you to make a service accessible from outside the cluster by setting its type to either NodePort or LoadBalancer.
  • Lets you run arbitrary commands (such as curl) inside an existing pod's container with kubectl exec.