Cilium: Boosting Network Security in Kubernetes with eBPF

Cilium is an open-source solution that uses eBPF (Extended Berkeley Packet Filter) technology to provide advanced networking, security and observability features in Kubernetes environments.





Unlike traditional solutions, which rely on iptables rules or other less efficient methods, Cilium operates directly in the Linux kernel, providing greater performance and flexibility.

Cilium’s importance in the Kubernetes community was reinforced by its graduation within the Cloud Native Computing Foundation (CNCF) in October 2023, a milestone that reflects its maturity and growing adoption in production environments.


What is eBPF?

eBPF is a technology that allows sandboxed programs to run safely inside the Linux kernel without modifying kernel source code or loading kernel modules. In the context of Kubernetes, it enables:

* Efficient network management without relying on iptables.

* Advanced monitoring of traffic and applications.

* Implementation of security policies directly in the kernel.
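Once Cilium is installed (covered below), you can see eBPF at work by asking the agent to dump its BPF-backed datapath state. A sketch, assuming the default `cilium` DaemonSet in `kube-system` and a running cluster:

```shell
# Dump the eBPF load-balancing maps maintained by the Cilium agent.
kubectl -n kube-system exec ds/cilium -- cilium bpf lb list

# List the endpoints Cilium tracks in its BPF maps.
kubectl -n kube-system exec ds/cilium -- cilium bpf endpoint list
```

These run the agent's debug CLI inside the container, so they only work against a cluster where Cilium is already deployed.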


Cilium features


1. Advanced Networking

Cilium acts as a CNI (Container Network Interface), enabling:

* Communication between pods with lower latency.

* Support for load balancing at layer 4 and 7.

* Native integration with service meshes such as Istio.


2. Enhanced Security

* Identity-based access control (Identity-Aware Security Policies).

* Firewall rules applied in the kernel, reducing the overhead of traditional rules.

* Detailed connection monitoring to prevent suspicious traffic.
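As a sketch of what identity-based policy looks like, the hypothetical manifest below allows pods labeled `app=frontend` to reach pods labeled `app=backend` on TCP port 8080 and blocks other ingress to the backend. The label names are illustrative, not from this lab:

```shell
# Hypothetical identity-aware policy: selection is by pod labels
# (identities), not IP addresses, so it survives pod rescheduling.
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
EOF
```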


3. Observability

* Integrated tools for traffic inspection.

* Export of metrics to Prometheus and Grafana.

* Detailed view of application behavior in real time.


Installing Cilium on Kubernetes

To install Cilium on your Kubernetes cluster, follow the steps below.

Prerequisites

* A working Kubernetes cluster (minikube, kind, or a managed cluster such as AKS, EKS, GKE).

* kubectl installed and configured.


🔶 Tip: If you are running minikube in a VM, I recommend at least 4 vCPUs and 8 GB of RAM.


Installing kubectl:


$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

$ sudo install kubectl /usr/local/bin/
 

Installing Cilium

See the Cilium documentation for more information.


In this lab, we’ll be using Minikube. See the Minikube documentation to learn how to install it.



$ cilium install --version 1.17.2
 


Checking the installation

$ kubectl get pods -n kube-system | grep cilium
 
cilium-ckzz2                       1/1     Running   0             86s
cilium-envoy-mxhsl                 1/1     Running   0             86s
cilium-operator-799f498c8-lp8rw    1/1     Running   0             86s
 

It may take a few minutes for the pods to initialize.
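Instead of polling with `kubectl get pods`, you can block until the Cilium agent pods report Ready. A convenience, assuming the default `k8s-app=cilium` label:

```shell
# Wait up to 5 minutes for every Cilium agent pod to become Ready.
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=cilium --timeout=300s
```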



$ kubectl get DaemonSet -n=kube-system

NAME           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
cilium         1         1         1       1            1           kubernetes.io/os=linux   77m
cilium-envoy   1         1         1       1            1           kubernetes.io/os=linux   77m
kube-proxy     1         1         1       1            1           kubernetes.io/os=linux   77m

 
$ kubectl describe DaemonSet cilium -n=kube-system | grep Status

Pods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed
  

Check the logs:


$ kubectl logs -n kube-system -l k8s-app=cilium
 

Also check the events:


$ kubectl get events --sort-by=.metadata.creationTimestamp
 

In case of errors, try restarting Cilium:


$ kubectl rollout restart ds cilium -n kube-system
 

Install the Cilium CLI (the same binary used by the cilium install command above), if you haven’t already:


$ CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)

$ CLI_ARCH=amd64

$ if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi

$ curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

$ sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum

$ sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin

$ rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
 

If everything is correct, you’ll see the Cilium pods running normally.


$ cilium status --wait
 

Run the following command to validate that the cluster has adequate network connectivity:


$ cilium connectivity test
 

After a few minutes, the test should be complete.


Configuring Hubble observability

Hubble is Cilium’s observability layer, and can be used to gain visibility of the entire Kubernetes cluster at the network and security layers.


$ cilium hubble enable
 

Run cilium status to validate that Hubble is enabled and running.

Install the Hubble Client

To access the observability data collected by Hubble, you must first install the Hubble CLI.


$ HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)

$ HUBBLE_ARCH=amd64

$ if [ "$(uname -m)" = "aarch64" ]; then HUBBLE_ARCH=arm64; fi

$ curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}

$ sha256sum --check hubble-linux-${HUBBLE_ARCH}.tar.gz.sha256sum

$ sudo tar xzvfC hubble-linux-${HUBBLE_ARCH}.tar.gz /usr/local/bin

$ rm hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
 

Validate access to the Hubble API

$ cilium hubble port-forward &
[1] 47502
 
ℹ️  Hubble Relay is available at 127.0.0.1:4245

$ hubble status

Healthcheck (via localhost:4245): Ok
Current/Max Flows: 4,095/4,095 (100.00%)
Flows/s: 18.82
Connected Nodes: 1/1

You can also query the flows API:


$ hubble observe
 
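In practice you will usually filter the flow stream rather than read it raw. A few examples using flags from the Hubble CLI (output depends on your cluster):

```shell
# Only flows that were dropped by policy or the datapath.
hubble observe --verdict DROPPED

# Flows originating from a specific pod.
hubble observe --from-pod default/tiefighter

# Follow flows in one namespace as they happen.
hubble observe --namespace default --follow
```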

Star Wars Demo App

Install the Star Wars demo application from the Cilium documentation so you can run the next tests.

- May the Force be with you.


Inspecting Network Flows using the CLI

Let’s issue some requests to generate some traffic. For now, install the Star Wars demo without creating any policies, and run the following commands; with no policy in place, both requests succeed.


$ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
 

$ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
Panic: deathstar exploded
 

Now create the policies as described in the Cilium Star Wars demo documentation, and check that some access is denied.
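For reference, the L7 part of that demo policy looks roughly like this (abridged from the Cilium Star Wars demo; treat the demo documentation as authoritative). It allows empire ships to POST to /v1/request-landing on the deathstar, and nothing else:

```shell
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: rule1
spec:
  description: "L7 policy to restrict access to specific HTTP call"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "POST"
          path: "/v1/request-landing"
EOF
```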


$ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
Access denied
 

We can also see all the endpoints managed by Cilium (each pod is represented by an endpoint in Cilium):


$ kubectl get cep --all-namespaces
 
NAMESPACE       NAME                              SECURITY IDENTITY   ENDPOINT STATE   IPV4         IPV6
cilium-test-1   client-645b68dcf7-7cxfq           14824               ready            10.0.0.62
cilium-test-1   client2-66475877c6-2f8r5          36115               ready            10.0.0.124
cilium-test-1   echo-same-node-6c98489c8d-62k7b   28856               ready            10.0.0.54
default         deathstar-8c4c77fb7-qh765         3898                ready            10.0.0.26
default         deathstar-8c4c77fb7-r8hlf         3898                ready            10.0.0.243
default         tiefighter                        31334               ready            10.0.0.103
default         xwing                             24825               ready            10.0.0.22
kube-system     coredns-668d6bf9bc-dn2v9          54688               ready            10.0.0.71
kube-system     hubble-relay-59cc4d545b-wtjdq     2611                ready            10.0.0.142
kube-system     hubble-ui-76d4965bb6-x4ng5        8275                ready            10.0.0.38

Hubble UI

Enable the Hubble user interface by running the following command:


$ cilium hubble enable --ui
 
$ cilium hubble ui
 
ℹ️  Opening "http://localhost:12000" in your browser...
 

⚠️ IMPORTANT ⚠️

Minikube must have been started with the option --listen-address=0.0.0.0 or --static-ip <IP> so that you can access the Hubble UI over the network.

✅ An alternative is to use socat:


$ sudo apt install socat
 
$ nohup socat TCP-LISTEN:12001,fork TCP:127.0.0.1:12000 &
 

I bet you didn’t know that one! 😆


Monitoring Traffic with Hubble

As you already know, Hubble is the Cilium observability tool.

Open your browser at http://<IP>:12001/


Run the connectivity test again, and observe the Hubble GUI.



Creating a Test Pod

Use the YAML below to create a test pod.


$ cat cilium-test-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  namespace: linuxelite
  name: cilium-test-pod
  labels:
    app: cilium-test
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
  restartPolicy: Never
   

Create the pod:


$ kubectl create namespace linuxelite

$ kubectl apply -f cilium-test-pod.yaml

$ kubectl -n linuxelite get pods
NAME              READY   STATUS    RESTARTS   AGE
cilium-test-pod   1/1     Running   0          18s
  

Now let’s generate some traffic and watch it in Hubble:


$ kubectl -n linuxelite exec -it cilium-test-pod -- ping google.com

PING google.com (142.251.135.142): 56 data bytes
64 bytes from 142.251.135.142: seq=0 ttl=56 time=9.831 ms
64 bytes from 142.251.135.142: seq=1 ttl=56 time=9.050 ms
64 bytes from 142.251.135.142: seq=2 ttl=56 time=11.945 ms
64 bytes from 142.251.135.142: seq=3 ttl=56 time=6.733 ms
64 bytes from 142.251.135.142: seq=4 ttl=56 time=37.642 ms
  


In another terminal tab, use Cilium’s built-in monitor to inspect the traffic:


$ kubectl exec -n kube-system ds/cilium -- cilium monitor --type trace

Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Listening for events on 4 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
time="2025-03-16T23:27:42.731883377Z" level=info msg="Initializing dissection cache..." subsys=monitor
-> endpoint 532 flow 0xcc30b1be , identity host->8275 state new ifindex lxc143c320db00c orig-ip 10.0.0.77: 10.0.0.77:47350 -> 10.0.0.38:8081 tcp SYN
-> stack flow 0x636c3669 , identity 8275->host state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.38:8081 -> 10.0.0.77:47350 tcp SYN, ACK
-> endpoint 532 flow 0xcc30b1be , identity host->8275 state established ifindex lxc143c320db00c orig-ip 10.0.0.77: 10.0.0.77:47350 -> 10.0.0.38:8081 tcp ACK
-> endpoint 532 flow 0xcc30b1be , identity host->8275 state established ifindex lxc143c320db00c orig-ip 10.0.0.77: 10.0.0.77:47350 -> 10.0.0.38:8081 tcp ACK
-> stack flow 0x636c3669 , identity 8275->host state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.38:8081 -> 10.0.0.77:47350 tcp ACK
-> stack flow 0x636c3669 , identity 8275->host state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.38:8081 -> 10.0.0.77:47350 tcp ACK, FIN
-> endpoint 532 flow 0xcc30b1be , identity host->8275 state established ifindex lxc143c320db00c orig-ip 10.0.0.77: 10.0.0.77:47350 -> 10.0.0.38:8081 tcp ACK, FIN
-> endpoint 3213 flow 0x8c0cf16d , identity host->28856 state new ifindex lxc40ece8a8f974 orig-ip 10.0.0.77: 10.0.0.77:38522 -> 10.0.0.54:8181 tcp SYN
-> stack flow 0xbcfe676b , identity 28856->host state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.54:8181 -> 10.0.0.77:38522 tcp SYN, ACK
-> endpoint 3213 flow 0x8c0cf16d , identity host->28856 state established ifindex lxc40ece8a8f974 orig-ip 10.0.0.77: 10.0.0.77:38522 -> 10.0.0.54:8181 tcp ACK
-> endpoint 3213 flow 0x8c0cf16d , identity host->28856 state established ifindex lxc40ece8a8f974 orig-ip 10.0.0.77: 10.0.0.77:38522 -> 10.0.0.54:8181 tcp ACK
-> stack flow 0xbcfe676b , identity 28856->host state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.54:8181 -> 10.0.0.77:38522 tcp ACK
-> endpoint 3213 flow 0x61c76007 , identity host->28856 state new ifindex lxc40ece8a8f974 orig-ip 10.0.0.77: 10.0.0.77:42912 -> 10.0.0.54:8080 tcp SYN
-> stack flow 0xf409f8b1 , identity 28856->host state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.54:8080 -> 10.0.0.77:42912 tcp SYN, ACK
-> endpoint 3213 flow 0x61c76007 , identity host->28856 state established ifindex lxc40ece8a8f974 orig-ip 10.0.0.77: 10.0.0.77:42912 -> 10.0.0.54:8080 tcp ACK
-> stack flow 0xbcfe676b , identity 28856->host state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.54:8181 -> 10.0.0.77:38522 tcp ACK, FIN
-> endpoint 3213 flow 0x61c76007 , identity host->28856 state established ifindex lxc40ece8a8f974 orig-ip 10.0.0.77: 10.0.0.77:42912 -> 10.0.0.54:8080 tcp ACK
-> endpoint 3213 flow 0x8c0cf16d , identity host->28856 state established ifindex lxc40ece8a8f974 orig-ip 10.0.0.77: 10.0.0.77:38522 -> 10.0.0.54:8181 tcp ACK, FIN
-> stack flow 0xf409f8b1 , identity 28856->host state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.54:8080 -> 10.0.0.77:42912 tcp ACK
-> stack flow 0xf409f8b1 , identity 28856->host state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.54:8080 -> 10.0.0.77:42912 tcp ACK, FIN
-> endpoint 3213 flow 0x61c76007 , identity host->28856 state established ifindex lxc40ece8a8f974 orig-ip 10.0.0.77: 10.0.0.77:42912 -> 10.0.0.54:8080 tcp ACK
-> endpoint 3213 flow 0x61c76007 , identity host->28856 state established ifindex lxc40ece8a8f974 orig-ip 10.0.0.77: 10.0.0.77:42912 -> 10.0.0.54:8080 tcp ACK, FIN
 

If Cilium is managing the network correctly, you will see the test pod’s connections being logged in Cilium’s monitor.



Conclusion

Cilium is a powerful solution that significantly improves network security in Kubernetes compared with the limitations of traditional approaches.

With support for eBPF and integration with monitoring tools, it becomes an ideal choice for modern clusters.

If you want to boost the security and performance of your Kubernetes environment, trying out Cilium could be a great next step!

See also the Cheat Sheet.


To find out more, visit the official Cilium page.


Share this post and keep following our site for more news on Kubernetes and open-source technologies!
