Cilium is a cloud-native solution that enhances network security and observability between workloads, powered by the Linux kernel’s eBPF (extended Berkeley Packet Filter) technology. This blog post will dive into how Cilium operates within Kubernetes, the role of eBPF, and how to integrate it with monitoring tools like Prometheus and Grafana.
What is Cilium?
Cilium provides secure network connectivity between Kubernetes workloads using network policies defined as CiliumNetworkPolicy resources (a minimal policy sketch follows the list below). It handles various types of network communication within Kubernetes, including:
- Container-to-container communication
- Pod-to-pod communication
- Pod-to-service communication (ClusterIP and NodePort)
- External communication (Ingress and Egress)
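For illustration, here is a minimal CiliumNetworkPolicy sketch that restricts pod-to-pod traffic. The namespace, policy name, and app labels are hypothetical placeholders:

kubectl apply -f - <<EOF
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical policy name
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: backend                  # applies to pods labeled app=backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend               # only pods labeled app=frontend may connect
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
EOF

Once this policy selects the backend pods, they accept TCP 8080 only from frontend pods; all other ingress to them is dropped by default.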
Key Features of Cilium
Cilium enhances network security by monitoring and surfacing issues at multiple layers: DNS, Layer 4 (TCP), and Layer 7 (HTTP). For example, it can detect DNS resolution failures within the last ten minutes, or track unanswered TCP SYN requests and communication timeouts. Cilium also records HTTP response codes and latency percentiles across the cluster, providing Layer 7 application visibility and control that lets you filter traffic by HTTP path, method, or domain name, as the sketch below illustrates.
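To make the Layer 7 control concrete, the following sketch extends a policy with an HTTP rule; the labels and path are again hypothetical:

kubectl apply -f - <<EOF
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-public-only       # hypothetical policy name
spec:
  endpointSelector:
    matchLabels:
      app: api                      # hypothetical pod label
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: client
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/public"           # only GET /public is allowed on port 80
EOF

Requests matching GET /public are forwarded; any other HTTP request to port 80 is rejected by Cilium's Layer 7 proxy.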
Configuration and Integration
Cilium’s configuration is split into two entities: the main configuration is stored in a ConfigMap called cilium-config, while network policies are defined through Custom Resource Definitions (CRDs). eBPF plays a crucial role in Cilium’s functionality, especially in observability and security, with tools like Hubble and Prometheus used for monitoring and troubleshooting.
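Both entities can be inspected with plain kubectl, assuming Cilium runs in the default kube-system namespace:

# View the agent's main configuration
kubectl -n kube-system get configmap cilium-config -o yaml

# List the Cilium CRDs and any policies defined through them
kubectl get crds | grep cilium.io
kubectl get ciliumnetworkpolicies --all-namespaces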
Traditional Network Security vs. Cilium
In traditional Linux network security, iptables is used to filter traffic by IP address and TCP/UDP port. However, in dynamic microservices environments where IP addresses change frequently, maintaining connectivity and scaling rule sets becomes challenging. Cilium addresses this by leveraging eBPF, which provides dynamic visibility and efficient updates to access control lists.
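For contrast, a traditional iptables rule hard-codes the peer's IP address, which is exactly what goes stale when a pod is rescheduled (the address below is illustrative):

# Allow TCP traffic to port 443 only from one hard-coded pod IP
iptables -A INPUT -p tcp -s 10.0.1.25 --dport 443 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j DROP

As soon as the pod restarts with a new IP, the ACCEPT rule no longer matches and the traffic is dropped, so the rule set must be rewritten by hand or by an external controller.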
Deep Dive into eBPF
eBPF (extended Berkeley Packet Filter) is a revolutionary technology that allows for efficient modification and analysis of network packets at the kernel level without changing application code. The Linux kernel supports various eBPF hooks in the network stack, which Cilium utilizes, including:
XDP (eXpress Data Path): Executes BPF programs at the earliest point in the network stack, running on packet data before any other processing happens. This makes it ideal for filtering malicious traffic and DDoS protection.
Traffic Control Ingress and Egress: Attaches to network interfaces, running before Layer 3 processing, with access to packet metadata.
Socket Operations: Hooks attached to specific cgroups, running on TCP events.
Socket Send/Recv: Hooks running on every TCP socket send and receive operation.
These hooks, combined with virtual interfaces (cilium_host, cilium_net, and an optional overlay interface such as cilium_vxlan) and userspace proxies like Envoy, enhance observability, security, networking, and load balancing.
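You can observe these building blocks directly on a node running Cilium: ip link lists the virtual interfaces, and bpftool (where installed; it requires root) lists the programs attached to the XDP and TC hooks:

# List Cilium's virtual interfaces on the node
ip link show | grep cilium

# Show eBPF programs attached at the XDP and TC hooks
sudo bpftool net show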
eBPF runs in the kernel runtime, providing significant benefits in observability, security controls, networking, network security, and load balancing. Programming languages like C++, Go, and Rust can leverage eBPF SDKs to create powerful programs that enhance these functionalities.
Leveraging eBPF Maps
eBPF programs utilize eBPF maps to store and retrieve data in various data structures. These maps can be accessed both from eBPF programs and applications, enabling complex operations and enhanced network security. Here’s how it works:
1. Packet Handling:
- An eBPF program is compiled to bytecode (e.g., foo.o) and loaded into the kernel from user space. When a packet arrives at the node through the network interface (e.g., eth0), the in-kernel program processes it and consults the eBPF maps, which hold the predefined security rules, to decide whether to accept or deny the traffic.
2. eBPF in Kubernetes with Cilium:
In a Kubernetes environment, eBPF plays a crucial role in managing network traffic efficiently.
- Pod Deployment: When a deployment command creates a pod, the etcd service tracks the service endpoints. The kube-proxy, operating in IPVS mode, attaches the service IP addresses and distributes network traffic using IPVS instead of iptables.
3. Cilium Agent:
- Cilium uses the Cilium Agent to collect endpoint IP addresses. These addresses are stored in eBPF maps and persisted in the service map; the commands below show how to inspect those maps.
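You can list these maps from inside the agent pod. Note that recent Cilium releases name the in-pod binary cilium-dbg, so adjust accordingly:

# List the eBPF maps maintained by the agent
kubectl -n kube-system exec ds/cilium -- cilium map list

# Show the service-to-backend entries held in the load-balancing map
kubectl -n kube-system exec ds/cilium -- cilium bpf lb list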
Implementing Cilium with Prometheus and Grafana
Prerequisites
- Docker, Kind, and Helm (used in the steps below).
- The Kubernetes command-line tool kubectl (installed in step 2).
Steps to Deploy Cilium and Monitoring Tools
1. Create a Kubernetes Cluster with Kind:
curl -LO https://raw.githubusercontent.com/cilium/cilium/v1.15.5/Documentation/installation/kind-config.yaml
kind create cluster --config=kind-config.yaml
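The Cilium kind configuration disables the default CNI, so the new nodes will report NotReady until Cilium is installed in step 4. You can still confirm the cluster came up:

kubectl get nodes
# STATUS shows NotReady until the Cilium CNI is deployed in step 4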
2. Install kubectl:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
3. Deploy Prometheus and Grafana:
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/HEAD/examples/kubernetes/addons/prometheus/monitoring-example.yaml
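This manifest creates a cilium-monitoring namespace containing both the Prometheus and Grafana deployments, the same namespace used by the port-forward commands in steps 5 and 6. Confirm the pods come up before moving on:

kubectl -n cilium-monitoring get pods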
4. Install Cilium:
Download the Cilium release tarball and change to the Kubernetes install directory:
curl -LO https://github.com/cilium/cilium/archive/main.tar.gz
tar xzf main.tar.gz
cd cilium-main/install/kubernetes
Next, deploy Cilium via Helm and enable all metrics:
helm install cilium ./cilium \
--namespace kube-system \
--set prometheus.enabled=true \
--set operator.prometheus.enabled=true \
--set hubble.enabled=true \
--set hubble.metrics.enableOpenMetrics=true \
--set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\,source_namespace\,source_workload\,destination_ip\,destination_namespace\,destination_workload\,traffic_direction}"
5. Access Grafana:
kubectl -n cilium-monitoring port-forward service/grafana --address 0.0.0.0 --address :: 3000:3000
# access Grafana at http://localhost:3000
6. Access Prometheus:
kubectl -n cilium-monitoring port-forward service/prometheus --address 0.0.0.0 --address :: 9090:9090
# access Prometheus at http://localhost:9090
Monitoring Cilium with Prometheus and Grafana
Once Cilium and the monitoring tools are deployed, you can open Grafana to visualize the metrics collected by Prometheus. Grafana ships with preloaded Cilium dashboards, providing insight into the health of the Cilium environment and the network flows within your Kubernetes cluster.
The preloaded dashboards cover the core Cilium metrics, API metrics, Cilium metrics on endpoints, the Kubernetes integration, and Hubble metrics on the network.
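With the Prometheus port-forward from step 6 active, you can also query Hubble metrics directly through Prometheus's HTTP API. The metric name below assumes the drop metric enabled in the Helm values above:

# Per-second rate of dropped flows over the last five minutes
curl -s http://localhost:9090/api/v1/query \
  --data-urlencode 'query=rate(hubble_drop_total[5m])'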
Conclusion
Cilium, powered by eBPF, offers robust network security and observability for Kubernetes environments. By integrating with Prometheus and Grafana, you can effectively monitor and troubleshoot networking issues, ensuring a secure and efficient cloud-native infrastructure.