ClusterIP services. If yes, then the client's network namespace cookie is used
as the source. This allows implementing affinity at the socket layer, where the
socket-LB operates (a source IP is not available there, as the endpoint
selection happens before a network packet has been built by the kernel). If the
socket-LB is not in use (i.e. the load balancing is done at the pod network
interface, on a per-packet basis), then the request's source IP address is used
as the source.

The session affinity of a service with multiple ports is per service IP and
port. That is, all requests for a given service which are sent from the same
source to the same service port will be routed to the same service endpoint;
but two requests for the same service, sent from the same source but to
different service ports, may be routed to distinct service endpoints. Note that
if the session affinity feature is used in combination with Maglev consistent
hashing to select backends, then Maglev will not take the source port as input
for its hashing in order to respect the user's ClientIP choice (see also
`GH#26709 `__ for further details).

kube-proxy Replacement Health Check server
******************************************

To enable the health check server for the kube-proxy replacement, the
``kubeProxyReplacementHealthzBindAddr`` option has to be set (disabled by
default). The option accepts the IP address with port for the health check
server to serve on. E.g. to enable it for IPv4 interfaces, set
``kubeProxyReplacementHealthzBindAddr='0.0.0.0:10256'``; for IPv6,
``kubeProxyReplacementHealthzBindAddr='[::]:10256'``. The health check server
is accessible via the HTTP ``/healthz`` endpoint.

LoadBalancer Source Ranges Checks
*********************************

When a ``LoadBalancer`` service is configured with
``spec.loadBalancerSourceRanges``, Cilium's eBPF kube-proxy replacement
restricts access to the service from outside (e.g. external world traffic) to
the white-listed CIDRs specified in the field. If the field is empty, no
access restrictions are applied.

When the service is accessed from inside the cluster, the kube-proxy
replacement ignores the field, regardless of whether it is set. This means
that any pod or any host process in the cluster will be able to access the
``LoadBalancer`` service internally.

By default, the white-listed CIDRs specified in
``spec.loadBalancerSourceRanges`` only apply to the ``LoadBalancer`` service,
but not to the corresponding ``NodePort`` or ``ClusterIP`` services which get
installed along with the ``LoadBalancer`` service. If this behavior is not
desired, there are two options available:

One possibility is to avoid the creation of the corresponding ``NodePort`` and
``ClusterIP`` services via the ``service.cilium.io/type`` annotation:

.. code-block:: yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: example-service
      annotations:
        service.cilium.io/type: LoadBalancer
    spec:
      ports:
      - port: 80
        targetPort: 80
      type: LoadBalancer
      loadBalancerSourceRanges:
      - 192.168.1.0/24

The other possibility is to propagate the white-listed CIDRs to all externally
exposed service types. Meaning, ``NodePort`` as well as ``ClusterIP`` (if
externally accessible, see the :ref:`External Access To ClusterIP Services `
section) also filter traffic based on the source IP addresses. This option can
be enabled in Helm via ``bpf.lbSourceRangeAllTypes=true``.

By default, ``loadBalancerSourceRanges`` specifies an allow-list of CIDRs,
meaning that traffic not originating from those CIDRs is automatically
dropped. Cilium also supports turning this list into a deny-list, in order to
block traffic from certain CIDRs while allowing everything else. This behavior
can be selected through the ``service.cilium.io/src-ranges-policy`` annotation,
which accepts the values ``allow`` or ``deny``.
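The effect of the two annotation modes can be sketched with a small,
hypothetical helper (illustrative Python using the standard ``ipaddress``
module, not Cilium's datapath code): in ``allow`` mode a source must fall
inside one of the CIDRs to pass, in ``deny`` mode it must fall outside all of
them.

```python
# Illustrative sketch of loadBalancerSourceRanges filtering (not Cilium code).
# mode "allow": pass only sources inside one of the CIDRs.
# mode "deny":  pass only sources outside all of the CIDRs.
import ipaddress

def source_allowed(src_ip: str, cidrs: list, mode: str = "allow") -> bool:
    ip = ipaddress.ip_address(src_ip)
    in_ranges = any(ip in ipaddress.ip_network(c) for c in cidrs)
    return in_ranges if mode == "allow" else not in_ranges

ranges = ["192.168.1.0/24"]
print(source_allowed("192.168.1.10", ranges, "allow"))  # True
print(source_allowed("10.0.0.5", ranges, "allow"))      # False
print(source_allowed("10.0.0.5", ranges, "deny"))       # True
```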
The default ``loadBalancerSourceRanges`` behavior is equal to
``service.cilium.io/src-ranges-policy: allow``:

.. code-block:: yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: example-service
      annotations:
        service.cilium.io/type: LoadBalancer
        service.cilium.io/src-ranges-policy: allow
    spec:
      ports:
      - port: 80
        targetPort: 80
      type: LoadBalancer
      loadBalancerSourceRanges:
      - 192.168.1.0/24

In order to turn the CIDR list into a deny-list while allowing traffic not
originating from this set, this can be changed into
``service.cilium.io/src-ranges-policy: deny``:

.. code-block:: yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: example-service
      annotations:
        service.cilium.io/type: LoadBalancer
        service.cilium.io/src-ranges-policy: deny
    spec:
      ports:
      - port: 80
        targetPort: 80
      type: LoadBalancer
      loadBalancerSourceRanges:
      - 192.168.1.0/24

Service Proxy Name Configuration
********************************

Like kube-proxy, Cilium also honors the
``service.kubernetes.io/service-proxy-name`` service label and only manages
services that contain a matching service-proxy-name. This name can be
configured by setting the ``k8s.serviceProxyName`` option, and the behavior is
identical to that of kube-proxy. The service proxy name defaults to an empty
string, which instructs Cilium to only manage services that do not have the
``service.kubernetes.io/service-proxy-name`` label. For more details on the
usage of the ``service.kubernetes.io/service-proxy-name`` label and how it
works, take a look at `this KEP `__.

.. note::

    If Cilium with a non-empty service proxy name is meant to manage all
    services in kube-proxy free mode, make sure that default Kubernetes
    services like ``kube-dns`` and ``kubernetes`` have the required label
    value.

Traffic Distribution and Topology Aware Hints
*********************************************

The kube-proxy replacement implements both the Kubernetes
`Topology Aware Routing `__ and the more recent
`Traffic Distribution `__ features.
Both of these features work by setting ``hints`` on EndpointSlices that enable
Cilium to route to endpoints residing in the same zone. To enable the feature,
set ``loadBalancer.serviceTopology=true``.

Neighbor Discovery
******************

When the kube-proxy replacement and XDP acceleration are enabled, Cilium does
L2 neighbor discovery of nodes and service backends in the cluster. This is
required for service load balancing to populate L2 addresses for backends,
since it is not possible to dynamically resolve neighbors on demand in the
fast-path. L2 neighbor discovery is automatically enabled when the agent
detects that XDP is in use, but it can also be manually turned on by setting
the ``--enable-l2-neigh-discovery=true`` flag or the
``l2NeighDiscovery.enabled=true`` Helm option. The agent fully relies on the
Linux kernel to discover gateways or hosts on the same L2 network. Both IPv4
and IPv6 neighbor discovery are supported in the Cilium agent.

As per our kernel work `presented at Plumbers `__, "managed" neighbor entries
have been `upstreamed `__ and are available in Linux kernel v5.16 or later,
which the Cilium agent detects and transparently uses. In this case, the agent
pushes down L3 addresses of new nodes joining the cluster as externally
learned "managed" neighbor entries. For introspection, iproute2 displays them
as "managed extern_learn". The ``extern_learn`` attribute prevents garbage
collection of the entries by the kernel's neighboring subsystem. Such
"managed" neighbor entries are dynamically resolved and periodically refreshed
by the Linux kernel itself in case there is no active traffic for a certain
period of time. That is, the kernel attempts to always keep them in the
``REACHABLE`` state.

For Linux kernels v5.15 or earlier, where "managed" neighbor entries are not
present, the Cilium agent similarly pushes L3 addresses of new nodes into the
kernel for dynamic resolution. For introspection, iproute2 displays them only
as ``extern_learn`` in this case. If there is no active traffic for a certain
period of time and entries become stale, the Cilium agent triggers the Linux
kernel-based re-resolution to attempt to keep them in the ``REACHABLE`` state.

The Cilium agent actively monitors devices, routes, and neighbors, and
reconciles the neighbor entries in the kernel. For example, if a device is
added, new neighbor entries for the device are added. When routes change, such
as a change to the next-hop, the Cilium agent updates the neighbor entries
accordingly.
When neighbor entries are flushed, for example due to a carrier-down event,
the Cilium agent restores the neighbor entries as soon as possible.

The neighbor discovery supports multi-device environments where each node has
multiple devices and multiple next-hops to another node. The Cilium agent
pushes neighbor entries for all target devices, including the direct routing
device. Currently, it supports one next-hop per device.

The following example illustrates how the neighbor discovery works in a
multi-device environment. Each node has two devices connected to different L3
networks (10.69.0.64/26 and 10.69.0.128/26), and a global scope address each
(10.69.0.1/26 and 10.69.0.2/26). A next-hop from node1 to node2 is either
``10.69.0.66 dev eno1`` or ``10.69.0.130 dev eno2``. The Cilium agent pushes
neighbor entries for both ``10.69.0.66 dev eno1`` and ``10.69.0.130 dev eno2``
in this case.

::

    +---------------+     +---------------+
    |     node1     |     |     node2     |
    | 10.69.0.1/26  |     | 10.69.0.2/26  |
    |           eno1+-----+eno1           |
    |               |     |               |
    | 10.69.0.65/26 |     | 10.69.0.66/26 |
    |               |     |               |
    |           eno2+-----+eno2           |
    |               |     |               |
    | 10.69.0.129/26|     | 10.69.0.130/26|
    +---------------+     +---------------+

With, on node1:

.. code-block:: shell-session

    $ ip route show 10.69.0.2
    10.69.0.2
        nexthop via 10.69.0.66 dev eno1 weight 1
        nexthop via 10.69.0.130 dev eno2 weight 1

    $ ip neigh show
    10.69.0.66 dev eno1 lladdr 96:eb:75:fd:89:fd extern_learn REACHABLE
    10.69.0.130 dev eno2 lladdr 52:54:00:a6:62:56 extern_learn REACHABLE

.. _external_access_to_clusterip_services:

External Access To ClusterIP Services
*************************************

As per the `k8s Service `__ specification, Cilium's eBPF kube-proxy
replacement by default disallows access to a ClusterIP service from outside
the cluster. This can be allowed by setting ``bpf.lbExternalClusterIP=true``.

Kubernetes API server high availability
***************************************

If you are running multiple instances of the Kubernetes API server in your
cluster, you can set the ``k8s-api-server-urls`` flag so that Cilium can fail
over to an active instance. Cilium switches to the ``kubernetes`` service
address so that API requests are load-balanced to API server endpoints during
runtime. However, if the initially configured API servers are rotated while
the agent is down, you can update the ``k8s-api-server-urls`` flag with the
updated API servers.

.. cilium-helm-install::
    :namespace: kube-system
    :set: kubeProxyReplacement=true k8s.apiServerURLs="https://172.21.0.4:6443 https://172.21.0.5:6443 https://172.21.0.6:6443"

Observability
*************

You can trace socket LB related datapath events using Hubble and cilium
monitor. Apply the following pod and service:

.. code-block:: yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      labels:
        app: proxy
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
    spec:
      selector:
        app: proxy
      ports:
      - port: 80

Deploy a client pod to start traffic.

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\ /examples/kubernetes-dns/dns-sw-app.yaml

.. code-block:: shell-session

    $ kubectl get svc | grep nginx
    nginx-service   ClusterIP   10.96.128.44   80/TCP   140m

    $ kubectl exec -it mediabot -- curl -v --connect-timeout 5 10.96.128.44

Follow the Hubble :ref:`hubble_cli` guide to see the network flows.
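Conceptually, the socket-LB translation that these tools trace is a lookup
keyed by the service address: the ``connect()`` destination is rewritten from
the service VIP to a selected backend (forward translation), and the VIP is
restored on the reverse path so the application still sees the address it
connected to. The following is an illustrative Python sketch of that mapping
(not Cilium's implementation; the addresses are the ones from this guide's
example, and the backend selection here is a trivial placeholder):

```python
# Illustrative sketch of socket-LB service translation (not Cilium code).
# Forward translation rewrites the connect() destination from the service
# VIP to a selected backend; reverse translation restores the VIP so the
# application still sees the service address it connected to.

SERVICES = {("10.96.128.44", 80): [("10.244.1.246", 80)]}  # VIP -> backends

def xlate_fwd(dst):
    backends = SERVICES.get(dst)
    return backends[0] if backends else dst  # trivial pick, no real LB here

def xlate_rev(dst):
    for vip, backends in SERVICES.items():
        if dst in backends:
            return vip
    return dst

assert xlate_fwd(("10.96.128.44", 80)) == ("10.244.1.246", 80)
assert xlate_rev(("10.244.1.246", 80)) == ("10.96.128.44", 80)
```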
The Hubble output prints datapath events before and after the socket LB
translation between a service and the selected service endpoint.

.. code-block:: shell-session

    $ hubble observe --all | grep mediabot
    Jan 13 13:47:20.932: default/mediabot (ID:5618) <> default/nginx-service:80 (world) pre-xlate-fwd TRACED (TCP)
    Jan 13 13:47:20.932: default/mediabot (ID:5618) <> default/nginx:80 (ID:35772) post-xlate-fwd TRANSLATED (TCP)
    Jan 13 13:47:20.932: default/nginx:80 (ID:35772) <> default/mediabot (ID:5618) pre-xlate-rev TRACED (TCP)
    Jan 13 13:47:20.932: default/nginx-service:80 (world) <> default/mediabot (ID:5618) post-xlate-rev TRANSLATED (TCP)
    Jan 13 13:47:20.932: default/mediabot:38750 (ID:5618) <> default/nginx (ID:35772) pre-xlate-rev TRACED (TCP)

Socket LB tracing with Hubble requires the Cilium agent to detect pod cgroup
paths. If you see the message ``Failed to setup socket load-balancing tracing
with Hubble.`` in the Cilium agent log, you can trace packets using
``cilium-dbg monitor`` instead.

.. note::

    If you observe the message about the socket load-balancing setup failure
    in the logs, please file a GitHub issue with the cgroup path for any of
    your pods, obtained by running the following command on a Kubernetes node
    in your cluster: ``sudo crictl inspectp -o=json $POD_ID | grep cgroup``.

.. code-block:: shell-session

    $ kubectl get pods -o wide
    NAME       READY   STATUS    RESTARTS   AGE     IP             NODE          NOMINATED NODE   READINESS GATES
    mediabot   1/1     Running   0          54m     10.244.1.237   kind-worker
    nginx      1/1     Running   0          3h25m   10.244.1.246   kind-worker

    $ kubectl exec -n kube-system cilium-rt2jh -- cilium-dbg monitor -v -t trace-sock
    CPU 11: [pre-xlate-fwd] cgroup_id: 479586 sock_cookie: 7123674, dst [10.96.128.44]:80 tcp
    CPU 11: [post-xlate-fwd] cgroup_id: 479586 sock_cookie: 7123674, dst [10.244.1.246]:80 tcp
    CPU 11: [pre-xlate-rev] cgroup_id: 479586 sock_cookie: 7123674, dst [10.244.1.246]:80 tcp
    CPU 11: [post-xlate-rev] cgroup_id: 479586 sock_cookie: 7123674, dst [10.96.128.44]:80 tcp

You can identify the client pod using the printed ``cgroup id`` metadata: the
pod's ``cgroup path`` corresponding to the ``cgroup id`` contains the pod's
UUID. The socket cookie is a unique socket identifier allocated in the Linux
kernel, and its metadata can be used to identify all the trace events from a
given socket.

.. code-block:: shell-session

    $ kubectl get pods -o custom-columns=PodName:.metadata.name,PodUID:.metadata.uid
    PodName    PodUID
    mediabot   b620703c-c446-49c7-84c8-e23f4ba5626b
    nginx      73b9938b-7e4b-4cbd-8c4c-67d4f253ccf4

    $ kubectl exec -n kube-system cilium-rt2jh -- find /run/cilium/cgroupv2/ -inum 479586
    Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), clean-cilium-state (init)
    /run/cilium/cgroupv2/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-besteffort.slice/kubelet-kubepods-besteffort-podb620703c_c446_49c7_84c8_e23f4ba5626b.slice/cri-containerd-4e7fc71c8bef8c05c9fb76d93a186736fca266e668722e1239fe64503b3e80d3.scope

Troubleshooting
***************

Validate BPF cgroup programs attachment
=======================================

Cilium attaches BPF ``cgroup`` programs to enable socket-based load balancing
(aka ``host-reachable`` services). If you see connectivity issues for
``clusterIP`` services, check whether the programs are attached to the host
``cgroup root``. The default ``cgroup`` root is set to
``/run/cilium/cgroupv2``. Run the following commands from a Cilium agent pod
as well as from the underlying Kubernetes node where the pod is running. If
the container runtime in your cluster is running in cgroup namespace mode, the
Cilium agent pod may attach BPF ``cgroup`` programs to a ``virtualized cgroup
root`` instead. In such cases, Cilium kube-proxy replacement based load
balancing may not be effective, leading to connectivity issues, so ensure that
you have the fix `Pull Request `__.

.. code-block:: shell-session

    $ mount | grep cgroup2
    none on /run/cilium/cgroupv2 type cgroup2 (rw,relatime)

    $ bpftool cgroup tree /run/cilium/cgroupv2/
    CgroupPath
    ID       AttachType      AttachFlags     Name
    /run/cilium/cgroupv2
    10613    device          multi
    48497    connect4
    48493    connect6
    48499    sendmsg4
    48495    sendmsg6
    48500    recvmsg4
    48496    recvmsg6
    48498    getpeername4
    48494    getpeername6

Known Issues
############

Connection Collisions When a Service Endpoint Is Accessed via Multiple VIPs
***************************************************************************

If a given backend endpoint is reachable through multiple services (i.e., via
different VIPs or NodePorts), a new connection from a client to one VIP or
NodePort may reuse existing connection tracking state from a connection to a
different VIP or NodePort. This can happen if the client selects the same
source port, and in such cases the connection might be dropped.
The following scenarios are prone to this problem:

* With :ref:`DSR`: A client running outside a cluster sends requests
  ``CLIENT_IP:SRC_PORT -> LB1_IP:LB1_PORT`` and
  ``CLIENT_IP:SRC_PORT -> LB2_IP:LB2_PORT`` via an intermediate K8s node(s).
  The intermediate node selects ``BACKEND_IP:BACKEND_PORT`` for each request
  and forwards them to the backend endpoint. Each request appears identical as
  ``CLIENT_IP:SRC_PORT -> BACKEND_IP:BACKEND_PORT``, so the backend cannot
  distinguish between them.

* With or without DSR: A client running outside a cluster sends requests
  ``CLIENT_IP:SRC_PORT -> LB1_IP:LB1_PORT`` and
  ``CLIENT_IP:SRC_PORT -> LB2_IP:LB2_PORT`` to a K8s node that runs the
  selected backend endpoint. Again, each request appears the same:
  ``CLIENT_IP:SRC_PORT -> BACKEND_IP:BACKEND_PORT``.

* Without Socket LB: A client running in a Pod sends requests
  ``CLIENT_IP:SRC_PORT -> LB1_IP:LB1_PORT`` and
  ``CLIENT_IP:SRC_PORT -> LB2_IP:LB2_PORT``. The per-packet load balancer then
  DNATs each request to the backend, resulting in
  ``CLIENT_IP:SRC_PORT -> BACKEND_IP:BACKEND_PORT``.

Therefore, it is highly recommended not to expose a backend endpoint via
multiple VIPs :gh-issue:`11810` :gh-issue:`18632`.

Limitations
###########

* Cilium's eBPF kube-proxy replacement relies upon the socket-LB feature,
  which uses eBPF cgroup hooks to implement the service translation. Using it
  with libceph deployments currently requires support for the getpeername(2)
  hook address translation in eBPF.

* NFS and SMB mounts may break when mounted to a ``Service`` cluster IP while
  using socket-LB. This issue is known to impact Longhorn, Portworx, and
  Robin, but may impact other storage systems that implement
  ``ReadWriteMany`` volumes using this pattern.
  To avoid this problem, ensure that the following commits are part of your
  underlying kernel:

  * ``0bdf399342c5 ("net: Avoid address overwrite in kernel_connect")``
  * ``86a7e0b69bd5 ("net: prevent rewrite of msg_name in sock_sendmsg()")``
  * ``01b2885d9415 ("net: Save and restore msg_namelen in sock_sendmsg")``
  * ``cedc019b9f26 ("smb: use kernel_connect() and kernel_bind()")`` (SMB only)

  These patches have been backported to all stable kernels and to some
  distro-specific kernels:

  * **Ubuntu**: ``5.4.0-187-generic``, ``5.15.0-113-generic``,
    ``6.5.0-41-generic`` or newer.
  * **RHEL 8**: ``4.18.0-553.8.1.el8_10.x86_64`` or newer (RHEL 8.10+).
  * **RHEL 9**: ``kernel-5.14.0-427.31.1.el9_4`` or newer (RHEL 9.4+).

  For a more detailed discussion see :gh-issue:`21541`.

* Cilium's DSR NodePort mode currently does not operate well in environments
  with TCP Fast Open (TFO) enabled. It is recommended to switch to ``snat``
  mode in this situation.

* Cilium's eBPF kube-proxy replacement does not support the SCTP transport
  protocol, except in a few basic cases. For more information, see
  :ref:`sctp`. Only TCP and UDP are fully supported as a transport for
  services at this time.

* Cilium's eBPF kube-proxy replacement does not allow ``hostPort`` port
  configurations for Pods that overlap with the configured NodePort range. In
  such a case, the ``hostPort`` setting will be ignored and a warning emitted
  to the Cilium agent log. Similarly, explicitly binding the ``hostIP`` to the
  loopback address in the host namespace is currently not supported and will
  log a warning to the Cilium agent log.

* The neighbor discovery in a multi-device environment does not work with
  runtime device detection, which means that the target devices for the
  neighbor discovery do not follow device changes.

* When the socket-LB feature is enabled, pods sending (connected) UDP and TCP
  traffic to services can continue to send traffic to a service backend even
  after it has been deleted. The Cilium agent handles such scenarios by
  forcefully terminating application sockets that are connected to deleted
  backends, so that the applications can be load-balanced to active backends.
  This functionality requires the following kernel configs to be enabled:
  ``CONFIG_INET_DIAG``, ``CONFIG_INET_UDP_DIAG`` and
  ``CONFIG_INET_DIAG_DESTROY``. If ``lb-sock-terminate-all-protos`` is
  enabled, the functionality additionally requires the kernel config
  ``CONFIG_INET_TCP_DIAG``.

* Cilium's BPF-based masquerading is recommended over iptables when using the
  BPF-based NodePort. Otherwise, there is a risk of port collisions between
  BPF and iptables SNAT, which might result in dropped NodePort connections
  :gh-issue:`23604`.

Further Readings
################
The following presentations describe the inner workings of the kube-proxy
replacement in eBPF in great detail:

* "Liberating Kubernetes from kube-proxy and iptables" (KubeCon North America
  2019, `slides `__, `video `__)
* "Kubernetes service load-balancing at scale with BPF & XDP" (Linux Plumbers
  2020, `slides `__, `video `__)
* "eBPF as a revolutionary technology for the container landscape" (Fosdem
  2020, `slides `__, `video `__)
* "Kernel improvements for Cilium socket LB" (LSF/MM/BPF 2020, `slides `__)
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _kata:

***************************
Kata Containers with Cilium
***************************

`Kata Containers `_ is an open source project that provides a secure container
runtime with lightweight virtual machines that feel and perform like
containers, but provide stronger workload isolation using hardware
virtualization technology as a second layer of defense. Kata Containers
implements the OCI runtime spec, just like ``runc`` that is used by Docker.
Cilium can be used along with Kata Containers, and using both together enables
a higher degree of security. Kata Containers enhances security in the compute
layer, while Cilium provides policy and observability in the networking layer.

.. warning::

    Due to the different Kata Containers networking model, there are
    limitations that can cause connectivity disruptions in Cilium. Please
    refer to the `Limitations`_ section below.

This guide shows how to install Cilium along with Kata Containers. It assumes
that you have already followed the official
`Kata Containers installation user guide `_ to get the Kata Containers runtime
up and running on your platform of choice, but that you haven't yet set up
Kubernetes.

.. note::

    This guide has been validated by following the Kata Containers guide for
    Google Compute Engine (GCE) and using Ubuntu 18.04 LTS with the packaged
    version of Kata Containers, CRI-containerd and Kubernetes 1.18.3.

Setup Kubernetes with CRI
=========================

The Kata Containers runtime is an OCI-compatible runtime and cannot directly
interact with the CRI API level. For this reason, it relies on a CRI
implementation to translate CRI into OCI. At the time of writing this guide,
the two supported ways are CRI-O and CRI-containerd. It is up to you to choose
the one that you want, but you have to pick one.

Refer to the section :ref:`k8s_requirements` for detailed instructions on how
to prepare your Kubernetes environment, and make sure to use Kubernetes >=
1.12. Then, follow the
`official guide to run Kata Containers with Kubernetes `_.

.. note::

    A minimum Kubernetes version of 1.12 is required to use the RuntimeClass
    feature for the Kata Container runtime described below.

With your Kubernetes cluster ready, you can now proceed to deploy Cilium.

Deploy Cilium
=============

.. include:: ../../installation/k8s-install-download-release.rst

Deploy the Cilium release via Helm:

.. tabs::

    .. group-tab:: Using CRI-O

        .. cilium-helm-install::
            :namespace: kube-system
            :set: bpf.autoMount.enabled=false

    .. group-tab:: Using CRI-containerd

        .. cilium-helm-install::
            :namespace: kube-system

.. warning::

    When using :ref:`kube-proxy-replacement ` or its socket-level loadbalancer
    with Kata containers, the socket-level loadbalancer should be disabled for
    pods by setting ``socketLB.hostNamespaceOnly=true``. See
    :ref:`socketlb-host-netns-only` for more details.

.. include:: ../../installation/k8s-install-validate.rst

Run Kata Containers with Cilium CNI
===================================

Now that your Kubernetes cluster is configured with the Kata Containers
runtime and Cilium as the CNI, you can run a sample workload by following
`these instructions `_.

Limitations
===========

Due to its different `Networking Design Architecture `_, the Kata runtime adds
an additional layer of abstraction inside the Container Networking Namespace
created by Cilium (referred to as "outer"). In that namespace, Kata creates an
isolated VM with an additional Container Networking Namespace (referred to as
"inner") to host the requested Pod, as depicted below.

.. image:: https://raw.githubusercontent.com/kata-containers/documentation/refs/heads/master/design/arch-images/network.png
    :alt: Kata Container Networking Architecture

Upon creation of the outer Container Networking Namespace, the Cilium CNI
performs the following two actions:

1. it creates the ``eth0`` interface with the same ``device MTU`` as either
   the detected underlying network, or the MTU specified in the Cilium
   ConfigMap;

2. it adjusts the ``default route MTU`` (computed as ``device MTU -
   overhead``) to account for the additional networking overhead given by the
   Cilium configuration (e.g. +50B for VXLAN, +80B for WireGuard, etc.).
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/kata.rst
main
cilium
[ -0.04623746499419212, 0.10841215401887894, -0.10301249474287033, -0.04674294590950012, 0.021814361214637756, -0.10800955444574356, -0.0038019316270947456, 0.003192614996805787, 0.024081232026219368, -0.011922217905521393, 0.058519452810287476, -0.07945176959037781, 0.05324660241603851, -0....
0.270885
``device MTU`` of either the detected underlying network, or the MTU specified in the Cilium ConfigMap; 2. adjusts the ``default route MTU`` (computed as ``device MTU - overhead``) to account for the additional networking overhead introduced by the Cilium configuration (e.g. +50B for VXLAN, +80B for WireGuard).

However, during the creation of the inner container networking namespace (i.e., the pod inside the VM), only the outer ``eth0`` ``device MTU`` (1) is copied over by Kata to the inner ``eth0``, while the ``default route MTU`` (2) is ignored. For this reason, depending on the types of connections, users might experience performance degradation or even packet drops between traditional pods and KataPods due to multiple (unexpected) fragmentations.

There are currently two possible workarounds, with (b) being preferred:

a. Set a lower MTU value in the Cilium ConfigMap to account for the overhead. This would allow the KataPod to have a lower device MTU and prevent unwanted fragmentation. However, this is not recommended, as it would have a significant impact on all the other types of communication (e.g. traditional pod-to-pod, pod-to-node, etc.) due to the lower device MTU value being set on all the Cilium-managed interfaces.

b. Modify the KataPod deployment by adding an ``initContainer`` (with ``NET_ADMIN``) to adjust the route MTU inside the inner pod. This not only aligns the KataPod configuration with all the other pods, but also does not harm any other type of connection, given that it is a self-contained solution in the KataPod itself. The correct ``route MTU`` value to set can be either manually computed or retrieved by issuing ``ip route`` on a Cilium pod (or inside a traditional pod).

Here follows an example of a KataPod deployment (``runtimeClassName: kata-clh``) on a cluster with only Cilium VXLAN enabled (``route MTU = 1500B - 50B = 1450B``):

.. code-block:: yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-pod
      labels:
        app: nginx
    spec:
      runtimeClassName: kata-clh
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
      initContainers:
      - name: set-mtu
        image: busybox:latest
        command:
        - sh
        - -c
        - |
          DEFAULT="$(ip route show default)"
          ip route replace "$DEFAULT" mtu 1450
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
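The route MTU used in the ``initContainer`` above follows directly from the overhead arithmetic described earlier. A minimal shell sketch, where the 1500B device MTU is an illustrative value and 50B is the VXLAN overhead:

```shell
# Derive the route MTU the KataPod initContainer should set:
# route MTU = device MTU - encapsulation overhead.
DEVICE_MTU=1500    # illustrative device MTU of the underlying network
VXLAN_OVERHEAD=50  # VXLAN encapsulation overhead in bytes
ROUTE_MTU=$((DEVICE_MTU - VXLAN_OVERHEAD))
echo "route MTU: $ROUTE_MTU"  # route MTU: 1450
```

For WireGuard the overhead would be 80B instead, yielding a route MTU of 1420.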
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/kata.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

.. _gsg_ipam:

**********************
Configuring IPAM Modes
**********************

Cilium supports multiple IP Address Management (IPAM) modes to meet the needs of different environments and cloud providers. The following sections provide documentation for each supported IPAM mode:

.. toctree::
   :maxdepth: 1
   :glob:

   ipam-crd
   ipam-cluster-pool
   ipam-multi-pool
   ../concepts/ipam/kubernetes
   ../concepts/ipam/azure
   ../concepts/ipam/azure-delegated-ipam
   ../concepts/ipam/eni
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/ipam.rst
********
Concepts
********

.. _k8s_concepts_deployment:

Deployment
==========

The configuration of a standard Cilium Kubernetes deployment consists of several Kubernetes resources:

* A ``DaemonSet`` resource: describes the Cilium pod that is deployed to each Kubernetes node. This pod runs the cilium-agent and associated daemons. The configuration of this DaemonSet includes the image tag indicating the exact version of the Cilium docker container (e.g., v1.0.0) and command-line options passed to the cilium-agent.

* A ``ConfigMap`` resource: describes common configuration values that are passed to the cilium-agent, such as the kvstore endpoint and credentials, enabling/disabling debug mode, etc.

* ``ServiceAccount``, ``ClusterRole``, and ``ClusterRoleBindings`` resources: the identity and permissions used by cilium-agent to access the Kubernetes API server when Kubernetes RBAC is enabled.

* A ``Secret`` resource: describes the credentials used to access the etcd kvstore, if required.

Networking For Existing Pods
============================

In case pods were already running before the Cilium :term:`DaemonSet` was deployed, these pods will still be connected using the previous networking plugin according to the CNI configuration. A typical example for this is the ``kube-dns`` service, which runs in the ``kube-system`` namespace by default.

A simple way to change networking for such existing pods is to rely on the fact that Kubernetes automatically restarts pods in a Deployment if they are deleted, so we can simply delete the original kube-dns pod; the replacement pod started immediately afterwards will have its networking managed by Cilium. In a production deployment, this step could be performed as a rolling update of kube-dns pods to avoid downtime of the DNS service.

.. code-block:: shell-session

    $ kubectl --namespace kube-system delete pods -l k8s-app=kube-dns
    pod "kube-dns-268032401-t57r2" deleted

Running ``kubectl get pods`` will show you that Kubernetes started a new set of ``kube-dns`` pods while at the same time terminating the old pods:

.. code-block:: shell-session

    $ kubectl --namespace kube-system get pods
    NAME                          READY     STATUS        RESTARTS   AGE
    cilium-5074s                  1/1       Running       0          58m
    kube-addon-manager-minikube   1/1       Running       0          59m
    kube-dns-268032401-j0vml      3/3       Running       0          9s
    kube-dns-268032401-t57r2      3/3       Terminating   0          57m

Default Ingress Allow from Local Host
=====================================

Kubernetes has functionality to indicate to users the current health of their applications via `Liveness Probes and Readiness Probes`_. In order for ``kubelet`` to run these health checks for each pod, by default, Cilium will always allow all ingress traffic from the local host to each pod.
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/concepts.rst
.. _k8s_intro:

************
Introduction
************

What does Cilium provide in your Kubernetes Cluster?
====================================================

The following functionality is provided as you run Cilium in your Kubernetes cluster:

* :term:`CNI` plugin support to provide pod_connectivity_ with multi-host networking.

* Identity-based implementation of the ``NetworkPolicy`` resource to isolate :term:`pod`-to-pod connectivity on Layer 3 and 4.

* An extension to NetworkPolicy in the form of a :term:`CustomResourceDefinition` which extends policy control to add:

  * Layer 7 policy enforcement on ingress and egress for the following application protocols:

    * HTTP
    * Kafka

  * Egress support for CIDRs to secure access to external services

  * Enforcement to external headless services to automatically restrict to the set of Kubernetes endpoints configured for a service.

* ClusterIP implementation to provide distributed load-balancing for pod-to-pod traffic.

* Fully compatible with the existing kube-proxy model.

.. admonition:: Video
   :class: attention

   If you'd like to learn more about Kubernetes networking and Cilium, check out `eCHO episode 99: Explain Kubernetes Networking and Cilium to Network Engineers`__.

.. _pod_connectivity:

Pod-to-Pod Connectivity
=======================

In Kubernetes, containers are deployed within units referred to as :term:`Pods`, which include one or more containers reachable via a single IP address. With Cilium, each Pod gets an IP address from the node prefix of the Linux node running the Pod. See :ref:`address_management` for additional details. In the absence of any network security policies, all Pods can reach each other.

Pod IP addresses are typically local to the Kubernetes cluster.
If pods need to reach services outside the cluster as a client, the network traffic is automatically masqueraded as it leaves the node.

Service Load-balancing
======================

Kubernetes has developed the Services abstraction which provides the user the ability to load balance network traffic to different pods. This abstraction allows pods to reach other pods via a single virtual IP address, without knowing all of the pods that are running that particular service.

Without Cilium, kube-proxy is installed on every node; it watches the Kubernetes API server for the addition and removal of endpoints and services, which allows it to apply the necessary enforcement with iptables. Thus, traffic received by and sent from the pods is properly routed to the node and port serving that service. For more information you can check out the Kubernetes user guide for `Services`_.

When implementing ClusterIP, Cilium acts on the same principles as kube-proxy: it watches for the addition or removal of services, but instead of doing the enforcement in iptables, it updates eBPF map entries on each node. For more information, see the `Pull Request`__.

Further Reading
===============

The Kubernetes documentation contains more background on the `Kubernetes Networking Model`_ and `Kubernetes Network Plugins`_.
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/intro.rst
.. _gsg_ipam_crd_cluster_pool:

**************************************
CRD-Backed by Cilium Cluster-Pool IPAM
**************************************

This is a quick tutorial walking through how to enable CRD-backed Cilium cluster-pool IPAM. The purpose of this tutorial is to show how components are configured and how resources interact with each other, to enable users to automate or extend on their own. For more details, see the section :ref:`ipam_crd_cluster_pool`.

Enable Cluster-pool IPAM mode
=============================

#. Set up Cilium for Kubernetes using Helm with the option ``--set ipam.mode=cluster-pool``.

#. Depending on whether you are using IPv4 and/or IPv6, you might want to adjust the ``podCIDR`` allocated for your cluster's pods with the options:

   * ``--set ipam.operator.clusterPoolIPv4PodCIDRList=``
   * ``--set ipam.operator.clusterPoolIPv6PodCIDRList=``

#. To adjust the CIDR size that should be allocated for each node, you can use the following options:

   * ``--set ipam.operator.clusterPoolIPv4MaskSize=``
   * ``--set ipam.operator.clusterPoolIPv6MaskSize=``

#. Deploy Cilium and Cilium-Operator. Cilium will automatically wait until the ``podCIDR`` is allocated for its node by Cilium Operator.

Validate installation
=====================

#. Validate that Cilium has started up correctly:

   .. code-block:: shell-session

      $ cilium-dbg status --all-addresses
      KVStore:        Ok   etcd: 1/1 connected, has-quorum=true: https://192.168.60.11:2379 - 3.3.12 (Leader)
      [...]
      IPAM:           IPv4: 2/256 allocated,
      Allocated addresses:
        10.0.0.1 (router)
        10.0.0.3 (health)

#. Validate the ``spec.ipam.podCIDRs`` section:

   .. code-block:: shell-session

      $ kubectl get cn k8s1 -o yaml
      apiVersion: cilium.io/v2
      kind: CiliumNode
      metadata:
        name: k8s1
        [...]
      spec:
        ipam:
          podCIDRs:
          - 10.0.0.0/24
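The ``--set`` flags used in the steps above can equivalently be captured in a Helm values file. A minimal sketch, where the pod CIDR and mask size are illustrative values chosen for this example, not defaults:

```yaml
# values.yaml -- illustrative cluster-pool IPAM configuration
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList:
    - "10.0.0.0/8"              # example pod CIDR, adjust for your cluster
    clusterPoolIPv4MaskSize: 24 # per-node CIDR size (256 addresses per node)
```

This file would then be passed to Helm, e.g. ``helm install cilium cilium/cilium --namespace kube-system -f values.yaml``.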
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/ipam-cluster-pool.rst
.. _k8s_configuration:

*************
Configuration
*************

ConfigMap Options
-----------------

In the :term:`ConfigMap` there are several options that can be configured according to your preferences:

* ``debug`` - Sets Cilium to run in full debug mode, which enables verbose logging and configures eBPF programs to emit more visibility events into the output of ``cilium-dbg monitor``.

* ``enable-ipv4`` - Enable IPv4 addressing support.

* ``enable-ipv6`` - Enable IPv6 addressing support.

* ``clean-cilium-bpf-state`` - Removes all eBPF state from the filesystem on startup. Endpoints will be restored with the same IP addresses, but ongoing connections may be briefly disrupted and load-balancing decisions will be lost, so active connections via the load balancer will break. All eBPF state will be reconstructed from its original sources (for example, from Kubernetes or the kvstore). This may be used to mitigate serious issues regarding eBPF maps. This option should be turned off again after restarting the daemon.

* ``clean-cilium-state`` - Removes **all** Cilium state, including unrecoverable information such as all endpoint state, as well as recoverable state such as eBPF state pinned to the filesystem, CNI configuration files, library code, links, routes, and other information. **This operation is irreversible**. Existing endpoints currently managed by Cilium may continue to operate as before, but Cilium will no longer manage them and they may stop working without warning. After using this operation, endpoints must be deleted and reconnected to allow the new instance of Cilium to manage them.
* ``monitor-aggregation`` - This option enables coalescing of tracing events in ``cilium-dbg monitor`` to only include periodic updates from active flows, or any packets that involve an L4 connection state change. Valid options are ``none``, ``low``, ``medium``, ``maximum``.

  - ``none`` - Generate a tracing event on every received and sent packet.
  - ``low`` - Generate a tracing event on every sent packet.
  - ``medium`` - Generate a tracing event for sent packets only on every new connection, any time a packet contains TCP flags that have not been previously seen for the packet direction, and on average once per ``monitor-aggregation-interval`` (assuming that a packet is seen during the interval). Each direction tracks TCP flags and the report interval separately. If Cilium drops a packet, it will emit one event per packet dropped.
  - ``maximum`` - An alias for the most aggressive aggregation level. Currently this is equivalent to setting ``monitor-aggregation`` to ``medium``.

  When socket load-balancing is enabled, these aggregation levels also apply to socket translation events:

  - ``none`` - Emit all socket trace events.
  - ``lowest``/``low`` - Suppress reverse-direction (recv) socket trace events.
  - ``medium``/``maximum`` - Emit socket trace events only for connect system calls.

* ``monitor-aggregation-interval`` - Defines the interval at which to report tracing events. Only applicable for ``monitor-aggregation`` levels ``medium`` or higher. Assuming new packets are sent at least once per interval, this ensures that on average one event is sent during the interval.

* ``preallocate-bpf-maps`` - Pre-allocation of map entries allows per-packet latency to be reduced, at the expense of up-front memory allocation for the entries in the maps. Set to ``true`` to optimize for latency. If this value is modified, then during the next Cilium startup connectivity may be temporarily disrupted for endpoints with active connections.
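As an illustration, the monitoring options above are plain string keys in the ``cilium-config`` ConfigMap. A minimal sketch, where the chosen values are examples, not recommendations:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  # Coalesce tracing events per flow; report at most once per interval on average.
  monitor-aggregation: "medium"
  monitor-aggregation-interval: "5s"
  # Trade memory for latency only if needed.
  preallocate-bpf-maps: "false"
```

As noted below, changing these keys requires restarting the Cilium pods before the agents pick up the new values.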
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/configuration.rst
Any changes that you perform in the Cilium :term:`ConfigMap` and in the ``cilium-etcd-secrets`` ``Secret`` will require you to restart any existing Cilium pods in order for them to pick up the latest configuration.

.. attention::

   When updating keys or values in the ConfigMap, the changes might take up to 2 minutes to be propagated to all nodes running in the cluster. For more information see the official Kubernetes docs: `Mounted ConfigMaps are updated automatically`__

The following :term:`ConfigMap` is an example where the etcd cluster is running on 2 nodes, ``node-1`` and ``node-2``, with TLS and client-to-server authentication enabled.

.. code-block:: yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cilium-config
      namespace: kube-system
    data:
      # The kvstore configuration is used to enable use of a kvstore for state
      # storage.
      kvstore: etcd
      kvstore-opt: '{"etcd.config": "/var/lib/etcd-config/etcd.config"}'
      # This etcd-config contains the etcd endpoints of your cluster. If you use
      # TLS please make sure you follow the tutorial in https://cilium.link/etcd-config
      etcd-config: |-
        ---
        endpoints:
        - https://node-1:31079
        - https://node-2:31079
        #
        # In case you want to use TLS in etcd, uncomment the 'trusted-ca-file' line
        # and create a kubernetes secret by following the tutorial in
        # https://cilium.link/etcd-config
        trusted-ca-file: '/var/lib/etcd-secrets/etcd-client-ca.crt'
        #
        # In case you want client to server authentication, uncomment the following
        # lines and create a kubernetes secret by following the tutorial in
        # https://cilium.link/etcd-config
        key-file: '/var/lib/etcd-secrets/etcd-client.key'
        cert-file: '/var/lib/etcd-secrets/etcd-client.crt'

      # If you want to run cilium in debug mode change this value to true
      debug: "false"
      enable-ipv4: "true"
      # If you want to clean cilium state; change this value to true
      clean-cilium-state: "false"

CNI
===

:term:`CNI` - Container Network Interface - is the plugin layer used by Kubernetes to delegate networking configuration.
You can find additional information on the :term:`CNI` project website.

CNI configuration is automatically taken care of when deploying Cilium via the provided :term:`DaemonSet`. The ``cilium`` pod will generate an appropriate CNI configuration file and write it to disk on startup.

.. note::

   In order for CNI installation to work properly, the ``kubelet`` task must either be running on the host filesystem of the worker node, or the ``/etc/cni/net.d`` and ``/opt/cni/bin`` directories must be mounted into the container where ``kubelet`` is running. This can be achieved with :term:`Volumes` mounts.

The CNI auto installation is performed as follows:

1. The ``/etc/cni/net.d`` and ``/opt/cni/bin`` directories are mounted from the host filesystem into the pod where Cilium is running.

2. The binary ``cilium-cni`` is installed to ``/opt/cni/bin``. Any existing binary with the name ``cilium-cni`` is overwritten.

3. The file ``/etc/cni/net.d/05-cilium.conflist`` is written.

Adjusting CNI configuration
---------------------------

The CNI configuration file is automatically written and maintained by the cilium pod. It is written after the agent has finished initialization and is ready to handle pod sandbox creation. In addition, the agent will remove any other CNI configuration files by default.

There are a number of Helm variables that adjust CNI configuration management. For a full description, see the Helm documentation.
A brief summary:

+--------------------+----------------------------------------+---------+
| Helm variable      | Description                            | Default |
+====================+========================================+=========+
| ``cni.customConf`` | Disable CNI configuration management   | false   |
+--------------------+----------------------------------------+---------+
| ``cni.exclusive``  | Remove other CNI configuration files   | true    |
+--------------------+----------------------------------------+---------+
| ``cni.install``    | Install CNI configuration and binaries | true    |
+--------------------+----------------------------------------+---------+

If you want to provide your own custom CNI configuration file, you can do so by passing a path to a CNI template file, either on disk or provided via a ConfigMap.
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/configuration.rst
The Helm options that configure this are:

+----------------------+----------------------------------------------------------------+
| Helm variable        | Description                                                    |
+======================+================================================================+
| ``cni.readCniConf``  | Path (inside the agent) to a source CNI configuration file     |
+----------------------+----------------------------------------------------------------+
| ``cni.configMap``    | Name of a ConfigMap containing a source CNI configuration file |
+----------------------+----------------------------------------------------------------+
| ``cni.configMapKey`` | Key in the ConfigMap containing the CNI configuration file     |
+----------------------+----------------------------------------------------------------+

These Helm variables are converted to a smaller set of cilium ConfigMap keys:

+-------------------------------+--------------------------------------------------------+
| ConfigMap key                 | Description                                            |
+===============================+========================================================+
| ``write-cni-conf-when-ready`` | Path to write the CNI configuration file               |
+-------------------------------+--------------------------------------------------------+
| ``read-cni-conf``             | Path to read the source CNI configuration file         |
+-------------------------------+--------------------------------------------------------+
| ``cni-exclusive``             | Whether or not to remove other CNI configuration files |
+-------------------------------+--------------------------------------------------------+

CRD Validation
==============

Custom Resource Validation was introduced in Kubernetes version ``1.8.0``. It is considered an alpha feature in Kubernetes ``1.8.0`` and beta in Kubernetes ``1.9.0``.

Since Cilium ``v1.0.0-rc3``, Cilium will create, or update if it already exists, the Cilium Network Policy (CNP) Resource Definition with the embedded validation schema.
This allows the validation of CiliumNetworkPolicy to be done on the kube-apiserver when the policy is imported, with the ability to provide direct feedback when importing the resource. To enable this feature, the flag ``--feature-gates=CustomResourceValidation=true`` must be set when starting kube-apiserver. Cilium itself will automatically make use of this feature and no additional flag is required.

.. note::

   In case there is an invalid CNP before updating to Cilium ``v1.0.0-rc3``, which contains the validator, the kube-apiserver validator will prevent Cilium from updating that invalid CNP with Cilium node status. By checking Cilium logs for ``unable to update CNP, retrying...``, it is possible to determine which Cilium Network Policies are considered invalid after updating to Cilium ``v1.0.0-rc3``.

To verify that the CNP resource definition contains the validation schema, run the following command:

.. code-block:: shell-session

    $ kubectl get crd ciliumnetworkpolicies.cilium.io -o json | grep -A 12 openAPIV3Schema
        "openAPIV3Schema": {
            "oneOf": [
                {
                    "required": [
                        "spec"
                    ]
                },
                {
                    "required": [
                        "specs"
                    ]
                }
            ],

In case the user writes a policy that does not conform to the schema, Kubernetes will return an error, e.g.:

.. code-block:: shell-session

    cat <<EOF > ./bad-cnp.yaml
    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: my-new-cilium-object
    spec:
      description: "Policy to test multiple rules in a single file"
      endpointSelector:
        matchLabels:
          app: details
          track: stable
          version: v1
      ingress:
      - fromEndpoints:
        - matchLabels:
            app: reviews
            track: stable
            version: v1
        toPorts:
        - ports:
          - port: '65536'
            protocol: TCP
          rules:
            http:
            - method: GET
              path: "/health"
    EOF
    kubectl create -f ./bad-cnp.yaml
    ...
    spec.ingress.toPorts.ports.port in body should match '^(6553[0-5]|655[0-2][0-9]|65[0-4][0-9]{2}|6[0-4][0-9]{3}|[1-5][0-9]{4}|[0-9]{1,4})$'

In this case, the policy has a port out of the 0-65535 range.

.. _bpffs_systemd:

Mounting BPFFS with systemd
===========================

Due to how systemd `mounts`__ filesystems, the mount point path must be reflected in the unit filename.

.. code-block:: shell-session

    cat <
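For reference, the unit written by the (truncated) snippet above would typically look like the following. This is a sketch assuming the standard ``/sys/fs/bpf`` mount point, whose path systemd requires to be encoded in the unit filename, i.e. ``sys-fs-bpf.mount``:

```ini
; /etc/systemd/system/sys-fs-bpf.mount -- filename must encode /sys/fs/bpf
[Unit]
Description=Mount BPF filesystem for Cilium
DefaultDependencies=no
Before=local-fs.target umount.target

[Mount]
What=bpffs
Where=/sys/fs/bpf
Type=bpf

[Install]
WantedBy=multi-user.target
```

After placing the file, the mount is activated with ``systemctl daemon-reload && systemctl enable --now sys-fs-bpf.mount``.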
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/configuration.rst
.. _CiliumCIDRGroup:

***************
CiliumCIDRGroup
***************

CiliumCIDRGroup (CCG) is a feature that allows administrators to reference a group of CIDR blocks in a :ref:`CiliumNetworkPolicy`. Unlike :ref:`CiliumEndpoint` resources, which are managed by the Cilium agent, CiliumCIDRGroup resources are intended to be managed directly by administrators. It is particularly useful for enforcing policies on groups of external CIDR blocks. Additionally, any traffic to CIDRs referenced in the CiliumCIDRGroup will have its :ref:`Hubble` flows annotated with the CCG's name and labels.

The following is an example of a ``CiliumCIDRGroup`` object:

.. code-block:: yaml

    apiVersion: cilium.io/v2alpha1
    kind: CiliumCIDRGroup
    metadata:
      name: vpn-example-1
      labels:
        role: vpn
    spec:
      externalCIDRs:
      - "10.48.0.0/24"
      - "10.16.0.0/24"

The CCG can be referenced in a ``CiliumNetworkPolicy`` by using the ``fromCIDRSet`` directive. CCGs may be selected by name or by labels.

.. code-block:: yaml

    apiVersion: cilium.io/v2
    kind: CiliumNetworkPolicy
    metadata:
      name: from-vpn-example
    spec:
      endpointSelector: {}
      ingress:
      ## select by name
      - fromCIDRSet:
        - cidrGroupRef: vpn-example-1
      ## alternatively, select by label:
      - fromCIDRSet:
        - cidrGroupSelector:
            matchLabels:
              role: vpn

In this example, the ``fromCIDRSet`` directive in the CNP references the ``vpn-example-1`` group defined in the ``CiliumCIDRGroup``. This allows the CNP to apply ingress rules based on the CIDRs grouped under the ``vpn-example-1`` name.
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/ciliumcidrgroup.rst
.. _CiliumEndpointSlice:

*******************
CiliumEndpointSlice
*******************

.. note::

   This is a beta feature. Please provide feedback and file a GitHub issue if you experience any problems. The tasks needed for graduating this feature to "Stable" are documented in :gh-issue:`31904`.

This document describes CiliumEndpointSlices (CES), which enable batching of CiliumEndpoint (CEP) objects in the cluster to achieve better scalability. When enabled, Cilium Operator watches CEP objects and groups/batches slim versions of them into CES objects. Cilium Agent watches CES objects to learn about remote endpoints in this mode. API-server stress due to remote endpoint info propagation should be reduced in this case, allowing for better scalability, at the cost of a potentially longer delay before identities of new endpoints are recognized throughout the cluster.

.. note::

   CiliumEndpointSlice is a concept that is specific to Cilium and is not related to `Kubernetes' EndpointSlice`_. Although the names are similar, and even though the concept of slices in each feature brings similar improvements for scalability, they address different problems. Kubernetes' Endpoints and EndpointSlices allow Cilium to make load-balancing decisions for a particular Service object; Kubernetes' EndpointSlices offer a scalable way to track Service back-ends within a cluster. By contrast, CiliumEndpoints and CiliumEndpointSlices are used to make network routing and policy decisions. So CiliumEndpointSlices focus on tracking Pods, batching CEPs to reduce the number of updates to propagate through the API-server on large clusters. Enabling one does not affect the other.

.. _Kubernetes' EndpointSlice: https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/

Deploy Cilium with CES
======================

CES are disabled by default. This section describes the steps necessary for enabling them.

Pre-Requisites
~~~~~~~~~~~~~~

* Make sure that CEPs are enabled (the ``--disable-endpoint-crd`` flag is not set to ``true``).
* Make sure you are not relying on the Egress Gateway, which is not compatible with CES (see Egress Gateway :ref:`egress-gateway-incompatible-features`).

Migration Procedure
~~~~~~~~~~~~~~~~~~~

In order to minimize endpoint propagation delays, it is recommended to upgrade the Operator first, let it create all CES objects, and then upgrade the Agents afterwards.

#. Enable CES on the Operator by setting the ``ciliumEndpointSlice.enabled`` value to ``true`` in your Helm chart or by directly setting the ``--enable-cilium-endpoint-slice`` flag to ``true`` on the Operator. Re-deploy the Operator.

#. Once the Operator is running, verify that the ``CiliumEndpointSlice`` CRD has been successfully registered:

   .. code-block:: shell-session

      $ kubectl get crd ciliumendpointslices.cilium.io
      NAME                             CREATED AT
      ciliumendpointslices.cilium.io   2021-11-05T05:41:28Z

#. Verify that the Operator has started creating CES objects:

   .. code-block:: shell-session

      $ kubectl get ces
      NAME                  AGE
      ces-2fvynpvzn-4ncg9   1m17s
      ces-2jyqj8pfl-tdfm8   1m20s

#. Let the Operator create CES objects for all existing CEPs in the cluster. This may take some time, depending on the size of the cluster. You can monitor the progress by checking the rate of CES object creation in the cluster, for example by looking at the ``apiserver_storage_objects`` Kubernetes metric or by looking at ``ciliumendpointslices`` resource creation requests in Kubernetes Audit Logs. You can also monitor the metrics emitted by the Operator, such as ``cilium_operator_ces_sync_total``. All CES-related metrics are documented in the :ref:`ces_metrics` section of the metrics documentation.

#. Once the metrics have stabilized (in other words, when the Operator has created CES objects for all existing CEPs), upgrade the Cilium Agents on all nodes by setting the ``--enable-cilium-endpoint-slice`` flag to ``true`` and re-deploying them.
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/ciliumendpointslice.rst
can be done by setting the ``--enable-cilium-endpoint-slice`` flag to ``false`` and re-deploying the agents. Then, once all agents have been updated, you can disable CES in the Operator, which will stop creating new CES objects and delete the existing ones.

Configuration Options
=====================

Several options are available to adjust the performance and behavior of the CES feature:

* You can configure how CEPs are batched into CES by changing the maximum number of CEPs in a CES (``--ces-max-cilium-endpoints-per-ces``).
* You can fine-tune the rate-limiting settings for the Operator's communication with the API server. Refer to the ``--ces-*`` flags of the ``cilium-operator`` binary.
* You can mark priority namespaces by setting the ``cilium.io/ces-namespace`` annotation to the value ``priority``. In large clusters, the propagation of changes during network policy updates can be significantly delayed. When a namespace's ``cilium.io/ces-namespace`` annotation is set to ``priority``, updates from that namespace are processed before non-priority updates, allowing updated network policies to be enforced more quickly in critical namespaces.
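As a sketch, the priority annotation can be declared directly in a namespace manifest (the namespace name here is a hypothetical example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments            # hypothetical critical namespace
  annotations:
    # CES updates originating from this namespace are processed
    # before those from non-priority namespaces.
    cilium.io/ces-namespace: "priority"
```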
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/ciliumendpointslice.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io

.. _k8s_requirements:

************
Requirements
************

Kubernetes Version
==================

All Kubernetes versions listed are e2e tested and guaranteed to be compatible with this Cilium version. Older Kubernetes versions not listed here do not have Cilium support. Newer Kubernetes versions, while not listed, will depend on the backward compatibility offered by Kubernetes.

* 1.31
* 1.32
* 1.33
* 1.34

Additionally, Cilium runs e2e tests against various cloud providers' managed Kubernetes offerings using multiple Kubernetes versions. See the following links for the current test matrix for each cloud provider:

- :git-tree:`AKS <.github/actions/azure/k8s-versions.yaml>`
- :git-tree:`EKS <.github/actions/eks/k8s-versions.yaml>`
- :git-tree:`GKE <.github/actions/gke/k8s-versions.yaml>`

System Requirements
===================

See :ref:`admin_system_reqs` for all of the Cilium system requirements.

Enable CNI in Kubernetes
========================

:term:`CNI` - Container Network Interface is the plugin layer used by Kubernetes to delegate networking configuration and is enabled by default in Kubernetes 1.24 and later. Previously, CNI plugins were managed by the kubelet using the ``--network-plugin=cni`` command-line parameter. For more information, see the `Kubernetes CNI network-plugins documentation `_.

Enable automatic node CIDR allocation (Recommended)
===================================================

Kubernetes has the capability to automatically allocate and assign a per-node IP allocation CIDR. Cilium automatically uses this feature if enabled. This is the easiest method to handle IP allocation in a Kubernetes cluster. To enable this feature, simply add the following flag when starting ``kube-controller-manager``:
.. code-block:: shell-session

   --allocate-node-cidrs

This option is not required but highly recommended.
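For a self-hosted control plane, this flag is typically added to the ``kube-controller-manager`` static pod manifest. A minimal sketch (manifest path, image version, and cluster CIDR are illustrative assumptions):

```yaml
# /etc/kubernetes/manifests/kube-controller-manager.yaml (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - name: kube-controller-manager
    image: registry.k8s.io/kube-controller-manager:v1.31.0   # illustrative
    command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.217.0.0/16   # illustrative pod CIDR for the cluster
```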
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/requirements.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io

.. _gsg_ipam_crd_multi_pool:

************************************
CRD-Backed by Cilium Multi-Pool IPAM
************************************

This is a quick tutorial walking through how to enable multi-pool IPAM backed by the ``CiliumPodIPPool`` CRD. The purpose of this tutorial is to show how components are configured and resources interact with each other to enable users to automate or extend on their own. For more details, see the section :ref:`ipam_crd_multi_pool`.

Enable Multi-pool IPAM mode
===========================

#. Set up Cilium for Kubernetes using Helm with the options:

   * ``--set ipam.mode=multi-pool``
   * ``--set kubeProxyReplacement=true``
   * ``--set bpf.masquerade=true``

   For more details on why each of these options is needed, please refer to :ref:`ipam_crd_multi_pool_limitations`.

#. Create the ``default`` pool for IPv4 addresses with the options:

   * ``--set ipam.operator.autoCreateCiliumPodIPPools.default.ipv4.cidrs='{10.10.0.0/16}'``
   * ``--set ipam.operator.autoCreateCiliumPodIPPools.default.ipv4.maskSize=27``

#. Deploy Cilium and Cilium-Operator. Cilium will automatically wait until the ``podCIDR`` is allocated for its node by Cilium Operator.

Validate installation
=====================

#. Validate that Cilium has started up correctly:

   .. code-block:: shell-session

      $ cilium status --wait
          /¯¯\
       /¯¯\__/¯¯\    Cilium:             OK
       \__/¯¯\__/    Operator:           OK
       /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
       \__/¯¯\__/    Hubble Relay:       OK
          \__/       ClusterMesh:        disabled
      [...]

#. Validate that the ``CiliumPodIPPool`` resource for the ``default`` pool was created with the CIDRs specified in the ``ipam.operator.autoCreateCiliumPodIPPools.default.*`` Helm values:
.. code-block:: shell-session

   $ kubectl get ciliumpodippool default -o yaml
   apiVersion: cilium.io/v2alpha1
   kind: CiliumPodIPPool
   metadata:
     name: default
   spec:
     ipv4:
       cidrs:
       - 10.10.0.0/16
       maskSize: 27

#. Create an additional pod IP pool ``mars`` using the following ``CiliumPodIPPool`` resource:

   .. code-block:: shell-session

      $ cat < nginx-default-79885c7f58-qch6b   1/1   Running   0   5s   10.10.10.77   kind-worker
      nginx-mars-76766f95f5-d9vzt        1/1   Running   0   5s   10.20.0.20    kind-worker2
      nginx-mars-76766f95f5-mtn2r        1/1   Running   0   5s   10.20.0.37    kind-worker

#. Test connectivity between pods:

   .. code-block:: shell-session

      $ kubectl exec pod/nginx-default-79885c7f58-fdfgf -- curl -s -o /dev/null -w "%{http_code}" http://10.20.0.37
      200

#. Alternatively, the ``ipam.cilium.io/ip-pool`` annotation can also be applied to a namespace:

   .. code-block:: shell-session

      $ kubectl create namespace cilium-test-1
      $ kubectl annotate namespace cilium-test-1 ipam.cilium.io/ip-pool=mars

   All new pods created in the namespace ``cilium-test-1`` will be assigned IPv4 addresses from the ``mars`` pool.

   Run the Cilium connectivity tests (which use namespace ``cilium-test-1`` by default to create their workloads) to verify connectivity:

   .. code-block:: shell-session

      $ cilium connectivity test
      [...]
      ✅ All 42 tests (295 actions) successful, 13 tests skipped, 0 scenarios skipped.

   **Note:** The connectivity test requires a cluster with at least 2 worker nodes to complete successfully.

#. Verify that the connectivity test pods were assigned IPv4 addresses from the 10.20.0.0/16 CIDR defined in the ``mars`` pool:
.. code-block:: shell-session

   $ kubectl --namespace cilium-test get pods -o wide
   NAME                               READY   STATUS    RESTARTS   AGE     IP            NODE           NOMINATED NODE   READINESS GATES
   client-6f6788d7cc-7fw9w            1/1     Running   0          8m56s   10.20.0.238   kind-worker
   client2-bc59f56d5-hsv2g            1/1     Running   0          8m56s   10.20.0.193   kind-worker
   echo-other-node-646976b7dd-5zlr4   2/2     Running   0          8m56s   10.20.1.145   kind-worker2
   echo-same-node-58f99d79f4-4k5v4    2/2     Running   0          8m56s   10.20.0.202   kind-worker
   ...
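The elided ``mars`` pool definition can be sketched as a ``CiliumPodIPPool`` manifest. The 10.20.0.0/16 CIDR is taken from the pod addresses shown in the outputs; the ``maskSize`` of 27 is an assumption mirroring the ``default`` pool:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumPodIPPool
metadata:
  name: mars
spec:
  ipv4:
    cidrs:
    - 10.20.0.0/16      # matches the 10.20.x.y pod IPs shown above
    maskSize: 27        # assumption: same per-node mask as the default pool
```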
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/ipam-multi-pool.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io

.. _CiliumEndpoint:

************
Endpoint CRD
************

When managing pods in Kubernetes, Cilium will create a Custom Resource Definition (CRD) of Kind ``CiliumEndpoint``. One ``CiliumEndpoint`` is created for each pod managed by Cilium, with the same name and in the same namespace. The ``CiliumEndpoint`` objects contain the same information as the json output of ``cilium-dbg endpoint get`` under the ``.status`` field, but can be fetched for all pods in the cluster. Adding the ``-o json`` flag will export more information about each endpoint. This includes the endpoint's labels, security identity and the policy in effect on it.

For example:

.. code-block:: shell-session

   $ kubectl get ciliumendpoints --all-namespaces
   NAMESPACE     NAME                     AGE
   default       app1-55d7944bdd-l7c8j    1h
   default       app1-55d7944bdd-sn9xj    1h
   default       app2                     1h
   default       app3                     1h
   kube-system   cilium-health-minikube   1h
   kube-system   microscope               1h

.. note:: Each cilium-agent pod will create a CiliumEndpoint to represent its own inter-agent health-check endpoint. These are not pods in Kubernetes and are in the ``kube-system`` namespace. They are named as ``cilium-health-``
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/ciliumendpoint.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io

.. _troubleshooting_k8s:

***************
Troubleshooting
***************

Verifying the installation
==========================

Check the status of the :term:`DaemonSet` and verify that all desired instances are in "ready" state:

.. code-block:: shell-session

   $ kubectl --namespace kube-system get ds
   NAME      DESIRED   CURRENT   READY   NODE-SELECTOR   AGE
   cilium    1         1         0                       3s

In this example, we see a desired state of 1 with 0 being ready. This indicates a problem. The next step is to list all cilium pods by matching on the label ``k8s-app=cilium`` and also sort the list by the restart count of each pod to easily identify the failing pods:

.. code-block:: shell-session

   $ kubectl --namespace kube-system get pods --selector k8s-app=cilium \
     --sort-by='.status.containerStatuses[0].restartCount'
   NAME           READY   STATUS             RESTARTS   AGE
   cilium-813gf   0/1     CrashLoopBackOff   2          44s

Pod ``cilium-813gf`` is failing and has already been restarted 2 times. Let's print the logfile of that pod to investigate the cause:

.. code-block:: shell-session

   $ kubectl --namespace kube-system logs cilium-813gf
   INFO      _ _ _
   INFO  ___|_| |_|_ _ _____
   INFO |  _| | | | | |     |
   INFO |___|_|_|_|___|_|_|_|
   INFO Cilium 0.8.90 f022e2f Thu, 27 Apr 2017 23:17:56 -0700 go version go1.7.5 linux/amd64
   CRIT kernel version: NOT OK: minimal supported kernel version is >= 4.8

In this example, the cause for the failure is a Linux kernel running on the worker node which does not meet :ref:`admin_system_reqs`.

If the cause for the problem is not apparent based on these simple steps, please come and seek help on `Cilium Slack`_.
Apiserver outside of cluster
============================

If you are running the Kubernetes apiserver outside of your cluster for some reason (such as keeping master nodes behind a firewall), make sure that you run Cilium on the master nodes too. Otherwise, the Kubernetes pod proxies created by the apiserver will not be able to route to pod IPs and you may encounter errors when trying to proxy traffic to pods. You may run Cilium as a `static pod `_ or set `tolerations `_ for the Cilium DaemonSet to ensure that Cilium pods will be scheduled on your master nodes. The exact way to do it depends on your setup.
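As a sketch of the tolerations approach, assuming your master nodes carry the standard ``node-role.kubernetes.io/control-plane`` taint, the Cilium DaemonSet pod template could tolerate it like this:

```yaml
# Fragment of the Cilium DaemonSet pod template (sketch; verify the
# actual taint keys on your master nodes before applying).
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
```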
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/troubleshooting.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io

.. _k8scompatibility:

************************
Kubernetes Compatibility
************************

Cilium is compatible with multiple Kubernetes API Groups. Some are deprecated or beta, and may only be available in specific versions of Kubernetes.

All Kubernetes versions listed are e2e tested and guaranteed to be compatible with Cilium. Older and newer Kubernetes versions, while not listed, will depend on the forward / backward compatibility offered by Kubernetes.

+------------------------+----------------------------+----------------------------------+
| k8s Version            | k8s NetworkPolicy API      | CiliumNetworkPolicy              |
+------------------------+----------------------------+----------------------------------+
| 1.31, 1.32, 1.33, 1.34 | * `networking.k8s.io/v1`_  | ``cilium.io/v2`` has a           |
|                        |                            | :term:`CustomResourceDefinition` |
+------------------------+----------------------------+----------------------------------+

As a general rule, Cilium aims to run e2e tests using the latest build from the development branch against currently supported Kubernetes versions defined in the `Kubernetes Patch Releases `_ page. Once a release branch gets created from the development branch, Cilium typically does not change the Kubernetes versions it uses to run e2e tests for the entire maintenance period of that particular release.

Additionally, Cilium runs e2e tests against various cloud providers' managed Kubernetes offerings using multiple Kubernetes versions.
See the following links for the current test matrix for each cloud provider:

- :git-tree:`AKS <.github/actions/azure/k8s-versions.yaml>`
- :git-tree:`EKS <.github/actions/eks/k8s-versions.yaml>`
- :git-tree:`GKE <.github/actions/gke/k8s-versions.yaml>`

Cilium CRD schema validation
============================

Cilium uses a CRD for its network policies in Kubernetes. This CRD might have changes in its schema validation, which allows it to verify the correctness of a Cilium Clusterwide Network Policy (CCNP) or a Cilium Network Policy (CNP). The CRD itself has an annotation, ``io.cilium.k8s.crd.schema.version``, with the schema definition version. By default, Cilium automatically updates the CRD, and its validation, with a newer one.

The following table lists all Cilium versions and their expected schema validation version:

.. include:: compatibility-table.rst

.. _networking.k8s.io/v1: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/#networkpolicy-v1-networking-k8s-io
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/compatibility.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io

.. _IdentityManagementMode:

************************
Identity Management Mode
************************

Cilium supports Cilium Identity (CID) management by either the Cilium Agents (the default) or the Cilium Operator. When the Operator manages identities, identity creation is centralized. This provides benefits such as reduced CID duplication, which can occur when multiple Agents simultaneously create identities for the same set of labels. Given the limits on the maximum number of identities in a cluster and on the eBPF policy map size (see :ref:`bpf_map_limitations`), having the Operator manage identities can improve the reliability of network policies and cluster scalability.

.. note:: Labels relevant to identity management may be configured in the Cilium ConfigMap (see :ref:`identity-relevant-labels`). If the Cilium Operator is managing identities, both the Operator and the Agents must be restarted to pick up a new label pattern setting.

Enable Identity Management by the Cilium Operator (Beta)
========================================================

.. include:: ../../beta.rst

The Cilium Agents manage CIDs by default. This section describes the steps necessary for enabling CID management by the Cilium Operator.

Enable Operator Managing Identities on a New Cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To enable the Cilium Operator to manage identities on a new cluster, set the ``identityManagementMode`` value to ``operator`` in your Helm chart or set the ``identity-management-mode`` flag to ``operator`` in the ``cilium-config`` configmap.
How to Migrate from Cilium Agent to Cilium Operator Managing Identities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To minimize disruptions to connections or workload management, follow the procedure below. Note that in order to prevent disruptions to the cluster, there is an intermediate state in which both the Cilium Agents and the Operator manage identities. As long as the Cilium Agents are creating identities, the CID duplication issue may still occur. The transitional state is intended to be used only temporarily, for the purpose of migrating identity management modes.

#. Allow the Operator to also manage identities by setting the ``identityManagementMode`` value to ``both`` in your Helm chart or by setting the ``identity-management-mode`` flag to ``both`` in the ``cilium-config`` configmap. Restart the Operator.

#. Once the Operator is running, upgrade the Cilium Agents by setting the ``identityManagementMode`` value to ``operator`` or by setting the ``identity-management-mode`` flag to ``operator`` and restarting the Cilium Agent DaemonSet.

How to Downgrade from Cilium Operator to Cilium Agent Managing Identities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For a safe downgrade, follow this procedure:

#. First, downgrade the Cilium Agents by setting the ``identityManagementMode`` value to ``both`` in your Helm chart or by setting the ``identity-management-mode`` flag to ``both`` in the ``cilium-config`` configmap. Restart the Cilium Agent DaemonSet.

#. Once the Cilium Agents are running, downgrade the Operator by setting the ``identityManagementMode`` value to ``agent`` and restarting the Operator.

Metrics
=======

Metrics for identity management by the Operator are documented in the :ref:`identity_management_metrics` section of the metric documentation.
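The two migration steps above can be sketched as Helm values (the ``identityManagementMode`` key named in the text, applied in two separate upgrades):

```yaml
# Upgrade 1: transitional state -- both Agents and Operator manage identities.
identityManagementMode: both

# Upgrade 2, once the Operator is running: Operator-only management.
# identityManagementMode: operator
```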
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/identity-management-mode.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io

.. _gsg_ipam_crd:

***************
CRD-Backed IPAM
***************

This is a quick tutorial walking through how to enable CRD-backed IPAM. The purpose of this tutorial is to show how components are configured and resources interact with each other to enable users to automate or extend on their own. For more details, see the section :ref:`concepts_ipam_crd`.

Enable CRD IPAM mode
====================

#. Set up Cilium for Kubernetes using any of the available guides.

#. Run Cilium with the ``--ipam=crd`` option or set ``ipam: crd`` in the ``cilium-config`` ConfigMap.

#. Restart Cilium. Cilium will automatically register the CRD if it is not available already:

   ::

      msg="Waiting for initial IP to become available in 'k8s1' custom resource" subsys=ipam

#. Validate that the CRD has been registered:

   .. code-block:: shell-session

      $ kubectl get crds
      NAME                    CREATED AT
      [...]
      ciliumnodes.cilium.io   2019-06-08T12:26:41Z

Create a CiliumNode CR
======================

#. Import the following custom resource to make IPs available in the Cilium agent:

   .. code-block:: yaml

      apiVersion: "cilium.io/v2"
      kind: CiliumNode
      metadata:
        name: "k8s1"
      spec:
        ipam:
          pool:
            192.168.1.1: {}
            192.168.1.2: {}
            192.168.1.3: {}
            192.168.1.4: {}

#. Validate that Cilium has started up correctly:

   .. code-block:: shell-session

      $ cilium-dbg status --all-addresses
      KVStore:   Ok   etcd: 1/1 connected, has-quorum=true: https://192.168.60.11:2379 - 3.3.12 (Leader)
      [...]
      IPAM:      IPv4: 2/4 allocated,
      Allocated addresses:
        192.168.1.1 (router)
        192.168.1.3 (health)

#. Validate the ``status.IPAM.used`` section:

   .. code-block:: shell-session

      $ kubectl get cn k8s1 -o yaml
      apiVersion: cilium.io/v2
      kind: CiliumNode
      metadata:
        name: k8s1
      [...]
      spec:
        ipam:
          pool:
            192.168.1.1: {}
            192.168.1.2: {}
            192.168.1.3: {}
            192.168.1.4: {}
      status:
        ipam:
          used:
            192.168.1.1:
              owner: router
            192.168.1.3:
              owner: health

.. note:: At the moment, only single IP addresses are allowed. CIDRs are not supported.
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/ipam-crd.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io

.. _bandwidth-manager:

*****************
Bandwidth Manager
*****************

This guide explains how to configure Cilium's bandwidth manager to optimize TCP and UDP workloads and efficiently rate limit individual Pods if needed, through the help of EDT (Earliest Departure Time) and eBPF. Cilium's bandwidth manager is also a prerequisite for enabling BBR congestion control for Pods, as outlined :ref:`below`.

The bandwidth manager does not rely on CNI chaining and is natively integrated into Cilium instead. Hence, it does not make use of the `bandwidth CNI `_ plugin. Due to scalability concerns, in particular for multi-queue network interfaces, it is not recommended to use the bandwidth CNI plugin, which is based on TBF (Token Bucket Filter) instead of EDT.

.. note:: It is strongly recommended to use the Bandwidth Manager in combination with :ref:`BPF Host Routing`, as otherwise legacy routing through the upper stack could potentially result in undesired high latency (see `this comparison `_ for more details).

Cilium's bandwidth manager supports both the ``kubernetes.io/egress-bandwidth`` and ``kubernetes.io/ingress-bandwidth`` Pod annotations. The ``egress-bandwidth`` limit is enforced on egress at the native host networking devices using EDT (Earliest Departure Time), while the ``ingress-bandwidth`` limit is enforced using an eBPF-based token bucket implementation. Bandwidth enforcement is supported in both direct routing and tunneling mode in Cilium.

.. include:: ../../installation/k8s-install-download-release.rst

Cilium's bandwidth manager is disabled by default on new installations. To install Cilium with the bandwidth manager enabled, run
.. cilium-helm-install::
   :namespace: kube-system
   :set: bandwidthManager.enabled=true

To enable the bandwidth manager on an existing installation, run

.. cilium-helm-upgrade::
   :namespace: kube-system
   :extra-args: --reuse-values
   :set: bandwidthManager.enabled=true
   :post-commands: kubectl -n kube-system rollout restart ds/cilium

The native host networking devices are auto-detected as the devices which have the default route on the host or have a Kubernetes ``InternalIP`` or ``ExternalIP`` assigned. ``InternalIP`` is preferred over ``ExternalIP`` if both exist. To change and manually specify the devices, set their names in the ``devices`` Helm option (e.g. ``devices='{eth0,eth1,eth2}'``). Each listed device has to have the same name on all Cilium-managed nodes.

Verify that the Cilium Pods have come up correctly:

.. code-block:: shell-session

   $ kubectl -n kube-system get pods -l k8s-app=cilium
   NAME           READY   STATUS    RESTARTS   AGE
   cilium-crf7f   1/1     Running   0          10m
   cilium-db21a   1/1     Running   0          10m

To verify whether the bandwidth manager feature has been enabled in Cilium, the ``cilium status`` CLI command provides visibility through the ``BandwidthManager`` info line. It also dumps the list of devices on which egress bandwidth limitation is enforced:

.. code-block:: shell-session

   $ kubectl -n kube-system exec ds/cilium -- cilium-dbg status | grep BandwidthManager
   BandwidthManager:       EDT with BPF [BBR] [eth0]

To verify that bandwidth limits are indeed being enforced, one can deploy two ``netperf`` Pods on different nodes:

.. code-block:: yaml

   ---
   apiVersion: v1
   kind: Pod
   metadata:
     annotations:
       # Limits egress bandwidth to 10Mbit/s and ingress bandwidth to 20Mbit/s.
       kubernetes.io/egress-bandwidth: "10M"
       kubernetes.io/ingress-bandwidth: "20M"
     labels:
       # This pod will act as server.
       app.kubernetes.io/name: netperf-server
     name: netperf-server
   spec:
     containers:
     - name: netperf
       image: cilium/netperf
       args:
       - iperf3
       - "-s"
       ports:
       - containerPort: 5201
   ---
   apiVersion: v1
   kind: Pod
   metadata:
     # This Pod will act as client.
     name: netperf-client
   spec:
     affinity:
       # Prevents the client from being scheduled to the
       # same node as the server.
       podAntiAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
         - labelSelector:
             matchExpressions:
             - key: app.kubernetes.io/name
               operator: In
               values:
               - netperf-server
           topologyKey: kubernetes.io/hostname
     containers:
     - name: netperf
       args:
       - sleep
       - infinity
       image: cilium/netperf

Once up and running, the ``netperf-client`` Pod can be used to test bandwidth enforcement on the ``netperf-server`` Pod. First test the egress bandwidth:
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/bandwidth-manager.rst
.. code-block:: shell-session

   $ NETPERF_SERVER_IP=$(kubectl get pod netperf-server -o jsonpath='{.status.podIP}')
   $ kubectl exec netperf-client -- \
       iperf3 -R -c "${NETPERF_SERVER_IP}"
   Connecting to host 10.42.0.52, port 5201
   Reverse mode, remote host 10.42.0.52 is sending
   [  5] local 10.42.1.23 port 49422 connected to 10.42.0.52 port 5201
   [ ID] Interval           Transfer     Bitrate
   [  5]   0.00-1.00   sec  1.19 MBytes  9.99 Mbits/sec
   [  5]   1.00-2.00   sec  1.17 MBytes  9.77 Mbits/sec
   [  5]   2.00-3.00   sec  1.10 MBytes  9.26 Mbits/sec
   [  5]   3.00-4.00   sec  1.17 MBytes  9.77 Mbits/sec
   [  5]   4.00-5.00   sec  1.17 MBytes  9.77 Mbits/sec
   [  5]   5.00-6.00   sec  1.10 MBytes  9.26 Mbits/sec
   [  5]   6.00-7.00   sec  1.17 MBytes  9.77 Mbits/sec
   [  5]   7.00-8.00   sec  1.10 MBytes  9.26 Mbits/sec
   [  5]   8.00-9.00   sec  1.17 MBytes  9.77 Mbits/sec
   [  5]   9.00-10.00  sec  1.10 MBytes  9.26 Mbits/sec
   - - - - - - - - - - - - - - - - - - - - - - - - -
   [ ID] Interval           Transfer     Bitrate         Retr
   [  5]   0.00-10.09  sec  14.1 MBytes  11.7 Mbits/sec  0     sender
   [  5]   0.00-10.00  sec  11.4 MBytes  9.59 Mbits/sec        receiver

As can be seen, egress traffic of the ``netperf-server`` Pod has been limited to 10Mbit per second. Then test the ingress bandwidth:
.. code-block:: shell-session

   $ NETPERF_SERVER_IP=$(kubectl get pod netperf-server -o jsonpath='{.status.podIP}')
   $ kubectl exec netperf-client -- \
       iperf3 -c "${NETPERF_SERVER_IP}"
   Connecting to host 10.42.0.52, port 5201
   [  5] local 10.42.1.23 port 40058 connected to 10.42.0.52 port 5201
   [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
   [  5]   0.00-1.00   sec  6.73 MBytes  56.4 Mbits/sec  551   25.9 KBytes
   [  5]   1.00-2.00   sec  3.56 MBytes  29.9 Mbits/sec  159   8.19 KBytes
   [  5]   2.00-3.00   sec  2.45 MBytes  20.6 Mbits/sec  191   2.73 KBytes
   [  5]   3.00-4.00   sec  1.17 MBytes  9.77 Mbits/sec  170   34.1 KBytes
   [  5]   4.00-5.00   sec  2.39 MBytes  20.1 Mbits/sec  224   8.19 KBytes
   [  5]   5.00-6.00   sec  2.45 MBytes  20.6 Mbits/sec  274   6.83 KBytes
   [  5]   6.00-7.00   sec  2.39 MBytes  20.1 Mbits/sec  170   2.73 KBytes
   [  5]   7.00-8.00   sec  2.45 MBytes  20.6 Mbits/sec  262   5.46 KBytes
   [  5]   8.00-9.00   sec  2.45 MBytes  20.6 Mbits/sec  260   5.46 KBytes
   [  5]   9.00-10.00  sec  2.42 MBytes  20.3 Mbits/sec  210   32.8 KBytes
   - - - - - - - - - - - - - - - - - - - - - - - - -
   [ ID] Interval           Transfer     Bitrate         Retr
   [  5]   0.00-10.00  sec  28.5 MBytes  23.9 Mbits/sec  2471  sender
   [  5]   0.00-10.04  sec  25.6 MBytes  21.4 Mbits/sec        receiver

As can be seen, ingress traffic of the ``netperf-server`` Pod has been limited to 20Mbit per second.

In order to introspect the current endpoint bandwidth settings from the BPF side, the following command can be run (replace ``cilium-xxxxx`` with the name of the Cilium Pod that is co-located with the ``netperf-server`` Pod):

.. code-block:: shell-session

   $ kubectl exec -it -n kube-system cilium-xxxxx -- cilium-dbg bpf bandwidth list
   IDENTITY   DIRECTION   PRIO   BANDWIDTH (BitsPerSec)
   724        Egress      0      10M
   724        Ingress     0      50M

Each Pod is represented in Cilium as an :ref:`endpoint` which has an identity. The above identity can then be correlated with the
``cilium-dbg endpoint list`` command.

.. note:: Bandwidth limits apply on a per-Pod scope. In our example, if multiple replicas of the Pod are created, then each of the Pod instances receives a 10M bandwidth limit.

.. _BBR Pods:

BBR for Pods
############

The base infrastructure around MQ/FQ setup provided by
https://github.com/cilium/cilium/blob/main//Documentation/network/kubernetes/bandwidth-manager.rst
main
cilium
[ 0.058881428092718124, 0.04825659841299057, -0.035210371017456055, -0.011283171363174915, -0.013197415508329868, -0.033069346100091934, 0.06488800048828125, -0.012625441886484623, 0.05818222090601921, 0.07983541488647461, -0.021788761019706726, -0.13275961577892303, -0.07686708867549896, -0...
0.168012
``cilium-dbg endpoint list`` command. .. note:: Bandwidth limits apply on a per-Pod scope. In our example, if multiple replicas of the Pod are created, then each of the Pod instances receives a 10M bandwidth limit. .. \_BBR Pods: BBR for Pods ############ The base infrastructure around MQ/FQ setup provided by Cilium's bandwidth manager also allows for use of TCP `BBR congestion control `\_ for Pods. BBR is in particular suitable when Pods are exposed behind Kubernetes Services which face external clients from the Internet. BBR achieves higher bandwidths and lower latencies for Internet traffic, for example, it has been `shown `\_ that BBR's throughput can reach as much as 2,700x higher than today's best loss-based congestion control and queueing delays can be 25x lower. .. note:: BBR for Pods requires a v5.18.x or more recent Linux kernel. To enable the bandwidth manager with BBR congestion control, deploy with the following: .. cilium-helm-upgrade:: :namespace: kube-system :extra-args: --reuse-values :set: bandwidthManager.enabled=true bandwidthManager.bbr=true :post-commands: kubectl -n kube-system rollout restart ds/cilium In order for BBR to work reliably for Pods, it requires a 5.18 or higher kernel. As outlined in our `Linux Plumbers 2021 talk `\_, this is needed since older kernels do not retain timestamps of network packets when switching from Pod to host network namespace. Due to the latter, the kernel's pacing infrastructure does not function properly in general (not specific to Cilium). We helped with fixing this issue for recent kernels to retain timestamps and therefore to get BBR for Pods working. Prior to that kernel, BBR was only working for sockets which are in the initial network namespace (hostns). BBR also needs eBPF Host-Routing in order to retain the network packet's socket association all the way until the packet hits the FQ queueing discipline on the physical device in the host namespace. 
(Without eBPF Host-Routing, the packet's socket association would otherwise be
orphaned inside the host stack's forwarding/routing layer.)

In order to verify whether the bandwidth manager with BBR has been enabled in
Cilium, the ``cilium status`` CLI command provides visibility again through
the ``BandwidthManager`` info line:

.. code-block:: shell-session

    $ kubectl -n kube-system exec ds/cilium -- cilium-dbg status | grep BandwidthManager
    BandwidthManager:       EDT with BPF [BBR] [eth0]

Once this setting is enabled, BBR is used as the default for all newly spawned
Pods. Ideally, BBR is selected upon initial Cilium installation when the
cluster is created, such that all nodes and Pods in the cluster homogeneously
use BBR, as otherwise there could be `potential unfairness issues`_ for other
connections still using CUBIC. Also note that due to the nature of BBR's
probing you might observe a higher rate of TCP retransmissions compared to
CUBIC.

We recommend using BBR in particular for clusters where Pods are exposed as
Services which serve external clients connecting from the Internet.

BBR for The Host
################

In legacy routing mode, it is not possible to enable BBR for Cilium-managed
pods (``hostNetwork: false``) for the reasons mentioned above; however, it is
possible to enable BBR for *only* the host network namespace by adding the
``bandwidthManager.bbrHostNamespaceOnly=true`` flag.

.. cilium-helm-upgrade::
   :namespace: kube-system
   :extra-args: --reuse-values
   :set: bandwidthManager.enabled=true bandwidthManager.bbr=true bandwidthManager.bbrHostNamespaceOnly=true
   :post-commands: kubectl -n kube-system rollout restart ds/cilium

With ``bandwidthManager.bbrHostNamespaceOnly``, processes in the host network
namespace, including pods that set ``hostNetwork`` to ``true``, will use BBR.

Limitations
###########

* Bandwidth enforcement currently does not work in combination with L7 Cilium
  Network Policies.
  In case they select the Pod at egress, then the bandwidth enforcement will
  be disabled for those Pods.

* Bandwidth enforcement doesn't work with nested network namespace
  environments like Kind. This is because they typically don't have access to
  the global sysctls under ``/proc/sys/net/core``, which the bandwidth
  enforcement depends on.

.. admonition:: Video
   :class: attention

   For more insights on Cilium's bandwidth manager, check out this `KubeCon
   talk on Better Bandwidth Management with eBPF`_ and `eCHO episode 98:
   Exploring the bandwidth manager with Cilium`_.
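The per-Pod limits discussed above are configured through the standard
Kubernetes bandwidth annotations on the Pod. A minimal Pod manifest sketch is
shown below; the annotation values match the example's 10M egress / 50M
ingress limits, while the Pod name and image are illustrative assumptions:

.. code-block:: yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: netperf-server
      annotations:
        # Standard Kubernetes bandwidth annotations; the bandwidth manager
        # enforces them at the host's FQ/EDT layer for this Pod.
        kubernetes.io/egress-bandwidth: "10M"
        kubernetes.io/ingress-bandwidth: "50M"
    spec:
      containers:
      - name: netperf
        image: cilium/netperf   # illustrative image name

Since the limits are per Pod, every replica created from a template carrying
these annotations receives its own independent limit.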
+--------------------+----------------+
| Cilium             | CNP and CCNP   |
| Version            | Schema Version |
+--------------------+----------------+
| v1.17.0-pre.0      | 1.30.1         |
+--------------------+----------------+
| v1.17.0-pre.1      | 1.30.2         |
+--------------------+----------------+
| v1.17.0-pre.2      | 1.30.4         |
+--------------------+----------------+
| v1.17.0-pre.3      | 1.30.5         |
+--------------------+----------------+
| v1.17.0-rc.0       | 1.30.6         |
+--------------------+----------------+
| v1.17.0-rc.1       | 1.30.6         |
+--------------------+----------------+
| v1.17.0-rc.2       | 1.30.7         |
+--------------------+----------------+
| v1.17.0            | 1.30.8         |
+--------------------+----------------+
| v1.17.1            | 1.30.8         |
+--------------------+----------------+
| v1.17.2            | 1.30.8         |
+--------------------+----------------+
| v1.17.3            | 1.30.8         |
+--------------------+----------------+
| v1.17.4            | 1.30.8         |
+--------------------+----------------+
| v1.17.5            | 1.30.8         |
+--------------------+----------------+
| v1.17.6            | 1.30.8         |
+--------------------+----------------+
| v1.17.7            | 1.30.8         |
+--------------------+----------------+
| v1.17.8            | 1.30.8         |
+--------------------+----------------+
| v1.17.9            | 1.30.8         |
+--------------------+----------------+
| v1.17.10           | 1.30.8         |
+--------------------+----------------+
| v1.17.11           | 1.30.8         |
+--------------------+----------------+
| v1.17              | 1.30.8         |
+--------------------+----------------+
| v1.18.0-pre.0      | 1.31.2         |
+--------------------+----------------+
| v1.18.0-pre.1      | 1.31.7         |
+--------------------+----------------+
| v1.18.0-pre.2      | 1.31.9         |
+--------------------+----------------+
| v1.18.0-pre.3      | 1.31.10        |
+--------------------+----------------+
| v1.18.0-rc.0       | 1.31.11        |
+--------------------+----------------+
| v1.18.0-rc.1       | 1.31.11        |
+--------------------+----------------+
| v1.18.0            | 1.31.11        |
+--------------------+----------------+
| v1.18.1            | 1.31.11        |
+--------------------+----------------+
| v1.18.2            | 1.31.11        |
+--------------------+----------------+
| v1.18.3            | 1.31.11        |
+--------------------+----------------+
| v1.18.4            | 1.31.11        |
+--------------------+----------------+
| v1.18.5            | 1.31.11        |
+--------------------+----------------+
| v1.18              | 1.31.11        |
+--------------------+----------------+
| v1.19.0-pre.0      | 1.32.1         |
+--------------------+----------------+
| v1.19.0-pre.1      | 1.32.2         |
+--------------------+----------------+
| v1.19.0-pre.2      | 1.32.3         |
+--------------------+----------------+
| v1.19.0-pre.3      | 1.32.4         |
+--------------------+----------------+
| v1.19.0-pre.4      | 1.32.4         |
+--------------------+----------------+
| latest / main      | 1.32.4         |
+--------------------+----------------+
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _egress-gateway:

**************
Egress Gateway
**************

The egress gateway feature routes all IPv4 and IPv6 connections originating
from pods and destined to specific cluster-external CIDRs through particular
nodes, from now on called "gateway nodes".

When the egress gateway feature is enabled and egress gateway policies are in
place, matching packets that leave the cluster are masqueraded with selected,
predictable IPs associated with the gateway nodes. As an example, this feature
can be used in combination with legacy firewalls to allow traffic to legacy
infrastructure only from specific pods within a given namespace. The pods
typically have ever-changing IP addresses, and even if masquerading was to be
used as a way to mitigate this, the IP addresses of nodes can also change
frequently over time.

This document explains how to enable the egress gateway feature and how to
configure egress gateway policies to route and SNAT the egress traffic for a
specific workload.

.. note:: This guide assumes that Cilium has been correctly installed in your
          Kubernetes cluster. Please see :ref:`k8s_quick_install` for more
          information. If unsure, run ``cilium status`` and validate that
          Cilium is up and running.

.. admonition:: Video
   :class: attention

   For more insights on Cilium's Egress Gateway, check out `eCHO episode 76:
   Cilium Egress Gateway`_.

Preliminary Considerations
==========================

Cilium must make use of network-facing interfaces and IP addresses present on
the designated gateway nodes. These interfaces and IP addresses must be
provisioned and configured by the operator based on their networking
environment. The process is highly dependent on said networking environment.
For example, in AWS/EKS, and depending on the requirements, this may mean
creating one or more Elastic Network Interfaces with one or more IP addresses
and attaching them to instances that serve as gateway nodes so that AWS can
adequately route traffic flowing from and to the instances. Other cloud
providers have similar networking requirements and constructs.

Additionally, the enablement of the egress gateway feature requires that both
BPF masquerading and the kube-proxy replacement are enabled.

Delay for enforcement of egress policies on new pods
----------------------------------------------------

When new pods are started, there is a delay before egress gateway policies are
applied for those pods. That means traffic from those pods may leave the
cluster with a source IP address (pod IP or node IP) that doesn't match the
egress gateway IP. That egressing traffic will also not be redirected through
the gateway node.

.. _egress-gateway-incompatible-features:

Incompatibility with other features
-----------------------------------

Because egress gateway isn't compatible with identity allocation mode
``kvstore``, you must use Kubernetes as Cilium's identity store
(``identityAllocationMode`` set to ``crd``). This is the default setting for
new installations.

Egress gateway is not compatible with the Cluster Mesh feature. The gateway
selected by an egress gateway policy must be in the same cluster as the
selected pods.

Egress gateway is not compatible with the CiliumEndpointSlice feature
(see :gh-issue:`24833` for details).

Enable egress gateway
=====================

The egress gateway feature and all of its requirements can be enabled as
follows:

.. tabs::

    .. group-tab:: Helm

        .. cilium-helm-upgrade::
           :namespace: kube-system
           :extra-args: --reuse-values
           :set: egressGateway.enabled=true bpf.masquerade=true kubeProxyReplacement=true

    .. group-tab:: ConfigMap
        .. code-block:: yaml

            enable-bpf-masquerade: true
            enable-egress-gateway: true
            kube-proxy-replacement: true

Rollout both the agent pods and the operator pods to make the changes
effective:

.. code-block:: shell-session

    $ kubectl rollout restart ds cilium -n kube-system
    $ kubectl rollout restart deploy cilium-operator -n kube-system

Writing egress gateway policies
===============================

The API provided by Cilium to drive the egress gateway feature is the
``CiliumEgressGatewayPolicy`` resource.

Metadata
--------

``CiliumEgressGatewayPolicy`` is a cluster-scoped custom resource definition,
so a ``.metadata.namespace`` field should not be specified:

.. code-block:: yaml

    apiVersion: cilium.io/v2
    kind: CiliumEgressGatewayPolicy
    metadata:
      name: example-policy

To target pods belonging to a given namespace, only labels/expressions should
be used instead (as described below).

Selecting source pods
---------------------

The ``selectors`` field of a ``CiliumEgressGatewayPolicy`` resource is used to
select source pods via a label selector. This can be done using
``matchLabels``:

.. code-block:: yaml

    selectors:
    - podSelector:
        matchLabels:
          labelKey: labelVal

It can also be done using ``matchExpressions``:

.. code-block:: yaml

    selectors:
    - podSelector:
        matchExpressions:
        - {key: testKey, operator: In, values: [testVal]}
        - {key: testKey2, operator: NotIn, values: [testVal2]}

Moreover, multiple ``podSelector`` entries can be specified:

.. code-block:: text

    selectors:
    - podSelector:
        [..]
    - podSelector:
        [..]

To select pods belonging to a given namespace, the special
``io.kubernetes.pod.namespace`` label should be used.

To only select pods on certain nodes, you can use the ``nodeSelector``:

.. code-block:: yaml

    selectors:
    - podSelector:
        matchLabels:
          labelKey: labelVal
      nodeSelector:
        matchLabels:
          nodeLabelKey: nodeLabelVal

.. note::

    Only security identities will be taken into account.
    See :ref:`identity-relevant-labels` for more information.

    ``nodeSelector`` cannot be used alone, it must be used together with
    ``podSelector``.
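Putting the selector options together, a single ``selectors`` entry can
combine a pod label, the special namespace label, and a node restriction. The
following fragment is an illustrative sketch (the ``app`` label, namespace,
and node label values are assumptions for this example):

.. code-block:: yaml

    selectors:
    - podSelector:
        matchLabels:
          app: mediabot                        # hypothetical pod label
          # special label matching the pod's namespace
          io.kubernetes.pod.namespace: default
      nodeSelector:
        matchLabels:
          kubernetes.io/hostname: node1        # restrict to pods on this node

Remember that ``nodeSelector`` only works alongside ``podSelector``, never on
its own.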
Selecting the destination
-------------------------

One or more destination CIDRs can be specified with ``destinationCIDRs``:

.. code-block:: yaml

    destinationCIDRs:
    - "a.b.c.d/32"
    - "e.f.g.0/24"
    - "a:b::/48"

.. note::

    Any IP belonging to these ranges which is also an internal cluster IP
    (e.g. pods, nodes, Kubernetes API server) will be excluded from the
    egress gateway SNAT logic.

It's possible to specify exceptions to the ``destinationCIDRs`` list with
``excludedCIDRs``:

.. code-block:: yaml

    destinationCIDRs:
    - "a.b.0.0/16"
    - "a:b::/48"
    excludedCIDRs:
    - "a.b.c.0/24"
    - "a:b:c::/64"

In this case traffic destined to the ``a.b.0.0/16`` CIDR, except for the
``a.b.c.0/24`` destination, will go through the egress gateway and leave the
cluster with the designated egress IP.

Selecting and configuring the gateway node
------------------------------------------

The node that should act as gateway node for a given policy can be configured
with the ``egressGateway`` field. The node is matched based on its labels,
with the ``nodeSelector`` field:

.. code-block:: yaml

    egressGateway:
      nodeSelector:
        matchLabels:
          testLabel: testVal

.. note::

    In case multiple nodes are a match for the given set of labels, the first
    node in lexical ordering based on their name will be selected.

.. note::

    If there is no match for the given set of labels, Cilium drops the
    traffic that matches the destination CIDR(s).

The IP address that should be used to SNAT traffic must also be configured.
There are 3 different ways this can be achieved:

1. By specifying the interface:

   .. code-block:: yaml

       egressGateway:
         nodeSelector:
           matchLabels:
             testLabel: testVal
         interface: ethX

   In this case the first IPv4 and IPv6 addresses assigned to the ``ethX``
   interface will be used.

2. By explicitly specifying the egress IP:

   .. code-block:: yaml

       egressGateway:
         nodeSelector:
           matchLabels:
             testLabel: testVal
         egressIP: a.b.c.d

   .. warning::

       The egress IP must be assigned to a network device on the node.

3.
   By omitting both ``egressIP`` and ``interface`` properties, which will
   make the agent use the first IPv4 and IPv6 addresses assigned to the
   interface for the default route:

   .. code-block:: yaml

       egressGateway:
         nodeSelector:
           matchLabels:
             testLabel: testVal

Regardless of which way the egress IP is configured, the user must ensure that
Cilium is running on the device that has the egress IP assigned to it, by
setting the ``--devices`` agent option accordingly.

.. warning::

    The ``egressIP`` and ``interface`` properties cannot both be specified in
    the ``egressGateway`` spec. Egress Gateway Policies that contain both of
    these properties will be ignored by Cilium.

.. note::

    When Cilium is unable to select the Egress IP for an Egress Gateway
    policy (for example because the specified ``egressIP`` is not configured
    for any network interface on the gateway node), the gateway node will
    drop traffic that matches the policy with the reason
    ``No Egress IP configured``.

.. note::

    After Cilium has selected the Egress IP for an Egress Gateway policy (or
    failed to do so), it does not automatically respond to a change in the
    gateway node's network configuration (for example if an IP address is
    added or deleted). You can force a fresh selection by re-applying the
    Egress Gateway policy.

Selecting multiple gateway nodes
--------------------------------

It's possible to select multiple gateway nodes in the same policy. In this
case, the gateway nodes can be configured using the ``egressGateways`` list
field. Entries in this list have the exact same configuration options as the
``egressGateway`` field:

.. code-block:: yaml

    egressGateways:
    - nodeSelector:
        matchLabels:
          testLabel: testVal1
    - nodeSelector:
        matchLabels:
          testLabel: testVal2

.. note::

    The same restrictions as with the ``egressGateway`` field apply to each
    item of the ``egressGateways`` list.

.. note::

    When using multiple gateways, the source endpoints matched by the policy
    will still egress traffic through a single gateway, not all of them. The
    endpoints will be assigned to a gateway based on their CiliumEndpoint's
    UID.
    Hence, an endpoint should use the same gateway during its lifetime, as
    long as the gateway nodes matched by the ``nodeSelector`` fields don't
    change. If a ``nodeSelector`` field is added, removed, or modified, or if
    a node matching one of the ``nodeSelector`` fields is added or removed,
    the list of gateways will change and the endpoints will be reassigned.

.. warning::

    As with single-gateway policies, changing the gateway node will break
    existing egress connections. Please see :gh-issue:`39245`, which tracks
    this issue.

Example policy
--------------

Below is an example of a ``CiliumEgressGatewayPolicy`` resource that conforms
to the specification above:

.. code-block:: yaml

    apiVersion: cilium.io/v2
    kind: CiliumEgressGatewayPolicy
    metadata:
      name: egress-sample
    spec:
      # Specify which pods should be subject to the current policy.
      # Multiple pod selectors can be specified.
      selectors:
      - podSelector:
          matchLabels:
            org: empire
            class: mediabot
            # The following label selects default namespace
            io.kubernetes.pod.namespace: default
        nodeSelector:
          # optional, if not specified the policy applies to all nodes
          matchLabels:
            node.kubernetes.io/name: node1 # only traffic from this node will be SNATed

      # Specify which destination CIDR(s) this policy applies to.
      # Multiple CIDRs can be specified.
      destinationCIDRs:
      - "0.0.0.0/0"
      - "::/0"

      # Configure the gateway node.
      egressGateway:
        # Specify which node should act as gateway for this policy.
        nodeSelector:
          matchLabels:
            node.kubernetes.io/name: node2

        # Specify the IP address used to SNAT traffic matched by the policy.
        # It must exist as an IP associated with a network interface on the
        # instance.
        egressIP: 10.168.60.100

        # Alternatively it's possible to specify the interface to be used for
        # egress traffic. In this case the first IPv4 and IPv6 addresses
        # assigned to that interface will be used as egress IP.
        # interface: enp0s8

Creating the ``CiliumEgressGatewayPolicy`` resource above would cause all
traffic originating from pods with the ``org: empire`` and ``class: mediabot``
labels in the ``default`` namespace on node ``node1`` and destined to
``0.0.0.0/0`` or ``::/0`` (i.e. all traffic leaving the cluster) to be routed
through the gateway node with the ``node.kubernetes.io/name: node2`` label,
which will then SNAT said traffic with the ``10.168.60.100`` egress IP.

Selection of the egress network interface
=========================================

For gateway nodes with multiple network interfaces, Cilium selects the egress
network interface based on the node's routing setup (``ip route get from``).

Testing the egress gateway feature
==================================

In this section we are going to show the necessary steps to test the feature.
First we deploy a pod that connects to a cluster-external service. Then we
apply a ``CiliumEgressGatewayPolicy`` and observe that the pod's connection
gets redirected through the Gateway node. We assume a 2-node cluster with IPs
``192.168.60.11`` (node1) and ``192.168.60.12`` (node2). The client pod gets
deployed to node1, and the CEGP selects node2 as Gateway node.

Create an external service (optional)
-------------------------------------

If you don't have an external service to experiment with, you can use Nginx,
as the server access logs will show from which IP address the request is
coming.

Create an nginx service on a Linux node that is external to the existing
Kubernetes cluster, and use it as the destination of the egress traffic:

.. code-block:: shell-session

    $ # Install and start nginx
    $ sudo apt install nginx
    $ sudo systemctl start nginx

In this example, the IP associated with the host running the Nginx instance
will be ``192.168.60.13``.
Deploy client pods
------------------

Deploy a client pod that will be used to connect to the Nginx instance:

.. parsed-literal::

    $ kubectl create -f |SCM_WEB|/examples/kubernetes-dns/dns-sw-app.yaml
    $ kubectl get pods
    NAME                      READY   STATUS    RESTARTS   AGE
    pod/mediabot              1/1     Running   0          14s

    $ kubectl exec mediabot -- curl http://192.168.60.13:80

Verify from the Nginx access log (or other external services) that the
request is coming from one of the nodes in the Kubernetes cluster. In this
example the access logs should contain something like:

.. code-block:: shell-session

    $ tail /var/log/nginx/access.log
    [...]
    192.168.60.11 - - [04/Apr/2021:22:06:57 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1"

Since the client pod is running on the node ``192.168.60.11``, it is expected
that, without any Cilium egress gateway policy in place, traffic will leave
the cluster with the IP of the node.

Apply egress gateway policy
---------------------------

Download the ``egress-sample`` Egress Gateway Policy yaml:

.. parsed-literal::

    $ wget |SCM_WEB|/examples/kubernetes-egress-gateway/egress-gateway-policy.yaml

Modify the ``destinationCIDRs`` to include the IP of the host where your
designated external service is running on.

Specifying an IP address in the ``egressIP`` field is optional. To make
things easier in this example, it is possible to comment out that line. This
way, the agent will use the first IPv4 and IPv6 addresses assigned to the
interface for the default route.

To let the policy select the node designated to be the Egress Gateway, apply
the label ``egress-node: true`` to it:

.. code-block:: shell-session

    $ kubectl label nodes egress-node=true

Note that the Egress Gateway node should be a different node from the one
where the ``mediabot`` pod is running on.

Apply the ``egress-sample`` egress gateway policy, which will cause all
traffic from the mediabot pod to leave the cluster with the IP of the Egress
Gateway node:
.. code-block:: shell-session

    $ kubectl apply -f egress-gateway-policy.yaml

Verify the setup
----------------

We can now verify with the client pod that the policy is working correctly:

.. code-block:: shell-session

    $ kubectl exec mediabot -- curl http://192.168.60.13:80
    [...]

The access log from Nginx should show that the request is coming from the
selected Egress IP rather than the one of the node where the pod is running:

.. code-block:: shell-session

    $ tail /var/log/nginx/access.log
    [...]
    192.168.60.100 - - [04/Apr/2021:22:06:57 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1"

Troubleshooting
---------------

To troubleshoot a policy that is not behaving as expected, you can view the
egress configuration in a Cilium agent (the configuration is propagated to
all agents, so it shouldn't matter which one you pick):

.. code-block:: shell-session

    $ kubectl -n kube-system exec ds/cilium -- cilium-dbg bpf egress list
    Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), wait-for-node-init (init), clean-cilium-state (init)
    Source IP      Destination CIDR    Egress IP   Gateway IP
    192.168.2.23   192.168.60.13/32    0.0.0.0     192.168.60.12

The Source IP address matches the IP address of each pod that matches the
policy's ``podSelector``. The Gateway IP address matches the (internal) IP
address of the egress node that matches the policy's ``nodeSelector``. The
Egress IP is 0.0.0.0 on all agents except for the one running on the egress
gateway node, where you should see the Egress IP address being used for this
traffic (which will be the ``egressIP`` from the policy, if specified).
If the egress list shown does not contain entries matching your policy, check
that the pod(s) and egress node are labeled correctly to match the policy
selectors.

Troubleshooting SNAT Connection Limits
--------------------------------------

For more advanced troubleshooting topics, please see the advanced egress
gateway troubleshooting topic for
:ref:`SNAT connection limits <snat_connection_limits>`.
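As a mental model for the ``bpf egress list`` output above, the datapath's
decision can be thought of as a lookup from (source pod IP, destination IP)
to a (egress IP, gateway IP) entry, with the most specific destination CIDR
winning. The sketch below is a simplified illustration, not Cilium's actual
implementation; the map contents mirror the example output:

```python
import ipaddress

# Simplified stand-in for the BPF egress policy map shown above:
# (source pod IP, destination CIDR) -> (egress IP, gateway IP).
EGRESS_MAP = {
    ("192.168.2.23", "192.168.60.13/32"): ("0.0.0.0", "192.168.60.12"),
}

def lookup(src_ip: str, dst_ip: str):
    """Return (egress IP, gateway IP) for the longest matching CIDR, if any."""
    best = None
    for (src, cidr), entry in EGRESS_MAP.items():
        net = ipaddress.ip_network(cidr)
        if src == src_ip and ipaddress.ip_address(dst_ip) in net:
            if best is None or net.prefixlen > best[0]:
                best = (net.prefixlen, entry)
    return best[1] if best else None

# Traffic from the matched pod to the external service hits the policy entry:
print(lookup("192.168.2.23", "192.168.60.13"))  # ('0.0.0.0', '192.168.60.12')
# Traffic to other destinations is not subject to the policy:
print(lookup("192.168.2.23", "8.8.8.8"))        # None
```

On the gateway node itself, the Egress IP column would hold the actual SNAT
address instead of ``0.0.0.0``, as described above.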
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _egress_gateway_troubeshooting:

Egress Gateway Advanced Troubleshooting
=======================================

This document explains various issues users may encounter when using egress
gateway.

.. _snat_connection_limits:

SNAT Connection Limits
----------------------

In use cases where egress gateway is being used to masquerade traffic to a
small set of remote endpoints, it's possible to cause issues by exceeding the
maximum number of source ports that can be allocated by Cilium's NAT mapping
per remote endpoint. This can cause issues with existing connections, as old
connections are automatically evicted to accommodate new connections.

Example Scenario
----------------

Imagine you have a Kubernetes cluster using Cilium's egress gateway with a
policy configured such that egress IP ``10.1.0.0`` is used to masquerade
external connections to a server on address ``10.2.0.0:8080``, which is
behind a firewall. The firewall only allows connections through that match
the source IP ``10.1.0.0``.

Many clients on the cluster will connect to the backend server via the same
tuple of ``{egress-IP, remote endpoint IP, remote endpoint Port}`` =>
``{10.1.0.0, 10.2.0.0, 8080}``. These connections will have the same source
IP and destination IP & port. In Cilium's datapath, each connection to this
destination will be mapped using a unique source port. If too many
connections are made through the egress gateway node, Cilium's SNAT map can
reach capacity, which will result in old connections not being tracked,
causing connectivity issues.

The limit is equal to the difference between the max NAT node port value
(65535) and the upper bound of ``--node-port-range`` (default: 32767).
By default, an egress-gateway Node can handle 65535 - 32767 = 32768 possible connections to a common remote endpoint address, using the same egress IP. High SNAT port mapping utilization can also result in egress-gateway connection failures as Cilium's SNAT mapping fails to find available source ports for masquerade SNAT. Cilium agent stores stats about the top 30 such connection tuples, this can be accessed inside a cilium agent container using the ``cilium-dbg`` utility. .. code-block:: shell-session $ kubectl -n kube-system exec ds/cilium -- cilium-dbg shell -- db/show nat-stats # IPFamily Proto EgressIP RemoteAddr Count ipv4 TCP 10.244.1.160 10.244.3.174:4240 1 ipv4 ICMP 172.18.0.2 172.18.0.3 1 ipv4 TCP 172.18.0.2 172.18.0.3:4240 1 ipv4 TCP 172.18.0.2 172.18.0.4:6443 50 ipv4 TCP 172.18.0.2 104.198.14.52:443 294 ipv6 ICMPv6 [fc00:c111::2] [fc00:c111::3] 1 ipv6 TCP [fd00:10:244:1::ec5d] [fd00:10:244:3::730c]:4240 1 ipv6 TCP [fd00:10:244:1::ec5d] [fd00:10:244::915]:4240 1 \*\*Note\*\*: These stats are re-calculated every 30 seconds by default. So there is a delay between new connections occurring and when the stats are updated. If you observe one or more row having a very large connection count (i.e. approaching the default connection limit: 32768), then this may indicate SNAT connection overflow issues. Because this problem is a result of hitting a hard limit on Cilium's Egress Gateway functionality, the only solution is to reduce the number of connections that are being SNATed through an egress-gateway, This can be done by having clients avoid creating as many new connections, or by lowering the amount of connections going to the same remote address (with a common egress IP) by splitting up traffic via different egress IPs and/or remote endpoint addresses. 
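The per-tuple budget and its saturation can be sketched with a few lines of
Python. This is an illustrative calculation only: the constants mirror the
documented defaults above and are not read from a live cluster, and the
function names are hypothetical.

```python
# Illustrative calculation of the per-endpoint SNAT connection budget
# described above. Values mirror the documented defaults; they are not
# read from a live cluster.

MAX_NAT_PORT = 65535           # upper bound of the NAT source-port space
NODE_PORT_RANGE_UPPER = 32767  # default upper bound of --node-port-range


def snat_capacity(max_nat_port=MAX_NAT_PORT, node_port_upper=NODE_PORT_RANGE_UPPER):
    """Max concurrent connections per {egress IP, remote IP, remote port} tuple."""
    return max_nat_port - node_port_upper


def snat_utilization(count, capacity=None):
    """Saturation of one tuple as a percentage of the available source ports."""
    capacity = capacity if capacity is not None else snat_capacity()
    return 100.0 * count / capacity


print(snat_capacity())                   # 32768
print(round(snat_utilization(294), 2))   # the 104.198.14.52:443 row above
```

A count approaching ``snat_capacity()`` for any single row of ``nat-stats``
is the overflow condition described in this section.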
For alerting and observability on SNAT source port utilization, see the
:ref:`NAT endpoint max connection ` metric, which tracks the top saturation
(as a percentage of the total available) per Cilium agent.
https://github.com/cilium/cilium/blob/main//Documentation/network/egress-gateway/egress-gateway-troubleshooting.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use
   the official rendered version released here: https://docs.cilium.io

To install Cilium on `ACK (Alibaba Cloud Container Service for Kubernetes) `_,
perform the following steps:

**Disable ACK CNI (ACK Only):**

If you are running an ACK cluster, you should delete the ACK CNI. Cilium will
manage ENIs instead of the ACK CNI, so any running DaemonSet from the list
below has to be deleted to prevent conflicts.

- ``kube-flannel-ds``
- ``terway``
- ``terway-eni``
- ``terway-eniip``

.. note::

   If you are using ACK with Flannel (DaemonSet ``kube-flannel-ds``), the
   Cloud Controller Manager (CCM) will create a route (Pod CIDR) in the VPC.
   If your cluster is a managed Kubernetes cluster, you cannot disable this
   behavior. Please consider creating a new cluster.

.. code-block:: shell-session

   kubectl -n kube-system delete daemonset

The next step is to remove the CRDs below, created by the ``terway*`` CNI:

.. code-block:: shell-session

   kubectl delete crd \
     ciliumclusterwidenetworkpolicies.cilium.io \
     ciliumendpoints.cilium.io \
     ciliumidentities.cilium.io \
     ciliumnetworkpolicies.cilium.io \
     ciliumnodes.cilium.io \
     bgpconfigurations.crd.projectcalico.org \
     clusterinformations.crd.projectcalico.org \
     felixconfigurations.crd.projectcalico.org \
     globalnetworkpolicies.crd.projectcalico.org \
     globalnetworksets.crd.projectcalico.org \
     hostendpoints.crd.projectcalico.org \
     ippools.crd.projectcalico.org \
     networkpolicies.crd.projectcalico.org

**Create AlibabaCloud Secrets:**

Before installing Cilium, a new Kubernetes Secret with the AlibabaCloud tokens
needs to be added to your Kubernetes cluster. This Secret will allow Cilium to
gather information from the AlibabaCloud API, which is needed to implement
ToGroups policies.

**AlibabaCloud Access Keys:**

To create a new access token, the `following guide can be used `_. These keys
need to have certain `RAM Permissions `_:

.. code-block:: json

   {
     "Version": "1",
     "Statement": [{
       "Action": [
         "ecs:CreateNetworkInterface",
         "ecs:DescribeNetworkInterfaces",
         "ecs:AttachNetworkInterface",
         "ecs:DetachNetworkInterface",
         "ecs:DeleteNetworkInterface",
         "ecs:DescribeInstanceAttribute",
         "ecs:DescribeInstanceTypes",
         "ecs:AssignPrivateIpAddresses",
         "ecs:UnassignPrivateIpAddresses",
         "ecs:DescribeInstances",
         "ecs:DescribeSecurityGroups",
         "ecs:ListTagResources"
       ],
       "Resource": [ "*" ],
       "Effect": "Allow"
     }, {
       "Action": [
         "vpc:DescribeVSwitches",
         "vpc:ListTagResources",
         "vpc:DescribeVpcs"
       ],
       "Resource": [ "*" ],
       "Effect": "Allow"
     }]
   }

As soon as you have the access tokens, the following Secret needs to be added,
with each empty string replaced by the associated value as a base64-encoded
string:

.. code-block:: yaml

   apiVersion: v1
   kind: Secret
   metadata:
     name: cilium-alibabacloud
     namespace: kube-system
   type: Opaque
   data:
     ALIBABA_CLOUD_ACCESS_KEY_ID: ""
     ALIBABA_CLOUD_ACCESS_KEY_SECRET: ""

The ``base64`` command line utility can be used to generate each value, for
example:

.. code-block:: shell-session

   $ echo -n "access_key" | base64
   YWNjZXNzX2tleQ==

This Secret stores the AlibabaCloud credentials, which will be used to connect
to the AlibabaCloud API:

.. code-block:: shell-session

   $ kubectl create -f cilium-secret.yaml

**Install Cilium:**

Install the Cilium release via Helm:

.. cilium-helm-install::
   :namespace: kube-system
   :set: alibabacloud.enabled=true ipam.mode=alibabacloud enableIPv4Masquerade=false routingMode=native

.. note::

   You must ensure that the security groups associated with the ENIs
   (``eth1``, ``eth2``, ...) allow egress traffic to go outside of the VPC.
   By default, the security groups for pod ENIs are derived from the primary
   ENI (``eth0``).
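The base64 encoding shown above with ``echo -n | base64`` can equally be done
programmatically. A minimal sketch in Python — the key material here is a
placeholder for illustration, not a real credential:

```python
import base64


def encode_secret_value(value: str) -> str:
    """Base64-encode a string the way `echo -n "..." | base64` does."""
    return base64.b64encode(value.encode()).decode()


# Placeholder credential for illustration only.
print(encode_secret_value("access_key"))  # YWNjZXNzX2tleQ==
```

Note that ``echo -n`` (no trailing newline) matters: encoding ``"access_key\n"``
would produce a different value, and the AlibabaCloud API would reject the
decoded credential.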
https://github.com/cilium/cilium/blob/main//Documentation/installation/alibabacloud-eni.rst
Restart unmanaged Pods
======================

If you did not create a cluster with the nodes tainted with the taint
``node.cilium.io/agent-not-ready``, then unmanaged pods need to be restarted
manually. Restart all already running pods which are not running in
host-networking mode to ensure that Cilium starts managing them. This is
required to ensure that all pods which have been running before Cilium was
deployed have network connectivity provided by Cilium and NetworkPolicy
applies to them:

.. code-block:: shell-session

   $ kubectl get pods --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '<none>' | awk '{print "-n "$1" "$2}' | xargs -L 1 -r kubectl delete pod
   pod "event-exporter-v0.2.3-f9c896d75-cbvcz" deleted
   pod "fluentd-gcp-scaler-69d79984cb-nfwwk" deleted
   pod "heapster-v1.6.0-beta.1-56d5d5d87f-qw8pv" deleted
   pod "kube-dns-5f8689dbc9-2nzft" deleted
   pod "kube-dns-5f8689dbc9-j7x5f" deleted
   pod "kube-dns-autoscaler-76fcd5f658-22r72" deleted
   pod "kube-state-metrics-7d9774bbd5-n6m5k" deleted
   pod "l7-default-backend-6f8697844f-d2rq2" deleted
   pod "metrics-server-v0.3.1-54699c9cc8-7l5w2" deleted

.. note::

   This may error out on macOS due to ``-r`` being unsupported by ``xargs``.
   In this case you can safely run this command without ``-r``, with the
   symptom that it will hang if there are no pods to restart. You can stop
   this with ``Ctrl-C``.
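The shell pipeline above selects every pod whose ``.spec.hostNetwork`` field
is unset. The same filter can be sketched in Python over ``kubectl ... -o
json`` output; the sample data and function name below are hypothetical, for
illustration only:

```python
import json

# Hypothetical, reduced sample of `kubectl get pods -A -o json` output:
# one regular pod and one host-network pod (which must not be restarted).
pods_json = """
{"items": [
  {"metadata": {"namespace": "kube-system", "name": "kube-dns-5f8689dbc9-2nzft"},
   "spec": {}},
  {"metadata": {"namespace": "kube-system", "name": "cilium-s8w5m"},
   "spec": {"hostNetwork": true}}
]}
"""


def pods_to_restart(doc: str):
    """Return (namespace, name) for every pod not in host-networking mode."""
    items = json.loads(doc)["items"]
    return [
        (p["metadata"]["namespace"], p["metadata"]["name"])
        for p in items
        if not p["spec"].get("hostNetwork", False)
    ]


print(pods_to_restart(pods_json))  # [('kube-system', 'kube-dns-5f8689dbc9-2nzft')]
```

Host-network pods are excluded because they bypass the pod network entirely
and are unaffected by the CNI change.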
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-restart-pods.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use
   the official rendered version released here: https://docs.cilium.io

.. _k8s_install_portmap:

******************
Portmap (HostPort)
******************

Starting with Cilium 1.8, the Kubernetes HostPort feature is supported
natively through Cilium's eBPF-based kube-proxy replacement, so CNI chaining
is no longer needed for it. For more information, see section
:ref:`kubeproxyfree_hostport`.

However, when Cilium is deployed with ``kubeProxyReplacement=false``, the
HostPort feature can be enabled via CNI chaining with the portmap plugin,
which implements HostPort. This guide documents how to enable the latter for
the chaining case.

For more general information about the Kubernetes HostPort feature, check out
the upstream documentation: `Kubernetes hostPort-CNI plugin documentation `_.

.. note::

   Before using HostPort, read the `Kubernetes Configuration Best Practices `_
   to understand the implications of this feature.

Deploy Cilium with the portmap plugin enabled
=============================================

Install the ``portmap`` binaries. Some Kubernetes distributions will do this
for you, in which case you don't need to do anything. However, if ``portmap``
is not available on your worker nodes, you must install it into
``/opt/cni/bin/``. You can find binaries on the `CNI project releases page `_.

.. include:: k8s-install-download-release.rst

Deploy the Cilium release via Helm:

.. cilium-helm-install::
   :namespace: kube-system
   :set: cni.chainingMode=portmap

.. note::

   You can combine the ``cni.chainingMode=portmap`` option with any of the
   other installation guides.

As Cilium is deployed as a DaemonSet, it will write a new CNI configuration
which enables HostPort. Any newly scheduled pod is then able to make use of
the HostPort functionality.

Restart existing pods
=====================

The new CNI chaining configuration will *not* apply to any pod that is
already running in the cluster. Existing pods will be reachable, and Cilium
will load-balance to them, but policy enforcement will not apply to them and
load-balancing is not performed for traffic originating from them. You must
restart these pods in order to invoke the chaining configuration on them.
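For orientation, a chained CNI configuration generally takes the shape below.
This is an illustrative sketch only — Cilium writes the actual file itself
when ``cni.chainingMode=portmap`` is set, and the exact file name and fields
it generates may differ:

```json
{
  "cniVersion": "0.3.1",
  "name": "cilium",
  "plugins": [
    {
      "type": "cilium-cni"
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

The runtime invokes each plugin in the ``plugins`` list in order, so
``portmap`` sets up the HostPort DNAT rules after ``cilium-cni`` has wired the
pod's interface.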
https://github.com/cilium/cilium/blob/main//Documentation/installation/cni-chaining-portmap.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use
   the official rendered version released here: https://docs.cilium.io

.. _k0s_install:

*****************************
Installation k0s Using k0sctl
*****************************

This guide walks you through the installation of Cilium on `k0s `_, an open
source, all-inclusive Kubernetes distribution, which is configured with all
of the features needed to build a Kubernetes cluster. Cilium is presently
supported on the amd64 and arm64 architectures.

Install a Master Node
=====================

Ensure you have the k0sctl binary installed locally.

Set up your VMs: how to do this is out of the scope of this guide, please
refer to your favorite virtualization tool. After deploying the VMs, export
their IP addresses to environment variables (see the example below). These
will be used in a later step.

.. code-block:: shell-session

   export node1_IP=192.168.2.1
   export node2_IP=192.168.2.2
   export node3_IP=192.168.2.3

Prepare the YAML configuration file k0sctl will use:

.. code-block:: shell-session

   # The following command assumes the user has deployed 3 VMs
   # with the default user "k0s" using the default ssh-key (without passphrase)
   k0sctl init --k0s -n "myk0scluster" -u "k0s" -i "~/.ssh/id_rsa" -C "1" "${node1_IP}" "${node2_IP}" "${node3_IP}" > k0s-myk0scluster-config.yaml

The next step is editing ``k0s-myk0scluster-config.yaml``::

   # replace
   ...
   provider: kuberouter
   ...
   # with
   ...
   provider: custom
   ...

Finally, apply the config file:

.. code-block:: shell-session

   k0sctl apply --config k0s-myk0scluster-config.yaml --no-wait

.. note::

   If running Cilium in :ref:`kubeproxy-free` mode, disable kube-proxy in the
   k0s config file:

   .. code-block:: shell-session

      # edit k0s-myk0scluster-config.yaml
      # replace
      ...
      network:
        kubeProxy:
          disabled: false
      ...
      # with
      ...
      network:
        kubeProxy:
          disabled: true
      ...

Configure Cluster Access
========================

For the Cilium CLI to access the cluster in successive steps, generate the
``kubeconfig`` file, store it in ``~/.kube/k0s-mycluster.config``, and set
the ``KUBECONFIG`` environment variable:

.. code-block:: shell-session

   k0sctl kubeconfig --config k0s-myk0scluster-config.yaml > ~/.kube/k0s-mycluster.config
   export KUBECONFIG=~/.kube/k0s-mycluster.config

Install Cilium
==============

.. include:: cli-download.rst

Install Cilium by running:

.. parsed-literal::

   cilium install |CHART_VERSION|

Validate the Installation
=========================

.. include:: cli-status.rst
.. include:: cli-connectivity-test.rst
.. include:: next-steps.rst
https://github.com/cilium/cilium/blob/main//Documentation/installation/k0s.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use
   the official rendered version released here: https://docs.cilium.io

.. _rancher_desktop_install:

**********************************
Installation Using Rancher Desktop
**********************************

This guide walks you through the installation of Cilium on `Rancher Desktop `_,
an open-source desktop application for Mac, Windows and Linux.

Configure Rancher Desktop
=========================

.. include:: rancher-desktop-configure.rst

Install Cilium
==============

.. include:: cli-download.rst

Install Cilium by running:

.. parsed-literal::

   cilium install |CHART_VERSION|

Validate the Installation
=========================

.. include:: cli-status.rst
.. include:: cli-connectivity-test.rst
.. include:: next-steps.rst
https://github.com/cilium/cilium/blob/main//Documentation/installation/rancher-desktop.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use
   the official rendered version released here: https://docs.cilium.io

.. _gs_kind:

***********************
Installation Using Kind
***********************

This guide uses `kind `_ to demonstrate deployment and operation of Cilium in
a multi-node Kubernetes cluster running locally on Docker.

Install Dependencies
====================

.. include:: kind-install-deps.rst

Configure kind
==============

.. include:: kind-configure.rst

Create a cluster
================

.. include:: kind-create-cluster.rst

.. _kind_install_cilium:

Install Cilium
==============

.. include:: k8s-install-download-release.rst
.. include:: kind-preload.rst

Then, install the Cilium release via Helm:

.. cilium-helm-install::
   :namespace: kube-system
   :set: image.pullPolicy=IfNotPresent ipam.mode=kubernetes

.. note::

   To enable Cilium's Socket LB (:ref:`kubeproxy-free`), cgroup v2 needs to
   be enabled, and kind nodes need to run in separate `cgroup namespaces `__.
   These namespaces need to be different from the cgroup namespace of the
   underlying host so that Cilium can attach BPF programs at the right cgroup
   hierarchy. To verify this, run the following commands, and ensure that the
   cgroup values are different:

   .. code-block:: shell-session

      $ docker exec kind-control-plane ls -al /proc/self/ns/cgroup
      lrwxrwxrwx 1 root root 0 Jul 20 19:20 /proc/self/ns/cgroup -> 'cgroup:[4026532461]'

      $ docker exec kind-worker ls -al /proc/self/ns/cgroup
      lrwxrwxrwx 1 root root 0 Jul 20 19:20 /proc/self/ns/cgroup -> 'cgroup:[4026532543]'

      $ ls -al /proc/self/ns/cgroup
      lrwxrwxrwx 1 root root 0 Jul 19 09:38 /proc/self/ns/cgroup -> 'cgroup:[4026531835]'

   One way to enable cgroup v2 is to set the kernel parameter
   ``systemd.unified_cgroup_hierarchy=1``. To enable cgroup namespaces, the
   container runtime needs to be configured accordingly. For example, in
   Docker, dockerd's ``--default-cgroupns-mode`` has to be set to ``private``.

   Another requirement for the Socket LB on kind to function properly is that
   either the cgroup v1 controllers ``net_cls`` and ``net_prio`` are disabled
   (or cgroup v1 is disabled altogether, e.g. by setting the kernel parameter
   ``cgroup_no_v1="all"``), or the host kernel is 5.14 or more recent and
   includes this `fix `__. See the `Pull Request `__ for more details.

.. include:: k8s-install-validate.rst
.. include:: next-steps.rst

Attaching a Debugger
====================

Cilium's kind configuration enables access to Delve debug server instances
running in the agent and operator pods by default. See :ref:`gs_debugging` to
learn how to use it.

Troubleshooting
===============

Unable to contact k8s api-server
--------------------------------

In the :ref:`Cilium agent logs ` you will see::

   level=info msg="Establishing connection to apiserver" host="https://10.96.0.1:443" subsys=k8s
   level=error msg="Unable to contact k8s api-server" error="Get https://10.96.0.1:443/api/v1/namespaces/kube-system: dial tcp 10.96.0.1:443: connect: no route to host" ipAddr="https://10.96.0.1:443" subsys=k8s
   level=fatal msg="Unable to initialize Kubernetes subsystem" error="unable to create k8s client: unable to create k8s client: Get https://10.96.0.1:443/api/v1/namespaces/kube-system: dial tcp 10.96.0.1:443: connect: no route to host" subsys=daemon

As kind is running nodes as containers in Docker, they share your host
machine's kernel. If the socket LB wasn't disabled, the eBPF programs
attached by Cilium may be out of date and no longer routing api-server
requests to the current ``kind-control-plane`` container. Recreating the kind
cluster and using the Helm command in :ref:`kind_install_cilium` will detach
the inaccurate eBPF programs.

Crashing Cilium agent pods
--------------------------

Check if the Cilium agent pods are crashing with the following logs. This may
indicate that you are deploying a kind cluster in an environment where Cilium
is already running (for example, in the Cilium development VM). This can also
happen if you have other overlapping BPF ``cgroup`` type programs attached to
the parent ``cgroup`` hierarchy of the kind container nodes. In such cases,
either tear down Cilium, or manually detach the overlapping BPF ``cgroup``
programs running in the parent ``cgroup`` hierarchy by following the
`bpftool documentation `_. For more information, see the `Pull Request `__.

::

   level=warning msg="+ bpftool cgroup attach /var/run/cilium/cgroupv2 connect6 pinned /sys/fs/bpf/tc/globals/cilium_cgroups_connect6" subsys=datapath-loader
   level=warning msg="Error: failed to attach program" subsys=datapath-loader
   level=warning msg="+ RETCODE=255" subsys=datapath-loader

.. _gs_kind_cluster_mesh:
https://github.com/cilium/cilium/blob/main//Documentation/installation/kind.rst
Cluster Mesh
============

With kind we can simulate Cluster Mesh in a sandbox too.

Kind Configuration
------------------

This time we need to create two config files, one for each Kubernetes
cluster. We will explicitly configure their ``pod-network-cidr`` and
``service-cidr`` so that they do not overlap.

Example ``kind-cluster1.yaml``:

.. code-block:: yaml

   kind: Cluster
   apiVersion: kind.x-k8s.io/v1alpha4
   nodes:
     - role: control-plane
     - role: worker
     - role: worker
     - role: worker
   networking:
     disableDefaultCNI: true
     podSubnet: "10.0.0.0/16"
     serviceSubnet: "10.1.0.0/16"

Example ``kind-cluster2.yaml``:

.. code-block:: yaml

   kind: Cluster
   apiVersion: kind.x-k8s.io/v1alpha4
   nodes:
     - role: control-plane
     - role: worker
     - role: worker
     - role: worker
   networking:
     disableDefaultCNI: true
     podSubnet: "10.2.0.0/16"
     serviceSubnet: "10.3.0.0/16"

Create Kind Clusters
--------------------

We can now create the respective clusters:

.. code-block:: shell-session

   kind create cluster --name=cluster1 --config=kind-cluster1.yaml
   kind create cluster --name=cluster2 --config=kind-cluster2.yaml

Setting up Cluster Mesh
-----------------------

We can deploy Cilium, and complete the setup by following the Cluster Mesh
guide at :ref:`gs_clustermesh`. For kind, we'll want to deploy the
``NodePort`` service into the ``kube-system`` namespace.
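The non-overlap requirement on the two clusters' CIDRs can be checked
mechanically before creating the clusters. A minimal sketch using Python's
standard ``ipaddress`` module, with the subnet values taken from the example
configs above:

```python
from ipaddress import ip_network
from itertools import combinations

# Pod and service subnets from the two example kind configs above.
subnets = {
    "cluster1-pods": ip_network("10.0.0.0/16"),
    "cluster1-services": ip_network("10.1.0.0/16"),
    "cluster2-pods": ip_network("10.2.0.0/16"),
    "cluster2-services": ip_network("10.3.0.0/16"),
}

# Every pair must be disjoint for Cluster Mesh routing to be unambiguous.
for (name_a, net_a), (name_b, net_b) in combinations(subnets.items(), 2):
    assert not net_a.overlaps(net_b), f"{name_a} overlaps {name_b}"

print("no overlapping CIDRs")
```

``ip_network.overlaps()`` catches both identical ranges and one range
containing the other, which is exactly the conflict that would break
cross-cluster routing.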
https://github.com/cilium/cilium/blob/main//Documentation/installation/kind.rst
You can monitor as Cilium and all required components are being installed:

.. code-block:: shell-session

   $ kubectl -n kube-system get pods --watch
   NAME                              READY   STATUS              RESTARTS   AGE
   cilium-operator-cb4578bc5-q52qk   0/1     Pending             0          8s
   cilium-s8w5m                      0/1     PodInitializing     0          7s
   coredns-86c58d9df4-4g7dd          0/1     ContainerCreating   0          8m57s
   coredns-86c58d9df4-4l6b2          0/1     ContainerCreating   0          8m57s

It may take a couple of minutes for all components to come up:

.. code-block:: shell-session

   cilium-operator-cb4578bc5-q52qk   1/1     Running   0          4m13s
   cilium-s8w5m                      1/1     Running   0          4m12s
   coredns-86c58d9df4-4g7dd          1/1     Running   0          13m
   coredns-86c58d9df4-4l6b2          1/1     Running   0          13m
https://github.com/cilium/cilium/blob/main//Documentation/installation/kubectl-status.rst
Run the following command to validate that your cluster has proper network
connectivity:

.. code-block:: shell-session

   $ cilium connectivity test
   ℹ️  Monitor aggregation detected, will skip some flow validation steps
   ✨ [k8s-cluster] Creating namespace for connectivity check...
   (...)
   ---------------------------------------------------------------------------------------------------------------------
   📋 Test Report
   ---------------------------------------------------------------------------------------------------------------------
   ✅ 69/69 tests successful (0 warnings)

.. note::

   The connectivity test may fail to deploy due to too many open files in one
   or more of the pods. If you notice this error, you can increase the
   ``inotify`` resource limits on your host machine (see `Pod errors due to
   "too many open files" `_).

Congratulations! You have a fully functional Kubernetes cluster with Cilium. 🎉
https://github.com/cilium/cilium/blob/main//Documentation/installation/cli-connectivity-test.rst
To create a cluster with the configuration defined above, pass the
``kind-config.yaml`` you created with the ``--config`` flag of kind:

.. code-block:: shell-session

   kind create cluster --config=kind-config.yaml

After a couple of seconds or minutes, a 4-node cluster should be created.

A new ``kubectl`` context (``kind-kind``) should be added to ``KUBECONFIG``
or, if unset, to ``${HOME}/.kube/config``:

.. code-block:: shell-session

   kubectl cluster-info --context kind-kind

.. note::

   The cluster nodes will remain in the state ``NotReady`` until Cilium is
   deployed. This behavior is expected.
https://github.com/cilium/cilium/blob/main//Documentation/installation/kind-create-cluster.rst
To install Cilium on `Amazon Elastic Kubernetes Service (EKS) `_, perform the
following steps:

**Default Configuration:**

===================== =================== ==============
Datapath              IPAM                Datastore
===================== =================== ==============
Direct Routing (ENI)  AWS ENI             Kubernetes CRD
===================== =================== ==============

For more information on AWS ENI mode, see :ref:`ipam_eni`.

.. tip::

   To chain Cilium on top of the AWS CNI, see :ref:`chaining_aws_cni`.

You can also bring up Cilium in a single-region, multi-region, or multi-AZ
environment for EKS.

**Requirements:**

* The `EKS Managed Nodegroups `_ must be properly tainted to ensure that
  application pods are properly managed by Cilium:

  * ``managedNodeGroups`` should be tainted with
    ``node.cilium.io/agent-not-ready=true:NoExecute`` to ensure application
    pods will only be scheduled once Cilium is ready to manage them. However,
    there are other options. Please make sure to read and understand the
    documentation page on :ref:`taint effects and unmanaged pods`.

    Below is an example of how to use a `ClusterConfig `_ file to create the
    cluster:

    .. code-block:: yaml

       apiVersion: eksctl.io/v1alpha5
       kind: ClusterConfig
       # ...
       managedNodeGroups:
         - name: ng-1
           # ...
           # taint nodes so that application pods are
           # not scheduled/executed until Cilium is deployed.
           # Alternatively, see the note above regarding taint effects.
           taints:
             - key: "node.cilium.io/agent-not-ready"
               value: "true"
               effect: "NoExecute"

**Limitations:**

* The AWS ENI integration of Cilium is currently only enabled for IPv4. If
  you want to use IPv6, use a datapath/IPAM mode other than ENI.
https://github.com/cilium/cilium/blob/main//Documentation/installation/requirements-eks.rst
Configuring Rancher Desktop is done using a YAML configuration file. This
step is necessary in order to disable the default CNI and replace it with
Cilium.

Next you need to start Rancher Desktop with ``containerd`` and create a
:download:`override.yaml `:

.. literalinclude:: /installation/rancher-desktop-override.yaml
   :language: yaml

After the file is created, move it into your Rancher Desktop
``lima/_config`` directory:

.. tabs::

   .. group-tab:: Linux

      .. code-block:: shell-session

         cp override.yaml ~/.local/share/rancher-desktop/lima/_config/override.yaml

   .. group-tab:: macOS

      .. code-block:: shell-session

         cp override.yaml ~/Library/Application\ Support/rancher-desktop/lima/_config/override.yaml

Finally, open the Rancher Desktop UI, go to the Troubleshooting panel, and
click "Reset Kubernetes". After a few minutes Rancher Desktop will start back
up, prepared for installing Cilium.
https://github.com/cilium/cilium/blob/main//Documentation/installation/rancher-desktop-configure.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_admin\_install\_daemonset: .. \_k8s\_install\_etcd: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Installation with external etcd \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* This guide walks you through the steps required to set up Cilium on Kubernetes using an external etcd. Use of an external etcd provides better performance and is suitable for larger environments. Should you encounter any issues during the installation, please refer to the :ref:`troubleshooting\_k8s` section and/or seek help on `Cilium Slack`\_. When do I need to use a kvstore? ================================ Unlike the section :ref:`k8s\_quick\_install`, this guide explains how to configure Cilium to use an external kvstore such as etcd. If you are unsure whether you need to use a kvstore at all, the following is a list of reasons when to use a kvstore: \* If you are running in an environment where you observe a high overhead in state propagation caused by Kubernetes events. \* If you do not want Cilium to store state in Kubernetes custom resources (CRDs). \* If you run a cluster with more pods and more nodes than the ones tested in the :ref:`scalability\_guide`. .. \_ds\_deploy: .. include:: requirements-intro.rst You will also need an external etcd version 3.4.0 or higher. Kvstore and Cilium dependency ============================= When using an external kvstore, it's important to break the circular dependency between Cilium and kvstore. If kvstore pods are running within the same cluster and are using a pod network then kvstore relies on Cilium. However, Cilium also relies on the kvstore, which creates a circular dependency. There are two recommended ways of breaking this dependency: \* Deploy kvstore outside of cluster or on separately managed cluster. 
\* Deploy kvstore pods with a host network, by specifying ``hostNetwork: true`` in the pod spec. Configure Cilium =========================== When using an external kvstore, the address of the external kvstore needs to be configured in the ConfigMap. Download the base YAML and configure it with :term:`Helm`: .. include:: k8s-install-download-release.rst Deploy Cilium release via Helm: .. cilium-helm-install:: :namespace: kube-system :set: etcd.enabled=true "etcd.endpoints[0]=http://etcd-endpoint1:2379" "etcd.endpoints[1]=http://etcd-endpoint2:2379" "etcd.endpoints[2]=http://etcd-endpoint3:2379" If you do not want Cilium to store state in Kubernetes custom resources (CRDs), consider setting ``identityAllocationMode``:: --set identityAllocationMode=kvstore Optional: Configure the SSL certificates ---------------------------------------- Create a Kubernetes secret with the root certificate authority, and client-side key and certificate of etcd: .. code-block:: shell-session kubectl create secret generic -n kube-system cilium-etcd-secrets \ --from-file=etcd-client-ca.crt=ca.crt \ --from-file=etcd-client.key=client.key \ --from-file=etcd-client.crt=client.crt Adjust the helm template generation to enable SSL for etcd and use https instead of http for the etcd endpoint URLs: .. cilium-helm-install:: :namespace: kube-system :set: etcd.enabled=true etcd.ssl=true "etcd.endpoints[0]=https://etcd-endpoint1:2379" "etcd.endpoints[1]=https://etcd-endpoint2:2379" "etcd.endpoints[2]=https://etcd-endpoint3:2379" .. include:: k8s-install-validate.rst .. include:: next-steps.rst
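The ``--set`` flags above can equally be kept in a Helm values file, which is easier to track in version control. A minimal sketch, reusing the placeholder etcd endpoint names from the SSL example (the file name is illustrative):

```yaml
# values-etcd.yaml -- mirrors the SSL-enabled flags shown above
etcd:
  enabled: true
  ssl: true
  endpoints:
    - https://etcd-endpoint1:2379
    - https://etcd-endpoint2:2379
    - https://etcd-endpoint3:2379
```

Pass it to ``helm install`` with ``-f values-etcd.yaml`` instead of the individual ``--set`` arguments.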
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-external-etcd.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Installation using kubeadm \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* This guide describes deploying Cilium on a Kubernetes cluster created with ``kubeadm``. For installing ``kubeadm`` on your system, please refer to `the official kubeadm documentation `\_ The official documentation also describes additional options of kubeadm which are not mentioned here. If you are interested in using Cilium's kube-proxy replacement, please follow the :ref:`kubeproxy-free` guide and skip this one. Create the cluster ================== Initialize the control plane via executing on it: .. code-block:: shell-session kubeadm init .. note:: If you want to use Cilium's kube-proxy replacement, kubeadm needs to skip the kube-proxy deployment phase, so it has to be executed with the ``--skip-phases=addon/kube-proxy`` option: .. code-block:: shell-session kubeadm init --skip-phases=addon/kube-proxy For more information please refer to the :ref:`kubeproxy-free` guide. Afterwards, join worker nodes by specifying the control-plane node IP address and the token returned by ``kubeadm init``: .. code-block:: shell-session kubeadm join <..> Deploy Cilium ============= .. include:: k8s-install-download-release.rst Deploy Cilium release via Helm: .. cilium-helm-install:: :namespace: kube-system .. include:: k8s-install-validate.rst .. include:: next-steps.rst
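If you drive ``kubeadm`` with a configuration file rather than command-line flags, the kube-proxy phase can be skipped declaratively as well. A sketch, assuming the ``kubeadm.k8s.io/v1beta3`` API (which provides the ``skipPhases`` field):

```yaml
# kubeadm-init.yaml -- equivalent to: kubeadm init --skip-phases=addon/kube-proxy
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
skipPhases:
  - addon/kube-proxy
```

Then initialize the control plane with ``kubeadm init --config kubeadm-init.yaml``.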
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-kubeadm.rst
To install Cilium on `Google Kubernetes Engine (GKE) `\_, perform the following steps: \*\*Default Configuration:\*\* =============== =================== =============== Datapath IPAM Datastore =============== =================== =============== Direct Routing Kubernetes PodCIDR Kubernetes CRD =============== =================== =============== \*\*Requirements:\*\* \* The cluster should be created with the taint ``node.cilium.io/agent-not-ready=true:NoExecute`` using ``--node-taints`` option. However, there are other options. Please make sure to read and understand the documentation page on :ref:`taint effects and unmanaged pods`.
https://github.com/cilium/cilium/blob/main//Documentation/installation/requirements-gke.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_k8s\_install\_broadcom\_vmware\_esxi\_nsx: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Installation on Broadcom VMware ESXi / NSX \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Cilium can be installed on VMware ESXi with or without NSX by using official image. Deploying Cilium on Broadcom VMware vSphere ESXi with or without NSX(-T) ======================================================================== Cilium can be deployed on VMware vSphere ESXi, with or without NSX(-T). However, there are known issues when using tunnel mode with VXLAN as the encapsulation. .. tabs:: .. group-tab:: VXLAN Install Cilium via ``helm install`` with VXLAN Protocol .. cilium-helm-install:: :namespace: kube-system :set: image.pullPolicy=IfNotPresent ipam.mode=kubernetes tunnelProtocol=vxlan .. note:: With NSX(-T), use a custom port for the ``tunnelPort`` flag, for instance ``--set tunnelPort=8223``. :gh-issue:`21801` tracks some reports of problems with offloads when using the VXLAN UDP port standard (4789) or draft (8472). .. group-tab:: Geneve Install Cilium via ``helm install`` with Geneve Protocol .. cilium-helm-install:: :namespace: kube-system :set: image.pullPolicy=IfNotPresent ipam.mode=kubernetes tunnelProtocol=geneve .. note:: NSX(-T) with Network Virtualization (with Edge T0/T1) also uses Geneve Protocol between Transport Nodes (ESXi, Edge). Be aware when troubleshooting that the Geneve traffic you observe on the network may be generated by either NSX(-T) or Cilium. 
Troubleshooting
===============

Pod Communication Failure Across Hosts
--------------------------------------

When deploying Cilium on older ESXi releases (such as 7) or with NSX-T (3.x/4.x) using VXLAN encapsulation, inter-host pod communication may fail, with the exception of ICMP (ping), which still functions. In the :ref:`Cilium-health status ` you will see:

.. code-block:: shell-session

   ==== detail from pod cilium-mvrb6 , on node alg-cilium-cp
   Probe time:   2025-03-12T16:55:02Z
   Nodes:
     alg-cilium-cp (localhost):
       Host connectivity to 10.44.144.20:
         ICMP to stack:   OK, RTT=640.959µs
         HTTP to agent:   OK, RTT=148.15µs
       Endpoint connectivity to 10.42.0.38:
         ICMP to stack:   OK, RTT=632.181µs
         HTTP to agent:   OK, RTT=295.409µs
     alg-cilium-wk1:
       Host connectivity to 10.44.144.21:
         ICMP to stack:   OK, RTT=764.463µs
         HTTP to agent:   OK, RTT=1.154573ms
       Endpoint connectivity to 10.42.4.211:
         ICMP to stack:   OK, RTT=765.081µs
         HTTP to agent:   Get "http://10.42.4.211:4240/hello": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

The problem originates from a `bug in the VMXNET3 driver `__ related to NIC offload support for VXLAN encapsulation, triggered by the use of the outdated draft port (8472) for VXLAN. In this case, change the VXLAN port with ``--set tunnelPort=8223``, or switch to the Geneve tunnel protocol with ``--set tunnelProtocol=geneve``. A workaround exists to `Disable NIC Offload `__, but it is not a recommended solution.
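If the installation is managed through a values file rather than ``--set`` flags, the two remedies above can be sketched as follows (the file name is illustrative; pick one remedy):

```yaml
# values-vsphere.yaml -- workarounds for the VMXNET3 VXLAN offload bug
# Remedy 1: keep VXLAN, but move off the affected draft port 8472
tunnelProtocol: vxlan
tunnelPort: 8223

# Remedy 2 (alternative): switch the encapsulation to Geneve instead
# tunnelProtocol: geneve
```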
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-broadcom-vmware-esxi-nsx.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_aks\_install: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Installation using Azure CNI Powered by Cilium in AKS \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* This guide walks you through the installation of Cilium on AKS (Azure Kubernetes Service) via the `Azure Container Network Interface (CNI) Powered by Cilium `\_\_ option. Create the cluster ================== Create an Azure CNI Powered by Cilium AKS cluster with ``network-plugin azure`` and ``--network-dataplane cilium``. You can create the cluster either in ``podsubnet`` or ``overlay`` mode. In both modes, traffic is routed through the Azure Virtual Network Stack. The choice between these modes depends on the specific use case and requirements of the cluster. Refer to `the related documentation `\_\_ to know more about these two modes. .. tabs:: .. group-tab:: Overlay .. code-block:: shell-session az aks create -n -g -l \ --network-plugin azure \ --network-dataplane cilium \ --network-plugin-mode overlay \ --pod-cidr 192.168.0.0/16 See also `the detailed instructions from scratch `\_\_. .. group-tab:: Podsubnet .. code-block:: shell-session az aks create -n -g -l \ --network-plugin azure \ --network-dataplane cilium \ --vnet-subnet-id /subscriptions//resourceGroups//providers/Microsoft.Network/virtualNetworks//subnets/nodesubnet \ --pod-subnet-id /subscriptions//resourceGroups//providers/Microsoft.Network/virtualNetworks//subnets/podsubnet See also `the detailed instructions from scratch `\_. .. include:: k8s-install-validate.rst Delegated Azure IPAM ==================== Delegated Azure IPAM (IP Address Manager) manages the IP allocation for pods created in Azure CNI Powered by Cilium clusters. 
It assigns IPs that are routable in the Azure Virtual Network stack. To learn more about Delegated Azure IPAM, see :ref:`azure_delegated_ipam`.
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-aks.rst
To install Cilium on `k3s `\_, perform the following steps: \*\*Default Configuration:\*\* =============== =============== ============== Datapath IPAM Datastore =============== =============== ============== Encapsulation Cluster Pool Kubernetes CRD =============== =============== ============== \*\*Requirements:\*\* \* Install your k3s cluster as you normally would but making sure to disable support for the default CNI plugin and the built-in network policy enforcer so you can install Cilium on top: .. code-block:: shell-session curl -sfL https://get.k3s.io | INSTALL\_K3S\_EXEC='--flannel-backend=none --disable-network-policy' sh - \* For the Cilium CLI to access the cluster in successive steps you will need to use the ``kubeconfig`` file stored at ``/etc/rancher/k3s/k3s.yaml`` by setting the ``KUBECONFIG`` environment variable: .. code-block:: shell-session export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
https://github.com/cilium/cilium/blob/main//Documentation/installation/requirements-k3s.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io \*\*\*\*\*\*\*\*\* Weave Net \*\*\*\*\*\*\*\*\* This guide instructs how to install Cilium in chaining configuration on top of `Weave Net `\_. .. include:: cni-chaining-limitations.rst Create a CNI configuration ========================== Create a ``chaining.yaml`` file based on the following template to specify the desired CNI chaining configuration: .. code-block:: yaml apiVersion: v1 kind: ConfigMap metadata: name: cni-configuration namespace: kube-system data: cni-config: |- { "cniVersion": "0.3.1", "name": "weave", "plugins": [ { "name": "weave", "type": "weave-net", "hairpinMode": true }, { "type": "portmap", "capabilities": {"portMappings": true}, "snat": true }, { "type": "cilium-cni" } ] } Deploy the :term:`ConfigMap`: .. code-block:: shell-session kubectl apply -f chaining.yaml Deploy Cilium with the portmap plugin enabled ============================================= .. include:: k8s-install-download-release.rst Deploy Cilium release via Helm: .. cilium-helm-install:: :namespace: kube-system :set: cni.chainingMode=generic-veth cni.customConf=true cni.configMap=cni-configuration routingMode=native enableIPv4Masquerade=false .. note:: The new CNI chaining configuration will \*not\* apply to any pod that is already running the cluster. Existing pods will be reachable and Cilium will load-balance to them but policy enforcement will not apply to them and load-balancing is not performed for traffic originating from existing pods. You must restart these pods in order to invoke the chaining configuration on them. .. include:: k8s-install-validate.rst .. include:: next-steps.rst
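The chaining-related ``--set`` flags used above can also be collected into a values file; a sketch (the file name is illustrative):

```yaml
# values-weave-chaining.yaml -- mirrors the flags in the Helm install above
cni:
  chainingMode: generic-veth
  customConf: true
  configMap: cni-configuration
routingMode: native
enableIPv4Masquerade: false
```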
https://github.com/cilium/cilium/blob/main//Documentation/installation/cni-chaining-weave.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_chaining\_azure: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Azure CNI (Legacy) \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* .. note:: For most users, the best way to run Cilium on AKS is either AKS BYO CNI as described in :ref:`k8s\_install\_quick` or `Azure CNI Powered by Cilium `\_\_. This guide provides alternative instructions to run Cilium with Azure CNI in a chaining configuration. This is the legacy way of running Azure CNI with cilium as Azure IPAM is legacy, for more information see :ref:`ipam\_azure`. .. include:: cni-chaining-limitations.rst .. admonition:: Video :class: attention If you'd like a video explanation of the Azure CNI Powered by Cilium, check out `eCHO episode 70: Azure CNI Powered by Cilium `\_\_. This guide explains how to set up Cilium in combination with Azure CNI in a chaining configuration. In this hybrid mode, the Azure CNI plugin is responsible for setting up the virtual network devices as well as address allocation (IPAM). After the initial networking is setup, the Cilium CNI plugin is called to attach eBPF programs to the network devices set up by Azure CNI to enforce network policies, perform load-balancing, and encryption. Create an AKS + Cilium CNI configuration ======================================== Create a ``chaining.yaml`` file based on the following template to specify the desired CNI chaining configuration. This :term:`ConfigMap` will be installed as the CNI configuration file on all nodes and defines the chaining configuration. In the example below, the Azure CNI, portmap, and Cilium are chained together. .. 
code-block:: yaml apiVersion: v1 kind: ConfigMap metadata: name: cni-configuration namespace: kube-system data: cni-config: |- { "cniVersion": "0.3.0", "name": "azure", "plugins": [ { "type": "azure-vnet", "mode": "transparent", "ipam": { "type": "azure-vnet-ipam" } }, { "type": "portmap", "capabilities": {"portMappings": true}, "snat": true }, { "name": "cilium", "type": "cilium-cni" } ] } Deploy the :term:`ConfigMap`: .. code-block:: shell-session kubectl apply -f chaining.yaml Deploy Cilium ============= .. include:: k8s-install-download-release.rst Deploy Cilium release via Helm: .. cilium-helm-install:: :namespace: kube-system :set: cni.chainingMode=generic-veth cni.customConf=true cni.exclusive=false nodeinit.enabled=true cni.configMap=cni-configuration routingMode=native enableIPv4Masquerade=false endpointRoutes.enabled=true This will create both the main cilium daemonset, as well as the cilium-node-init daemonset, which handles tasks like mounting the eBPF filesystem and updating the existing Azure CNI plugin to run in 'transparent' mode. .. include:: k8s-install-restart-pods.rst .. include:: k8s-install-validate.rst .. include:: next-steps.rst
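As with the other chaining guides, the ``--set`` flags can be kept in a values file instead; a sketch mirroring the install command above (the file name is illustrative):

```yaml
# values-azure-chaining.yaml -- mirrors the flags in the Helm install above
cni:
  chainingMode: generic-veth
  customConf: true
  exclusive: false
  configMap: cni-configuration
nodeinit:
  enabled: true
routingMode: native
enableIPv4Masquerade: false
endpointRoutes:
  enabled: true
```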
https://github.com/cilium/cilium/blob/main//Documentation/installation/cni-chaining-azure-cni.rst
**Configuration:**

============= ============ ==============
Datapath      IPAM         Datastore
============= ============ ==============
Encapsulation Cluster Pool Kubernetes CRD
============= ============ ==============

**Requirements:**

.. note::

   On AKS, Cilium can be installed either manually by administrators via Bring your own CNI, or automatically by AKS via Azure CNI Powered by Cilium.

   Bring your own CNI offers more flexibility and customization, as administrators have full control over the installation, but it does not integrate natively with the Azure network stack and administrators need to handle Cilium upgrades. Azure CNI Powered by Cilium integrates natively with the Azure network stack and upgrades are handled by AKS, but it does not offer as much flexibility and customization, as it is controlled by AKS.

   The following instructions assume Bring your own CNI. For Azure CNI Powered by Cilium, see the external installer guide :ref:`aks_install` for dedicated instructions.

* The AKS cluster must be created with ``--network-plugin none``. See the `Bring your own CNI `_ documentation for more details about BYOCNI prerequisites / implications.

* Make sure that you set a cluster pool IPAM pod CIDR that does not overlap with the default service CIDR of AKS. For example, you can use ``--helm-set ipam.operator.clusterPoolIPv4PodCIDRList=192.168.0.0/16``.
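The pod CIDR example above, written as a values-file fragment (the list form is how the chart expresses ``clusterPoolIPv4PodCIDRList``; verify against your chart version):

```yaml
# values-aks-byocni.yaml -- cluster pool pod CIDR chosen to avoid the
# default AKS service CIDR
ipam:
  operator:
    clusterPoolIPv4PodCIDRList:
      - 192.168.0.0/16
```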
https://github.com/cilium/cilium/blob/main//Documentation/installation/requirements-aks.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_k8s\_install\_helm: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Installation using Helm \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* This guide will show you how to install Cilium using `Helm `\_. This involves a couple of additional steps compared to the :ref:`k8s\_quick\_install` and requires you to manually select the best datapath and IPAM mode for your particular environment. Helm Installation Methods ========================== Cilium can be installed using Helm in two ways: 1. \*\*OCI Registry (Recommended)\*\* — Install directly from OCI registries without adding a Helm repository 2. \*\*Traditional Repository\*\* — Use the classic ``https://helm.cilium.io/`` repository Using OCI Registries (Recommended) ----------------------------------- Cilium Helm charts are available directly from OCI container registries, eliminating the need for a separate Helm repository. .. tip:: No ``helm repo add`` required! Just reference the chart directly with ``oci://quay.io/cilium/charts/cilium``. \*\*Why OCI Registries?\*\* Storing Helm charts in OCI registries alongside container images offers several advantages: \* \*\*Signed charts\*\* — All charts are signed with cosign for verification \* \*\*Simpler setup\*\* — No repository configuration needed \* \*\*Digest pinning\*\* — Reference exact chart versions by SHA for reproducibility \* \*\*Unified tooling\*\* — Use the same registry infrastructure for images and charts \*\*Quick Start with OCI:\*\* .. only:: stable .. parsed-literal:: helm install cilium oci://quay.io/cilium/charts/cilium \ --version |CHART\_VERSION| \ --namespace kube-system .. only:: not stable .. 
code-block:: shell-session helm install cilium oci://quay.io/cilium/charts/cilium \ --version \ --namespace kube-system Replace ```` with the desired version (e.g., ``1.15.0``). \*\*Finding Available Versions:\*\* OCI registries don't support ``helm search``. Here's how to find available versions: .. important:: \*\*Version format matters\*\*: Helm chart versions follow SemVer 2.0 \*without\* the "v" prefix (e.g., ``1.15.0``). Container image tags \*include\* the "v" (e.g., ``v1.15.0``). Use versions without the "v" for Helm commands. \* \*\*Browse the registry:\*\* `Quay.io tags `\_ \* \*\*Query via CLI:\*\* .. code-block:: shell-session # Using crane crane ls quay.io/cilium/charts/cilium \* \*\*Check releases:\*\* https://github.com/cilium/cilium/releases \*\*Verifying Chart Signatures:\*\* All charts are signed with cosign. Verify before installing: .. code-block:: shell-session cosign verify \ --certificate-identity-regexp='https://github.com/cilium/cilium/.\*' \ --certificate-oidc-issuer=https://token.actions.githubusercontent.com \ quay.io/cilium/charts/cilium: See https://docs.sigstore.dev/cosign/installation/ for cosign installation. \*\*Pinning by Digest:\*\* For reproducible deployments, pin charts by digest instead of tag: .. code-block:: shell-session # Get the digest helm pull oci://quay.io/cilium/charts/cilium --version # Install with digest helm install cilium oci://quay.io/cilium/charts/cilium@sha256: \ --namespace kube-system This guarantees the exact same chart every time. Using Traditional Helm Repository ---------------------------------- You can also install Cilium using the traditional Helm repository method. Both installation methods are fully supported. Install Cilium ============== .. include:: k8s-install-download-release.rst .. tabs:: .. group-tab:: Generic These are the generic instructions on how to install Cilium into any Kubernetes cluster using the default configuration options below. 
Please see the other tabs for distribution/platform specific instructions which also list the ideal default configuration for particular platforms. \*\*Default Configuration:\*\* =============== =============== ============== Datapath IPAM Datastore =============== =============== ============== Encapsulation Cluster Pool Kubernetes CRD =============== =============== ============== .. include:: requirements-generic.rst \*\*Install Cilium:\*\* Deploy Cilium release via Helm: .. cilium-helm-install:: :namespace: kube-system .. group-tab:: GKE .. include:: requirements-gke.rst \*\*Install Cilium:\*\* Extract the Cluster CIDR to enable native-routing: .. code-block:: shell-session NATIVE\_CIDR="$(gcloud container clusters describe "${NAME}" --zone "${ZONE}" --format 'value(clusterIpv4Cidr)')" echo $NATIVE\_CIDR Deploy Cilium release via Helm: .. cilium-helm-install:: :namespace: kube-system :set: nodeinit.enabled=true nodeinit.reconfigureKubelet=true nodeinit.removeCbrBridge=true cni.binPath=/home/kubernetes/bin gke.enabled=true ipam.mode=kubernetes ipv4NativeRoutingCIDR=$NATIVE\_CIDR The NodeInit DaemonSet is required to prepare the GKE nodes as nodes are added to the cluster. The NodeInit DaemonSet will perform the following actions: \* Reconfigure kubelet to run in CNI mode \* Mount the eBPF filesystem .. group-tab:: AKS .. include:: ../installation/requirements-aks.rst \*\*Install Cilium:\*\* Deploy Cilium release via Helm: .. cilium-helm-install:: :namespace: kube-system :set: aksbyocni.enabled=true ..
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-helm.rst
GKE nodes as nodes are added to the cluster. The NodeInit DaemonSet will perform the following actions: \* Reconfigure kubelet to run in CNI mode \* Mount the eBPF filesystem .. group-tab:: AKS .. include:: ../installation/requirements-aks.rst \*\*Install Cilium:\*\* Deploy Cilium release via Helm: .. cilium-helm-install:: :namespace: kube-system :set: aksbyocni.enabled=true .. note:: Installing Cilium via helm is supported only for AKS BYOCNI cluster and not for Azure CNI Powered by Cilium clusters. .. group-tab:: EKS .. include:: requirements-eks.rst \*\*Patch VPC CNI (aws-node DaemonSet)\*\* Cilium will manage ENIs instead of VPC CNI, so the ``aws-node`` DaemonSet has to be patched to prevent conflict behavior. .. code-block:: shell-session kubectl -n kube-system patch daemonset aws-node --type='strategic' -p='{"spec":{"template":{"spec":{"nodeSelector":{"io.cilium/aws-node-enabled":"true"}}}}}' \*\*Install Cilium:\*\* Deploy Cilium release via Helm: .. cilium-helm-install:: :namespace: kube-system :set: eni.enabled=true .. note:: This helm command sets ``eni.enabled=true``, meaning that Cilium will allocate a fully-routable AWS ENI IP address for each pod, similar to the behavior of the `Amazon VPC CNI plugin `\_. This mode depends on a set of :ref:`ec2privileges` from the EC2 API. Cilium can alternatively run in EKS using an overlay mode that gives pods non-VPC-routable IPs. This allows running more pods per Kubernetes worker node than the ENI limit but includes the following caveats: 1. Pod connectivity to resources outside the cluster (e.g., VMs in the VPC or AWS managed services) is masqueraded (i.e., SNAT) by Cilium to use the VPC IP address of the Kubernetes worker node. 2. The EKS API Server is unable to route packets to the overlay network. This implies that any `webhook `\_ which needs to be accessed must be host networked or exposed through a service or ingress. To set up Cilium overlay mode, follow the steps below: 1. 
Excluding the line ``eni.enabled=true`` from the helm command will configure Cilium to use overlay routing mode (which is the helm default). 2. Flush iptables rules added by VPC CNI .. code-block:: shell-session iptables -t nat -F AWS-SNAT-CHAIN-0 \ && iptables -t nat -F AWS-SNAT-CHAIN-1 \ && iptables -t nat -F AWS-CONNMARK-CHAIN-0 \ && iptables -t nat -F AWS-CONNMARK-CHAIN-1 .. group-tab:: OpenShift .. include:: requirements-openshift.rst \*\*Install Cilium:\*\* Cilium is a `Certified OpenShift CNI Plugin `\_ and is best installed when an OpenShift cluster is created using the OpenShift installer. Please refer to :ref:`k8s\_install\_openshift\_okd` for more information. .. group-tab:: RKE .. include:: requirements-rke.rst .. group-tab:: k3s .. include:: requirements-k3s.rst \*\*Install Cilium:\*\* .. cilium-helm-install:: :namespace: $CILIUM\_NAMESPACE :set: operator.replicas=1 .. group-tab:: Rancher Desktop \*\*Configure Rancher Desktop:\*\* To install Cilium on `Rancher Desktop `\_, perform the following steps: .. include:: rancher-desktop-configure.rst \*\*Install Cilium:\*\* .. cilium-helm-install:: :namespace: $CILIUM\_NAMESPACE :set: operator.replicas=1 cni.binPath=/usr/libexec/cni .. group-tab:: Talos Linux To install Cilium on `Talos Linux `\_, perform the following steps. .. include:: k8s-install-talos-linux.rst .. group-tab:: Alibaba ACK .. include:: ../installation/alibabacloud-eni.rst .. admonition:: Video :class: attention If you'd like to learn more about Cilium Helm values, check out `eCHO episode 117: A Tour of the Cilium Helm Values `\_\_. Upgrading ========= Using OCI Registry ------------------ .. only:: stable .. parsed-literal:: helm upgrade cilium oci://quay.io/cilium/charts/cilium \ --version |CHART\_VERSION| \ --namespace kube-system .. only:: not stable .. 
code-block:: shell-session helm upgrade cilium oci://quay.io/cilium/charts/cilium \ --version \ --namespace kube-system Migrating from Traditional Repository to OCI --------------------------------------------- If you're using the traditional repository (``https://helm.cilium.io/``), switching to OCI is straightforward as the charts are identical: .. only:: stable .. parsed-literal:: helm upgrade cilium oci://quay.io/cilium/charts/cilium \ --version |CHART\_VERSION| \ --namespace kube-system \ --reuse-values .. only:: not stable .. code-block:: shell-session helm upgrade cilium oci://quay.io/cilium/charts/cilium \ --version \ --namespace kube-system \ --reuse-values The ``--reuse-values`` flag preserves your existing configuration. OCI vs Traditional Repository ============================== +---------------------+---------------------------+---------------------------+ | Feature | OCI Registry | Traditional Repository |
cilium oci://quay.io/cilium/charts/cilium \ --version |CHART\_VERSION| \ --namespace kube-system \ --reuse-values .. only:: not stable .. code-block:: shell-session helm upgrade cilium oci://quay.io/cilium/charts/cilium \ --version \ --namespace kube-system \ --reuse-values The ``--reuse-values`` flag preserves your existing configuration. OCI vs Traditional Repository ============================== +---------------------+---------------------------+---------------------------+ | Feature | OCI Registry | Traditional Repository | +=====================+===========================+===========================+ | Setup | None | ``helm repo add`` | +---------------------+---------------------------+---------------------------+ | Chart signing | Yes (cosign) | No | +---------------------+---------------------------+---------------------------+ | Digest pinning | Yes | Limited | +---------------------+---------------------------+---------------------------+ | Air-gapped install | Standard OCI mirror tools | Separate chart mirror | +---------------------+---------------------------+---------------------------+ Both methods remain fully supported. Troubleshooting =============== "failed to authorize: failed to fetch anonymous token" ------------------------------------------------------ This usually means network or registry connectivity issues. Test access: .. code-block:: shell-session curl https://quay.io/v2/ "chart not found" ----------------- Double-check your version number. Remember: no "v" prefix for Helm versions. .. include:: k8s-install-restart-pods.rst .. include:: k8s-install-validate.rst .. include:: next-steps.rst
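When scripting installs, the "no ``v`` prefix" rule above can be applied mechanically. A sketch deriving a chart version from an image tag with shell parameter expansion (example values only):

```shell
IMAGE_TAG="v1.15.0"              # container image tags include the "v"
CHART_VERSION="${IMAGE_TAG#v}"   # Helm chart versions do not
echo "$CHART_VERSION"            # prints 1.15.0
```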
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-helm.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

.. _kops_guide:
.. _k8s_install_kops:

***********************
Installation using Kops
***********************

As of the kops 1.9 release, Cilium can be plugged into kops-deployed clusters as the CNI plugin. This guide provides steps to create a Kubernetes cluster on AWS using kops and Cilium as the CNI plugin. Note that by default the kops deployment automates several AWS deployment features, including AutoScaling, Volumes, and VPCs.

Kops offers several out-of-the-box configurations of Cilium, including :ref:`kubeproxy-free`, :ref:`ipam_eni`, and a dedicated etcd cluster for Cilium. This guide only goes through a basic setup.

Prerequisites
=============

* `aws cli `_
* `kubectl `_
* An AWS account with the following permissions:

  * AmazonEC2FullAccess
  * AmazonRoute53FullAccess
  * AmazonS3FullAccess
  * IAMFullAccess
  * AmazonVPCFullAccess

Installing kops
===============

.. tabs::

   .. group-tab:: Linux

      .. code-block:: shell-session

         curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
         chmod +x kops-linux-amd64
         sudo mv kops-linux-amd64 /usr/local/bin/kops

   .. group-tab:: MacOS

      .. code-block:: shell-session

         brew update && brew install kops

Setting up IAM Group and User
=============================

Assuming you have all the prerequisites, run the following commands to create the kops user and group:

.. code-block:: shell-session

   $ # Create IAM group named kops and grant access
   $ aws iam create-group --group-name kops
   $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
   $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
   $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
   $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
   $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
   $ aws iam create-user --user-name kops
   $ aws iam add-user-to-group --user-name kops --group-name kops
   $ aws iam create-access-key --user-name kops

kops requires a dedicated S3 bucket in order to store the state and representation of the cluster. Choose a unique bucket name (for example, a reversed FQDN with a short description of the cluster) and make sure to use the region where you will be deploying the cluster.

.. code-block:: shell-session

   $ aws s3api create-bucket --bucket prefix-example-com-state-store --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2
   $ export KOPS_STATE_STORE=s3://prefix-example-com-state-store

The above steps are sufficient for getting a working cluster installed. Please consult the `kops aws documentation `_ for more detailed setup instructions.

Cilium Prerequisites
====================

* Ensure the :ref:`admin_system_reqs` are met, particularly the Linux kernel and key-value store versions. The default AMI satisfies the minimum kernel version required by Cilium, which is what we will use in this guide.

Creating a Cluster
==================

* Note that you will need to specify ``--master-zones`` and ``--zones`` for creating the master and worker nodes. The number of master zones should be odd (1, 3, ...)
  for HA. For simplicity, you can just use one region.

* To keep things simple when following this guide, we will use a gossip-based cluster. This means you do not have to create a hosted zone upfront. The cluster ``NAME`` variable must end with ``k8s.local`` to use the gossip protocol. If creating multiple clusters using the same kops user, make each cluster name unique by adding a prefix such as ``com-company-emailid-``.

.. code-block:: shell-session

   $ export NAME=com-company-emailid-cilium.k8s.local
   $ kops create cluster --state=${KOPS_STATE_STORE} --node-count 3 --topology private --master-zones us-west-2a,us-west-2b,us-west-2c --zones us-west-2a,us-west-2b,us-west-2c --networking cilium --cloud-labels "Team=Dev,Owner=Admin" ${NAME} --yes

You may be prompted to create an SSH public-private key pair.

.. code-block:: shell-session

   $ ssh-keygen

(Please see :ref:`appendix_kops`.)

.. include:: k8s-install-validate.rst

.. _appendix_kops:

Deleting a Cluster
==================

To undo the dependencies and other deployment features in AWS from the kops cluster creation, use kops to destroy a cluster *immediately*
with the parameter ``--yes``:

.. code-block:: shell-session

   $ kops delete cluster ${NAME} --yes

Further reading on using Cilium with Kops
=========================================

* See the `kops networking documentation `_ for more information on the configuration options kops offers.
* See the `kops cluster spec documentation `_ for a comprehensive list of all the options.

Appendix: Details of kops flags used in cluster creation
========================================================

The following section explains all the flags used in the create cluster command.

* ``--state=${KOPS_STATE_STORE}``: kops uses an S3 bucket to store the state and representation of your cluster.
* ``--node-count 3``: Number of worker nodes in the Kubernetes cluster.
* ``--topology private``: The cluster will be created with a private topology, meaning all masters/nodes will be launched in a private subnet in the VPC.
* ``--master-zones eu-west-1a,eu-west-1b,eu-west-1c``: The 3 zones ensure HA of the master nodes, each belonging to a different Availability Zone.
* ``--zones eu-west-1a,eu-west-1b,eu-west-1c``: Zones where the worker nodes will be deployed.
* ``--networking cilium``: The networking CNI plugin to be used - cilium. You can also use ``cilium-etcd``, which will use a dedicated etcd cluster as the key/value store instead of CRDs.
* ``--cloud-labels "Team=Dev,Owner=Admin"``: Labels for your cluster that will be applied to your instances.
* ``${NAME}``: Name of the cluster. Make sure the name ends with ``k8s.local`` for a gossip-based cluster.
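The ``k8s.local`` naming requirement above lends itself to a quick pre-flight check before running ``kops create cluster``; a minimal shell sketch using the example name from this guide:

```shell
# Gossip-based kops clusters require a name ending in .k8s.local.
NAME=com-company-emailid-cilium.k8s.local

case "$NAME" in
  *.k8s.local) echo "OK: $NAME can use gossip DNS" ;;
  *)           echo "ERROR: $NAME must end in .k8s.local for a gossip-based cluster" >&2
               exit 1 ;;
esac
```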
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-kops.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

.. _generic_veth_cni_chaining:

*********************
Generic Veth Chaining
*********************

The generic veth chaining plugin enables CNI chaining on top of any CNI plugin that uses a veth device model. The majority of CNI plugins use such a model.

.. include:: cni-chaining-limitations.rst

Validate that the current CNI plugin is using veth
==================================================

1. Log into one of the worker nodes using SSH.
2. Run ``ip -d link`` to list all network devices on the node. You should be able to spot network devices representing the pods running on that node.
3. A network device might look something like this:

   .. code-block:: shell-session

      103: lxcb3901b7f9c02@if102: mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
          link/ether 3a:39:92:17:75:6f brd ff:ff:ff:ff:ff:ff link-netnsid 18 promiscuity 0
          veth
          addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535

4. The ``veth`` keyword on line 3 indicates that the network device type is virtual ethernet.

If the CNI plugin you are chaining with is currently not using veth, then the ``generic-veth`` plugin is not suitable. In that case, a full CNI chaining plugin is required which understands the device model of the underlying plugin. Writing such a plugin is trivial; contact us on `Cilium Slack`_ for more details.

Create a CNI configuration to define your chaining configuration
================================================================

Create a ``chaining.yaml`` file based on the following template to specify the desired CNI chaining configuration:

.. code-block:: yaml

   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: cni-configuration
     namespace: kube-system
   data:
     cni-config: |-
       {
         "name": "generic-veth",
         "cniVersion": "0.3.1",
         "plugins": [
           {
             "type": "XXX",
             [...]
           },
           {
             "type": "cilium-cni",
             "chaining-mode": "generic-veth"
           }
         ]
       }

Deploy the :term:`ConfigMap`:

.. code-block:: shell-session

   kubectl apply -f chaining.yaml

Deploy Cilium with the portmap plugin enabled
=============================================

.. include:: k8s-install-download-release.rst

Deploy Cilium release via Helm:

.. cilium-helm-install::
   :namespace: kube-system
   :set: cni.chainingMode=generic-veth cni.customConf=true cni.configMap=cni-configuration routingMode=native enableIPv4Masquerade=false
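As a concrete illustration of the template above, here is a hypothetical ``cni-config`` with flannel standing in for the ``XXX`` placeholder. The flannel ``delegate`` settings are assumptions for the example, not a recommendation from this guide:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cni-configuration
  namespace: kube-system
data:
  cni-config: |-
    {
      "name": "generic-veth",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "cilium-cni",
          "chaining-mode": "generic-veth"
        }
      ]
    }
```

The underlying plugin (here flannel) runs first and creates the veth pair; ``cilium-cni`` then attaches its eBPF programs to that device.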
https://github.com/cilium/cilium/blob/main//Documentation/installation/cni-chaining-generic-veth.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

.. _rke_install:

********************************************
Installation using Rancher Kubernetes Engine
********************************************

This guide walks you through installation of Cilium on **standalone** `Rancher Kubernetes Engine (RKE) `__ clusters, SUSE's CNCF-certified Kubernetes distribution with built-in security and compliance capabilities. RKE removes most host dependencies and presents a stable path for deployment, upgrades, and rollbacks, which addresses much of the installation complexity commonly associated with Kubernetes.

If you're using the Rancher Management Console/UI to install your RKE clusters, head over to the :ref:`Installation using Rancher ` guide.

.. _rke1_cni_none:

Install a Cluster Using RKE1
============================

The first step is to install a cluster based on the `RKE1 Kubernetes installation guide `__. When creating the cluster, make sure to `change the default network plugin `__ in the generated ``config.yaml`` file.

Change:

.. code-block:: yaml

   network:
     options:
       flannel_backend_type: "vxlan"
     plugin: "canal"

To:

.. code-block:: yaml

   network:
     plugin: none

Install a Cluster Using RKE2
============================

The first step is to install a cluster based on the `RKE2 Kubernetes installation guide `__. You can either use the `RKE2-integrated Cilium version `__, or configure the RKE2 cluster with ``cni: none`` (see `doc `__) and install Cilium with Helm. Both methods work; the directly integrated one is recommended for most users.
Cilium power-users might want to use the ``cni: none`` method, as Rancher uses a custom ``rke2-cilium`` `Helm chart `__ with independent release cycles for its integrated Cilium version. By instead using the out-of-band Cilium installation (based on the official `Cilium Helm chart `__), power-users gain more flexibility from a Cilium perspective.

Deploy Cilium
=============

.. tabs::

   .. group-tab:: Helm v3

      Install Cilium via ``helm install``:

      .. cilium-helm-install::
         :namespace: $CILIUM_NAMESPACE

   .. group-tab:: Cilium CLI

      .. include:: cli-download.rst

      Install Cilium by running:

      .. parsed-literal::

         cilium install |CHART_VERSION|

.. include:: k8s-install-validate.rst
.. include:: next-steps.rst
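For the ``cni: none`` route, the relevant RKE2 server settings form a small config file. A sketch, assuming the standard RKE2 config location; the commented ``disable-kube-proxy`` line is only relevant when Cilium's kube-proxy replacement is enabled:

```yaml
# /etc/rancher/rke2/config.yaml (sketch; path and keys per RKE2 server config)
cni: none

# Optional, only when installing Cilium with kubeProxyReplacement=true:
# disable-kube-proxy: true
```

With this in place, the node starts without a CNI and the cluster remains in a pending networking state until Cilium is installed via Helm as shown above.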
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-rke.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

.. only:: stable

   Set up the Helm repository:

   .. tabs::

      .. group-tab:: Helm Repository

         .. code-block:: shell-session

            helm repo add cilium https://helm.cilium.io/

      .. group-tab:: OCI Registry

         Cilium charts are also available via OCI registries (Quay.io and Docker Hub). No setup is required - you can install directly using ``oci://`` URLs. See the :ref:`OCI Registry section ` for more information, including chart signing verification and digest-based installations.

.. only:: not stable

   Download the Cilium release tarball and change to the kubernetes install directory:

   .. parsed-literal::

      curl -LO |SCM_ARCHIVE_LINK|
      tar xzf |SCM_ARCHIVE_FILENAME|
      cd |SCM_ARCHIVE_NAME|/install/kubernetes
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-download-release.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

.. _k8s_install_kubespray:

****************************
Installation using Kubespray
****************************

This guide uses Kubespray to create an AWS Kubernetes cluster running Cilium as the CNI. The guide uses:

- Kubespray v2.6.0
- The latest `Cilium released version`_ (instructions for pinning the version are given below)

Please consult the `Kubespray Prerequisites `__ and Cilium :ref:`admin_system_reqs`.

.. _Cilium released version: `latest released Cilium version`_

Installing Kubespray
====================

.. code-block:: shell-session

   $ git clone --branch v2.6.0 https://github.com/kubernetes-sigs/kubespray

Install dependencies from ``requirements.txt``:

.. code-block:: shell-session

   $ cd kubespray
   $ sudo pip install -r requirements.txt

Infrastructure Provisioning
===========================

We will use Terraform for provisioning AWS infrastructure.

Configure AWS credentials
-------------------------

Export the variables for your AWS credentials:

.. code-block:: shell-session

   export AWS_ACCESS_KEY_ID="www"
   export AWS_SECRET_ACCESS_KEY="xxx"
   export AWS_SSH_KEY_NAME="yyy"
   export AWS_DEFAULT_REGION="zzz"

Configure Terraform Variables
-----------------------------

We will start by specifying the infrastructure needed for the Kubernetes cluster.

.. code-block:: shell-session

   $ cd contrib/terraform/aws
   $ cp contrib/terraform/aws/terraform.tfvars.example terraform.tfvars

Open the file and change any defaults, particularly the number of master, etcd, and worker nodes. You can change the master and etcd count to 1 for deployments that don't need high availability.
By default, this tutorial will create:

- A VPC with 2 public and private subnets
- Bastion hosts and NAT gateways in the public subnet
- Three of each (masters, etcd, and worker nodes) in the private subnet
- An AWS ELB in the public subnet for accessing the Kubernetes API from the internet
- Terraform scripts using ``CoreOS`` as the base image

Example ``terraform.tfvars`` file:

.. code-block:: bash

   #Global Vars
   aws_cluster_name = "kubespray"

   #VPC Vars
   aws_vpc_cidr_block = "XXX.XXX.192.0/18"
   aws_cidr_subnets_private = ["XXX.XXX.192.0/20","XXX.XXX.208.0/20"]
   aws_cidr_subnets_public = ["XXX.XXX.224.0/20","XXX.XXX.240.0/20"]

   #Bastion Host
   aws_bastion_size = "t2.medium"

   #Kubernetes Cluster
   aws_kube_master_num = 3
   aws_kube_master_size = "t2.medium"
   aws_etcd_num = 3
   aws_etcd_size = "t2.medium"
   aws_kube_worker_num = 3
   aws_kube_worker_size = "t2.medium"

   #Settings AWS ELB
   aws_elb_api_port = 6443
   k8s_secure_api_port = 6443
   kube_insecure_apiserver_address = "0.0.0.0"

Apply the configuration
-----------------------

Run ``terraform init`` to initialize the following modules:

- ``module.aws-vpc``
- ``module.aws-elb``
- ``module.aws-iam``

.. code-block:: shell-session

   $ terraform init

Once initialized, execute:

.. code-block:: shell-session

   $ terraform plan -out=aws_kubespray_plan

This will generate a file, ``aws_kubespray_plan``, containing an execution plan of the infrastructure that will be created on AWS. To apply, execute:

.. code-block:: shell-session

   $ terraform apply "aws_kubespray_plan"

Terraform automatically creates an Ansible inventory file at ``inventory/hosts``.

Installing Kubernetes cluster with Cilium as CNI
================================================

Kubespray uses Ansible for provisioning and orchestration. Once the infrastructure is created, you can run the Ansible playbook to install Kubernetes and all the required dependencies.
Execute the command below in the kubespray clone repo, providing the correct path of the AWS EC2 SSH private key in ``ansible_ssh_private_key_file=``.

We recommend using the `latest released Cilium version`_ by passing the variable when running the ``ansible-playbook`` command. For example, you could add the following flag to the command below: ``-e cilium_version=v1.11.0``.

.. code-block:: shell-session

   $ ansible-playbook -i ./inventory/hosts ./cluster.yml -e ansible_user=core -e bootstrap_os=coreos -e kube_network_plugin=cilium -b --become-user=root --flush-cache -e ansible_ssh_private_key_file=

.. _latest released Cilium version: https://github.com/cilium/cilium/releases

If you are interested in customizing your Kubernetes cluster setup, consider copying the sample inventory. Then you can edit the variables in the relevant file in the ``group_vars`` directory.

.. code-block:: shell-session

   $ cp -r inventory/sample inventory/my-inventory
   $ cp ./inventory/hosts ./inventory/my-inventory/hosts
   $ echo 'cilium_version: "v1.11.0"' >> ./inventory/my-inventory/group_vars/k8s_cluster/k8s-net-cilium.yml
   $ ansible-playbook -i ./inventory/my-inventory/hosts ./cluster.yml -e ansible_user=core -e bootstrap_os=coreos -e kube_network_plugin=cilium -b --become-user=root --flush-cache -e ansible_ssh_private_key_file=

Validate Cluster
================

To check if the cluster was created successfully, SSH into the bastion host with the user ``core``.
.. code-block:: shell-session

   $ # Get information about the bastion host
   $ cat ssh-bastion.conf
   $ ssh -i ~/path/to/ec2-key-file.pem core@public_ip_of_bastion_host

Execute the commands below from the bastion host. If ``kubectl`` isn't installed on the bastion host, you can log in to a master node to test the commands. You may need to copy the private key to the bastion host to access the master node.

.. include:: k8s-install-validate.rst

Delete Cluster
==============

.. code-block:: shell-session

   $ cd contrib/terraform/aws
   $ terraform destroy
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-kubespray.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

.. _talos_linux_install:

**Prerequisites / Limitations**

- Cilium's Talos Linux support is only tested with Talos versions ``>=1.5.0``.
- As Talos `does not allow loading Kernel modules`_ by Kubernetes workloads, ``SYS_MODULE`` needs to be dropped from the Cilium default capability list.
- Talos Linux's `Forwarding kube-dns to Host DNS`_ (enabled by default since Talos 1.8+) doesn't work together with Cilium's :ref:`eBPF_Host_Routing`. You must set ``bpf.hostLegacyRouting`` to ``true``, as DNS won't work otherwise.

.. _`does not allow loading Kernel modules`: https://www.talos.dev/latest/learn-more/process-capabilities/
.. _`Forwarding kube-dns to Host DNS`: https://www.talos.dev/latest/talos-guides/network/host-dns/#forwarding-kube-dns-to-host-dns

.. note::

   The official Talos Linux documentation already covers many different Cilium deployment options in its `Deploying Cilium CNI guide`_. This guide therefore focuses only on the most recommended deployment option, from a Cilium perspective:

   - Deployment via the official `Cilium Helm chart`_
   - Cilium `Kube-Proxy replacement` enabled
   - Reuse of the ``cgroupv2`` mount that Talos already provides
   - `Kubernetes Host Scope` IPAM mode, as Talos by default assigns ``PodCIDRs`` to ``v1.Node`` resources

.. _`Cilium Helm chart`: https://github.com/cilium/charts
.. _`Deploying Cilium CNI guide`: https://www.talos.dev/latest/kubernetes-guides/network/deploying-cilium/

**Configure Talos Linux**

Before installing Cilium, two `Talos Linux Kubernetes configurations`_ need to be adjusted:

#. Ensure no other CNI is deployed via ``cluster.network.cni.name: none``
#. Disable the kube-proxy deployment via ``cluster.proxy.disabled: true``

Prepare a ``patch.yaml`` file:

.. code-block:: yaml

   cluster:
     network:
       cni:
         name: none
     proxy:
       disabled: true

Next, generate the configuration files for the Talos cluster using the ``talosctl gen config`` command:

.. code-block:: shell-session

   talosctl gen config \
       my-cluster https://mycluster.local:6443 \
       --config-patch @patch.yaml

.. _`Talos Linux Kubernetes configurations`: https://www.talos.dev/latest/reference/configuration/v1alpha1/config/#Config.cluster

**Install Cilium**

To run Cilium with `Kube-Proxy replacement` enabled, you must configure ``k8sServiceHost`` and ``k8sServicePort`` to point at the Kubernetes API. Conveniently, Talos Linux provides KubePrism_, which exposes the Kubernetes API using only host networking, without an external load balancer. The KubePrism_ endpoint can be accessed from every Talos Linux node on ``localhost:7445``.

.. cilium-helm-install::
   :namespace: $CILIUM_NAMESPACE
   :set: ipam.mode=kubernetes kubeProxyReplacement=true securityContext.capabilities.ciliumAgent="{CHOWN,KILL,NET_ADMIN,NET_RAW,IPC_LOCK,SYS_ADMIN,SYS_RESOURCE,DAC_OVERRIDE,FOWNER,SETGID,SETUID}" securityContext.capabilities.cleanCiliumState="{NET_ADMIN,SYS_ADMIN,SYS_RESOURCE}" cgroup.autoMount.enabled=false cgroup.hostRoot=/sys/fs/cgroup k8sServiceHost=localhost k8sServicePort=7445

.. _KubePrism: https://www.talos.dev/v1.6/kubernetes-guides/configuration/kubeprism/
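The version floor stated in the prerequisites (``>=1.5.0``) can be checked mechanically before proceeding; a sketch using ``sort -V`` for the comparison, where the ``talos_version`` value stands in for the output of ``talosctl version`` (an assumption for this example):

```shell
# Guard against Talos versions below the tested floor of 1.5.0.
min_version="1.5.0"
talos_version="1.6.4"   # in practice, parse this from `talosctl version`

# sort -V sorts version strings; if the minimum sorts first, the install is supported.
lowest=$(printf '%s\n%s\n' "$min_version" "$talos_version" | sort -V | head -n1)
if [ "$lowest" = "$min_version" ]; then
  echo "Talos $talos_version meets the >=$min_version requirement"
else
  echo "Talos $talos_version is below the tested minimum $min_version" >&2
fi
```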
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-talos-linux.rst
Install the latest version of the Cilium CLI. The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. clustermesh, Hubble).

.. tabs::

   .. group-tab:: Linux

      .. code-block:: shell-session

         CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
         CLI_ARCH=amd64
         if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
         curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
         sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
         sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
         rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

   .. group-tab:: macOS

      .. code-block:: shell-session

         CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
         CLI_ARCH=amd64
         if [ "$(uname -m)" = "arm64" ]; then CLI_ARCH=arm64; fi
         curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-darwin-${CLI_ARCH}.tar.gz{,.sha256sum}
         shasum -a 256 -c cilium-darwin-${CLI_ARCH}.tar.gz.sha256sum
         sudo tar xzvfC cilium-darwin-${CLI_ARCH}.tar.gz /usr/local/bin
         rm cilium-darwin-${CLI_ARCH}.tar.gz{,.sha256sum}

   .. group-tab:: Other

      See the full page of `releases `_.

.. only:: not stable

   Clone the Cilium GitHub repository so that the Cilium CLI can access the latest unreleased Helm chart from the main branch:

   .. parsed-literal::

      git clone git@github.com:cilium/cilium.git
      cd cilium
https://github.com/cilium/cilium/blob/main//Documentation/installation/cli-download.rst
You can deploy the "connectivity-check" to test connectivity between pods. It is recommended to create a separate namespace for this.

.. code-block:: shell-session

   kubectl create ns cilium-test

Deploy the check with:

.. parsed-literal::

   kubectl apply -n cilium-test -f |SCM_WEB|/examples/kubernetes/connectivity-check/connectivity-check.yaml

It will deploy a series of deployments which use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test:

.. code-block:: shell-session

   $ kubectl get pods -n cilium-test
   NAME                                                     READY   STATUS    RESTARTS   AGE
   echo-a-76c5d9bd76-q8d99                                  1/1     Running   0          66s
   echo-b-795c4b4f76-9wrrx                                  1/1     Running   0          66s
   echo-b-host-6b7fc94b7c-xtsff                             1/1     Running   0          66s
   host-to-b-multi-node-clusterip-85476cd779-bpg4b          1/1     Running   0          66s
   host-to-b-multi-node-headless-dc6c44cb5-8jdz8            1/1     Running   0          65s
   pod-to-a-79546bc469-rl2qq                                1/1     Running   0          66s
   pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p                    1/1     Running   0          66s
   pod-to-a-denied-cnp-6967cb6f7f-7h9fn                     1/1     Running   0          66s
   pod-to-b-intra-node-nodeport-9b487cf89-6ptrt             1/1     Running   0          65s
   pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw           1/1     Running   0          66s
   pod-to-b-multi-node-headless-7d44b85d69-mtscc            1/1     Running   0          66s
   pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82            1/1     Running   0          65s
   pod-to-external-1111-d56f47579-d79dz                     1/1     Running   0          66s
   pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7   1/1     Running   0          66s

.. note::

   If you deploy the connectivity check to a single node cluster, pods that check multi-node functionalities will remain in the ``Pending`` state. This is expected since these pods need at least 2 nodes to be scheduled successfully.

Once done with the test, remove the ``cilium-test`` namespace:

.. code-block:: shell-session

   kubectl delete ns cilium-test
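Rather than scanning the listing by eye, the STATUS column can be checked mechanically. A sketch that counts non-``Running`` pods from captured output; the two-line sample here is hypothetical and shortened (in practice, pipe ``kubectl get pods -n cilium-test --no-headers`` directly):

```shell
# Hypothetical captured output; replace with:
#   pods_output=$(kubectl get pods -n cilium-test --no-headers)
pods_output='echo-a-76c5d9bd76-q8d99 1/1 Running 0 66s
pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82 0/1 Pending 0 65s'

# Field 3 of each line is the pod STATUS; count everything that is not Running.
not_running=$(printf '%s\n' "$pods_output" | awk '$3 != "Running"' | wc -l)
echo "pods not Running: $not_running"
```

A zero count means every connectivity variant came up; on a single-node cluster, expect the multi-node pods to stay ``Pending`` as noted above.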
https://github.com/cilium/cilium/blob/main//Documentation/installation/kubectl-connectivity-test.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

.. _k8s_install_openshift_okd:

*****************************
Installation on OpenShift OKD
*****************************

There is currently no community-maintained installation of Cilium on OpenShift. However, Cilium can be installed on OpenShift by using vendor-maintained OLM images. These images, and the relevant installation instructions for them, can be found on the Red Hat Ecosystem Catalog:

* `Isovalent Enterprise for Cilium Software Page `__
* `Certified Isovalent Enterprise for Cilium OLM container images `__

.. admonition:: Video
   :class: attention

   To learn more about OpenShift and Cilium, check out `eCHO episode 31: OpenShift Test Environment with Cilium `__.
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-openshift-okd.rst
Configuring kind cluster creation is done using a YAML configuration file. This step is necessary in order to disable the default CNI and replace it with Cilium.

Create a :download:`kind-config.yaml <./kind-config.yaml>` file based on the following template. It will create a cluster with 3 worker nodes and 1 control-plane node.

.. literalinclude:: kind-config.yaml
   :language: yaml

By default, the latest version of Kubernetes from when the kind release was created is used. To change the version of Kubernetes being run, ``image`` has to be defined for each node. See the `Node Configuration `_ documentation for more information.

.. tip::

   By default, kind uses the following pod and service subnets::

      Networking.PodSubnet = "10.244.0.0/16"
      Networking.ServiceSubnet = "10.96.0.0/12"

   If any of these subnets conflicts with your local network address range, update the ``networking`` section of the kind configuration file to specify different subnets that do not conflict, or you risk having connectivity issues when deploying Cilium. For example:

   .. code-block:: yaml

      networking:
        disableDefaultCNI: true
        podSubnet: "10.10.0.0/16"
        serviceSubnet: "10.11.0.0/16"
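Putting the template and the tip together, a complete ``kind-config.yaml`` for 1 control-plane and 3 worker nodes with the default CNI disabled might look like this sketch (the non-default subnets are only needed if the defaults conflict with your local network):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  podSubnet: "10.10.0.0/16"
  serviceSubnet: "10.11.0.0/16"
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker
```

Pass it to kind with ``kind create cluster --config kind-config.yaml``; the nodes will stay ``NotReady`` until Cilium is installed.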
https://github.com/cilium/cilium/blob/main//Documentation/installation/kind-configure.rst
main
cilium
[ 0.04931472986936569, -0.0027648406103253365, 0.0341256745159626, -0.019260864704847336, 0.020854966714978218, 0.04929517209529877, -0.04922106862068176, -0.0178325604647398, 0.029488086700439453, 0.0356714241206646, -0.012022671289741993, -0.0232232678681612, -0.004614364821463823, 0.02375...
0.089704
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _taint_effects:

#####################################################
Considerations on Node Pool Taints and Unmanaged Pods
#####################################################

Depending on the environment or cloud provider being used, a CNI plugin
and/or configuration file may be pre-installed in nodes belonging to a given
cluster where Cilium is being installed or already running. Upon starting on
a given node, and if it is intended as the exclusive CNI plugin for the
cluster, Cilium does its best to take ownership of CNI on the node. However,
a couple of situations can prevent this from happening:

* Cilium can only take ownership of CNI on a node after starting. Pods
  starting before Cilium runs on a given node may get IPs from the
  pre-configured CNI.
* Some cloud providers may revert changes made to the CNI configuration by
  Cilium during operations such as node reboots, updates or routine
  maintenance. This is notably the case with GKE (non-Dataplane V2), in
  which node reboots and upgrades will undo changes made by Cilium and
  re-instate the default CNI configuration.

To help overcome this situation to the largest possible extent in
environments and cloud providers where Cilium isn't supported as the single
CNI, Cilium can manipulate Kubernetes's `taints`_ on a given node to help
prevent pods from starting before Cilium runs on said node. The mechanism
works as follows:

1. The cluster administrator places a specific taint (see below) on a given
   uninitialized node. Depending on the taint's effect (see below), this
   prevents pods that don't have a matching toleration from either being
   scheduled or altogether running on the node until the taint is removed.
2. Cilium runs on the node, initializes it and, once ready, removes the
   aforementioned taint.
3. From this point on, pods will start being scheduled and running on the
   node, having their networking managed by Cilium.
4. If Cilium is temporarily removed from the node, the Operator will
   re-apply the taint (but only with ``NoSchedule``).

By default, the taint key is ``node.cilium.io/agent-not-ready``, but in some
scenarios (such as when Cluster Autoscaler is being used but its flags
cannot be configured) this key may need to be tweaked. This can be done
using the ``agent-not-ready-taint-key`` option. In the aforementioned
example, users should specify a key starting with
``ignore-taint.cluster-autoscaler.kubernetes.io/``. When such a value is
used, the Cluster Autoscaler will ignore it when simulating scheduling,
allowing the cluster to scale up.

The taint's effect should be chosen taking into account the following
considerations:

* If ``NoSchedule`` is used, pods won't be *scheduled* to a node until
  Cilium has had the chance to remove the taint. However, one practical
  effect of this is that if some external process (such as a reboot) resets
  the CNI configuration on said node, pods that were already scheduled will
  be allowed to start concurrently with Cilium when the node next reboots,
  and hence may become unmanaged and have their networking managed by
  another CNI plugin.
* If ``NoExecute`` is used, pods won't be *executed* (nor *scheduled*) on a
  node until Cilium has had the chance to remove the taint. One practical
  effect of this is that whenever the taint is added back to the node by
  some external process (such as during an upgrade or eventually a routine
  operation), pods will be evicted from the node until Cilium has had the
  chance to remove the taint.

Another important thing to consider is the concept of a node itself, and the
different points of view over a node. For example, the instance/VM which backs
https://github.com/cilium/cilium/blob/main//Documentation/installation/taints.rst
main
cilium
[ -0.02750491537153721, 0.01661843992769718, -0.005671096500009298, -0.05109666287899017, 0.12015742808580399, -0.05762229114770889, -0.0061099654994904995, -0.0007177651859819889, 0.06620477139949799, 0.000421991542680189, 0.06030319631099701, -0.10367532819509506, 0.009464875794947147, -0....
0.195499
an upgrade or eventually a routine operation), pods will be evicted from the
node until Cilium has had the chance to remove the taint.

Another important thing to consider is the concept of a node itself, and the
different points of view over a node. For example, the instance/VM which
backs a Kubernetes node can be patched or reset filesystem-wise by a cloud
provider, or altogether replaced with an entirely new instance/VM that comes
back with the same name as the already-existing Kubernetes ``Node``
resource. Even though in said scenarios the node-pool-level taint will be
added back to the ``Node`` resource, pods that were already scheduled to the
node having this name will run on the node at the same time as Cilium,
potentially becoming unmanaged. This is why ``NoExecute`` is recommended,
as, assuming the taint is added back in this scenario, already-scheduled
pods won't run.

However, in some environments or cloud providers, and as mentioned above, it
may happen that a taint established at the node-pool level is added back to
a node after Cilium has removed it, and for reasons other than a node
upgrade/reset. The exact circumstances in which this may happen vary, but
this may lead to unexpected/undesired pod evictions in the particular case
when ``NoExecute`` is being used as the taint effect.

It is, thus, recommended that in each deployment, and depending on the
environment or cloud provider, a careful decision is made regarding the
taint effect (or even regarding whether to use the taint-based approach at
all) based on the information above, on the environment or cloud provider's
documentation, and on the fact that one is essentially establishing a
trade-off between having unmanaged pods in the cluster (which can lead to
dropped traffic and other issues) and having unexpected/undesired evictions
(which can lead to application downtime).

Taking all of the above into account, throughout the Cilium documentation we
recommend ``NoExecute``, as we believe it to be the least disruptive mode
that users can use to deploy Cilium on cloud providers.
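Putting the recommendation into practice, a minimal sketch of pre-tainting a
node with the default key and the recommended ``NoExecute`` effect might
look like this. ``worker-1`` is a hypothetical node name, and the
``kubectl`` calls are commented out because they require a live cluster:

```shell
# Hypothetical node name; the taint key is Cilium's default.
NODE="worker-1"
TAINT_KEY="node.cilium.io/agent-not-ready"

# Place the taint before Cilium runs on the node (requires a live cluster):
# kubectl taint nodes "$NODE" "${TAINT_KEY}=true:NoExecute"

# Once the Cilium agent is ready on the node, it removes the taint itself.
# Verify there are no remaining taints with:
# kubectl get node "$NODE" -o jsonpath='{.spec.taints}'
echo "${TAINT_KEY}=true:NoExecute"
```
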
https://github.com/cilium/cilium/blob/main//Documentation/installation/taints.rst
main
cilium
[ 0.012677853927016258, 0.006112673785537481, 0.07621712982654572, 0.020736847072839737, 0.07484017312526703, -0.03039557673037052, -0.013368193060159683, -0.050759509205818176, 0.11555488407611847, -0.007025107275694609, 0.03865563124418259, -0.00616705697029829, -0.002580389380455017, -0.0...
0.188033
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _cni_chaining:

************
CNI Chaining
************

CNI chaining allows Cilium to be used in combination with other CNI plugins.
With Cilium CNI chaining, the base network connectivity and IP address
management is handled by the non-Cilium CNI plugin, but Cilium attaches eBPF
programs to the network devices created by the non-Cilium plugin to provide
L3/L4 network visibility, policy enforcement and other advanced features.

.. toctree::
    :maxdepth: 1
    :glob:

    cni-chaining-aws-cni
    cni-chaining-azure-cni
    cni-chaining-calico
    cni-chaining-generic-veth
    cni-chaining-portmap
    cni-chaining-weave
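To make the chaining arrangement concrete: a chained CNI configuration lists
the base plugin first and appends ``cilium-cni`` last, so Cilium is invoked
after the base plugin has created the device. The sketch below is loosely
modeled on the generic-veth chaining pattern; the ``ptp``/``host-local``
plugins and the subnet are illustrative assumptions, not a drop-in
configuration:

```shell
# Write an illustrative chained CNI config. Plugin choice and subnet are
# assumptions modeled on the generic-veth chaining pattern; cilium-cni is
# listed last so it runs after the base plugin sets up connectivity.
cat > 05-cilium-chain.conflist <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "generic-veth",
  "plugins": [
    { "type": "ptp", "ipam": { "type": "host-local", "subnet": "10.32.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } },
    { "type": "cilium-cni" }
  ]
}
EOF
# On a real node this file would live in /etc/cni/net.d/.
```
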
https://github.com/cilium/cilium/blob/main//Documentation/installation/cni-chaining.rst
main
cilium
[ -0.02462019957602024, 0.0006983779021538794, -0.07003737986087799, -0.036455076187849045, 0.007529539056122303, -0.030822794884443283, -0.0643228217959404, -0.06494380533695221, -0.003089690348133445, 0.023059170693159103, 0.046333152800798416, -0.019382141530513763, 0.04691452160477638, -...
0.123434
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _chaining_aws_cni:

******************
AWS VPC CNI plugin
******************

This guide explains how to set up Cilium in combination with the AWS VPC CNI
plugin. In this hybrid mode, the AWS VPC CNI plugin is responsible for
setting up the virtual network devices as well as for IP address management
(IPAM) via ENIs. After the initial networking is set up for a given pod, the
Cilium CNI plugin is called to attach eBPF programs to the network devices
set up by the AWS VPC CNI plugin in order to enforce network policies,
perform load-balancing and provide encryption.

.. image:: aws-cilium-architecture.png

.. include:: cni-chaining-limitations.rst

.. admonition:: Video
    :class: attention

    If you require advanced features of Cilium, consider migrating fully to
    Cilium. To help you with the process, you can watch two Principal
    Engineers at Meltwater talk about `how they migrated Meltwater's
    production Kubernetes clusters from the AWS VPC CNI plugin to Cilium`__.

.. important::

    Please ensure that you are running version `1.11.2`_ or newer of the AWS
    VPC CNI plugin to guarantee compatibility with Cilium.

    .. code-block:: shell-session

        $ kubectl -n kube-system get ds/aws-node -o json | jq -r '.spec.template.spec.containers[0].image'
        602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.11.2

    If you are running an older version, you can upgrade it with:

    .. code-block:: shell-session

        $ kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.11/config/master/aws-k8s-cni.yaml

.. image:: aws-cni-architecture.png

Setting up a cluster on AWS
===========================

Follow the instructions in the :ref:`k8s_install_quick` guide to set up an
EKS cluster, or use any other method of your preference to set up a
Kubernetes cluster on AWS.

Ensure that the `aws-vpc-cni-k8s`_ plugin is installed, which will already
be the case if you have created an EKS cluster. Also, ensure the version of
the plugin is up-to-date as per the above.

.. include:: k8s-install-download-release.rst

Deploy Cilium via Helm:

.. cilium-helm-install::
    :namespace: kube-system
    :set: cni.chainingMode=aws-cni cni.exclusive=false enableIPv4Masquerade=false routingMode=native

This will enable chaining with the AWS VPC CNI plugin. It will also disable
tunneling, as it's not required since ENI IP addresses can be directly
routed in the VPC. For the same reason, masquerading can be disabled as
well.

Restart existing pods
=====================

The new CNI chaining configuration *will not* apply to any pod that is
already running in the cluster. Existing pods will be reachable, and Cilium
will load-balance *to* them, but not *from* them. Policy enforcement will
also not be applied. For these reasons, you must restart these pods so that
the chaining configuration can be applied to them.

The following command can be used to check which pods need to be restarted:

.. code-block:: bash

    for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
        ceps=$(kubectl -n "${ns}" get cep \
            -o jsonpath='{.items[*].metadata.name}')
        pods=$(kubectl -n "${ns}" get pod \
            -o custom-columns=NAME:.metadata.name,NETWORK:.spec.hostNetwork \
            | grep -E '\s(<none>|false)' | awk '{print $1}' | tr '\n' ' ')
        ncep=$(echo "${pods} ${ceps}" | tr ' ' '\n' | sort | uniq -u | paste -s -d ' ' -)
        for pod in $(echo $ncep); do
            echo "${ns}/${pod}";
        done
    done

.. include:: k8s-install-validate.rst

Advanced
========

Enabling security groups for pods (EKS)
---------------------------------------

Cilium can be used alongside the `security groups for pods`_ feature of EKS
in supported clusters when running in chaining mode. Follow the instructions
below to enable this feature:

.. important::

    The following guide requires `jq`_ and the `AWS CLI`_ to be installed
    and configured.

Make sure that the ``AmazonEKSVPCResourceController`` managed policy is
attached to the IAM role associated with the EKS cluster:

.. code-block:: shell-session
https://github.com/cilium/cilium/blob/main//Documentation/installation/cni-chaining-aws-cni.rst
main
cilium
[ -0.0525791309773922, 0.004636325407773256, -0.09504792839288712, -0.06426674872636795, 0.0006378128309734166, -0.021401645615696907, -0.01880592107772827, -0.009400216862559319, 0.025963956490159035, 0.026301097124814987, 0.03878882899880409, -0.05623083934187889, 0.05807284265756607, -0.0...
0.171444
in chaining mode. Follow the instructions below to enable this feature:

.. important::

    The following guide requires `jq`_ and the `AWS CLI`_ to be installed
    and configured.

Make sure that the ``AmazonEKSVPCResourceController`` managed policy is
attached to the IAM role associated with the EKS cluster:

.. code-block:: shell-session

    export EKS_CLUSTER_NAME="my-eks-cluster" # Change accordingly
    export EKS_CLUSTER_ROLE_NAME=$(aws eks describe-cluster \
        --name "${EKS_CLUSTER_NAME}" \
        | jq -r '.cluster.roleArn' | awk -F/ '{print $NF}')
    aws iam attach-role-policy \
        --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController \
        --role-name "${EKS_CLUSTER_ROLE_NAME}"

Then, as mentioned above, make sure that the version of the AWS VPC CNI
plugin running in the cluster is up-to-date:

.. code-block:: shell-session

    kubectl -n kube-system get ds/aws-node \
        -o jsonpath='{.spec.template.spec.containers[0].image}'
    602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:v1.7.10

Next, patch the ``kube-system/aws-node`` DaemonSet in order to enable
security groups for pods:

.. code-block:: shell-session

    kubectl -n kube-system patch ds aws-node \
        -p '{"spec":{"template":{"spec":{"initContainers":[{"env":[{"name":"DISABLE_TCP_EARLY_DEMUX","value":"true"}],"name":"aws-vpc-cni-init"}],"containers":[{"env":[{"name":"ENABLE_POD_ENI","value":"true"}],"name":"aws-node"}]}}}}'
    kubectl -n kube-system rollout status ds aws-node

After the rollout is complete, all nodes in the cluster should have the
``vpc.amazonaws.com/has-trunk-attached`` label set to ``true``:

.. code-block:: shell-session

    kubectl get nodes -L vpc.amazonaws.com/has-trunk-attached
    NAME                                            STATUS   ROLES    AGE   VERSION              HAS-TRUNK-ATTACHED
    ip-192-168-111-169.eu-west-2.compute.internal   Ready    <none>   22m   v1.19.6-eks-49a6c0   true
    ip-192-168-129-175.eu-west-2.compute.internal   Ready    <none>   22m   v1.19.6-eks-49a6c0   true

From this moment everything should be in place. For details on how to
actually associate security groups to pods, please refer to the `official
documentation`_.

.. include:: next-steps.rst
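As a quick sanity check after the steps above, you can confirm which
chaining mode the Cilium agents picked up from their ConfigMap. The key name
below mirrors the ``cni.chainingMode`` Helm value; treat it as an assumption
to verify against your release, and note that the ``kubectl`` command
(commented out) needs cluster access:

```shell
# Expected chaining mode when running alongside the AWS VPC CNI plugin.
EXPECTED_MODE="aws-cni"

# Requires cluster access; the ConfigMap key is assumed to mirror the Helm
# value cni.chainingMode:
# kubectl -n kube-system get configmap cilium-config \
#     -o jsonpath='{.data.cni-chaining-mode}'
echo "$EXPECTED_MODE"
```
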
https://github.com/cilium/cilium/blob/main//Documentation/installation/cni-chaining-aws-cni.rst
main
cilium
[ -0.005226133856922388, 0.026548035442829132, -0.028858607634902, 0.05224744230508804, 0.025957753881812096, 0.04702867940068245, 0.006914467550814152, -0.052705395966768265, 0.02199273183941841, 0.10790110379457474, 0.027604535222053528, -0.11297772079706192, 0.020887838676571846, -0.01298...
0.09987
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _k3s_install:

**********************
Installation Using K3s
**********************

This guide walks you through installation of Cilium on `K3s`_, a highly
available, certified Kubernetes distribution designed for production
workloads in unattended, resource-constrained, remote locations or inside
IoT appliances.

Cilium is presently supported on amd64 and arm64 architectures.

Install a Master Node
=====================

The first step is to install a K3s master node, making sure to disable
support for the default CNI plugin and the built-in network policy enforcer:

.. note::

    If running Cilium in :ref:`kubeproxy-free` mode, add the option
    ``--disable-kube-proxy``.

.. code-block:: shell-session

    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--flannel-backend=none --disable-network-policy' sh -

Install Agent Nodes (Optional)
==============================

K3s can run in standalone mode or as a cluster, making it a great choice for
local testing with multi-node data paths. Agent nodes are joined to the
master node using a node-token, which can be found on the master node at
``/var/lib/rancher/k3s/server/node-token``.

Install K3s on agent nodes and join them to the master node, making sure to
replace the variables with values from your environment:

.. code-block:: shell-session

    curl -sfL https://get.k3s.io | K3S_URL="https://${MASTER_IP}:6443" K3S_TOKEN=${NODE_TOKEN} sh -

Should you encounter any issues during the installation, please refer to the
:ref:`troubleshooting_k8s` section and/or seek help on `Cilium Slack`_.

Please consult the Kubernetes :ref:`k8s_requirements` for information on how
you need to configure your Kubernetes cluster to operate with Cilium.

Configure Cluster Access
========================

For the Cilium CLI to access the cluster in successive steps you will need
to use the ``kubeconfig`` file stored at ``/etc/rancher/k3s/k3s.yaml`` by
setting the ``KUBECONFIG`` environment variable:

.. code-block:: shell-session

    export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

Install Cilium
==============

.. include:: cli-download.rst

.. note::

    Install Cilium with ``--set=ipam.operator.clusterPoolIPv4PodCIDRList="10.42.0.0/16"``
    to match the k3s default podCIDR 10.42.0.0/16.

.. note::

    If you are using Rancher Desktop, you may need to override the CNI
    binary path by adding the additional flag ``--set 'cni.binPath=/usr/libexec/cni'``.

Install Cilium by running:

.. parsed-literal::

    cilium install |CHART_VERSION| --set=ipam.operator.clusterPoolIPv4PodCIDRList="10.42.0.0/16"

Validate the Installation
=========================

.. include:: cli-status.rst
.. include:: cli-connectivity-test.rst
.. include:: next-steps.rst
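Tying the note about kube-proxy-free mode together with the master-node
install, the extra flag simply folds into ``INSTALL_K3S_EXEC``. The install
command itself is commented out because it modifies the host:

```shell
# K3s install options: no Flannel, no built-in network policy, and (per the
# kubeproxy-free note above) no kube-proxy either.
INSTALL_K3S_EXEC='--flannel-backend=none --disable-network-policy --disable-kube-proxy'

# Run on the intended master node (modifies the host):
# curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="$INSTALL_K3S_EXEC" sh -
echo "$INSTALL_K3S_EXEC"
```
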
https://github.com/cilium/cilium/blob/main//Documentation/installation/k3s.rst
main
cilium
[ -0.027947818860411644, -0.012914258986711502, -0.03831276670098305, -0.07356660813093185, 0.01849714107811451, -0.08121226727962494, -0.039116039872169495, -0.059048645198345184, 0.07209152728319168, 0.003291201777756214, 0.02705460973083973, -0.07052519172430038, 0.030412469059228897, -0....
0.209362
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _cni_migration:

*****************************
Migrating a cluster to Cilium
*****************************

Cilium can be used to migrate from another CNI. Running clusters can be
migrated on a node-by-node basis, without disrupting existing traffic or
requiring a complete cluster outage or rebuild, depending on the complexity
of the migration case.

This document outlines how migrations with Cilium work. You will get a good
understanding of the basic requirements, as well as see an example migration
which you can practice using :ref:`Kind`.

Background
==========

When the kubelet creates a Pod's sandbox, the installed CNI, as configured
in ``/etc/cni/net.d/``, is called. The CNI plugin handles the networking for
a pod - including allocating an IP address, creating and configuring a
network interface, and (potentially) establishing an overlay network. The
Pod's network configuration shares the same life cycle as the PodSandbox.

In the case of migration, we typically reconfigure ``/etc/cni/net.d/`` to
point to Cilium. However, any existing pods will still have been configured
by the old network plugin, and any new pods will be configured by the newer
CNI. To complete the migration, all pods on the cluster that are configured
by the old CNI must be recycled in order to become members of the new CNI.

A naive approach to migrating a CNI would be to reconfigure all nodes with a
new CNI and then gradually restart each node in the cluster, thus replacing
the CNI when the node is brought back up and ensuring that all pods are part
of the new CNI.

This simple migration, while effective, comes at the cost of disrupting
cluster connectivity during the rollout. Unmigrated and migrated nodes would
be split into two "islands" of connectivity, and pods would be randomly
unable to reach one another until the migration is complete.

Migration via dual overlays
---------------------------

Instead, Cilium supports a *hybrid* mode, where two separate overlays are
established across the cluster. While pods on a given node can only be
attached to one network, they have access to both Cilium and non-Cilium pods
while the migration is taking place. As long as Cilium and the existing
networking provider use a separate IP range, the Linux routing table takes
care of separating traffic.

In this document we will discuss a model for live migrating between two
deployed CNI implementations. This has the benefit of reducing downtime of
nodes and workloads and ensuring that workloads on both configured CNIs can
communicate during migration.

For live migration to work, Cilium will be installed with a separate CIDR
range and encapsulation port from those of the currently installed CNI.

Requirements
============

Live migration requires the following:

- A new, distinct Cluster CIDR for Cilium to use
- Use of the :ref:`Cluster Pool IPAM mode`
- A distinct overlay, either protocol or port
- An existing network plugin that uses the Linux routing stack, such as
  Flannel, Calico, or AWS-CNI

Limitations
===========

Currently, Cilium migration has not been tested with:

- BGP-based routing
- Changing IP families (e.g. from IPv4 to IPv6)
- Migrating from Cilium in chained mode
- An existing NetworkPolicy provider

During migration, Cilium's NetworkPolicy and CiliumNetworkPolicy enforcement
will be disabled. Otherwise, traffic from non-Cilium pods may be incorrectly
dropped. Once the migration process is complete, policy enforcement can be
re-enabled. If there is an existing NetworkPolicy provider, you may wish
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-migration.rst
main
cilium
[ -0.03857194259762764, -0.023177912458777428, -0.08546619117259979, -0.0340513251721859, 0.0755467563867569, -0.07064957171678543, -0.07779441028833389, -0.01337284967303276, -0.028026042506098747, 0.0003030922671314329, 0.03873687982559204, -0.09557103365659714, 0.0702679306268692, -0.0629...
0.230666
from Cilium in chained mode
- An existing NetworkPolicy provider

During migration, Cilium's NetworkPolicy and CiliumNetworkPolicy enforcement
will be disabled. Otherwise, traffic from non-Cilium pods may be incorrectly
dropped. Once the migration process is complete, policy enforcement can be
re-enabled. If there is an existing NetworkPolicy provider, you may wish to
temporarily delete all NetworkPolicies before proceeding.

It is strongly recommended to install Cilium using the :ref:`cluster-pool`
IPAM allocator. This provides the strongest assurance that there will be no
IP collisions.

.. warning::

    Migration is highly dependent on the exact configuration of existing
    clusters. It is, thus, strongly recommended to perform a trial migration
    on a test or lab cluster.

Overview
========

The migration process utilizes the :ref:`per-node configuration` feature to
selectively enable the Cilium CNI. This allows for a controlled rollout of
Cilium without disrupting existing workloads.

Cilium will be installed, first, in a mode where it establishes an overlay
but does not provide CNI networking for any pods. Then, individual nodes
will be migrated.

In summary, the process looks like:

1. Install Cilium in "secondary" mode
2. Cordon, drain, migrate, and reboot each node
3. Remove the existing network provider
4. (Optional) Reboot each node again

Migration procedure
===================

Preparation
-----------

- Optional: Create a :ref:`Kind` cluster and install `Flannel`_ on it.

  .. parsed-literal::

      $ cat <<EOF > kind-config.yaml
      apiVersion: kind.x-k8s.io/v1alpha4
      kind: Cluster
      nodes:
      - role: control-plane
      - role: worker
      - role: worker
      networking:
        disableDefaultCNI: true
      EOF
      $ kind create cluster --config=kind-config.yaml
      $ kubectl apply -n kube-system --server-side -f \\
          |SCM_WEB|\ /examples/misc/migration/install-reference-cni-plugins.yaml
      $ kubectl apply --server-side -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
      $ kubectl wait --for=condition=Ready nodes --all

- Optional: Monitor connectivity. You may wish to install a tool such as
  `goldpinger`_ to detect any possible connectivity issues.

1. Select a **new** CIDR for pods. It must be distinct from all other CIDRs
   in use.

   For Kind clusters, the default is ``10.244.0.0/16``. So, for this
   example, we will use ``10.245.0.0/16``.

2. Select a **distinct** encapsulation port. For example, if the existing
   cluster is using VXLAN, then you should either use GENEVE or configure
   Cilium to use VXLAN with a different port.

   For this example, we will use VXLAN with a non-default port of 8473.

3. Create a Helm ``values-migration.yaml`` file based on the following
   example. Be sure to fill in the CIDR you selected in step 1.

   .. code-block:: yaml

       operator:
         unmanagedPodWatcher:
           restart: false # Migration: Don't restart unmigrated pods
       routingMode: tunnel # Migration: Optional: default is tunneling, configure as needed
       tunnelProtocol: vxlan # Migration: Optional: default is VXLAN, configure as needed
       tunnelPort: 8473 # Migration: Optional, change only if both networks use the same port by default
       cni:
         customConf: true # Migration: Don't install a CNI configuration file
         uninstall: false # Migration: Don't remove CNI configuration on shutdown
       ipam:
         mode: "cluster-pool"
         operator:
           clusterPoolIPv4PodCIDRList: ["10.245.0.0/16"] # Migration: Ensure this is distinct and unused
       policyEnforcementMode: "never" # Migration: Disable policy enforcement
       bpf:
         hostLegacyRouting: true # Migration: Allow for routing between Cilium and the existing overlay

4. Configure any additional Cilium Helm values. Cilium supports a number of
   :ref:`Helm configuration options`. You may choose to auto-detect typical
   ones using the :ref:`cilium-cli`. This will consume the template and
   auto-detect any other relevant Helm values. Review these values for your
   particular installation.

   .. parsed-literal::

       $ cilium install |CHART_VERSION| --values values-migration.yaml --dry-run-helm-values > values-initial.yaml
       $ cat values-initial.yaml

5. Install Cilium using :ref:`helm`.

   .. code-block:: shell-session

       $ helm repo add cilium https://helm.cilium.io/
       $ helm install cilium cilium/cilium --namespace kube-system --values values-initial.yaml

At this point, you should have a cluster with Cilium installed and an
overlay established, but no pods managed by Cilium itself. You can
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-migration.rst
main
cilium
[ 0.005403696559369564, -0.05590332671999931, -0.031014328822493553, -0.001965525094419718, 0.015542794950306416, -0.07225415110588074, -0.07568971067667007, -0.016274020075798035, -0.021494299173355103, -0.002142301993444562, 0.0798104852437973, -0.05869794264435768, 0.020629847422242165, -...
0.158098
cat values-initial.yaml

5. Install Cilium using :ref:`helm`.

   .. code-block:: shell-session

       $ helm repo add cilium https://helm.cilium.io/
       $ helm install cilium cilium/cilium --namespace kube-system --values values-initial.yaml

   At this point, you should have a cluster with Cilium installed and an
   overlay established, but no pods managed by Cilium itself. You can verify
   this with the ``cilium`` command.

   .. code-block:: shell-session

       $ cilium status --wait
       ...
       Cluster Pods: 0/3 managed by Cilium

6. Create a :ref:`per-node config` that will instruct Cilium to "take over"
   CNI networking on the node. Initially, this will apply to no nodes; you
   will roll it out gradually via the migration process.

   .. code-block:: shell-session

       cat < values-final.yaml # optional, can cause brief interruptions
       $ diff values-initial.yaml values-final.yaml

Then, apply the changes to the cluster:

.. code-block:: shell-session

    $ helm upgrade --namespace kube-system cilium cilium/cilium --values values-final.yaml
    $ kubectl -n kube-system rollout restart daemonset cilium
    $ cilium status --wait

3. Delete the per-node configuration:

   .. code-block:: shell-session

       $ kubectl delete -n kube-system ciliumnodeconfig cilium-default

4. Delete the previous network plugin.

   At this point, all pods should be using Cilium for networking. You can
   easily verify this with ``cilium status``. It is now safe to delete the
   previous network plugin from the cluster.

   Most network plugins leave behind some resources, e.g. iptables rules and
   interfaces. These will be cleaned up when the node next reboots. If
   desired, you may perform a rolling reboot again.
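The per-node configuration referenced in step 6 (and deleted in step 3) can
be sketched as below. This is reconstructed from the upstream migration
guide rather than taken from this excerpt, so treat the API version, label
key, and file paths as assumptions to check against your Cilium release; the
``kubectl`` lines are commented out because they require a cluster:

```shell
# Sketch of the "cilium-default" per-node configuration. It matches no nodes
# until a node is labeled io.cilium.migration/cilium-default=true, which is
# how the rollout stays gradual. API version and keys are assumptions.
cat > ciliumnodeconfig.yaml <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNodeConfig
metadata:
  namespace: kube-system
  name: cilium-default
spec:
  nodeSelector:
    matchLabels:
      io.cilium.migration/cilium-default: "true"
  defaults:
    write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist
    custom-cni-conf: "false"
    cni-chaining-mode: "none"
    cni-exclusive: "true"
EOF
# kubectl apply --server-side -f ciliumnodeconfig.yaml
# Later, for each node being migrated:
# kubectl label node <node> --overwrite io.cilium.migration/cilium-default=true
```
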
https://github.com/cilium/cilium/blob/main//Documentation/installation/k8s-install-migration.rst
main
cilium
[ 0.04860628768801689, -0.011735946871340275, -0.020986905321478844, -0.039178963750600815, -0.0017305881483480334, -0.06912118941545486, -0.05197002738714218, 0.0154495220631361, 0.05968684330582619, 0.04299478232860565, 0.04650746285915375, -0.10386268049478531, 0.0012087165378034115, -0.0...
0.197586
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _rancher_managed_rke_clusters:

**************************
Installation using Rancher
**************************

Introduction
============

If you're not using the Rancher Management Console/UI to install your clusters,
head over to the :ref:`installation guides for standalone RKE clusters`.

Rancher comes with `official support for Cilium`__. For most Rancher users,
that's the recommended way to use Cilium on Rancher-managed clusters. However,
as Rancher uses a custom ``rke2-cilium`` `Helm chart`__ with independent
release cycles, Cilium power-users might want to use an out-of-band Cilium
installation instead, based on the official `Cilium Helm chart`__, on top of
their Rancher-managed RKE2 downstream clusters. This guide explains how to
achieve this.

.. note::

    This guide only covers Rancher-managed (**non-standalone**) **RKE2**
    clusters.

.. note::

    This guide shows how to install Cilium on Rancher-managed Custom Clusters.
    However, this method also applies to clusters created with providers such
    as VMware vSphere.

Prerequisites
=============

* Fully functioning `Rancher Version 2.x`__ instance
* At least one empty Linux VM, to be used as the initial downstream
  "Custom Cluster" (Control Plane) node
* DNS record pointing to the Kubernetes API of the downstream "Custom Cluster"
  Control Plane node(s) or L4 load-balancer

Create a New Cluster
====================

In Rancher UI, navigate to the Cluster Management page. In the top right,
click the ``Create`` button to create a new cluster.

.. image:: images/rancher_add_cluster.png

On the Cluster creation page, select to create a new ``Custom`` cluster:

.. image:: images/rancher_existing_nodes.png

When the ``Create Custom`` page opens, provide a name for the cluster. In the
same ``Basics`` section, expand the ``Container Network`` drop-down list and
select ``none``.

.. image:: images/rancher_select_cni.png

Go through the other configuration options and configure the ones that are
relevant for your setup.

Add ``HelmChart`` manifests to install Cilium using the RKE2 built-in Helm
Operator. Go to the ``Additional Manifests`` section and paste the following
YAML. Add relevant values for your Cilium installation.

.. code-block:: yaml

    apiVersion: catalog.cattle.io/v1
    kind: ClusterRepo
    metadata:
      name: cilium
    spec:
      url: https://helm.cilium.io

.. code-block:: yaml

    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      name: cilium
      namespace: kube-system
    spec:
      targetNamespace: kube-system
      createNamespace: false
      version: v1.18.0
      chart: cilium
      repo: https://helm.cilium.io
      bootstrap: true
      valuesContent: |-
        # paste your Cilium values here:
        k8sServiceHost: 127.0.0.1
        k8sServicePort: 6443
        kubeProxyReplacement: true

.. image:: images/rancher_additional_manifests.png

.. note::

    ``k8sServiceHost`` should be set to ``127.0.0.1`` and ``k8sServicePort``
    to ``6443``. The Cilium agent running on Control Plane nodes will use the
    local address to communicate with the Kubernetes API server process. On
    Control Plane nodes you can verify this by running:

    .. code-block:: shell-session

        $ sudo ss -tulpn | grep 6443
        tcp   LISTEN 0      4096    *:6443    *:*    users:(("kube-apiserver",pid=124481,fd=3))

    On worker nodes, the Cilium agent will use the local address to
    communicate with the ``rke2`` process, which is listening on port
    ``6443``. The ``rke2`` process proxies requests to the Kubernetes API
    server running on the Control Plane node(s):

    .. code-block:: shell-session

        $ sudo ss -tulpn | grep 6443
        tcp   LISTEN 0      4096    127.0.0.1:6443    0.0.0.0:*    users:(("rke2",pid=113574,fd=8))

Click the ``Edit as YAML`` box at the bottom of the page. The cluster
configuration will open in an editor within the window. Within the
``Cluster`` Custom Resource (``provisioning.cattle.io/v1``), verify the
``rkeConfig`` section. It should contain the manifests that you added in the
``Additional Manifests`` section.

If you want to disable the default kube-proxy and your Cilium configuration
enables :ref:`Kube-Proxy Replacement`, check the
``spec.rkeConfig.machineGlobalConfig`` section and set
``spec.rkeConfig.machineGlobalConfig.disable-kube-proxy`` to ``true``.

.. image:: images/rancher_config_yaml.png

When you are ready, click ``Create`` and Rancher will create the cluster.

.. image:: images/rancher_cluster_state_provisioning.png

The cluster will stay in ``Updating`` state until you add nodes.
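For orientation, the ``rkeConfig`` portion of the ``Cluster`` resource that
you verified in the ``Edit as YAML`` step might look roughly like the
following sketch. The cluster name is hypothetical, unrelated fields are
omitted, and ``cni: none`` is assumed to reflect the ``Container Network:
none`` choice made earlier:

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: my-custom-cluster   # hypothetical name
spec:
  rkeConfig:
    machineGlobalConfig:
      cni: none
      # only set this when your Cilium values enable kubeProxyReplacement:
      disable-kube-proxy: true
```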
Click on the cluster. In the ``Registration`` tab you should see the
generated ``Registration command`` you need to run on the downstream cluster
nodes. Do not forget to select the correct node roles. Rancher defaults to
deploying all three roles (``etcd``, ``Control Plane``, and ``Worker``),
which is often not what you want for multi-node clusters.

.. image:: images/rancher_registration_command.png

A few seconds after you add at least a single node, you should see the new
node(s) in the ``Machines`` tab. Cilium CNI will be installed during the
cluster bootstrap process by the Helm Operator, which creates a Kubernetes
Job that installs Cilium on the cluster. After a few minutes, you should see
that the nodes changed to the ``Ready`` status:

.. code-block:: shell-session

    kubectl get nodes -A
    NAME            STATUS   ROLES                              AGE   VERSION
    ip-10-1-1-167   Ready    control-plane,etcd,master,worker   41m   v1.32.6+rke2r1
    ip-10-1-1-231   Ready    control-plane,etcd,master,worker   41m   v1.32.6+rke2r1
    ip-10-1-1-50    Ready    control-plane,etcd,master,worker   45m   v1.32.6+rke2r1

Back in the Rancher UI, you should see that the cluster changed to the
healthy ``Active`` status:

.. image:: images/rancher_cluster_created.png

That's it! You can now work with this cluster as if you had installed the CNI
using the default Rancher method. You can scale the cluster up or down, add
or remove nodes, and so on.

Verify Cilium Installation
==========================

After the installation, the Cilium repository and Helm release will be
tracked by Rancher, and you can manage the Cilium lifecycle using the
Rancher UI.

To verify that Cilium is installed, check the Cilium app in the Rancher UI.
Navigate to ```` -> ``Apps`` -> ``Installed Apps``. From the top drop-down
menu, select ``All Namespaces`` or ``Project: System -> kube-system`` to see
the Cilium app.

.. image:: images/rancher_cluster_cilium_app.png

The Cilium Helm repository has been added to Rancher within the
``Additional Manifests`` section.

.. image:: images/rancher_cilium_repo.png

Once a new Cilium version is released, you will see a small hint on this app
entry. You can then upgrade directly via the Rancher UI.

.. image:: images/rancher_cluster_cilium_app_upgrade.png

.. image:: images/rancher_cluster_cilium_app_upgrade_versions.png
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

******
Calico
******

This guide explains how to install Cilium in chaining configuration on top of
`Calico`_.

.. include:: cni-chaining-limitations.rst

Create a CNI configuration
==========================

Create a ``chaining.yaml`` file based on the following template to specify
the desired CNI chaining configuration:

.. code-block:: yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cni-configuration
      namespace: kube-system
    data:
      cni-config: |-
        {
          "name": "generic-veth",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "calico",
              "log_level": "info",
              "datastore_type": "kubernetes",
              "mtu": 1440,
              "ipam": {
                  "type": "calico-ipam"
              },
              "policy": {
                  "type": "k8s"
              },
              "kubernetes": {
                  "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
              }
            },
            {
              "type": "portmap",
              "snat": true,
              "capabilities": {"portMappings": true}
            },
            {
              "type": "cilium-cni"
            }
          ]
        }

Deploy the :term:`ConfigMap`:

.. code-block:: shell-session

    kubectl apply -f chaining.yaml

Deploy Cilium with the portmap plugin enabled
=============================================

.. include:: k8s-install-download-release.rst

Deploy the Cilium release via Helm:

.. cilium-helm-install::
    :namespace: kube-system
    :set: cni.chainingMode=generic-veth cni.customConf=true cni.configMap=cni-configuration routingMode=native enableIPv4Masquerade=false enableIdentityMark=false

.. note::

    The new CNI chaining configuration will *not* apply to any pod that is
    already running in the cluster. Existing pods will be reachable, and
    Cilium will load-balance to them, but policy enforcement will not apply
    to them, and load-balancing is not performed for traffic originating from
    existing pods. You must restart these pods in order to invoke the
    chaining configuration on them.

.. include:: k8s-install-validate.rst

.. include:: next-steps.rst
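One detail worth double-checking in a chaining configuration like the one
above is plugin order: CNI plugins run in the order they appear, and
``cilium-cni`` is the last entry of the chain so that it runs after the
plugins that set up the interface. The following standalone sketch (not part
of the guide's required steps; the Calico plugin options are elided) parses
such a configuration and validates the ordering:

```python
import json

# Abbreviated version of the chaining config from the ConfigMap above.
cni_config = json.loads("""
{
  "name": "generic-veth",
  "cniVersion": "0.3.1",
  "plugins": [
    {"type": "calico"},
    {"type": "portmap", "snat": true, "capabilities": {"portMappings": true}},
    {"type": "cilium-cni"}
  ]
}
""")

# Plugins execute in list order; cilium-cni must terminate the chain.
plugin_types = [p["type"] for p in cni_config["plugins"]]
assert plugin_types[-1] == "cilium-cni", "cilium-cni must be last in the chain"
print(plugin_types)  # ['calico', 'portmap', 'cilium-cni']
```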
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _proxy_visibility:

***************************
Layer 7 Protocol Visibility
***************************

.. note::

    This feature requires enabling L7 Proxy support.

While :ref:`monitor` provides introspection into datapath state, by default
it only provides visibility into L3/L4 packet events. If you want L7
protocol visibility, you can use L7 Cilium Network Policies (see
:ref:`l7_policy`).

To enable visibility for L7 traffic, create a ``CiliumNetworkPolicy`` that
specifies L7 rules. Traffic flows matching an L7 rule in a
``CiliumNetworkPolicy`` will become visible to Cilium and, thus, can be
exposed to the end user. It's important to remember that L7 network policies
not only enable visibility but also restrict what traffic is allowed to flow
in and out of a Pod.

The following example enables visibility for DNS (TCP/UDP/53) and HTTP
(ports TCP/80 and TCP/8080) traffic within the ``default`` namespace by
specifying two L7 rules -- one for DNS and one for HTTP. It also restricts
egress communication and drops anything that is not matched. L7 matching
conditions on the rules have been omitted or wildcarded, which will permit
all requests that match the L4 section of each rule:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "l7-visibility"
    spec:
      endpointSelector:
        matchLabels:
          "k8s:io.kubernetes.pod.namespace": default
      egress:
        - toPorts:
            - ports:
                - port: "53"
                  protocol: ANY
              rules:
                dns:
                  - matchPattern: "*"
        - toEndpoints:
            - matchLabels:
                "k8s:io.kubernetes.pod.namespace": default
          toPorts:
            - ports:
                - port: "80"
                  protocol: TCP
                - port: "8080"
                  protocol: TCP
              rules:
                http: [{}]

Based on the above policy, Cilium will pick up all TCP/UDP/53, TCP/80 and
TCP/8080 egress traffic from Pods in the ``default`` namespace and redirect
it to the proxy (see :ref:`proxy_injection`) such that the output of
``cilium monitor`` or ``hubble observe`` shows the L7 flow details. Below is
example output of the ``hubble observe -f -t l7 -o compact`` command:

::

    default/testapp-5b9cc645cb-4slbs:45240 (ID:26450) -> kube-system/coredns-787d4945fb-bdmdq:53 (ID:9313) dns-request proxy FORWARDED (DNS Query web.default.svc.cluster.local. A)
    default/testapp-5b9cc645cb-4slbs:45240 (ID:26450) <- kube-system/coredns-787d4945fb-bdmdq:53 (ID:9313) dns-response proxy FORWARDED (DNS Answer "10.96.118.37" TTL: 30 (Proxy web.default.svc.cluster.local. A))
    default/testapp-5b9cc645cb-4slbs:33044 (ID:26450) -> default/echo-594485b8dc-fp57l:8080 (ID:32531) http-request FORWARDED (HTTP/1.1 GET http://web/)
    default/testapp-5b9cc645cb-4slbs:33044 (ID:26450) <- default/echo-594485b8dc-fp57l:8080 (ID:32531) http-response FORWARDED (HTTP/1.1 200 4ms (GET http://web/))

Security Implications
---------------------

Monitoring Layer 7 traffic involves security considerations for handling
potentially sensitive information, such as usernames, passwords, query
parameters, API keys, and others.

.. warning::

    By default, Hubble does not redact potentially sensitive information
    present in `Layer 7 Hubble Flows`_.

To harden security, Cilium provides the ``--hubble-redact-enabled`` option
which enables Hubble to handle sensitive information present in Layer 7
flows. More specifically, it offers the following features for supported
Layer 7 protocols:

* For HTTP: redacting URL query (GET) parameters
  (``--hubble-redact-http-urlquery``)
* For HTTP: redacting URL user info (for example, password used in basic
  auth) (``--hubble-redact-http-userinfo``)
* For HTTP headers: redacting all headers except those defined in the
  ``--hubble-redact-http-headers-allow`` list, or redacting only the headers
  defined in the ``--hubble-redact-http-headers-deny`` list

For more information on configuring Cilium, see :ref:`Cilium Configuration`.

Limitations
-----------

* DNS visibility is available on egress only.
* L7 policies for SNATed IPv6 traffic (e.g., pod-to-world) require a kernel
  with the `fix`__ applied. The stable kernel versions with the fix are
  6.14.1, 6.12.22, 6.6.86, 6.1.133, 5.15.180, and 5.10.236. See
  :gh-issue:`37932` for reference.
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _metrics:

********************
Monitoring & Metrics
********************

Cilium and Hubble can both be configured to serve `Prometheus`_ metrics.
Prometheus is a pluggable metrics collection and storage system and can act
as a data source for `Grafana`_, a metrics visualization frontend. Unlike
some metrics collectors like statsd, Prometheus requires the collectors to
pull metrics from each source.

Cilium and Hubble metrics can be enabled independently of each other.

Cilium Metrics
==============

Cilium metrics provide insights into the state of Cilium itself, namely of
the ``cilium-agent``, ``cilium-envoy``, and ``cilium-operator`` processes.
To run Cilium with Prometheus metrics enabled, deploy it with the
``prometheus.enabled=true`` Helm value set.

Cilium metrics are exported under the ``cilium_`` Prometheus namespace.
Envoy metrics are exported under the ``envoy_`` Prometheus namespace, of
which the Cilium-defined metrics are exported under the ``envoy_cilium_``
namespace. When running and collecting in Kubernetes, they will be tagged
with a pod name and namespace.

Installation
------------

You can enable metrics for ``cilium-agent`` (including Envoy) with the Helm
value ``prometheus.enabled=true``. ``cilium-operator`` metrics are enabled
by default; if you want to disable them, set the Helm value
``operator.prometheus.enabled=false``.

.. cilium-helm-install::
    :namespace: kube-system
    :set: prometheus.enabled=true operator.prometheus.enabled=true

Cilium Metrics Scraping
-----------------------

Prometheus Port Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ports can be configured via ``prometheus.port``,
``envoy.prometheus.port``, or ``operator.prometheus.port`` respectively.

When metrics are enabled and ServiceMonitor is not enabled
(``hubble.metrics.serviceMonitor.enabled: false``), all Cilium components
will have the following annotations. These annotations can be used to signal
Prometheus whether to scrape metrics. If ServiceMonitor is enabled
(``hubble.metrics.serviceMonitor.enabled: true``), these annotations are
omitted and Prometheus discovers metrics via the ServiceMonitor resource.

.. code-block:: yaml

    prometheus.io/scrape: true
    prometheus.io/port: 9962

To collect Envoy metrics, the Cilium chart will create a Kubernetes headless
service named ``cilium-agent`` with the ``prometheus.io/scrape:'true'``
annotation set:

.. code-block:: yaml

    prometheus.io/scrape: true
    prometheus.io/port: 9964

This additional headless service is needed because each component can only
have one Prometheus scrape and port annotation.

Prometheus will pick up the Cilium and Envoy metrics automatically if the
following option is set in the ``scrape_configs`` section:

.. code-block:: yaml

    scrape_configs:
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: ${1}:${2}
            target_label: __address__

Prometheus Operator ServiceMonitor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can automatically create a `Prometheus Operator`__ ``ServiceMonitor`` by
setting ``prometheus.serviceMonitor.enabled=true``,
``envoy.prometheus.serviceMonitor.enabled=true``, or
``operator.prometheus.serviceMonitor.enabled=true`` respectively.
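To see what the ``relabel_configs`` replace rule shown earlier actually does,
the following standalone sketch applies the same regex and replacement to
build the scrape address from the pod's ``__address__`` and its
``prometheus.io/port`` annotation. This is illustrative only (Python ``re``
stands in for Prometheus' RE2 matching):

```python
import re

# Same regex/replacement as in the relabel rule above: drop any port already
# present in __address__ and append the port from the annotation.
REGEX = r"([^:]+)(?::\d+)?;(\d+)"

def scrape_address(pod_address: str, annotation_port: str) -> str:
    # Prometheus joins the source_labels values with ';' before matching.
    joined = f"{pod_address};{annotation_port}"
    return re.sub(REGEX, r"\1:\2", joined)

print(scrape_address("10.0.0.5", "9962"))       # 10.0.0.5:9962
print(scrape_address("10.0.0.5:4240", "9962"))  # 10.0.0.5:9962
```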
.. _hubble_metrics:

Hubble Metrics
==============

While Cilium metrics allow you to monitor the state of Cilium itself, Hubble
metrics allow you to monitor the network behavior of your Cilium-managed
Kubernetes pods with respect to connectivity and security.

Some of the metrics can also be configured with additional options. See the
:ref:`Hubble exported metrics` section for the full list of available
metrics and their options.

Static or dynamic exporter
--------------------------

Hubble metrics can be configured with either a static or a dynamic exporter.
The dynamic metrics exporter allows you to change the defined metrics as
needed without requiring an agent restart.

Installation with a static metrics exporter
-------------------------------------------

To deploy Cilium with the Hubble metrics static exporter enabled, you need
to enable Hubble with ``hubble.enabled=true`` and provide a set of Hubble
metrics you want to enable via ``hubble.metrics.enabled``.

.. cilium-helm-install::
    :namespace: kube-system
    :set: prometheus.enabled=true operator.prometheus.enabled=true hubble.enabled=true hubble.metrics.enableOpenMetrics=true hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\,source_namespace\,source_workload\,destination_ip\,destination_namespace\,destination_workload\,traffic_direction}"

Installation with a dynamic metrics exporter
--------------------------------------------

To deploy Cilium with Hubble dynamic metrics enabled, you need to enable
Hubble with ``hubble.enabled=true`` and
``hubble.metrics.dynamic.enabled=true``. In this example, a ``ConfigMap``
with a set of metrics will be applied before enabling the
exporter, but the desired set of metrics (together with the ``ConfigMap``)
can also be created during installation. See the :ref:`helm_reference` (keys
with ``hubble.metrics.dynamic.*``).

.. code-block:: yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cilium-dynamic-metrics-config
      namespace: kube-system
    data:
      dynamic-metrics.yaml: |
        metrics:
          - name: dns
          - contextOptions:
              - name: sourceContext
                values:
                  - workload-name
                  - reserved-identity
              - name: destinationContext
                values:
                  - workload-name
                  - reserved-identity
            name: flow
          - name: drop
          - name: tcp
          - contextOptions:
              - name: sourceContext
                values:
                  - workload-name
                  - reserved-identity
            name: icmp
          - contextOptions:
              - name: exemplars
                values:
                  - true
              - name: labelsContext
                values:
                  - source_ip
                  - source_namespace
                  - source_workload
                  - destination_ip
                  - destination_namespace
                  - destination_workload
                  - traffic_direction
              - name: sourceContext
                values:
                  - workload-name
                  - reserved-identity
              - name: destinationContext
                values:
                  - workload-name
                  - reserved-identity
            name: httpV2
          - contextOptions:
              - name: sourceContext
                values:
                  - app
                  - workload-name
                  - pod
                  - reserved-identity
              - name: destinationContext
                values:
                  - app
                  - workload-name
                  - pod
                  - dns
                  - reserved-identity
              - name: labelsContext
                values:
                  - source_namespace
                  - destination_namespace
            excludeFilters:
              - destination_pod:
                  - default/
            name: policy

Deploy the :term:`ConfigMap`:

.. code-block:: shell-session

    kubectl apply -f dynamic-metrics.yaml

.. cilium-helm-install::
    :namespace: kube-system
    :set: prometheus.enabled=true operator.prometheus.enabled=true hubble.enabled=true hubble.metrics.enableOpenMetrics=true hubble.metrics.enabled=[] hubble.metrics.dynamic.enabled=true hubble.metrics.dynamic.config.configMapName=cilium-dynamic-metrics-config hubble.metrics.dynamic.config.createConfigMap=false

Hubble Metrics Scraping
-----------------------

Prometheus Port Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The port of the Hubble metrics can be configured with the
``hubble.metrics.port`` Helm value. For details on enabling Hubble metrics
with TLS, see the :ref:`hubble_configure_metrics_tls` section of the
documentation.

.. note::

    L7 metrics, such as HTTP, are only emitted for pods that enable
    :ref:`Layer 7 Protocol Visibility`.

When deployed with a non-empty ``hubble.metrics.enabled`` Helm value, the
Cilium chart will create a Kubernetes headless service named
``hubble-metrics`` with the ``prometheus.io/scrape:'true'`` annotation set:

.. code-block:: yaml

    prometheus.io/scrape: true
    prometheus.io/port: 9965

Set the following options in the ``scrape_configs`` section of Prometheus to
have it scrape all Hubble metrics from the endpoints automatically:

.. code-block:: yaml

    scrape_configs:
      - job_name: 'kubernetes-endpoints'
        scrape_interval: 30s
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
            action: replace
            target_label: __address__
            regex: (.+)(?::\d+);(\d+)
            replacement: $1:$2

Prometheus Operator ServiceMonitor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can automatically create a `Prometheus Operator`__ ``ServiceMonitor`` by
setting ``hubble.metrics.serviceMonitor.enabled=true``.

.. _hubble_open_metrics:

OpenMetrics
-----------

Additionally, you can opt in to `OpenMetrics`_ by setting
``hubble.metrics.enableOpenMetrics=true``. Enabling OpenMetrics configures
the Hubble metrics endpoint to support exporting metrics in OpenMetrics
format when explicitly requested by clients. Using OpenMetrics supports
additional functionality such as Exemplars, which enables associating
metrics with traces by embedding trace IDs into the exported metrics.

Prometheus needs to be configured to take advantage of OpenMetrics and will
only scrape exemplars when the `exemplars storage feature is enabled`_.

OpenMetrics imposes a few additional requirements on metrics names and
labels, so this functionality is currently opt-in, though we believe all of
the Hubble metrics conform to the OpenMetrics requirements.

.. _clustermesh_apiserver_metrics:

Cluster Mesh API Server Metrics
===============================

Cluster Mesh API Server metrics provide insights into the state of the
``clustermesh-apiserver`` process, the ``kvstoremesh`` process (if enabled),
and the sidecar etcd instance. Cluster Mesh API Server metrics are exported
under the ``cilium_clustermesh_apiserver_`` Prometheus namespace.
KVStoreMesh metrics are exported under the ``cilium_kvstoremesh_``
Prometheus namespace. Etcd metrics are exported under the ``etcd_``
Prometheus namespace.

Installation
------------

You can enable the metrics for the different Cluster Mesh API Server
components by setting the following values:

* clustermesh-apiserver: ``clustermesh.apiserver.metrics.enabled=true``
* kvstoremesh: ``clustermesh.apiserver.metrics.kvstoremesh.enabled=true``
* sidecar etcd instance: ``clustermesh.apiserver.metrics.etcd.enabled=true``

.. cilium-helm-install::
    :namespace: kube-system
    :set: clustermesh.useAPIServer=true clustermesh.apiserver.metrics.enabled=true clustermesh.apiserver.metrics.kvstoremesh.enabled=true clustermesh.apiserver.metrics.etcd.enabled=true

You can configure the ports via ``clustermesh.apiserver.metrics.port``,
``clustermesh.apiserver.metrics.kvstoremesh.port`` and
``clustermesh.apiserver.metrics.etcd.port`` respectively.

You can automatically create a `Prometheus Operator`_ ``ServiceMonitor`` by
setting ``clustermesh.apiserver.metrics.serviceMonitor.enabled=true``.

Example Prometheus & Grafana Deployment
=======================================

If you don't have an existing Prometheus and Grafana stack running, you can
deploy a stack with:

.. parsed-literal::

    kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/addons/prometheus/monitoring-example.yaml

It will run Prometheus and Grafana in the ``cilium-monitoring`` namespace.
If you have enabled either Cilium or Hubble metrics, they will automatically
be scraped by Prometheus. You can then expose Grafana to access it via your
browser.

.. code-block:: shell-session

    kubectl -n cilium-monitoring port-forward service/grafana --address 0.0.0.0 --address :: 3000:3000

Open your browser and access http://localhost:3000/

Metrics Reference
=================

cilium-agent
------------

Configuration
^^^^^^^^^^^^^

To expose any metrics, invoke ``cilium-agent`` with the
``--prometheus-serve-addr`` option. This option takes an ``IP:Port`` pair,
and passing an empty IP (e.g. ``:9962``) will bind the server to all
available interfaces (there is usually only one in a container).

To customize ``cilium-agent`` metrics, configure the ``--metrics`` option
with ``"+metric_a -metric_b -metric_c"``, where ``+/-`` means to
enable/disable the metric. For example, for really large clusters, users may
consider disabling the following two metrics as they generate too much data:

- ``cilium_node_connectivity_status``
- ``cilium_node_connectivity_latency_seconds``

You can then configure the agent with
``--metrics="-cilium_node_connectivity_status -cilium_node_connectivity_latency_seconds"``.

Feature Metrics
~~~~~~~~~~~~~~~

Cilium feature metrics are exported under the ``cilium_feature`` Prometheus
namespace. The following tables categorize feature metrics into four groups:

- **Advanced Connectivity and Load Balancing**
  (:ref:`cilium-feature-adv-connect-and-lb`)

  This category includes features related to advanced networking and load
  balancing capabilities, such as Bandwidth Manager, BGP, Envoy Proxy, and
  Cluster Mesh.

- **Control Plane** (:ref:`cilium-feature-controlplane`)

  These metrics track control plane configurations, including identity
  allocation modes and IP address management (IPAM).

- **Datapath** (:ref:`cilium-feature-datapath`)

  Metrics in this group monitor datapath configurations, such as Internet
  protocol modes, chaining modes, and network modes.

- **Network Policies** (:ref:`cilium-feature-network-policies`)

  This group encompasses metrics related to policy enforcement, including
  Cilium Network Policies, Host Firewall, DNS policies, and Mutual Auth.

For example, to check if the Bandwidth Manager is enabled on a Cilium agent,
observe the metric
``cilium_feature_adv_connect_and_lb_bandwidth_manager_enabled``. All metrics
follow the format ``cilium_feature`` + group name + metric name.
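The naming scheme just described can be expressed as a trivial sketch; the
group and metric names below are taken from the Bandwidth Manager example
above:

```python
# Feature metric names are assembled as "cilium_feature" + group + metric,
# joined with underscores.
def feature_metric_name(group: str, metric: str) -> str:
    return "_".join(["cilium_feature", group, metric])

name = feature_metric_name("adv_connect_and_lb", "bandwidth_manager_enabled")
print(name)  # cilium_feature_adv_connect_and_lb_bandwidth_manager_enabled
```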
A value of ``0`` indicates that the feature is disabled, while ``1``
indicates it is enabled.

.. note::

    For metrics of type "counter", the agent has processed the associated
    object (e.g., a network policy) but might not be actively enforcing it.
    These metrics serve to observe if the object has been received and
    processed, but not necessarily enforced by the agent.

.. include:: feature-metrics-agent.txt

Exported Metrics
^^^^^^^^^^^^^^^^

Endpoint
~~~~~~~~

============================================ ====================== ======== =========================================================
Name                                         Labels                 Default  Description
============================================ ====================== ======== =========================================================
``endpoint``                                                        Enabled  Number of endpoints managed by this agent
``endpoint_restoration_endpoints``           ``phase``, ``outcome`` Enabled  Number of restored endpoints labeled by phase and outcome
``endpoint_restoration_duration_seconds``    ``phase``              Enabled  Duration of restoration phases in seconds
``endpoint_regenerations_total``             ``outcome``            Enabled  Count of all endpoint regenerations that have completed
``endpoint_regeneration_time_stats_seconds`` ``scope``              Enabled  Endpoint regeneration time stats
``endpoint_state``                           ``state``              Enabled  Count of all endpoints
============================================ ====================== ======== =========================================================

Services
~~~~~~~~

================================ ========== ======== ======================================================
Name                             Labels     Default  Description
================================ ========== ======== ======================================================
``services_events_total``                   Enabled  Number of services events labeled by action type
``service_implementation_delay`` ``action`` Enabled  Duration in seconds to propagate the data plane
                                                     programming of a service, its network and endpoints
                                                     from the time the service or the service
~~~~~~~~

================================ ========== ======== ========================================================
Name                             Labels     Default  Description
================================ ========== ======== ========================================================
``services_events_total``                   Enabled  Number of services events labeled by action type
``service_implementation_delay`` ``action`` Enabled  Duration in seconds to propagate the data plane programming of a service, its network and endpoints from the time the service or the service pod was changed, excluding the event queue latency
================================ ========== ======== ========================================================

Cluster health
~~~~~~~~~~~~~~

================================ ====== ======== ========================================================
Name                             Labels Default  Description
================================ ====== ======== ========================================================
``unreachable_nodes``                   Enabled  Number of nodes that cannot be reached
``unreachable_health_endpoints``        Enabled  Number of health endpoints that cannot be reached
================================ ====== ======== ========================================================

Node Connectivity
~~~~~~~~~~~~~~~~~

============================================= ======================================== ======== ========================================================
Name                                          Labels                                   Default  Description
============================================= ======================================== ======== ========================================================
``node_health_connectivity_status``           ``type``, ``status``                     Enabled  Number of endpoints with last observed status of both ICMP and HTTP connectivity between the current Cilium agent and other Cilium nodes
``node_health_connectivity_latency_seconds``  ``type``, ``address_type``, ``protocol`` Enabled  Histogram of the last observed latency between the current Cilium agent and other Cilium nodes in seconds
============================================= ======================================== ======== ========================================================

Clustermesh
~~~~~~~~~~~

================================================ ================== ======== ========================================================
Name                                             Labels             Default  Description
================================================ ================== ======== ========================================================
``clustermesh_remote_cluster_services``          ``target_cluster`` Enabled  The total number of services per remote cluster
``clustermesh_remote_cluster_endpoints``         ``target_cluster`` Enabled  The total number of endpoints per remote cluster
``clustermesh_remote_cluster_nodes``             ``target_cluster`` Enabled  The total number of nodes per remote cluster
``clustermesh_remote_clusters``                                     Enabled  The total number of remote clusters meshed with the local cluster
``clustermesh_remote_cluster_failures``          ``target_cluster`` Enabled  The total number of failures related to the remote cluster
``clustermesh_remote_cluster_last_failure_ts``   ``target_cluster`` Enabled  The timestamp of the last failure of the remote cluster
``clustermesh_remote_cluster_readiness_status``  ``target_cluster`` Enabled  The readiness status of the remote cluster
``clustermesh_remote_cluster_cache_revocations`` ``target_cluster`` Enabled  The total number of cache revocations related to the remote cluster
================================================ ================== ======== ========================================================

Datapath
~~~~~~~~

============================================= ============================== ======== ========================================================
Name                                          Labels                         Default  Description
============================================= ============================== ======== ========================================================
``datapath_conntrack_dump_resets_total``      ``area``, ``name``, ``family`` Enabled  Number of conntrack dump resets. Happens when a BPF entry gets removed while a dump of the map is in progress.
``datapath_conntrack_gc_runs_total``          ``status``                     Enabled  Number of times that the conntrack garbage collector process was run
``datapath_conntrack_gc_key_fallbacks_total``                                Enabled  The number of alive and deleted conntrack entries at the end of a garbage collector run labeled by datapath family
``datapath_conntrack_gc_entries``             ``family``                     Enabled  The number of alive and deleted conntrack entries at the end of a garbage collector run
``datapath_conntrack_gc_duration_seconds``    ``status``                     Enabled  Duration in seconds of the garbage collector process
============================================= ============================== ======== ========================================================

IPsec
~~~~~

======================= =================== ======== ========================================================
Name                    Labels              Default  Description
======================= =================== ======== ========================================================
``ipsec_xfrm_error``    ``error``, ``type`` Enabled  Total number of xfrm errors
``ipsec_keys``                              Enabled  Number of keys in use
``ipsec_xfrm_states``   ``direction``       Enabled  Number of XFRM states
``ipsec_xfrm_policies`` ``direction``       Enabled  Number of XFRM policies
======================= =================== ======== ========================================================

eBPF
~~~~

====================================== ======================================== ======== ========================================================
Name                                   Labels                                   Default  Description
====================================== ======================================== ======== ========================================================
``bpf_syscall_duration_seconds``       ``operation``, ``outcome``               Disabled Duration of eBPF system call performed
``bpf_map_ops_total``                  ``map_name``, ``operation``, ``outcome`` Enabled  Number of eBPF map operations performed
``bpf_map_pressure``                   ``map_name``                             Enabled  Map pressure is defined as a ratio of the required map size compared to its configured size. Values < 1.0 indicate the map's utilization, while values >= 1.0 indicate that the map is full. Policy map pressure metrics are emitted only when map utilization exceeds the threshold set by the ``policyMapPressureMetricsThreshold`` Helm value, which defaults to 0.1 (10% full).
``bpf_map_capacity``                   ``map_group``                            Enabled  Maximum size of eBPF maps by group of maps (type of map that have the same max capacity size). Map types with size of 65536 are not emitted; missing map types can be assumed to be 65536.
``bpf_maps_virtual_memory_max_bytes``                                           Enabled  Max memory used by eBPF maps installed in the system
``bpf_progs_virtual_memory_max_bytes``                                          Enabled  Max memory used by eBPF programs installed in the system
``bpf_ratelimit_dropped_total``        ``usage``                                Enabled  Total drops resulting from BPF ratelimiter, tagged by source of drop
====================================== ======================================== ======== ========================================================
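The ``bpf_map_pressure`` definition above is a simple ratio; a minimal sketch of the arithmetic and its reporting threshold (the helper names are illustrative, not Cilium's implementation):

```python
def map_pressure(entries_in_use: int, max_entries: int) -> float:
    """Pressure = required map size / configured map size.

    Values below 1.0 are plain utilization; values >= 1.0 mean the map is full.
    """
    if max_entries <= 0:
        raise ValueError("max_entries must be positive")
    return entries_in_use / max_entries


def should_report_policy_map(pressure: float, threshold: float = 0.1) -> bool:
    """Policy map pressure is emitted only above the threshold (default 0.1, i.e. 10% full)."""
    return pressure > threshold


print(map_pressure(512, 1024))         # -> 0.5
print(should_report_policy_map(0.05))  # -> False
print(should_report_policy_map(1.25))  # -> True: map is over capacity
```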
Both ``bpf_maps_virtual_memory_max_bytes`` and ``bpf_progs_virtual_memory_max_bytes`` currently report the system-wide memory usage of eBPF, including memory that is not directly managed by Cilium. This might change in the future, so that only the eBPF memory usage directly managed by Cilium is reported.

Drops/Forwards (L3/L4)
~~~~~~~~~~~~~~~~~~~~~~

=========================== ========================= ======== ========================================================
Name                        Labels                    Default  Description
=========================== ========================= ======== ========================================================
``drop_count_total``        ``reason``, ``direction`` Enabled  Total dropped packets
``drop_bytes_total``        ``reason``, ``direction`` Enabled  Total dropped bytes
``forward_count_total``     ``direction``             Enabled  Total forwarded packets
``forward_bytes_total``     ``direction``             Enabled  Total forwarded bytes
``mtu_error_message_total`` ``direction``             Enabled  Total number of ICMP fragmentation-needed or ICMPv6 packet-too-big messages processed
``fragmented_count_total``  ``direction``             Enabled  Total number of fragmented packets processed
=========================== ========================= ======== ========================================================

Policy
~~~~~~

====================================== ========== ======== ========================================================
Name                                   Labels     Default  Description
====================================== ========== ======== ========================================================
``policy``                                        Enabled  Number of policies currently loaded
``policy_max_revision``                           Enabled  Highest policy revision number in the agent
``policy_change_total``                           Enabled  Number of policy changes by outcome
``policy_endpoint_enforcement_status``            Enabled  Number of endpoints labeled by policy enforcement status
``policy_implementation_delay``        ``source`` Enabled  Time in seconds between a policy change and it being fully deployed into the datapath, labeled by the policy's source
``policy_selector_match_count_max``    ``class``  Enabled  The maximum number of identities selected by a network policy selector
``policy_incremental_update_duration`` ``scope``  Enabled  The time taken for newly learned identities to be added to the policy system, including BPF policy maps and L7 proxies
====================================== ========== ======== ========================================================

Policy L7 (HTTP/Kafka/FQDN)
~~~~~~~~~~~~~~~~~~~~~~~~~~~

======================================= ===================================== ======== ========================================================
Name                                    Labels                                Default  Description
======================================= ===================================== ======== ========================================================
``proxy_redirects``                     ``protocol``                          Enabled  Number of redirects installed for endpoints
``proxy_upstream_reply_seconds``        ``error``, ``protocol_l7``, ``scope`` Enabled  Seconds waited for upstream server to reply to a request
``proxy_datapath_update_timeout_total``                                       Disabled Number of total datapath update timeouts due to FQDN IP updates
``policy_l7_total``                     ``rule``, ``proxy_type``              Enabled  Number of total L7 requests/responses
======================================= ===================================== ======== ========================================================

Identity
~~~~~~~~

======================================== ============================== ======== ========================================================
Name                                     Labels                         Default  Description
======================================== ============================== ======== ========================================================
``identity``                             ``type``                       Enabled  Number of identities currently allocated
``identity_label_sources``               ``source``                     Enabled  Number of identities which contain at least one label from the given label source
``identity_gc_entries``                  ``identity_type``              Enabled  Number of alive and deleted identities at the end of a garbage collector run
``identity_gc_runs``                     ``outcome``, ``identity_type`` Enabled  Number of times identity garbage collector has run
``identity_gc_latency``                  ``outcome``, ``identity_type`` Enabled  Duration of the last successful identity GC run
``ipcache_errors_total``                 ``type``, ``error``            Enabled  Number of errors interacting with the ipcache
``ipcache_events_total``                 ``type``                       Enabled  Number of events interacting with the ipcache
``identity_cache_timer_duration``        ``name``                       Enabled  Seconds required to execute periodic policy processes. ``name="id-alloc-update-policy-maps"`` is the time taken to apply incremental updates to the BPF policy maps.
``identity_cache_timer_trigger_latency`` ``name``                       Enabled  Seconds spent waiting for a previous process to finish before starting the next round. ``name="id-alloc-update-policy-maps"`` is the time waiting before applying incremental updates to the BPF policy maps.
``identity_cache_timer_trigger_folds``   ``name``                       Enabled  Number of timer triggers that were coalesced into one execution. ``name="id-alloc-update-policy-maps"`` applies the incremental updates to the BPF policy maps.
======================================== ============================== ======== ========================================================

Events external to Cilium
~~~~~~~~~~~~~~~~~~~~~~~~~

========================= ========== ======== ========================================================
Name                      Labels     Default  Description
========================= ========== ======== ========================================================
``event_ts``              ``source`` Enabled  Last timestamp when Cilium received an event from a control plane source, per resource and per action
``k8s_event_lag_seconds`` ``source`` Disabled Lag for Kubernetes events - computed value between receiving a CNI ADD event from kubelet and a Pod event received from kube-api-server
========================= ========== ======== ========================================================
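The ``identity_cache_timer_trigger_folds`` metric above counts trigger requests that were coalesced into a single execution; the folding behaviour can be sketched as follows (illustrative only, not Cilium's implementation):

```python
class FoldingTrigger:
    """Coalesce trigger requests: triggers that arrive while a run is already
    pending fold into that run instead of scheduling another one."""

    def __init__(self) -> None:
        self.pending = False
        self.folds = 0   # what a *_trigger_folds-style counter would report
        self.runs = 0

    def trigger(self) -> None:
        if self.pending:
            self.folds += 1      # coalesced into the pending execution
        else:
            self.pending = True  # schedule a new execution

    def run_once(self) -> None:
        if self.pending:
            self.pending = False
            self.runs += 1       # e.g. apply incremental updates to policy maps


t = FoldingTrigger()
for _ in range(5):  # five triggers arrive before the worker gets to run
    t.trigger()
t.run_once()
print(t.runs, t.folds)  # -> 1 4
```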
Controllers
~~~~~~~~~~~

===================================== ========================== ======== ========================================================
Name                                  Labels                     Default  Description
===================================== ========================== ======== ========================================================
``controllers_runs_total``            ``status``                 Enabled  Number of times that a controller process was run
``controllers_runs_duration_seconds`` ``status``                 Enabled  Duration in seconds of the controller process
``controllers_group_runs_total``      ``status``, ``group_name`` Enabled  Number of times that a controller process was run, labeled by controller group name
``controllers_failing``                                          Enabled  Number of failing controllers
===================================== ========================== ======== ========================================================

The ``controllers_group_runs_total`` metric reports the success and failure count of each controller within the system, labeled by controller group name and completion status. Due to the large number of controllers, this metric is enabled on a per-controller-group basis. It is configured using an allow-list which is passed as the ``controller-group-metrics`` configuration flag, or the ``prometheus.controllerGroupMetrics`` Helm value. The current recommended default set of group names can be found in the values file of the Cilium Helm chart. The special names "all" and "none" are supported.

SubProcess
~~~~~~~~~~

========================== ============= ======== ========================================================
Name                       Labels        Default  Description
========================== ============= ======== ========================================================
``subprocess_start_total`` ``subsystem`` Enabled  Number of times that Cilium has started a subprocess
========================== ============= ======== ========================================================

Kubernetes
~~~~~~~~~~

========================================== ============================================== ======== ========================================================
Name                                       Labels                                         Default  Description
========================================== ============================================== ======== ========================================================
``kubernetes_events_received_total``       ``scope``, ``action``, ``validity``, ``equal`` Enabled  Number of Kubernetes events received
``kubernetes_events_total``                ``scope``, ``action``, ``outcome``             Enabled  Number of Kubernetes events processed
``k8s_cnp_status_completion_seconds``      ``attempts``, ``outcome``                      Enabled  Duration in seconds in how long it took to complete a CNP status update
``k8s_terminating_endpoints_events_total``                                                Enabled  Number of terminating endpoint events received from Kubernetes
========================================== ============================================== ======== ========================================================

Kubernetes Rest Client
~~~~~~~~~~~~~~~~~~~~~~

============================================ ===================================== ======== ========================================================
Name                                         Labels                                Default  Description
============================================ ===================================== ======== ========================================================
``k8s_client_api_latency_time_seconds``      ``path``, ``method``                  Enabled  Duration of processed API calls labeled by path and method
``k8s_client_rate_limiter_duration_seconds``                                       Enabled  Kubernetes client rate limiter latency in seconds
``k8s_client_api_calls_total``               ``host``, ``method``, ``return_code`` Enabled  Number of API calls made to kube-apiserver labeled by host, method and return code
============================================ ===================================== ======== ========================================================

Kubernetes workqueue
~~~~~~~~~~~~~~~~~~~~

=================================================== ======== ======== ========================================================
Name                                                Labels   Default  Description
=================================================== ======== ======== ========================================================
``k8s_workqueue_depth``                             ``name`` Enabled  Current depth of workqueue
``k8s_workqueue_adds_total``                        ``name`` Enabled  Total number of adds handled by workqueue
``k8s_workqueue_queue_duration_seconds``            ``name`` Enabled  Duration in seconds an item stays in workqueue prior to request
``k8s_workqueue_work_duration_seconds``             ``name`` Enabled  Duration in seconds to process an item from workqueue
``k8s_workqueue_unfinished_work_seconds``           ``name`` Enabled  Duration in seconds of work in progress that hasn't been observed by ``work_duration``. Large values indicate stuck threads. You can deduce the number of stuck threads by observing the rate at which this value increases.
``k8s_workqueue_longest_running_processor_seconds`` ``name`` Enabled  Duration in seconds of the longest running processor for workqueue
``k8s_workqueue_retries_total``                     ``name`` Enabled  Total number of retries handled by workqueue
=================================================== ======== ======== ========================================================

IPAM
~~~~

===================== ========== ======== ========================================================
Name                  Labels     Default  Description
===================== ========== ======== ========================================================
``ipam_capacity``     ``family`` Enabled  Total number of IPs in the IPAM pool labeled by family
``ipam_events_total``            Enabled  Number of IPAM events received labeled by action and datapath family type
``ip_addresses``      ``family`` Enabled  Number of allocated IP addresses
===================== ========== ======== ========================================================

KVstore
~~~~~~~

======================================= ============================================ ======== ========================================================
Name                                    Labels                                       Default  Description
======================================= ============================================ ======== ========================================================
``kvstore_operations_duration_seconds`` ``action``, ``kind``, ``outcome``, ``scope`` Enabled  Duration of kvstore operation
``kvstore_events_queue_seconds``        ``action``, ``scope``                        Enabled  Seconds waited before a received event was queued
``kvstore_quorum_errors_total``         ``error``                                    Enabled  Number of quorum errors
``kvstore_sync_errors_total``           ``scope``, ``source_cluster``                Enabled  Number of times synchronization to the kvstore failed
``kvstore_sync_queue_size``             ``scope``, ``source_cluster``                Enabled  Number of elements queued for synchronization in the kvstore
``kvstore_initial_sync_completed``      ``scope``, ``source_cluster``, ``action``    Enabled  Whether the initial synchronization from/to the kvstore has completed
======================================= ============================================ ======== ========================================================

Agent
~~~~~

============================ ====================== ======== ========================================================
Name                         Labels                 Default  Description
============================ ====================== ======== ========================================================
``agent_bootstrap_seconds``  ``scope``, ``outcome`` Enabled  Deprecated, will be removed in Cilium 1.20 - use ``cilium_hive_jobs_oneshot_last_run_duration_seconds`` of the respective job instead. Duration of various bootstrap phases.
``api_process_time_seconds``                        Enabled  Processing time of all the API calls made to the cilium-agent, labeled by API method, API path and returned HTTP code
============================ ====================== ======== ========================================================

FQDN
~~~~

================================= ============ ======== ========================================================
Name                              Labels       Default  Description
================================= ============ ======== ========================================================
``fqdn_gc_deletions_total``                    Enabled  Number of FQDNs that have been cleaned by the FQDN garbage collector job
``fqdn_active_names``             ``endpoint`` Disabled Number of domains inside the DNS cache that have not expired (by TTL), per endpoint
``fqdn_active_ips``               ``endpoint`` Disabled Number of IPs inside the DNS cache associated with a domain that has not expired (by TTL), per endpoint
``fqdn_alive_zombie_connections`` ``endpoint`` Disabled Number of IPs associated with domains that have expired (by TTL) yet are still associated with an active connection (aka zombie), per endpoint
``fqdn_selectors``                             Enabled  Number of registered ToFQDN selectors
================================= ============ ======== ========================================================

Jobs
~~~~

================================================ ======================== ======== ========================================================
Name                                             Labels                   Default  Description
================================================ ======================== ======== ========================================================
``hive_jobs_runs_total``                         ``module``, ``job_name`` Enabled  Total number of jobs runs
``hive_jobs_runs_failed``                        ``module``, ``job_name`` Enabled  Number of jobs runs that returned an error
``hive_jobs_oneshot_last_run_duration_seconds``  ``module``, ``job_name`` Enabled  Duration of last one shot job run
``hive_jobs_observer_last_run_duration_seconds`` ``module``, ``job_name`` Enabled  Duration of last observer job run
``hive_jobs_observer_run_duration_seconds``      ``module``, ``job_name`` Enabled  Histogram of observer job run duration
``hive_jobs_timer_last_run_duration_seconds``    ``module``, ``job_name`` Enabled  Duration of last timer job run
``hive_jobs_timer_run_duration_seconds``         ``module``, ``job_name`` Enabled  Histogram of timer job run duration
================================================ ======================== ======== ========================================================

CIDRGroups
~~~~~~~~~~

============================================ ====== ======== ========================================================
Name                                         Labels Default  Description
============================================ ====== ======== ========================================================
``cidrgroups_referenced``                           Enabled  Number of CNPs and CCNPs referencing at least one CiliumCIDRGroup. CNPs with empty or non-existing CIDRGroupRefs are not considered.
``cidrgroup_translation_time_stats_seconds``        Disabled CIDRGroup translation time stats
============================================ ====== ======== ========================================================

.. _metrics_api_rate_limiting:

API Rate Limiting
~~~~~~~~~~~~~~~~~

============================================= ========================================== ======== ========================================================
Name                                          Labels                                     Default  Description
============================================= ========================================== ======== ========================================================
``api_limiter_adjustment_factor``             ``api_call``                               Enabled  Most recent adjustment factor for automatic adjustment
``api_limiter_processed_requests_total``      ``api_call``, ``outcome``, ``return_code`` Enabled  Total number of API requests processed
``api_limiter_processing_duration_seconds``   ``api_call``, ``value``                    Enabled  Mean and estimated processing duration in seconds
``api_limiter_rate_limit``                    ``api_call``, ``value``                    Enabled  Current rate limiting configuration (limit and burst)
``api_limiter_requests_in_flight``            ``api_call``, ``value``                    Enabled  Current and maximum allowed number of requests in flight
``api_limiter_wait_duration_seconds``         ``api_call``, ``value``                    Enabled  Mean, min, and max wait duration
``api_limiter_wait_history_duration_seconds`` ``api_call``                               Disabled Histogram of wait duration per API call processed
============================================= ========================================== ======== ========================================================

.. _metrics_bgp_control_plane:

BGP Control Plane
~~~~~~~~~~~~~~~~~

================================== ============================================================== ======== ===================================================================
Name                               Labels                                                         Default  Description
================================== ============================================================== ======== ===================================================================
``session_state``                  ``vrouter``, ``neighbor``, ``neighbor_asn``                    Enabled  Current state of the BGP session with the peer, Up = 1 or Down = 0
``advertised_routes``              ``vrouter``, ``neighbor``, ``neighbor_asn``, ``afi``, ``safi`` Enabled  Number of routes advertised to the peer
``received_routes``                ``vrouter``, ``neighbor``, ``neighbor_asn``, ``afi``, ``safi`` Enabled  Number of routes received from the peer
``reconcile_errors_total``         ``vrouter``                                                    Enabled  Number of reconciliation runs that returned an error
``reconcile_run_duration_seconds`` ``vrouter``                                                    Enabled  Histogram of reconciliation run duration
================================== ============================================================== ======== ===================================================================

All metrics are enabled only when the BGP Control Plane is enabled.

cilium-operator
---------------

Configuration
^^^^^^^^^^^^^

``cilium-operator`` can be configured to serve metrics by running with the option ``--enable-metrics``. By default, the operator will expose metrics on port 9963.
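Once enabled, the operator serves the standard Prometheus text exposition format; a minimal sketch of pulling out ``cilium_operator_``-prefixed series from a scrape follows (the sample input and helper are illustrative, not part of Cilium):

```python
def operator_series(exposition_text: str, prefix: str = "cilium_operator_"):
    """Yield (metric_name, value) for sample lines whose name matches the prefix.

    Deliberately minimal: skips HELP/TYPE comment lines and assumes no spaces
    inside label values.
    """
    for line in exposition_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and # HELP / # TYPE comments
        name_and_labels, _, value = line.rpartition(" ")
        name = name_and_labels.split("{", 1)[0]
        if name.startswith(prefix):
            yield name, float(value)


sample = """\
# HELP cilium_operator_example_total An illustrative counter.
cilium_operator_example_total{outcome="success"} 42
unrelated_metric 7
"""
print(list(operator_series(sample)))  # -> [('cilium_operator_example_total', 42.0)]
```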
                                                                                                   Enabled  Histogram of reconciliation run duration
================================== =============================================================== ======== ===================================================================

All metrics are enabled only when the BGP Control Plane is enabled.

cilium-operator
---------------

Configuration
^^^^^^^^^^^^^

``cilium-operator`` can be configured to serve metrics by running with the
option ``--enable-metrics``. By default, the operator exposes metrics on port
9963; the port can be changed with the option
``--operator-prometheus-serve-addr``.

Feature Metrics
~~~~~~~~~~~~~~~

Cilium Operator Feature Metrics are exported under the
``cilium_operator_feature`` Prometheus namespace. The following list
categorizes feature metrics into groups:

- **Advanced Connectivity and Load Balancing** (:ref:`cilium-operator-feature-adv-connect-and-lb`)

  This category includes features related to advanced networking and load
  balancing capabilities, such as Gateway API, Ingress Controller, LB IPAM,
  Node IPAM and L7 Aware Traffic Management. For example, to check if the
  Gateway API is enabled on a Cilium operator, observe the metric
  ``cilium_operator_feature_adv_connect_and_lb_gateway_api_enabled``.

All metrics follow the format ``cilium_operator_feature`` + group name +
metric name. A value of ``0`` indicates that the feature is disabled, while
``1`` indicates it is enabled.

.. note::

   For metrics of type "counter," the operator has processed the associated
   object (e.g., a network policy) but might not be actively enforcing it.
   These metrics serve to observe if the object has been received and
   processed, but not necessarily enforced by the operator.

.. include:: feature-metrics-operator.txt

Exported Metrics
^^^^^^^^^^^^^^^^

All metrics are exported under the ``cilium_operator_`` Prometheus namespace.

.. _metrics_bgp_control_plane_operator:

BGP Control Plane Operator
~~~~~~~~~~~~~~~~~~~~~~~~~~

================================== ====================================== ======== ==========================================================
Name                               Labels                                 Default  Description
================================== ====================================== ======== ==========================================================
``reconcile_errors_total``         ``resource_kind``, ``resource_name``   Enabled  Number of errors returned per BGP resource reconciliation
``reconcile_run_duration_seconds``                                        Enabled  Histogram of reconciliation run duration
================================== ====================================== ======== ==========================================================

All metrics are enabled only when the BGP Control Plane is enabled.

.. _ipam_metrics:

IPAM
~~~~

.. Note::

   IPAM metrics are all ``Enabled`` only if using the AWS, Alibabacloud or
   Azure IPAM plugins.

========================================== ==================================== ======== =========================================================
Name                                       Labels                               Default  Description
========================================== ==================================== ======== =========================================================
``ipam_ips``                               ``type``                             Enabled  Number of IPs allocated
``ipam_ip_allocation_ops``                 ``subnet_id``                        Enabled  Number of IP allocation operations.
``ipam_ip_release_ops``                    ``subnet_id``                        Enabled  Number of IP release operations.
``ipam_interface_creation_ops``            ``subnet_id``                        Enabled  Number of interface creation operations.
``ipam_release_duration_seconds``          ``type``, ``status``, ``subnet_id``  Enabled  Release IP or interface latency in seconds
``ipam_allocation_duration_seconds``       ``type``, ``status``, ``subnet_id``  Enabled  Allocation IP or interface latency in seconds
``ipam_available_interfaces``                                                   Enabled  Number of interfaces with addresses available
``ipam_nodes``                             ``category``                         Enabled  Number of nodes by category { total | in-deficit | at-capacity }
``ipam_resync_total``                                                           Enabled  Number of synchronization operations with external IPAM API
``ipam_api_duration_seconds``              ``operation``, ``response_code``     Enabled  Duration of interactions with external IPAM API.
``ipam_api_rate_limit_duration_seconds``   ``operation``                        Enabled  Duration of rate limiting while accessing external IPAM API
``ipam_available_ips``                     ``target_node``                      Enabled  Number of available IPs on a node (taking into account plugin specific NIC/Address limits).
``ipam_used_ips``                          ``target_node``                      Enabled  Number of currently used IPs on a node.
``ipam_needed_ips``                        ``target_node``                      Enabled  Number of IPs needed to satisfy allocation on a node.
========================================== ==================================== ======== =========================================================

LB-IPAM
~~~~~~~

================================ ========= ======== ==================================================
Name                             Labels    Default  Description
================================ ========= ======== ==================================================
``lbipam_conflicting_pools``               Enabled  Number of conflicting pools
``lbipam_ips_available``         ``pool``  Enabled  Number of available IPs per pool
``lbipam_ips_used``              ``pool``  Enabled  Number of used IPs per pool
``lbipam_services_matching``               Enabled  Number of matching services
``lbipam_services_unsatisfied``            Enabled  Number of services which did not get requested IPs
================================ ========= ======== ==================================================

Controllers
~~~~~~~~~~~

================================= ============================ ======== ====================================================================================
Name                              Labels                       Default  Description
================================= ============================ ======== ====================================================================================
``controllers_group_runs_total``  ``status``, ``group_name``   Enabled  Number of times that a controller process was run, labeled by controller group name
================================= ============================ ======== ====================================================================================

The ``controllers_group_runs_total`` metric reports the success and failure
count of each controller within the system, labeled by controller group name
and completion status. Due to the large number of controllers, enabling this
metric is on a per-controller basis. This is configured using an allow-list
which is passed as the ``controller-group-metrics`` configuration flag, or
the ``prometheus.controllerGroupMetrics`` Helm value. The current recommended
default set of group names can be found in the values file of the Cilium Helm
chart. The special names "all" and "none" are supported.

.. Source: https://github.com/cilium/cilium/blob/main//Documentation/observability/metrics.rst

.. _ces_metrics:

CiliumEndpointSlices (CES)
~~~~~~~~~~~~~~~~~~~~~~~~~~

=================================== ============================= =============================================
Name                                Labels                        Description
=================================== ============================= =============================================
``number_of_ceps_per_ces``                                        The number of CEPs batched in a CES
``number_of_cep_changes_per_ces``   ``opcode``, ``failure_type``  The number of changed CEPs in each CES update
``ces_sync_total``                  ``outcome``                   The number of completed CES syncs by outcome
``ces_queueing_delay_seconds``                                    CiliumEndpointSlice queueing delay in seconds
=================================== ============================= =============================================

Note that the CES controller has multiple internal queues for handling CES
updates. Detailed metrics which are emitted by these queues can be found in
the :ref:`Internal WorkQueues <internal_workqueues_metrics>` section below.
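As a rough illustration of the ``controller-group-metrics`` allow-list
semantics described above, the filter might behave as in the following Python
sketch. This is not Cilium's implementation: the precedence between the
special names ``"all"`` and ``"none"``, and the group names used, are
assumptions made for illustration only.

```python
def controller_group_enabled(group: str, allow_list: list[str]) -> bool:
    """Toy model of a controller-group metrics allow-list.

    The special names "all" and "none" act as wildcards (the precedence
    chosen here, "none" over "all", is an assumption); otherwise a group
    is reported only when it is listed explicitly.
    """
    if "none" in allow_list:
        return False
    if "all" in allow_list:
        return True
    return group in allow_list

# Hypothetical group names, for illustration only:
assert controller_group_enabled("bgp-cp", ["all"])
assert not controller_group_enabled("bgp-cp", ["none"])
assert controller_group_enabled("bgp-cp", ["bgp-cp", "endpoint-gc"])
```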
Unmanaged Pods
~~~~~~~~~~~~~~

=================== ======= ======== ====================================================================
Name                Labels  Default  Description
=================== ======= ======== ====================================================================
``unmanaged_pods``          Enabled  The total number of pods observed to be unmanaged by Cilium operator
=================== ======= ======== ====================================================================

"Double Write" Identity Allocation Mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When the ":ref:`Double Write `" identity allocation mode is enabled, the
following metrics are available:

========================================== ======= ======== ============================================================
Name                                       Labels  Default  Description
========================================== ======= ======== ============================================================
``doublewrite_crd_identities``                     Enabled  The total number of CRD identities
``doublewrite_kvstore_identities``                 Enabled  The total number of identities in the KVStore
``doublewrite_crd_only_identities``                Enabled  The number of CRD identities not present in the KVStore
``doublewrite_kvstore_only_identities``            Enabled  The number of identities in the KVStore not present as a CRD
========================================== ======= ======== ============================================================

.. _identity_management_metrics:

Identity Management Mode
~~~~~~~~~~~~~~~~~~~~~~~~

============================================ =========================== =====================================================================================
Name                                         Labels                      Description
============================================ =========================== =====================================================================================
``cid_controller_work_queue_event_count``    ``resource``, ``outcome``   Counts processed events by CID controller work queues
``cid_controller_work_queue_latency``        ``resource``, ``phase``     Duration of CID controller work queues enqueuing and processing latencies in seconds
============================================ =========================== =====================================================================================

.. _internal_workqueues_metrics:

Internal WorkQueues
~~~~~~~~~~~~~~~~~~~

The Operator uses internal queues to manage the processing of various tasks.
Currently, only the Cilium Node Synchronizer queues and Cilium EndpointSlice
Controller queues report the metrics listed below.
=============================================== =============== ======== ====================================================================================
Name                                            Labels          Default  Description
=============================================== =============== ======== ====================================================================================
``workqueue_depth``                             ``queue_name``  Enabled  Current depth of workqueue
``workqueue_adds_total``                        ``queue_name``  Enabled  Total number of adds handled by workqueue
``workqueue_queue_duration_seconds``            ``queue_name``  Enabled  Duration in seconds an item stays in workqueue prior to request
``workqueue_work_duration_seconds``             ``queue_name``  Enabled  Duration in seconds to process an item from workqueue
``workqueue_unfinished_work_seconds``           ``queue_name``  Enabled  Duration in seconds of work in progress that hasn't been observed by ``work_duration``. Large values indicate stuck threads. You can deduce the number of stuck threads by observing the rate at which this value increases.
``workqueue_longest_running_processor_seconds`` ``queue_name``  Enabled  Duration in seconds of the longest running processor for workqueue
``workqueue_retries_total``                     ``queue_name``  Enabled  Total number of retries handled by workqueue
=============================================== =============== ======== ====================================================================================

MCS-API
~~~~~~~

========================================= ======================================================================== ======== =========================================================
Name                                      Labels                                                                   Default  Description
========================================= ======================================================================== ======== =========================================================
``mcsapi_serviceexport_info``             ``serviceexport``, ``namespace``                                         Enabled  Information about ServiceExport in the local cluster
``mcsapi_serviceexport_status_condition`` ``serviceexport``, ``namespace``, ``condition``, ``status``, ``reason``  Enabled  Status Condition of ServiceExport in the local cluster
``mcsapi_serviceimport_info``             ``serviceimport``, ``namespace``                                         Enabled  Information about ServiceImport in the local cluster
``mcsapi_serviceimport_status_condition`` ``serviceimport``, ``namespace``, ``condition``, ``status``, ``reason``  Enabled  Status Condition of ServiceImport in the local cluster
``mcsapi_serviceimport_status_clusters``  ``serviceimport``, ``namespace``                                         Enabled  The number of clusters currently backing a ServiceImport
========================================= ======================================================================== ======== =========================================================

Clustermesh
~~~~~~~~~~~

================================================= ================== ======== ====================================================================
Name                                              Labels             Default  Description
================================================= ================== ======== ====================================================================
``clustermesh_remote_clusters``                                      Enabled  The total number of remote clusters meshed with the local cluster
``clustermesh_remote_cluster_failures``           ``target_cluster`` Enabled  The total number of failures related to the remote cluster
``clustermesh_remote_cluster_last_failure_ts``    ``target_cluster`` Enabled  The timestamp of the last failure of the remote cluster
``clustermesh_remote_cluster_readiness_status``   ``target_cluster`` Enabled  The readiness status of the remote cluster
``clustermesh_remote_cluster_cache_revocations``  ``target_cluster`` Enabled  The total number of cache revocations related to the remote cluster
``clustermesh_remote_cluster_services``           ``target_cluster`` Enabled  The total number of services per remote cluster
``clustermesh_remote_cluster_service_exports``    ``target_cluster`` Enabled  The total number of MCS-API service exports per remote cluster
================================================= ================== ======== ====================================================================

Hubble
------

Configuration
^^^^^^^^^^^^^

Hubble metrics are served by a Hubble instance running inside ``cilium-agent``.
The command-line options to configure them are ``--enable-hubble``,
``--hubble-metrics-server``, and ``--hubble-metrics``.
``--hubble-metrics-server`` takes an ``IP:Port`` pair, but passing an empty IP
(e.g. ``:9965``) will bind the server to all available interfaces.
``--hubble-metrics`` takes a space-separated list of metrics.

It's also possible to configure Hubble metrics to listen with TLS and
optionally use mTLS for authentication. For details see
:ref:`hubble_configure_metrics_tls`.

Some metrics can take additional semicolon-separated options per metric, e.g.
``--hubble-metrics="dns:query;ignoreAAAA http:destinationContext=workload-name"``
will enable the ``dns`` metric with the ``query`` and ``ignoreAAAA`` options,
and the ``http`` metric with the ``destinationContext=workload-name`` option.

.. _hubble_context_options:

Context Options
^^^^^^^^^^^^^^^

Hubble metrics support configuration via context options.
Supported context options for all metrics:

- ``sourceContext`` - Configures the ``source`` label on metrics for both egress and ingress traffic.
- ``sourceEgressContext`` - Configures the ``source`` label on metrics for egress traffic (takes precedence over ``sourceContext``).
- ``sourceIngressContext`` - Configures the ``source`` label on metrics for ingress traffic (takes precedence over ``sourceContext``).
- ``destinationContext`` - Configures the ``destination`` label on metrics for both egress and ingress traffic.
- ``destinationEgressContext`` - Configures the ``destination`` label on metrics for egress traffic (takes precedence over ``destinationContext``).
- ``destinationIngressContext`` - Configures the ``destination`` label on metrics for ingress traffic (takes precedence over ``destinationContext``).
- ``labelsContext`` - Configures a list of labels to be enabled on metrics.

There are also some context options that are specific to certain metrics.
See the documentation for the individual metrics to see what options are
available for each.
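The per-metric option syntax described above (a metric name, a ``:`` before
the options, and ``;`` between options) can be sketched in Python. This is an
illustrative parser only, not the one Hubble uses:

```python
def parse_hubble_metric(spec: str):
    """Split one entry of --hubble-metrics into (metric, options).

    The documented syntax is "<metric>[:opt[;opt...]]", where each option
    is either a bare flag ("query") or a key=value pair
    ("destinationContext=workload-name").
    """
    name, _, rest = spec.partition(":")
    options = {}
    if rest:
        for opt in rest.split(";"):
            key, sep, value = opt.partition("=")
            options[key] = value if sep else None
    return name, options

# The flag value is a space-separated list of such entries:
flag = "dns:query;ignoreAAAA http:destinationContext=workload-name"
parsed = dict(parse_hubble_metric(m) for m in flag.split())
# parsed == {"dns": {"query": None, "ignoreAAAA": None},
#            "http": {"destinationContext": "workload-name"}}
```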
See below for details on each of the different context options.

Most Hubble metrics can be configured to add the source and/or destination
context as a label using the ``sourceContext`` and ``destinationContext``
options. The possible values are:

===================== ==========================================================================================================================================================
Option Value          Description
===================== ==========================================================================================================================================================
``identity``          All Cilium security identity labels
``namespace``         Kubernetes namespace name
``pod``               Kubernetes pod name and namespace name in the form of ``namespace/pod``.
``pod-name``          Kubernetes pod name.
``dns``               All known DNS names of the source or destination (comma-separated)
``ip``                The IPv4 or IPv6 address
``reserved-identity`` Reserved identity label.
``workload``          Kubernetes pod's workload name and namespace in the form of ``namespace/workload-name``.
``workload-name``     Kubernetes pod's workload name (workloads are: Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift), etc).
``app``               Kubernetes pod's app name, derived from pod labels (``app.kubernetes.io/name``, ``k8s-app``, or ``app``).
===================== ==========================================================================================================================================================

When specifying the source and/or destination context, multiple contexts can
be specified by separating them via the ``|`` symbol. When multiple are
specified, the first non-empty value is added to the metric as a label. For
example, a metric configuration of ``flow:destinationContext=dns|ip`` will
first try to use the DNS name of the target for the label. If no DNS name is
known for the target, it will fall back and use the IP address of the target
instead.

.. note::

   There are 3 cases in which the identity label list contains multiple
   reserved labels:

   1. ``reserved:kube-apiserver`` and ``reserved:host``
   2. ``reserved:kube-apiserver`` and ``reserved:remote-node``
   3. ``reserved:kube-apiserver`` and ``reserved:world``

   In all of these 3 cases, the ``reserved-identity`` context returns
   ``reserved:kube-apiserver``.

Hubble metrics can also be configured with a ``labelsContext`` which allows
providing a list of labels that should be added to the metric. Unlike
``sourceContext`` and ``destinationContext``, instead of different values
being put into the same metric label, the ``labelsContext`` puts them into
different label values.

============================== ========================================================================================================================================
Option Value                   Description
============================== ========================================================================================================================================
``source_ip``                  The source IP of the flow.
``source_namespace``           The namespace of the pod if the flow source is from a Kubernetes pod.
``source_pod``                 The pod name if the flow source is from a Kubernetes pod.
``source_workload``            The name of the source pod's workload (Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift)).
``source_workload_kind``       The kind of the source pod's workload, for example, Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift).
``source_app``                 The app name of the source pod, derived from pod labels (``app.kubernetes.io/name``, ``k8s-app``, or ``app``).
``destination_ip``             The destination IP of the flow.
``destination_namespace``      The namespace of the pod if the flow destination is from a Kubernetes pod.
``destination_pod``            The pod name if the flow destination is from a Kubernetes pod.
``destination_workload``       The name of the destination pod's workload (Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift)).
``destination_workload_kind``  The kind of the destination pod's workload, for example, Deployment, Statefulset, Daemonset, ReplicationController, CronJob, Job, DeploymentConfig (OpenShift).
``destination_app``            The app name of the destination pod, derived from pod labels (``app.kubernetes.io/name``, ``k8s-app``, or ``app``).
``traffic_direction``          Identifies the traffic direction of the flow. Possible values are ``ingress``, ``egress`` and ``unknown``.
============================== ========================================================================================================================================

When specifying the flow context, multiple values can be specified by
separating them via the ``,`` symbol. All labels listed are included in the
metric, even if empty. For example, a metric configuration of
``http:labelsContext=source_namespace,source_pod`` will add the
``source_namespace`` and ``source_pod`` labels to all Hubble HTTP metrics.

.. note::

   To limit metrics cardinality, Hubble will remove data series bound to a
   specific pod one minute after the pod is deleted.
   A metric is considered to be bound to a specific pod when at least one of
   the following conditions is met:

   * ``sourceContext`` is set to ``pod`` and the metric series has a
     ``source`` label matching ``<namespace>/<pod>``
   * ``destinationContext`` is set to ``pod`` and the metric series has a
     ``destination`` label matching ``<namespace>/<pod>``
   * ``labelsContext`` contains both ``source_namespace`` and ``source_pod``,
     and the metric series labels match the namespace and name of the deleted
     pod
   * ``labelsContext`` contains both ``destination_namespace`` and
     ``destination_pod``, and the metric series labels match the namespace
     and name of the deleted pod

.. _hubble_exported_metrics:

Exported Metrics
^^^^^^^^^^^^^^^^

Hubble metrics are exported under the ``hubble_`` Prometheus namespace.

lost events
~~~~~~~~~~~

Unlike the other metrics, this metric is not directly tied to network flows.
It's enabled if any of the other metrics is enabled.

===================== ========== ======== ======================
Name                  Labels     Default  Description
===================== ========== ======== ======================
``lost_events_total`` ``source`` Enabled  Number of lost events
===================== ========== ======== ======================

Labels
""""""

- ``source`` identifies the source of lost events, one of:

  - ``perf_event_ring_buffer``
  - ``observer_events_queue``
  - ``hubble_ring_buffer``

``dns``
~~~~~~~

============================ ======================================== ======== =================================
Name                         Labels                                   Default  Description
============================ ======================================== ======== =================================
``dns_queries_total``        ``rcode``, ``qtypes``, ``ips_returned``  Disabled Number of DNS queries observed
``dns_responses_total``      ``rcode``, ``qtypes``, ``ips_returned``  Disabled Number of DNS responses observed
``dns_response_types_total`` ``type``, ``qtypes``                     Disabled Number of DNS response types
============================ ======================================== ======== =================================

Options
"""""""

============== ============= ====================================
Option Key     Option Value  Description
============== ============= ====================================
``query``      N/A           Include the query as label "query"
``ignoreAAAA`` N/A           Ignore any AAAA requests/responses
============== ============= ====================================

This metric supports :ref:`Context Options<hubble_context_options>`.

``drop``
~~~~~~~~

============== ======================== ======== ================
Name           Labels                   Default  Description
============== ======================== ======== ================
``drop_total`` ``reason``, ``protocol`` Disabled Number of drops
============== ======================== ======== ================

Options
"""""""

This metric supports :ref:`Context Options<hubble_context_options>`.
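The first-non-empty fallback used by compound context values such as
``dns|ip`` (described under Context Options above) can be modelled with a
small Python sketch. The flow dictionary here is a toy stand-in for a Hubble
flow, not the real data structure:

```python
def resolve_context(flow: dict, spec: str) -> str:
    """Toy model of a context option such as "dns|ip": try each
    context in order and use the first one with a non-empty value."""
    for ctx in spec.split("|"):
        value = flow.get(ctx, "")
        if value:
            return value
    return ""

# A flow with no known DNS name falls back to the IP address:
flow = {"dns": "", "ip": "10.0.0.7", "namespace": "kube-system"}
assert resolve_context(flow, "dns|ip") == "10.0.0.7"
# When a DNS name is known, it wins:
flow["dns"] = "svc.example.internal"
assert resolve_context(flow, "dns|ip") == "svc.example.internal"
```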
``flow``
~~~~~~~~

========================= ================================== ======== ===============================
Name                      Labels                             Default  Description
========================= ================================== ======== ===============================
``flows_processed_total`` ``type``, ``subtype``, ``verdict`` Disabled Total number of flows processed
========================= ================================== ======== ===============================

Options
"""""""

This metric supports :ref:`Context Options<hubble_context_options>`.

``flows-to-world``
~~~~~~~~~~~~~~~~~~

This metric counts all non-reply flows containing the ``reserved:world``
label in their destination identity. By default, dropped flows are counted if
and only if the drop reason is ``Policy denied``. Set the ``any-drop`` option
to count all dropped flows.

========================= ======================== ======== ============================================
Name                      Labels                   Default  Description
========================= ======================== ======== ============================================
``flows_to_world_total``  ``protocol``, ``verdict`` Disabled Total number of flows to ``reserved:world``.
========================= ======================== ======== ============================================

Options
"""""""

============== ============= ======================================================
Option Key     Option Value  Description
============== ============= ======================================================
``any-drop``   N/A           Count any dropped flows regardless of the drop reason.
``port``       N/A           Include the destination port as label ``port``.
``syn-only``   N/A           Only count non-reply SYNs for TCP flows.
============== ============= ======================================================

This metric supports :ref:`Context Options<hubble_context_options>`.

``http``
~~~~~~~~

Deprecated, use ``httpV2`` instead.
These metrics cannot be enabled at the same time as ``httpV2``.

================================= ====================================== ======== =============================================
Name                              Labels                                 Default  Description
================================= ====================================== ======== =============================================
``http_requests_total``           ``method``, ``protocol``, ``reporter`` Disabled Count of HTTP requests
``http_responses_total``          ``method``, ``status``, ``reporter``   Disabled Count of HTTP responses
``http_request_duration_seconds`` ``method``, ``reporter``               Disabled Histogram of HTTP request duration in seconds
================================= ====================================== ======== =============================================

Labels
""""""

- ``method`` is the HTTP method of the request/response.
- ``protocol`` is the HTTP protocol of the request (for example: ``HTTP/1.1``, ``HTTP/2``).
- ``status`` is the HTTP status code of the response.
- ``reporter`` identifies the origin of the request/response. It is set to ``client`` if it originated from the client, ``server`` if it originated from the server, or ``unknown`` if its origin is unknown.

Options
"""""""

This metric supports :ref:`Context Options<hubble_context_options>`.

``httpV2``
~~~~~~~~~~

``httpV2`` is an updated version of the existing ``http`` metrics. These
metrics cannot be enabled at the same time as ``http``. The main difference
is that ``http_requests_total`` and ``http_responses_total`` have been
consolidated, and use the response flow data. Additionally, the
source/destination-related labels of the ``http_request_duration_seconds``
metric are now from the perspective of the request. In the ``http`` metrics,
source and destination were swapped, because the metric uses response flow
data, where source and destination are reversed; ``httpV2`` accounts for
this correctly.
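The request-perspective correction described for ``httpV2`` can be
illustrated with a toy Python sketch. The pod names and the flow shape below
are invented for illustration; this is a model of the described behavior, not
Hubble's implementation:

```python
def request_perspective(response_flow: dict) -> dict:
    """Toy model of the httpV2 fix described above: HTTP metrics are
    derived from the *response* flow, whose source is the server and
    whose destination is the client. Labels from the perspective of
    the *request* therefore swap the two."""
    return {
        "source": response_flow["destination"],   # the client
        "destination": response_flow["source"],   # the server
    }

# Hypothetical response flow: server -> client
response_flow = {"source": "default/backend", "destination": "default/client"}
assert request_perspective(response_flow) == {
    "source": "default/client",
    "destination": "default/backend",
}
```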
================================= =================================================== ========== ============================================== Name Labels Default Description ================================= =================================================== ========== ============================================== ``http\_requests\_total`` ``method``, ``protocol``, ``status``, ``reporter`` Disabled Count of HTTP requests ``http\_request\_duration\_seconds`` ``method``, ``reporter`` Disabled Histogram of HTTP request duration in seconds ================================= =================================================== ========== ============================================== Labels """""" - ``method`` is the HTTP method of the request/response. - ``protocol`` is the HTTP protocol of the request, (For example: ``HTTP/1.1``, ``HTTP/2``). - ``status`` is the HTTP status code of the response. - ``reporter`` identifies the origin of the request/response. It is set to ``client`` if it originated from the client, ``server`` if it originated from the server, or ``unknown`` if its origin is
Options
"""""""

============== ============== =============================================================================================================
Option Key     Option Value   Description
============== ============== =============================================================================================================
``exemplars``  ``true``       Include extracted trace IDs in HTTP metrics. Requires :ref:`OpenMetrics to be enabled`.
============== ============== =============================================================================================================

This metric supports :ref:`Context Options`.

``icmp``
~~~~~~~~

================================ ======================================== ========== ===================================
Name                             Labels                                   Default    Description
================================ ======================================== ========== ===================================
``icmp_total``                   ``family``, ``type``                     Disabled   Number of ICMP messages
================================ ======================================== ========== ===================================

Options
"""""""

This metric supports :ref:`Context Options`.
``kafka``
~~~~~~~~~

=================================== ===================================================== ========== ==============================================
Name                                Labels                                                Default    Description
=================================== ===================================================== ========== ==============================================
``kafka_requests_total``            ``topic``, ``api_key``, ``error_code``, ``reporter``  Disabled   Count of Kafka requests by topic
``kafka_request_duration_seconds``  ``topic``, ``api_key``, ``reporter``                  Disabled   Histogram of Kafka request duration by topic
=================================== ===================================================== ========== ==============================================

Options
"""""""

This metric supports :ref:`Context Options`.

``port-distribution``
~~~~~~~~~~~~~~~~~~~~~

================================ ======================================== ========== ==================================================
Name                             Labels                                   Default    Description
================================ ======================================== ========== ==================================================
``port_distribution_total``      ``protocol``, ``port``                   Disabled   Number of packets distributed by destination port
================================ ======================================== ========== ==================================================

Options
"""""""

This metric supports :ref:`Context Options`.
``tcp``
~~~~~~~

================================ ======================================== ========== ==================================================
Name                             Labels                                   Default    Description
================================ ======================================== ========== ==================================================
``tcp_flags_total``              ``flag``, ``family``                     Disabled   TCP flag occurrences
================================ ======================================== ========== ==================================================

Options
"""""""

This metric supports :ref:`Context Options`.

dynamic_exporter_exporters_total
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This metric is emitted by the dynamic Hubble exporter.

==================================== ======================================== ========== ==================================================
Name                                 Labels                                   Default    Description
==================================== ======================================== ========== ==================================================
``dynamic_exporter_exporters_total`` ``status``                               Enabled    Number of configured Hubble exporters
==================================== ======================================== ========== ==================================================

Labels
""""""

- ``status`` identifies the status of an exporter; it can be one of:

  - ``active``
  - ``inactive``

dynamic_exporter_up
~~~~~~~~~~~~~~~~~~~

This metric is emitted by the dynamic Hubble exporter.
==================================== ======================================== ========== ==================================================
Name                                 Labels                                   Default    Description
==================================== ======================================== ========== ==================================================
``dynamic_exporter_up``              ``name``                                 Enabled    Status of exporter (1 - active, 0 - inactive)
==================================== ======================================== ========== ==================================================

Labels
""""""

- ``name`` identifies the exporter name.

dynamic_exporter_reconfigurations_total
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This metric is emitted by the dynamic Hubble exporter.

=========================================== ======================================== ========== ==================================================
Name                                        Labels                                   Default    Description
=========================================== ======================================== ========== ==================================================
``dynamic_exporter_reconfigurations_total`` ``op``                                   Enabled    Number of dynamic exporter reconfigurations
=========================================== ======================================== ========== ==================================================

Labels
""""""

- ``op`` identifies the reconfiguration operation type; it can be one of:

  - ``add``
  - ``update``
  - ``remove``

dynamic_exporter_config_hash
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This metric is emitted by the dynamic Hubble exporter.
==================================== ======================================== ========== ==================================================
Name                                 Labels                                   Default    Description
==================================== ======================================== ========== ==================================================
``dynamic_exporter_config_hash``                                              Enabled    Hash of last applied config
==================================== ======================================== ========== ==================================================

dynamic_exporter_config_last_applied
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This metric is emitted by the dynamic Hubble exporter.

======================================== ======================================== ========== ==================================================
Name                                     Labels                                   Default    Description
======================================== ======================================== ========== ==================================================
``dynamic_exporter_config_last_applied``                                          Enabled    Timestamp of last applied config
======================================== ======================================== ========== ==================================================

.. _clustermesh_apiserver_metrics_reference:

clustermesh-apiserver
---------------------

Configuration
^^^^^^^^^^^^^

To expose any metrics, invoke ``clustermesh-apiserver`` with the
``--prometheus-serve-addr`` option. This option takes an ``IP:Port`` pair;
passing an empty IP (e.g. ``:9962``) binds the server to all available
interfaces (there is usually only one in a container).

Exported Metrics
^^^^^^^^^^^^^^^^

All metrics are exported under the ``cilium_clustermesh_apiserver_``
Prometheus namespace.
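The ``IP:Port`` convention described above (an empty IP such as ``:9962`` meaning "bind all interfaces") can be sketched as follows. The helper name ``split_serve_addr`` is our own invention for illustration, not part of Cilium:

```python
def split_serve_addr(addr: str) -> tuple:
    """Interpret an IP:Port pair; an empty IP means all interfaces."""
    host, _, port = addr.rpartition(":")
    return (host or "0.0.0.0", int(port))

print(split_serve_addr(":9962"))           # empty IP -> bind all interfaces
print(split_serve_addr("127.0.0.1:9962"))  # explicit IPv4 address
```

A real parser would also handle bracketed IPv6 hosts such as ``[::]``; this sketch only covers the empty and plain-IPv4 cases.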
Bootstrap
~~~~~~~~~

======================================== ========================================================
Name                                     Description
======================================== ========================================================
``bootstrap_seconds``                    Duration in seconds to complete bootstrap
======================================== ========================================================

KVstore
~~~~~~~

======================================== ============================================ ========================================================
Name                                     Labels                                       Description
======================================== ============================================ ========================================================
``kvstore_operations_duration_seconds``  ``action``, ``kind``, ``outcome``, ``scope`` Duration of kvstore operation
``kvstore_events_queue_seconds``         ``action``, ``scope``                        Seconds waited before a received event was queued
``kvstore_quorum_errors_total``          ``error``                                    Number of quorum errors
``kvstore_sync_errors_total``            ``scope``, ``source_cluster``                Number of times synchronization to the kvstore failed
``kvstore_sync_queue_size``              ``scope``, ``source_cluster``                Number of elements queued for synchronization in the kvstore
``kvstore_initial_sync_completed``       ``scope``, ``source_cluster``, ``action``    Whether the initial synchronization from/to the kvstore has completed
======================================== ============================================ ========================================================
API Rate Limiting
~~~~~~~~~~~~~~~~~

============================================== ========================================== ========================================================
Name                                           Labels                                     Description
============================================== ========================================== ========================================================
``api_limiter_processed_requests_total``       ``api_call``, ``outcome``, ``return_code`` Total number of API requests processed
``api_limiter_processing_duration_seconds``    ``api_call``, ``value``                    Mean and estimated processing duration in seconds
``api_limiter_rate_limit``                     ``api_call``, ``value``                    Current rate limiting configuration (limit and burst)
``api_limiter_requests_in_flight``             ``api_call``, ``value``                    Current and maximum allowed number of requests in flight
``api_limiter_wait_duration_seconds``          ``api_call``, ``value``                    Mean, min, and max wait duration
============================================== ========================================== ========================================================

Controllers
~~~~~~~~~~~

======================================== ================================================== ========== ========================================================
Name                                     Labels                                             Default    Description
======================================== ================================================== ========== ========================================================
``controllers_group_runs_total``         ``status``, ``group_name``                         Enabled    Number of times that a controller process was run, labeled by controller group name
======================================== ================================================== ========== ========================================================

The ``controllers_group_runs_total`` metric reports the success and failure
count of each controller within the system, labeled by controller group name
and completion status. Enabling this metric is on a per-controller basis.
This is configured using an allow-list which is passed as the
``controller-group-metrics`` configuration flag. The current default set for
``clustermesh-apiserver`` found in the Cilium Helm chart is the special name
"all", which enables the metric for all controller groups. The special name
"none" is also supported.

.. _kvstoremesh_metrics_reference:

kvstoremesh
-----------

Configuration
^^^^^^^^^^^^^

To expose any metrics, invoke ``kvstoremesh`` with the
``--prometheus-serve-addr`` option. This option takes an ``IP:Port`` pair;
passing an empty IP (e.g. ``:9964``) binds the server to all available
interfaces (there is usually only one interface in a container).

Exported Metrics
^^^^^^^^^^^^^^^^

All metrics are exported under the ``cilium_kvstoremesh_`` Prometheus
namespace.
Bootstrap
~~~~~~~~~

======================================== ========================================================
Name                                     Description
======================================== ========================================================
``bootstrap_seconds``                    Duration in seconds to complete bootstrap
======================================== ========================================================

KVStoremesh
~~~~~~~~~~~

================================= ======== ==========================
Name                              Labels   Description
================================= ======== ==========================
``leader_election_master_status`` ``name`` The leader election status
================================= ======== ==========================

Clustermesh
~~~~~~~~~~~

Note that these metrics are not prefixed by ``clustermesh_``.

=============================================== ================== ====================================================================
Name                                            Labels             Description
=============================================== ================== ====================================================================
``remote_clusters``                                                The total number of remote clusters meshed with the local cluster
``remote_cluster_failures``                     ``target_cluster`` The total number of failures related to the remote cluster
``remote_cluster_last_failure_ts``              ``target_cluster`` The timestamp of the last failure of the remote cluster
``remote_cluster_readiness_status``             ``target_cluster`` The readiness status of the remote cluster
``remote_cluster_cache_revocations``            ``target_cluster`` The total number of cache revocations related to the remote cluster
=============================================== ================== ====================================================================

KVstore
~~~~~~~

======================================== ============================================ ========================================================
Name                                     Labels                                       Description
======================================== ============================================ ========================================================
``kvstore_operations_duration_seconds``  ``action``, ``kind``, ``outcome``, ``scope`` Duration of kvstore operation
``kvstore_events_queue_seconds``         ``action``, ``scope``                        Seconds waited before a received event was queued
``kvstore_quorum_errors_total``          ``error``                                    Number of quorum errors
``kvstore_sync_errors_total``            ``scope``, ``source_cluster``                Number of times synchronization to the kvstore failed
``kvstore_sync_queue_size``              ``scope``, ``source_cluster``                Number of elements queued for synchronization in the kvstore
``kvstore_initial_sync_completed``       ``scope``, ``source_cluster``, ``action``    Whether the initial synchronization from/to the kvstore has completed
======================================== ============================================ ========================================================

API Rate Limiting
~~~~~~~~~~~~~~~~~

============================================== ========================================== ========================================================
Name                                           Labels                                     Description
============================================== ========================================== ========================================================
``api_limiter_processed_requests_total``       ``api_call``, ``outcome``, ``return_code`` Total number of API requests processed
``api_limiter_processing_duration_seconds``    ``api_call``, ``value``                    Mean and estimated processing duration in seconds
``api_limiter_rate_limit``                     ``api_call``, ``value``                    Current rate limiting configuration (limit and burst)
``api_limiter_requests_in_flight``             ``api_call``, ``value``                    Current and maximum allowed number of requests in flight
``api_limiter_wait_duration_seconds``          ``api_call``, ``value``                    Mean, min, and max wait duration
============================================== ========================================== ========================================================

Controllers
~~~~~~~~~~~

======================================== ================================================== ========== ========================================================
Name                                     Labels                                             Default    Description
======================================== ================================================== ========== ========================================================
``controllers_group_runs_total``         ``status``, ``group_name``                         Enabled    Number of times that a controller process was run, labeled by controller group name
======================================== ================================================== ========== ========================================================

The ``controllers_group_runs_total`` metric reports the success and failure
count of each controller within the system, labeled by controller group name
and completion status. Enabling this metric is on a per-controller basis,
configured through the same ``controller-group-metrics`` allow-list flag
described for ``clustermesh-apiserver`` above.
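One plausible reading of the ``controller-group-metrics`` allow-list semantics, with its special names ``all`` and ``none``, can be sketched as below. This is an illustration using a hypothetical group name, not Cilium's actual implementation:

```python
def group_metrics_enabled(group_name: str, allow_list: set) -> bool:
    """Illustrative allow-list check: "all" enables every controller group,
    "none" disables them all, otherwise only listed groups are enabled."""
    if "all" in allow_list:
        return True
    if "none" in allow_list:
        return False
    return group_name in allow_list

print(group_metrics_enabled("example-group", {"all"}))            # True
print(group_metrics_enabled("example-group", {"none"}))           # False
print(group_metrics_enabled("example-group", {"example-group"}))  # True
print(group_metrics_enabled("example-group", {"other-group"}))    # False
```

The precedence between ``all`` and ``none`` when both appear is an assumption of this sketch; consult the Helm chart defaults for authoritative behavior.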