================================  ==========================  =======  ========================================
Name                              Labels                      Default  Description
================================  ==========================  =======  ========================================
``controllers_group_runs_total``  ``status``, ``group_name``  Enabled  Number of times that a controller process was run, labeled by controller group name
================================  ==========================  =======  ========================================

The ``controllers_group_runs_total`` metric reports the success and failure
count of each controller within the system, labeled by controller group name
and completion status. The metric is enabled on a per-controller-group basis,
configured through an allow-list passed as the ``controller-group-metrics``
configuration flag. The current default set for ``kvstoremesh`` found in the
Cilium Helm chart is the special name ``all``, which enables the metric for
all controller groups. The special name ``none`` is also supported.

NAT
~~~

.. _nat_metrics:

===============================  ==========  =======  ========================================
Name                             Labels      Default  Description
===============================  ==========  =======  ========================================
``nat_endpoint_max_connection``  ``family``  Enabled  Saturation of the most saturated distinct NAT mapped connection, in terms of egress-IP and remote endpoint address.
===============================  ==========  =======  ========================================

These metrics are for monitoring Cilium's NAT mapping functionality. NAT is
used by features such as Egress Gateway and BPF masquerading. The NAT map
holds mappings for masqueraded connections. Connections held in the NAT table
that are masqueraded with the same egress-IP and are going to the same remote
endpoint IP and port all require a unique source port for the mapping. This
means that any node masquerading connections to a distinct external endpoint
is limited by the number of possible ephemeral source ports.

Given a node forwarding one or more such egress-IP and remote endpoint tuples,
the ``nat_endpoint_max_connection`` metric reports the most saturated such
connection as a percentage of the possible source ports. This metric is
especially useful when using the Egress Gateway feature, where it is possible
to overload a node if many connections all go to the same endpoint. This
metric should normally be fairly low; a high value may indicate that a node is
reaching its limit for connections to one or more external endpoints.

Local Redirect Policy (control plane)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. _local_redirect_policy_metrics:

===============================  ==========  =======  =========================================================
Name                             Labels      Default  Description
===============================  ==========  =======  =========================================================
``controller_duration_seconds``              Enabled  Histogram of processing times for local redirect policies
===============================  ==========  =======  =========================================================
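The source-port arithmetic behind ``nat_endpoint_max_connection`` described
above can be sketched as follows. The port range is the default Linux
ephemeral range and the mapping count is an illustrative assumption, not a
value read from Cilium:

.. code-block:: python

    # Sketch: saturation of a single (egress-IP, remote endpoint) NAT tuple,
    # expressed as a percentage of the available source ports. The range and
    # mapping count below are illustrative assumptions.
    EPHEMERAL_PORTS = range(32768, 61000)  # default Linux ephemeral port range

    def nat_saturation(active_mappings: int, ports: range = EPHEMERAL_PORTS) -> float:
        """Percent of possible source ports consumed by active NAT mappings."""
        return 100.0 * active_mappings / len(ports)

    print(f"{nat_saturation(14116):.1f}%")  # half of the 28232-port range in use

Once every port in the range is consumed for a given tuple, no further
masqueraded connections to that endpoint can be mapped, which is why the
metric tracks the single most saturated tuple.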
https://github.com/cilium/cilium/blob/main//Documentation/observability/metrics.rst
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _install_metrics:

****************************
Running Prometheus & Grafana
****************************

Install Prometheus & Grafana
============================

This is an example deployment that includes Prometheus and Grafana in a single
deployment.

.. admonition:: Video
   :class: attention

   You can see Cilium, Prometheus and Grafana in action together in the KubeCon
   + CloudNativeCon talk `Effortless Open Source Observability with Cilium,
   Prometheus and Grafana`__.

The default installation contains:

- **Grafana**: A visualization dashboard with Cilium Dashboard pre-loaded.
- **Prometheus**: A time series database and monitoring system.

.. parsed-literal::

    $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/addons/prometheus/monitoring-example.yaml
    namespace/cilium-monitoring created
    serviceaccount/prometheus-k8s created
    configmap/grafana-config created
    configmap/grafana-cilium-dashboard created
    configmap/grafana-cilium-operator-dashboard created
    configmap/grafana-hubble-dashboard created
    configmap/prometheus created
    clusterrole.rbac.authorization.k8s.io/prometheus unchanged
    clusterrolebinding.rbac.authorization.k8s.io/prometheus unchanged
    service/grafana created
    service/prometheus created
    deployment.apps/grafana created
    deployment.apps/prometheus created

This example deployment of Prometheus and Grafana will automatically scrape
the Cilium and Hubble metrics. See the :ref:`metrics` configuration guide on
how to configure a custom Prometheus instance.

Deploy Cilium and Hubble with metrics enabled
=============================================

*Cilium*, *Hubble*, and *Cilium Operator* do not expose metrics by default.
Enabling metrics for these services will open ports ``9962``, ``9965``, and
``9963`` respectively on all nodes of your cluster where these components are
running.

The metrics for Cilium, Hubble, and Cilium Operator can all be enabled
independently of each other with the following Helm values:

- ``prometheus.enabled=true``: Enables metrics for ``cilium-agent``.
- ``operator.prometheus.enabled=true``: Enables metrics for ``cilium-operator``.
- ``hubble.metrics.enabled``: Enables the provided list of Hubble metrics. For
  Hubble metrics to work, Hubble itself needs to be enabled with
  ``hubble.enabled=true``. See :ref:`Hubble exported metrics` for the list of
  available Hubble metrics.

Refer to :ref:`metrics` for more details about the individual metrics.

.. include:: ../installation/k8s-install-download-release.rst

Deploy Cilium via Helm as follows to enable all metrics:

.. cilium-helm-install::
   :namespace: kube-system
   :set: prometheus.enabled=true operator.prometheus.enabled=true hubble.enabled=true hubble.metrics.enableOpenMetrics=true hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\,source_namespace\,source_workload\,destination_ip\,destination_namespace\,destination_workload\,traffic_direction}"

.. note::

   You can combine the above Helm options with any of the other installation
   guides.

How to access Grafana
=====================

Expose the port on your local machine:

.. code-block:: shell-session

    kubectl -n cilium-monitoring port-forward service/grafana --address 0.0.0.0 --address :: 3000:3000

Access it via your browser: http://localhost:3000

How to access Prometheus
========================

Expose the port on your local machine:

.. code-block:: shell-session

    kubectl -n cilium-monitoring port-forward service/prometheus --address 0.0.0.0 --address :: 9090:9090

Access it via your browser: http://localhost:9090

Examples
========

Generic
-------

.. image:: images/grafana_generic.png

Network
-------

.. image:: images/grafana_network.png

Policy
------

.. image:: images/grafana_policy.png

.. image:: images/grafana_policy2.png

Endpoints
---------

.. image:: images/grafana_endpoints.png

Controllers
-----------

.. image:: images/grafana_controllers.png

Kubernetes
----------

.. image:: images/grafana_k8s.png

Hubble General Processing
-------------------------

.. image:: images/grafana_hubble_general_processing.png

Hubble Networking
-----------------

.. note::

   The ``port-distribution`` metric is disabled by default. Refer to
   :ref:`metrics` for more details about the individual metrics.

.. image:: images/grafana_hubble_network.png

.. image:: images/grafana_hubble_tcp.png

.. image:: images/grafana_hubble_icmp.png

Hubble DNS
----------

.. image:: images/grafana_hubble_dns.png

Hubble HTTP
-----------

.. image:: images/grafana_hubble_http.png

Hubble Network Policy
---------------------

.. image:: images/grafana_hubble_network_policy.png
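For the custom Prometheus instance mentioned in this guide, a scrape
configuration would target the ports the guide lists for ``cilium-agent``
(``9962``), Hubble (``9965``), and ``cilium-operator`` (``9963``). The
fragment below is a sketch only: the job names and node IP are placeholder
assumptions, not values from the bundled example.

.. code-block:: yaml

    # Sketch of a static scrape config for a custom Prometheus instance.
    # Job names and the node IP are placeholders; the ports are the ones
    # this guide lists for cilium-agent, Hubble, and cilium-operator.
    scrape_configs:
      - job_name: cilium-agent
        static_configs:
          - targets: ["192.0.2.10:9962"]
      - job_name: hubble
        static_configs:
          - targets: ["192.0.2.10:9965"]
      - job_name: cilium-operator
        static_configs:
          - targets: ["192.0.2.10:9963"]

A real deployment would typically use ``kubernetes_sd_configs`` to discover
pods dynamically rather than static targets, as the bundled
``monitoring-example.yaml`` does.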
https://github.com/cilium/cilium/blob/main//Documentation/observability/grafana.rst
.. _hubble_intro:

*********************************
Network Observability with Hubble
*********************************

Observability is provided by Hubble, which enables deep visibility into the
communication and behavior of services as well as the networking
infrastructure in a completely transparent manner. Hubble is able to provide
visibility at the node level, cluster level, or even across clusters in a
:ref:`Cluster Mesh` scenario. For an introduction to Hubble and how it relates
to Cilium, read the section :ref:`intro`.

By default, the Hubble API operates within the scope of the individual node on
which the Cilium agent runs. This confines the network insights to the traffic
observed by the local Cilium agent. The Hubble CLI (``hubble``) can be used to
query the Hubble API provided via a local Unix domain socket. The Hubble CLI
binary is installed by default on Cilium agent pods.

Upon deploying Hubble Relay, network visibility is provided for the entire
cluster, or even multiple clusters in a ClusterMesh scenario. In this mode,
Hubble data can be accessed by directing the Hubble CLI (``hubble``) to the
Hubble Relay service, or via the Hubble UI. The Hubble UI is a web interface
which enables automatic discovery of the services dependency graph at the
L3/L4 and even L7 layers, allowing user-friendly visualization and filtering
of data flows as a service map.

.. toctree::
   :maxdepth: 2
   :glob:

   setup
   hubble-cli
   hubble-ui
   configuration/export
   configuration/tls
https://github.com/cilium/cilium/blob/main//Documentation/observability/hubble/index.rst
.. note::

   The following commands use the ``-P`` (``--port-forward``) flag to
   automatically port-forward the Hubble Relay service from your local machine
   on port ``4245``.

   You can also omit the flag and create a port-forward manually with the
   Cilium CLI:

   .. code-block:: shell-session

       $ cilium hubble port-forward
       ℹ️ Hubble Relay is available at 127.0.0.1:4245

   Or with kubectl:

   .. code-block:: shell-session

       $ kubectl -n kube-system port-forward service/hubble-relay 4245:80
       Forwarding from 127.0.0.1:4245 -> 4245
       Forwarding from [::1]:4245 -> 4245

   For more information on this method, see `Use Port Forwarding to Access
   Application in a Cluster`__.
https://github.com/cilium/cilium/blob/main//Documentation/observability/hubble/port-forward.rst
.. _hubble_gsg:
.. _hubble_ui:

***********************
Service Map & Hubble UI
***********************

This tutorial guides you through enabling the Hubble UI to access the
graphical service map.

.. image:: images/hubble_sw_service_map.png

.. note::

   This guide assumes that Cilium and Hubble have been correctly installed in
   your Kubernetes cluster. Please see :ref:`k8s_quick_install` and
   :ref:`hubble_setup` for more information. If unsure, run ``cilium status``
   and validate that Cilium and Hubble are installed.

Enable the Hubble UI
====================

Enable the Hubble UI by running the following command:

.. tabs::

   .. group-tab:: Cilium CLI

      If Hubble is already enabled with ``cilium hubble enable``, you must
      first temporarily disable Hubble with ``cilium hubble disable``. This is
      because the Hubble UI cannot be added at runtime.

      .. code-block:: shell-session

          cilium hubble enable --ui
          🔑 Found existing CA in secret cilium-ca
          ✨ Patching ConfigMap cilium-config to enable Hubble...
          ♻️ Restarted Cilium pods
          ✅ Relay is already deployed
          ✅ Hubble UI is already deployed

   .. group-tab:: Helm

      .. cilium-helm-upgrade::
         :namespace: $CILIUM_NAMESPACE
         :extra-args: --reuse-values
         :set: hubble.relay.enabled=true hubble.ui.enabled=true

   .. group-tab:: Helm (Standalone install)

      Clusters sometimes come with Cilium, Hubble, and Hubble Relay already
      installed. When this is the case, you can still use Helm to install only
      the Hubble UI on top of the pre-installed components. You will need to
      set ``hubble.ui.standalone.enabled`` to ``true`` and optionally provide
      a volume to mount Hubble UI client certificates if TLS is enabled on the
      Hubble Relay server side.

      Below is an example deploying Hubble UI as standalone, with client
      certificates mounted from a ``my-hubble-ui-client-certs`` secret:

      .. cilium-helm-upgrade::
         :namespace: $CILIUM_NAMESPACE
         :extra-args: --reuse-values
         :set: hubble.relay.enabled=true hubble.ui.enabled=true
         :post-helm-commands: --values - <

Open the Hubble UI
==================

Forward the Hubble UI service to your local machine:

.. code-block:: shell-session

    $ kubectl -n kube-system port-forward service/hubble-ui 12000:80
    Forwarding from 127.0.0.1:12000 -> 8081
    Forwarding from [::]:12000 -> 8081

.. tip::

   The above command will block and continue running while the port forward is
   active. You can interrupt the command to abort the port forward, and re-run
   the command to make the UI accessible again.

If your browser has not automatically opened the UI, open the page
http://localhost:12000 in your browser. You should see a screen with an
invitation to select a namespace. Use the namespace selector dropdown in the
top left corner to select a namespace:

.. image:: images/hubble_service_map_namespace_selector.png

In this example, we are deploying the Star Wars demo from the :ref:`gs_http`
guide. However, you can apply the same techniques to observe application
connectivity dependencies in your own namespaces and clusters, for
applications of any type.

Once the deployment is ready, issue a request from both spaceships to emulate
some traffic:

.. code-block:: shell-session

    $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed
    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

These requests will then be displayed in the UI as service dependencies
between the different pods:

.. image:: images/hubble_sw_service_map.png

At the bottom of the interface, you may also inspect each recent Hubble flow
event in your current namespace individually.

Inspecting a wide variety of network traffic
============================================

In order to generate some network traffic, run the connectivity test in a
loop:

.. code-block:: shell-session

    while true; do cilium connectivity test; done

To see the traffic in Hubble, open http://localhost:12000/cilium-test in your
browser.
https://github.com/cilium/cilium/blob/main//Documentation/observability/hubble/hubble-ui.rst
.. _hubble_cli:

*************************************
Inspecting Network Flows with the CLI
*************************************

This guide walks you through using the Hubble CLI to inspect network flows and
gain visibility into what is happening on the network level.

The best way to get help if you get stuck is to ask a question on `Cilium
Slack`_. With Cilium contributors across the globe, there is almost always
someone available to help.

.. note::

   This guide uses examples based on the Demo App. If you would like to run
   them, deploy the Demo App first. Please refer to :ref:`gs_http` for more
   details.

Pre-Requisites
==============

* Cilium has been correctly :ref:`installed in your Kubernetes cluster`.
* :ref:`Hubble is enabled`.
* :ref:`Hubble CLI is installed`.
* :ref:`The Hubble API is accessible`.

If unsure, run ``cilium status`` and validate that Cilium and Hubble are up
and running, then run ``hubble status`` to verify that you can communicate
with the Hubble API.

Inspecting the cluster's network traffic with Hubble Relay
==========================================================

Let's issue some requests to emulate some traffic again. This first request is
allowed by the policy:

.. code-block:: shell-session

    kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

This next request accesses an HTTP endpoint which is denied by policy:

.. code-block:: shell-session

    kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
    Access denied

Finally, this last request will hang because the ``xwing`` pod does not have
the ``org=empire`` label required by policy. Press Control-C to kill the curl
request, or wait for it to time out:

.. code-block:: shell-session

    kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    command terminated with exit code 28

Let's now inspect this traffic using the CLI. The command below filters all
traffic on the application layer (L7, HTTP) to the ``deathstar`` pod:

.. code-block:: shell-session

    hubble observe --pod deathstar --protocol http
    May  4 13:23:40.501: default/tiefighter:42690 -> default/deathstar-c74d84667-cx5kp:80 http-request FORWARDED (HTTP/1.1 POST http://deathstar.default.svc.cluster.local/v1/request-landing)
    May  4 13:23:40.502: default/tiefighter:42690 <- default/deathstar-c74d84667-cx5kp:80 http-response FORWARDED (HTTP/1.1 200 0ms (POST http://deathstar.default.svc.cluster.local/v1/request-landing))
    May  4 13:23:43.791: default/tiefighter:42742 -> default/deathstar-c74d84667-cx5kp:80 http-request DROPPED (HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port)

The following command shows all traffic to the ``deathstar`` pod that has been
dropped:

.. code-block:: shell-session

    hubble observe --pod deathstar --verdict DROPPED
    May  4 13:23:43.791: default/tiefighter:42742 -> default/deathstar-c74d84667-cx5kp:80 http-request DROPPED (HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port)
    May  4 13:23:47.852: default/xwing:42818 <> default/deathstar-c74d84667-cx5kp:80 Policy denied DROPPED (TCP Flags: SYN)
    May  4 13:23:47.852: default/xwing:42818 <> default/deathstar-c74d84667-cx5kp:80 Policy denied DROPPED (TCP Flags: SYN)
    May  4 13:23:48.854: default/xwing:42818 <> default/deathstar-c74d84667-cx5kp:80 Policy denied DROPPED (TCP Flags: SYN)

Feel free to further inspect the traffic. To get help for the ``observe``
command, use ``hubble help observe``.

Filtering Encrypted Traffic
===========================

You can filter flows based on whether they are encrypted (via WireGuard or
IPsec) or not.

To see only encrypted flows:

.. code-block:: shell-session

    $ hubble observe --encrypted

To see only unencrypted flows:

.. code-block:: shell-session

    $ hubble observe --unencrypted

Next Steps
==========

* :ref:`hubble_api_tls`
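The plain-text flow lines shown earlier in this guide have a regular shape:
timestamp, source, direction, destination, event details, and verdict. The
sketch below pulls out fields from one of the sample lines above; it assumes
that text format stays stable, so for machine consumption the JSON output of
``hubble observe`` is the more robust choice.

.. code-block:: python

    # Sketch: extracting source, destination, and verdict from the plain-text
    # flow lines shown in this guide. Illustrative only; prefer JSON output
    # for scripting.
    import re

    FLOW = re.compile(
        r"^(?P<time>\w+ +\d+ [\d:.]+): "
        r"(?P<src>\S+) (?P<dir><>|->|<-) (?P<dst>\S+) "
        r"(?P<rest>.*?)(?P<verdict>FORWARDED|DROPPED)"
    )

    line = ("May  4 13:23:47.852: default/xwing:42818 <> "
            "default/deathstar-c74d84667-cx5kp:80 Policy denied DROPPED (TCP Flags: SYN)")
    m = FLOW.match(line)
    assert m is not None
    print(m.group("src"), m.group("verdict"))  # default/xwing:42818 DROPPED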
https://github.com/cilium/cilium/blob/main//Documentation/observability/hubble/hubble-cli.rst
.. _hubble_setup:

*******************************
Setting up Hubble Observability
*******************************

Hubble is the observability layer of Cilium and can be used to obtain
cluster-wide visibility into the network and security layer of your Kubernetes
cluster.

.. note::

   This guide assumes that Cilium has been correctly installed in your
   Kubernetes cluster. Please see :ref:`k8s_quick_install` for more
   information. If unsure, run ``cilium status`` and validate that Cilium is
   up and running.

Enable Hubble in Cilium
=======================

.. tip::

   Enabling Hubble requires the TCP port 4244 to be open on all nodes running
   Cilium. This is required for Relay to operate correctly.

.. tabs::

   .. group-tab:: Cilium CLI

      In order to enable Hubble and install Hubble Relay, run the command
      ``cilium hubble enable`` as shown below:

      .. code-block:: shell-session

          $ cilium hubble enable
          🔑 Found existing CA in secret cilium-ca
          ✨ Patching ConfigMap cilium-config to enable Hubble...
          ♻️ Restarted Cilium pods
          🔑 Generating certificates for Relay...
          2021/04/13 17:11:23 [INFO] generate received request
          2021/04/13 17:11:23 [INFO] received CSR
          2021/04/13 17:11:23 [INFO] generating key: ecdsa-256
          2021/04/13 17:11:23 [INFO] encoded CSR
          2021/04/13 17:11:23 [INFO] signed certificate with serial number 365589302067830033295858933512588007090526050046
          2021/04/13 17:11:24 [INFO] generate received request
          2021/04/13 17:11:24 [INFO] received CSR
          2021/04/13 17:11:24 [INFO] generating key: ecdsa-256
          2021/04/13 17:11:24 [INFO] encoded CSR
          2021/04/13 17:11:24 [INFO] signed certificate with serial number 644167683731852948186644541769558498727586273511
          ✨ Deploying Relay...

   .. group-tab:: Helm

      If you installed Cilium via ``helm install``, Hubble is enabled by
      default. You may enable Hubble Relay with the following command:

      .. cilium-helm-upgrade::
         :namespace: kube-system
         :extra-args: --reuse-values
         :set: hubble.relay.enabled=true

Run ``cilium status`` to validate that Hubble is enabled and running:

.. code-block:: shell-session

    $ cilium status
        /¯¯\
     /¯¯\__/¯¯\    Cilium:             OK
     \__/¯¯\__/    Operator:           OK
     /¯¯\__/¯¯\    Envoy DaemonSet:    OK
     \__/¯¯\__/    Hubble Relay:       OK
        \__/       ClusterMesh:        disabled

    DaemonSet              cilium             Desired: 1, Ready: 1/1, Available: 1/1
    DaemonSet              cilium-envoy       Desired: 1, Ready: 1/1, Available: 1/1
    Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
    Deployment             hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
    Containers:            cilium             Running: 1
                           cilium-envoy       Running: 1
                           cilium-operator    Running: 1
                           clustermesh-apiserver
                           hubble-relay       Running: 1
    Cluster Pods:          8/8 managed by Cilium
    Helm chart version:    1.17.0
    Image versions         cilium             quay.io/cilium/cilium:latest: 1
                           cilium-envoy       quay.io/cilium/cilium-envoy:v1.32.3-1739240299-e85e926b0fa4cec519cefff54b60bd7942d7871b@sha256:ced8a89d642d10d648471afc2d8737238f1479c368955e6f2553ded58029ac88: 1
                           cilium-operator    quay.io/cilium/operator-generic-ci:latest: 1
                           hubble-relay       quay.io/cilium/hubble-relay-ci:latest: 1

.. _hubble_cli_install:

Install the Hubble Client
=========================

In order to access the observability data collected by Hubble, you must first
install the Hubble CLI. Select the tab for your platform below and install the
latest release of the Hubble CLI.

.. tabs::

   .. group-tab:: Linux

      Download the latest hubble release:

      .. code-block:: shell-session

          HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/main/stable.txt)
          HUBBLE_ARCH=amd64
          if [ "$(uname -m)" = "aarch64" ]; then HUBBLE_ARCH=arm64; fi
          curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
          sha256sum --check hubble-linux-${HUBBLE_ARCH}.tar.gz.sha256sum
          sudo tar xzvfC hubble-linux-${HUBBLE_ARCH}.tar.gz /usr/local/bin
          rm hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}

   .. group-tab:: MacOS

      Download the latest hubble release:

      .. code-block:: shell-session

          HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/main/stable.txt)
          HUBBLE_ARCH=amd64
          if [ "$(uname -m)" = "arm64" ]; then HUBBLE_ARCH=arm64; fi
          curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
          shasum -a 256 -c hubble-darwin-${HUBBLE_ARCH}.tar.gz.sha256sum
          sudo tar xzvfC hubble-darwin-${HUBBLE_ARCH}.tar.gz /usr/local/bin
          rm hubble-darwin-${HUBBLE_ARCH}.tar.gz{,.sha256sum}

   .. group-tab:: Windows

      Download the latest hubble release:

      .. code-block:: shell-session

          curl -LO "https://raw.githubusercontent.com/cilium/hubble/main/stable.txt"
          set /p HUBBLE_VERSION=

.. code-block:: shell-session

    ...
    172.18.0.2:4244 (host) to-stack FORWARDED (TCP Flags: ACK, PSH)
    ...

.. note::

   If you port forward to a port other than ``4245``
   (``--port-forward-port PORT`` when using automatic port-forwarding), make
   sure to use the ``--server`` flag or ``HUBBLE_SERVER`` environment variable
   to set the Hubble server address (default: ``localhost:4245``).

For more information, check out Hubble CLI's help message by running ``hubble
help status`` or ``hubble help observe``, as well as ``hubble config`` for
configuring Hubble CLI.

.. note::

   If you have :ref:`enabled TLS` then you will need to specify additional
   flags to :ref:`access the Hubble API`.
https://github.com/cilium/cilium/blob/main//Documentation/observability/hubble/setup.rst
Troubleshooting Hubble Deployment
=================================

Validate the state of Hubble and/or Hubble Relay by running ``cilium status``:

.. code-block:: shell-session

    $ cilium status
        /¯¯\
     /¯¯\__/¯¯\    Cilium:             OK
     \__/¯¯\__/    Operator:           OK
     /¯¯\__/¯¯\    Envoy DaemonSet:    OK
     \__/¯¯\__/    Hubble Relay:       OK
        \__/       ClusterMesh:        disabled

    DaemonSet              cilium             Desired: 1, Ready: 1/1, Available: 1/1
    DaemonSet              cilium-envoy       Desired: 1, Ready: 1/1, Available: 1/1
    Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
    Deployment             hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
    Containers:            cilium             Running: 1
                           cilium-envoy       Running: 1
                           cilium-operator    Running: 1
                           clustermesh-apiserver
                           hubble-relay       Running: 1
    Cluster Pods:          8/8 managed by Cilium
    Helm chart version:    1.17.0
    Image versions         cilium             quay.io/cilium/cilium:latest: 1
                           cilium-envoy       quay.io/cilium/cilium-envoy:v1.32.3-1739240299-e85e926b0fa4cec519cefff54b60bd7942d7871b@sha256:ced8a89d642d10d648471afc2d8737238f1479c368955e6f2553ded58029ac88: 1
                           cilium-operator    quay.io/cilium/operator-generic-ci:latest: 1
                           hubble-relay       quay.io/cilium/hubble-relay-ci:latest: 1

Hubble Relay
------------

If Hubble Relay is enabled, ``cilium status`` should display ``OK`` for
``Hubble Relay``. Otherwise, we should expect to see errors/warnings reported:

.. code-block:: shell-session

    $ cilium status
        /¯¯\
     /¯¯\__/¯¯\    Cilium:             OK
     \__/¯¯\__/    Operator:           OK
     /¯¯\__/¯¯\    Envoy DaemonSet:    OK
     \__/¯¯\__/    Hubble Relay:       1 errors, 2 warnings
        \__/       ClusterMesh:        disabled

    DaemonSet              cilium             Desired: 1, Ready: 1/1, Available: 1/1
    DaemonSet              cilium-envoy       Desired: 1, Ready: 1/1, Available: 1/1
    Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
    Deployment             hubble-relay       Desired: 1, Unavailable: 1/1
    Containers:            cilium             Running: 1
                           cilium-envoy       Running: 1
                           cilium-operator    Running: 1
                           clustermesh-apiserver
                           hubble-relay       Pending: 1
    Cluster Pods:          8/8 managed by Cilium
    Helm chart version:    1.17.0
    Image versions         cilium             quay.io/cilium/cilium:latest: 1
                           cilium-envoy       quay.io/cilium/cilium-envoy:v1.32.3-1739240299-e85e926b0fa4cec519cefff54b60bd7942d7871b@sha256:ced8a89d642d10d648471afc2d8737238f1479c368955e6f2553ded58029ac88: 1
                           cilium-operator    quay.io/cilium/operator-generic-ci:latest: 1
                           hubble-relay       quay.io/cilium/hubble-relay-ci:latest: 1
    Errors:                hubble-relay       hubble-relay                    1 pods of Deployment hubble-relay are not ready
    Warnings:              hubble-relay       hubble-relay-85f98cc7df-s2lkq   pod is pending
                           hubble-relay       hubble-relay-85f98cc7df-s2lkq   pod is pending

.. tip::

   If warnings or errors are reported for both ``Cilium`` and
   ``Hubble Relay``, it often hints at a misconfiguration in Hubble or the
   Hubble system failing to start. Since Hubble is a non-critical system
   running in the Cilium Agent, the Cilium pods are expected to remain running
   and healthy even when Hubble fails to start. See the
   :ref:`hubble_setup_troubleshooting` section below for Hubble-specific
   troubleshooting steps.

Verify the state of the pods with:

.. code-block:: shell-session

    $ kubectl -n kube-system get pods -l k8s-app=hubble-relay
    NAME                           READY   STATUS             RESTARTS      AGE
    hubble-relay-6467f4f4d-x825b   0/1     CrashLoopBackOff   5 (19s ago)   7m28s

If one or more pods are in ``Pending`` state, describe the pod(s) with:

.. code-block:: shell-session

    $ kubectl describe -n kube-system pod/hubble-relay-6467f4f4d-x825b
    Name:         hubble-relay-6467f4f4d-x825b
    Namespace:    kube-system
    ...

If one or more pods are not in ``Running`` state, look at the pod(s) logs
with:

.. code-block:: shell-session

    $ kubectl -n kube-system logs hubble-relay-6467f4f4d-x825b
    time="2025-02-12T21:21:40.246596435Z" level=info msg="Starting gRPC health server..." addr=":4222" subsys=hubble-relay
    time="2025-02-12T21:21:40.246611018Z" level=info msg="Starting gRPC server..." options="{peerTarget:hubble-peer.kube-system.svc.cluster.local.:443 retryTimeout:30000000000 listenAddress::4245 healthListenAddress::4222 metricsListenAddress: log:0x400038fc00 serverTLSConfig: insecureServer:true clientTLSConfig:0x4000b12528 clusterName:cluster insecureClient:false observerOptions:[0x28cb1e0 0x28cb2e0] grpcMetrics: grpcUnaryInterceptors:[] grpcStreamInterceptors:[]}" subsys=hubble-relay
    time="2025-02-12T21:21:40.251658493Z" level=info msg="Failed to create peer notify client for peers change notification; will try again after the timeout has expired" connection timeout=30s error="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.96.49.4:443: connect: connection refused\"" subsys=hubble-relay
    time="2025-02-12T21:22:10.25956541Z" level=info msg="Failed to create peer notify client for peers change notification; will try again after the timeout has expired" connection timeout=30s error="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.96.49.4:443: connect: connection refused\"" subsys=hubble-relay
    time="2025-02-12T21:22:40.265123839Z" level=info msg="Failed to create peer notify client for peers change notification; will try again after the timeout has expired" connection timeout=30s error="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.96.49.4:443: connect: connection refused\"" subsys=hubble-relay
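The ``time=... level=... msg=...`` lines above are logfmt-style key=value
records, and when triaging it can help to pull out individual fields. The
sketch below handles only the quoted and unquoted value shapes seen in these
logs; a real parser would use a dedicated logfmt library.

.. code-block:: python

    # Sketch: extracting key=value fields from hubble-relay log lines like the
    # ones above. Handles bare values and double-quoted values with escaped
    # quotes only; not a full logfmt parser.
    import re

    PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    def parse(line: str) -> dict:
        return {k: v.strip('"') for k, v in PAIR.findall(line)}

    line = 'time="2025-02-12T21:22:49.056293486Z" level=info msg="Server stopped" subsys=hubble-relay'
    rec = parse(line)
    print(rec["level"], rec["msg"])  # info Server stopped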
https://github.com/cilium/cilium/blob/main//Documentation/observability/hubble/setup.rst
expired" connection timeout=30s error="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.96.49.4:443: connect: connection refused\"" subsys=hubble-relay time="2025-02-12T21:22:40.265123839Z" level=info msg="Failed to create peer notify client for peers change notification; will try again after the timeout has expired" connection timeout=30s error="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp 10.96.49.4:443: connect: connection refused\"" subsys=hubble-relay time="2025-02-12T21:22:49.055746359Z" level=info msg="Stopping server..." subsys=hubble-relay time="2025-02-12T21:22:49.056293486Z" level=info msg="Server stopped" subsys=hubble-relay If you face a ``connection refused`` error, it means that Hubble-Relay can't connect to the Hubble API exposed by Cilium agents through the ``hubble-peer`` service. See the :ref:`hubble\_setup\_troubleshooting` section below for Hubble-specific troubleshooting steps. For TLS related errors, see :ref:`Hubble TLS Troubleshooting`. .. \_hubble\_setup\_troubleshooting: Hubble ------ If Hubble is enabled, ``cilium status`` should display: ``OK`` for ``Cilium``. Otherwise, we should expect to see errors/warnings reported: .. 
code-block:: shell-session $ cilium status /¯¯\ /¯¯\\_\_/¯¯\ Cilium: 1 warnings \\_\_/¯¯\\_\_/ Operator: OK /¯¯\\_\_/¯¯\ Envoy DaemonSet: OK \\_\_/¯¯\\_\_/ Hubble Relay: 1 errors \\_\_/ ClusterMesh: disabled DaemonSet cilium Desired: 1, Ready: 1/1, Available: 1/1 DaemonSet cilium-envoy Desired: 1, Ready: 1/1, Available: 1/1 Deployment cilium-operator Desired: 1, Ready: 1/1, Available: 1/1 Deployment hubble-relay Desired: 1, Unavailable: 1/1 Containers: cilium Running: 1 cilium-envoy Running: 1 cilium-operator Running: 1 clustermesh-apiserver hubble-relay Running: 1 Cluster Pods: 8/8 managed by Cilium Helm chart version: 1.17.0 Image versions cilium quay.io/cilium/cilium:latest: 1 cilium-envoy quay.io/cilium/cilium-envoy:v1.32.3-1739240299-e85e926b0fa4cec519cefff54b60bd7942d7871b@sha256:ced8a89d642d10d648471afc2d8737238f1479c368955e6f2553ded58029ac88: 1 cilium-operator quay.io/cilium/operator-generic-ci:latest: 1 hubble-relay quay.io/cilium/hubble-relay-ci:latest: 1 Errors: hubble-relay hubble-relay 1 pods of Deployment hubble-relay are not ready Warnings: cilium cilium-5bjkq Hubble: failed to setup metrics: metric 'unknown-metric' does not exist Verify the state of the pods with: .. code-block:: shell-session $ kubectl -n kube-system get pods -l k8s-app=cilium NAME READY STATUS RESTARTS AGE cilium-5bjkq 1/1 Running 1 (18m ago) 33m If one or more pods are in ``Pending`` state, describe the pod(s) with: .. code-block:: shell-session $ kubectl describe -n kube-system pod/cilium-5bjkq Name: cilium-5bjkq Namespace: kube-system ... If one or more pods are not in ``Running`` state, look at the pod(s) logs with: .. 
code-block:: shell-session $ kubectl logs -n kube-system -c cilium-agent -l k8s-app=cilium --tail=-1 | grep subsys=hubble time="2025-02-12T22:12:01.227357082Z" level=info msg="Starting Hubble Metrics server" address=":9965" metrics=unknown-metric subsys=hubble tls=false time="2025-02-12T22:12:01.22740229Z" level=error msg="Failed to launch hubble" error="failed to setup metrics: metric 'unknown-metric' does not exist" subsys=hubble Next Steps ========== \* :ref:`hubble\_cli` \* :ref:`hubble\_ui` \* :ref:`hubble\_enable\_tls`
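For the ``unknown-metric`` example above, the fix is to correct the list of Hubble metrics the agent is started with. A minimal sketch using Helm (the metric names shown are common built-in ones, and ``--reuse-values`` assumes Cilium was originally installed via Helm):

```shell
# Replace the invalid metric name with valid built-in ones, then restart
# the agents so they pick up the new configuration.
helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
  --set hubble.metrics.enabled="{dns,drop,tcp,flow}"
kubectl -n kube-system rollout restart ds/cilium
kubectl -n kube-system rollout status ds/cilium
```

After the rollout completes, ``cilium status`` should no longer report the Hubble metrics warning.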
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_hubble\_enable\_tls: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Configure TLS with Hubble \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* This page provides guidance to configure Hubble with TLS in a way that suits your environment. Instructions to enable Hubble are provided as part of each Cilium :ref:`getting\_started` guide. Enable TLS on the Hubble API ============================ When Hubble Relay is deployed, Hubble listens on a TCP port on the host network. This allows Hubble Relay to communicate with all Hubble instances in the cluster. Connections between Hubble instances and Hubble Relay are secured using mutual TLS (mTLS) by default. TLS certificates can be generated automatically or manually provided. The following options are available to configure TLS certificates automatically: \* Cilium's `certgen `\_\_ (using a Kubernetes ``CronJob``) \* `cert-manager `\_\_ \* `Helm `\_\_ Each of these methods handles certificate rotation differently, but the end result is that the secrets containing the key pair will be updated. As Hubble server and Hubble Relay support TLS certificate hot reloading, including CA certificates, this does not disrupt any existing connections. New connections are automatically established using the new certificates without having to restart Hubble server or Hubble Relay. .. tabs:: .. group-tab:: CronJob (certgen) When using certgen, TLS certificates are generated at installation time and a Kubernetes ``CronJob`` is scheduled to renew them (regardless of their expiration date). The certgen method is easier to implement than cert-manager but less flexible.
:: --set hubble.tls.auto.enabled=true # enable automatic TLS certificate generation --set hubble.tls.auto.method=cronJob # auto generate certificates using cronJob method --set hubble.tls.auto.certValidityDuration=1095 # certificates validity duration in days (default 3 years) --set hubble.tls.auto.schedule="0 0 1 \*/4 \*" # schedule for certificates re-generation (crontab syntax) .. group-tab:: cert-manager This method relies on `cert-manager `\_\_ to generate the TLS certificates. cert-manager has become the de facto way to manage TLS on Kubernetes, and it has the following advantages compared to the other documented methods: \* Support for multiple issuers (e.g. a custom CA, `Vault `\_\_, `Let's Encrypt `\_\_, `Google's Certificate Authority Service `\_\_, and more), allowing you to choose an issuer that fits your organization's requirements. \* Manages certificates via a `CRD `\_\_, which is easier to inspect with Kubernetes tools than PEM files. \*\*Installation steps\*\*: #. First, install `cert-manager `\_\_ and set up an `issuer `\_. Please make sure that your issuer is able to create certificates under the ``cilium.io`` domain name. #. Install/upgrade Cilium including the following Helm flags: :: --set hubble.tls.auto.enabled=true # enable automatic TLS certificate generation --set hubble.tls.auto.method=certmanager # auto generate certificates using cert-manager --set hubble.tls.auto.certValidityDuration=1095 # certificates validity duration in days (default 3 years) --set hubble.tls.auto.certManagerIssuerRef.group="cert-manager.io" # Reference to cert-manager's issuer --set hubble.tls.auto.certManagerIssuerRef.kind="ClusterIssuer" --set hubble.tls.auto.certManagerIssuerRef.name="ca-issuer" .. group-tab:: Helm When using Helm, TLS certificates are (re-)generated every time Helm is used for install or upgrade.
:: --set hubble.tls.auto.enabled=true # enable automatic TLS certificate generation --set hubble.tls.auto.method=helm # auto generate certificates using helm method --set hubble.tls.auto.certValidityDuration=1095 # certificates validity duration in days (default 3 years) The downside of the Helm method is that while certificates are automatically generated, they are not automatically renewed. Consequently, running ``helm upgrade`` is required when certificates are about to expire (i.e. before the configured ``hubble.tls.auto.certValidityDuration``). .. group-tab:: User Provided Certificates In order to provide your own TLS certificates, ``hubble.tls.auto.enabled`` must be set to ``false``, secrets containing the certificates must be created in the ``kube-system`` namespace, and the secret names must be provided to Helm. Provided files must be \*\*base64 encoded\*\* PEM certificates. In addition, the \*\*Common Name (CN)\*\* and \*\*Subject Alternative Name (SAN)\*\* of the certificate for Hubble server MUST be
https://github.com/cilium/cilium/blob/main//Documentation/observability/hubble/configuration/tls.rst
set to ``\*.{cluster-name}.hubble-grpc.cilium.io`` where ``{cluster-name}`` is the cluster name defined by ``cluster.name`` (defaults to ``default``). Once the certificates have been issued, the secrets must be created in the ``kube-system`` namespace. Each secret must contain the following keys: - ``tls.crt``: The certificate file. - ``tls.key``: The private key file. - ``ca.crt``: The CA certificate file. The following examples demonstrate how to create the secrets. Create the Hubble server certificate secret: .. code-block:: shell-session $ kubectl -n kube-system create secret generic hubble-server-certs --from-file=tls.crt=hubble-server.crt --from-file=tls.key=hubble-server.key --from-file=ca.crt If hubble-relay is enabled, the following secrets must be created: .. code-block:: shell-session $ kubectl -n kube-system create secret generic hubble-relay-server-certs --from-file=tls.crt=hubble-relay-server.crt --from-file=tls.key=hubble-relay-server.key --from-file=ca.crt $ kubectl -n kube-system create secret generic hubble-relay-client-certs --from-file=tls.crt=hubble-relay-client.crt --from-file=tls.key=hubble-relay-client.key --from-file=ca.crt If hubble-ui is enabled, the following secret must be created: .. code-block:: shell-session $ kubectl -n kube-system create secret generic hubble-ui-client-certs --from-file=tls.crt=hubble-ui-client.crt --from-file=tls.key=hubble-ui-client.key --from-file=ca.crt Lastly, if the Hubble metrics API is enabled, the following secret must be created: ..
code-block:: shell-session $ kubectl -n kube-system create secret generic hubble-metrics-certs --from-file=hubble-metrics.crt --from-file=hubble-metrics.key --from-file=ca.crt After the secrets have been created, the secret names must be provided to Helm and automatic certificate generation must be disabled: :: --set hubble.tls.auto.enabled=false # Disable automatic TLS certificate generation --set hubble.tls.server.existingSecret="hubble-server-certs" --set hubble.relay.tls.server.enabled=true # Enable TLS on Hubble Relay (optional) --set hubble.relay.tls.server.existingSecret="hubble-relay-server-certs" --set hubble.relay.tls.client.existingSecret="hubble-relay-client-certs" --set hubble.ui.tls.client.existingSecret="hubble-ui-client-certs" --set hubble.metrics.tls.enabled=true # Enable TLS on the Hubble metrics API (optional) --set hubble.metrics.tls.server.existingSecret="hubble-metrics-certs" - ``hubble.relay.tls.server.existingSecret`` and ``hubble.ui.tls.client.existingSecret`` only need to be provided when ``hubble.relay.tls.server.enabled=true`` (default ``false``). - ``hubble.ui.tls.client.existingSecret`` only needs to be provided when ``hubble.ui.enabled`` (default ``false``). - ``hubble.metrics.tls.server.existingSecret`` only needs to be provided when ``hubble.metrics.tls.enabled`` (default ``false``). For more details on configuring the Hubble metrics API with TLS, see :ref:`hubble\_configure\_metrics\_tls`. .. \_hubble\_enable\_tls\_troubleshooting: Troubleshooting --------------- If you encounter issues after enabling TLS, you can use the following instructions to help diagnose the problem. .. tabs:: .. 
group-tab:: cert-manager While installing Cilium or cert-manager you may get the following error: :: Error: Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": dial tcp x.x.x.x:443: connect: connection refused This happens when cert-manager's webhook (which is used to verify ``Certificate`` CRD resources) is not available. There are several ways to resolve this issue. Pick one of the following options: .. tabs:: .. tab:: Install CRDs first Install cert-manager CRDs before Cilium and cert-manager (see `cert-manager's documentation about installing CRDs with kubectl `\_\_): .. code-block:: shell-session $ kubectl create -f cert-manager.crds.yaml Then install cert-manager, configure an issuer, and install Cilium. .. tab:: Upgrade Cilium Install Cilium with TLS disabled: .. code-block:: shell-session $ helm install cilium cilium/cilium \ --set hubble.tls.enabled=false \ ... Then install cert-manager, configure an issuer, and upgrade Cilium enabling TLS: .. code-block:: shell-session $ helm upgrade cilium cilium/cilium --set hubble.tls.enabled=true .. tab:: Disable webhook Disable cert-manager validation (assuming Cilium is installed in the ``kube-system`` namespace): .. code-block:: shell-session $ kubectl label namespace kube-system cert-manager.io/disable-validation=true Then install Cilium, cert-manager, and configure an issuer. .. tab:: Host network webhook Configure cert-manager to expose its webhook within the host network namespace: .. code-block:: shell-session $ helm install cert-manager jetstack/cert-manager \ --set webhook.hostNetwork=true \ --set webhook.tolerations='[{"operator": "Exists"}]' Then configure an issuer and install Cilium. .. group-tab:: Helm When using Helm, certificates are not automatically renewed. If you encounter issues with expired certificates, you can manually renew them by running ``helm upgrade`` to renew the
certificates. .. group-tab:: User Provided Certificates If you encounter issues with the certificates, you can check the certificates and keys by decoding them: .. code-block:: shell-session $ kubectl -n kube-system get secret hubble-server-certs -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -text -noout $ kubectl -n kube-system get secret hubble-server-certs -o jsonpath='{.data.tls\.key}' | base64 -d | openssl pkey -text -noout $ kubectl -n kube-system get secret hubble-server-certs -o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -text -noout The same commands can be used for the other secrets as well. If hubble-relay is enabled but not responding or the pod is failing its readiness probe, check the certificates and ensure the client certificate is issued by the CA (``ca.crt``) specified in the ``hubble-server-certs`` secret. Additionally, ensure the \*\*Common Name (CN)\*\* and \*\*Subject Alternative Name (SAN)\*\* of the certificate for the Hubble server are set to ``\*.{cluster-name}.hubble-grpc.cilium.io``, where ``{cluster-name}`` is the cluster name defined by ``cluster.name`` (defaults to ``default``). Validating the Installation --------------------------- The following section guides you through validating that TLS is enabled for Hubble and that the connection between Hubble Relay and Hubble Server is using mTLS to secure the session. The commands below can also be used to troubleshoot your TLS configuration if you encounter any issues.
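To make the CN/SAN requirement above concrete, the following self-contained OpenSSL sketch issues a throwaway CA and a server certificate carrying the required wildcard name (assuming ``cluster.name=default``; all file names are illustrative):

```shell
# Throwaway CA (ECDSA P-256, the key type shown in the example output below)
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -days 1095 -subj "/CN=Cilium CA" -keyout ca.key -out ca.crt

# Server key and CSR with the mandatory wildcard Common Name
openssl req -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -subj "/CN=*.default.hubble-grpc.cilium.io" \
  -keyout hubble-server.key -out hubble-server.csr

# Sign it, adding the matching wildcard Subject Alternative Name
printf 'subjectAltName=DNS:*.default.hubble-grpc.cilium.io\n' > san.cnf
openssl x509 -req -in hubble-server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 1095 -extfile san.cnf -out hubble-server.crt

# The result should verify against the CA and carry the wildcard SAN
openssl verify -CAfile ca.crt hubble-server.crt
openssl x509 -in hubble-server.crt -noout -text | grep -A1 'Subject Alternative Name'
```

The resulting ``hubble-server.crt``, ``hubble-server.key``, and ``ca.crt`` files match the shape expected by the secret-creation commands in the previous section.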
Before beginning verify TLS has been configured correctly by running the following command: .. code-block:: shell-session $ kubectl get configmap -n kube-system cilium-config -oyaml | grep hubble-disable-tls hubble-disable-tls: "false" You should see that the ``hubble-disable-tls`` configuration option is set to ``false``. Start by creating a Hubble CLI pod within the namespace that Hubble components are running in (for example: ``kube-system``): .. code-block:: shell-session $ kubectl apply -n kube-system -f https://raw.githubusercontent.com/cilium/cilium/main/examples/hubble/hubble-cli.yaml List Hubble Servers by running ``hubble watch peers`` within the newly created pod: .. code-block:: shell-session $ kubectl exec -it -n kube-system deployment/hubble-cli -- \ hubble watch peers --server unix:///var/run/cilium/hubble.sock PEER\_ADDED 172.18.0.2 kind-worker (TLS.ServerName: kind-worker.default.hubble-grpc.cilium.io) PEER\_ADDED 172.18.0.3 kind-control-plane (TLS.ServerName: kind-control-plane.kind.hubble-grpc.cilium.io) Copy the IP and the server name of the first peer into the following environment variables for the next steps: .. note:: If the ``TLS.ServerName`` is missing from your output then TLS is not enabled for the Hubble server and the following steps will not work. If this is the case, please refer to the previous sections to enable TLS. .. code-block:: shell-session $ IP=172.18.0.2 $ SERVERNAME=kind-worker.default.hubble-grpc.cilium.io Connect to the first peer with the Hubble Relay client certificate to confirm that the Hubble server is accepting connections from clients who present the correct certificate: .. 
code-block:: shell-session $ kubectl exec -it -n kube-system deployment/hubble-cli -- \ hubble observe --server tls://${IP?}:4244 \ --tls-server-name ${SERVERNAME?} \ --tls-ca-cert-files /var/lib/hubble-relay/tls/hubble-server-ca.crt \ --tls-client-cert-file /var/lib/hubble-relay/tls/client.crt \ --tls-client-key-file /var/lib/hubble-relay/tls/client.key Dec 13 08:49:58.888: 10.20.1.124:60588 (host) -> kube-system/coredns-565d847f94-pp8zs:8181 (ID:7518) to-endpoint FORWARDED (TCP Flags: SYN) Dec 13 08:49:58.888: 10.20.1.124:36308 (host) <- kube-system/coredns-565d847f94-pp8zs:8080 (ID:7518) to-stack FORWARDED (TCP Flags: SYN, ACK) Dec 13 08:49:58.888: 10.20.1.124:60588 (host) <- kube-system/coredns-565d847f94-pp8zs:8181 (ID:7518) to-stack FORWARDED (TCP Flags: SYN, ACK) ... ... Now try to query the Hubble server without providing any client certificate: .. code-block:: shell-session $ kubectl exec -it -n kube-system deployment/hubble-cli -- \ hubble observe --server tls://${IP?}:4244 \ --tls-server-name ${SERVERNAME?} \ --tls-ca-cert-files /var/lib/hubble-relay/tls/hubble-server-ca.crt failed to connect to '172.18.0.2:4244': context deadline exceeded: connection error: desc = "error reading server preface: remote error: tls: certificate required" command terminated with exit code 1 You can also
try to connect without TLS: .. code-block:: shell-session $ kubectl exec -it -n kube-system deployment/hubble-cli -- \ hubble observe --server ${IP?}:4244 failed to connect to '172.18.0.2:4244': context deadline exceeded: connection error: desc = "error reading server preface: EOF" command terminated with exit code 1 To troubleshoot the connection, install OpenSSL in the Hubble CLI pod: .. code-block:: shell-session $ kubectl exec -it -n kube-system deployment/hubble-cli -- apk add --update openssl Then, use OpenSSL to connect to the Hubble server and get more details about the TLS handshake: ..
code-block:: shell-session $ kubectl exec -it -n kube-system deployment/hubble-cli -- \ openssl s\_client -showcerts -servername ${SERVERNAME} -connect ${IP?}:4244 \ -CAfile /var/lib/hubble-relay/tls/hubble-server-ca.crt CONNECTED(00000004) depth=1 C = US, ST = San Francisco, L = CA, O = Cilium, OU = Cilium, CN = Cilium CA verify return:1 depth=0 CN = \*.default.hubble-grpc.cilium.io verify return:1 --- Certificate chain 0 s:CN = \*.default.hubble-grpc.cilium.io i:C = US, ST = San Francisco, L = CA, O = Cilium, OU = Cilium, CN = Cilium CA a:PKEY: id-ecPublicKey, 256 (bit); sigalg: ecdsa-with-SHA256 v:NotBefore: Aug 15 17:39:00 2024 GMT; NotAfter: Aug 15 17:39:00 2027 GMT -----BEGIN CERTIFICATE----- MIICNzCCAd2gAwIBAgIUAlgykDuc1J+mzseHS0pREX6Uv3cwCgYIKoZIzj0EAwIw aDELMAkGA1UEBhMCVVMxFjAUBgNVBAgTDVNhbiBGcmFuY2lzY28xCzAJBgNVBAcT AkNBMQ8wDQYDVQQKEwZDaWxpdW0xDzANBgNVBAsTBkNpbGl1bTESMBAGA1UEAxMJ Q2lsaXVtIENBMB4XDTI0MDgxNTE3MzkwMFoXDTI3MDgxNTE3MzkwMFowKjEoMCYG A1UEAwwfKi5kZWZhdWx0Lmh1YmJsZS1ncnBjLmNpbGl1bS5pbzBZMBMGByqGSM49 AgEGCCqGSM49AwEHA0IABGjtY50MM21TolEy5RUrBa6WqHsw7PjNB3MhYLCsuJmO aQ1tIy6J2e7a9Cw2jmBlyj+dL8g0YLhRQX4n+leItSSjgaIwgZ8wDgYDVR0PAQH/ BAQDAgWgMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHQYDVR0O BBYEFCDf5epVs8yyyZCdtBzc90HrQzpFMB8GA1UdIwQYMBaAFDKuJMmhNPJ71FvB AyHEMztI62NbMCoGA1UdEQQjMCGCHyouZGVmYXVsdC5odWJibGUtZ3JwYy5jaWxp dW0uaW8wCgYIKoZIzj0EAwIDSAAwRQIhAP0kyl0Eb7FBQw1uZE+LWnRyr5GDsB3+ 6rA/Rx042XZgAiBZML3lOW60tWMI1Pyn4cR4trFbzZpsUSwnQmOAb+paEw== -----END CERTIFICATE----- --- Server certificate subject=CN = \*.default.hubble-grpc.cilium.io issuer=C = US, ST = San Francisco, L = CA, O = Cilium, OU = Cilium, CN = Cilium CA --- Acceptable client certificate CA names C = US, ST = San Francisco, L = CA, O = Cilium, OU = Cilium, CN = Cilium CA Requested Signature Algorithms: RSA-PSS+SHA256:ECDSA+SHA256:Ed25519:RSA-PSS+SHA384:RSA-PSS+SHA512:RSA+SHA256:RSA+SHA384:RSA+SHA512:ECDSA+SHA384:ECDSA+SHA512:RSA+SHA1:ECDSA+SHA1 Shared Requested Signature Algorithms: 
RSA-PSS+SHA256:ECDSA+SHA256:Ed25519:RSA-PSS+SHA384:RSA-PSS+SHA512:RSA+SHA256:RSA+SHA384:RSA+SHA512:ECDSA+SHA384:ECDSA+SHA512 Peer signing digest: SHA256 Peer signature type: ECDSA Server Temp Key: X25519, 253 bits --- SSL handshake has read 1106 bytes and written 437 bytes Verification: OK --- New, TLSv1.3, Cipher is TLS\_AES\_128\_GCM\_SHA256 Server public key is 256 bit This TLS version forbids renegotiation. No ALPN negotiated Early data was not sent Verify return code: 0 (ok) --- 08EBFFFFFF7F0000:error:0A00045C:SSL routines:ssl3\_read\_bytes:tlsv13 alert certificate required:ssl/record/rec\_layer\_s3.c:1605:SSL alert number 116 command terminated with exit code 1 Breaking the output down: - ``Server certificate``: This is the server certificate presented by the server. - ``Acceptable client certificate CA names``: These are the CA names that the server accepts for client certificates. - ``SSL handshake has read 1106 bytes and written 437 bytes``: Details on the handshake. Errors could be presented here if any occurred. - ``Verification: OK``: The server certificate is valid. - ``Verify return code: 0 (ok)``: The server certificate was verified successfully. - ``error:0A00045C:SSL routines:ssl3\_read\_bytes:tlsv13 alert certificate required``: The server requires a client certificate to be provided. Since a client certificate was not provided, the connection failed. If you provide the correct client certificate and key, the connection should be successful: ..
code-block:: shell-session $ kubectl exec -i -n kube-system deployment/hubble-cli -- \ openssl s\_client -showcerts -servername ${SERVERNAME} -connect ${IP?}:4244 \ -CAfile /var/lib/hubble-relay/tls/hubble-server-ca.crt \ -cert /var/lib/hubble-relay/tls/client.crt \ -key /var/lib/hubble-relay/tls/client.key CONNECTED(00000004) depth=1 C = US, ST = San Francisco, L = CA, O = Cilium, OU = Cilium, CN = Cilium CA verify return:1 depth=0 CN = \*.default.hubble-grpc.cilium.io verify return:1 --- Certificate chain 0 s:CN = \*.default.hubble-grpc.cilium.io i:C = US, ST = San Francisco, L = CA, O = Cilium, OU = Cilium, CN = Cilium CA a:PKEY: id-ecPublicKey, 256 (bit); sigalg: ecdsa-with-SHA256 v:NotBefore: Aug 15 17:39:00 2024 GMT; NotAfter: Aug 15 17:39:00 2027
GMT -----BEGIN CERTIFICATE----- MIICNzCCAd2gAwIBAgIUAlgykDuc1J+mzseHS0pREX6Uv3cwCgYIKoZIzj0EAwIw aDELMAkGA1UEBhMCVVMxFjAUBgNVBAgTDVNhbiBGcmFuY2lzY28xCzAJBgNVBAcT AkNBMQ8wDQYDVQQKEwZDaWxpdW0xDzANBgNVBAsTBkNpbGl1bTESMBAGA1UEAxMJ Q2lsaXVtIENBMB4XDTI0MDgxNTE3MzkwMFoXDTI3MDgxNTE3MzkwMFowKjEoMCYG A1UEAwwfKi5kZWZhdWx0Lmh1YmJsZS1ncnBjLmNpbGl1bS5pbzBZMBMGByqGSM49 AgEGCCqGSM49AwEHA0IABGjtY50MM21TolEy5RUrBa6WqHsw7PjNB3MhYLCsuJmO aQ1tIy6J2e7a9Cw2jmBlyj+dL8g0YLhRQX4n+leItSSjgaIwgZ8wDgYDVR0PAQH/ BAQDAgWgMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHQYDVR0O BBYEFCDf5epVs8yyyZCdtBzc90HrQzpFMB8GA1UdIwQYMBaAFDKuJMmhNPJ71FvB AyHEMztI62NbMCoGA1UdEQQjMCGCHyouZGVmYXVsdC5odWJibGUtZ3JwYy5jaWxp dW0uaW8wCgYIKoZIzj0EAwIDSAAwRQIhAP0kyl0Eb7FBQw1uZE+LWnRyr5GDsB3+ 6rA/Rx042XZgAiBZML3lOW60tWMI1Pyn4cR4trFbzZpsUSwnQmOAb+paEw== -----END CERTIFICATE----- --- Server certificate subject=CN = \*.default.hubble-grpc.cilium.io issuer=C = US, ST = San Francisco, L = CA, O = Cilium, OU = Cilium, CN = Cilium CA --- Acceptable client certificate CA names C = US, ST = San Francisco, L = CA, O = Cilium, OU = Cilium, CN = Cilium CA Requested Signature Algorithms: RSA-PSS+SHA256:ECDSA+SHA256:Ed25519:RSA-PSS+SHA384:RSA-PSS+SHA512:RSA+SHA256:RSA+SHA384:RSA+SHA512:ECDSA+SHA384:ECDSA+SHA512:RSA+SHA1:ECDSA+SHA1 Shared Requested Signature Algorithms: RSA-PSS+SHA256:ECDSA+SHA256:Ed25519:RSA-PSS+SHA384:RSA-PSS+SHA512:RSA+SHA256:RSA+SHA384:RSA+SHA512:ECDSA+SHA384:ECDSA+SHA512 Peer signing digest: SHA256 Peer signature type: ECDSA Server Temp Key: X25519, 253 bits --- SSL handshake has read 1106 bytes and written 1651 bytes Verification: OK --- New, TLSv1.3, Cipher is
TLS\_AES\_128\_GCM\_SHA256 Server public key is 256 bit This TLS version forbids renegotiation. No ALPN negotiated Early data was not sent Verify return code: 0 (ok) --- --- Post-Handshake New Session Ticket arrived: SSL-Session: Protocol : TLSv1.3 Cipher : TLS\_AES\_128\_GCM\_SHA256 Session-ID: 9ADFAFBDFFB876A9A8D4CC025470168D25485FF51929615199E9561F46FBF97B Session-ID-ctx: Resumption PSK: 58DD7621E7B353BD5C6FC3AAB5A907FF3D3251FAA184D28D2C69560E96806495 PSK identity: None PSK identity hint: None SRP username: None TLS session ticket lifetime hint: 604800 (seconds) TLS session ticket: 0000 - 55 93 99 70 30 37 6a 77-43 d7 0c 34 9f 24 51 40 U..p07jwC..4.$Q@ ... ... 0690 - 11 6d 26 ec 99 3a 6e a9-56 c9 ad a0 49 e2 f5 6a .m&..:n.V...I..j Press ``ctrl-d`` to signal the TLS session and connection should be terminated. After the session has ended you will see output similar to the following: .. code-block:: shell-session @DONE 06a0 - bf eb 8b 1d 8d 43 46 2a-07 02 e1 44 35 45 b1 a0 .....CF\*...D5E.. 06b0 - 7d bb 27 2f 1a 35 b2 da-0d 00 15 fd 6c 1f 00 3b }.'/.5......l..; 06c0 - 9a 6e ff c9 5d ad 6b af-f7 20 39 99 5b ae 72 03 .n..].k.. 9.[.r. 06d0 - c8 2d 93 7a e5 a7 e0 d5-70 95 8f b5 0b 56 9c .-.z....p....V. Start Time: 1723744378 Timeout : 7200 (sec) Verify return code: 0 (ok) Extended master secret: no Max Early Data: 0 --- read R BLOCK The output of this OpenSSL command is similar to the previous output, but without the error message. There is also an additional section, starting with ``Post-Handshake New Session Ticket arrived``, the presence of which indicates that the client certificate is valid and a TLS session was established. The summary of the TLS session printed after the connection has ended can also be used as an indicator of the established TLS session. .. 
\_hubble\_configure\_metrics\_tls: Hubble Metrics TLS and Authentication ===================================== Starting with Cilium 1.16, Hubble supports configuring TLS on the Hubble metrics API in addition to the Hubble observer API. This can be done by specifying the following options to Helm at install or upgrade time, along with the TLS configuration options described in the previous section. .. note:: This section assumes that you have already enabled :ref:`Hubble metrics`. To enable TLS on the Hubble metrics API, add the following Helm flag to your list of options: :: --set hubble.metrics.tls.enabled=true # Enable TLS on the Hubble metrics API If you also want to enable authentication using mTLS on the Hubble metrics API, first create a ConfigMap with a CA certificate to use for verifying client certificates: :: kubectl -n kube-system create configmap hubble-metrics-ca --from-file=ca.crt Then, add the following flags to your Helm command
to enable mTLS: :: --set hubble.metrics.tls.enabled=true # Enable TLS on the Hubble metrics API --set hubble.metrics.tls.server.mtls.enabled=true # Enable mTLS authentication on the Hubble metrics API --set hubble.metrics.tls.server.mtls.name=hubble-metrics-ca # Use the CA certificate from the ConfigMap After the configuration is applied, clients will be required to authenticate using a certificate signed by the configured CA certificate to access the Hubble metrics API. .. note:: When using TLS with the Hubble metrics API you will need to update your Prometheus scrape configuration to use HTTPS by setting a ``tls\_config`` and providing the path to the CA certificate. When using mTLS you will also need to provide a client certificate and key signed by the CA certificate for Prometheus to authenticate to the Hubble metrics API. .. \_hubble\_api\_tls: Access the Hubble API with TLS Enabled ====================================== The examples are adapted from :ref:`hubble\_cli`. Before you can access the Hubble API with TLS enabled, you need to obtain the CA certificate from the secret that was created when enabling TLS. The following examples demonstrate how to obtain the CA certificate and use it to access the Hubble API. Run the following command to obtain the CA certificate from the ``hubble-relay-server-certs`` secret: .. code-block:: shell-session $ kubectl -n kube-system get secret hubble-relay-server-certs -o jsonpath='{.data.ca\.crt}' | base64 -d > hubble-ca.crt After obtaining the CA certificate, you can use the ``--tls`` flag to enable TLS and the ``--tls-ca-cert-files`` flag to specify the CA certificate.
Additionally, when port-forwarding to Hubble Relay, you will need to specify
the ``--tls-server-name`` flag:

.. code-block:: shell-session

    $ hubble observe --tls --tls-ca-cert-files ./hubble-ca.crt --tls-server-name hubble.hubble-relay.cilium.io --pod deathstar --protocol http
    May  4 13:23:40.501: default/tiefighter:42690 -> default/deathstar-c74d84667-cx5kp:80 http-request FORWARDED (HTTP/1.1 POST http://deathstar.default.svc.cluster.local/v1/request-landing)
    May  4 13:23:40.502: default/tiefighter:42690 <- default/deathstar-c74d84667-cx5kp:80 http-response FORWARDED (HTTP/1.1 200 0ms (POST http://deathstar.default.svc.cluster.local/v1/request-landing))
    May  4 13:23:43.791: default/tiefighter:42742 -> default/deathstar-c74d84667-cx5kp:80 http-request DROPPED (HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port)

To persist these options for the shell session, set the following environment
variables:

.. code-block:: shell-session

    $ export HUBBLE_TLS=true
    $ export HUBBLE_TLS_CA_CERT_FILES=./hubble-ca.crt
    $ export HUBBLE_TLS_SERVER_NAME=hubble.hubble-relay.cilium.io
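For the Prometheus side mentioned in the earlier note about the Hubble
metrics API, a scrape configuration along these lines is needed. This is a
minimal sketch: the job name, target address, and certificate mount paths
are illustrative assumptions, not values shipped by the Helm chart.

```yaml
scrape_configs:
  - job_name: hubble-metrics            # hypothetical job name
    scheme: https                       # required once TLS is enabled
    static_configs:
      - targets: ["hubble-metrics.example.internal:9965"]  # placeholder target
    tls_config:
      ca_file: /etc/prometheus/tls/hubble-metrics-ca.crt   # CA used to verify the server
      # Only needed when mTLS is enabled on the metrics API:
      cert_file: /etc/prometheus/tls/client.crt
      key_file: /etc/prometheus/tls/client.key
```

The ``cert_file``/``key_file`` pair must be signed by the CA configured in
the ``hubble-metrics-ca`` ConfigMap for the server to accept the scrape.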
https://github.com/cilium/cilium/blob/main//Documentation/observability/hubble/configuration/tls.rst
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

***************************
Configuring Hubble exporter
***************************

**Hubble Exporter** is a feature of ``cilium-agent`` that lets you write
Hubble flows to a file for later consumption as logs. Hubble Exporter
supports file rotation, size limits, filters, and field masks.

Prerequisites
=============

.. include:: /installation/k8s-install-download-release.rst

Basic Configuration
===================

Setup
-----

**Hubble Exporter** is enabled with a ConfigMap property. It is disabled
until you set a file path value for ``hubble-export-file-path``. You can use
Helm to install Cilium with Hubble Exporter enabled:

.. cilium-helm-install::
   :set: hubble.enabled=true hubble.export.static.enabled=true hubble.export.static.filePath=/var/run/cilium/hubble/events.log

Wait for the ``cilium`` pod to become ready:

.. code-block:: shell-session

    kubectl -n kube-system rollout status ds/cilium

Verify that flow logs are stored in the target files:

.. code-block:: shell-session

    kubectl -n kube-system exec ds/cilium -- tail -f /var/run/cilium/hubble/events.log

Once you have configured the Hubble Exporter, you can configure your logging
solution to consume logs from your Hubble export file path. To get Hubble
flows exported directly to the logs instead of written to a rotated file,
set ``hubble-export-file-path`` to ``stdout``.

To disable the static configuration, you must remove the
``hubble-export-file-path`` key from the ``cilium-config`` ConfigMap and
manually clean up the log files created in the specified location in the
container. The command below restarts the Cilium pods for you; if you edit
the ConfigMap manually, you will need to restart the Cilium pods yourself.

.. code-block:: shell-session

    cilium config delete hubble-export-file-path

Configuration options
---------------------

Helm chart configuration options include:

- ``hubble.export.static.filePath``: file path of the target log file.
  (default ``/var/run/cilium/hubble/events.log``)
- ``hubble.export.fileMaxSizeMb``: size in MB at which to rotate the Hubble
  export file. (default 10)
- ``hubble.export.fileMaxBackups``: number of rotated Hubble export files to
  keep. (default 5)
- ``hubble.export.fileCompress``: enable compression of rotated files.
  (default false)

Performance tuning
==================

Configuration options impacting the performance of **Hubble Exporter**
include:

- ``hubble.export.static.allowList``: specify an allowlist of JSON encoded
  FlowFilters for the Hubble Exporter.
- ``hubble.export.static.denyList``: specify a denylist of JSON encoded
  FlowFilters for the Hubble Exporter.
- ``hubble.export.static.fieldMask``: specify a list of fields to use for
  field masking in the Hubble Exporter.

Filters
-------

You can use the ``hubble`` CLI to generate the required filters (see
`Specifying Raw Flow Filters`_ for more examples).

.. _Specifying Raw Flow Filters: https://github.com/cilium/hubble#specifying-raw-flow-filters

For example, to filter flows with verdict ``DROPPED`` or ``ERROR``, run:

.. code-block:: shell-session

    $ hubble observe --verdict DROPPED --verdict ERROR --print-raw-filters
    allowlist:
    - '{"verdict":["DROPPED","ERROR"]}'

Then paste the output to ``hubble-export-allowlist`` in the ``cilium-config``
ConfigMap:

.. code-block:: shell-session

    kubectl -n kube-system patch cm cilium-config --patch-file=/dev/stdin <<-EOF
    data:
      hubble-export-allowlist: '{"verdict":["DROPPED","ERROR"]}'
    EOF

Or use the Helm chart to update your Cilium installation, setting the value
flag ``hubble.export.static.allowList``.

.. cilium-helm-upgrade::
   :set: hubble.enabled=true hubble.export.static.enabled=true hubble.export.static.allowList[0]='{"verdict":["DROPPED","ERROR"]}'

You can do the same to selectively filter data. For example, to filter out
all flows in the ``kube-system`` namespace, run:

.. code-block:: shell-session

    $ hubble observe --not --namespace kube-system --print-raw-filters
    denylist:
    - '{"source_pod":["kube-system/"]}'
    - '{"destination_pod":["kube-system/"]}'

Then paste the output to ``hubble-export-denylist`` in the ``cilium-config``
ConfigMap:

.. code-block:: shell-session

    kubectl -n kube-system patch cm cilium-config --patch-file=/dev/stdin <<-EOF
    data:
      hubble-export-denylist: '{"source_pod":["kube-system/"]},{"destination_pod":["kube-system/"]}'
    EOF

Or use the Helm chart to update your Cilium installation, setting the value
flag ``hubble.export.static.denyList``.

.. cilium-helm-upgrade::
   :set: hubble.enabled=true hubble.export.static.enabled=true hubble.export.static.denyList[0]='{"source_pod":["kube-system/"]}' hubble.export.static.denyList[1]='{"destination_pod":["kube-system/"]}'

Field mask
----------

Field masks can't be generated with ``hubble``. A field mask is a list of
field names from the `flow proto`_ definition.

.. _flow proto: https://github.com/cilium/cilium/blob/main/api/v1/flow/flow.proto
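Before pasting a raw filter into the ConfigMap, it can help to confirm the
JSON is well-formed. A minimal local check, assuming ``python3`` is
available on your workstation:

```shell
# Validate the allowlist filter generated above before patching the ConfigMap.
# A malformed filter would make python3 -m json.tool exit non-zero.
filter='{"verdict":["DROPPED","ERROR"]}'
echo "$filter" | python3 -m json.tool
```

The same check applies to denylist entries; each entry must be a single
valid JSON object.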
https://github.com/cilium/cilium/blob/main//Documentation/observability/hubble/configuration/export.rst
Examples include:

- To keep all information except pod labels:

  .. code-block:: shell-session

      hubble-export-fieldmask: time source.identity source.namespace source.pod_name destination.identity destination.namespace destination.pod_name source_service destination_service l4 IP ethernet l7 Type node_name is_reply event_type verdict Summary

- To keep only timestamp, verdict, ports, IP addresses, node name, pod name,
  and namespace:

  .. code-block:: shell-session

      hubble-export-fieldmask: time source.namespace source.pod_name destination.namespace destination.pod_name l4 IP node_name is_reply verdict

The following is a complete example of configuring Hubble Exporter.

- Configuration:

  .. cilium-helm-upgrade::
     :set: hubble.enabled=true hubble.export.static.enabled=true hubble.export.static.filePath=/var/run/cilium/hubble/events.log hubble.export.static.allowList[0]='{"verdict":["DROPPED","ERROR"]}' hubble.export.static.denyList[0]='{"source_pod":["kube-system/"]}' hubble.export.static.denyList[1]='{"destination_pod":["kube-system/"]}' "hubble.export.static.fieldMask={time,source.namespace,source.pod_name,destination.namespace,destination.pod_name,l4,IP,node_name,is_reply,verdict,drop_reason_desc}"

- Command:

  .. code-block:: shell-session

      kubectl -n kube-system exec ds/cilium -- tail -f /var/run/cilium/hubble/events.log

- Output:

  ::

      {"flow":{"time":"2023-08-21T12:12:13.517394084Z","verdict":"DROPPED","IP":{"source":"fe80::64d8:8aff:fe72:fc14","destination":"ff02::2","ipVersion":"IPv6"},"l4":{"ICMPv6":{"type":133}},"source":{},"destination":{},"node_name":"kind-kind/kind-worker","drop_reason_desc":"INVALID_SOURCE_IP"},"node_name":"kind-kind/kind-worker","time":"2023-08-21T12:12:13.517394084Z"}
      {"flow":{"time":"2023-08-21T12:12:18.510175415Z","verdict":"DROPPED","IP":{"source":"10.244.1.60","destination":"10.244.1.5","ipVersion":"IPv4"},"l4":{"TCP":{"source_port":44916,"destination_port":80,"flags":{"SYN":true}}},"source":{"namespace":"default","pod_name":"xwing"},"destination":{"namespace":"default","pod_name":"deathstar-7848d6c4d5-th9v2"},"node_name":"kind-kind/kind-worker","drop_reason_desc":"POLICY_DENIED"},"node_name":"kind-kind/kind-worker","time":"2023-08-21T12:12:18.510175415Z"}

Dynamic exporter configuration
==============================

The standard Hubble Exporter configuration accepts only one set of filters
and requires a Cilium pod restart to change the config. Dynamic flow logs
allow configuring multiple filters at the same time and saving output in
separate files. Additionally, changed configuration is applied without
restarting the Cilium pods.

**Dynamic Hubble Exporter** is enabled with a ConfigMap property. It is
disabled until you set a file path value for
``hubble-flowlogs-config-path``. Install Cilium with the dynamic exporter
enabled:

.. cilium-helm-install::
   :set: hubble.enabled=true hubble.export.dynamic.enabled=true

Wait for the ``cilium`` pod to become ready:

.. code-block:: shell-session

    kubectl -n kube-system rollout status ds/cilium

You can change flow log settings without the pod needing to be restarted
(changes should be reflected within 60s because of the ConfigMap propagation
delay):

.. cilium-helm-upgrade::
   :set: hubble.enabled=true hubble.export.dynamic.enabled=true hubble.export.dynamic.config.content[0].name=system hubble.export.dynamic.config.content[0].filePath=/var/run/cilium/hubble/events-system.log hubble.export.dynamic.config.content[0].includeFilters[0].source_pod[0]='kube_system/' hubble.export.dynamic.config.content[0].includeFilters[1].destination_pod[0]='kube_system/'

Dynamic flow logs can be configured with an ``end`` property, which means
that logging stops automatically after the specified date and time. Dynamic
flow logs support the same field masking and filtering as the static Hubble
Exporter. For the maximum output file size and number of backup files, the
dynamic exporter reuses the same settings as the static one:
``hubble.export.fileMaxSizeMb`` and ``hubble.export.fileMaxBackups``.

Sample dynamic flow log configs:

::

    hubble:
      export:
        dynamic:
          enabled: true
          config:
            enabled: true
            content:
            - name: "test001"
              filePath: "/var/run/cilium/hubble/test001.log"
              fieldMask: []
              includeFilters: []
              excludeFilters: []
              end: "2023-10-09T23:59:59-07:00"
            - name: "test002"
              filePath: "/var/run/cilium/hubble/test002.log"
              fieldMask: ["source.namespace", "source.pod_name", "destination.namespace", "destination.pod_name", "verdict"]
              includeFilters:
              - source_pod: ["default/"]
                event_type:
                - type: 1
              - destination_pod: ["frontend/webserver-975996d4c-7hhgt"]
              excludeFilters: []
              end: "2023-10-09T23:59:59-07:00"
            - name: "test003"
              filePath: "/var/run/cilium/hubble/test003.log"
              fieldMask: ["source", "destination", "verdict"]
              includeFilters: []
              excludeFilters:
              - destination_pod: ["ingress/"]
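The sample configs above use RFC 3339 timestamps with a UTC offset for the
``end`` property. A quick local check that a value parses as such, assuming
``python3`` is available:

```shell
# Parse the "end" value from the sample config above and confirm it carries
# a timezone offset (prints True when the offset is present).
python3 - <<'EOF'
from datetime import datetime

end = "2023-10-09T23:59:59-07:00"  # value taken from the sample config
parsed = datetime.fromisoformat(end)
print(parsed.tzinfo is not None)
EOF
```

A value without an offset would print ``False``, which is worth catching
before rolling the config out.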
Cluster Mesh Troubleshooting
============================

Install the Cilium CLI
----------------------

.. include:: /installation/cli-download.rst

Automatic Verification
----------------------

#. Validate that Cilium pods are healthy and ready:

   .. code-block:: shell-session

       cilium status

#. Validate that Cluster Mesh is enabled and operational:

   .. code-block:: shell-session

       cilium clustermesh status

#. In case of errors, run the troubleshoot command to automatically
   investigate Cilium agents' connectivity issues towards the ClusterMesh
   control plane in remote clusters:

   .. code-block:: shell-session

       kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg troubleshoot clustermesh

   The troubleshoot command performs a set of automatic checks to validate
   DNS resolution, network connectivity, TLS authentication, etcd
   authorization and more, and reports the output in a user-friendly format.

   When KVStoreMesh is enabled, the output of the troubleshoot command
   refers to the connections from the agents to the local cache, and it is
   expected to be the same for all the clusters they are connected to. Run
   the troubleshoot command inside the clustermesh-apiserver to investigate
   KVStoreMesh connectivity issues towards the ClusterMesh control plane in
   remote clusters:

   .. code-block:: shell-session

       kubectl exec -it -n kube-system deploy/clustermesh-apiserver -c kvstoremesh -- \
           clustermesh-apiserver kvstoremesh-dbg troubleshoot

   .. tip::

      You can specify one or more cluster names as parameters of the
      troubleshoot command to run the checks only towards a subset of remote
      clusters.

Manual Verification
-------------------

As an alternative to leveraging the tools presented in the previous section,
you may perform the following steps to troubleshoot ClusterMesh issues.

#. Validate that each cluster is assigned a **unique** human-readable name
   as well as a numeric cluster ID (1-255).

#. Validate that the clustermesh-apiserver is initialized correctly for each
   cluster:

   .. code-block:: shell-session

       $ kubectl logs -n kube-system deployment/clustermesh-apiserver -c apiserver
       ...
       level=info msg="Connecting to etcd server..." config=/var/lib/cilium/etcd-config.yaml endpoints="[https://127.0.0.1:2379]" subsys=kvstore
       level=info msg="Got lock lease ID 7c0281854b945c07" subsys=kvstore
       level=info msg="Initial etcd session established" config=/var/lib/cilium/etcd-config.yaml endpoints="[https://127.0.0.1:2379]" subsys=kvstore
       level=info msg="Successfully verified version of etcd endpoint" config=/var/lib/cilium/etcd-config.yaml endpoints="[https://127.0.0.1:2379]" etcdEndpoint="https://127.0.0.1:2379" subsys=kvstore version=3.4.13

#. Validate that ClusterMesh is healthy by running
   ``cilium-dbg status --all-clusters`` inside each Cilium agent::

       ClusterMesh:   1/1 remote clusters ready
          k8s-c2: ready, 3 nodes, 25 endpoints, 8 identities, 10 services, 0 MCS-API service exports, 0 reconnections (last: never)
          └  etcd: 1/1 connected, leases=0, lock lease-ID=7c028201b53de662, has-quorum=true: https://k8s-c2.mesh.cilium.io:2379 - 3.5.4 (Leader)
          └  remote configuration: expected=true, retrieved=true, cluster-id=3, kvstoremesh=false, sync-canaries=true, service-exports=disabled
          └  synchronization status: nodes=true, endpoints=true, identities=true, services=true

   When KVStoreMesh is enabled, additionally check its status and validate
   that it is correctly connected to all remote clusters:

   .. code-block:: shell-session

       $ kubectl --context $CLUSTER1 exec -it -n kube-system deploy/clustermesh-apiserver \
           -c kvstoremesh -- clustermesh-apiserver kvstoremesh-dbg status --verbose

#. Validate that the required TLS secrets are set up properly. By default,
   the following TLS secrets must be available in the namespace in which
   Cilium is installed:

   * ``clustermesh-apiserver-server-cert``, which is used by the etcd
     container in the clustermesh-apiserver deployment. Not applicable if an
     external etcd cluster is used.

   * ``clustermesh-apiserver-admin-cert``, which is used by the
     apiserver/kvstoremesh containers in the clustermesh-apiserver
     deployment, to authenticate against the sidecar etcd instance. Not
     applicable if an external etcd cluster is used.

   * ``clustermesh-apiserver-remote-cert``, which is used by Cilium agents,
     or the kvstoremesh container in the clustermesh-apiserver deployment
     when KVStoreMesh is enabled, to authenticate against remote etcd
     instances.

   * ``clustermesh-apiserver-local-cert``, which is used by Cilium agents to
     authenticate against the local etcd instance. Only applicable if
     KVStoreMesh is enabled.
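The presence of these secrets can be checked in one loop. A small sketch
that assumes Cilium is installed in the default ``kube-system`` namespace:

```shell
# Report which of the expected ClusterMesh TLS secrets exist. Each secret
# prints "missing" when kubectl cannot find it (or cannot reach the cluster).
check_clustermesh_secrets() {
    for s in clustermesh-apiserver-server-cert \
             clustermesh-apiserver-admin-cert \
             clustermesh-apiserver-remote-cert \
             clustermesh-apiserver-local-cert; do
        if kubectl -n kube-system get secret "$s" >/dev/null 2>&1; then
            echo "$s: present"
        else
            echo "$s: missing"
        fi
    done
}

check_clustermesh_secrets
```

Remember that ``server-cert`` and ``admin-cert`` are expected to be missing
when an external etcd cluster is used, and ``local-cert`` only exists when
KVStoreMesh is enabled.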
https://github.com/cilium/cilium/blob/main//Documentation/operations/troubleshooting_clustermesh.rst
#. Validate that the configuration for remote clusters is picked up
   correctly. For each remote cluster, an info log message ``New remote
   cluster configuration`` along with the remote cluster name must be logged
   in the ``cilium-agent`` logs.

   If the configuration is not found, check the following:

   * The ``cilium-clustermesh`` Kubernetes secret is present and correctly
     mounted by the Cilium agent pods.

   * The secret contains a file for each remote cluster with the filename
     matching the name of the remote cluster as provided by the
     ``--cluster-name`` argument or the ``cluster-name`` ConfigMap option.

   * Each file named after a remote cluster contains a valid etcd
     configuration consisting of the endpoints to reach the remote etcd
     cluster, and the path of the certificate and private key to
     authenticate against that etcd cluster. Additional files may be
     included in the secret to provide the certificate and private key
     themselves.

   * The ``/var/lib/cilium/clustermesh`` directory inside any of the Cilium
     agent pods contains the files mounted from the ``cilium-clustermesh``
     secret. You can use
     ``kubectl exec -ti -n kube-system ds/cilium -c cilium-agent -- ls /var/lib/cilium/clustermesh``
     to list the files present.

#. Validate that the connection to the remote cluster could be established.
   You will see a log message like this in the ``cilium-agent`` logs for
   each remote cluster::

       level=info msg="Connection to remote cluster established"

   If the connection failed, you will see a warning like this::

       level=warning msg="Unable to establish etcd connection to remote cluster"

   If the connection fails, check the following:

   * When KVStoreMesh is disabled, validate that the ``hostAliases`` section
     in the Cilium DaemonSet maps each remote cluster to the IP of the
     LoadBalancer that makes the remote control plane available. When
     KVStoreMesh is enabled, validate the ``hostAliases`` section in the
     clustermesh-apiserver Deployment.

   * Validate that a local node in the source cluster can reach the IP
     specified in the ``hostAliases`` section. When KVStoreMesh is disabled,
     the ``cilium-clustermesh`` secret contains a configuration file for
     each remote cluster, which points to a logical name representing the
     remote cluster; when KVStoreMesh is enabled, the configuration exists
     in the ``cilium-kvstoremesh`` secret.

     .. code-block:: yaml

         endpoints:
         - https://cluster1.mesh.cilium.io:2379

     The name will *NOT* be resolvable via DNS outside the Cilium agent
     pods. The name is mapped to an IP using ``hostAliases``. Run
     ``kubectl -n kube-system get daemonset cilium -o yaml`` when
     KVStoreMesh is disabled, or
     ``kubectl -n kube-system get deployment clustermesh-apiserver -o yaml``
     when KVStoreMesh is enabled, and grep for the FQDN to retrieve the IP
     that is configured. Then use ``curl`` to validate that the port is
     reachable.

   * A firewall between the local cluster and the remote cluster may drop
     the control plane connection. Ensure that port 2379/TCP is allowed.

State Propagation
-----------------

#. Run ``cilium-dbg node list`` in one of the Cilium pods and validate that
   it lists both local nodes and nodes from remote clusters. If remote nodes
   are not present, validate that Cilium agents (or KVStoreMesh, if enabled)
   are correctly connected to the given remote cluster. Additionally, verify
   that the initial nodes synchronization from all clusters has completed.

#. Validate the connectivity health matrix across clusters by running
   ``cilium-health status`` inside any Cilium pod. It will list the status
   of the connectivity health check to each remote node. If this fails, make
   sure that the network allows the health checking traffic as specified in
   the :ref:`firewall_requirements` section.
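The ``hostAliases`` cross-check described above can be scripted by parsing
the JSON that ``kubectl ... -o jsonpath='{.spec.template.spec.hostAliases}'``
prints. A sketch, assuming ``python3`` is available; the sample mapping
passed at the end is hypothetical, for illustration only:

```shell
# Extract the IP that hostAliases maps a remote-cluster FQDN to.
alias_ip() {  # usage: alias_ip '<hostAliases-json>' '<hostname>'
    python3 -c '
import json, sys
for alias in json.loads(sys.argv[1]):
    if sys.argv[2] in alias.get("hostnames", []):
        print(alias["ip"])
' "$1" "$2"
}

# Hypothetical mapping for illustration:
alias_ip '[{"ip":"192.0.2.10","hostnames":["cluster1.mesh.cilium.io"]}]' \
         cluster1.mesh.cilium.io
```

The printed IP is the address that a local node must be able to reach on
port 2379/TCP.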
#. Validate that identities are synchronized correctly by running
   ``cilium-dbg identity list`` in one of the Cilium pods. It must list
   identities from all clusters. You can determine what cluster an identity
   belongs to by looking at the label ``io.cilium.k8s.policy.cluster``. If
   remote identities are not present, validate that Cilium agents (or
   KVStoreMesh, if enabled) are correctly connected to the given remote
   cluster. Additionally, verify that the initial identities synchronization
   from all clusters has completed.

#. Validate that the IP cache is synchronized correctly by running
   ``cilium-dbg bpf ipcache list`` or ``cilium-dbg map get cilium_ipcache``.
   The output must contain pod IPs from local and remote clusters. If remote
   IP addresses are not present, validate that Cilium agents (or
   KVStoreMesh, if enabled) are correctly connected to the given remote
   cluster. Additionally, verify that the initial IPs synchronization from
   all clusters has completed.

#. When using global services, ensure that global services are configured
   with endpoints from all clusters. Run ``cilium-dbg service list`` in any
   Cilium pod and validate that the backend IPs consist of pod IPs from all
   clusters running relevant backends. You can further validate the correct
   datapath plumbing by running ``cilium-dbg bpf lb list`` to inspect the
   state of the eBPF maps.

   If this fails:

   * Run ``cilium-dbg debuginfo`` and look for the section
     ``k8s-service-cache``. In that section, you will find the contents of
     the service correlation cache. It will list the Kubernetes services and
     endpoints of the local cluster. It will also have a section
     ``externalEndpoints`` which must list all endpoints of remote
     clusters::

         #### k8s-service-cache

         (*k8s.ServiceCache)(0xc00000c500)({
         [...]
             services: (map[k8s.ServiceID]*k8s.Service) (len=2) {
                 (k8s.ServiceID) default/kubernetes: (*k8s.Service)(0xc000cd11d0)(frontend:172.20.0.1/ports=[https]/selector=map[]),
                 (k8s.ServiceID) kube-system/kube-dns: (*k8s.Service)(0xc000cd1220)(frontend:172.20.0.10/ports=[metrics dns dns-tcp]/selector=map[k8s-app:kube-dns])
             },
             endpoints: (map[k8s.ServiceID]*k8s.Endpoints) (len=2) {
                 (k8s.ServiceID) kube-system/kube-dns: (*k8s.Endpoints)(0xc0000103c0)(10.16.127.105:53/TCP,10.16.127.105:53/UDP,10.16.127.105:9153/TCP),
                 (k8s.ServiceID) default/kubernetes: (*k8s.Endpoints)(0xc0000103f8)(192.168.60.11:6443/TCP)
             },
             externalEndpoints: (map[k8s.ServiceID]k8s.externalEndpoints) {
             }
         })

     The sections ``services`` and ``endpoints`` represent the services of
     the local cluster, while the section ``externalEndpoints`` lists all
     remote services and will be correlated with services matching the same
     ``ServiceID``.
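When eyeballing backend lists for a global service, a quick way to confirm
that every cluster contributes backends is to count backend IPs per
per-cluster pod CIDR. The CIDR prefixes and sample backend list below are
illustrative assumptions, not values Cilium produces:

```shell
# Count backends per (assumed) per-cluster pod CIDR prefix.
# In practice, replace $backends with the backend column of
# "cilium-dbg service list" for the global service under inspection.
backends='10.0.1.23:8080
10.0.2.41:8080
10.1.4.56:8080'

for prefix in '10\.0\.' '10\.1\.'; do
    n=$(printf '%s\n' "$backends" | grep -c "^$prefix")
    echo "$prefix -> $n backend(s)"
done
```

A count of zero for one of the prefixes points at the corresponding cluster
not being synchronized.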
.. warning::

   Read the full upgrade guide to understand all the necessary steps before
   performing them.

   Do not upgrade to |NEXT_RELEASE| before reading the section
   :ref:`current_release_required_changes` and completing the required
   steps. Skipping this step may lead to a non-functional upgrade.

   The only tested rollback and upgrade path is between consecutive minor
   releases. Always perform rollbacks and upgrades between one minor release
   at a time. This means that going from (a hypothetical) 1.1 to 1.2 and
   back is supported while going from 1.1 to 1.3 and back is not.

   Always update to the latest patch release of your current version before
   attempting an upgrade.
https://github.com/cilium/cilium/blob/main//Documentation/operations/upgrade-warning.rst
Service Mesh Troubleshooting
============================

Install the Cilium CLI
----------------------

.. include:: /installation/cli-download.rst

Generic
-------

#. Validate that the ``ds/cilium`` as well as the
   ``deployment/cilium-operator`` pods are healthy and ready.

   .. code-block:: shell-session

       $ cilium status

Manual Verification of Setup
----------------------------

#. Validate that ``kubeProxyReplacement`` is true.

   .. code-block:: shell-session

       $ kubectl exec -n kube-system ds/cilium -- cilium-dbg status
       ...
       KubeProxyReplacement:    True
       ...

#. Validate that the runtime values of ``enable-envoy-config`` and
   ``enable-ingress-controller`` are true. The ingress controller flag is
   optional if you only use the ``CiliumEnvoyConfig`` or
   ``CiliumClusterwideEnvoyConfig`` CRDs.

   .. code-block:: shell-session

       $ kubectl -n kube-system get cm cilium-config -o json | egrep "enable-ingress-controller|enable-envoy-config"
       "enable-envoy-config": "true",
       "enable-ingress-controller": "true",

Ingress Troubleshooting
-----------------------

Internally, the Cilium Ingress controller creates one LoadBalancer service,
one ``CiliumEnvoyConfig`` and one dummy Endpoint resource for each Ingress
resource.

.. code-block:: shell-session

    $ kubectl get ingress
    NAME            CLASS    HOSTS   ADDRESS        PORTS   AGE
    basic-ingress   cilium   *       10.97.60.117   80      16m

    # For dedicated Load Balancer mode
    $ kubectl get service cilium-ingress-basic-ingress
    NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
    cilium-ingress-basic-ingress   LoadBalancer   10.97.60.117   10.97.60.117   80:31911/TCP   17m

    # For dedicated Load Balancer mode
    $ kubectl get cec cilium-ingress-default-basic-ingress
    NAME                                   AGE
    cilium-ingress-default-basic-ingress   18m

    # For shared Load Balancer mode
    $ kubectl get services -n kube-system cilium-ingress
    NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
    cilium-ingress   LoadBalancer   10.111.109.99   10.111.109.99   80:32690/TCP,443:31566/TCP   38m

    # For shared Load Balancer mode
    $ kubectl get cec -n kube-system cilium-ingress
    NAME             AGE
    cilium-ingress   15m

#. Validate that the LoadBalancer service has either an external IP or FQDN
   assigned. If it's not available after a long time, please check the load
   balancer related documentation from your respective cloud provider.

#. Check if there is any warning or error message while Cilium is trying to
   provision the ``CiliumEnvoyConfig`` resource. This is unlikely to happen
   for CEC resources originating from the Cilium Ingress controller.

.. include:: /network/servicemesh/warning.rst

Connectivity Troubleshooting
----------------------------

This section is for troubleshooting connectivity issues mainly for Ingress
resources, but the same steps can be applied to manually configured
``CiliumEnvoyConfig`` resources as well.

It's best to have ``debug`` and ``debug-verbose`` enabled with the below
values. Kindly note that any change of Cilium flags requires a restart of
the Cilium agent and operator.

.. code-block:: shell-session

    $ kubectl get -n kube-system cm cilium-config -o json | grep "debug"
    "debug": "true",
    "debug-verbose": "flow",

.. note::

   The originating source IP is used for enforcing ingress traffic.

The request normally traverses from the LoadBalancer service to a
pre-assigned port of your node, then gets forwarded to the Cilium Envoy
proxy, and finally gets proxied to the actual backend service.

#. The first step, from the cloud LoadBalancer to the node port, is out of
   Cilium's scope. Please check the related documentation from your
   respective cloud provider to make sure your clusters are configured
   properly.

#. The second step can be checked by connecting with SSH to your underlying
   host, and sending a similar request to localhost on the relevant port:

   .. code-block:: shell-session

       $ kubectl get service cilium-ingress-basic-ingress
       NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
       cilium-ingress-basic-ingress   LoadBalancer   10.97.60.117   10.97.60.117   80:31911/TCP   17m

       # After ssh to any of the k8s nodes
       $ curl -v http://localhost:31911/
       *   Trying 127.0.0.1:31911...
       * TCP_NODELAY set
       * Connected to localhost (127.0.0.1) port 31911 (#0)
       > GET / HTTP/1.1
       > Host: localhost:31911
       > User-Agent: curl/7.68.0
       > Accept: */*
       >
       * Mark bundle as not supporting multiuse
       < HTTP/1.1 503 Service Unavailable
       < content-length: 19
       < content-type: text/plain
       < date: Thu, 07 Jul 2022 12:25:56 GMT
       < server: envoy
       <
       * Connection #0 to host localhost left intact

       # Flows for world identity
       $ kubectl -n kube-system exec ds/cilium -- hubble observe -f --identity 2
       Jul  7 12:28:27.970: 127.0.0.1:54704 <- 127.0.0.1:13681 http-response FORWARDED (HTTP/1.1 503 0ms (GET http://localhost:31911/))
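In the second step, the node port (31911 above) can be pulled out of the
``PORT(S)`` column with plain shell parameter expansion. A small helper
sketch; the sample value mirrors the service output shown above:

```shell
# Split a "servicePort:nodePort/PROTO" value into its node port.
node_port() {  # usage: node_port "80:31911/TCP"
    p=${1#*:}        # drop the service port -> "31911/TCP"
    echo "${p%%/*}"  # drop the protocol     -> "31911"
}

node_port "80:31911/TCP"
```

The printed port is the one to target with ``curl http://localhost:<port>/``
after connecting to a node over SSH.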
https://github.com/cilium/cilium/blob/main//Documentation/operations/troubleshooting_servicemesh.rst
content-length: 19 < content-type: text/plain < date: Thu, 07 Jul 2022 12:25:56 GMT < server: envoy < \* Connection #0 to host localhost left intact # Flows for world identity $ kubectl -n kube-system exec ds/cilium -- hubble observe -f --identity 2 Jul 7 12:28:27.970: 127.0.0.1:54704 <- 127.0.0.1:13681 http-response FORWARDED (HTTP/1.1 503 0ms (GET http://localhost:31911/)) Alternatively, you can also send a request directly to the Envoy proxy port. For Ingress, the proxy port is randomly assigned by the Cilium Ingress controller. For manually configured ``CiliumEnvoyConfig`` resources, the proxy port is retrieved directly from the spec. .. code-block:: shell-session $ kubectl logs -f -n kube-system ds/cilium --timestamps | egrep "envoy|proxy" ... 2022-07-08T08:05:13.986649816Z level=info msg="Adding new proxy port rules for cilium-ingress-default-basic-ingress:19672" proxy port name=cilium-ingress-default-basic-ingress subsys=proxy # After ssh to any of k8s node, send request to Envoy proxy port directly $ curl -v http://localhost:19672 \* Trying 127.0.0.1:19672... \* TCP\_NODELAY set \* Connected to localhost (127.0.0.1) port 19672 (#0) > GET / HTTP/1.1 > Host: localhost:19672 > User-Agent: curl/7.68.0 > Accept: \*/\* > \* Mark bundle as not supporting multiuse < HTTP/1.1 503 Service Unavailable < content-length: 19 < content-type: text/plain < date: Fri, 08 Jul 2022 08:12:35 GMT < server: envoy If you see a response similar to the above, it means that the request is being redirected to proxy successfully. The http response will have one special header ``server: envoy`` accordingly. The same can be observed from ``hubble observe`` command :ref:`hubble\_troubleshooting`. The most common root cause is either that the Cilium Envoy proxy is not running on the node, or there is some other issue with CEC resource provisioning. .. code-block:: shell-session $ kubectl exec -n kube-system ds/cilium -- cilium-dbg status ... 
Controller Status: 49/49 healthy Proxy Status: OK, ip 10.0.0.25, 6 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 #. Assuming that the above steps are done successfully, you can proceed to send a request via an external IP or via FQDN next. Double-check whether your backend service is up and healthy. The Envoy Endpoint Discovery Service (EDS) has a name that follows the convention ``/:``. .. code-block:: shell-session $ LB\_IP=$(kubectl get ingress basic-ingress -o json | jq '.status.loadBalancer.ingress[0].ip' | jq -r .) $ curl -s http://$LB\_IP/details/1 no healthy upstream $ kubectl get cec cilium-ingress-default-basic-ingress -o json | jq '.spec.resources[] | select(.type=="EDS")' { "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", "connectTimeout": "5s", "name": "default/details:9080", "outlierDetection": { "consecutiveLocalOriginFailure": 2, "splitExternalLocalOriginErrors": true }, "type": "EDS", "typedExtensionProtocolOptions": { "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": { "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions", "useDownstreamProtocolConfig": { "http2ProtocolOptions": {} } } } } { "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", "connectTimeout": "5s", "name": "default/productpage:9080", "outlierDetection": { "consecutiveLocalOriginFailure": 2, "splitExternalLocalOriginErrors": true }, "type": "EDS", "typedExtensionProtocolOptions": { "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": { "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions", "useDownstreamProtocolConfig": { "http2ProtocolOptions": {} } } } } If everything is configured correctly, you will be able to see the flows from ``world`` (identity 2), ``ingress`` (identity 8) and your backend pod as per below. ..
code-block:: shell-session # Flows for world identity $ kubectl exec -n kube-system ds/cilium -- hubble observe --identity 2 -f Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init) Jul 7 13:07:46.726: 192.168.49.1:59608 -> default/details-v1-5498c86cf5-cnt9q:9080 http-request FORWARDED (HTTP/1.1 GET http://10.97.60.117/details/1) Jul 7 13:07:46.727: 192.168.49.1:59608 <- default/details-v1-5498c86cf5-cnt9q:9080 http-response FORWARDED (HTTP/1.1 200 1ms (GET http://10.97.60.117/details/1)) # Flows for Ingress identity (e.g. envoy proxy) $ kubectl exec -n kube-system ds/cilium -- hubble observe --identity 8 -f Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init) Jul 7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: SYN) Jul 7 13:07:46.726: 10.0.0.95:42509 <- default/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: SYN, ACK) Jul 7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK) Jul 7
https://github.com/cilium/cilium/blob/main//Documentation/operations/troubleshooting_servicemesh.rst
13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK, PSH) Jul 7 13:07:46.727: 10.0.0.95:42509 <- default/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: ACK, PSH) # Flows for backend pod, the identity can be retrieved via cilium identity list command $ kubectl exec -n kube-system ds/cilium -- hubble observe --identity 48847 -f Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init) Jul 7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: SYN) Jul 7 13:07:46.726: 10.0.0.95:42509 <- default/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: SYN, ACK) Jul 7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK) Jul 7 13:07:46.726: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK, PSH) Jul 7 13:07:46.726: 192.168.49.1:59608 -> default/details-v1-5498c86cf5-cnt9q:9080 http-request FORWARDED (HTTP/1.1 GET http://10.97.60.117/details/1) Jul 7 13:07:46.727: 10.0.0.95:42509 <- default/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: ACK, PSH) Jul 7 13:07:46.727: 192.168.49.1:59608 <- default/details-v1-5498c86cf5-cnt9q:9080 http-response FORWARDED (HTTP/1.1 200 1ms (GET
http://10.97.60.117/details/1)) Jul 7 13:08:16.757: 10.0.0.95:42509 <- default/details-v1-5498c86cf5-cnt9q:9080 to-stack FORWARDED (TCP Flags: ACK, FIN) Jul 7 13:08:16.757: 10.0.0.95:42509 -> default/details-v1-5498c86cf5-cnt9q:9080 to-endpoint FORWARDED (TCP Flags: ACK, FIN) # Sample output of cilium-dbg monitor $ ksysex ds/cilium -- cilium-dbg monitor level=info msg="Initializing dissection cache..." subsys=monitor -> endpoint 212 flow 0x3000e251 , identity ingress->61131 state new ifindex lxcfc90a8580fd6 orig-ip 10.0.0.192: 10.0.0.192:34219 -> 10.0.0.164:9080 tcp SYN -> stack flow 0x2481d648 , identity 61131->ingress state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.164:9080 -> 10.0.0.192:34219 tcp SYN, ACK -> endpoint 212 flow 0x3000e251 , identity ingress->61131 state established ifindex lxcfc90a8580fd6 orig-ip 10.0.0.192: 10.0.0.192:34219 -> 10.0.0.164:9080 tcp ACK -> endpoint 212 flow 0x3000e251 , identity ingress->61131 state established ifindex lxcfc90a8580fd6 orig-ip 10.0.0.192: 10.0.0.192:34219 -> 10.0.0.164:9080 tcp ACK -> Request http from 0 ([reserved:world]) to 212 ([k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default k8s:io.cilium.k8s.policy.cluster=minikube k8s:io.cilium.k8s.policy.serviceaccount=bookinfo-details k8s:io.kubernetes.pod.namespace=default k8s:version=v1 k8s:app=details]), identity 2->61131, verdict Forwarded GET http://10.99.74.157/details/1 => 0 -> stack flow 0x2481d648 , identity 61131->ingress state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.164:9080 -> 10.0.0.192:34219 tcp ACK -> Response http to 0 ([reserved:world]) from 212 ([k8s:io.kubernetes.pod.namespace=default k8s:version=v1 k8s:app=details k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default k8s:io.cilium.k8s.policy.cluster=minikube k8s:io.cilium.k8s.policy.serviceaccount=bookinfo-details]), identity 61131->2, verdict Forwarded GET http://10.99.74.157/details/1 => 200
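As a quick way to list which backend clusters a ``CiliumEnvoyConfig`` exposes, the EDS resource ``name`` fields (such as ``default/details:9080`` in the output above) can be pulled out of a saved copy of the CEC with standard tools. This is an illustrative sketch, not part of Cilium; the file name and trimmed sample JSON are hypothetical:

```shell
# Hypothetical sample: a trimmed CEC dump, as produced for a real cluster by
#   kubectl get cec cilium-ingress-default-basic-ingress -o json > cec.json
cat > cec.json <<'EOF'
{"spec":{"resources":[
  {"@type":"type.googleapis.com/envoy.config.cluster.v3.Cluster","type":"EDS","name":"default/details:9080"},
  {"@type":"type.googleapis.com/envoy.config.cluster.v3.Cluster","type":"EDS","name":"default/productpage:9080"}
]}}
EOF
# Extract each EDS cluster "name" field (the service backends exposed by the CEC)
grep -o '"name":"[^"]*"' cec.json | sed 's/"name":"\(.*\)"/\1/'
```

Each extracted name can then be matched against the Services you expect the Ingress to route to.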
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_admin\_upgrade: \*\*\*\*\*\*\*\*\*\*\*\*\* Upgrade Guide \*\*\*\*\*\*\*\*\*\*\*\*\* .. \_upgrade\_general: This upgrade guide is intended for Cilium running on Kubernetes. If you have questions, feel free to ping us on `Cilium Slack`\_. .. include:: upgrade-warning.rst .. \_pre\_flight: Running pre-flight check (Required) =================================== When rolling out an upgrade with Kubernetes, Kubernetes will first terminate the pod followed by pulling the new image version and then finally spin up the new image. In order to reduce the downtime of the agent and to prevent ``ErrImagePull`` errors during upgrade, the pre-flight check pre-pulls the new image version. If you are running in :ref:`kubeproxy-free` mode you must also pass on the Kubernetes API Server IP and / or the Kubernetes API Server Port when generating the ``cilium-preflight.yaml`` file. .. tabs:: .. group-tab:: kubectl .. cilium-helm-template:: :namespace: kube-system :set: preflight.enabled=true agent=false operator.enabled=false :post-helm-commands: > cilium-preflight.yaml :post-commands: kubectl create -f cilium-preflight.yaml .. group-tab:: Helm .. cilium-helm-install:: :name: cilium-preflight :namespace: kube-system :set: preflight.enabled=true agent=false operator.enabled=false .. group-tab:: kubectl (kubeproxy-free) .. cilium-helm-template:: :namespace: kube-system :set: preflight.enabled=true agent=false operator.enabled=false k8sServiceHost=API\_SERVER\_IP k8sServicePort=API\_SERVER\_PORT :post-helm-commands: > cilium-preflight.yaml :post-commands: kubectl create -f cilium-preflight.yaml .. group-tab:: Helm (kubeproxy-free) .. 
cilium-helm-install:: :name: cilium-preflight :namespace: kube-system :set: preflight.enabled=true agent=false operator.enabled=false k8sServiceHost=API\_SERVER\_IP k8sServicePort=API\_SERVER\_PORT After applying the ``cilium-preflight.yaml``, ensure that the number of READY pods is the same number of Cilium pods running. .. code-block:: shell-session $ kubectl get daemonset -n kube-system | sed -n '1p;/cilium/p' NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE cilium 2 2 2 2 2 1h20m cilium-pre-flight-check 2 2 2 2 2 7m15s Once the number of READY pods are equal, make sure the Cilium pre-flight deployment is also marked as READY 1/1. If it shows READY 0/1, consult the :ref:`cnp\_validation` section and resolve issues with the deployment before continuing with the upgrade. .. code-block:: shell-session $ kubectl get deployment -n kube-system cilium-pre-flight-check -w NAME READY UP-TO-DATE AVAILABLE AGE cilium-pre-flight-check 1/1 1 0 12s .. \_cleanup\_preflight\_check: Clean up pre-flight check ------------------------- Once the number of READY for the preflight :term:`DaemonSet` is the same as the number of cilium pods running and the preflight ``Deployment`` is marked as READY ``1/1`` you can delete the cilium-preflight and proceed with the upgrade. .. tabs:: .. group-tab:: kubectl .. code-block:: shell-session kubectl delete -f cilium-preflight.yaml .. group-tab:: Helm .. code-block:: shell-session helm delete cilium-preflight --namespace=kube-system .. \_upgrade\_minor: Upgrading Cilium ================ During normal cluster operations, all Cilium components should run the same version. Upgrading just one of them (e.g., upgrading the agent without upgrading the operator) could result in unexpected cluster behavior. The following steps will describe how to upgrade all of the components from one stable release to a later stable release. .. 
include:: upgrade-warning.rst Step 1: Upgrade to latest patch version --------------------------------------- When upgrading from one minor release to another minor release, for example 1.x to 1.y, it is recommended to upgrade to the `latest patch release `\_\_ for a Cilium release series first. Upgrading to the latest patch release ensures the most seamless experience if a rollback is required following the minor release upgrade. The upgrade guides for previous versions can be found for each minor version at the bottom left corner. Step 2: Use Helm to Upgrade your Cilium deployment -------------------------------------------------------------------------------------- :term:`Helm` can be used to either upgrade Cilium directly or to generate a new set of YAML files that can be used to upgrade an existing deployment via ``kubectl``. By default, Helm will generate the new templates using the default values files packaged with each new release. You still
https://github.com/cilium/cilium/blob/main//Documentation/operations/upgrade.rst
need to ensure that you are specifying the equivalent options as used for the initial deployment, either by specifying them at the command line or by committing the values to a YAML file. .. include:: ../installation/k8s-install-download-release.rst To minimize datapath disruption during the upgrade, the ``upgradeCompatibility`` option should be set to the initial Cilium version which was installed in this cluster. .. tabs:: .. group-tab:: kubectl Generate the required YAML file and deploy it: .. cilium-helm-template:: :namespace: kube-system :set: upgradeCompatibility=1.X :post-helm-commands: > cilium.yaml :post-commands: kubectl apply -f cilium.yaml .. group-tab:: Helm Deploy Cilium release via Helm: .. cilium-helm-upgrade:: :namespace: kube-system :set: upgradeCompatibility=1.X .. note:: Instead of using ``--set``, you can also save the values relative to your deployment in a YAML file and use it to regenerate the YAML for the latest Cilium version. Running any of the previous commands will overwrite the existing cluster's :term:`ConfigMap` so it is critical to preserve any existing options, either by setting them at the command line or storing them in a YAML file, similar to: .. code-block:: yaml agent: true upgradeCompatibility: "1.8" ipam: mode: "kubernetes" k8sServiceHost: "API\_SERVER\_IP" k8sServicePort: "API\_SERVER\_PORT" kubeProxyReplacement: "true" You can then upgrade using this values file by running: ..
cilium-helm-upgrade:: :namespace: kube-system :extra-args: -f my-values.yaml When upgrading from one minor release to another minor release using ``helm upgrade``, do \*not\* use Helm's ``--reuse-values`` flag. The ``--reuse-values`` flag ignores any newly introduced values present in the new release and thus may cause the Helm template to render incorrectly. Instead, if you want to reuse the values from your existing installation, save the old values in a values file, check the file for any renamed or deprecated values, and then pass it to the ``helm upgrade`` command as described above. You can retrieve and save the values from an existing installation with the following command: .. code-block:: shell-session helm get values cilium --namespace=kube-system -o yaml > old-values.yaml The ``--reuse-values`` flag may only be safely used if the Cilium chart version remains unchanged, for example when ``helm upgrade`` is used to apply configuration changes without upgrading Cilium. Step 3: Rolling Back -------------------- Occasionally, it may be necessary to undo the rollout because a step was missed or something went wrong during upgrade. To undo the rollout run: .. tabs:: .. group-tab:: kubectl .. code-block:: shell-session kubectl rollout undo daemonset/cilium -n kube-system .. group-tab:: Helm .. code-block:: shell-session helm history cilium --namespace=kube-system helm rollback cilium [REVISION] --namespace=kube-system This will revert the latest changes to the Cilium ``DaemonSet`` and return Cilium to the state it was in prior to the upgrade. .. note:: When rolling back after new features of the new minor version have already been consumed, consult the :ref:`version\_notes` to check and prepare for incompatible feature use before downgrading/rolling back. This step is only required after new functionality introduced in the new minor version has already been explicitly used by creating new resources or by opting into new features via the :term:`ConfigMap`. .. 
\_version\_notes: .. \_upgrade\_version\_specifics: Version Specific Notes ====================== This section details the upgrade notes specific to |CURRENT\_RELEASE|. Read them carefully and take the suggested actions before upgrading Cilium to |CURRENT\_RELEASE|. For upgrades to earlier releases, see the :prev-docs:`upgrade notes to the previous version `. The only tested upgrade and rollback path is between consecutive minor releases. Always perform upgrades and rollbacks
between one minor release at a time. Additionally, always update to the latest
patch release of your current version before attempting an upgrade.

Tested upgrades are expected to have minimal to no impact on new and existing
connections matched by either no Network Policies, or L3/L4 Network Policies
only. Any traffic flowing via user space proxies (for example, because an L7
policy is in place, or using Ingress/Gateway API) will be disrupted during
upgrade. Endpoints communicating via the proxy must reconnect to re-establish
connections.

.. _current_release_required_changes:

.. _1.19_upgrade_notes:

1.19 Upgrade Notes
------------------

* MCS-API CoreDNS configuration recommendation has been updated. See
  :ref:`clustermesh_mcsapi_prereqs` for more details.

* The ``v2alpha1`` version of the ``CiliumLoadBalancerIPPool`` CRD has been
  deprecated in favor of the ``v2`` version. Please change
  ``apiVersion: cilium.io/v2alpha1`` to ``apiVersion: cilium.io/v2`` in your
  manifests for all ``CiliumLoadBalancerIPPool`` resources.

* In a Cluster Mesh environment, network policy ingress and egress selectors
  currently select endpoints from all clusters by default, unless one or more
  clusters are explicitly specified in the policy itself. The
  ``policy-default-local-cluster`` flag changes this behavior to select only
  endpoints from the local cluster, unless explicitly specified, to improve
  the default security posture. This option is now enabled by default in
  Cilium v1.19. If you are using Cilium ClusterMesh and network policies, you
  need to update your network policies to prevent this change from breaking
  connectivity for applications across different clusters. See
  :ref:`change_policy_default_local_cluster` for more details and migration
  recommendations.

* Kafka Network Policy support is deprecated and will be removed in Cilium
  v1.20.

* Hubble field mask support was stabilized. In the Observer gRPC API,
  ``GetFlowsRequest.Experimental.field_mask`` was removed in favor of
  ``GetFlowsRequest.field_mask``. In the Hubble CLI,
  ``--experimental-field-mask`` has been renamed to ``--field-mask`` and
  ``--experimental-use-default-field-mask`` has been renamed to
  ``--use-default-field-mask`` (now ``true`` by default).

* Cilium-agent ClusterMesh status will no longer report the global services
  count. When using the CLI with a version lower than 1.19, the global
  services count will be reported as 0.

* The ``enable-remote-node-masquerade`` config option is introduced. To
  masquerade traffic to remote nodes in BPF masquerading mode, use the option
  ``enable-remote-node-masquerade: "true"``. This option requires
  ``enable-bpf-masquerade: "true"`` and also either
  ``enable-ipv4-masquerade: "true"`` or ``enable-ipv6-masquerade: "true"`` to
  SNAT traffic for IPv4 and IPv6, respectively. This flag currently
  masquerades traffic to node ``InternalIP`` addresses. This may change in
  the future. See :gh-issue:`35823` and :gh-issue:`17177` for further
  discussion on this topic.

* If MCS-API support is enabled, Cilium now installs and manages MCS-API CRDs
  by default. You can set ``clustermesh.mcsapi.installCRDs`` to ``false`` to
  opt out.

* Cilium will stop reporting its local cluster name and node name in metrics.
  Users relying on those should configure their metrics collection system to
  add similar labels instead.

* The previously deprecated ``CiliumBGPPeeringPolicy`` CRD and its control
  plane (BGPv1) have been removed. Please migrate to ``cilium.io/v2`` CRDs
  (``CiliumBGPClusterConfig``, ``CiliumBGPPeerConfig``,
  ``CiliumBGPAdvertisement``, ``CiliumBGPNodeConfigOverride``) before
  upgrading.

* If running Cilium with IPsec, Kube-Proxy Replacement, and BPF Masquerading
  enabled, eBPF Host Routing will be automatically enabled. That was already
  the case when running without IPsec. Running BPF Host Routing with IPsec
  however requires `a kernel bugfix
  <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c4327229948879814229b46aa26a750718888503>`__.
  You can disable BPF Host Routing with ``--enable-host-legacy-routing=true``.

* Certificate generation with the CronJob method for Hubble and ClusterMesh
  has changed. The Job resource to generate certificates is now created like
  any other resource and is no longer part
  of Helm post-install or post-upgrade hooks. This makes it compatible by
  default with the Helm ``--wait`` option or through ArgoCD. You are no
  longer expected to create a Job manually or as part of your own automation
  when bootstrapping your clusters.

* Adding ClusterMesh certificates and keys in Helm values is deprecated. You
  are now expected to pre-create those secrets outside of the Cilium Helm
  chart when setting ``clustermesh.apiserver.tls.auto.enabled=false``.

* Testing for RHEL8 compatibility now uses a RHEL8.10-compatible kernel
  (previously this was a RHEL8.6-compatible kernel).

* The previously deprecated ``FromRequires`` and ``ToRequires`` fields of the
  ``CiliumNetworkPolicy`` and ``CiliumClusterwideNetworkPolicy`` CRDs have
  been removed.

* This release enables packet layer path MTU discovery by default on CNI Pod
  endpoints; this is controlled via the
  ``enable-endpoint-packet-layer-pmtud`` flag.

* The ``clustermesh.apiserver.tls.authMode`` option is set by default to
  ``migration`` as a first step to transition to ``cluster`` in a future
  release. If you are using ``clustermesh.useAPIServer=true`` and
  ``clustermesh.config.enabled=false`` you should either create the
  ``clustermesh-remote-users`` ConfigMap in addition to the existing
  ClusterMesh secrets or set ``clustermesh.apiserver.tls.authMode=legacy``.
  If you have a different configuration, you are not expected to take any
  action and the transition to ``clustermesh.apiserver.tls.authMode=cluster``
  should be fully transparent for you.
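As an illustration of the ``authMode`` opt-out described above, a Helm values fragment for the affected configuration (``useAPIServer=true`` with ``config.enabled=false``) might look as follows. This is a sketch assembled from the option paths named in the text, not a complete or authoritative values file:

```yaml
# Sketch only: keep the legacy ClusterMesh TLS auth mode during the
# migration window described above.
clustermesh:
  useAPIServer: true
  config:
    enabled: false
  apiserver:
    tls:
      authMode: legacy
```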
* The Socket LB tracing message format has been updated; you might briefly
  see parsing errors or malformed trace-sock events during the upgrade to
  Cilium v1.19.

* The Cilium MCS-API implementation now raises a port conflict when any
  exported Service has ports that do not exactly match the oldest exported
  Service.

* DNS policy match patterns now support a wildcard prefix (``**.``) to match
  multi-level subdomains. For usage see :ref:`DNS based` policies. This
  change introduces a difference in behavior for existing policies with a
  ``**.`` wildcard prefix in match patterns. This pattern now selects all
  cascaded subdomains in the prefix as opposed to just a single level. For
  example, ``**.cilium.io`` now selects both ``app.cilium.io`` and
  ``test.app.cilium.io``, as opposed to just ``app.cilium.io`` previously.

* The ``bpf.datapathMode=auto`` config option has been introduced. If set,
  Cilium will probe the underlying host for netkit support and, if found,
  netkit mode will be selected at runtime. Otherwise, Cilium will fall back
  to the standard veth mode. This has the side effect of splitting the
  datapath mode into "configured mode" and "operational mode" in status
  outputs, where they differ. The default remains ``bpf.datapathMode=veth``
  but may change in future releases.

Removed Options
~~~~~~~~~~~~~~~

* The previously deprecated ``--bpf-lb-proto-diff`` flag has been removed.

* The previously deprecated PCAP recorder feature and its accompanying flags
  (``--enable-recorder``, ``--hubble-recorder-*``) have been removed.

* The previously deprecated ``--enable-session-affinity``,
  ``--enable-internal-traffic-policy``, and
  ``--enable-svc-source-range-check`` flags have been removed. Their
  corresponding features are enabled by default.

* The previously deprecated ``--enable-node-port``, ``--enable-host-port``,
  and ``--enable-external-ips`` flags have been removed. To enable the
  corresponding features, users must set ``--kube-proxy-replacement=true``.
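The new ``**.`` wildcard semantics described above can be illustrated with a plain regular expression. This sketch only mimics the documented matching behavior (one or more subdomain labels before the suffix) and is not Cilium's implementation:

```shell
# Approximate the documented "**.cilium.io" semantics with an ERE
# (illustration only, not Cilium's matcher).
matches() {
  echo "$1" | grep -Eq '^([a-z0-9-]+\.)+cilium\.io$'
}
matches app.cilium.io      && echo "app.cilium.io matches"
matches test.app.cilium.io && echo "test.app.cilium.io matches"
matches cilium.io          || echo "cilium.io does not match"
```

Note that the bare apex ``cilium.io`` does not match, consistent with the examples in the note above.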
* The previously deprecated custom calls feature (``--enable-custom-calls``)
  has been removed.

* The previously deprecated ``--enable-ipv4-egress-gateway`` flag has been
  removed. To enable the corresponding feature, users must set
  ``--enable-egress-gateway=true``.

* The previously deprecated ``--egress-multi-home-ip-rule-compat`` flag has
  been removed. If you are running ENI IPAM mode and had this flag explicitly
  set to ``true``, please unset it and let Cilium v1.18 migrate your rules
  prior to the upgrade to Cilium v1.19. Azure IPAM users are unaffected by
  this change, as Cilium continues to use old-style IP rules with Azure IPAM.

* The previously deprecated ``--l2-pod-announcements-interface`` flag
  has been removed. The ``--l2-pod-announcements-interface-pattern`` flag
  should be used instead.

Deprecated Options
~~~~~~~~~~~~~~~~~~

* The ``--enable-ipsec-encrypted-overlay`` flag has no effect and will be
  removed in Cilium 1.20. Starting from Cilium 1.18, IPsec encryption is
  always applied after overlay encapsulation, and therefore this special
  opt-in flag is no longer needed.

* The ``--aws-pagination-enabled`` flag for cilium-operator is now deprecated
  in favor of the more flexible ``--aws-max-results-per-call`` flag. The new
  flag defaults to ``0`` (unpaginated, letting AWS determine the optimal page
  size), which provides better performance in most environments. If AWS
  returns an ``OperationNotPermitted`` error indicating too many results, the
  operator will automatically switch to paginated requests
  (``MaxResults=1000``) for all future API calls. Users with very large AWS
  accounts can set ``--aws-max-results-per-call=1000`` upfront to force
  pagination from the start. The deprecated flag still works during the
  deprecation period (``true`` maps to ``1000``, ``false`` maps to ``0``) and
  will be removed in Cilium 1.20.

* The flags ``--enable-encryption-strict-mode``,
  ``--encryption-strict-mode-cidr`` and
  ``--encryption-strict-mode-allow-remote-node-identities`` have been
  deprecated and will be removed in Cilium 1.20. Use the egress-specific
  options ``--enable-encryption-strict-mode-egress``,
  ``--encryption-strict-egress-cidr`` and
  ``--encryption-strict-egress-allow-remote-node-identities`` instead.
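The strict-mode flag migration above can be sketched as ``cilium-config`` ConfigMap entries. The keys are simply the flag names from the text with the leading dashes dropped, and the CIDR value is a placeholder:

```yaml
# Deprecated keys, to be removed in Cilium 1.20:
#   enable-encryption-strict-mode: "true"
#   encryption-strict-mode-cidr: "10.0.0.0/8"          # placeholder CIDR
#   encryption-strict-mode-allow-remote-node-identities: "true"
# Egress-specific replacements:
enable-encryption-strict-mode-egress: "true"
encryption-strict-egress-cidr: "10.0.0.0/8"            # placeholder CIDR
encryption-strict-egress-allow-remote-node-identities: "true"
```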
Helm Options
~~~~~~~~~~~~

* The Helm option ``clustermesh.enableMCSAPISupport`` has been deprecated in
  favor of ``clustermesh.mcsapi.enabled`` and will be removed in Cilium 1.20.

* The Helm option ``clustermesh.config.clusters`` now supports a new format
  based on a dict in addition to the previous list format. The new format is
  recommended for users installing Cilium ClusterMesh without the Cilium CLI
  and lets you organize your cluster definitions across multiple Helm values
  files. See the Cilium Helm chart documentation or values file for more
  details.

* The Helm options ``encryption.strictMode.enabled``,
  ``encryption.strictMode.cidr`` and
  ``encryption.strictMode.allowRemoteNodeIdentities`` have been deprecated
  and will be removed in Cilium 1.20. Use the egress-specific options
  ``encryption.strictMode.egress.enabled``,
  ``encryption.strictMode.egress.cidr`` and
  ``encryption.strictMode.egress.allowRemoteNodeIdentities`` instead.

Agent Options
~~~~~~~~~~~~~

* The new agent flag ``encryption-strict-mode-ingress`` allows dropping any
  pod-to-pod traffic that hasn't been encrypted. It is only available when
  WireGuard and tunneling are enabled as well. Note that enabling this
  feature directly with the upgrade can lead to intermittent packet drops.

Operator Options
~~~~~~~~~~~~~~~~

* The ``--unmanaged-pod-watcher-interval`` flag type has been changed from
  ``int`` (seconds) to ``time.Duration`` for improved usability and
  consistency with other Cilium configuration options. If you have this flag
  explicitly configured, update your configuration to use duration format
  (e.g., ``15s``, ``1m``, ``90s``). The default value remains 15 seconds.

  .. code-block:: bash

     # Before (deprecated):
     --unmanaged-pod-watcher-interval=15

     # After:
     --unmanaged-pod-watcher-interval=15s

  Note: When using Helm, the ``operator.unmanagedPodWatcher.intervalSeconds``
  value now accepts both integers (for backward compatibility) and duration
  strings.
  Numeric values will be automatically converted to duration strings (e.g.,
  ``15`` becomes ``"15s"``).

Cluster Mesh API Server Options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Bugtool Options
~~~~~~~~~~~~~~~

Added Metrics
~~~~~~~~~~~~~

* ``cilium_agent_clustermesh_remote_cluster_endpoints`` was added and reports
  the total number of endpoints per remote cluster in a ClusterMesh
  environment.

Removed Metrics
~~~~~~~~~~~~~~~

* ``k8s_internal_traffic_policy_enabled`` has been removed, because the
  corresponding feature is enabled by default.

* ``endpoint_max_ifindex`` has been removed, because the corresponding
  datapath limitation no longer applies.

Changed Metrics
~~~~~~~~~~~~~~~

The following metrics previously had instances (i.e. for some watcher K8s
resource type labels) under ``workqueue_``. In this release any such metrics
have been renamed and combined into the correct metric name prefixed with
``cilium_operator_``. As well, any remaining Operator k8s workqueue metrics
that use the label ``queue_name`` have had it renamed to ``name`` to be
consistent with agent k8s workqueue metrics.

* The
https://github.com/cilium/cilium/blob/main//Documentation/operations/upgrade.rst
* The metric ``workqueue_adds_total`` has been renamed and combined into
  ``cilium_operator_k8s_workqueue_adds_total``; the label ``queue_name`` has
  been renamed to ``name``.
* The metric ``workqueue_depth`` has been renamed and combined into
  ``cilium_operator_k8s_workqueue_depth``; the label ``queue_name`` has been
  renamed to ``name``.
* The metric ``workqueue_longest_running_processor_seconds`` has been
  renamed and combined into
  ``cilium_operator_k8s_workqueue_longest_running_processor_seconds``; the
  label ``queue_name`` has been renamed to ``name``.
* The metric ``workqueue_queue_duration_seconds`` has been renamed and
  combined into ``cilium_operator_k8s_workqueue_queue_duration_seconds``;
  the label ``queue_name`` has been renamed to ``name``.
* The metric ``workqueue_retries_total`` has been renamed and combined into
  ``cilium_operator_k8s_workqueue_retries_total``; the label ``queue_name``
  has been renamed to ``name``.
* The metric ``workqueue_unfinished_work_seconds`` has been renamed and
  combined into ``cilium_operator_k8s_workqueue_unfinished_work_seconds``;
  the label ``queue_name`` has been renamed to ``name``.
* The metric ``workqueue_work_duration_seconds`` has been renamed and
  combined into ``cilium_operator_k8s_workqueue_work_duration_seconds``; the
  label ``queue_name`` has been renamed to ``name``.
* ``k8s_client_rate_limiter_duration_seconds`` no longer has the labels
  ``path`` and ``method``.
* ``hubble_icmp_total`` has been fixed to correctly use the ``family`` label
  value ``IPv6`` on ICMPv6 flows instead of ``IPv4``.
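If you have dashboards or alerting rules that still reference the old
Operator workqueue series, the rename is mechanical. The following is an
illustrative one-liner (assuming GNU sed), not an official migration tool;
review the output before committing it:

```shell
# Illustrative only: rewrite old Operator workqueue metric names and the
# queue_name label to the new cilium_operator_k8s_workqueue_* form.
echo 'sum(rate(workqueue_adds_total{queue_name="ciliumendpointslice"}[5m]))' |
sed -E 's/\bworkqueue_(adds_total|depth|longest_running_processor_seconds|queue_duration_seconds|retries_total|unfinished_work_seconds|work_duration_seconds)\{queue_name=/cilium_operator_k8s_workqueue_\1{name=/g'
# -> sum(rate(cilium_operator_k8s_workqueue_adds_total{name="ciliumendpointslice"}[5m]))
```

Pipe your dashboard JSON or Prometheus rule files through the same ``sed``
expression instead of the ``echo`` line.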
The following metrics:

* ``cilium_agent_clustermesh_global_services``
* ``cilium_operator_clustermesh_global_services``
* ``cilium_operator_clustermesh_global_service_exports``

now report per-cluster metrics instead of a "global" count and were renamed
respectively to:

* ``cilium_agent_clustermesh_remote_cluster_services``
* ``cilium_operator_clustermesh_remote_cluster_services``
* ``cilium_operator_clustermesh_remote_cluster_service_exports``

The following metrics no longer report a ``source_cluster`` and a
``source_node_name`` label:

* ``node_health_connectivity_status``
* ``node_health_connectivity_latency_seconds``
* ``bootstrap_seconds``
* ``*_remote_clusters``
* ``*_remote_cluster_last_failure_ts``
* ``*_remote_cluster_readiness_status``
* ``*_remote_cluster_failures``
* ``*_remote_cluster_nodes``
* ``*_remote_cluster_services``
* ``*_remote_cluster_endpoints``
* ``cilium_operator_clustermesh_remote_cluster_service_exports``

Deprecated Metrics
~~~~~~~~~~~~~~~~~~

* ``cilium_agent_bootstrap_seconds`` is now deprecated. Please use
  ``cilium_hive_jobs_oneshot_last_run_duration_seconds`` of the respective
  job instead.

Advanced
========

Upgrade Impact
--------------

Upgrades are designed to have minimal impact on your running deployment.
Networking connectivity, policy enforcement and load balancing will remain
functional in general. The following is a list of operations that will not
be available during the upgrade:

* API-aware policy rules are enforced in user space proxies which run as
  part of the Cilium pod. Upgrading Cilium causes the proxy to restart,
  which results in a connectivity outage and causes connections to be
  reset.
* Existing policy will remain effective, but implementation of new policy
  rules will be postponed until after the upgrade has been completed on a
  particular node.
* Monitoring components such as ``cilium-dbg monitor`` will experience a
  brief outage while the Cilium pod is restarting. Events are queued up and
  read after the upgrade. If the number of events exceeds the event buffer
  size, events will be lost.

Migrating from kvstore-backed identities to Kubernetes CRD-backed identities
----------------------------------------------------------------------------

Beginning with Cilium 1.6, Kubernetes CRD-backed security identities can be
used for smaller clusters. Along with other changes in 1.6, this allows
kvstore-free operation if desired. It is possible to migrate identities from
an existing kvstore deployment to CRD-backed identities. This minimizes
disruptions to traffic as the update rolls out through the cluster.

Migration
~~~~~~~~~

When identities change, existing connections can be disrupted while Cilium
initializes and synchronizes with the shared identity store. The disruption
occurs when different numeric identities are used for the same existing pods
on different instances. When converting to CRD-backed identities, it is
possible to pre-allocate CRD identities so that the numeric identities match
those in the kvstore. This allows new and old Cilium instances in the
rollout to agree. There are two ways to achieve this: you can either run a
one-off ``cilium preflight migrate-identity`` script which will perform a
point-in-time copy of all identities from
the kvstore to CRDs (added in Cilium 1.6), or use the "Double Write"
identity allocation mode, which has Cilium manage identities in both the
kvstore and CRDs at the same time for a seamless migration (added in Cilium
1.17).

Migration with the ``cilium preflight migrate-identity`` script
###############################################################

The ``cilium preflight migrate-identity`` script is a one-off tool that can
be used to copy identities from the kvstore into CRDs. It has a couple of
limitations:

* If an identity is created in the kvstore after the one-off migration has
  been completed, it will not be copied into a CRD. This means that you need
  to perform the migration on a cluster with no identity churn.
* There is no easy way to revert back to
  ``--identity-allocation-mode=kvstore`` if something goes wrong after
  Cilium has been migrated to ``--identity-allocation-mode=crd``.

If these limitations are not acceptable, it is recommended to use the
":ref:`Double Write <double_write_migration>`" identity allocation mode
instead.

The following steps show an example of performing the migration using the
``cilium preflight migrate-identity`` script. It is safe to re-run the
command if desired. It will identify already allocated identities or ones
that cannot be migrated. Note that identity ``34815`` is migrated, ``17003``
is already migrated, and ``11730`` has a conflict, so a new ID is allocated
for those labels. The steps below assume a stable cluster with no new
identities created during the rollout. Once Cilium using CRD-backed
identities is running, it may begin allocating identities in a way that
conflicts with older ones in the kvstore.
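Given the churn limitation above, it can be worth sanity-checking the result
after the script has run by comparing the numeric identity IDs present on
both sides. A minimal sketch, assuming you have dumped one numeric ID per
line into the two (purely illustrative) files below:

```shell
# kvstore-ids.txt and crd-ids.txt are hypothetical dumps: one numeric
# identity ID per line, from the kvstore and from CiliumIdentity CRDs.
sort -u kvstore-ids.txt > /tmp/kvstore-sorted.txt
sort -u crd-ids.txt > /tmp/crd-sorted.txt
# Any output means the two sides disagree: IDs in the left column exist
# only in the kvstore (e.g. created after the one-off migration ran).
comm -3 /tmp/kvstore-sorted.txt /tmp/crd-sorted.txt
```

An empty diff suggests the point-in-time copy caught every identity; any
remaining IDs can be handled by re-running the migration script.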
The cilium preflight manifest requires etcd support and can be built with:

.. cilium-helm-template::
   :namespace: kube-system
   :set: preflight.enabled=true agent=false config.enabled=false operator.enabled=false etcd.enabled=true etcd.ssl=true
   :post-helm-commands: > cilium-preflight.yaml
   :post-commands: kubectl create -f cilium-preflight.yaml

Example migration
~~~~~~~~~~~~~~~~~

.. code-block:: shell-session

   $ kubectl exec -n kube-system cilium-pre-flight-check-1234 -- cilium-dbg preflight migrate-identity
   INFO[0000] Setting up kvstore client
   INFO[0000] Connecting to etcd server... config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.60.11:2379]" subsys=kvstore
   INFO[0000] Setting up kubernetes client
   INFO[0000] Establishing connection to apiserver host="https://192.168.60.11:6443" subsys=k8s
   INFO[0000] Connected to apiserver subsys=k8s
   INFO[0000] Got lease ID 29c66c67db8870c8 subsys=kvstore
   INFO[0000] Got lock lease ID 29c66c67db8870ca subsys=kvstore
   INFO[0000] Successfully verified version of etcd endpoint config=/var/lib/cilium/etcd-config.yml endpoints="[https://192.168.60.11:2379]" etcdEndpoint="https://192.168.60.11:2379" subsys=kvstore version=3.3.13
   INFO[0000] CRD (CustomResourceDefinition) is installed and up-to-date name=CiliumNetworkPolicy/v2 subsys=k8s
   INFO[0000] Updating CRD (CustomResourceDefinition)... name=v2.CiliumEndpoint subsys=k8s
   INFO[0001] CRD (CustomResourceDefinition) is installed and up-to-date name=v2.CiliumEndpoint subsys=k8s
   INFO[0001] Updating CRD (CustomResourceDefinition)... name=v2.CiliumNode subsys=k8s
   INFO[0002] CRD (CustomResourceDefinition) is installed and up-to-date name=v2.CiliumNode subsys=k8s
   INFO[0002] Updating CRD (CustomResourceDefinition)... name=v2.CiliumIdentity subsys=k8s
   INFO[0003] CRD (CustomResourceDefinition) is installed and up-to-date name=v2.CiliumIdentity subsys=k8s
   INFO[0003] Listing identities in kvstore
   INFO[0003] Migrating identities to CRD
   INFO[0003] Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination labels="map[]" subsys=crd-allocator
   INFO[0003] Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination labels="map[]" subsys=crd-allocator
   INFO[0003] Skipped non-kubernetes labels when labelling ciliumidentity. All labels will still be used in identity determination labels="map[]" subsys=crd-allocator
   INFO[0003] Migrated identity identity=34815 identityLabels="k8s:class=tiefighter;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;"
   WARN[0003] ID is allocated to a different key in CRD. A new ID will be allocated for the this key identityLabels="k8s:class=deathstar;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;" oldIdentity=11730
   INFO[0003] Reusing existing global key key="k8s:class=deathstar;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;" subsys=allocator
   INFO[0003] New ID allocated for key in CRD identity=17281 identityLabels="k8s:class=deathstar;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=empire;" oldIdentity=11730
   INFO[0003] ID was already allocated to this key. It is already migrated identity=17003 identityLabels="k8s:class=xwing;k8s:io.cilium.k8s.policy.cluster=default;k8s:io.cilium.k8s.policy.serviceaccount=default;k8s:io.kubernetes.pod.namespace=default;k8s:org=alliance;"

.. note::

   It is also possible to use the ``--k8s-kubeconfig-path`` and
   ``--kvstore-opt`` ``cilium`` CLI options with the
   preflight command. The default is to derive the configuration as
   cilium-agent does.

   .. code-block:: shell-session

      cilium preflight migrate-identity --k8s-kubeconfig-path /var/lib/cilium/cilium.kubeconfig --kvstore etcd --kvstore-opt etcd.config=/var/lib/cilium/etcd-config.yml

Once the migration is complete, confirm the endpoint identities match by
listing the endpoints stored in CRDs and in etcd:

.. code-block:: shell-session

   $ kubectl get ciliumendpoints -A  # new CRD-backed endpoints
   $ kubectl exec -n kube-system cilium-1234 -- cilium-dbg endpoint list  # existing etcd-backed endpoints

Clearing CRD identities
~~~~~~~~~~~~~~~~~~~~~~~

If a migration has gone wrong, it is possible to start with a clean slate.
Ensure that no Cilium instances are running with
``--identity-allocation-mode=crd`` and execute:

.. code-block:: shell-session

   $ kubectl delete ciliumid --all

.. _double_write_migration:

Migration with the "Double Write" identity allocation mode
##########################################################

.. include:: ../beta.rst

The "Double Write" identity allocation mode allows Cilium to allocate
identities as KVStore values *and* as CRDs at the same time. This mode has
two versions: one where the source of truth comes from the kvstore
(``--identity-allocation-mode=doublewrite-readkvstore``), and one where the
source of truth comes from CRDs
(``--identity-allocation-mode=doublewrite-readcrd``).

The high-level migration plan looks as follows:

#. Starting state: Cilium is running in KVStore mode.
#. Switch Cilium to "Double Write" mode with all reads happening from the
   KVStore. This is almost the same as the pure KVStore mode, with the only
   difference being that all identities are duplicated as CRDs but are not
   used.
#. Switch Cilium to "Double Write" mode with all reads happening from CRDs.
   This is equivalent to Cilium running in pure CRD mode, but identities
   will still be updated in the KVStore to allow for the possibility of a
   fast rollback.
#. Switch Cilium to CRD mode. The KVStore will no longer be used and will
   be ready for decommission.

This allows you to perform a gradual and seamless migration with the
possibility of a fast rollback at steps two or three. Furthermore, when the
"Double Write" mode is enabled, the Operator will emit additional metrics to
help monitor the migration progress. These metrics can be used for alerting
about identity inconsistencies between the KVStore and CRDs. Note that you
can also use this mode to migrate from CRD to KVStore mode; all operations
simply need to be repeated in reverse order.

Rollout Instructions
~~~~~~~~~~~~~~~~~~~~

#. Re-deploy first the Operator and then the Agents with
   ``--identity-allocation-mode=doublewrite-readkvstore``.
#. Monitor the Operator metrics and logs to ensure that all identities have
   converged between the KVStore and CRDs.
   The relevant metrics emitted by the Operator are:

   * ``cilium_operator_identity_crd_total_count`` and
     ``cilium_operator_identity_kvstore_total_count`` report the total
     number of identities in CRDs and in the KVStore respectively.
   * ``cilium_operator_identity_crd_only_count`` and
     ``cilium_operator_identity_kvstore_only_count`` report the number of
     identities that are only in CRDs or only in the KVStore respectively,
     to help detect inconsistencies.

   In case further investigation is needed, the Operator logs will contain
   detailed information about the discrepancies between KVStore and CRD
   identities. Note that garbage collection for KVStore identities and CRD
   identities happens at slightly different times, so it is possible to see
   discrepancies in the metrics for certain periods of time, depending on
   the ``--identity-gc-interval`` and ``--identity-heartbeat-timeout``
   settings.

#. Once all identities have converged, re-deploy the Operator and the
   Agents with ``--identity-allocation-mode=doublewrite-readcrd``. This
   will cause Cilium to read identities only from CRDs, but continue to
   write them to the KVStore.
#. Once you are ready to decommission the KVStore, re-deploy first the Agents
   and then the Operator with ``--identity-allocation-mode=crd``. This will
   make Cilium read and write identities only to CRDs.
#. You can now decommission the KVStore.

.. _change_policy_default_local_cluster:

Preparing for a ``policy-default-local-cluster`` change
#######################################################

Cilium network policies used to implicitly select endpoints from all the
clusters. Cilium 1.18 introduced a new option called
``policy-default-local-cluster``, which will be set by default in Cilium
1.19. This option restricts endpoint selection to the local cluster by
default. If you are using ClusterMesh and network policies, this is a
**breaking change** and you **need to take action** before upgrading to
Cilium 1.19.

This new option can be set in the ConfigMap or via the Helm value
``clustermesh.policyDefaultLocalCluster``. You can set
``policy-default-local-cluster`` to ``false`` in Cilium 1.19 to keep the
existing behavior; however, this option will be deprecated and eventually
removed in a future release, so you should plan your migration to set
``policy-default-local-cluster`` to ``true``.

Migrating network policies in practice
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The command ``cilium clustermesh inspect-policy-default-local-cluster
--all-namespaces`` can help you discover all the policies that will change
as a result of changing ``policy-default-local-cluster``. You can also
replace ``--all-namespaces`` with ``-n my-namespace`` if you want to inspect
policies from only a particular namespace.
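Because the change affects every cluster in the mesh, you will likely want
to run the inspection against each of them. A small sketch that prints the
command per kube context (the context names below are placeholders, not
real values):

```shell
# Print the inspection command for each (placeholder) kube context so the
# sweep can be reviewed before it is executed against the clusters.
for ctx in cluster-1 cluster-2; do
  echo "cilium clustermesh inspect-policy-default-local-cluster --all-namespaces --context ${ctx}"
done
```

Drop the ``echo`` to execute the commands directly once the list of
contexts matches your mesh.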
Below is an example where one network policy needs to be updated:

.. code-block:: shell-session

   $ cilium clustermesh inspect-policy-default-local-cluster --all-namespaces
   ⚠️ CiliumNetworkPolicy 0/1
      ⚠️ default/allow-from-bar
   ✅ CiliumClusterWideNetworkPolicy 0/0
   ✅ NetworkPolicy 0/0

In this situation only one CiliumNetworkPolicy is affected by a
``policy-default-local-cluster`` change. Let's take a look at the policy:

.. code-block:: yaml

   apiVersion: "cilium.io/v2"
   kind: CiliumNetworkPolicy
   metadata:
     name: allow-from-bar
     namespace: default
   spec:
     description: "Allow ingress traffic from bar"
     endpointSelector:
       matchLabels:
         name: foo
     ingress:
     - fromEndpoints:
       - matchLabels:
           name: bar

This network policy does not explicitly select a cluster. This means that
with ``policy-default-local-cluster`` set to ``false`` it allows traffic
coming from ``bar`` in any cluster connected in your ClusterMesh. With
``policy-default-local-cluster`` set to ``true``, this policy allows traffic
from ``bar`` from only the local cluster instead. If ``foo`` and ``bar``
are always in the same cluster, no further action is necessary.

If you want to make this change on this individual policy rather than at a
global level, or if ``bar`` is located on a remote cluster, you can update
your policy like this:

.. code-block:: yaml

   apiVersion: "cilium.io/v2"
   kind: CiliumNetworkPolicy
   metadata:
     name: allow-from-bar
     namespace: default
   spec:
     description: "Allow ingress traffic from bar"
     endpointSelector:
       matchLabels:
         name: foo
     ingress:
     - fromEndpoints:
       - matchLabels:
           name: bar
           io.cilium.k8s.policy.cluster: fixme-cluster-name

If ``bar`` is located in multiple clusters, you can also use a
``matchExpressions`` selector matching multiple clusters like this:

.. code-block:: yaml

   apiVersion: "cilium.io/v2"
   kind: CiliumNetworkPolicy
   metadata:
     name: allow-from-bar
     namespace: default
   spec:
     description: "Allow ingress traffic from bar"
     endpointSelector:
       matchLabels:
         name: foo
     ingress:
     - fromEndpoints:
       - matchLabels:
           name: bar
         matchExpressions:
         - key: io.cilium.k8s.policy.cluster
           operator: In
           values:
           - fixme-cluster-name-1
           - fixme-cluster-name-2

Alternatively, you can allow traffic from ``bar`` located in every cluster
and restore the same behavior as setting ``policy-default-local-cluster`` to
``false``, but on this individual policy only:

.. code-block:: yaml

   apiVersion: "cilium.io/v2"
   kind: CiliumNetworkPolicy
   metadata:
     name: allow-from-bar
     namespace: default
   spec:
     description: "Allow ingress traffic from bar"
     endpointSelector:
       matchLabels:
         name: foo
     ingress:
     - fromEndpoints:
       - matchLabels:
           name: bar
         matchExpressions:
         - key: io.cilium.k8s.policy.cluster
           operator: Exists

.. _cnp_validation:

CNP Validation
--------------

Running the CNP Validator will make sure the policies deployed in the cluster
"cilium.io/v2" kind: CiliumNetworkPolicy metadata: name: allow-from-bar namespace: default spec: description: "Allow ingress traffic from bar" endpointSelector: matchLabels: name: foo ingress: - fromEndpoints: - matchLabels: name: bar matchExpressions: - key: io.cilium.k8s.policy.cluster operator: Exists .. \_cnp\_validation: CNP Validation -------------- Running the CNP Validator will make sure the policies deployed in the cluster are valid. It is important to run this validation before an upgrade so it will make sure Cilium has a correct behavior after upgrade. Avoiding doing this validation might cause Cilium from updating its ``NodeStatus`` in those invalid Network Policies as well as in the worst case scenario it might give a false sense of security to the user if a policy is badly formatted and Cilium is not enforcing that policy due a bad validation schema. This CNP Validator is automatically executed as part of the pre-flight check :ref:`pre\_flight`. Start by deployment the ``cilium-pre-flight-check`` and check if the ``Deployment`` shows READY 1/1, if it does not check the pod logs. .. code-block:: shell-session $ kubectl get deployment -n kube-system cilium-pre-flight-check -w NAME READY UP-TO-DATE AVAILABLE AGE cilium-pre-flight-check 0/1 1 0 12s $ kubectl logs -n kube-system deployment/cilium-pre-flight-check -c cnp-validator --previous level=info msg="Setting up kubernetes client" level=info msg="Establishing connection to apiserver" host="https://172.20.0.1:443" subsys=k8s level=info msg="Connected to apiserver" subsys=k8s level=info msg="Validating CiliumNetworkPolicy 'default/cidr-rule': OK! 
level=error msg="Validating CiliumNetworkPolicy 'default/cnp-update': unexpected validation error: spec.labels: Invalid value: \"string\": spec.labels in body must be of type object: \"string\"" level=error msg="Found invalid CiliumNetworkPolicy" In this example, we can see the ``CiliumNetworkPolicy`` in the ``default`` namespace with the name ``cnp-update`` is not valid for the Cilium version we are trying to upgrade. In order to fix this policy we need to edit it, we can do this by saving the policy locally and modify it. For this example it seems the ``.spec.labels`` has set an array of strings which is not correct as per the official schema. .. code-block:: shell-session $ kubectl get cnp -n default cnp-update -o yaml > cnp-bad.yaml $ cat cnp-bad.yaml apiVersion: cilium.io/v2 kind: CiliumNetworkPolicy [...] spec: endpointSelector: matchLabels: id: app1 ingress: - fromEndpoints: - matchLabels: id: app2 toPorts: - ports: - port: "80" protocol: TCP labels: - custom=true [...] To fix this policy we need to set the ``.spec.labels`` with the right format and commit these changes into Kubernetes. .. code-block:: shell-session $ cat cnp-bad.yaml apiVersion: cilium.io/v2 kind: CiliumNetworkPolicy [...] spec: endpointSelector: matchLabels: id: app1 ingress: - fromEndpoints: - matchLabels: id: app2 toPorts: - ports: - port: "80" protocol: TCP labels: - key: "custom" value: "true" [...] $ $ kubectl apply -f ./cnp-bad.yaml After applying the fixed policy we can delete the pod that was validating the policies so that Kubernetes creates a new pod immediately to verify if the fixed policies are now valid. .. 
code-block:: shell-session $ kubectl delete pod -n kube-system -l k8s-app=cilium-pre-flight-check-deployment pod "cilium-pre-flight-check-86dfb69668-ngbql" deleted $ kubectl get deployment -n kube-system cilium-pre-flight-check NAME READY UP-TO-DATE AVAILABLE AGE cilium-pre-flight-check 1/1 1 1 55m $ kubectl logs -n kube-system deployment/cilium-pre-flight-check -c cnp-validator level=info msg="Setting up kubernetes client" level=info msg="Establishing connection to apiserver" host="https://172.20.0.1:443" subsys=k8s level=info msg="Connected to apiserver" subsys=k8s level=info msg="Validating CiliumNetworkPolicy 'default/cidr-rule': OK! level=info msg="Validating CiliumNetworkPolicy 'default/cnp-update': OK! level=info msg="All CCNPs and CNPs valid!" Once they are valid you can continue with the upgrade process. :ref:`cleanup\_preflight\_check`
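When many policies share the same malformed string-style ``labels``, the
string-to-object rewrite shown above can be sketched mechanically. This is
an illustrative GNU sed transformation, not an official tool, so always
review the result before applying it:

```shell
# Illustrative only: turn string-form entries such as "- custom=true" under
# .spec.labels into the {key, value} object form expected by the schema.
sed -E 's/^( *)- ([A-Za-z0-9_.-]+)=([^ ]+)$/\1- key: "\2"\n\1  value: "\3"/' cnp-bad.yaml
```

Because the pattern matches any ``- name=value`` list item, run it only on
the ``labels`` section (or diff the output) before ``kubectl apply``.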
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _admin_guide:

###############
Troubleshooting
###############

This document describes how to troubleshoot Cilium in different deployment
modes. It focuses on a full deployment of Cilium within a datacenter or
public cloud. If you are just looking for a simple way to experiment, we
highly recommend trying out the :ref:`getting_started` guide instead.

This guide assumes that you have read the :ref:`network_root` and
:ref:`security_root` documentation, which explain all the components and
concepts.

We use GitHub issues to maintain a list of `Cilium Frequently Asked
Questions (FAQ)`_. You can also check there to see whether your questions
are already addressed.

Component & Cluster Health
==========================

Kubernetes
----------

An initial overview of Cilium can be retrieved by listing all pods to verify
whether all pods have the status ``Running``:

.. code-block:: shell-session

   $ kubectl -n kube-system get pods -l k8s-app=cilium
   NAME           READY   STATUS    RESTARTS   AGE
   cilium-2hq5z   1/1     Running   0          4d
   cilium-6kbtz   1/1     Running   0          4d
   cilium-klj4b   1/1     Running   0          4d
   cilium-zmjj9   1/1     Running   0          4d

If Cilium encounters a problem that it cannot recover from, it will
automatically report the failure state via ``cilium-dbg status``, which is
regularly queried by the Kubernetes liveness probe to automatically restart
Cilium pods. If a Cilium pod is in state ``CrashLoopBackOff``, this
indicates a permanent failure scenario.

Detailed Status
~~~~~~~~~~~~~~~

If a particular Cilium pod is not in running state, the status and health of
the agent on that node can be retrieved by running ``cilium-dbg status`` in
the context of that pod:

.. code-block:: shell-session

   $ kubectl -n kube-system exec cilium-2hq5z -- cilium-dbg status
   KVStore:                Ok   etcd: 1/1 connected: http://demo-etcd-lab--a.etcd.tgraf.test1.lab.corp.isovalent.link:2379 - 3.2.5 (Leader)
   ContainerRuntime:       Ok   docker daemon: OK
   Kubernetes:             Ok   OK
   Kubernetes APIs:        ["cilium/v2::CiliumNetworkPolicy", "networking.k8s.io/v1::NetworkPolicy", "core/v1::Service", "core/v1::Endpoint", "core/v1::Node", "CustomResourceDefinition"]
   Cilium:                 Ok   OK
   NodeMonitor:            Disabled
   Cilium health daemon:   Ok
   Controller Status:      14/14 healthy
   Proxy Status:           OK, ip 10.2.0.172, port-range 10000-20000
   Cluster health:         4/4 reachable   (2018-06-16T09:49:58Z)

Alternatively, the ``k8s-cilium-exec.sh`` script can be used to run
``cilium-dbg status`` on all nodes. This will provide detailed status and
health information of all nodes in the cluster:

.. code-block:: shell-session

   curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-cilium-exec.sh
   chmod +x ./k8s-cilium-exec.sh

... and run ``cilium-dbg status`` on all nodes:

.. code-block:: shell-session

   $ ./k8s-cilium-exec.sh cilium-dbg status
   KVStore:                Ok   Etcd: http://127.0.0.1:2379 - (Leader) 3.1.10
   ContainerRuntime:       Ok
   Kubernetes:             Ok   OK
   Kubernetes APIs:        ["networking.k8s.io/v1beta1::Ingress", "core/v1::Node", "CustomResourceDefinition", "cilium/v2::CiliumNetworkPolicy", "networking.k8s.io/v1::NetworkPolicy", "core/v1::Service", "core/v1::Endpoint"]
   Cilium:                 Ok   OK
   NodeMonitor:            Listening for events on 2 CPUs with 64x4096 of shared memory
   Cilium health daemon:   Ok
   Controller Status:      7/7 healthy
   Proxy Status:           OK, ip 10.15.28.238, 0 redirects, port-range 10000-20000
   Cluster health:         1/1 reachable   (2018-02-27T00:24:34Z)

Detailed information about the status of Cilium can be inspected with the
``cilium-dbg status --verbose`` command. Verbose output includes detailed
IPAM state (allocated addresses), Cilium controller status, and details of
the Proxy status.

.. _ts_agent_logs:

Logs
~~~~

To retrieve log files of a cilium pod, run (replace ``cilium-1234`` with a
pod name returned by ``kubectl -n kube-system get pods -l k8s-app=cilium``):

.. code-block:: shell-session

   kubectl -n kube-system logs --timestamps cilium-1234

If the cilium pod was already restarted due to the liveness problem after
encountering an issue, it can be useful to retrieve the logs of the pod
before the last restart:

.. code-block:: shell-session

   kubectl -n kube-system logs --timestamps -p cilium-1234

Generic
-------

When logged in to a host running Cilium, the cilium CLI can be invoked
directly, e.g.:

.. code-block:: shell-session

   $ cilium-dbg status
   KVStore:                Ok   etcd: 1/1 connected: https://192.168.60.11:2379 - 3.2.7 (Leader)
   ContainerRuntime:       Ok
   Kubernetes:             Ok   OK
   Kubernetes APIs:        ["core/v1::Endpoint", "networking.k8s.io/v1beta1::Ingress", "core/v1::Node", "CustomResourceDefinition", "cilium/v2::CiliumNetworkPolicy", "networking.k8s.io/v1::NetworkPolicy", "core/v1::Service"]
   Cilium:                 Ok   OK
-p cilium-1234 Generic ------- When logged in a host running Cilium, the cilium CLI can be invoked directly, e.g.: .. code-block:: shell-session $ cilium-dbg status KVStore: Ok etcd: 1/1 connected: https://192.168.60.11:2379 - 3.2.7 (Leader) ContainerRuntime: Ok Kubernetes: Ok OK Kubernetes APIs: ["core/v1::Endpoint", "networking.k8s.io/v1beta1::Ingress", "core/v1::Node", "CustomResourceDefinition", "cilium/v2::CiliumNetworkPolicy", "networking.k8s.io/v1::NetworkPolicy", "core/v1::Service"] Cilium: Ok OK NodeMonitor: Listening for events on 2 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPv4 address pool: 261/65535 allocated IPv6 address pool: 4/4294967295 allocated Controller Status: 20/20 healthy Proxy Status: OK, ip 10.0.28.238, port-range 10000-20000 Hubble: Ok Current/Max Flows: 2542/4096 (62.06%), Flows/s: 164.21 Metrics: Disabled Cluster health: 2/2 reachable (2018-04-11T15:41:01Z) .. \_hubble\_troubleshooting: Observing Flows with Hubble =========================== Hubble is a built-in observability tool which allows you to inspect recent flow events on all endpoints managed by Cilium. Ensure Hubble is running correctly ---------------------------------- To ensure the Hubble client can connect to the Hubble server running inside Cilium, you may use the ``hubble status`` command from within a Cilium pod: .. code-block:: shell-session $ hubble status Healthcheck (via unix:///var/run/cilium/hubble.sock): Ok Current/Max Flows: 4095/4095 (100.00%) Flows/s: 164.21 ``cilium-agent`` must be running with the ``--enable-hubble`` option (default) in order for the Hubble server to be enabled. When deploying Cilium with Helm, make sure to set the ``hubble.enabled=true`` value. To check if Hubble is enabled in your deployment, you may look for the following output in ``cilium-dbg status``: .. code-block:: shell-session $ cilium status ... Hubble: Ok Current/Max Flows: 4095/4095 (100.00%), Flows/s: 164.21 Metrics: Disabled ... .. 
note:: Pods need to be managed by Cilium in order to be observable by Hubble. See how to :ref:`ensure a pod is managed by Cilium` for more details. Observing flows of a specific pod --------------------------------- In order to observe the traffic of a specific pod, you will first have to :ref:`retrieve the name of the cilium instance managing it`. The Hubble CLI is part of the Cilium container image and can be accessed via ``kubectl exec``. The following query for example will show all events related to flows which either originated or terminated in the ``default/tiefighter`` pod in the last three minutes: .. code-block:: shell-session $ kubectl exec -n kube-system cilium-77lk6 -- hubble observe --since 3m --pod default/tiefighter May 4 12:47:08.811: default/tiefighter:53875 -> kube-system/coredns-74ff55c5b-66f4n:53 to-endpoint FORWARDED (UDP) May 4 12:47:08.811: default/tiefighter:53875 -> kube-system/coredns-74ff55c5b-66f4n:53 to-endpoint FORWARDED (UDP) May 4 12:47:08.811: default/tiefighter:53875 <- kube-system/coredns-74ff55c5b-66f4n:53 to-endpoint FORWARDED (UDP) May 4 12:47:08.811: default/tiefighter:53875 <- kube-system/coredns-74ff55c5b-66f4n:53 to-endpoint FORWARDED (UDP) May 4 12:47:08.811: default/tiefighter:50214 <> default/deathstar-c74d84667-cx5kp:80 to-overlay FORWARDED (TCP Flags: SYN) May 4 12:47:08.812: default/tiefighter:50214 <- default/deathstar-c74d84667-cx5kp:80 to-endpoint FORWARDED (TCP Flags: SYN, ACK) May 4 12:47:08.812: default/tiefighter:50214 <> default/deathstar-c74d84667-cx5kp:80 to-overlay FORWARDED (TCP Flags: ACK) May 4 12:47:08.812: default/tiefighter:50214 <> default/deathstar-c74d84667-cx5kp:80 to-overlay FORWARDED (TCP Flags: ACK, PSH) May 4 12:47:08.812: default/tiefighter:50214 <- default/deathstar-c74d84667-cx5kp:80 to-endpoint FORWARDED (TCP Flags: ACK, PSH) May 4 12:47:08.812: default/tiefighter:50214 <> default/deathstar-c74d84667-cx5kp:80 to-overlay FORWARDED (TCP Flags: ACK, FIN) May 4 12:47:08.812: 
default/tiefighter:50214 <- default/deathstar-c74d84667-cx5kp:80 to-endpoint FORWARDED (TCP Flags: ACK, FIN) May 4 12:47:08.812: default/tiefighter:50214 <> default/deathstar-c74d84667-cx5kp:80 to-overlay FORWARDED (TCP Flags: ACK) You may also use ``-o json`` to obtain more detailed information about each flow event. .. note:: \*\*Hubble Relay\*\* allows you to query multiple Hubble instances simultaneously without having to first manually target a specific node. See `Observing flows with Hubble Relay`\_ for more information. Observing flows with Hubble Relay ================================= Hubble Relay is a service which allows to query multiple Hubble instances simultaneously and aggregate the results. See :ref:`hubble\_setup` to enable Hubble Relay if it is not yet enabled and install the Hubble CLI on your local machine. .. include:: /observability/hubble/port-forward.rst You can verify that Hubble Relay can be reached by using the Hubble CLI and
running the following command from your local machine: .. code-block:: shell-session hubble status -P This command should return an output similar to the following: :: Healthcheck (via 127.0.0.1:4245): Ok Current/Max Flows: 16380/16380 (100.00%) Flows/s: 46.19 Connected Nodes: 4/4 You may see details about nodes that Hubble Relay is connected to by running the following command: .. code-block:: shell-session $ hubble list nodes -P NAME STATUS AGE FLOWS/S CURRENT/MAX-FLOWS cluster/node-cp Connected 2m30s 13.94 2227/4095 ( 54.38%) cluster/node-w1 Connected 2m31s 51.37 5108/9840 ( 51.91%) As Hubble Relay shares the same API as individual Hubble instances, you may follow the `Observing flows with Hubble`\_ section, keeping in mind that the limitations of querying individual Hubble instances no longer apply. Connectivity Problems ===================== Cilium connectivity tests ------------------------------------ The Cilium connectivity test deploys a series of services, deployments, and CiliumNetworkPolicy which will use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. .. note:: The connectivity tests will only work in a namespace with no other pods or network policies applied. If there is a Cilium Clusterwide Network Policy enabled, that may also break this connectivity check. To run the connectivity tests, create an isolated test namespace called ``cilium-test`` in which to deploy them. ..
parsed-literal:: kubectl create ns cilium-test kubectl apply --namespace=cilium-test -f \ |SCM\_WEB|\/examples/kubernetes/connectivity-check/connectivity-check.yaml The tests cover various functionality of the system. Below we call out each test type. If tests pass, it suggests functionality of the referenced subsystem. +----------------------------+-----------------------------+-------------------------------+-----------------------------+----------------------------------------+ | Pod-to-pod (intra-host) | Pod-to-pod (inter-host) | Pod-to-service (intra-host) | Pod-to-service (inter-host) | Pod-to-external resource | +============================+=============================+===============================+=============================+========================================+ | eBPF routing is functional | Data plane, routing, network| eBPF service map lookup | VXLAN overlay port if used | Egress, CiliumNetworkPolicy, masquerade| +----------------------------+-----------------------------+-------------------------------+-----------------------------+----------------------------------------+ The pod name indicates the connectivity variant and the readiness and liveness gate indicates success or failure of the test: .. 
code-block:: shell-session $ kubectl get pods -n cilium-test NAME READY STATUS RESTARTS AGE echo-a-6788c799fd-42qxx 1/1 Running 0 69s echo-b-59757679d4-pjtdl 1/1 Running 0 69s echo-b-host-f86bd784d-wnh4v 1/1 Running 0 68s host-to-b-multi-node-clusterip-585db65b4d-x74nz 1/1 Running 0 68s host-to-b-multi-node-headless-77c64bc7d8-kgf8p 1/1 Running 0 67s pod-to-a-allowed-cnp-87b5895c8-bfw4x 1/1 Running 0 68s pod-to-a-b76ddb6b4-2v4kb 1/1 Running 0 68s pod-to-a-denied-cnp-677d9f567b-kkjp4 1/1 Running 0 68s pod-to-b-intra-node-nodeport-8484fb6d89-bwj8q 1/1 Running 0 68s pod-to-b-multi-node-clusterip-f7655dbc8-h5bwk 1/1 Running 0 68s pod-to-b-multi-node-headless-5fd98b9648-5bjj8 1/1 Running 0 68s pod-to-b-multi-node-nodeport-74bd8d7bd5-kmfmm 1/1 Running 0 68s pod-to-external-1111-7489c7c46d-jhtkr 1/1 Running 0 68s pod-to-external-fqdn-allow-google-cnp-b7b6bcdcb-97p75 1/1 Running 0 68s Information about test failures can be determined by describing a failed test pod .. code-block:: shell-session $ kubectl describe pod pod-to-b-intra-node-hostport Warning Unhealthy 6s (x6 over 56s) kubelet, agent1 Readiness probe failed: curl: (7) Failed to connect to echo-b-host-headless port 40000: Connection refused Warning Unhealthy 2s (x3 over 52s) kubelet, agent1 Liveness probe failed: curl: (7) Failed to connect to echo-b-host-headless port 40000: Connection refused .. \_cluster\_connectivity\_health: Checking cluster connectivity health ------------------------------------ Cilium can rule out network fabric related issues when troubleshooting connectivity issues by providing reliable health and latency probes between all cluster nodes and a simulated workload running on each node. By default when Cilium is run, it launches instances of ``cilium-health`` in the background to determine the overall connectivity status of the cluster. 
This tool periodically runs bidirectional traffic across multiple paths through the cluster and through each node using different protocols to determine the health
status of each path and protocol. At any point in time, cilium-health may be queried for the connectivity status of the last probe. .. code-block:: shell-session $ kubectl -n kube-system exec -ti cilium-2hq5z -- cilium-health status --verbose Probe time: 2018-06-16T09:51:58Z Nodes: ip-172-0-52-116.us-west-2.compute.internal (localhost): Host connectivity to 172.0.52.116: ICMP to stack: OK, RTT=315.254µs HTTP to agent: OK, RTT=368.579µs Endpoint connectivity to 10.2.0.183: ICMP to stack: OK, RTT=190.658µs HTTP to agent: OK, RTT=536.665µs ip-172-0-117-198.us-west-2.compute.internal: Host connectivity to 172.0.117.198: ICMP to stack: OK, RTT=1.009679ms HTTP to agent: OK, RTT=1.808628ms Endpoint connectivity to 10.2.1.234: ICMP to stack: OK, RTT=1.016365ms HTTP to agent: OK, RTT=2.29877ms For each node, the connectivity will be displayed for each protocol and path, both to the node itself and to an endpoint on that node. The latency specified is a snapshot at the last time a probe was run, which is typically once per minute. The ICMP connectivity row represents Layer 3 connectivity to the networking stack, while the HTTP connectivity row represents connection to an instance of the ``cilium-health`` agent running on the host or as an endpoint. .. \_monitor: Monitoring Datapath State ------------------------- Sometimes you may experience broken connectivity, which may be due to a number of different causes. A common cause is unwanted packet drops at the network level. The tool ``cilium-dbg monitor`` lets you quickly inspect whether and where packet drops happen.
Following is an example output (use ``kubectl exec`` as in previous examples if running with Kubernetes): .. code-block:: shell-session $ kubectl -n kube-system exec -ti cilium-2hq5z -- cilium-dbg monitor --type drop Listening for events on 2 CPUs with 64x4096 of shared memory Press Ctrl-C to quit xx drop (Policy denied) to endpoint 25729, identity 261->264: fd02::c0a8:210b:0:bf00 -> fd02::c0a8:210b:0:6481 EchoRequest xx drop (Policy denied) to endpoint 25729, identity 261->264: fd02::c0a8:210b:0:bf00 -> fd02::c0a8:210b:0:6481 EchoRequest xx drop (Policy denied) to endpoint 25729, identity 261->264: 10.11.13.37 -> 10.11.101.61 EchoRequest xx drop (Policy denied) to endpoint 25729, identity 261->264: 10.11.13.37 -> 10.11.101.61 EchoRequest xx drop (Invalid destination mac) to endpoint 0, identity 0->0: fe80::5c25:ddff:fe8e:78d8 -> ff02::2 RouterSolicitation The above indicates that a packet to endpoint ID ``25729`` has been dropped due to violation of the Layer 3 policy. Handling drop (CT: Map insertion failed) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If connectivity fails and ``cilium-dbg monitor --type drop`` shows ``xx drop (CT: Map insertion failed)``, then it is likely that the connection tracking table is filling up and the automatic adjustment of the garbage collector interval is insufficient. Setting ``--conntrack-gc-interval`` to an interval lower than the current value may help. This controls the time interval between two garbage collection runs. By default ``--conntrack-gc-interval`` is set to 0 which translates to using a dynamic interval. In that case, the interval is updated after each garbage collection run depending on how many entries were garbage collected. If very few or no entries were garbage collected, the interval will increase; if many entries were garbage collected, it will decrease. The current interval value is reported in the Cilium agent logs. 
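If the dynamic interval is insufficient, both the garbage collection interval and the CT map sizes can be changed through the ``cilium-config`` ConfigMap. The sketch below assumes the agent flags discussed above map to ConfigMap keys of the same name; the values are purely illustrative and should be sized for your workload:

```shell
# Pin the conntrack GC interval to 2 minutes and grow the CT maps
# (illustrative values; pick sizes appropriate for your cluster).
kubectl -n kube-system patch configmap cilium-config --type merge \
  -p '{"data":{"conntrack-gc-interval":"2m","bpf-ct-global-tcp-max":"1000000","bpf-ct-global-any-max":"500000"}}'

# Restart the agents so the new values take effect.
kubectl -n kube-system rollout restart daemonset/cilium
```

The agents must be restarted for these settings to take effect, since the BPF maps are created at startup.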
Alternatively, the values of ``bpf-ct-global-any-max`` and ``bpf-ct-global-tcp-max`` can be increased. Note the trade-offs: lowering ``conntrack-gc-interval`` costs additional CPU, while raising ``bpf-ct-global-any-max`` and ``bpf-ct-global-tcp-max`` increases memory consumption. You can track conntrack garbage collection metrics such as ``datapath\_conntrack\_gc\_runs\_total`` and ``datapath\_conntrack\_gc\_entries`` to gain visibility into garbage collection runs.
Refer to :ref:`metrics` for more details. Enabling datapath debug messages ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ By default, datapath debug messages are disabled, and therefore not shown in ``cilium-dbg monitor -v`` output. To enable them, add ``"datapath"`` to the ``debug-verbose`` option. Policy Troubleshooting ====================== .. \_ensure\_managed\_pod: Ensure pod is managed by Cilium ------------------------------- A potential cause for policy enforcement not functioning as expected is that the networking of the pod selected by the policy is not being managed by Cilium. The following situations result in unmanaged pods: \* The pod is running in host networking and will use the host's IP address directly. Such pods have full network connectivity but Cilium will not enforce security policies for them by default. To enforce policy against these pods, either set ``hostNetwork`` to false or use :ref:`HostPolicies`. \* The pod was started before Cilium was deployed. Cilium only manages pods that have been deployed after Cilium itself was started. Cilium will not provide security policy enforcement for such pods. These pods should be restarted in order to ensure that Cilium can provide security policy enforcement. If pod networking is not managed by Cilium, ingress and egress policy rules selecting the respective pods will not be applied. See the section :ref:`network\_policy` for more details.
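To check a single pod directly, its pod IP should appear in the endpoint list of the Cilium agent running on the same node. A minimal sketch, where the pod name ``luke-pod``, its namespace, and the agent pod name ``cilium-zmjj9`` are taken from the examples in this document and must be substituted with your own:

```shell
# Look up the pod IP, then search for it in the endpoint list of the
# Cilium agent on the same node (names are illustrative).
POD_IP=$(kubectl -n default get pod luke-pod -o jsonpath='{.status.podIP}')
kubectl -n kube-system exec cilium-zmjj9 -- cilium-dbg endpoint list | grep "$POD_IP"
```

If ``grep`` prints no matching endpoint, the pod is likely not managed by Cilium.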
For a quick assessment of whether any pods are not managed by Cilium, the `Cilium CLI `\_ will print the number of managed pods. If this prints that all of the pods are managed by Cilium, then there is no problem: .. code-block:: shell-session $ cilium status /¯¯\ /¯¯\\_\_/¯¯\ Cilium: OK \\_\_/¯¯\\_\_/ Operator: OK /¯¯\\_\_/¯¯\ Hubble: OK \\_\_/¯¯\\_\_/ ClusterMesh: disabled \\_\_/ Deployment cilium-operator Desired: 2, Ready: 2/2, Available: 2/2 Deployment hubble-relay Desired: 1, Ready: 1/1, Available: 1/1 Deployment hubble-ui Desired: 1, Ready: 1/1, Available: 1/1 DaemonSet cilium Desired: 2, Ready: 2/2, Available: 2/2 Containers: cilium-operator Running: 2 hubble-relay Running: 1 hubble-ui Running: 1 cilium Running: 2 Cluster Pods: 5/5 managed by Cilium ... You can run the following script to list the pods which are \*not\* managed by Cilium: .. code-block:: shell-session $ curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-unmanaged.sh $ chmod +x k8s-unmanaged.sh $ ./k8s-unmanaged.sh kube-system/cilium-hqpk7 kube-system/kube-addon-manager-minikube kube-system/kube-dns-54cccfbdf8-zmv2c kube-system/kubernetes-dashboard-77d8b98585-g52k5 kube-system/storage-provisioner Understand the rendering of your policy --------------------------------------- There are always multiple ways to approach a problem. Cilium can provide the rendering of the aggregate policy provided to it, leaving you to simply compare with what you expect the policy to actually be rather than search (and potentially overlook) every policy. At the expense of reading a very large dump of an endpoint, this is often a faster path to discovering errant policy requests in the Kubernetes API. Start by finding the endpoint you are debugging from the following list. There are several cross references for you to use in this list, including the IP address and pod labels: .. 
code-block:: shell-session kubectl -n kube-system exec -ti cilium-q8wvt -- cilium-dbg endpoint list When you find the correct endpoint, the first column of every row is the endpoint ID. Use that to dump the full endpoint information: ..
code-block:: shell-session kubectl -n kube-system exec -ti cilium-q8wvt -- cilium-dbg endpoint get 59084 .. image:: images/troubleshooting\_policy.png :align: center Importing this dump into a JSON-friendly editor can help browse and navigate the information here. At the top level of the dump, there are two nodes of note: \* ``spec``: The desired state of the endpoint \* ``status``: The current state of the endpoint This is the standard Kubernetes control loop pattern. Cilium is the controller here, and it is iteratively working to bring the ``status`` in line with the ``spec``. Opening the ``status``, we can drill down through ``policy.realized.l4``. Do your ``ingress`` and ``egress`` rules match what you expect? If not, the reference to the errant rules can be found in the ``derived-from-rules`` node. Policymap pressure and overflow ------------------------------- The most important step in debugging policymap pressure is finding out which node(s) are impacted. The ``cilium\_bpf\_map\_pressure{map\_name="cilium\_policy\_v2\_\*"}`` metric monitors the endpoint's BPF policymap pressure. This metric exposes the maximum BPF map pressure on the node, meaning the policymap experiencing the most pressure on a particular node. Once the node is known, the troubleshooting steps are as follows: 1. Find the Cilium pod on the node experiencing the problematic policymap pressure and obtain a shell via ``kubectl exec``. 2. Use ``cilium policy selectors`` to get an overview of which selectors are selecting many identities. 3. The type of selector tells you what sort of policy rule could be having an impact. The three existing types of selectors are explained below, each with specific steps depending on the selector. See the steps below corresponding to the type of selector. 4. Consider bumping the policymap size as a last resort. However, keep in mind the following implications: \* Increased memory consumption for each policymap. 
\* Generally, as identities increase in the cluster, the more work Cilium performs. \* At a broader level, if the policy posture is such that all or nearly all identities are selected, this suggests that the posture is too permissive. +---------------+------------------------------------------------------------------------------------------------------------+ | Selector type | Form in ``cilium policy selectors`` output | +===============+============================================================================================================+ | CIDR | ``&LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.1/32: ,}`` | +---------------+------------------------------------------------------------------------------------------------------------+ | FQDN | ``MatchName: , MatchPattern: \*`` | +---------------+------------------------------------------------------------------------------------------------------------+ | Label | ``&LabelSelector{MatchLabels:map[string]string{any.name: curl,k8s.io.kubernetes.pod.namespace: default,}`` | +---------------+------------------------------------------------------------------------------------------------------------+ An example output of ``cilium policy selectors``: .. 
code-block:: shell-session root@kind-worker:/home/cilium# cilium policy selectors SELECTOR LABELS USERS IDENTITIES &LabelSelector{MatchLabels:map[string]string{k8s.io.kubernetes.pod.namespace: kube-system,k8s.k8s-app: kube-dns,},MatchExpressions:[]LabelSelectorRequirement{},} default/tofqdn-dns-visibility 1 16500 &LabelSelector{MatchLabels:map[string]string{reserved.none: ,},MatchExpressions:[]LabelSelectorRequirement{},} default/tofqdn-dns-visibility 1 MatchName: , MatchPattern: \* default/tofqdn-dns-visibility 1 16777231 16777232 16777233 16860295 16860322 16860323 16860324 16860325 16860326 16860327 16860328 &LabelSelector{MatchLabels:map[string]string{any.name: netperf,k8s.io.kubernetes.pod.namespace: default,},MatchExpressions:[]LabelSelectorRequirement{},} default/tofqdn-dns-visibility 1 &LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.1/32: ,},MatchExpressions:[]LabelSelectorRequirement{},} default/tofqdn-dns-visibility 1 16860329 &LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.2/32: ,},MatchExpressions:[]LabelSelectorRequirement{},} default/tofqdn-dns-visibility 1 16860330 &LabelSelector{MatchLabels:map[string]string{cidr.1.1.1.3/32: ,},MatchExpressions:[]LabelSelectorRequirement{},} default/tofqdn-dns-visibility 1 16860331 From the output above, we see that all three selectors are in use. The significant action here is to determine which selector is selecting the most identities, because the policy containing that selector is the likely cause for the policymap pressure. Label ~~~~~ See section on :ref:`identity-relevant labels `. Another aspect to consider is the permissiveness of the policies and whether it could be reduced. CIDR ~~~~ One way to reduce the number of identities selected by a CIDR selector is to broaden the range of the CIDR, if possible. For example, in the above example output, the policy contains a ``/32`` rule for each CIDR, rather than using a wider range like ``/30`` instead. 
Updating the policy with
this rule creates an identity that represents all IPs within the ``/30`` and therefore only requires the selector to select 1 identity. FQDN ~~~~ See section on :ref:`isolating the source of toFQDNs issues regarding identities and policy `. etcd (kvstore) ============== Introduction ------------ Cilium can be operated in CRD-mode and kvstore/etcd mode. When Cilium is running in kvstore/etcd mode, the kvstore becomes a vital component of the overall cluster health as it is required to be available for several operations. Operations for which the kvstore is strictly required when running in etcd mode: Scheduling of new workloads: As part of scheduling workloads/endpoints, agents will perform security identity allocation which requires interaction with the kvstore. If a workload can be scheduled due to re-using a known security identity, then state propagation of the endpoint details to other nodes will still depend on the kvstore and thus packet drops due to policy enforcement may be observed as other nodes in the cluster will not be aware of the new workload. Multi cluster: All state propagation between clusters depends on the kvstore. Node discovery: New nodes must register themselves in the kvstore. Agent bootstrap: The Cilium agent will eventually fail if it can't connect to the kvstore at bootstrap time; however, it will still perform all possible operations while waiting for the kvstore to appear. Operations which \*do not\* require kvstore availability: All datapath operations: All datapath forwarding, policy enforcement and visibility functions for existing workloads/endpoints do not depend on the kvstore. Packets will continue to be forwarded and network policy rules will continue to be enforced.
However, if the agent needs to restart as part of the :ref:`etcd\_recovery\_behavior`, there can be delays in: \* processing of flow events and metrics \* short unavailability of layer 7 proxies NetworkPolicy updates: Network policy updates will continue to be processed and applied. Service updates: All updates to services will be processed and applied. Understanding etcd status ------------------------- The etcd status is reported when running ``cilium-dbg status``. The following line represents the status of etcd:: KVStore: Ok etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=true: https://192.168.60.11:2379 - 3.4.9 (Leader) OK: The overall status. Either ``OK`` or ``Failure``. 1/1 connected: Number of total etcd endpoints and how many of them are reachable. lease-ID: UUID of the lease used for all keys owned by this agent. lock lease-ID: UUID of the lease used for locks acquired by this agent. has-quorum: Status of etcd quorum. Either ``true`` or set to an error. consecutive-errors: Number of consecutive quorum errors. Only printed if errors are present. https://192.168.60.11:2379 - 3.4.9 (Leader): List of all etcd endpoints stating the etcd version and whether the particular endpoint is currently the elected leader. If an etcd endpoint cannot be reached, the error is shown. .. \_etcd\_recovery\_behavior: Recovery behavior ----------------- In the event of an etcd endpoint becoming unhealthy, etcd should automatically resolve this by electing a new leader and by failing over to a healthy etcd endpoint. As long as quorum is preserved, the etcd cluster will remain functional. In addition, Cilium performs a periodic background check to determine etcd health and potentially take action. The interval depends on the overall cluster size. The larger the cluster, the longer the `interval `\_: \* If no etcd endpoints can be reached, Cilium will report failure in ``cilium-dbg status``.
This will cause the liveness and readiness probe of Kubernetes to fail and Cilium will be restarted. \* A lock is acquired and released to test a write operation which requires quorum. If this operation fails, loss of quorum is reported. If quorum fails for
three or more intervals in a row, Cilium is declared unhealthy. \* The Cilium operator will constantly write to a heartbeat key (``cilium/.heartbeat``). All Cilium agents will watch for updates to this heartbeat key. This validates the ability for an agent to receive key updates from etcd. If the heartbeat key is not updated in time, the quorum check is declared to have failed and Cilium is declared unhealthy after 3 or more consecutive failures. Example of a status with a quorum failure which has not yet reached the threshold:: KVStore: Ok etcd: 1/1 connected, lease-ID=29c6732d5d580cb5, lock lease-ID=29c6732d5d580cb7, has-quorum=2m2.778966915s since last heartbeat update has been received, consecutive-errors=1: https://192.168.60.11:2379 - 3.4.9 (Leader) Example of a status with the number of quorum failures exceeding the threshold:: KVStore: Failure Err: quorum check failed 8 times in a row: 4m28.446600949s since last heartbeat update has been received .. \_troubleshooting\_clustermesh: .. include:: ./troubleshooting\_clustermesh.rst .. \_troubleshooting\_servicemesh: .. include:: troubleshooting\_servicemesh.rst Symptom Library =============== Node to node traffic is being dropped ------------------------------------- Symptom ~~~~~~~ Endpoint to endpoint communication on a single node succeeds but communication fails between endpoints across multiple nodes. Troubleshooting steps: ~~~~~~~~~~~~~~~~~~~~~~ #. Run ``cilium-health status --verbose`` on the node of the source and destination endpoint. It should describe the connectivity from that node to other nodes in the cluster, and to a simulated endpoint on each other node.
   Identify points in the cluster that cannot talk to each other. If the
   command does not describe the status of the other node, there may be an
   issue with the KV-Store.

#. Run ``cilium-dbg monitor`` on the node of the source and destination
   endpoint. Look for packet drops.

When running in :ref:`arch_overlay` mode:

#. If nodes are being populated correctly, run ``tcpdump -n -i cilium_vxlan``
   on each node to verify whether cross node traffic is being forwarded
   correctly between nodes. If packets are being dropped,

   * verify that the node IPs listed in ``cilium-dbg bpf ipcache list`` can
     reach each other.
   * verify that the firewall on each node allows UDP port 8472.

When running in :ref:`arch_direct_routing` mode:

#. Run ``ip route`` or check your cloud provider router and verify that you
   have routes installed to route the endpoint prefix between all nodes.

#. Verify that the firewall on each node permits routing of the endpoint
   IPs.

Useful Scripts
==============

.. _retrieve_cilium_pod:

Retrieve Cilium pod managing a particular pod
---------------------------------------------

Identifies the Cilium pod that is managing a particular pod in a namespace:

.. code-block:: shell-session

   k8s-get-cilium-pod.sh

**Example:**

.. code-block:: shell-session

   $ curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-get-cilium-pod.sh
   $ chmod +x k8s-get-cilium-pod.sh
   $ ./k8s-get-cilium-pod.sh luke-pod default
   cilium-zmjj9
   cilium-node-init-v7r9p
   cilium-operator-f576f7977-s5gpq

Execute a command in all Kubernetes Cilium pods
-----------------------------------------------

Run a command within all Cilium pods of a cluster:

.. code-block:: shell-session

   k8s-cilium-exec.sh

**Example:**
.. code-block:: shell-session

   $ curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-cilium-exec.sh
   $ chmod +x k8s-cilium-exec.sh
   $ ./k8s-cilium-exec.sh uptime
   10:15:16 up 6 days, 7:37, 0 users, load average: 0.00, 0.02, 0.00
   10:15:16 up 6 days, 7:32, 0 users, load average: 0.00, 0.03, 0.04
   10:15:16 up 6 days, 7:30, 0 users, load average: 0.75, 0.27, 0.15
   10:15:16 up 6 days, 7:28, 0 users, load average: 0.14, 0.04, 0.01

List unmanaged Kubernetes pods
------------------------------

Lists all Kubernetes pods in the cluster for which Cilium does *not*
provide networking. This includes pods running in host-networking mode and
pods that were started before Cilium was deployed.
.. code-block:: shell-session

   k8s-unmanaged.sh

**Example:**

.. code-block:: shell-session

   $ curl -sLO https://raw.githubusercontent.com/cilium/cilium/main/contrib/k8s/k8s-unmanaged.sh
   $ chmod +x k8s-unmanaged.sh
   $ ./k8s-unmanaged.sh
   kube-system/cilium-hqpk7
   kube-system/kube-addon-manager-minikube
   kube-system/kube-dns-54cccfbdf8-zmv2c
   kube-system/kubernetes-dashboard-77d8b98585-g52k5
   kube-system/storage-provisioner

Reporting a problem
===================

Before you report a problem, make sure to retrieve the necessary
information from your cluster before the failure state is lost.

.. _sysdump:

Automatic log & state collection
--------------------------------

.. include:: ../installation/cli-download.rst

Then, execute the ``cilium sysdump`` command to collect troubleshooting
information from your Kubernetes cluster:

.. code-block:: shell-session

   cilium sysdump

Note that by default ``cilium sysdump`` will attempt to collect as many
logs as possible, for all the nodes in the cluster. If your cluster size is
above 20 nodes, consider setting the following options to limit the size of
the sysdump. This is not required, but useful for those who have a
constraint on bandwidth or upload size.

* set the ``--node-list`` option to pick only a few nodes in case the
  cluster has many of them.
* set the ``--logs-since-time`` option to go back in time to when the
  issues started.
* set the ``--logs-limit-bytes`` option to limit the size of the log files
  (note: passed onto ``kubectl logs``; does not apply to the entire
  collection archive).
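The three options above can be combined into a single bounded invocation.
The sketch below only echoes the resulting command so it can be reviewed
before being run for real; the node names and the timestamp are
hypothetical placeholders that you would replace with values from your own
cluster and incident window.

```shell
#!/bin/sh
# Sketch of a size-bounded `cilium sysdump` invocation using the options
# listed above. NODES and SINCE are hypothetical placeholders.
NODES="node-1,node-2"            # hypothetical subset of nodes
SINCE="2024-01-01T00:00:00Z"     # hypothetical start of the incident
LIMIT=$((100 * 1024 * 1024))     # cap each log file at ~100 MiB

cmd="cilium sysdump --node-list $NODES --logs-since-time $SINCE --logs-limit-bytes $LIMIT"
# Echo instead of executing, so the command can be inspected first.
echo "$cmd"
```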
Ideally, a sysdump that has a full history of select nodes, rather than a
brief history of all the nodes, would be preferred (by using
``--node-list``). The second recommended way would be to use
``--logs-since-time`` if you are able to narrow down when the issues
started. Lastly, if the Cilium agent and operator logs are too large,
consider ``--logs-limit-bytes``.

Use ``--help`` to see more options:

.. code-block:: shell-session

   cilium sysdump --help

Single Node Bugtool
~~~~~~~~~~~~~~~~~~~

If you are not running Kubernetes, it is also possible to run the bug
collection tool manually with the scope of a single node.

The ``cilium-bugtool`` captures potentially useful information about your
environment for debugging. The tool is meant to be used for debugging a
single Cilium agent node. In the Kubernetes case, if you have multiple
Cilium pods, the tool can retrieve debugging information from all of them.
The tool works by archiving a collection of command output and files from
several places. By default, it writes to the ``tmp`` directory.

Note that the command needs to be run from inside the Cilium pod/container.

.. code-block:: shell-session

   cilium-bugtool

When running it with no option as shown above, it will try to copy various
files and execute some commands. If ``kubectl`` is detected, it will search
for Cilium pods. The default label is ``k8s-app=cilium``, but this and the
namespace can be changed via ``k8s-namespace`` and ``k8s-label``
respectively.

If you want to capture the archive from a Kubernetes pod, then the process
is a bit different:

.. code-block:: shell-session

   $ # First we need to get the Cilium pod
   $ kubectl get pods --namespace kube-system
   NAME                          READY   STATUS    RESTARTS   AGE
   cilium-kg8lv                  1/1     Running   0          13m
   kube-addon-manager-minikube   1/1     Running   0          1h
   kube-dns-6fc954457d-sf2nk     3/3     Running   0          1h
   kubernetes-dashboard-6xvc7    1/1     Running   0          1h

   $ # Run the bugtool from this pod
   $ kubectl -n kube-system exec cilium-kg8lv -- cilium-bugtool
   [...]
   $ # Copy the archive from the pod
   $ kubectl cp kube-system/cilium-kg8lv:/tmp/cilium-bugtool-20180411-155146.166+0000-UTC-266836983.tar /tmp/cilium-bugtool-20180411-155146.166+0000-UTC-266836983.tar
   [...]

.. note::

   Please check the archive for sensitive information and strip it away
   before sharing it with us.

Below is an approximate list of the kind of information in
the archive.

* Cilium status
* Cilium version
* Kernel configuration
* Resolve configuration
* Cilium endpoint state
* Cilium logs
* Docker logs
* ``dmesg``
* ``ethtool``
* ``ip a``
* ``ip link``
* ``ip r``
* ``iptables-save``
* ``kubectl -n kube-system get pods``
* ``kubectl get pods,svc`` for all namespaces
* ``uname``
* ``uptime``
* ``cilium-dbg bpf * list``
* ``cilium-dbg endpoint get`` for each endpoint
* ``cilium-dbg endpoint list``
* ``hostname``
* ``cilium-dbg policy get``
* ``cilium-dbg service list``

Debugging information
~~~~~~~~~~~~~~~~~~~~~

If you are not running Kubernetes, you can use the ``cilium-dbg debuginfo``
command to retrieve useful debugging information. If you are running
Kubernetes, this command is automatically run as part of the system dump.

``cilium-dbg debuginfo`` can print useful output from the Cilium API. The
output is in Markdown format so it can be used when reporting a bug on the
`issue tracker`_. Running without arguments will print to standard output,
but you can also redirect to a file like:

.. code-block:: shell-session

   cilium-dbg debuginfo -f debuginfo.md

.. note::

   Please check the debuginfo file for sensitive information and strip it
   away before sharing it with us.

Slack assistance
----------------

The `Cilium Slack`_ community is a helpful first point of assistance to get
help troubleshooting a problem or to discuss options on how to address a
problem. The community is open to anyone.
Report an issue via GitHub
--------------------------

If you believe you have found an issue in Cilium, please report a
`GitHub issue`_ and make sure to attach a system dump as described above to
ensure that developers have the best chance to reproduce the issue.

.. _NodeSelector: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
.. _RBAC: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
.. _CNI: https://github.com/containernetworking/cni
.. _Volumes: https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/
.. _Cilium Frequently Asked Questions (FAQ): https://github.com/cilium/cilium/issues?utf8=%E2%9C%93&q=label%3Akind%2Fquestion%20
.. _issue tracker: https://github.com/cilium/cilium/issues
.. _GitHub issue: `issue tracker`_
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

.. _admin_system_reqs:

*******************
System Requirements
*******************

Before installing Cilium, please ensure that your system meets the minimum
requirements below. Most modern Linux distributions already do.

Summary
=======

When running Cilium using the container image ``cilium/cilium``, the host
system must meet these requirements:

- Hosts with either AMD64 or AArch64 architecture
- `Linux kernel`_ >= 5.10 or equivalent (e.g., 4.18 on RHEL 8.10)

When running Cilium as a native process on your host (i.e. **not** running
the ``cilium/cilium`` container image) these additional requirements must
be met:

- `clang+LLVM`_ >= 18.1

.. _`clang+LLVM`: https://llvm.org

When running Cilium without Kubernetes these additional requirements must
be met:

- :ref:`req_kvstore` etcd >= 3.1.0

======================== =============================== ===================
Requirement              Minimum Version                 In cilium container
======================== =============================== ===================
`Linux kernel`_          >= 5.10 or >= 4.18 on RHEL 8.10 no
Key-Value store (etcd)   >= 3.1.0                        no
clang+LLVM               >= 18.1                         yes
======================== =============================== ===================

Architecture Support
====================

Cilium images are built for the following platforms:

- AMD64
- AArch64

Linux Distribution Compatibility & Considerations
=================================================

The following table lists Linux distributions that are known to work well
with Cilium. Some distributions require a few initial tweaks. Please make
sure to read each distribution's specific notes below before attempting to
run Cilium.
=========================== =======================
Distribution                Minimum Version
=========================== =======================
`Amazon Linux 2`_           all
`Bottlerocket OS`_          all
`CentOS`_                   >= 8.6
`Container-Optimized OS`_   >= 85
Debian_                     >= 10 Buster
`Fedora CoreOS`_            >= 31.20200108.3.0
Flatcar_                    all
LinuxKit_                   all
Opensuse_                   Tumbleweed, >=Leap 15.4
`RedHat Enterprise Linux`_  >= 8.6
`RedHat CoreOS`_            >= 4.12
`Talos Linux`_              >= 1.5.0
Ubuntu_                     >= 20.04
=========================== =======================

.. _Amazon Linux 2: https://docs.aws.amazon.com/AL2/latest/relnotes/relnotes-al2.html
.. _CentOS: https://centos.org
.. _Container-Optimized OS: https://cloud.google.com/container-optimized-os/docs
.. _Fedora CoreOS: https://fedoraproject.org/coreos/release-notes
.. _Debian: https://www.debian.org/releases/
.. _Flatcar: https://www.flatcar.org/releases
.. _LinuxKit: https://github.com/linuxkit/linuxkit/tree/master/kernel
.. _RedHat Enterprise Linux: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux
.. _RedHat CoreOS: https://access.redhat.com/articles/6907891
.. _Ubuntu: https://www.releases.ubuntu.com/
.. _Opensuse: https://en.opensuse.org/openSUSE:Roadmap
.. _Bottlerocket OS: https://github.com/bottlerocket-os/bottlerocket
.. _Talos Linux: https://www.talos.dev/

.. note::

   The above list is based on feedback by users. If you find an unlisted
   Linux distribution that works well, please let us know by opening a
   GitHub issue or by creating a pull request that updates this guide.

Flatcar on AWS EKS in ENI mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Flatcar is known to manipulate network interfaces created and managed by
Cilium. When running the official Flatcar image for AWS EKS nodes in ENI
mode, this may cause connectivity issues and potentially prevent the Cilium
agent from booting. To avoid this, disable DHCP on the ENI interfaces and
mark them as unmanaged by adding
.. code-block:: text

   [Match]
   Name=eth[1-9]*

   [Network]
   DHCP=no

   [Link]
   Unmanaged=yes

to ``/etc/systemd/network/01-no-dhcp.network`` and then:

.. code-block:: shell-session

   systemctl daemon-reload
   systemctl restart systemd-networkd

Ubuntu 22.04 on Raspberry Pi
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Before running Cilium on Ubuntu 22.04 on a Raspberry Pi, please make sure
to install the following package:

.. code-block:: shell-session

   sudo apt install linux-modules-extra-raspi

.. _admin_kernel_version:

Linux Kernel
============

Base Requirements
~~~~~~~~~~~~~~~~~

Cilium leverages and builds on the kernel eBPF functionality as well as
various subsystems which integrate with eBPF. Therefore, host systems are
required to run a recent Linux kernel to run a Cilium agent. More recent
kernels may provide additional eBPF functionality that Cilium will
automatically detect and use on agent start.

For this version of Cilium, it is recommended to use kernel 5.10 or later
(or equivalent such as 4.18 on RHEL 8.10). For a list of features that
require newer kernels, see :ref:`advanced_features`.

In order for the eBPF feature to be enabled properly, the following kernel
configuration options must be enabled. This is typically the case with
distribution kernels. When an option can be built as a module or statically
linked, either choice is valid.

::

    CONFIG_BPF=y
    CONFIG_BPF_SYSCALL=y
    CONFIG_NET_CLS_BPF=y
    CONFIG_BPF_JIT=y
    CONFIG_NET_CLS_ACT=y
    CONFIG_NET_SCH_INGRESS=y
    CONFIG_CRYPTO_SHA1=y
    CONFIG_CRYPTO_USER_API_HASH=y
    CONFIG_CGROUPS=y
    CONFIG_CGROUP_BPF=y
    CONFIG_PERF_EVENTS=y
    CONFIG_SCHEDSTATS=y

Requirements for Iptables-based Masquerading
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you are not using BPF for masquerading (``enable-bpf-masquerade=false``,
the default value), then you will need the following kernel configuration
options.

::

    CONFIG_NETFILTER_XT_SET=m
    CONFIG_IP_SET=m
    CONFIG_IP_SET_HASH_IP=m
    CONFIG_NETFILTER_XT_MATCH_COMMENT=m

Requirements for Tunneling and Routing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Cilium uses tunneling protocols like VXLAN by default for pod-to-pod
communication across nodes, as well as policy routing for various traffic
management functionality. The following kernel configuration options are
required for proper operation:

::

    CONFIG_VXLAN=y
    CONFIG_GENEVE=y
    CONFIG_FIB_RULES=y

.. note::

   On some embedded or custom Linux systems, especially when
   cross-compiling for ARM, enabling ``CONFIG_FIB_RULES=y`` directly in the
   kernel ``.config`` is not sufficient, as it depends on other
   routing-related kernel options being enabled. The recommended approach
   is to use:

   ::

       scripts/config --enable CONFIG_FIB_RULES
       make olddefconfig

   The kernel build system uses ``Kconfig`` logic to validate and manage
   dependencies, so direct edits to ``.config`` may be ignored or silently
   overridden.
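A quick way to verify the options above is to grep the kernel config for
each required symbol. The following sketch embeds a small sample config so
it can be run anywhere; on a real host, point ``CONFIG_FILE`` at
``/boot/config-$(uname -r)`` (or the decompressed ``/proc/config.gz``)
instead, and extend the option list to the full set above.

```shell
#!/bin/sh
# Sketch: check a kernel config file for required eBPF options.
# The embedded sample config is illustrative only.
CONFIG_FILE=$(mktemp)
cat > "$CONFIG_FILE" <<'EOF'
CONFIG_BPF=y
CONFIG_BPF_SYSCALL=y
CONFIG_NET_CLS_BPF=m
EOF

missing=0
for opt in CONFIG_BPF CONFIG_BPF_SYSCALL CONFIG_NET_CLS_BPF CONFIG_CGROUP_BPF; do
    # Accept either built-in (=y) or module (=m)
    if ! grep -Eq "^${opt}=(y|m)$" "$CONFIG_FILE"; then
        echo "missing: $opt"
        missing=$((missing + 1))
    fi
done
echo "$missing option(s) missing"
rm -f "$CONFIG_FILE"
```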
Requirements for L7 and FQDN Policies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

L7 proxy redirection currently uses ``TPROXY`` iptables actions as well as
``socket`` matches. For L7 redirection to work as intended, the kernel
configuration must include the following modules:

::

    CONFIG_NETFILTER_XT_TARGET_TPROXY=m
    CONFIG_NETFILTER_XT_TARGET_MARK=m
    CONFIG_NETFILTER_XT_TARGET_CT=m
    CONFIG_NETFILTER_XT_MATCH_MARK=m
    CONFIG_NETFILTER_XT_MATCH_SOCKET=m

When the ``xt_socket`` kernel module is missing, the forwarding of
redirected L7 traffic does not work in non-tunneled datapath modes. Since
some notable kernels (e.g., COS) are shipping without the ``xt_socket``
module, Cilium implements a fallback compatibility mode to allow L7
policies and visibility to be used with those kernels. Currently this
fallback disables the ``ip_early_demux`` kernel feature in non-tunneled
datapath modes, which may decrease system networking performance. This
guarantees HTTP and Kafka redirection works as intended. However, if HTTP
or Kafka enforcement policies are never used, this behavior can be turned
off by adding the following to the Helm configuration command line:

.. cilium-helm-install::
   :set: enableXTSocketFallback=false
   :extra-args: ...

.. _features_kernel_matrix:

Requirements for IPsec
~~~~~~~~~~~~~~~~~~~~~~

The :ref:`encryption_ipsec` feature requires a number of kernel
configuration options, most of which are needed to enable the actual
encryption. Note that the specific options required depend on the
algorithm. The list below corresponds to the requirements for GCM-128-AES.
::

    CONFIG_XFRM=y
    CONFIG_XFRM_OFFLOAD=y
    CONFIG_XFRM_STATISTICS=y
    CONFIG_XFRM_ALGO=m
    CONFIG_XFRM_USER=m
    CONFIG_INET{,6}_ESP=m
    CONFIG_INET{,6}_IPCOMP=m
    CONFIG_INET{,6}_XFRM_TUNNEL=m
    CONFIG_INET{,6}_TUNNEL=m
    CONFIG_INET_XFRM_MODE_TUNNEL=m
    CONFIG_CRYPTO_AEAD=m
    CONFIG_CRYPTO_AEAD2=m
    CONFIG_CRYPTO_GCM=m
    CONFIG_CRYPTO_SEQIV=m
    CONFIG_CRYPTO_CBC=m
    CONFIG_CRYPTO_HMAC=m
    CONFIG_CRYPTO_SHA256=m
    CONFIG_CRYPTO_AES=m

Requirements for the Bandwidth Manager
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The :ref:`bandwidth-manager` requires the following kernel configuration
option to change the packet scheduling algorithm.

::

    CONFIG_NET_SCH_FQ=m

Requirements for Netkit Device Mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:ref:`netkit` requires the following kernel configuration option to create
netkit devices.

::

    CONFIG_NETKIT=y

.. _advanced_features:

Required Kernel Versions for Advanced Features
==============================================

Additional kernel features continue to progress in the Linux community.
Some of Cilium's features are dependent on newer kernel versions and are
thus enabled by upgrading to more recent kernel versions as detailed below.

====================================================== ===============================
Cilium Feature                                         Minimum Kernel Version
====================================================== ===============================
:ref:`enable_multicast` (AMD64)                        >= 5.10
IPv6 BIG TCP support                                   >= 5.19
:ref:`enable_multicast` (AArch64)                      >= 6.0
IPv4 BIG TCP support                                   >= 6.3
====================================================== ===============================

.. _req_kvstore:

Key-Value store
===============

Cilium optionally uses a distributed Key-Value store to manage, synchronize
and distribute security identities across all cluster nodes. The following
Key-Value stores are currently supported:

- etcd >= 3.1.0

Cilium can be used without a Key-Value store when CRD-based state
management is used with Kubernetes.
This is
the default for new Cilium installations. Larger clusters will perform
better with Key-Value store backed identity management instead; see
:ref:`k8s_quick_install` for more details.

See :ref:`install_kvstore` for details on how to configure the
``cilium-agent`` to use a Key-Value store.

clang+LLVM
==========

.. note::

   This requirement is only needed if you run ``cilium-agent`` natively.
   If you are using the Cilium container image ``cilium/cilium``,
   clang+LLVM is included in the container image.

LLVM is the compiler suite that Cilium uses to generate eBPF bytecode
programs to be loaded into the Linux kernel. The minimum supported version
of LLVM available to ``cilium-agent`` should be >= 18.1. The version of
clang installed must be compiled with the eBPF backend enabled.

See https://releases.llvm.org/ for information on how to download and
install LLVM.

.. _firewall_requirements:

Firewall Rules
==============

If you are running Cilium in an environment that requires firewall rules to
enable connectivity, you will have to add the following rules to ensure
Cilium works properly.

It is recommended, but optional, that all nodes running Cilium in a given
cluster are able to ping each other so ``cilium-health`` can report and
monitor connectivity among nodes. This requires ICMP Type 0/8, Code 0 open
among all nodes. TCP 4240 should also be open among all nodes for
``cilium-health`` monitoring. Note that it is also an option to only use
one of these two methods to enable health monitoring.
If the firewall does not permit either of these methods, Cilium will still
operate fine but will not be able to provide health information.

For IPsec enabled Cilium deployments, you need to ensure that the firewall
allows ESP traffic through. For example, AWS Security Groups don't allow
ESP traffic by default.

If you are using WireGuard, you must allow UDP port 51871.

If you are using VXLAN overlay network mode, Cilium uses Linux's default
VXLAN port 8472 over UDP, unless Linux has been configured otherwise. In
this case, UDP 8472 must be open among all nodes to enable VXLAN overlay
mode. The same applies to Geneve overlay network mode, except the port is
UDP 6081.

If you are running in direct routing mode, your network must allow routing
of pod IPs.

As an example, if you are running on AWS with VXLAN overlay networking,
here is a minimum set of AWS Security Group (SG) rules. It assumes a
separation between the SG on the master nodes, ``master-sg``, and the
worker nodes, ``worker-sg``. It also assumes ``etcd`` is running on the
master nodes.
Master Nodes (``master-sg``) Rules:

======================== =============== ===================== ===============
Port Range / Protocol    Ingress/Egress  Source/Destination    Description
======================== =============== ===================== ===============
2379-2380/tcp            ingress         ``worker-sg``         etcd access
8472/udp                 ingress         ``master-sg`` (self)  VXLAN overlay
8472/udp                 ingress         ``worker-sg``         VXLAN overlay
4240/tcp                 ingress         ``master-sg`` (self)  health checks
4240/tcp                 ingress         ``worker-sg``         health checks
ICMP 8/0                 ingress         ``master-sg`` (self)  health checks
ICMP 8/0                 ingress         ``worker-sg``         health checks
8472/udp                 egress          ``master-sg`` (self)  VXLAN overlay
8472/udp                 egress          ``worker-sg``         VXLAN overlay
4240/tcp                 egress          ``master-sg`` (self)  health checks
4240/tcp                 egress          ``worker-sg``         health checks
ICMP 8/0                 egress          ``master-sg`` (self)  health checks
ICMP 8/0                 egress          ``worker-sg``         health checks
======================== =============== ===================== ===============

Worker Nodes (``worker-sg``):

======================== =============== ===================== ===============
Port Range / Protocol    Ingress/Egress  Source/Destination    Description
======================== =============== ===================== ===============
8472/udp                 ingress         ``master-sg``         VXLAN overlay
8472/udp                 ingress         ``worker-sg`` (self)  VXLAN overlay
4240/tcp                 ingress         ``master-sg``         health checks
4240/tcp                 ingress         ``worker-sg`` (self)  health checks
ICMP 8/0                 ingress         ``master-sg``         health checks
ICMP 8/0                 ingress         ``worker-sg`` (self)  health checks
8472/udp                 egress          ``master-sg``         VXLAN overlay
8472/udp                 egress          ``worker-sg`` (self)  VXLAN overlay
4240/tcp                 egress          ``master-sg``         health checks
4240/tcp                 egress          ``worker-sg`` (self)  health checks
ICMP 8/0                 egress          ``master-sg``         health checks
ICMP 8/0                 egress          ``worker-sg`` (self)  health checks
2379-2380/tcp            egress          ``master-sg``         etcd access
======================== =============== ===================== ===============

.. note::

   If you use a shared SG for the masters and workers, you can condense
   these rules into ingress/egress to self. If you are using Direct Routing
   mode, you can condense all rules into ingress/egress ANY port/protocol
   to/from self.
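To confirm which UDP port the VXLAN device is actually using before writing
firewall rules, the destination port can be read from ``ip -d link show``
output. The sketch below embeds a sample line mimicking that output so it
runs anywhere; on a node you would capture the real
``ip -d link show cilium_vxlan`` output instead.

```shell
#!/bin/sh
# Sketch: extract the VXLAN destination port from `ip -d link show` style
# output. The sample line below is illustrative, not captured from a node.
detail='vxlan external srcport 0 0 dstport 8472 nolearning ttl auto'

port=$(printf '%s\n' "$detail" | sed -n 's/.*dstport \([0-9]*\).*/\1/p')
echo "VXLAN uses UDP port $port; this port must be open between all nodes"
```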
The following ports should also be available on each node:

======================== ==================================================================
Port Range / Protocol    Description
======================== ==================================================================
4240/tcp                 cluster health checks (``cilium-health``)
4244/tcp                 Hubble server
4245/tcp                 Hubble Relay
4250/tcp                 Mutual Authentication port
4251/tcp                 Spire Agent health check port (listening on 127.0.0.1 or ::1)
6060/tcp                 cilium-agent pprof server (listening on 127.0.0.1)
6061/tcp                 cilium-operator pprof server (listening on 127.0.0.1)
6062/tcp                 Hubble Relay pprof server (listening on 127.0.0.1)
9878/tcp                 cilium-envoy health listener (listening on 127.0.0.1)
9879/tcp                 cilium-agent health status API (listening on 127.0.0.1 and/or ::1)
9890/tcp                 cilium-agent gops server (listening on 127.0.0.1)
9891/tcp                 operator gops server (listening on 127.0.0.1)
9893/tcp                 Hubble Relay gops server (listening on 127.0.0.1)
9901/tcp                 cilium-envoy Admin API (listening on 127.0.0.1)
9962/tcp                 cilium-agent Prometheus metrics
9963/tcp                 cilium-operator Prometheus metrics
9964/tcp                 cilium-envoy Prometheus metrics
51871/udp                WireGuard encryption tunnel endpoint
======================== ==================================================================

.. _admin_mount_bpffs:

Mounted eBPF filesystem
=======================

.. note::

   Some distributions mount the bpf filesystem automatically. Check if the
   bpf filesystem is mounted by running the following command:

   .. code-block:: shell-session

      # mount | grep /sys/fs/bpf
      $ # if present should output, e.g. "none on /sys/fs/bpf type bpf"...

If the eBPF filesystem is not mounted in the host filesystem, Cilium will
automatically mount the filesystem. Mounting this BPF filesystem allows the
``cilium-agent`` to persist eBPF resources across restarts of the agent so
that the datapath can continue to operate while the agent is subsequently
restarted or upgraded.
Optionally, it is also possible to mount the eBPF filesystem before Cilium
is deployed in the cluster. In that case, the following command must be run
in the host mount namespace, and only once during the boot process of the
machine:

.. code-block:: shell-session

   # mount bpffs /sys/fs/bpf -t bpf

A portable way to achieve this with persistence is to add the following
line to ``/etc/fstab`` and then run ``mount /sys/fs/bpf``. This will cause
the filesystem to be automatically mounted when the node boots.

::

    bpffs    /sys/fs/bpf    bpf    defaults    0    0

If you are using systemd to manage the kubelet, see the section
:ref:`bpffs_systemd`.

Routing Tables
==============

When running in :ref:`ipam_eni` IPAM mode, Cilium will install per-ENI
routing tables for each ENI that is used by Cilium for pod IP allocation.
These routing tables are added to the host network namespace and must not
be otherwise used by the system. The index of those per-ENI routing tables
is computed as ``10 + ``. The base offset of 10 is chosen as it is highly
unlikely to collide with the main routing table, which is between 253-255.

Cilium uses the following routing table IDs:

================= =========================================================
Route table ID    Purpose
================= =========================================================
200               IPsec routing rules
202               VTEP routing rules
2004              Routing rules to the proxy
2005              Routing rules from the proxy
================= =========================================================

Cilium manages
these routing table IDs even if none of the related features are in use.

Privileges
==========

The following privileges are required to run Cilium. When running the
standard Kubernetes :term:`DaemonSet`, the privileges are automatically
granted to Cilium.

* Cilium interacts with the Linux kernel to install eBPF programs which
  will then perform networking tasks and implement security rules. In
  order to install eBPF programs system-wide, ``CAP_SYS_ADMIN`` privileges
  are required. These privileges must be granted to ``cilium-agent``. The
  quickest way to meet the requirement is to run ``cilium-agent`` as root
  and/or as a privileged container.

* Cilium requires access to the host networking namespace. For this
  purpose, the Cilium pod is scheduled to run in the host networking
  namespace directly.
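As a rough illustration, the two requirements above map onto two fields of
a Kubernetes pod spec. The fragment below is a minimal sketch, not the
actual Cilium chart output; the names and image are placeholders, and the
real chart grants privileges in a more granular way.

```yaml
# Minimal sketch of a DaemonSet granting host network access and
# CAP_SYS_ADMIN (via privileged mode). Names and image are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cilium-agent-sketch
spec:
  selector:
    matchLabels: {k8s-app: cilium-sketch}
  template:
    metadata:
      labels: {k8s-app: cilium-sketch}
    spec:
      hostNetwork: true              # run in the host networking namespace
      containers:
        - name: agent
          image: example.invalid/cilium:placeholder
          securityContext:
            privileged: true         # simplest way to obtain CAP_SYS_ADMIN
```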
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation. Please
    use the official rendered version released here:
    https://docs.cilium.io

.. _performance_scalability:

Performance & Scalability
=========================

Welcome to the performance and scalability guides. This section contains
best-practices to tune various performance and scalability aspects. It also
contains official benchmarks as measured by the development team in a
standardized and repeatable bare metal environment.

.. toctree::
   :maxdepth: 1
   :glob:

   tuning
   benchmark
   scalability/index
.. _performance_report:

*************************
CNI Performance Benchmark
*************************

Introduction
============

This chapter contains performance benchmark numbers for a variety of
scenarios. All tests are performed between containers running on two different
bare metal nodes connected back-to-back by a 100Gbit/s network interface. Upon
popular request we have included performance numbers for Calico for
comparison.

.. admonition:: Video
   :class: attention

   You can also watch Thomas Graf, Co-founder of Cilium, dive deep into this
   chapter in `eCHO episode 5: Network performance benchmarking `__.

.. tip::

   To achieve these performance results, follow the :ref:`performance_tuning`.
   For more information on the used system and configuration, see
   :ref:`test_hardware`. For more details on all tested configurations, see
   :ref:`test_configurations`.

The following metrics are collected and reported. Each metric represents a
different traffic pattern that can be required for workloads. See the specific
sections for an explanation of what type of workloads are represented by each
benchmark.

Throughput
   Maximum transfer rate via a single TCP connection and the total transfer
   rate of 32 accumulated connections.

Request/Response Rate
   The number of request/response messages per second that can be transmitted
   over a single TCP connection and over 32 parallel TCP connections.

Connections Rate
   The number of connections per second that can be established in sequence
   with a single request/response payload message transmitted for each new
   connection. A single process and 32 parallel processes are tested.

For the various benchmarks `netperf`_ has been used to generate the workloads
and to collect the metrics.
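The netperf test selectors behind these three metric classes look roughly as
follows. This is a sketch, not the exact invocation used for the report:
``SERVER_IP`` is a placeholder for the remote node, and the commands are
printed rather than executed so the snippet is self-contained (drop the
``echo`` to run them against a real ``netserver``):

```shell
# Illustrative netperf test selectors for the three metric classes.
SERVER_IP=${SERVER_IP:-10.0.0.2}
echo "netperf -H $SERVER_IP -t TCP_STREAM"   # throughput, single stream
echo "netperf -H $SERVER_IP -t TCP_RR"       # request/response rate
echo "netperf -H $SERVER_IP -t TCP_CRR"      # connection rate
```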
For spawning parallel netperf sessions, `super_netperf `_ has been used. Both
netperf and super_netperf are frequently used and well-established tools for
benchmarking in the Linux kernel networking community.

.. _benchmark_throughput:

TCP Throughput (TCP_STREAM)
===========================

Throughput testing (TCP_STREAM) is useful to understand the maximum throughput
that can be achieved with a particular configuration. All or most
configurations can achieve line-rate or close to line-rate if enough CPU
resources are thrown at the load. It is therefore important to understand the
amount of CPU resources required to achieve a certain throughput, as these CPU
resources will no longer be available to workloads running on the machine.

This test represents bulk data transfer workloads, e.g. streaming services or
services performing data upload/download.

Single-Stream
-------------

In this test, a single TCP stream is opened between the containers and maximum
throughput is achieved:

.. image:: images/bench_tcp_stream_1_stream.png

We can see that eBPF-based solutions can outperform even the node-to-node
baseline on modern kernels despite performing additional work (forwarding into
the network namespace of the container, policy enforcement, ...). This is
because eBPF is capable of bypassing the iptables layer of the node, which is
still traversed for the node-to-node baseline.

The following graph shows the total CPU consumption across the entire system
while running the benchmark, normalized to a 50Gbit throughput:

.. image:: images/bench_tcp_stream_1_stream_cpu.png

.. tip::

   **Kernel wisdom:** TCP flow performance is limited by the receiver, since
   the sender can use TSO super-packets. This can be observed in the increased
   CPU spending on the server side above.

Multi-Stream
------------

In this test, 32 processes are opening 32 parallel TCP connections. Each
process is attempting to reach maximum throughput and the total is reported:

.. image:: images/bench_tcp_stream_32_streams.png

Given multiple processes are being used, all test configurations can achieve
transfer rates close to the line-rate of the network interface. The main
difference is the CPU resources required to achieve it:

.. image:: images/bench_tcp_stream_32_streams_cpu.png

.. _request_response:

Request/Response Rate (TCP_RR)
==============================

The request/response rate (TCP_RR) primarily measures the latency and
efficiency to handle
round-trip forwarding of an individual network packet. This benchmark will
lead to the most packets per second possible on the wire and stresses the
per-packet processing cost. This is the opposite of the throughput test, which
maximizes the size of each network packet. A configuration that is doing well
in this test (delivering high requests-per-second rates) will also deliver
better (lower) network latencies.

This test represents services which maintain persistent connections and
exchange request/response type interactions with other services. This is
common for services using REST or gRPC APIs.

1 Process
---------

In this test, a single TCP connection is opened between the containers and a
single byte is sent back and forth between the containers. For each
round-trip, one request is counted:

.. image:: images/bench_tcp_rr_1_process.png

eBPF on modern kernels can achieve almost the same request/response rate as
the baseline while only consuming marginally more CPU resources:

.. image:: images/bench_tcp_rr_1_process_cpu.png

32 Processes
------------

In this test, 32 processes are opening 32 parallel TCP connections. Each
process is performing single-byte round-trips. The total number of requests
per second is reported:

.. image:: images/bench_tcp_rr_32_processes.png

Cilium can achieve close to 1M requests/s in this test while consuming about
30% of the system resources on both the sender and receiver:

.. image:: images/bench_tcp_rr_32_processes_cpu.png

Connection Rate (TCP_CRR)
=========================

The connection rate (TCP_CRR) test measures the efficiency in handling new
connections. It is similar to the request/response rate test but will create a
new TCP connection for each round-trip. This measures the cost of establishing
a connection, transmitting a byte in both directions, and closing the
connection. This is more expensive than the TCP_RR test and puts stress on the
cost related to handling new connections.

This test represents a workload that receives or initiates a lot of TCP
connections. An example where this is the case is a publicly exposed service
that receives connections from many clients. Good examples of this are L4
proxies or services opening many connections to external endpoints. This
benchmark puts the most stress on the system with the least work offloaded to
hardware, so we can expect to see the biggest difference between tested
configurations. A configuration that does well in this test (delivering high
connection rates) will handle situations with overwhelming connection rates
much better, leaving more CPU resources available to workloads on the system.

1 Process
---------

In this test, a single process opens as many TCP connections as possible in
sequence:

.. image:: images/bench_tcp_crr_1_process.png

The following graph shows the total CPU consumption across the entire system
while running the benchmark:

.. image:: images/bench_tcp_crr_1_process_cpu.png

.. tip::

   **Kernel wisdom:** The CPU resources graph makes it obvious that some
   additional kernel cost is paid at the sender as soon as network namespace
   isolation is performed, as all container workload benchmarks show signs of
   this cost. We will investigate and optimize this aspect in a future
   release.

32 Processes
------------

In this test, 32 processes running in parallel open as many TCP connections in
sequence as possible. This is by far the most stressful test for the system.

.. image:: images/bench_tcp_crr_32_processes.png

This benchmark outlines major differences between the tested configurations.
In particular, it illustrates the overall cost of iptables, which is optimized
to perform most of the required work per connection and then caches the
result. This leads to a worst-case performance scenario when a lot of new
connections are expected.

.. note::

   We have not been able to measure stable results for the Calico eBPF
   datapath. We are not sure why; the network packet flow was never steady. We
   have thus not included the result. We invite the Calico team to work with
   us to investigate this and then re-test.

The following graph shows the total CPU consumption across the entire system
while running the benchmark:

.. image:: images/bench_tcp_crr_32_processes_cpu.png

Encryption (WireGuard/IPsec)
============================

Cilium supports encryption via WireGuard® and IPsec. This first section will
look at WireGuard and compare it against using Calico for WireGuard
encryption. If you are interested in IPsec performance and how it compares to
WireGuard, please see :ref:`performance_wireguard_ipsec`.

WireGuard Throughput
--------------------

Looking at TCP throughput first, the following graph shows results for both
1500 bytes MTU and 9000 bytes MTU:

.. image:: images/bench_wireguard_tcp_1_stream.png

.. note::

   The Cilium eBPF kube-proxy replacement combined with WireGuard is currently
   slightly slower than Cilium eBPF + kube-proxy. We have identified the
   problem and will be resolving this deficit in one of the next releases.

The following graph shows the total CPU consumption across the entire system
while running the WireGuard encryption benchmark:

.. image:: images/bench_wireguard_tcp_1_stream_cpu.png

WireGuard Request/Response
--------------------------

The next benchmark measures the request/response rate while encrypting with
WireGuard. See :ref:`request_response` for details on what this test actually
entails.

.. image:: images/bench_wireguard_rr_1_process.png

All tested configurations performed more or less the same. The following graph
shows the total CPU consumption across the entire system while running the
WireGuard encryption benchmark:

.. image:: images/bench_wireguard_rr_1_process_cpu.png

.. _performance_wireguard_ipsec:

WireGuard vs IPsec
------------------

In this section, we compare Cilium encryption using WireGuard and IPsec.
WireGuard is able to achieve a higher maximum throughput:

.. image:: images/bench_wireguard_ipsec_tcp_stream_1_stream.png

However, looking at the CPU resources required to achieve 10Gbit/s of
throughput, WireGuard is less efficient at achieving the same throughput:

.. image:: images/bench_wireguard_ipsec_tcp_stream_1_stream_cpu.png

.. tip::

   IPsec performing better than WireGuard in this test is unexpected in some
   ways. A possible explanation is that the IPsec encryption is making use of
   AES-NI instructions whereas the WireGuard implementation is not. This would
   typically lead to IPsec being more efficient when AES-NI offload is
   available and WireGuard being more efficient if the instruction set is not
   available.

Looking at the request/response rate, IPsec is outperforming WireGuard in our
tests. Unlike for the throughput tests, the MTU does not have any effect as
the packet sizes remain small:

.. image:: images/bench_wireguard_ipsec_tcp_rr_1_process.png

.. image:: images/bench_wireguard_ipsec_tcp_rr_1_process_cpu.png

Test Environment
================

.. _test_hardware:

Test Hardware
-------------

All tests are performed using regular off-the-shelf hardware.

============ ================================================================================
Item         Description
============ ================================================================================
CPU          `AMD Ryzen 9 3950x `_, AM4 platform, 3.5GHz, 16 cores / 32 threads
Mainboard    `x570 Aorus Master `_, PCIe 4.0 x16 support
Memory       `HyperX Fury DDR4-3200 `_ 128GB, XMP clocked to 3.2GHz
Network Card `Intel E810-CQDA2 `_, dual port, 100Gbit/s per port, PCIe 4.0 x16
Kernel       Linux 5.10 LTS, see also :ref:`performance_tuning`
============ ================================================================================

.. _test_configurations:

Test Configurations
-------------------

All tests are performed using standardized configuration. Upon popular
request, we have included measurements for Calico for direct comparison.

============================ ===================================================================
Configuration Name           Description
============================ ===================================================================
Baseline (Node to Node)      No Kubernetes
Cilium                       Cilium 1.9.6, eBPF host-routing, kube-proxy replacement, No CT
Cilium (legacy host-routing) Cilium 1.9.6, legacy host-routing, kube-proxy replacement, No CT
Calico                       Calico 3.17.3, kube-proxy
Calico eBPF                  Calico 3.17.3, eBPF datapath, No CT
============================ ===================================================================

How to reproduce
================

To ease reproducibility, this report is paired with a set of scripts that can
be found in `cilium/cilium-perf-networking `_. All scripts in this document
refer to this repository. Specifically, we use `Terraform `_ and `Ansible `_
to set up the environment and execute benchmarks. We use `Packet `_ bare metal
servers as our hardware platform, but the guide is structured so that it can
be easily adapted to other environments.

Download the Cilium performance evaluation scripts:

.. code-block:: shell-session

   $ git clone https://github.com/cilium/cilium-perf-networking.git
   $ cd cilium-perf-networking

Packet Servers
--------------

To evaluate both :ref:`arch_overlay` and :ref:`native_routing`, we configure
the Packet machines to use a `"Mixed/Hybrid" `_ network mode, where the
secondary interfaces of the machines share a flat L2 network. While this can
be done on the Packet web UI, we include appropriate Terraform (version 0.13)
files to automate this process.

.. code-block:: shell-session

   $ cd terraform
   $ terraform init
   $ terraform apply -var 'packet_token=API_TOKEN' -var 'packet_project_id=PROJECT_ID'
   $ terraform output ansible_inventory | tee ../packet-hosts.ini
   $ cd ../

The above will provision two servers named ``knb-0`` and ``knb-1`` of type
``c3.small.x86`` and configure them to use a "Mixed/Hybrid" network mode under
a common VLAN named ``knb``. The machines will be provisioned with an
``ubuntu_20_04`` OS. We also create a ``packet-hosts.ini`` file to use as an
inventory file for Ansible. Verify that the servers are successfully
provisioned by executing an ad-hoc ``uptime`` command on the servers.

.. code-block:: shell-session

   $ cat packet-hosts.ini
   [master]
   136.144.55.223 ansible_python_interpreter=python3 ansible_user=root prv_ip=10.67.33.131 node_ip=10.33.33.10 master=knb-0
   [nodes]
   136.144.55.225 ansible_python_interpreter=python3 ansible_user=root prv_ip=10.67.33.133 node_ip=10.33.33.11
   $ ansible -i packet-hosts.ini all -m shell -a 'uptime'
   136.144.55.223 | CHANGED | rc=0 >>
    09:31:43 up 33 min,  1 user,  load average: 0.00, 0.00, 0.00
   136.144.55.225 | CHANGED | rc=0 >>
    09:31:44 up 33 min,  1 user,  load average: 0.00, 0.00, 0.00

Next, we use the ``packet-disbond.yaml`` playbook to configure the network
interfaces of the machines. This will destroy the ``bond0`` interface and
configure the first physical interface with the public and private IPs
(``prv_ip``) and the second with the node IP (``node_ip``) that will be used
for our evaluations (see `Packet documentation `_ and our scripts for more
info).

.. code-block:: shell-session

   $ ansible-playbook -i packet-hosts.ini playbooks/packet-disbond.yaml

.. note::

   For hardware platforms other than Packet, users need to provide their own
   inventory file (``packet-hosts.ini``) and follow the subsequent steps.

Install Required Software
-------------------------

Install netperf (used for raw host-to-host measurements):

.. code-block:: shell-session

   $ ansible-playbook -i packet-hosts.ini playbooks/install-misc.yaml

Install ``kubeadm`` and its dependencies:

.. code-block:: shell-session

   $ ansible-playbook -i packet-hosts.ini playbooks/install-kubeadm.yaml

We use `kubenetbench `_ to execute the `netperf`_ benchmark in a Kubernetes
environment. kubenetbench is a Kubernetes benchmarking project that is
agnostic to the CNI or networking plugin that the cluster is deployed with. In
this report we focus on pod-to-pod communication between different nodes. To
install kubenetbench:

.. code-block:: shell-session

   $ ansible-playbook -i packet-hosts.ini playbooks/install-kubenetbench.yaml

.. _netperf: https://github.com/HewlettPackard/netperf

Running Benchmarks
------------------

.. _tunneling_results:

Tunneling
~~~~~~~~~

Configure Cilium in tunneling (:ref:`arch_overlay`) mode:

.. code-block:: shell-session

   $ ansible-playbook -e mode=tunneling -i packet-hosts.ini playbooks/install-k8s-cilium.yaml
   $ ansible-playbook -e conf=vxlan -i packet-hosts.ini playbooks/run-kubenetbench.yaml

The first command configures Cilium to use tunneling (``-e mode=tunneling``),
which by default uses the VXLAN overlay. The second executes our benchmark
suite (the ``conf`` variable is used to
identify this benchmark run). Once execution is done, a results directory will
be copied back in a folder named after the ``conf`` variable (in this case,
``vxlan``). This directory includes all the benchmark results as generated by
kubenetbench, including netperf output and system information.

.. _native_routing_results:

Native Routing
~~~~~~~~~~~~~~

We repeat the same operation as before, but configure Cilium to use
:ref:`native_routing` (``-e mode=directrouting``).

.. code-block:: shell-session

   $ ansible-playbook -e mode=directrouting -i packet-hosts.ini playbooks/install-k8s-cilium.yaml
   $ ansible-playbook -e conf=routing -i packet-hosts.ini playbooks/run-kubenetbench.yaml

.. _encryption_results:

Encryption
~~~~~~~~~~

To use encryption with native routing:

.. code-block:: shell-session

   $ ansible-playbook -e kubeproxyfree=disabled -e mode=directrouting -e encryption=yes -i packet-hosts.ini playbooks/install-k8s-cilium.yaml
   $ ansible-playbook -e conf=encryption-routing -i packet-hosts.ini playbooks/run-kubenetbench.yaml

Baseline
~~~~~~~~

To have a point of reference for our results, we execute the same benchmarks
between hosts without Kubernetes running. This provides an effective upper
limit on the performance achieved by Cilium.

.. code-block:: shell-session

   $ ansible-playbook -i packet-hosts.ini playbooks/reset-kubeadm.yaml
   $ ansible-playbook -i packet-hosts.ini playbooks/run-rawnetperf.yaml

The first command removes Kubernetes and reboots the machines to ensure that
there are no residues in the systems, whereas the second executes the same set
of benchmarks between hosts. An alternative would be to run the raw benchmark
before setting up Cilium, in which case one would only need the second
command.

Cleanup
-------

When done with benchmarking, the allocated Packet resources can be released
with:

.. code-block:: shell-session

   $ cd terraform && terraform destroy -var 'packet_token=API_TOKEN' -var 'packet_project_id=PROJECT_ID'
.. _performance_tuning:

************
Tuning Guide
************

This guide helps you optimize a Cilium installation for optimal performance.

Recommendation
==============

The default out-of-the-box deployment of Cilium is focused on maximum
compatibility rather than most optimal performance. If you are a
performance-conscious user, here are the recommended settings for operating
Cilium to get the best out of your setup.

.. note::

   In-place upgrade by just enabling the config settings on an existing
   cluster is not possible since these tunings change the underlying datapath
   fundamentals and therefore require Pod or even node restarts.

   The best way to consume this for an existing cluster is to utilize per-node
   configuration for enabling the tunings only on newly spawned nodes which
   join the cluster. See the :ref:`per-node-configuration` page for more
   details.

Each of the settings for the recommended performance profile is described in
more detail on this page and in this `KubeCon talk `__:

- netkit device mode
- eBPF host-routing
- BIG TCP for IPv4/IPv6
- Bandwidth Manager (optional, for BBR congestion control)
- Per-CPU distributed LRU and increased map size ratio
- eBPF clock probe to use jiffies for CT map

**Requirements:**

* Kernel >= 6.8
* Supported NICs for BIG TCP: mlx4, mlx5, ice

To enable the main settings:

.. tabs::

    .. group-tab:: Helm

        .. cilium-helm-install::
            :namespace: kube-system
            :set: routingMode=native bpf.datapathMode=netkit bpf.masquerade=true bpf.distributedLRU.enabled=true bpf.mapDynamicSizeRatio=0.08 ipv6.enabled=true enableIPv6BIGTCP=true ipv4.enabled=true enableIPv4BIGTCP=true kubeProxyReplacement=true bpfClockProbe=true

For enabling BBR congestion control in addition, consider adding the following
settings to the above Helm install:

.. tabs::

    .. group-tab:: Helm

        .. parsed-literal::

            --set bandwidthManager.enabled=true \\
            --set bandwidthManager.bbr=true

.. _netkit:

netkit device mode
==================

netkit devices provide connectivity for Pods with the goal to improve
throughput and latency for applications as if they resided directly in the
host namespace, meaning it reduces the datapath overhead for network
namespaces down to zero. The `netkit driver in the kernel `__ has been
specifically designed for Cilium's needs and replaces the old-style veth
device type. See also the `KubeCon talk on netkit `__ for more details.

Cilium utilizes netkit in L3 device mode, blackholing traffic from the Pods
when there is no BPF program attached. The Pod-specific BPF programs are
attached inside the netkit peer device and can only be managed from the host
namespace through Cilium. netkit in combination with eBPF-based host-routing
achieves a fast network namespace switch for off-node traffic ingressing into
the Pod or leaving the Pod. When netkit is enabled, Cilium also utilizes tcx
for all attachments to non-netkit devices. This is done for higher efficiency
as well as for utilizing BPF links for all Cilium attachments. netkit is
available for kernel 6.8 and onwards and it also supports BIG TCP. Once the
base kernels become more ubiquitous, the veth device mode of Cilium will be
deprecated.

To validate whether your installation is running with netkit, run ``cilium
status`` in any of the Cilium Pods and look for the line reporting the status
for "Device Mode", which should state "netkit". Also, ensure that eBPF
host-routing is enabled - the reported status under "Host Routing" must state
"BPF".

.. warning::

   This is a beta feature. Please provide feedback and file a GitHub issue if
   you experience any problems. Known issues with this feature are tracked
   `here `_.
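As a sketch of this validation step, the relevant line can be extracted from
``cilium status`` output as follows. The sample text stands in for real output
so the snippet is self-contained; run the actual ``cilium status`` command
inside a Cilium Pod:

```shell
# Extract the "Device Mode" value from (sample) `cilium status` output.
sample='KubeProxyReplacement:    True
Device Mode:             netkit
Host Routing:            BPF'
printf '%s\n' "$sample" | awk -F':[ \t]*' '/^Device Mode/ {print $2}'
# prints: netkit
```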
.. note:: In-place upgrade by just enabling netkit on an existing cluster is
   not possible since the CNI plugin cannot simply replace veth with netkit
   after Pod creation. Also, running both flavors in
parallel is currently not supported. The best way to consume this for an
existing cluster is to utilize per-node configuration for enabling netkit on
newly spawned nodes which join the cluster. See the
:ref:`per-node-configuration` page for more details.

**Requirements:**

* Kernel >= 6.8
* eBPF host-routing

To enable netkit device mode with eBPF host-routing:

.. tabs::

    .. group-tab:: Helm

        .. cilium-helm-install::
            :namespace: kube-system
            :set: routingMode=native bpf.datapathMode=netkit bpf.masquerade=true kubeProxyReplacement=true

.. _eBPF_Host_Routing:

eBPF Host-Routing
=================

Even when network routing is performed by Cilium using eBPF, by default
network packets still traverse some parts of the regular network stack of the
node. This ensures that all packets still traverse through all of the iptables
hooks in case you depend on them. However, these hooks add significant
overhead. For exact numbers from our test environment, see
:ref:`benchmark_throughput` and compare the results for "Cilium" and "Cilium
(legacy host-routing)".

We introduced `eBPF-based host-routing `_ in Cilium 1.9 to fully bypass
iptables and the upper host stack, and to achieve a faster network namespace
switch compared to regular veth device operation. This option is automatically
enabled if your kernel supports it. To validate whether your installation is
running with eBPF host-routing, run ``cilium status`` in any of the Cilium
pods and look for the line reporting the status for "Host Routing", which
should state "BPF".

.. note::

   BPF Host Routing is incompatible with Istio (see :gh-issue:`36022` for
   details).

.. note::

   When using BPF Host Routing with IPsec, `a kernel bugfix `_ is required. If
   you observe connectivity problems, ensure that the kernel package on your
   nodes has been upgraded recently before reporting an issue.

**Requirements:**

* eBPF-based kube-proxy replacement
* eBPF-based masquerading

To enable eBPF Host-Routing:

.. tabs::

    .. group-tab:: Helm

        .. cilium-helm-install::
            :namespace: kube-system
            :set: bpf.masquerade=true kubeProxyReplacement=true

**Known limitations:**

eBPF host routing optimizes the host-internal packet routing, and packets no
longer hit the netfilter tables in the host namespace. Therefore, it is
incompatible with features relying on netfilter hooks (for example, `GKE
Workload Identities`_). Configure ``bpf.hostLegacyRouting=true`` or leverage
:ref:`local-redirect-policy` to work around this limitation.

.. _`GKE Workload Identities`: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity

.. _ipv6_big_tcp:

IPv6 BIG TCP
============

IPv6 BIG TCP allows the network stack to prepare larger GSO (transmit) and GRO
(receive) packets to reduce the number of times the stack is traversed, which
improves performance and latency. It reduces the CPU load and helps achieve
higher speeds (i.e. 100Gbit/s and beyond).

To pass such packets through the stack, BIG TCP adds a temporary Hop-By-Hop
header after the IPv6 one, which is stripped before transmitting the packet
over the wire.

BIG TCP can operate in a DualStack setup: IPv4 packets will use the old lower
limits (64k) if IPv4 BIG TCP is not enabled, and IPv6 packets will use the new
larger ones (192k). Both IPv4 BIG TCP and IPv6 BIG TCP can be enabled so that
both use the larger limit (192k). Note that Cilium assumes the default kernel
values for GSO and GRO maximum sizes are 64k and adjusts them only when
necessary, i.e. if BIG TCP
https://github.com/cilium/cilium/blob/main//Documentation/operations/performance/tuning.rst
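BIG TCP features gate on a minimum kernel version (5.19 for IPv6 BIG TCP, as listed in the requirements below). The following is a minimal sketch for comparing the running kernel against such a minimum; the version parsing assumes a standard ``uname -r`` format like ``6.8.0-40-generic`` and the threshold values are illustrative:

```shell
# Sketch: check whether the running kernel meets a minimum version,
# e.g. 5.19 for IPv6 BIG TCP (adjust req_major/req_minor as needed).
req_major=5
req_minor=19
kver=$(uname -r)
major=$(echo "$kver" | cut -d. -f1)
minor=$(echo "$kver" | cut -d. -f2 | cut -d- -f1)
if [ "$major" -gt "$req_major" ] || { [ "$major" -eq "$req_major" ] && [ "$minor" -ge "$req_minor" ]; }; then
    echo "kernel $kver meets the $req_major.$req_minor requirement"
else
    echo "kernel $kver is older than $req_major.$req_minor"
fi
```

Note this only checks the version string; distribution kernels may backport features, so treat the result as a hint rather than a guarantee.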
.. note::

   In-place upgrade by just enabling BIG TCP on an existing cluster is
   currently not possible since Cilium does not have access into Pods
   after they have been created. The best way to consume this for an
   existing cluster is to either restart Pods or to utilize per-node
   configuration for enabling BIG TCP on newly spawned nodes which join
   the cluster. See the :ref:`per-node-configuration` page for more
   details.

**Requirements:**

* Kernel >= 5.19
* eBPF Host-Routing
* eBPF-based kube-proxy replacement
* eBPF-based masquerading
* Tunneling and encryption disabled
* Supported NICs: mlx4, mlx5, ice

To enable IPv6 BIG TCP:

.. tabs::

    .. group-tab:: Helm

        .. cilium-helm-install::
            :namespace: kube-system
            :set: routingMode=native
                bpf.masquerade=true
                ipv6.enabled=true
                enableIPv6BIGTCP=true
                kubeProxyReplacement=true

Note that after toggling the IPv6 BIG TCP option the Kubernetes Pods
must be restarted for the changes to take effect.

To validate whether your installation is running with IPv6 BIG TCP, run
``cilium status`` in any of the Cilium pods and look for the line
reporting the status for "IPv6 BIG TCP" which should state "enabled".

IPv4 BIG TCP
============

Similar to IPv6 BIG TCP, IPv4 BIG TCP allows the network stack to
prepare larger GSO (transmit) and GRO (receive) packets to reduce the
number of times the stack is traversed, which improves performance and
latency. It reduces the CPU load and helps achieve higher speeds (i.e.
100Gbit/s and beyond).

To pass such packets through the stack, BIG TCP sets IPv4 tot_len to 0
and uses skb->len as the real IPv4 total length. The proper IPv4
tot_len is set before transmitting the packet over the wire.

BIG TCP can operate in a DualStack setup: IPv6 packets will use the old
lower limits (64k) if IPv6 BIG TCP is not enabled, and IPv4 packets will
use the new larger ones (192k). Both IPv4 BIG TCP and IPv6 BIG TCP can
be enabled so that both use the larger one (192k).

Note that Cilium assumes the default kernel values for GSO and GRO
maximum sizes are 64k and adjusts them only when necessary, i.e. if BIG
TCP is enabled and the current GSO/GRO maximum sizes are less than 192k
it will try to increase them, respectively when BIG TCP is disabled and
the current maximum values are more than 64k it will try to decrease
them. BIG TCP doesn't require network interface MTU changes.

.. note::

   In-place upgrade by just enabling BIG TCP on an existing cluster is
   currently not possible since Cilium does not have access into Pods
   after they have been created. The best way to consume this for an
   existing cluster is to either restart Pods or to utilize per-node
   configuration for enabling BIG TCP on newly spawned nodes which join
   the cluster. See the :ref:`per-node-configuration` page for more
   details.

**Requirements:**

* Kernel >= 6.3
* eBPF Host-Routing
* eBPF-based kube-proxy replacement
* eBPF-based masquerading
* Tunneling and encryption disabled
* Supported NICs: mlx4, mlx5, ice

To enable IPv4 BIG TCP:

.. tabs::

    .. group-tab:: Helm

        .. cilium-helm-install::
            :namespace: kube-system
            :set: routingMode=native
                bpf.masquerade=true
                ipv4.enabled=true
                enableIPv4BIGTCP=true
                kubeProxyReplacement=true
Note that after toggling the IPv4 BIG TCP option the Kubernetes Pods
must be restarted for the changes to take effect.

To validate whether your installation is running with IPv4 BIG TCP, run
``cilium status`` in any of the Cilium pods and look for the line
reporting the status for "IPv4 BIG TCP" which should state "enabled".

Bypass iptables Connection Tracking
===================================

For the case when eBPF Host-Routing cannot be used and thus network
packets still need to traverse the regular network stack in the host
namespace, iptables can add a significant cost. This traversal cost can
be minimized by disabling the connection tracking requirement for all
Pod traffic, thus bypassing the iptables connection tracker.

**Requirements:**

* Direct-routing configuration
* eBPF-based kube-proxy replacement
* eBPF-based masquerading or no masquerading

To enable the iptables connection-tracking bypass:

.. tabs::

    .. group-tab:: Cilium CLI

        .. parsed-literal::

            cilium install |CHART_VERSION| \
                --set installNoConntrackIptablesRules=true \
                --set kubeProxyReplacement=true

    .. group-tab:: Helm

        .. cilium-helm-install::
            :namespace: kube-system
            :set: installNoConntrackIptablesRules=true
                kubeProxyReplacement=true

If a Pod has the ``hostNetwork`` flag enabled, the ports for which
connection tracking should be skipped must be explicitly listed using
the ``network.cilium.io/no-track-host-ports`` annotation:

.. code-block:: yaml

    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        network.cilium.io/no-track-host-ports: "999/tcp,8123/tcp"

.. note::

   Only UDP and TCP transport protocols are supported with the
   ``network.cilium.io/no-track-host-ports`` annotation at the time of
   writing.

Hubble
======

Running with Hubble observability enabled can come at the expense of
performance. The overhead of Hubble is somewhere between 1-15% depending
on your network traffic patterns and Hubble aggregation settings.

In clusters with a huge amount of network traffic, cilium-agent might
spend a significant portion of CPU time on processing monitored events
and Hubble may even lose some events. There are multiple ways to tune
Hubble to avoid this.

Increase Hubble Event Queue Size
--------------------------------

The Hubble Event Queue buffers events after they have been emitted from
datapath and before they are processed by the Hubble subsystem. If this
queue is full, because Hubble can't keep up with the amount of emitted
events, Cilium will start dropping events. This does not impact traffic,
but the events won't be processed by Hubble and won't show up in Hubble
flows or metrics. When this happens you will see log lines similar to
the following.

::

    level=info msg="hubble events queue is processing messages again: NN messages were lost" subsys=hubble
    level=warning msg="hubble events queue is full: dropping messages; consider increasing the queue size (hubble-event-queue-size) or provisioning more CPU" subsys=hubble

By default the Hubble event queue size is ``#CPU * 1024``, or ``16384``
if your nodes have more than 16 CPU cores. If you encounter event bursts
that result in dropped events, increasing this queue size might help. We
recommend gradually doubling the queue length until the drops disappear.
If you don't see any improvements after increasing the queue length to
128k, further increasing the event queue size is unlikely to help. Be
aware that increasing the Hubble event queue size will result in
increased memory usage. Depending on your traffic pattern, increasing
the queue size by ``10,000`` may increase the memory usage by up to five
Megabytes.

.. tabs::

    .. group-tab:: Cilium CLI

        .. parsed-literal::

            cilium install |CHART_VERSION| \
                --set hubble.eventQueueSize=32768

    .. group-tab:: Helm

        .. cilium-helm-install::
            :namespace: kube-system
            :set: hubble.eventQueueSize=32768

    .. group-tab:: Per-Node

        If only certain nodes are affected you may also set the queue
        length on a per-node basis using a :ref:`CiliumNodeConfig
        object `.

        ::

            apiVersion: cilium.io/v2
            kind: CiliumNodeConfig
            metadata:
              namespace: kube-system
              name: set-hubble-event-queue
            spec:
              nodeSelector:
                matchLabels:
                  # Update selector to match your nodes
                  io.cilium.update-hubble-event-queue: "true"
              defaults:
                hubble-event-queue-size: "32768"
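The default queue size rule described above (``#CPU * 1024``, capped at ``16384`` for nodes with more than 16 cores) can be computed per node. A minimal sketch, assuming ``nproc`` reports the node's CPU count:

```shell
# Sketch: compute the default Hubble event queue size for this node,
# following the rule above: #CPU * 1024, capped at 16384 for >16 cores.
cpus=$(nproc)
queue_size=$((cpus * 1024))
if [ "$queue_size" -gt 16384 ]; then
    queue_size=16384
fi
echo "default hubble-event-queue-size: $queue_size"
```

This gives the baseline from which to start doubling when you observe drops.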
Increasing the Hubble event queue size can't mitigate a consistently
high rate of events being emitted by Cilium datapath and it does not
reduce CPU utilization. For this you should consider increasing the
aggregation interval or rate limiting events.

Increase Aggregation Interval
-----------------------------

By default Cilium generates tracing events according to the configured
``monitor-aggregation`` level. For packet events, Cilium generates a
tracing event for send packets only on every new connection, any time a
packet contains TCP flags that have not been previously seen for the
packet direction, and on average once per
``monitor-aggregation-interval``, which defaults to 5 seconds.

When socket load-balancing is enabled, the same aggregation levels apply
to socket translation events (for example, pre/post reverse
translation):

- ``none``: emit all socket trace events
- ``lowest``/``low``: suppress reverse-direction (recv) socket traces
- ``medium``/``maximum``: emit socket trace events only for connect
  system calls

When aggregation is enabled (>= ``lowest``), socket trace emission is
aligned to ``monitor-aggregation-interval`` using a strict cadence of
approximately one trace per interval for active flows.

Depending on your network traffic patterns, the re-emitting of trace
events per aggregation interval can make up a large part of the total
events. Increasing the aggregation interval may decrease CPU utilization
and can prevent lost events. The following will set the aggregation
interval to 10 seconds.

.. tabs::

    .. group-tab:: Cilium CLI

        .. parsed-literal::

            cilium install |CHART_VERSION| \
                --set bpf.events.monitorInterval="10s"

    .. group-tab:: Helm

        .. cilium-helm-install::
            :namespace: kube-system
            :set: bpf.events.monitorInterval="10s"

Rate Limit Events
-----------------

To further prevent high CPU utilization caused by Hubble, you can also
set limits on how many events can be generated by datapath code. Two
limits are possible to configure:

* Rate limit - limits how many events on average can be generated
* Burst limit - limits the number of events that can be generated in a
  span of 1 second

When both limits are set to 0, no BPF events rate limiting is imposed.

.. note::

   Helm configuration for BPF events map rate limiting is experimental
   and might change in upcoming releases.

.. warning::

   When BPF events map rate limiting is enabled, Cilium monitor, Hubble
   observability, Hubble metrics reliability, and Hubble export
   functionalities might be impacted due to dropped events.

To enable eBPF Event Rate Limiting with a rate limit of 10,000 and a
burst limit of 50,000:

.. tabs::

    .. group-tab:: Cilium CLI

        .. parsed-literal::

            cilium install |CHART_VERSION| \
                --set bpf.events.default.rateLimit=10000 \
                --set bpf.events.default.burstLimit=50000

    .. group-tab:: Helm

        .. cilium-helm-install::
            :namespace: kube-system
            :set: bpf.events.default.rateLimit=10000
                bpf.events.default.burstLimit=50000

You can also choose to stop exposing event types in which you are not
interested. For instance, if you are mainly interested in dropped
traffic, you can disable "trace" events, which will likely reduce the
overall CPU consumption of the agent.

.. tabs::

    .. group-tab:: Cilium CLI

        .. code-block:: shell-session

            cilium config set bpf-events-trace-enabled false

    .. group-tab:: Helm

        .. cilium-helm-install::
            :namespace: kube-system
            :set: bpf.events.trace.enabled=false

.. warning::

   Suppressing one or more event types will impact ``cilium monitor`` as
   well as Hubble observability capabilities, metrics and exports.

Disable Hubble
--------------

If all this is not sufficient, in order to optimize for maximum
performance, you can disable Hubble:

.. tabs::

    .. group-tab:: Cilium CLI

        .. code-block:: shell-session

            cilium hubble disable

    .. group-tab:: Helm

        .. cilium-helm-install::
            :namespace: kube-system
            :set: hubble.enabled=false
MTU
===

The maximum transfer unit (MTU) can have a significant impact on the
network throughput of a configuration. Cilium will automatically detect
the MTU of the underlying network devices. Therefore, if your system is
configured to use jumbo frames, Cilium will automatically make use of
it. To benefit from this, make sure that your system is configured to
use jumbo frames if your network allows for it.

Disable Packet Layer PMTUD
--------------------------

Cilium enables Linux's TCP Packetization Layer Path MTU Discovery by
default for Pod endpoints. This is a kernel feature that implements
`RFC4821 `__ which provides a way of dynamically discovering the correct
path MTU size for connections that is resilient against lost packets and
firewalls blocking regular ICMP based PMTUD messages. In particular,
this provides a robust MTU discovery mechanism against network black
holes arising from incorrect MTU sizes and firewalls dropping PMTUD
error messages.

Although this provides a more robust way of discovering path MTU, it
comes at the possible cost of connections initially using a sub-optimal
MSS, resulting in lower network performance. In the case where the
correct MTU is known, disabling this feature may provide some improved
network throughput on TCP connections. This feature can be disabled via
the helm value: ``pmtuDiscovery.packetizationLayerPMTUD.enabled=false``.

Bandwidth Manager
=================

Cilium's Bandwidth Manager is responsible for managing network traffic
more efficiently with the goal of improving overall application latency
and throughput. Aside from natively supporting Kubernetes Pod bandwidth
annotations, the `Bandwidth Manager `_, first introduced in Cilium 1.9,
is also setting up Fair Queue (FQ) queueing disciplines to support TCP
stack pacing (e.g. from EDT/BBR) on all external-facing network devices
as well as setting optimal server-grade sysctl settings for the
networking stack.

**Requirements:**

* eBPF-based kube-proxy replacement

To enable the Bandwidth Manager:

.. tabs::

    .. group-tab:: Helm

        .. cilium-helm-install::
            :namespace: kube-system
            :set: bandwidthManager.enabled=true
                kubeProxyReplacement=true

To validate whether your installation is running with Bandwidth Manager,
run ``cilium status`` in any of the Cilium pods and look for the line
reporting the status for "BandwidthManager" which should state "EDT with
BPF".

BBR congestion control for Pods
===============================

The base infrastructure around MQ/FQ setup provided by Cilium's
Bandwidth Manager also allows for use of TCP `BBR congestion control `_
for Pods. BBR is in particular suitable when Pods are exposed behind
Kubernetes Services which face external clients from the Internet. BBR
achieves higher bandwidths and lower latencies for Internet traffic, for
example, it has been `shown `_ that BBR's throughput can reach as much
as 2,700x higher than today's best loss-based congestion control and
queueing delays can be 25x lower.

In order for BBR to work reliably for Pods, it requires a 5.18 or higher
kernel. As outlined in our `Linux Plumbers 2021 talk `_, this is needed
since older kernels do not retain timestamps of network packets when
switching from Pod to host network namespace. Due to the latter, the
kernel's pacing infrastructure does not function properly in general
(not specific to Cilium). We helped fix this issue for recent kernels so
that timestamps are retained and BBR for Pods works. BBR also needs eBPF
Host-Routing in order to retain the network packet's socket association
all the way until the packet hits the FQ queueing discipline on the
physical device in the host namespace.
.. note::

   In-place upgrade by just enabling BBR on an existing cluster is not
   possible since Cilium cannot migrate existing sockets over to BBR
   congestion control. The best way to consume this is to either only
   enable it on newly built clusters, to restart Pods on existing
   clusters, or to utilize per-node configuration for enabling BBR on
   newly spawned nodes which join the cluster. See the
   :ref:`per-node-configuration` page for more details.

Note that the use of BBR could lead to a higher amount of TCP
retransmissions and more aggressive behavior towards TCP CUBIC
connections.

**Requirements:**

* Kernel >= 5.18
* Bandwidth Manager
* eBPF Host-Routing

To enable the Bandwidth Manager with BBR for Pods:

.. tabs::

    .. group-tab:: Helm

        .. cilium-helm-install::
            :namespace: kube-system
            :set: bandwidthManager.enabled=true
                bandwidthManager.bbr=true
                kubeProxyReplacement=true

To validate whether your installation is running with BBR for Pods, run
``cilium status`` in any of the Cilium pods and look for the line
reporting the status for "BandwidthManager" which should then state
``EDT with BPF`` as well as ``[BBR]``.

XDP Acceleration
================

Cilium has built-in support for accelerating NodePort, LoadBalancer
services and services with externalIPs for the case where the arriving
request needs to be pushed back out of the node when the backend is
located on a remote node. In that case, the network packets do not need
to be pushed all the way to the upper networking stack, but with the
help of XDP, Cilium is able to process those requests right out of the
network driver layer. This helps to reduce latency and scale-out
services given that a single node's forwarding capacity is dramatically
increased. The kube-proxy replacement at the XDP layer is `available
from Cilium 1.8 `_.

**Requirements:**

* Kernel >= 4.19.57, >= 5.1.16, >= 5.2
* Native XDP supported driver, check :ref:`our driver list `
* eBPF-based kube-proxy replacement

To enable the XDP Acceleration, check out :ref:`our getting started
guide ` which also contains instructions for setting it up on public
cloud providers.

To validate whether your installation is running with XDP Acceleration,
run ``cilium status`` in any of the Cilium pods and look for the line
reporting the status for "XDP Acceleration" which should say "Native".

eBPF Map Backend Memory
=======================

Changing Cilium's core BPF map memory configuration from a node-global
LRU memory pool to a distributed per-CPU memory pool helps to avoid
spinlock contention in the kernel under stress (many CT/NAT element
allocation and free operations). The trade-off is higher memory usage
given the per-CPU pools cannot be shared anymore, so if a given CPU pool
depletes it needs to recycle elements via the LRU mechanism. It is
therefore recommended to not only enable ``bpf.distributedLRU.enabled``
but to also increase the map sizing, which can be done via
``bpf.mapDynamicSizeRatio``:

.. tabs::

    .. group-tab:: Helm

        .. cilium-helm-install::
            :namespace: kube-system
            :set: kubeProxyReplacement=true
                bpf.distributedLRU.enabled=true
                bpf.mapDynamicSizeRatio=0.08

Note that ``bpf.distributedLRU.enabled`` is off by default in Cilium for
legacy reasons given that enabling this setting on-the-fly is disruptive
for in-flight traffic since the BPF maps have to be recreated. It is
recommended to use the per-node configuration to gradually phase in this
setting for new nodes joining the cluster. Alternatively, consider
enabling it upon initial cluster creation. Also,
``bpf.distributedLRU.enabled`` is currently only supported in
combination with ``bpf.mapDynamicSizeRatio`` as opposed to statically
sized map configuration.
eBPF Map Sizing
===============

All eBPF maps are created with upper capacity limits. Insertion beyond
the limit would fail or constrain the scalability of the datapath.
Cilium is using auto-derived defaults based on the given ratio of the
total system memory. However, the upper capacity limits used by the
Cilium agent can be overridden for advanced users. Please refer to the
:ref:`bpf_map_limitations` guide.

eBPF Clock Probe
================

Cilium can probe the underlying kernel to determine whether BPF supports
retrieving jiffies instead of ktime. Given Cilium's CT map does not
require high resolution, jiffies is more efficient and the preferred
clock source. To enable probing and possibly using jiffies,
``bpfClockProbe=true`` can be set:

.. tabs::

    .. group-tab:: Helm

        .. cilium-helm-install::
            :namespace: kube-system
            :set: kubeProxyReplacement=true
                bpfClockProbe=true

Note that ``bpfClockProbe`` is off by default in Cilium for legacy
reasons given that enabling this setting on-the-fly means that
previously stored CT map entries with ktime as clock source for
timestamps would now be interpreted as jiffies. It is therefore
recommended to use the per-node configuration to gradually phase in this
setting for new nodes joining the cluster. Alternatively, consider
enabling it upon initial cluster creation.

To validate whether jiffies is now used, run ``cilium status
--verbose`` in any of the Cilium Pods and look for the line ``Clock
Source for BPF``.

Linux Kernel
============

In general, we highly recommend using the most recent LTS stable kernel
provided by the `kernel community `_ or by a downstream distribution of
your choice. The newer the kernel, the more likely it is that various
datapath optimizations can be used.

In our Cilium release blogs, we also regularly highlight some of the
eBPF based kernel work we conduct which implicitly helps Cilium's
datapath performance such as `replacing retpolines with direct jumps in
the eBPF JIT `_.

Moreover, the kernel allows to configure several options which will help
maximize network performance.

CONFIG_PREEMPT_NONE
-------------------

Run a kernel version with ``CONFIG_PREEMPT_NONE=y`` set. Some Linux
distributions offer kernel images with this option set or you can
re-compile the Linux kernel. ``CONFIG_PREEMPT_NONE=y`` is the
recommended setting for server workloads.

Kubernetes
==========

Set scheduling mode
-------------------

By default, the cilium daemonset is configured with an `inter-pod
anti-affinity`_ rule. Inter-pod anti-affinity is not recommended for
`clusters larger than several hundred nodes`_ as it reduces scheduling
throughput of `kube-scheduler`_.

If your cilium daemonset uses a host port (e.g. if prometheus metrics
are enabled), ``kube-scheduler`` guarantees that only a single pod with
that port/protocol is scheduled to a node -- effectively offering the
same guarantee provided by the inter-pod anti-affinity rule. To leverage
this, consider using ``--set scheduling.mode=kube-scheduler`` when
installing or upgrading cilium.

.. note::

   Use caution when changing host port numbers. Changing the host port
   number removes the ``kube-scheduler`` guarantee. When a host port
   number must change, ensure at least one host port number is shared
   across the upgrade, or consider using
   ``--set scheduling.mode=anti-affinity``.

.. _inter-pod anti-affinity: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
.. _clusters larger than several hundred nodes: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#:~:text=We%20do%20not%20recommend%20using%20them%20in%20clusters%20larger%20than%20several%20hundred%20nodes.
.. _kube-scheduler: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/

Further Considerations
======================

Various additional settings that we recommend help to tune the system
for specific workloads and to reduce jitter:

tuned network-* profiles
------------------------

The `tuned `_ project offers various profiles to optimize for
deterministic performance at the cost of increased power consumption,
that is, ``network-latency`` and ``network-throughput``, for example. To
enable the former, run:

.. code-block:: shell-session

    tuned-adm profile network-latency
Set CPU governor to performance
-------------------------------

CPU frequency scaling up and down can impact latency tests and lead to
sub-optimal performance. To achieve maximum consistent performance, set
the CPU governor to ``performance``:

.. code-block:: bash

    for CPU in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance > $CPU
    done

Stop ``irqbalance`` and pin the NIC interrupts to specific CPUs
---------------------------------------------------------------

In case you are running ``irqbalance``, consider disabling it as it
might migrate the NIC's IRQ handling among CPUs and can therefore cause
non-deterministic performance:

.. code-block:: shell-session

    killall irqbalance

We highly recommend pinning the NIC interrupts to specific CPUs in order
to allow for maximum workload isolation! See `this script `_ for details
and initial pointers on how to achieve this. Note that pinning the
queues can potentially vary in setup between different drivers.

We generally also recommend checking various documentation and
performance tuning guides from NIC vendors on this matter such as from
`Mellanox `_, `Intel `_ or others for more information.
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please
   use the official rendered version released here: https://docs.cilium.io

.. _identity-relevant-labels:

***********************************
Limiting Identity-Relevant Labels
***********************************

We recommend that operators with larger environments limit the set of
identity-relevant labels to avoid frequent creation of new security
identities. Many Kubernetes labels are not useful for policy enforcement or
visibility. A few good examples of such labels include timestamps or hashes.
These labels, when included in evaluation, cause Cilium to generate a unique
identity for each pod instead of a single identity for all of the pods that
comprise a service or application.

By default, Cilium considers all labels to be relevant for identities, with
the following exceptions:

=================================================== =========================================================
Label                                               Description
--------------------------------------------------- ---------------------------------------------------------
``!io\.kubernetes``                                 Ignore all ``io.kubernetes`` labels
``!kubernetes\.io``                                 Ignore all other ``kubernetes.io`` labels
``!statefulset\.kubernetes\.io/pod-name``           Ignore ``statefulset.kubernetes.io/pod-name`` label
``!apps\.kubernetes\.io/pod-index``                 Ignore ``apps.kubernetes.io/pod-index`` label
``!batch\.kubernetes\.io/job-completion-index``     Ignore ``batch.kubernetes.io/job-completion-index`` label
``!batch\.kubernetes\.io/controller-uid``           Ignore ``batch.kubernetes.io/controller-uid`` label
``!beta\.kubernetes\.io``                           Ignore all ``beta.kubernetes.io`` labels
``!k8s\.io``                                        Ignore all ``k8s.io`` labels
``!pod-template-generation``                        Ignore all ``pod-template-generation`` labels
``!pod-template-hash``                              Ignore all ``pod-template-hash`` labels
``!controller-revision-hash``                       Ignore all ``controller-revision-hash`` labels
``!annotation.*``                                   Ignore all ``annotation`` labels
``!controller-uid``                                 Ignore all ``controller-uid`` labels
``!etcd_node``                                      Ignore all ``etcd_node`` labels
=================================================== =========================================================

The above label patterns are all *exclusive label patterns*, that is to say
they define which label keys should be ignored. These are identified by the
presence of the ``!`` character. Label configurations that do not contain
the ``!`` character are *inclusive label patterns*. Once at least one
inclusive label pattern is added, only labels that match the inclusive label
configuration may be considered relevant for identities. Additionally, when
at least one inclusive label pattern is configured, the following inclusive
label patterns are automatically added to the configuration:

=================================================== ==========================================================
Label                                               Description
--------------------------------------------------- ----------------------------------------------------------
``reserved:.*``                                     Include all ``reserved:`` labels
``io\.kubernetes\.pod\.namespace``                  Include all ``io.kubernetes.pod.namespace`` labels
``io\.cilium\.k8s\.namespace\.labels``              Include all ``io.cilium.k8s.namespace.labels`` labels
``io\.cilium\.k8s\.policy\.cluster``                Include all ``io.cilium.k8s.policy.cluster`` labels
``io\.cilium\.k8s\.policy\.serviceaccount``         Include all ``io.cilium.k8s.policy.serviceaccount`` labels
``app\.kubernetes\.io``                             Include all ``app.kubernetes.io`` labels
=================================================== ==========================================================

Configuring Identity-Relevant Labels
------------------------------------

To limit the labels used for evaluating Cilium identities, edit the Cilium
ConfigMap object using ``kubectl edit cm -n kube-system cilium-config`` and
insert a line to define the label patterns to include or exclude.
Alternatively, this attribute can also be set via the Helm option
``--set labels=``.

.. code-block:: yaml

   apiVersion: v1
   data:
     # ...
     kube-proxy-replacement: "true"
     labels: "io\\.kubernetes\\.pod\\.namespace k8s-app app name"
     enable-ipv4-masquerade: "true"
     monitor-aggregation: medium
     # ...

.. note::

   The double backslash in ``\\.`` is required to escape the backslash in
   the YAML string so that the resulting regular expression contains ``\.``.

Label patterns are regular expressions that are implicitly anchored at the
start of the label. For example, ``example\.com`` will match labels that
start with ``example.com``, whereas ``.*example\.com`` will match labels
that contain ``example.com`` anywhere. Be sure to escape periods in domain
names to avoid the pattern matching too broadly and therefore including or
excluding too many labels. Because label patterns are regular expressions,
``kind$`` or ``^kind$`` will match exactly the label key ``kind``, not just
keys with that prefix.

Upon defining a custom list of label patterns in the ConfigMap, Cilium adds
the provided list of label patterns to the default list of label patterns.
After saving the ConfigMap, if the Operator is managing identities
(:ref:`IdentityManagementMode`), restart both the Cilium Operators and
Agents to pick up the new label pattern setting. If the Agent is managing
identities, restart the Cilium Agents to pick up the new label pattern.

.. code-block:: shell-session

   kubectl rollout restart -n kube-system ds/cilium

.. note::

   Configuring Cilium with label patterns via the ``labels`` Helm value
   does **not** override the default set of label patterns. That is to
   say, you can
consider this configuration to append a list of label configurations to
the defaults listed above. If you wish to configure this setting in a
declarative way, including the exact set of label prefixes to be considered
for determining workload security identities, you should instead configure
the ``label-prefix-file`` configuration flag.

Existing identities will not change as a result of this new configuration.
To apply the new label pattern setting to existing identities, restart the
corresponding Cilium pod on the node where the workload is running. Upon
restart, new identities will be created. The old identities will be garbage
collected by the Cilium Operator once they are no longer used by any Cilium
endpoints.

When specifying multiple label patterns to evaluate, provide the list of
labels as a space-separated string.

Including Labels
----------------

Labels can be defined as a list of labels to include. Only the labels
specified and the default inclusive labels will be used to evaluate Cilium
identities:

.. code-block:: yaml

   labels: "io\\.kubernetes\\.pod\\.namespace k8s-app app name kind$ other$"

The above configuration would only include the following label keys when
evaluating Cilium identities:

- k8s-app
- app
- name
- kind
- other
- reserved:.*
- io\.kubernetes\.pod\.namespace
- io\.cilium\.k8s\.namespace\.labels
- io\.cilium\.k8s\.policy\.cluster
- io\.cilium\.k8s\.policy\.serviceaccount
- app\.kubernetes\.io

Note that ``io.kubernetes.pod.namespace`` is already part of the default
inclusive labels, so listing it explicitly is redundant. Labels with the
same prefix as defined in the configuration will also be considered.
The following are some examples of label keys that would also be evaluated
for Cilium identities:

- k8s-app-team
- app-production
- name-defined

Because the label keys ``kind$`` and ``other$`` end with ``$``, only label
keys exactly matching ``kind`` and ``other`` will be evaluated.

When a single inclusive label is added to the filter, all labels not
defined in the default list will be excluded. For example, pods running
with the security labels ``team=team-1, env=prod`` will have the label
``env=prod`` ignored as soon as Cilium is started with the filter ``team``.

Excluding Labels
----------------

Label patterns can also be specified as a list of exclusions. Exclude
labels by placing an exclamation mark after the colon that separates the
prefix and the pattern. When defined as a list of exclusions, Cilium will
include the set of default labels, but will exclude any matches in the
provided list when evaluating Cilium identities:

.. code-block:: yaml

   labels: "!controller-uid !job-name"

The provided example would cause Cilium to exclude any of the following
label matches:

- controller-uid
- job-name
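As an illustration of the inclusive and exclusive semantics above, the
following is a simplified, hypothetical Python sketch (the real logic lives
in Cilium's label filtering code and differs in detail); ``re.match``
mirrors the implicit anchoring of patterns at the start of the label key:

```python
import re

# Hypothetical helper, not Cilium's actual implementation: a subset of the
# automatically added inclusive patterns documented above.
DEFAULT_INCLUSIVE = [
    r"reserved:.*",
    r"io\.kubernetes\.pod\.namespace",
    r"app\.kubernetes\.io",
]

def is_identity_relevant(label_key: str, patterns: list[str]) -> bool:
    include = [p for p in patterns if not p.startswith("!")]
    exclude = [p[1:] for p in patterns if p.startswith("!")]
    # Exclusive patterns always remove a label; re.match anchors at the
    # start of the key, like the implicit anchoring described above.
    if any(re.match(p, label_key) for p in exclude):
        return False
    if include:
        # Once any inclusive pattern exists, only matching keys (plus the
        # automatically added defaults) remain identity-relevant.
        return any(re.match(p, label_key) for p in include + DEFAULT_INCLUSIVE)
    return True  # no inclusive patterns: everything not excluded is kept

print(is_identity_relevant("k8s-app-team", ["k8s-app", "app"]))  # True (prefix match)
print(is_identity_relevant("kind-of", ["kind$"]))                # False ($ anchors the end)
print(is_identity_relevant("pod-template-hash", ["!pod-template-hash"]))  # False
```

Note how a bare prefix pattern like ``k8s-app`` also matches ``k8s-app-team``,
while the ``$``-terminated pattern matches only the exact key.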
https://github.com/cilium/cilium/blob/main//Documentation/operations/performance/scalability/identity-relevant-labels.rst
.. _scalability_guide:

********************
Scalability report
********************

This report is intended for users planning to run Cilium on clusters with
more than 200 nodes in CRD mode (without a kvstore available). In our
development cycle we have deployed Cilium on large clusters and these were
the options that were suitable for our testing:

=====
Setup
=====

.. code-block:: shell-session

   helm template cilium \
     --namespace kube-system \
     --set endpointHealthChecking.enabled=false \
     --set healthChecking=false \
     --set ipam.mode=kubernetes \
     --set k8sServiceHost= \
     --set k8sServicePort= \
     --set prometheus.enabled=true \
     --set operator.prometheus.enabled=true \
     > cilium.yaml

* ``--set endpointHealthChecking.enabled=false`` and
  ``--set healthChecking=false`` disable endpoint health checking entirely.
  However, it is recommended that those features be enabled initially on a
  smaller cluster (3-10 nodes) where they can be used to detect potential
  packet loss due to firewall rules or hypervisor settings.
* ``--set ipam.mode=kubernetes`` is set to ``"kubernetes"`` since our cloud
  provider has pod CIDR allocation enabled in ``kube-controller-manager``.
* ``--set k8sServiceHost`` and ``--set k8sServicePort`` were set with the
  IP address of the load balancer that was in front of ``kube-apiserver``.
  This allows Cilium to not depend on kube-proxy to connect to
  ``kube-apiserver``.
* ``--set prometheus.enabled=true`` and
  ``--set operator.prometheus.enabled=true`` were set because we had a
  Prometheus server probing for metrics in the entire cluster.

Our testing cluster consisted of 3 controller nodes and 1000 worker nodes.
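The same options shown as ``--set`` flags above could equivalently be kept
in a values file (a hypothetical ``values.yaml``, mirroring those flags;
the empty ``k8sServiceHost``/``k8sServicePort`` values are placeholders for
your API server load balancer):

```yaml
# Hypothetical values.yaml mirroring the helm template flags above.
endpointHealthChecking:
  enabled: false
healthChecking: false
ipam:
  mode: kubernetes
k8sServiceHost: ""   # IP of the load balancer in front of kube-apiserver
k8sServicePort: ""   # port of that load balancer
prometheus:
  enabled: true
operator:
  prometheus:
    enabled: true
```

It would then be rendered with ``helm template cilium --namespace
kube-system -f values.yaml > cilium.yaml``.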
We have followed the recommended settings from the official Kubernetes
documentation and have provisioned our machines with the following
settings:

* **Cloud provider**: Google Cloud
* **Controllers**: 3x n1-standard-32 (32vCPU, 120GB memory and 50GB SSD,
  kernel 5.4.0-1009-gcp)
* **Workers**: 1 pool of 1000x custom-2-4096 (2vCPU, 4GB memory and 10GB
  HDD, kernel 5.4.0-1009-gcp)
* **Metrics**: 1x n1-standard-32 (32vCPU, 120GB memory and 10GB HDD +
  500GB HDD); this is a dedicated node for the Prometheus and Grafana pods.

.. note::

   All 3 controller nodes were behind a GCE load balancer. Each controller
   contained ``etcd``, ``kube-apiserver``, ``kube-controller-manager`` and
   ``kube-scheduler`` instances.

The CPU, memory and disk size set for the workers might be different for
your use case. You might have pods that require more memory or CPU
available, so you should design your workers based on your requirements.

During our testing we had to set the ``etcd`` option
``quota-backend-bytes=17179869184`` because ``etcd`` failed once it reached
around ``2GiB`` of allocated space.

We provisioned our worker nodes without ``kube-proxy`` since Cilium is
capable of performing all the functionality provided by ``kube-proxy``. We
created a load balancer in front of ``kube-apiserver`` to allow Cilium to
access ``kube-apiserver`` without ``kube-proxy``, and configured Cilium
with the options ``--set k8sServiceHost=`` and ``--set k8sServicePort=``.

Our ``DaemonSet`` ``updateStrategy`` had ``maxUnavailable`` set to 250 pods
instead of 2, but this value highly depends on your requirements when you
are performing a rolling update of Cilium.

=====
Steps
=====

For each step we took, we provide more details below, with our findings
and expected behaviors.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. Install Kubernetes v1.18.3 with EndpointSlice feature enabled
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To test the most up-to-date functionalities from Kubernetes and Cilium, we
have performed our testing with Kubernetes v1.18.3 and the EndpointSlice
feature enabled to improve scalability. Since Kubernetes requires an
``etcd`` cluster, we have deployed v3.4.9.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2. Deploy Prometheus, Grafana and Cilium
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

We have used Prometheus v2.18.1 and Grafana v7.0.1 to retrieve and analyze
``etcd``, ``kube-apiserver``, ``cilium`` and ``cilium-operator`` metrics.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
3. Provision 2 worker nodes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This helped us to understand if our testing cluster was correctly
provisioned and all metrics were being gathered.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4. Deploy 5 namespaces with 25 deployments on each namespace
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* Each deployment had 1 replica (125 pods in total).
* To measure **only** the resources consumed by Cilium, all deployments
  used the same base image ``registry.k8s.io/pause:3.2``. This image does
  not have any CPU or memory overhead.
* We provision a small number of pods in a small cluster to understand the
  CPU usage of Cilium:

.. figure:: images/image_4_01.png

The mark shows when the creation of 125 pods started. As expected, we can
see a slight increase of the CPU usage on both the running Cilium agents
and the Cilium Operator. The agents peaked at 6.8% CPU usage on a 2vCPU
machine.

.. figure:: images/image_4_02.png

For the memory usage, we have not seen a significant memory growth in the
Cilium agent. On the eBPF memory side, we do see it increasing due to the
initialization of some eBPF maps for the new pods.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
5. Provision 998 additional nodes (total 1000 nodes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. figure:: images/image_5_01.png

The first mark represents the action of creating nodes, the second mark
when 1000 Cilium pods were in ready state. The CPU usage increase is
expected since each Cilium agent receives events from Kubernetes whenever
a new node is provisioned in the cluster. Once all nodes were deployed,
the CPU usage was 0.15% on average on a 2vCPU node.
.. figure:: images/image_5_02.png

As we have increased the number of nodes in the cluster to 1000, it is
expected to see a small growth of the memory usage in all metrics.
However, it is relevant to point out that **an increase in the number of
nodes does not cause any significant increase in Cilium's memory
consumption in either the control plane or the dataplane.**

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
6. Deploy 25 more deployments on each namespace
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This brings us to a total of ``5 namespaces * (25 old deployments + 25 new
deployments) = 250`` deployments in the entire cluster. We did not install
250 deployments from the start since we only had 2 nodes, and that would
have created 125 pods on each worker node. According to the Kubernetes
documentation, the maximum recommended number of pods per node is 100.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
7. Scale each deployment to 200 replicas (50000 pods in total)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Having 5 namespaces with 50 deployments means that we have 250 different
unique security identities. Having a low cardinality in the labels
selected by Cilium helps scale the cluster. By default, Cilium has a limit
of 16k security identities, but it can be increased with
``bpf-policy-map-max`` in the Cilium ``ConfigMap``.

.. figure:: images/image_7_01.png

The first mark represents the action of scaling up the deployments, the
second mark when 50000 pods were in ready state.

* It is expected to see the CPU usage of Cilium increase since, on each
  node, Cilium agents receive events from Kubernetes when a new pod is
  scheduled and started.
* The average CPU consumption of all Cilium agents was 3.38% on a 2vCPU
  machine. At one point, roughly around minute 15:23, one of those Cilium
  agents peaked at 27.94% CPU usage.
* The Cilium Operator had a stable 5% CPU consumption while the pods were
  being created.
.. figure:: images/image_7_02.png

Similar to the behavior seen while increasing the number of worker nodes,
adding new pods also increases Cilium memory consumption.

* As we increased the number of pods from 250 to 50000,
  we saw a maximum memory usage of 573MiB for one of the Cilium agents,
  while the average was 438MiB.
* For the eBPF memory usage we saw a maximum usage of 462.7MiB.
* This means that each **Cilium agent's memory increased by 10.5KiB per
  new pod in the cluster.**

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
8. Deploy 250 policies for 1 namespace
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Here we have created 125 L4 network policies and 125 L7 policies. Each
policy selected all pods in this namespace and was allowed to send traffic
to another pod in this namespace. Each of the 250 policies allows access
to a disjoint set of ports. In the end we will have 250 different policies
selecting 10000 pods.

.. code-block:: yaml

   apiVersion: "cilium.io/v2"
   kind: CiliumNetworkPolicy
   metadata:
     name: "l4-rule-#"
     namespace: "namespace-1"
   spec:
     endpointSelector:
       matchLabels:
         my-label: testing
     fromEndpoints:
       matchLabels:
         my-label: testing
     egress:
     - toPorts:
       - ports:
         - port: "[0-125]+80" # from 80 to 12580
           protocol: TCP

.. code-block:: yaml

   apiVersion: "cilium.io/v2"
   kind: CiliumNetworkPolicy
   metadata:
     name: "l7-rule-#"
     namespace: "namespace-1"
   spec:
     endpointSelector:
       matchLabels:
         my-label: testing
     fromEndpoints:
       matchLabels:
         my-label: testing
     ingress:
     - toPorts:
       - ports:
         - port: '[126-250]+80' # from 12680 to 25080
           protocol: TCP
         rules:
           http:
           - method: GET
             path: "/path1$"
           - method: PUT
             path: "/path2$"
             headers:
             - 'X-My-Header: true'

.. figure:: images/image_8_01.png

In this case we saw one of the Cilium agents jump to 100% CPU usage for
15 seconds, while the average peak was 40% during a period of 90 seconds.
.. figure:: images/image_8_02.png

As expected, **increasing the number of policies does not have a
significant impact on the memory usage of Cilium, since the eBPF policy
maps have a constant size** once a pod is initialized.

.. figure:: images/image_8_03.png

.. figure:: images/image_8_04.png

The first mark represents the point in time when we ran ``kubectl create``
to create the ``CiliumNetworkPolicies``. Since we created the 250 policies
sequentially, we cannot properly compute the convergence time. To do that,
we could use a single CNP with multiple policy rules defined under the
``specs`` field (instead of the ``spec`` field). Nevertheless, we can look
at the time it took the last Cilium agent to increment its policy revision,
which is incremented individually on each Cilium agent every time a
CiliumNetworkPolicy (CNP) is received: it happened between ``15:45:44``
and ``15:45:46``. We can also see when the last endpoint was regenerated
by checking the 99th percentile of the "Endpoint regeneration time"; in
this manner, we see that it took less than 5s. We can also verify that
**the maximum time for an endpoint to have the policy enforced was less
than 600ms.**

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
9. Deploy 250 CiliumClusterwideNetworkPolicies (CCNP)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The difference between these policies and the ones previously installed is
that these select all pods in all namespaces. To recap, this means that we
will now have **250 different network policies selecting 10000 pods and
250 different network policies selecting 50000 pods on a cluster with 1000
nodes.** Similarly to the previous step, we deploy 125 L4 policies and
another 125 L7 policies.

.. figure:: images/image_9_01.png

.. figure:: images/image_9_02.png

Similar to the creation of the previous 250 CNPs, there was also an
increase in CPU usage during the creation of the CCNPs.
The CPU usage was similar even though the policies were effectively
selecting more pods.

.. figure:: images/image_9_03.png

As all pods running in a node are selected by **all
250 CCNPs created**, we see an increase of the **Endpoint regeneration
time**, which **peaked a little above 3s.**

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
10. "Accidentally" delete 10000 pods
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In this step we have "accidentally" deleted 10000 random pods. Kubernetes
will then recreate 10000 new pods, which helps us understand what the
convergence time is for all the deployed network policies.

.. figure:: images/image_10_01.png

.. figure:: images/image_10_02.png

* The first mark represents the point in time when pods were "deleted" and
  the second mark represents the point in time when Kubernetes finished
  recreating 10k pods.
* Besides the CPU usage slightly increasing while pods were being
  scheduled in the cluster, we did see some interesting data points in the
  eBPF memory usage. As each endpoint can have one or more dedicated eBPF
  maps, the eBPF memory usage is directly proportional to the number of
  pods running on a node. **If the number of pods per node decreases, so
  does the eBPF memory usage.**

.. figure:: images/image_10_03.png

We inferred the time it took for all the endpoints to get regenerated by
looking at the number of Cilium endpoints with the policy enforced over
time. Luckily enough, we had another metric showing how many Cilium
endpoints had policy being enforced:

.. figure:: images/image_10_04.png

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
11. Control plane metrics over the test run
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The focus of this test was to study the Cilium agent resource consumption
at scale.
However, we also monitored some metrics of the control plane nodes, such
as etcd metrics and CPU usage of the Kubernetes controllers, and we
present them in the next figures.

.. figure:: images/image_11_01.png

   Memory consumption of the 3 etcd instances during the entire
   scalability testing.

.. figure:: images/image_11_02.png

   CPU usage for the 3 controller nodes, average latency per request type
   in the etcd cluster, as well as the number of operations per second
   made to etcd.

.. figure:: images/image_11_03.png

   All etcd metrics, from left to right, from top to bottom: database
   size, disk sync duration, client traffic in, client traffic out, peer
   traffic in, peer traffic out.

=============
Final Remarks
=============

These experiments helped us develop a better understanding of Cilium
running in a large cluster entirely in CRD mode and without depending on
etcd. There is still some work to be done to optimize the memory footprint
of eBPF maps even further, as well as to reduce the memory footprint of
the Cilium agent. We will address those in the next Cilium version.

We can also determine that it is scalable to run Cilium in CRD mode on a
cluster with more than 200 nodes. However, it is worth pointing out that
we need to run more tests to verify Cilium's behavior when it loses
connectivity with ``kube-apiserver``, as can happen during a control plane
upgrade, for example. This will also be our focus in the next Cilium
version.
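As a quick sanity check of the per-pod memory figure reported in step 7,
the arithmetic below multiplies the 10.5KiB-per-pod growth by the pods
added in that step; it assumes the growth is measured between the 250-pod
and 50000-pod states:

```python
# Back-of-the-envelope check of the "10.5 KiB per new pod" figure from
# step 7, assuming growth is measured between 250 and 50000 pods.
pods_added = 50_000 - 250
per_pod_kib = 10.5

growth_mib = pods_added * per_pod_kib / 1024
print(f"expected agent memory growth: ~{growth_mib:.0f} MiB")  # ~510 MiB
```

A growth of roughly 510 MiB is broadly consistent with the 573 MiB peak
observed for the busiest agent in that step.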
https://github.com/cilium/cilium/blob/main//Documentation/operations/performance/scalability/report.rst
.. _hubble_internals:

****************
Hubble internals
****************

.. note::

   This documentation section is targeted at developers who are interested
   in contributing to Hubble. For this purpose, it describes Hubble
   internals.

.. note::

   This documentation covers the Hubble server (sometimes referred to as
   "Hubble embedded") and Hubble Relay components but does not cover the
   Hubble UI and CLI.

Hubble builds on top of Cilium and eBPF to enable deep visibility into the
communication and behavior of services as well as the networking
infrastructure in a completely transparent manner. One of the design goals
of Hubble is to achieve all of this at large scale.

Hubble's server component is embedded into the Cilium agent in order to
achieve high performance with low overhead. The gRPC services offered by
the Hubble server may be consumed locally via a Unix domain socket or,
more typically, through Hubble Relay. Hubble Relay is a standalone
component which is aware of all Hubble instances and offers full cluster
visibility by connecting to their respective gRPC APIs. This capability is
usually referred to as multi-node. Hubble Relay's main goal is to offer a
rich API that can be safely exposed and consumed by the Hubble UI and CLI.

Hubble Architecture
===================

Hubble exposes gRPC services from the Cilium process that allow clients to
receive flows and other types of data.

Hubble server
-------------

The Hubble server component implements two gRPC services: the **Observer
service**, which may optionally be exposed via a TCP socket in addition to
a local Unix domain socket, and the **Peer service**, which is served on
both as well as being exposed as a Kubernetes Service when enabled via TCP.
The Observer service
^^^^^^^^^^^^^^^^^^^^

The Observer service is the principal service. It provides four RPC
endpoints: ``GetFlows``, ``GetNodes``, ``GetNamespaces`` and
``ServerStatus``.

* ``GetNodes`` returns a list of metrics and other information related to
  each Hubble instance.
* ``ServerStatus`` returns a summary of the information in ``GetNodes``.
* ``GetNamespaces`` returns a list of namespaces that had network flows
  within the last hour.
* ``GetFlows`` returns a stream of flow-related events.

Using ``GetFlows``, callers get a stream of payloads. Request parameters
allow callers to specify filters in the form of allow lists and deny lists
to provide fine-grained filtering of data. When multiple flow filters are
provided, only one of them has to match for a flow to be
included/excluded. When both allow and deny filters are specified, the
result will contain all flows matched by the allow list that are not also
simultaneously matched by the deny list.

In order to answer ``GetFlows`` requests, Hubble stores monitoring events
from Cilium's event monitor in a user-space ring buffer structure.
Monitoring events are obtained by registering a new listener on the Cilium
monitor. The ring buffer is capable of storing a configurable number of
events in memory. Events are continuously consumed, overwriting older ones
once the ring buffer is full.

Additionally, the Observer service also provides the ``GetAgentEvents``
and ``GetDebugEvents`` RPC endpoints to expose data about Cilium agent
events and Cilium datapath debug events, respectively. Both are similar to
``GetFlows``, except that they do not implement filtering capabilities.

.. image:: ./../images/hubble_getflows.png

For efficiency, the internal buffer length is a bit mask of ones + 1. The
most significant bit of this bit mask is in the same position as the most
significant bit of ``n``. In other terms, the internal buffer size is
always a power of 2, with 1 slot reserved for the writer. In
effect, from a user perspective, the ring buffer capacity is one less than
a power of 2.

As the ring buffer is a hot code path, it has been designed to not employ
any locking mechanisms and uses atomic operations instead. While this
approach has performance benefits, it also has the downside of being a
complex component. Due to its complex nature, the ring buffer is typically
accessed via a ring reader that abstracts the complexity of this data
structure for reading. The ring reader allows reading one event at a time
with 'previous' and 'next' methods, but also implements a follow mode
where events are continuously read as they are written to the ring buffer.

The Peer service
^^^^^^^^^^^^^^^^

The Peer service sends information about Hubble peers in the cluster in a
stream. When the ``Notify`` method is called, it reports information about
all the peers in the cluster and subsequently sends information about
peers that are updated, added, or removed from the cluster. Thus, it
allows the caller to keep track of all Hubble instances and query their
respective gRPC services. This service is exposed as a Kubernetes Service
and is primarily used by Hubble Relay in order to have a cluster-wide view
of all Hubble instances.

The Peer service obtains peer change notifications by subscribing to
Cilium's node manager. To this end, it internally defines a handler that
implements Cilium's datapath node handler interface.

.. _hubble_relay:

Hubble Relay
------------

Hubble Relay is the Hubble component that brings multi-node support.
It leverages the Peer service to obtain information about Hubble instances
and consumes their gRPC API in order to provide a richer API that covers
events from across the entire cluster (or even multiple clusters in a
ClusterMesh scenario). Hubble Relay was first introduced as a technology
preview with the release of Cilium v1.8 and was declared stable with the
release of Cilium v1.9.

Hubble Relay implements the Observer service for multi-node. To that end,
it maintains a persistent connection with every Hubble peer in a cluster
with a peer manager. This component provides callers with the list of
peers. Callers may report when a peer is unreachable, in which case the
peer manager will attempt to reconnect.

As Hubble Relay connects to every node in a cluster, the Hubble server
instances must make their API available (by default on port 4244). By
default, Hubble server endpoints are secured using mutual TLS (mTLS) when
exposed on a TCP port in order to limit access to Hubble Relay only.
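The ring buffer sizing rule described in the Observer service section
above (internal size rounded up to a power of two, one slot reserved for
the writer, usable capacity one less than a power of two) can be sketched
as follows; this is an illustrative Python model, not Hubble's actual Go
implementation:

```python
def ring_sizes(requested: int) -> tuple[int, int, int]:
    """Model of the sizing rule described above: round the internal size
    up to a power of two, keep one slot for the writer, and index slots
    with a bit mask of ones (illustrative, not Hubble's actual code)."""
    size = 1
    while size < requested + 1:  # +1 accounts for the writer's slot
        size *= 2
    mask = size - 1              # bit mask of ones used to wrap indices
    capacity = size - 1          # capacity visible to readers
    return size, mask, capacity

size, mask, capacity = ring_sizes(4096)
print(size, capacity)     # internal size 8192, usable capacity 8191
# A slot index wraps around cheaply with the mask instead of a modulo:
print((size + 5) & mask)  # index 8197 wraps to slot 5
```

This also shows why, from a user perspective, asking for exactly a power
of two yields the next power of two internally: one slot is always kept
for the writer.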
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _cilium_operator_internals:

Cilium Operator
===============

This document provides a technical overview of the Cilium Operator and
describes the cluster-wide operations it is responsible for.

Highly Available Cilium Operator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Cilium Operator uses the Kubernetes leader election library in conjunction
with lease locks to provide HA functionality. The capability is supported on
Kubernetes versions 1.14 and above, and has been Cilium's default behavior
since the 1.9 release.

The number of replicas for the HA deployment can be configured using the Helm
option ``operator.replicas``.

.. cilium-helm-install::
   :namespace: kube-system
   :set: operator.replicas=3

.. code-block:: shell-session

    $ kubectl get deployment cilium-operator -n kube-system
    NAME              READY   UP-TO-DATE   AVAILABLE   AGE
    cilium-operator   3/3     3            3           46s

The operator is an integral part of Cilium installations in Kubernetes
environments and is tasked with performing the following operations:

CRD Registration
~~~~~~~~~~~~~~~~

The default behavior of the Cilium Operator is to register the CRDs used by
Cilium. The following custom resources are registered by the Cilium Operator:

.. include:: ../crdlist.rst

IPAM
~~~~

The Cilium Operator is responsible for IP address management when running in
the following modes:

- :ref:`ipam_azure`
- :ref:`ipam_eni`
- :ref:`ipam_crd_cluster_pool`

When running in IPAM mode :ref:`k8s_hostscope`, the allocation CIDRs used by
``cilium-agent`` are derived from the fields ``podCIDR`` and ``podCIDRs``
populated by Kubernetes in the Kubernetes ``Node`` resource.

For the :ref:`concepts_ipam_crd` IPAM allocation mode, it is the job of the
cloud-specific operator to populate the required information about CIDRs in
the ``CiliumNode`` resource.
Cilium currently has native support for the following cloud providers in CRD
IPAM mode:

- Azure - ``cilium-operator-azure``
- AWS - ``cilium-operator-aws``

For more information on IPAM, visit :ref:`address_management`.

Load Balancer IP Address Management
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When :ref:`lb_ipam` is used, the Cilium Operator manages IP addresses for
``type: LoadBalancer`` services.

KVStore operations
~~~~~~~~~~~~~~~~~~

These operations are performed only when the KVStore is enabled for the Cilium
Operator. In addition, KVStore operations are only required when
``cilium-operator`` is running with any of the below options:

- ``--synchronize-k8s-services``
- ``--synchronize-k8s-nodes``
- ``--identity-allocation-mode=kvstore``

K8s Services synchronization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Cilium Operator performs the job of synchronizing Kubernetes services to
the external KVStore configured for the Cilium Operator when running with the
``--synchronize-k8s-services`` flag. The Cilium Operator performs this
operation only for shared services (services that have the
``service.cilium.io/shared`` annotation set to true). This is meaningful when
running Cilium to set up a ClusterMesh.

K8s Nodes synchronization
^^^^^^^^^^^^^^^^^^^^^^^^^

Similar to K8s services, the Cilium Operator also synchronizes Kubernetes
nodes information to the shared KVStore. When a ``Node`` object is deleted, it
is not possible to reliably clean up the corresponding ``CiliumNode`` object
from the Agent itself. The Cilium Operator holds the responsibility to garbage
collect orphaned ``CiliumNode`` objects.

Heartbeat update
^^^^^^^^^^^^^^^^

The Cilium Operator periodically updates Cilium's heartbeat path key with the
current time. The default key for this heartbeat is ``cilium/.heartbeat`` in
the KVStore. It is used by Cilium Agents to validate that KVStore updates can
be received.
Identity garbage collection
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Each workload in Kubernetes is assigned a security identity that is used for
policy decision making. This identity is based on common workload markers like
labels. Cilium supports two identity allocation mechanisms:

- CRD Identity allocation
- KVStore Identity allocation

Both mechanisms of identity allocation require the Cilium Operator to perform
the garbage collection of stale identities. This garbage collection is
necessary because a 16-bit unsigned integer represents the security identity,
and thus we can only have a maximum of 65536 identities in the cluster.

CRD Identity garbage collection
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

CRD identity allocation uses the Kubernetes custom resource ``CiliumIdentity``
to represent a security identity. This is the
default behavior of Cilium and works out of the box in any K8s environment
without any external dependency.

The Cilium Operator maintains a local cache of CiliumIdentities along with the
last time each was seen active. A controller runs periodically in the
background, scans this local cache, and deletes identities that have not had
their heartbeat life sign updated since ``identity-heartbeat-timeout``. One
thing to note here is that an identity is always assumed to be live if it has
an endpoint associated with it.

KVStore Identity garbage collection
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

While the CRD allocation mode for identities is more common, it is limited in
terms of scale. When running in a very large environment, a better choice is
to use the KVStore allocation mode. This mode stores the identities in an
external store like etcd. For more information on Cilium's scalability, visit
:ref:`scalability_guide`.

The garbage collection mechanism involves scanning the KVStore for all
identities. For each identity, the Cilium Operator searches the KVStore for
any active users of that identity. The entry is deleted from the KVStore if
there are no active users.

CiliumEndpoint garbage collection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A CiliumEndpoint object is created by the ``cilium-agent`` for each ``Pod`` in
the cluster. The Cilium Operator manages a controller to handle the garbage
collection of orphaned ``CiliumEndpoint`` objects. An orphaned
``CiliumEndpoint`` object means that the owner of the endpoint object is no
longer active in the cluster.
CiliumEndpoints are also considered orphaned if the owner is an existing Pod
in the ``PodFailed`` or ``PodSucceeded`` state. This controller runs
periodically if the ``endpoint-gc-interval`` option is specified, and only
once during startup if the option is unspecified.

Derivative network policy creation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When using cloud-provider-specific constructs like ``toGroups`` in the network
policy spec, the Cilium Operator performs the job of converting these
constructs to derivative CNP/CCNP objects without these fields.

For more information, see how Cilium network policies incorporate the use of
``toGroups`` to :ref:`lock down external access using AWS security groups`.

Ingress and Gateway API Support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When Ingress or Gateway API support is enabled, the Cilium Operator performs
the task of parsing Ingress or Gateway API objects and converting them into
``CiliumEnvoyConfig`` objects used for configuring the per-node Envoy proxy.
Additionally, Secrets used by Ingress or Gateway API objects are synced to a
Cilium-managed namespace that the Cilium Agent is then granted access to. This
reduces the permissions required of the Cilium Agent.

Mutual Authentication Support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When Cilium's Mutual Authentication Support is enabled, the Cilium Operator is
responsible for ensuring that each Cilium Identity has an associated identity
in the certificate management system. It will create and delete identity
registrations in the configured certificate management system as required. The
Cilium Operator does not, however, have any access to the key material in the
identities. That information is only shared with the Cilium Agent via other
channels.
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _security_identities:

*******************
Security Identities
*******************

Security identities are generated from labels. They are stored as ``uint32``,
which means the maximum limit for a security identity is ``2^32 - 1``. The
minimum security identity is ``1``.

.. note::

    Identity 0 is not a valid value. If it shows up in Hubble output, this
    means the identity was not found. In the eBPF datapath, it has a special
    role where it denotes "any identity", i.e. it acts as a wildcard allow in
    policy maps.

Security identities span several ranges, depending on the context:

1) Cluster-local
2) ClusterMesh
3) Identities generated from CIDR-based policies
4) Identities generated for remote nodes (optional)

Cluster-local
~~~~~~~~~~~~~

.. _local_scoped_identity:

Cluster-local identities (1) range from ``1`` to ``2^16 - 1``. The lowest
values, from ``1`` to ``255``, correspond to the reserved identity range. See
the internal code documentation for details.

ClusterMesh
~~~~~~~~~~~

.. _clustermesh_identity:

For ClusterMesh (2), 8 bits are used as the ``cluster-id`` which identifies
the cluster in the ClusterMesh, placed into the 3rd octet as shown by
``0x00FF0000``. The 4th octet (uppermost bits) must be set to ``0`` as well.
Neither of these constraints applies to CIDR identities, however; see (3).

CIDR-based identity
~~~~~~~~~~~~~~~~~~~

.. _cidr_based_identity:

CIDR identities (3) are local to each node. CIDR identities begin from ``1``
and end at ``16777215``; however, since they are shifted by ``24`` bits, their
effective range is ``1 | (1 << 24)`` to ``16777215 | (1 << 24)``, or from
``16777217`` to ``33554431``. When CIDR policies are applied, the identity
generated is local to each node.
In other words, the identity may not be the same for the same CIDR policy
across two nodes.

Node-local identity
~~~~~~~~~~~~~~~~~~~

.. _remote_node_scoped_identity:

Remote-node identities (4) are also local to each node. Functionally, they
work much the same as CIDR identities: they are local to each node,
potentially differing across nodes in the cluster. They are used when the
option ``policy-cidr-match-mode`` includes ``nodes`` or when
``enable-node-selector-labels`` is set to ``true``.

Node-local identities (CIDR or remote-node) are never used for traffic between
Cilium-managed nodes, so they do not need to fit inside a VXLAN or Geneve
virtual network field. Non-CIDR identities are limited to 24 bits so that they
will fit in these fields on the wire, but since CIDR identities will not be
encoded in these packets, they can start at a higher value. Hence, the minimum
value for a CIDR identity is ``2^24 + 1``.

Overall, the following represents the different ranges:

::

    0x00000001 - 0x000000FF (1           to 2^8 - 1         ) => reserved identities
    0x00000100 - 0x0000FFFF (2^8         to 2^16 - 1        ) => cluster-local identities
    0x00010000 - 0x00FFFFFF (2^16        to 2^24 - 1        ) => identities for remote clusters
    0x01000000 - 0x01FFFFFF (2^24        to 2^25 - 1        ) => identities for CIDRs (node-local)
    0x02000000 - 0x02FFFFFF (2^25        to 2^25 + 2^24 - 1 ) => identities for remote nodes (local)
    0x03000000 - 0xFFFFFFFF (2^25 + 2^24 to 2^32 - 1        ) => reserved for future use
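The ranges above can be expressed as a small lookup. The ``classify`` function
below is a hypothetical helper written for this document, not part of Cilium's
API; it simply maps a numeric identity to the range it falls in.

```go
package main

import "fmt"

// classify returns the scope of a numeric security identity based on
// the documented ranges.
func classify(id uint32) string {
	switch {
	case id == 0:
		return "invalid"
	case id <= 0x000000FF:
		return "reserved"
	case id <= 0x0000FFFF:
		return "cluster-local"
	case id <= 0x00FFFFFF:
		return "remote cluster (ClusterMesh)"
	case id <= 0x01FFFFFF:
		return "CIDR (node-local)"
	case id <= 0x02FFFFFF:
		return "remote node (node-local)"
	default:
		return "reserved for future use"
	}
}

func main() {
	for _, id := range []uint32{1, 4321, 1<<24 + 1, 1<<25 + 1} {
		fmt.Printf("0x%08X => %s\n", id, classify(id))
	}
}
```

For instance, identity ``16777217`` (``2^24 + 1``) falls in the node-local
CIDR range, matching the minimum CIDR identity derived above.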
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _k8s_install_quick:
.. _k8s_quick_install:
.. _k8s_install_standard:

*************************
Cilium Quick Installation
*************************

This guide will walk you through the quick default installation. It will
automatically detect and use the best configuration possible for the
Kubernetes distribution you are using. All state is stored using Kubernetes
custom resource definitions (CRDs).

This is the best installation method for most use cases. For large
environments (> 500 nodes) or if you want to run specific datapath modes,
refer to the :ref:`getting_started` guide.

Should you encounter any issues during the installation, please refer to the
:ref:`troubleshooting_k8s` section and/or seek help on `Cilium Slack`_.

.. _create_cluster:

Create the Cluster
==================

If you don't have a Kubernetes Cluster yet, you can use the instructions below
to create a Kubernetes cluster locally or using a managed Kubernetes service:

.. tabs::

    .. group-tab:: GKE

        The following commands create a Kubernetes cluster using `Google
        Kubernetes Engine`_. See `Installing Google Cloud SDK`_ for
        instructions on how to install ``gcloud`` and prepare your account.

        .. code-block:: bash

            export NAME="$(whoami)-$RANDOM"
            # Create the node pool with the following taint to guarantee that
            # Pods are only scheduled/executed in the node when Cilium is ready.
            # Alternatively, see the note below.
            gcloud container clusters create "${NAME}" \
              --node-taints node.cilium.io/agent-not-ready=true:NoExecute \
              --zone us-west2-a
            gcloud container clusters get-credentials "${NAME}" --zone us-west2-a

        .. note::

            Please make sure to read and understand the documentation page on
            :ref:`taint effects and unmanaged pods`.
    .. group-tab:: AKS

        The following commands create a Kubernetes cluster using `Azure
        Kubernetes Service`_ with no CNI plugin pre-installed (BYOCNI). See
        `Azure Cloud CLI`_ for instructions on how to install ``az`` and
        prepare your account, and the `Bring your own CNI documentation`_ for
        more details about BYOCNI prerequisites / implications.

        .. code-block:: bash

            export NAME="$(whoami)-$RANDOM"
            export AZURE_RESOURCE_GROUP="${NAME}-group"
            az group create --name "${AZURE_RESOURCE_GROUP}" -l westus2

            # Create AKS cluster
            az aks create \
              --resource-group "${AZURE_RESOURCE_GROUP}" \
              --name "${NAME}" \
              --network-plugin none \
              --generate-ssh-keys

            # Get the credentials to access the cluster with kubectl
            az aks get-credentials --resource-group "${AZURE_RESOURCE_GROUP}" --name "${NAME}"

    .. group-tab:: EKS

        The following commands create a Kubernetes cluster with ``eksctl``
        using `Amazon Elastic Kubernetes Service`_. See `eksctl
        Installation`_ for instructions on how to install ``eksctl`` and
        prepare your account.

        .. code-block:: none

            export NAME="$(whoami)-$RANDOM"
            cat <<EOF >eks-config.yaml
            apiVersion: eksctl.io/v1alpha5
            kind: ClusterConfig

            metadata:
              name: ${NAME}
              region: eu-west-1

            managedNodeGroups:
            - name: ng-1
              desiredCapacity: 2
              privateNetworking: true
              # taint nodes so that application pods are
              # not scheduled/executed until Cilium is deployed.
              # Alternatively, see the note below.
              taints:
              - key: "node.cilium.io/agent-not-ready"
                value: "true"
                effect: "NoExecute"
            EOF
            eksctl create cluster -f ./eks-config.yaml

        .. note::

            Please make sure to read and understand the documentation page on
            :ref:`taint effects and unmanaged pods`.

    .. group-tab:: kind

        Install ``kind`` >= v0.7.0 per kind documentation:
        `Installation and Usage`_

        .. parsed-literal::

            curl -LO \ |SCM_WEB|\/Documentation/installation/kind-config.yaml
            kind create cluster --config=kind-config.yaml

        .. note::

            Cilium may fail to deploy due to too many open files in one or
            more of the agent pods.
            If you notice this error, you can increase the ``inotify``
            resource limits on your host machine (see `Pod errors due to
            "too many open files"`__).

    .. group-tab:: minikube

        Install minikube ≥ v1.28.0 as per minikube documentation:
        `Install Minikube`_.

        The following command will bring up a single node minikube cluster
        prepared for installing Cilium:

        .. code-block:: shell-session

            minikube start --cni=cilium

        .. note::

            - This may not install the latest version of Cilium.
            - It might be necessary to add ``--host-dns-resolver=false`` if using
              the Virtualbox provider, otherwise DNS resolution may not work
              after Cilium installation.

    .. group-tab:: Kubespray

        `Kubespray`_ requires Python ≥ 3.10 for recent versions. For
        environment setup and dependencies installation, see the `Kubespray
        Ansible documentation`_.

        **Configure cluster:**

        .. code-block:: bash

            # Enter the already cloned kubespray directory
            cd kubespray/

            # Copy sample inventory
            cp -rfp inventory/sample inventory/mycluster

            # Configure your inventory
            vi inventory/mycluster/inventory.ini

            # Configure Kubernetes networking:
            # Use CNI without any network plugin
            sed -e 's/^kube_network_plugin:.*$/kube_network_plugin: cni/' \
                -e 's/^kube_owner:.*$/kube_owner: root/' \
                inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml > k8s-cluster.tmp
            mv k8s-cluster.tmp inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

        Setting ``kube_network_plugin: cni`` ensures the cluster deploys
        without any network plugin, allowing Cilium to be installed separately
        afterward.

        **Deploy cluster:**

        .. code-block:: bash

            ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml -b -v \
                --private-key=~/.ssh/private_key

        (Adjust the path to your private SSH key.)

        .. note::

            For more detailed configuration options, refer to the `Kubespray
            documentation`_.

    .. group-tab:: Rancher Desktop

        Install Rancher Desktop >= v1.1.0 as per Rancher Desktop
        documentation: `Install Rancher Desktop`_.

        Next you need to configure Rancher Desktop to disable the built-in
        CNI so you can install Cilium.

        .. include:: ../installation/rancher-desktop-configure.rst

    .. group-tab:: Alibaba ACK

        .. include:: ../beta.rst
        .. note::

            The AlibabaCloud ENI integration with Cilium is subject to the
            following limitations:

            - It is currently only enabled for IPv4.
            - It only works with instances supporting ENI. Refer to
              `Instance families`_ for details.

        Set up a Kubernetes cluster on AlibabaCloud. You can use any method
        you prefer. The quickest way is to create an ACK (Alibaba Cloud
        Container Service for Kubernetes) cluster and to replace the CNI
        plugin with Cilium. For more details on how to set up an ACK cluster,
        please follow the `official documentation`_.

.. _install_cilium_cli:

Install the Cilium CLI
======================

.. include:: ../installation/cli-download.rst

.. admonition:: Video
   :class: attention

   To learn more about the Cilium CLI, check out `eCHO episode 8: Exploring
   the Cilium CLI`__.

Install Cilium
==============

You can install Cilium on any Kubernetes cluster. Pick one of the options
below:

.. tabs::

    .. group-tab:: Generic

        These are the generic instructions on how to install Cilium into any
        Kubernetes cluster. The installer will attempt to automatically pick
        the best configuration options for you. Please see the other tabs for
        distribution/platform specific instructions which also list the ideal
        default configuration for particular platforms.

        .. include:: ../installation/requirements-generic.rst

        **Install Cilium**

        Install Cilium into the Kubernetes cluster pointed to by your current
        kubectl context:

        .. parsed-literal::

            cilium install |CHART_VERSION|

    .. group-tab:: GKE

        .. include:: ../installation/requirements-gke.rst

        **Install Cilium:**

        Install Cilium into the GKE cluster:

        .. parsed-literal::

            cilium install |CHART_VERSION|

    .. group-tab:: AKS

        .. include:: ../installation/requirements-aks.rst

        **Install Cilium:**

        Install Cilium into the AKS cluster:

        .. parsed-literal::

            cilium install |CHART_VERSION| --set azure.resourceGroup="${AZURE_RESOURCE_GROUP}"

    .. group-tab:: EKS
        .. include:: ../installation/requirements-eks.rst

        **Install Cilium:**

        Install Cilium into the EKS cluster:

        .. parsed-literal::

            cilium install |CHART_VERSION|
            cilium status --wait

        .. note::

            If you have to uninstall Cilium and later install it again, that
            could cause connectivity issues due to the ``aws-node`` DaemonSet
            flushing Linux routing tables. The issues can be fixed by
            restarting all pods. Alternatively, to avoid such issues, you can
            delete the ``aws-node`` DaemonSet prior to installing Cilium.

    .. group-tab:: OpenShift

        .. include:: ../installation/requirements-openshift.rst

        **Install Cilium:**

        Cilium is a `Certified OpenShift CNI Plugin`_ and is best installed
        when an OpenShift cluster is created using the OpenShift installer.
        Please refer to :ref:`k8s_install_openshift_okd` for more
        information.

    .. group-tab:: RKE
        .. include:: ../installation/requirements-rke.rst

        **Install Cilium:**

        Install Cilium into your newly created RKE cluster:

        .. parsed-literal::

            cilium install |CHART_VERSION|

    .. group-tab:: k3s

        .. include:: ../installation/requirements-k3s.rst

        **Install Cilium:**

        Install Cilium into your newly created Kubernetes cluster:

        .. parsed-literal::

            cilium install |CHART_VERSION|

    .. group-tab:: Alibaba ACK

        You can install Cilium using Helm on Alibaba ACK; refer to
        :ref:`k8s_install_helm` for details.

If the installation fails for some reason, run ``cilium status`` to retrieve
the overall status of the Cilium deployment and inspect the logs of whatever
pods are failing to be deployed.

.. tip::

    You may be seeing ``cilium install`` print something like this:

    .. code-block:: shell-session

        ♻️  Restarted unmanaged pod kube-system/event-exporter-gke-564fb97f9-rv8hg
        ♻️  Restarted unmanaged pod kube-system/kube-dns-6465f78586-hlcrz
        ♻️  Restarted unmanaged pod kube-system/kube-dns-autoscaler-7f89fb6b79-fsmsg
        ♻️  Restarted unmanaged pod kube-system/l7-default-backend-7fd66b8b88-qqhh5
        ♻️  Restarted unmanaged pod kube-system/metrics-server-v0.3.6-7b5cdbcbb8-kjl65
        ♻️  Restarted unmanaged pod kube-system/stackdriver-metadata-agent-cluster-level-6cc964cddf-8n2rt

    This indicates that your cluster was already running some pods before
    Cilium was deployed and the installer has automatically restarted them to
    ensure all pods get networking provided by Cilium.

Validate the Installation
=========================

.. include:: ../installation/cli-status.rst
.. include:: ../installation/cli-connectivity-test.rst

.. include:: ../installation/next-steps.rst
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _starwars_demo:

#######################################
Getting Started with the Star Wars Demo
#######################################

.. include:: /security/gsg_sw_demo.rst

Check Current Access
====================

From the perspective of the *deathstar* service, only the ships with label
``org=empire`` are allowed to connect and request landing. Since we have no
rules enforced, both *xwing* and *tiefighter* will be able to request landing.
To test this, use the commands below.

.. code-block:: shell-session

    $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed
    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

Apply an L3/L4 Policy
=====================

When using Cilium, endpoint IP addresses are irrelevant when defining security
policies. Instead, you can use the labels assigned to the pods to define
security policies. The policies will be applied to the right pods based on the
labels, irrespective of where or when they are running within the cluster.

We'll start with a basic policy restricting deathstar landing requests to only
the ships that have the label ``org=empire``. This will not allow any ships
that don't have the ``org=empire`` label to even connect with the *deathstar*
service. This is a simple policy that filters only on IP protocol (network
layer 3) and TCP protocol (network layer 4), so it is often referred to as an
L3/L4 network security policy.

Note: Cilium performs stateful *connection tracking*, meaning that if policy
allows the frontend to reach the backend, it will automatically allow all
required reply packets that are part of the backend replying to the frontend
within the context of the same TCP/UDP connection.
**L4 Policy with Cilium and Kubernetes**

.. image:: images/cilium_http_l3_l4_gsg.png
   :scale: 30 %

We can achieve that with the following CiliumNetworkPolicy:

.. literalinclude:: ../../examples/minikube/sw_l3_l4_policy.yaml
   :language: yaml

CiliumNetworkPolicies match on pod labels using an ``endpointSelector`` to
identify the sources and destinations to which the policy applies. The above
policy whitelists traffic sent from any pods with the label ``org=empire`` to
*deathstar* pods with the labels ``org=empire, class=deathstar`` on TCP
port 80.

To apply this L3/L4 policy, run:

.. parsed-literal::

    $ kubectl create -f \ |SCM_WEB|\/examples/minikube/sw_l3_l4_policy.yaml
    ciliumnetworkpolicy.cilium.io/rule1 created

Now if we run the landing requests again, only the *tiefighter* pods with the
label ``org=empire`` will succeed. The *xwing* pods will be blocked!

.. code-block:: shell-session

    $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
    Ship landed

This works as expected. Now the same request run from an *xwing* pod will
fail:

.. code-block:: shell-session

    $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

This request will hang, so press Control-C to kill the curl request, or wait
for it to time out.

Inspecting the Policy
=====================

If we run ``cilium-dbg endpoint list`` again, we will see that the pods with
the labels ``org=empire`` and ``class=deathstar`` now have ingress policy
enforcement enabled as per the policy above.
.. code-block:: shell-session

    $ kubectl -n kube-system exec cilium-1c2cz -- cilium-dbg endpoint list
    ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                       IPv6   IPv4         STATUS
               ENFORCEMENT        ENFORCEMENT
    232        Enabled            Disabled          16530      k8s:class=deathstar                                      10.0.0.147   ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    726        Disabled           Disabled          1          reserved:host                                                         ready
    883        Disabled           Disabled          4          reserved:health                                          10.0.0.244   ready
    1634       Disabled           Disabled          51373      k8s:io.cilium.k8s.policy.cluster=default                 10.0.0.118   ready
                                                               k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                               k8s:io.kubernetes.pod.namespace=kube-system
                                                               k8s:k8s-app=kube-dns
    1673       Disabled           Disabled          31028      k8s:class=tiefighter                                     10.0.0.112   ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    2811       Disabled           Disabled          51373      k8s:io.cilium.k8s.policy.cluster=default                 10.0.0.47    ready
                                                               k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                               k8s:io.kubernetes.pod.namespace=kube-system
                                                               k8s:k8s-app=kube-dns
    2843       Enabled            Disabled          16530      k8s:class=deathstar                                      10.0.0.89    ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=empire
    3184       Disabled           Disabled          22654      k8s:class=xwing                                          10.0.0.30    ready
                                                               k8s:io.cilium.k8s.policy.cluster=default
                                                               k8s:io.cilium.k8s.policy.serviceaccount=default
                                                               k8s:io.kubernetes.pod.namespace=default
                                                               k8s:org=alliance

You can also inspect the policy details via ``kubectl``:

.. code-block:: shell-session

    $ kubectl get cnp
    NAME    AGE
    rule1   2m
    $ kubectl describe cnp rule1
    Name:         rule1
    Namespace:    default
    Labels:
    Annotations:
    API Version:  cilium.io/v2
    Description:  L3-L4 policy to restrict deathstar access to empire ships only
    Kind:         CiliumNetworkPolicy
    Metadata:
      Creation Timestamp:  2020-06-15T14:06:48Z
      Generation:          1
      Managed Fields:
        API Version:  cilium.io/v2
        Fields Type:  FieldsV1
        fieldsV1:
          f:description:
          f:spec:
            .:
            f:endpointSelector:
              .:
              f:matchLabels:
                .:
                f:class:
                f:org:
            f:ingress:
        Manager:         kubectl
        Operation:       Update
        Time:            2020-06-15T14:06:48Z
      Resource Version:  2914
      Self Link:         /apis/cilium.io/v2/namespaces/default/ciliumnetworkpolicies/rule1
      UID:               eb3a688b-b3aa-495c-b20a-d4f79e7c088d
    Spec:
      Endpoint Selector:
        Match Labels:
          Class:  deathstar
          Org:    empire
      Ingress:
        From Endpoints:
          Match Labels:
            Org:  empire
        To Ports:
          Ports:
            Port:      80
            Protocol:  TCP
    Events:

Apply and Test HTTP-aware L7 Policy
===================================

In the simple scenario above, it was sufficient to either give *tiefighter* /
*xwing* full access to *deathstar's* API or no access at all. But to provide
the strongest security (i.e., enforce least-privilege isolation) between
microservices, each service that calls *deathstar's* API should be limited to
making only the set of HTTP requests it requires for legitimate operation.
For example, consider that the \*deathstar\* service exposes some maintenance APIs which should not be called by random empire ships. To see this run: .. code-block:: shell-session $ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port Panic: deathstar exploded goroutine 1 [running]: main.HandleGarbage(0x2080c3f50, 0x2, 0x4, 0x425c0, 0x5, 0xa) /code/src/github.com/empire/deathstar/ temp/main.go:9 +0x64 main.main() /code/src/github.com/empire/deathstar/ temp/main.go:5 +0x85 While this is an illustrative example, unauthorized access such as above can have adverse security repercussions. \*\*L7 Policy with Cilium and Kubernetes\*\* .. image:: images/cilium\_http\_l3\_l4\_l7\_gsg.png :scale: 30 % Cilium is capable of enforcing HTTP-layer (i.e., L7) policies to limit what URLs the \*tiefighter\* is allowed to reach. Here is an example policy file that extends our original policy by limiting \*tiefighter\* to making only a POST /v1/request-landing API call, but disallowing all other calls (including PUT /v1/exhaust-port). .. literalinclude:: ../../examples/minikube/sw\_l3\_l4\_l7\_policy.yaml :language: yaml Update the existing rule to apply L7-aware policy to protect \*deathstar\* using: .. parsed-literal:: $ kubectl apply -f \ |SCM\_WEB|\/examples/minikube/sw\_l3\_l4\_l7\_policy.yaml ciliumnetworkpolicy.cilium.io/rule1 configured We can now re-run the same test as above, but we will see a different outcome: .. code-block:: shell-session $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing Ship landed and .. code-block:: shell-session $ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port Access denied As this rule builds on the identity-aware rule, traffic from pods without the label ``org=empire`` will continue to be dropped causing the connection to time out: .. 
code-block:: shell-session $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing As you can see, with Cilium L7 security policies, we are able to permit \*tiefighter\* to access only the required API resources on \*deathstar\*, thereby implementing a "least privilege" security approach for communication between microservices. Note that ``path`` matches the exact url, if for example you want to allow anything under /v1/, you need to use a regular expression: .. code-block:: yaml path: "/v1/.\*" You can observe the L7 policy via ``kubectl``: .. code-block:: shell-session $ kubectl describe ciliumnetworkpolicies Name: rule1 Namespace: default Labels: Annotations: API Version: cilium.io/v2 Description: L7 policy to restrict access to specific HTTP call Kind: CiliumNetworkPolicy Metadata: Creation Timestamp: 2020-06-15T14:06:48Z Generation: 2 Managed Fields: API Version: cilium.io/v2 Fields Type: FieldsV1 fieldsV1: f:description: f:metadata: f:annotations: .: f:kubectl.kubernetes.io/last-applied-configuration: f:spec: .: f:endpointSelector: .: f:matchLabels: .: f:class: f:org: f:ingress: Manager: kubectl Operation: Update Time: 2020-06-15T14:10:46Z Resource Version: 3445 Self Link: /apis/cilium.io/v2/namespaces/default/ciliumnetworkpolicies/rule1 UID: eb3a688b-b3aa-495c-b20a-d4f79e7c088d Spec: Endpoint Selector: Match Labels: Class: deathstar Org: empire Ingress: From
https://github.com/cilium/cilium/blob/main//Documentation/gettingstarted/demo.rst
Endpoints: Match Labels: Org: empire To Ports: Ports: Port: 80 Protocol: TCP Rules: Http: Method: POST Path: /v1/request-landing Events: and ``cilium-dbg`` CLI: .. code-block:: shell-session $ kubectl -n kube-system exec cilium-qh5l2 -- cilium-dbg policy get [ { "endpointSelector": { "matchLabels": { "any:class": "deathstar", "any:org": "empire", "k8s:io.kubernetes.pod.namespace": "default" } }, "ingress": [ { "fromEndpoints": [ { "matchLabels": { "any:org": "empire", "k8s:io.kubernetes.pod.namespace": "default" } } ], "toPorts": [ { "ports": [ { "port": "80", "protocol": "TCP" } ], "rules": { "http": [ { "path": "/v1/request-landing", "method": "POST" } ] } } ] } ], "labels": [ { "key": "io.cilium.k8s.policy.derived-from", "value": "CiliumNetworkPolicy", "source": "k8s" }, { "key": "io.cilium.k8s.policy.name", "value": "rule1", "source": "k8s" }, { "key": "io.cilium.k8s.policy.namespace", "value": "default", "source": "k8s" }, { "key": "io.cilium.k8s.policy.uid", "value": "eb3a688b-b3aa-495c-b20a-d4f79e7c088d", "source": "k8s" } ] } ] Revision: 11 It is also possible to monitor the HTTP requests live by using ``cilium-dbg monitor``: .. 
code-block:: shell-session $ kubectl exec -it -n kube-system cilium-kzgdx -- cilium-dbg monitor -v --type l7 <- Response http to 0 ([k8s:class=tiefighter k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=empire]) from 2756 ([k8s:io.cilium.k8s.policy.cluster=default k8s:class=deathstar k8s:org=empire k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 8876->43854, verdict Forwarded POST http://deathstar.default.svc.cluster.local/v1/request-landing => 200 <- Request http from 0 ([k8s:class=tiefighter k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=default k8s:org=empire]) to 2756 ([k8s:io.cilium.k8s.policy.cluster=default k8s:class=deathstar k8s:org=empire k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default]), identity 8876->43854, verdict Denied PUT http://deathstar.default.svc.cluster.local/v1/request-landing => 403 The above output demonstrates a successful response to a POST request followed by a PUT request that is denied by the L7 policy. We hope you enjoyed the tutorial. Feel free to play more with the setup, read the rest of the documentation, and reach out to us on the `Cilium Slack`\_ with any questions! Clean-up ======== .. parsed-literal:: $ kubectl delete -f \ |SCM\_WEB|\/examples/minikube/http-sw-app.yaml $ kubectl delete cnp rule1 .. include:: ../installation/next-steps.rst
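The regular-expression form of the ``path`` match noted earlier can be written out as a full rule. This is a sketch only: the policy name and labels mirror the demo above, and the ``/v1/.\*`` pattern is the illustrative part (it would permit POST requests to anything under ``/v1/``, including the maintenance APIs):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L7 policy allowing empire ships to POST to any /v1/ path"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        # Regular expression instead of an exact path match.
        - method: "POST"
          path: "/v1/.*"
```

Applying a rule like this with ``kubectl apply`` replaces the exact-path rule, so it trades precision for convenience; prefer exact paths where the API surface is known.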
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io \*\*\*\*\*\*\*\*\*\*\* Terminology \*\*\*\*\*\*\*\*\*\*\* .. \_label: .. \_labels: Labels ====== Labels are a generic, flexible and highly scalable way of addressing a large set of resources as they allow for arbitrary grouping and creation of sets. Whenever something needs to be described, addressed or selected, it is done based on labels: - `Endpoints` are assigned labels as derived from the container runtime, orchestration system, or other sources. - `Network policies` select pairs of `endpoints` which are allowed to communicate based on labels. The policies themselves are identified by labels as well. What is a Label? ---------------- A label is a pair of strings consisting of a ``key`` and ``value``. A label can be formatted as a single string with the format ``key=value``. The key portion is mandatory and must be unique. This is typically achieved by using the reverse domain name notation, e.g. ``io.cilium.mykey=myvalue``. The value portion is optional and can be omitted, e.g. ``io.cilium.mykey``. Key names should typically consist of the character set ``[a-z0-9-.]``. When using labels to select resources, both the key and the value must match, e.g. if a policy should apply to all endpoints with the label ``my.corp.foo``, then the label ``my.corp.foo=bar`` will not match the selector. Label Source ------------ A label can be derived from various sources. For example, an `endpoint`\_ will derive the labels associated with the container by the local container runtime as well as the labels associated with the pod as provided by Kubernetes. As these two label namespaces are not aware of each other, this may result in conflicting label keys. To resolve this potential conflict, Cilium prefixes all label keys with ``source:`` to indicate the source of the label when importing labels, e.g. 
``k8s:role=frontend``, ``container:user=joe``, ``k8s:role=backend``. This means that when you run a Docker container using ``docker run [...] -l foo=bar``, the label ``container:foo=bar`` will appear on the Cilium endpoint representing the container. Similarly, a Kubernetes pod started with the label ``foo: bar`` will be represented with a Cilium endpoint associated with the label ``k8s:foo=bar``. A unique name is allocated for each potential source. The following label sources are currently supported: - ``container:`` for labels derived from the local container runtime - ``k8s:`` for labels derived from Kubernetes - ``reserved:`` for special reserved labels, see :ref:`reserved\_labels`. - ``unspec:`` for labels with unspecified source When using labels to identify other resources, the source can be included to limit matching of labels to a particular type. If no source is provided, the label source defaults to ``any:`` which will match all labels regardless of their source. If a source is provided, the source of the selecting and matching labels need to match. .. \_endpoint: .. \_endpoints: Endpoint ========= Cilium makes application containers available on the network by assigning them IP addresses. Multiple application containers can share the same IP address; a typical example for this model is a Kubernetes :term:`Pod`. All application containers which share a common address are grouped together in what Cilium refers to as an endpoint. Allocating individual IP addresses enables the use of the entire Layer 4 port range by each endpoint. This essentially allows multiple application containers running on the same cluster node to all bind to well known ports such as ``80`` without causing any conflicts. The default behavior of Cilium is to assign both an IPv6 and IPv4 address to every endpoint. However, this behavior can be configured to only allocate an IPv6 address with the ``--enable-ipv4=false`` option. If both
https://github.com/cilium/cilium/blob/main//Documentation/gettingstarted/terminology.rst
an IPv6 and IPv4 address are assigned, either address can be used to reach the endpoint. The same behavior will apply with regard to policy rules, load-balancing, etc. See :ref:`address\_management` for more details. Identification -------------- For identification purposes, Cilium assigns an internal endpoint id to all endpoints on a cluster node. The endpoint id is unique within the context of an individual cluster node. .. \_endpoint id: Endpoint Metadata ----------------- An endpoint automatically derives metadata from the application containers associated with the endpoint. The metadata can then be used to identify the endpoint for security/policy, load-balancing and routing purposes. The source of the metadata will depend on the orchestration system and container runtime in use. The following metadata retrieval mechanisms are currently supported: +---------------------+---------------------------------------------------+ | System | Description | +=====================+===================================================+ | Kubernetes | Pod labels (via k8s API) | +---------------------+---------------------------------------------------+ | containerd (Docker) | Container labels (via Docker API) | +---------------------+---------------------------------------------------+ Metadata is attached to endpoints in the form of `labels`. The following example launches a container with the label ``app=benchmark`` which is then associated with the endpoint. The label is prefixed with ``container:`` to indicate that the label was derived from the container runtime. .. 
code-block:: shell-session $ docker run --net cilium -d -l app=benchmark tgraf/netperf aaff7190f47d071325e7af06577f672beff64ccc91d2b53c42262635c063cf1c $ cilium-dbg endpoint list ENDPOINT POLICY IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT 62006 Disabled 257 container:app=benchmark f00d::a00:20f:0:f236 10.15.116.202 ready An endpoint can have metadata associated from multiple sources. A typical example is a Kubernetes cluster which uses containerd as the container runtime. Endpoints will derive Kubernetes pod labels (prefixed with the ``k8s:`` source prefix) and containerd labels (prefixed with ``container:`` source prefix). .. \_identity: Identity ======== All `endpoints` are assigned an identity. The identity is what is used to enforce basic connectivity between endpoints. In traditional networking terminology, this would be equivalent to Layer 3 enforcement. An identity is identified by `labels` and is given a cluster wide unique identifier. The endpoint is assigned the identity which matches the endpoint's `security relevant labels`, i.e. all endpoints which share the same set of `security relevant labels` will share the same identity. This concept allows to scale policy enforcement to a massive number of endpoints as many individual endpoints will typically share the same set of security `labels` as applications are scaled. What is an Identity? -------------------- The identity of an endpoint is derived based on the `labels` associated with the pod or container which are derived to the `endpoint`\_. When a pod or container is started, Cilium will create an `endpoint`\_ based on the event received by the container runtime to represent the pod or container on the network. As a next step, Cilium will resolve the identity of the `endpoint`\_ created. Whenever the `labels` of the pod or container change, the identity is reconfirmed and automatically modified as required. .. 
\_security relevant labels: Security Relevant Labels ------------------------ Not all `labels` associated with a container or pod are meaningful when deriving the `identity`. Labels may be used to store metadata such as the timestamp when a container was launched. Cilium requires to know which labels are meaningful and are subject to being considered when deriving the identity. For this purpose, the user is required to specify a list of string prefixes of meaningful labels. The standard behavior is to include all labels
which start with the prefix ``id.``, e.g. ``id.service1``, ``id.service2``, ``id.groupA.service44``. The list of meaningful label prefixes can be specified when starting the agent. .. \_reserved\_labels: Special Identities ------------------ All endpoints which are managed by Cilium will be assigned an identity. In order to allow communication to network endpoints which are not managed by Cilium, special identities exist to represent those. Special reserved identities are prefixed with the string ``reserved:``. +-----------------------------+------------+---------------------------------------------------+ | Identity | Numeric ID | Description | +=============================+============+===================================================+ | ``reserved:unknown`` | 0 | The identity could not be derived. | +-----------------------------+------------+---------------------------------------------------+ | ``reserved:host`` | 1 | The local host. Any traffic that originates from | | | | or is designated to one of the local host IPs. | +-----------------------------+------------+---------------------------------------------------+ | ``reserved:world`` | 2 | Any network endpoint outside of the cluster | +-----------------------------+------------+---------------------------------------------------+ | ``reserved:unmanaged`` | 3 | An endpoint that is not managed by Cilium, e.g. | | | | a Kubernetes pod that was launched before Cilium | | | | was installed. | +-----------------------------+------------+---------------------------------------------------+ | ``reserved:health`` | 4 | This is health checking traffic generated by | | | | Cilium agents. 
| +-----------------------------+------------+---------------------------------------------------+ | ``reserved:init`` | 5 | An endpoint for which the identity has not yet | | | | been resolved is assigned the init identity. | | | | This represents the phase of an endpoint in which | | | | some of the metadata required to derive the | | | | security identity is still missing. This is | | | | typically the case in the bootstrapping phase. | | | | | | | | The init identity is only allocated if the labels | | | | of the endpoint are not known at creation time. | | | | This can be the case for the Docker plugin. | +-----------------------------+------------+---------------------------------------------------+ | ``reserved:remote-node`` | 6 | The collection of all remote cluster hosts. | | | | Any traffic that originates from or is designated | | | | to one of the IPs of any host in any connected | | | | cluster other than the local node. | +-----------------------------+------------+---------------------------------------------------+ | ``reserved:kube-apiserver`` | 7 | Remote node(s) which have backend(s) serving the | | | | kube-apiserver running. | +-----------------------------+------------+---------------------------------------------------+ | ``reserved:ingress`` | 8 | Given to the IPs used as the source address for | | | | connections from Ingress proxies. | +-----------------------------+------------+---------------------------------------------------+ Well-known Identities --------------------- The following is a list of well-known identities which Cilium is aware of automatically and will hand out a security identity without requiring to contact any external dependencies such as the kvstore. The purpose of this is to allow bootstrapping Cilium and enable network connectivity with policy enforcement in the cluster for essential services without depending on any dependencies. 
======================== =================== ==================== ================= =========== ============================================================================ Deployment Namespace ServiceAccount Cluster Name Numeric ID Labels ======================== =================== ==================== ================= =========== ============================================================================ kube-dns kube-system kube-dns 102 ``k8s-app=kube-dns`` kube-dns (EKS) kube-system kube-dns 103 ``k8s-app=kube-dns``, ``eks.amazonaws.com/component=kube-dns`` core-dns kube-system coredns 104 ``k8s-app=kube-dns`` core-dns (EKS) kube-system coredns 106 ``k8s-app=kube-dns``, ``eks.amazonaws.com/component=coredns`` cilium-operator cilium-operator 105 ``name=cilium-operator``, ``io.cilium/app=operator`` ======================== =================== ==================== ================= =========== ============================================================================ \*Note\*: if ``cilium-cluster`` is not defined with the ``cluster-name`` option, the default value will be set to "``default``". Identity Management in the Cluster ---------------------------------- Identities are valid in the entire cluster
which means that if several pods or containers are started on several cluster nodes, all of them will resolve and share a single identity if they share the identity relevant labels. This requires coordination between cluster nodes. .. image:: ../images/identity\_store.png :align: center The operation to resolve an endpoint identity is performed with the help of the distributed key-value store, which supports atomic operations of the form \*generate a new unique identifier if the following value has not been seen before\*. This allows each cluster node to create the identity relevant subset of labels and then query the key-value store to derive the identity. Depending on whether the set of labels has been queried before, either a new identity will be created, or the identity of the initial query will be returned. .. \_node: Node ==== Cilium refers to a node as an individual member of a cluster. Each node must run the ``cilium-agent`` and operates in a mostly autonomous manner. Synchronization of state between Cilium agents running on different nodes is kept to a minimum for simplicity and scale. It occurs exclusively via the key-value store or with packet metadata. Node Address ------------ Cilium automatically detects the node's IPv4 and IPv6 addresses. 
The detected node address is printed out when the ``cilium-agent`` starts: :: Local node-name: worker0 Node-IPv6: f00d::ac10:14:0:1 External-Node IPv4: 172.16.0.20 Internal-Node IPv4: 10.200.28.238
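The source-prefixed label matching described in the Label Source section can be sketched as a policy selector. This fragment is illustrative only (the policy name and ``role`` labels are invented for the example): a ``k8s:``-prefixed key matches only labels derived from Kubernetes, while an unprefixed key is treated as ``any:`` and matches the label regardless of its source:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  # Illustrative name, not taken from the documentation above.
  name: "allow-frontend-to-backend"
spec:
  endpointSelector:
    matchLabels:
      # Matches only if Kubernetes supplied the label (k8s: source).
      "k8s:role": backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        # No source prefix: defaults to any:, so a matching label from
        # either the container runtime or Kubernetes will select the peer.
        role: frontend
```

Restricting a selector to one source avoids surprises when the same key is imported from both the container runtime and Kubernetes with different values.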
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_getting\_help: ############ Getting Help ############ Cilium is a project with a growing community. There are numerous ways to get help with Cilium if needed: FAQ === \*\*Cilium Frequently Asked Questions (FAQ)\*\*: Cilium uses `GitHub tags `\_ to maintain a list of questions asked by users. We suggest checking to see if your question is already answered. Slack ===== \*\*Chat\*\*: The best way to get immediate help if you get stuck is to ask in one of the `Cilium Slack`\_ channels. GitHub ====== \*\*Bug Tracker\*\*: All the issues are addressed in the `GitHub issue tracker `\_. If you want to report a bug or a new feature please file the issue according to the `GitHub template `\_. \*\*Contributing\*\*: If you want to contribute, reading the :ref:`dev\_guide` should help you. Training ======== \*\*Training courses\*\*: Our website lists `training courses `\_\_ that have been `approved `\_\_ by the Cilium project. Enterprise support ================== \*\*Distributions\*\*: Enterprise-ready, supported and `approved `\_\_ Cilium distributions are listed on the `Cilium website `\_\_. Security Bugs ============= \*\*Security\*\*: We strongly encourage you to report security vulnerabilities to our private security mailing list: security@cilium.io - first, before disclosing them in any public forums. This is a private mailing list where only members of the Cilium internal security team are subscribed to, and is treated as top priority.
https://github.com/cilium/cilium/blob/main//Documentation/gettingstarted/gettinghelp.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io Roadmap ======= The Cilium project is community driven, thus the work that gets done and the project's future roadmap is determined by what work individuals decide to do. You are welcome to raise feature requests by creating them as `GitHub issues`\_. Please search the existing issues to avoid raising duplicates, if you find that someone else is making the same or similar request we encourage the use of GitHub emojis to express your support for an idea! The most active way to influence the capabilities in Cilium is to get involved in development. We label issues with `good-first-issue`\_ to help new potential contributors find issues and feature requests that are relatively self-contained and could be a good place to start. Please also read the :ref:`dev\_guide` for details of our pull request process and expectations, along with instructions for setting up your development environment. We encourage you to discuss your ideas for significant enhancements and feature requests on the ``#development`` channel on `Cilium Slack`\_, bring them to the :ref:`community-meeting`, and/or create a `CFP design doc`\_. The project does not give date commitments since the work is dependent on the community. If you're looking for commitments to apply engineering resources to work on particular features, one option is to discuss this with the companies who offer `commercial distributions of Cilium `\_ and may be able to help. Release Cadence ~~~~~~~~~~~~~~~ We aim to make 2 to 3 `point releases`\_ per year of Cilium and its core components (Hubble, Cilium CLI, Tetragon, etc.). We also make patch releases available as necessary for security or urgent fixes. Focus Areas ----------- For a finer-granularity view, and insight into detailed enhancements and fixes, please refer to `issues on GitHub `\_. 
The Cilium committers\_ are the main drivers of where the project is heading. Welcoming New Contributors ~~~~~~~~~~~~~~~~~~~~~~~~~~ As a CNCF project we want to make it easier for new contributors to get involved with Cilium. This includes both code and non-code contributions such as documentation, blog posts, example configurations, presentations, training courses, testing and more. Check the :ref:`dev\_guide` documentation to understand how to get involved with code contributions, and the `Get Involved`\_ guide for guidance on contributing blog posts, training and other resources. .. \_committers: https://raw.githubusercontent.com/cilium/cilium/main/MAINTAINERS.md .. \_GitHub issues: https://github.com/cilium/cilium/issues .. \_point releases: https://cilium.io/blog/categories/release/ .. \_Get Involved: https://cilium.io/get-involved .. \_good-first-issue: https://github.com/cilium/cilium/labels/good-first-issue .. \_enterprise: https://cilium.io/enterprise .. \_CFP design doc: https://github.com/cilium/design-cfps/tree/main
https://github.com/cilium/cilium/blob/main//Documentation/community/roadmap.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_community-meeting: Community Meetings ================== The Cilium contributors gather regularly for a Zoom call open to everyone. During that time, we discuss: - Status of the next releases for each supported Cilium release - Current state of our CI: flakes being investigated and upcoming changes - Development items for the next release - Any other community-relevant topics during the open session If you want to discuss something during the next meeting's open session, you can add it to the meeting's Google doc. The Zoom link to the meeting is available in the ``#development`` Slack channel and in the meeting notes. Weekly Community Meeting ------------------------ This is a weekly meeting for all contributors. - Date: Every Wednesday at 8:00 AM US/Pacific (Los Angeles) - Meeting notes: `Google Doc `\_\_ Monthly APAC Community Meeting ------------------------------ This is a monthly community meeting held at APAC friendly time. - Date: Every third Wednesday at 4:30 UTC - Meeting notes: `Google Doc `\_\_ Slack ===== Our `Cilium & eBPF Slack `\_ is the main discussion space for the Cilium community. 
Please be sure to follow and abide by the `Slack Guidelines `\_\_ Slack channels -------------- ======================== ====================================================== Name Purpose ======================== ====================================================== ``#general`` General user discussions & questions ``#hubble`` Questions on Hubble ``#kubernetes`` Kubernetes-specific questions ``#networkpolicy`` Questions on network policies ``#release`` Release announcements only ``#service-mesh`` Questions on Cilium Service Mesh ``#tetragon`` Questions on Tetragon ======================== ====================================================== You can join the following channels if you are looking to contribute to Cilium code, documentation, or website: ======================== ====================================================== Name Purpose ======================== ====================================================== ``#development`` Development discussions around Cilium ``#ebpf-go-dev`` Development discussion for the `eBPF Go library`\_ ``#git`` GitHub notifications ``#sig-``\\* SIG-specific discussions (see below) ``#area-`` Discussing a specific area of the project ``#testing`` Testing and CI discussions ``#cilium-website`` Development discussions around cilium.io ======================== ====================================================== If you are interested in eBPF, then the following channels are for you: ======================== ====================================================== Name Purpose ======================== ====================================================== ``#ebpf`` General eBPF questions ``#ebpf-go`` Questions on the `eBPF Go library`\_ ``#ebpf-lsm`` Questions on BPF Linux Security Modules (LSM) ``#echo-news`` Contributions to `eCHO News`\_ ``#ebpf-for-windows`` Discussions around eBPF for Windows ======================== ====================================================== .. 
.. _eBPF Go library: https://github.com/cilium/ebpf
.. _eCHO News: https://cilium.io/newsletter/

Our Slack hosts channels for eBPF and Cilium-related events online and in
person.

======================== ======================================================
Name                     Purpose
======================== ======================================================
``#ciliumcon``           CiliumCon
``#ctf``                 Cilium and eBPF capture-the-flag challenges
``#ebpf-summit``         eBPF Summit
======================== ======================================================

How to create a Slack channel
-----------------------------

1. Open a new `GitHub issue in the cilium/community repo`_.
2. Specify the title "Slack: ".
3. Provide a description.
4. Find two Cilium committers to comment in the issue that they approve the
   creation of the Slack channel.
5. Not all Slack channels need to be listed on this page, but you can submit a
   PR if you would like to include it here.

Special Interest Groups
=======================

The Cilium project has Special Interest Groups, or SIGs, with a common purpose
of advancing the project with respect to a specific topic, such as network
policy or documentation. Their goal is to enable a distributed decision
structure and code ownership, as well as providing focused forums for getting
work done, making decisions, and onboarding new contributors. To learn more
about what they are, how to get involved, or which ones are currently active,
please check out the `SIG.md in the community repo`_.
.. _intro:

###############################
Introduction to Cilium & Hubble
###############################

What is Cilium?
===============

Cilium is open source software for transparently securing the network
connectivity between application services deployed using Linux container
management platforms like Docker and Kubernetes. At the foundation of Cilium
is a new Linux kernel technology called eBPF, which enables the dynamic
insertion of powerful security visibility and control logic within Linux
itself. Because eBPF runs inside the Linux kernel, Cilium security policies
can be applied and updated without any changes to the application code or
container configuration.

.. admonition:: Video
   :class: attention

   If you'd like a video introduction to Cilium, check out this `explanation
   by Thomas Graf, Co-founder of Cilium`__.

What is Hubble?
===============

:ref:`Hubble` is a fully distributed networking and security observability
platform. It is built on top of Cilium and eBPF to enable deep visibility into
the communication and behavior of services as well as the networking
infrastructure in a completely transparent manner.

By building on top of Cilium, Hubble can leverage eBPF for visibility. By
relying on eBPF, all visibility is programmable and allows for a dynamic
approach that minimizes overhead while providing deep and detailed visibility
as required by users. Hubble has been created and specifically designed to
make best use of these new eBPF powers.

Hubble can answer questions such as:

Service dependencies & communication map
----------------------------------------

* What services are communicating with each other? How frequently? What does
  the service dependency graph look like?
* What HTTP calls are being made? What Kafka topics does a service consume
  from or produce to?
Network monitoring & alerting
-----------------------------

* Is any network communication failing? Why is communication failing? Is it
  DNS? Is it an application or network problem? Is the communication broken on
  layer 4 (TCP) or layer 7 (HTTP)?
* Which services have experienced a DNS resolution problem in the last 5
  minutes? Which services have experienced an interrupted TCP connection
  recently or have seen connections timing out? What is the rate of unanswered
  TCP SYN requests?

Application monitoring
----------------------

* What is the rate of 5xx or 4xx HTTP response codes for a particular service
  or across all clusters?
* What is the 95th and 99th percentile latency between HTTP requests and
  responses in my cluster? Which services are performing the worst? What is
  the latency between two services?

Security observability
----------------------

* Which services had connections blocked due to network policy? What services
  have been accessed from outside the cluster? Which services have resolved a
  particular DNS name?

.. admonition:: Video
   :class: attention

   If you'd like a video introduction to Hubble, check out `eCHO episode 2:
   Introduction to Hubble`__.

Why Cilium & Hubble?
====================

eBPF is enabling visibility into and control over systems and applications at
a granularity and efficiency that was not possible before. It does so in a
completely transparent way, without requiring the application to change in
any way. eBPF is equally well-equipped to handle modern containerized
workloads as well as more traditional workloads such as virtual machines and
standard Linux processes.

The development of modern datacenter applications has shifted to a
service-oriented architecture often referred to as *microservices*, wherein a
large application is split into small independent services that communicate
with each other via APIs using lightweight protocols like HTTP.
Microservices applications tend to be highly dynamic, with individual containers getting started or destroyed as the application scales out / in to adapt to load changes and during
rolling updates that are deployed as part of continuous delivery.

This shift toward highly dynamic microservices presents both a challenge and
an opportunity in terms of securing connectivity between microservices.
Traditional Linux network security approaches (e.g., iptables) filter on IP
address and TCP/UDP ports, but IP addresses frequently churn in dynamic
microservices environments. The highly volatile life cycle of containers
causes these approaches to struggle to scale side by side with the
application, as load balancing tables and access control lists carrying
hundreds of thousands of rules must be updated with continuously growing
frequency. Protocol ports (e.g. TCP port 80 for HTTP traffic) can no longer
be used to differentiate between application traffic for security purposes,
as the port is utilized for a wide range of messages across services.

An additional challenge is the ability to provide accurate visibility, as
traditional systems use IP addresses as the primary identification vehicle,
which may have a drastically reduced lifetime of just a few seconds in
microservices architectures.

By leveraging Linux eBPF, Cilium retains the ability to transparently insert
security visibility + enforcement, but does so in a way that is based on
service / pod / container identity (in contrast to IP address identification
in traditional systems) and can filter on the application layer (e.g. HTTP).
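As a sketch of what identity-based, HTTP-aware filtering looks like in
practice, the following hypothetical CiliumNetworkPolicy allows only pods
labeled ``app=frontend`` to issue ``GET /public`` requests to pods labeled
``app=backend``. The labels, policy name, and path are illustrative
assumptions, not taken from this document:

.. code-block:: yaml

   apiVersion: "cilium.io/v2"
   kind: CiliumNetworkPolicy
   metadata:
     name: allow-frontend-to-backend   # hypothetical policy name
   spec:
     endpointSelector:
       matchLabels:
         app: backend                  # identity of the protected pods
     ingress:
     - fromEndpoints:
       - matchLabels:
           app: frontend               # identity-based, not IP-based
       toPorts:
       - ports:
         - port: "80"
           protocol: TCP
         rules:
           http:                       # layer-7 (HTTP) filtering
           - method: "GET"
             path: "/public"

Note that the selectors match on pod identity (labels), so the policy keeps
working as pod IPs churn.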
As a result, Cilium not only makes it simple to apply security policies in a highly dynamic environment by decoupling security from addressing, but can also provide stronger security isolation by operating at the HTTP-layer in addition to providing traditional Layer 3 and Layer 4 segmentation. The use of eBPF enables Cilium to achieve all of this in a way that is highly scalable even for large-scale environments. Functionality Overview ====================== .. include:: ../../README.rst :start-after: begin-functionality-overview :end-before: end-functionality-overview
.. _component_overview:

******************
Component Overview
******************

.. image:: ../images/cilium-arch.png
   :align: center

A deployment of Cilium and Hubble consists of the following components running
in a cluster:

Cilium
======

Agent
   The Cilium agent (``cilium-agent``) runs on each node in the cluster. At a
   high level, the agent accepts configuration via Kubernetes or APIs that
   describes networking, service load-balancing, network policies, and
   visibility & monitoring requirements.

   The Cilium agent listens for events from orchestration systems such as
   Kubernetes to learn when containers or workloads are started and stopped.
   It manages the eBPF programs which the Linux kernel uses to control all
   network access in / out of those containers.

Debug Client (CLI)
   The Cilium debug CLI client (``cilium-dbg``) is a command-line tool that is
   installed along with the Cilium agent. It interacts with the REST API of
   the Cilium agent running on the same node. The debug CLI allows inspecting
   the state and status of the local agent. It also provides tooling to
   directly access the eBPF maps to validate their state.

   .. note::

      The in-agent Cilium debug CLI client described here should not be
      confused with the `cilium command line tool for quick-installing,
      managing and troubleshooting Cilium on Kubernetes clusters`__. That
      tool is typically installed remote from the cluster, and uses
      ``kubeconfig`` information to access Cilium running on the cluster via
      the Kubernetes API.

Operator
   The Cilium Operator is responsible for managing duties in the cluster which
   should logically be handled once for the entire cluster, rather than once
   for each node in the cluster. The Cilium operator is not in the critical
   path for any forwarding or network policy decision.
A cluster will generally continue to function if the operator is temporarily
unavailable. However, depending on the configuration, failure in availability
of the operator can lead to:

* Delays in :ref:`address_management` and thus delay in scheduling of new
  workloads if the operator is required to allocate new IP addresses
* Failure to update the kvstore heartbeat key, which will lead agents to
  declare kvstore unhealthiness and restart

CNI Plugin
   The CNI plugin (``cilium-cni``) is invoked by Kubernetes when a pod is
   scheduled or terminated on a node. It interacts with the Cilium API of the
   node to trigger the necessary datapath configuration to provide networking,
   load-balancing and network policies for the pod.

Hubble
======

Server
   The Hubble server runs on each node and retrieves the eBPF-based visibility
   from Cilium. It is embedded into the Cilium agent in order to achieve high
   performance and low overhead. It offers a gRPC service to retrieve flows
   and Prometheus metrics.

Relay
   Relay (``hubble-relay``) is a standalone component which is aware of all
   running Hubble servers and offers cluster-wide visibility by connecting to
   their respective gRPC APIs and providing an API that represents all servers
   in the cluster.

Client (CLI)
   The Hubble CLI (``hubble``) is a command-line tool able to connect to
   either the gRPC API of ``hubble-relay`` or the local server to retrieve
   flow events.

Graphical UI (GUI)
   The graphical user interface (``hubble-ui``) utilizes relay-based
   visibility to provide a graphical service dependency and connectivity map.

eBPF
====

eBPF is a Linux kernel bytecode interpreter originally introduced to filter
network packets, e.g. for tcpdump and socket filters. It has since been
extended with additional data structures such as hashtables and arrays, as
well as additional actions to support packet mangling, forwarding,
encapsulation, and more. An in-kernel verifier ensures that eBPF programs are
safe to run, and a JIT compiler converts the bytecode to
CPU architecture-specific instructions for native execution efficiency.

eBPF programs can be run at various hooking points in the kernel, such as for
incoming and outgoing packets. Cilium is capable of probing the Linux kernel
for available features and will automatically make use of more recent
features as they are detected. For more detail on kernel versions, see
:ref:`admin_kernel_version`.

Data Store
==========

Cilium requires a data store to propagate state between agents. It supports
the following data stores:

Kubernetes CRDs (Default)
   The default choice to store any data and propagate state is to use
   Kubernetes custom resource definitions (CRDs). CRDs are offered by
   Kubernetes for cluster components to represent configurations and state
   via Kubernetes resources.

Key-Value Store
   All requirements for state storage and propagation can be met with
   Kubernetes CRDs as configured in the default configuration of Cilium. A
   key-value store can optionally be used as an optimization to improve the
   scalability of a cluster, as change notifications and storage requirements
   are more efficient with direct key-value store usage.

   The currently supported key-value stores are:

   * `etcd`_

   .. note::

      It is possible to leverage the etcd cluster of Kubernetes directly or
      to maintain a dedicated etcd cluster.
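As a sketch of how the optional key-value store is selected, the agent
configuration can point at etcd. The keys below follow the ``cilium-config``
ConfigMap convention (``kvstore``, ``kvstore-opt``); the etcd config file
path is a placeholder and this excerpt is an assumption, not taken from this
document:

.. code-block:: yaml

   # Hypothetical excerpt of the cilium-config ConfigMap selecting etcd
   # as the key-value store. The referenced file would hold etcd endpoints
   # and TLS settings.
   kvstore: etcd
   kvstore-opt: '{"etcd.config": "/var/lib/etcd-config/etcd.config"}'

In the default configuration no such keys are set, and Cilium falls back to
Kubernetes CRDs as described above.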
You can install [Redis](https://redis.io/docs/about/) or [Redis Stack](/docs/about/about-stack) locally on your machine. Redis and Redis Stack are available on Linux, macOS, and Windows.

Here are the installation instructions:

* [Install Redis](/docs/install/install-redis)
* [Install Redis Stack](/docs/install/install-stack)

While you can install Redis (Stack) locally, you might also consider using Redis Cloud by creating a [free account](https://redis.com/try-free/?utm_source=redisio&utm_medium=referral&utm_campaign=2023-09-try_free&utm_content=cu-redis_cloud_users).
This guide shows you how to install Redis on macOS using Homebrew. Homebrew is the easiest way to install Redis on macOS. If you'd prefer to build Redis from the source files on macOS, see [Installing Redis from Source](/docs/install/install-redis/install-redis-from-source).

## Prerequisites

First, make sure you have Homebrew installed. From the terminal, run:

{{< highlight bash >}}
brew --version
{{< / highlight >}}

If this command fails, you'll need to [follow the Homebrew installation instructions](https://brew.sh/).

## Installation

From the terminal, run:

{{< highlight bash >}}
brew install redis
{{< / highlight >}}

This will install Redis on your system.

## Starting and stopping Redis in the foreground

To test your Redis installation, you can run the `redis-server` executable from the command line:

{{< highlight bash >}}
redis-server
{{< / highlight >}}

If successful, you'll see the startup logs for Redis, and Redis will be running in the foreground.

To stop Redis, enter `Ctrl-C`.

### Starting and stopping Redis using launchd

As an alternative to running Redis in the foreground, you can also use `launchd` to start the process in the background:

{{< highlight bash >}}
brew services start redis
{{< / highlight >}}

This launches Redis and restarts it at login. You can check the status of a `launchd` managed Redis by running the following:

{{< highlight bash >}}
brew services info redis
{{< / highlight >}}

If the service is running, you'll see output like the following:

{{< highlight bash >}}
redis (homebrew.mxcl.redis)
Running: ✔
Loaded: ✔
User: miranda
PID: 67975
{{< / highlight >}}

To stop the service, run:

{{< highlight bash >}}
brew services stop redis
{{< / highlight >}}

## Connect to Redis

Once Redis is running, you can test it by running `redis-cli`:

{{< highlight bash >}}
redis-cli
{{< / highlight >}}

This will open the Redis REPL.
Try running some commands:

{{< highlight bash >}}
127.0.0.1:6379> lpush demos redis-macOS-demo
OK
127.0.0.1:6379> rpop demos
"redis-macOS-demo"
{{< / highlight >}}

## Next steps

Once you have a running Redis instance, you may want to:

* Try the Redis CLI tutorial
* Connect using one of the Redis clients
Most major Linux distributions provide packages for Redis.

## Install on Ubuntu/Debian

You can install recent stable versions of Redis from the official `packages.redis.io` APT repository.

{{% alert title="Prerequisites" color="warning" %}}
If you're running a very minimal distribution (such as a Docker container) you may need to install `lsb-release`, `curl` and `gpg` first:

{{< highlight bash >}}
sudo apt install lsb-release curl gpg
{{< / highlight >}}
{{% /alert %}}

Add the repository to the `apt` index, update it, and then install:

{{< highlight bash >}}
curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
sudo apt-get update
sudo apt-get install redis
{{< / highlight >}}

## Install from Snapcraft

The [Snapcraft store](https://snapcraft.io/store) provides [Redis packages](https://snapcraft.io/redis) that can be installed on platforms that support snap. Snap is supported and available on most major Linux distributions.

To install via snap, run:

{{< highlight bash >}}
sudo snap install redis
{{< / highlight >}}

If your Linux does not currently have snap installed, install it using the instructions described in [Installing snapd](https://snapcraft.io/docs/installing-snapd).
Redis is not officially supported on Windows. However, you can install Redis on Windows for development by following the instructions below.

To install Redis on Windows, you'll first need to enable [WSL2](https://docs.microsoft.com/en-us/windows/wsl/install) (Windows Subsystem for Linux). WSL2 lets you run Linux binaries natively on Windows. For this method to work, you'll need to be running Windows 10 version 2004 and higher or Windows 11.

## Install or enable WSL2

Microsoft provides [detailed instructions for installing WSL](https://docs.microsoft.com/en-us/windows/wsl/install). Follow these instructions, and take note of the default Linux distribution it installs. This guide assumes Ubuntu.

## Install Redis

Once you're running Ubuntu on Windows, you can follow the steps detailed at [Install on Ubuntu/Debian](/docs/install/install-redis/install-redis-on-linux#install-on-ubuntu-debian) to install recent stable versions of Redis from the official `packages.redis.io` APT repository.

Add the repository to the `apt` index, update it, and then install:

{{< highlight bash >}}
curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
sudo apt-get update
sudo apt-get install redis
{{< / highlight >}}

Lastly, start the Redis server like so:

{{< highlight bash >}}
sudo service redis-server start
{{< / highlight >}}

## Connect to Redis

You can test that your Redis server is running by connecting with the Redis CLI:

{{< highlight bash >}}
redis-cli
127.0.0.1:6379> ping
PONG
{{< / highlight >}}
You can compile and install Redis from source on a variety of platforms and operating systems including Linux and macOS. Redis has no dependencies other than a C compiler and `libc`.

## Downloading the source files

The Redis source files are available from the [Download](/download) page. You can verify the integrity of these downloads by checking them against the digests in the [redis-hashes git repository](https://github.com/redis/redis-hashes).

To obtain the source files for the latest stable version of Redis from the Redis downloads site, run:

{{< highlight bash >}}
wget https://download.redis.io/redis-stable.tar.gz
{{< / highlight >}}

## Compiling Redis

To compile Redis, first extract the tarball, change to the root directory, and then run `make`:

{{< highlight bash >}}
tar -xzvf redis-stable.tar.gz
cd redis-stable
make
{{< / highlight >}}

To build with TLS support, you'll need to install OpenSSL development libraries (e.g., libssl-dev on Debian/Ubuntu) and then run:

{{< highlight bash >}}
make BUILD_TLS=yes
{{< / highlight >}}

If the compile succeeds, you'll find several Redis binaries in the `src` directory, including:

* **redis-server**: the Redis Server itself
* **redis-cli**: the command line interface utility to talk with Redis

To install these binaries in `/usr/local/bin`, run:

{{< highlight bash >}}
sudo make install
{{< / highlight >}}

### Starting and stopping Redis in the foreground

Once installed, you can start Redis by running:

{{< highlight bash >}}
redis-server
{{< / highlight >}}

If successful, you'll see the startup logs for Redis, and Redis will be running in the foreground.

To stop Redis, enter `Ctrl-C`.

For a more complete installation, continue with [these instructions](/docs/install/#install-redis-more-properly).
This is an installation guide. You'll learn how to install, run, and experiment with the Redis server process.

While you can install Redis on any of the platforms listed below, you might also consider using Redis Cloud by creating a [free account](https://redis.com/try-free?utm_source=redisio&utm_medium=referral&utm_campaign=2023-09-try_free&utm_content=cu-redis_cloud_users).

## Install Redis

How you install Redis depends on your operating system and whether you'd like to install it bundled with Redis Stack and Redis UI. See the guide below that best fits your needs:

* [Install Redis from Source](/docs/install/install-redis/install-redis-from-source)
* [Install Redis on Linux](/docs/install/install-redis/install-redis-on-linux)
* [Install Redis on macOS](/docs/install/install-redis/install-redis-on-mac-os)
* [Install Redis on Windows](/docs/install/install-redis/install-redis-on-windows)
* [Install Redis with Redis Stack and RedisInsight](/docs/install/install-stack/)

Refer to [Redis Administration](/docs/management/admin/) for detailed setup tips.

## Test if you can connect using the CLI

After you have Redis up and running, you can connect using `redis-cli`.

External programs talk to Redis using a TCP socket and a Redis-specific protocol. This protocol is implemented in the Redis client libraries for the different programming languages. However, to make hacking with Redis simpler, Redis provides a command line utility that can be used to send commands to Redis. This program is called **redis-cli**.

The first thing to do to check if Redis is working properly is sending a **PING** command using redis-cli:

```
$ redis-cli ping
PONG
```

Running **redis-cli** followed by a command name and its arguments will send this command to the Redis instance running on localhost at port 6379. You can change the host and port used by `redis-cli` - just try the `--help` option to check the usage information.
Another interesting way to run `redis-cli` is without arguments: the program will start in interactive mode. You can type different commands and see their replies.

```
$ redis-cli
redis 127.0.0.1:6379> ping
PONG
```

## Securing Redis

By default Redis binds to **all the interfaces** and has no authentication at all. If you use Redis in a very controlled environment, separated from the external internet and in general from attackers, that's fine. However, if an unhardened Redis is exposed to the internet, it is a big security concern. If you are not 100% sure your environment is secured properly, please check the following steps in order to make Redis more secure:

1. Make sure the port Redis uses to listen for connections (by default 6379 and additionally 16379 if you run Redis in cluster mode, plus 26379 for Sentinel) is firewalled, so that it is not possible to contact Redis from the outside world.
2. Use a configuration file where the `bind` directive is set in order to guarantee that Redis listens on only the network interfaces you are using. For example, only the loopback interface (127.0.0.1) if you are accessing Redis locally from the same computer.
3. Use the `requirepass` option to add an additional layer of security so that clients will be required to authenticate using the `AUTH` command.
4. Use [spiped](http://www.tarsnap.com/spiped.html) or another SSL tunneling software to encrypt traffic between Redis servers and Redis clients if your environment requires encryption.

Note that a Redis instance exposed to the internet without any security [is very simple to exploit](http://antirez.com/news/96), so make sure you understand the above and apply **at least** a firewall layer. After the firewall is in place, try to connect with `redis-cli` from an external host to confirm that the instance is not reachable.
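Steps 2 and 3 above translate into a short `redis.conf` fragment. This is a sketch only; the password value is a placeholder you must replace with your own:

```
# Listen only on the loopback interface (step 2)
bind 127.0.0.1

# Require clients to authenticate with AUTH (step 3);
# replace the placeholder with a strong password of your own
requirepass your-strong-password-here
```

With `requirepass` set, clients must run `AUTH your-strong-password-here` (or connect with `redis-cli -a`) before issuing other commands.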
## Use Redis from your application Of course using Redis just from the command line interface is not enough as the goal is to use it from your application. To do so, you need to download and install a Redis client library for your
https://github.com/redis/redis-doc/blob/master//docs/install/install-redis/_index.md
master
redis
[ -0.01795956678688526, -0.11035484075546265, -0.04054921865463257, -0.05389350280165672, -0.006741390097886324, -0.07050899416208267, -0.019910331815481186, 0.031100669875741005, -0.004066687077283859, 0.010790963657200336, 0.013214458711445332, 0.0007162162801250815, 0.0338660329580307, -0...
0.101684
## Use Redis from your application

Of course using Redis just from the command line interface is not enough as the goal is to use it from your application. To do so, you need to download and install a Redis client library for your programming language. You'll find a [full list of clients for different languages in this page](/clients).

## Redis persistence

You can learn [how Redis persistence works on this page](/docs/management/persistence/). It is important to understand that, if you start Redis with the default configuration, Redis will spontaneously save the dataset only from time to time. For example, after at least five minutes if you have at least 100 changes in your data. If you want your database to persist and be reloaded after a restart make sure to call the **SAVE** command manually every time you want to force a data set snapshot. Alternatively, you can save the data on disk before quitting by using the **SHUTDOWN** command:

```
$ redis-cli shutdown
```

This way, Redis will save the data on disk before quitting. Reading the [persistence page](/docs/management/persistence/) is strongly suggested to better understand how Redis persistence works.

## Install Redis properly

Running Redis from the command line is fine just to hack a bit or for development. However, at some point you'll have some actual application to run on a real server. For this kind of usage you have two different choices:

* Run Redis using screen.
* Install Redis in your Linux box in a proper way using an init script, so that after a restart everything will start again properly.

A proper install using an init script is strongly recommended.

{{% alert title="Note" color="warning" %}}
The available packages for supported Linux distributions already include the capability of starting the Redis server from `/etc/init`.
{{% /alert %}}

{{% alert title="Note" color="warning" %}}
The remainder of this section assumes you've [installed Redis from its source code](/docs/install/install-redis/install-redis-from-source). If instead you have installed Redis Stack, you will need to download a [basic init script](https://raw.githubusercontent.com/redis/redis/7.2/utils/redis_init_script) and then modify both it and the following instructions to conform to the way Redis Stack was installed on your platform. For example, on Ubuntu 20.04 LTS, Redis Stack is installed in `/opt/redis-stack`, not `/usr/local`, so you'll need to adjust accordingly.
{{% /alert %}}

The following instructions can be used to perform a proper installation using the init script shipped with the Redis source code, `/path/to/redis-stable/utils/redis_init_script`.

If you have not yet run `make install` after building the Redis source, you will need to do so before continuing. By default, `make install` will copy the `redis-server` and `redis-cli` binaries to `/usr/local/bin`.

* Create a directory in which to store your Redis config files and your data:

  ```
  sudo mkdir /etc/redis
  sudo mkdir /var/redis
  ```

* Copy the init script that you'll find in the Redis distribution under the **utils** directory into `/etc/init.d`. We suggest calling it with the name of the port where you are running this instance of Redis. Make sure the resulting file has `0755` permissions.

  ```
  sudo cp utils/redis_init_script /etc/init.d/redis_6379
  ```

* Edit the init script:

  ```
  sudo vi /etc/init.d/redis_6379
  ```

  Make sure to set the **REDISPORT** variable to the port you are using. Both the pid file path and the configuration file name depend on the port number.
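Not part of the stock instructions, but a hedged sketch of making the **REDISPORT** edit non-interactively with `sed` instead of `vi`. It assumes the init script contains a line of the form `REDISPORT=6379`, as the script shipped in `utils` does; the demo operates on a throwaway copy in `/tmp` so it can run without root.

```shell
# Demo copy standing in for /etc/init.d/redis_6379 (no root needed).
cat > /tmp/redis_6379_demo <<'EOF'
REDISPORT=6379
EXEC=/usr/local/bin/redis-server
PIDFILE=/var/run/redis_${REDISPORT}.pid
CONF="/etc/redis/${REDISPORT}.conf"
EOF

# Rewrite REDISPORT in place; note that the pid file path and the
# config file path both derive from it, as mentioned above.
sed -i 's/^REDISPORT=.*/REDISPORT=6380/' /tmp/redis_6379_demo
grep '^REDISPORT=' /tmp/redis_6379_demo   # prints: REDISPORT=6380
```

On a real install you would run the same `sed` against `/etc/init.d/redis_6379` (with `sudo`); `sed -i` as used here is the GNU form found on Linux.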
* Copy the template configuration file you'll find in the root directory of the Redis distribution into `/etc/redis/`, using the port number as the name, for instance:

  ```
  sudo cp redis.conf /etc/redis/6379.conf
  ```

* Create a directory inside `/var/redis` that will work as both data and working directory
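The per-port naming convention used throughout these steps (init script `redis_6379`, config `6379.conf`) extends to the data directory as well. A minimal sketch of that layout, using a temporary prefix in place of `/var/redis` so it runs without root; the port-named subdirectory is a convention of these instructions, not something Redis itself mandates.

```shell
# Sketch of the per-port layout under a temp prefix (stand-in for /var/redis).
REDISPORT=6379
PREFIX="$(mktemp -d)"
mkdir -p "$PREFIX/redis/$REDISPORT"
ls "$PREFIX/redis"   # prints: 6379
```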
https://github.com/redis/redis-doc/blob/master//docs/install/install-redis/_index.md