observability can be used to provide reliable audit of the attacker's L3/L4
and L7 network connectivity. Traffic sent by the attacker will be attributed
to the worker node, and not to a specific Kubernetes workload.

Recommended Controls
^^^^^^^^^^^^^^^^^^^^

In addition to the recommended controls against the
:ref:`kubernetes-workload-attacker`:

- Container images should be regularly patched to reduce the chance of
  compromise.
- Minimal container images should be used where possible.
- Host-level privileges should be avoided where possible.
- Ensure that container users do not have access to the underlying
  container runtime.

.. _root-equivalent-host-attacker:

Root-equivalent Host Attacker
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A "root" privilege host attacker has full privileges to do everything on the
local host. This access could exist for several reasons, including:

- Root SSH or other console access to the Kubernetes worker node.
- A containerized workload that has escaped the container namespace as a
  privileged user.
- Pods running with ``privileged: true`` or other significant capabilities
  like ``CAP_BPF``, ``CAP_NET_ADMIN``, ``CAP_NET_RAW``, or ``CAP_SYS_ADMIN``.

.. image:: images/cilium_threat_model_root.png

.. rst-class:: wrapped-table

+-------------------+--------------------------------------------------+
| **Threat          | **Identified STRIDE threats**                    |
| surface**         |                                                  |
+===================+==================================================+
| Cilium agent      | In this situation, all potential attacks covered |
|                   | by STRIDE are possible. Of note:                 |
|                   |                                                  |
|                   | - The attacker would be able to disable eBPF on  |
|                   |   the node, disabling Cilium's network and       |
|                   |   runtime visibility and enforcement. All        |
|                   |   further operations by the attacker will be     |
|                   |   unlimited and unaudited.                       |
|                   | - The attacker would be able to observe network  |
|                   |   connectivity across all workloads on the host. |
|                   | - The attacker can spoof traffic from the node   |
|                   |   such that it appears to come from pods with    |
|                   |   any identity.                                  |
|                   | - If the physical network allows ARP poisoning,  |
|                   |   or if any other attack allows a compromised    |
|                   |   node to "attract" traffic destined to other    |
|                   |   nodes, the attacker can potentially intercept  |
|                   |   all traffic in the cluster, even if this       |
|                   |   traffic is encrypted using IPsec, since we use |
|                   |   a cluster-wide pre-shared key.                 |
|                   | - The attacker can also use Cilium's credentials |
|                   |   to :ref:`attack the Kubernetes API server      |
|                   |   <kubernetes-api-server-attacker>`, as well as  |
|                   |   Cilium's :ref:`etcd key-value store            |
|                   |   <kv-store-attacker>` (if in use).              |
|                   | - If the compromised node is running the         |
|                   |   ``cilium-operator`` pod, the attacker would be |
|                   |   able to carry out denial of service attacks    |
|                   |   against other nodes using the                  |
|                   |   ``cilium-operator`` service account            |
|                   |   credentials found on the node.                 |
+-------------------+                                                  |
| Cilium            |                                                  |
| configuration     |                                                  |
+-------------------+                                                  |
| Cilium eBPF       |                                                  |
| programs          |                                                  |
+-------------------+                                                  |
| Network data      |                                                  |
+-------------------+                                                  |
| Observability     |                                                  |
| data              |                                                  |
+-------------------+--------------------------------------------------+
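Several of the host-level privileges discussed above are granted through a
pod's ``securityContext``. Purely as a hedged sketch (the pod name, image,
and labels below are illustrative, not taken from this threat model), a
restricted workload that avoids these privileges might look like:

.. code-block:: yaml

   apiVersion: v1
   kind: Pod
   metadata:
     name: restricted-workload                  # illustrative name
   spec:
     containers:
       - name: app
         image: registry.example.com/app:latest # illustrative image
         securityContext:
           privileged: false                    # no host-level privileges
           allowPrivilegeEscalation: false
           runAsNonRoot: true
           capabilities:
             # dropping all capabilities removes CAP_BPF, CAP_NET_ADMIN,
             # CAP_NET_RAW, and CAP_SYS_ADMIN among others
             drop: ["ALL"]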
This attack scenario emphasizes the importance of securing Kubernetes nodes,
minimizing the permissions available to container workloads, and monitoring
for suspicious activity on the node, container, and API server levels.

Recommended Controls
^^^^^^^^^^^^^^^^^^^^

In addition to the controls against a
:ref:`limited-privilege-host-attacker`:

- Workloads with privileged access should be reviewed; privileged access
  should only be provided to deployments if essential.
- Network policies should be configured to limit connectivity to workloads
  with privileged access.
- Kubernetes audit logging should be enabled, with audit logs being sent to
  a centralized external location for automated review.
- Detections should be configured to alert on suspicious activity.
- ``cilium-operator`` pods should not be scheduled on nodes that run regular
  workloads, and should instead be configured to run on control plane nodes.

.. _mitm-attacker:

Man-in-the-middle Attacker
~~~~~~~~~~~~~~~~~~~~~~~~~~

In this scenario, our attacker has access to the underlying network between
Kubernetes worker nodes, but not to the Kubernetes worker nodes themselves.
This attacker may inspect, modify, or inject malicious network traffic.

.. image:: images/cilium_threat_model_mitm.png

The threat matrix for such an attacker is as follows:

.. rst-class:: wrapped-table

+------------------+---------------------------------------------------+
| **Threat         | **Identified STRIDE threats**                     |
| surface**        |                                                   |
+==================+===================================================+
| Cilium agent     | None                                              |
+------------------+---------------------------------------------------+
| Cilium           | None                                              |
| configuration    |                                                   |
+------------------+---------------------------------------------------+
| Cilium eBPF      | None                                              |
| programs         |                                                   |
+------------------+---------------------------------------------------+
| Network data     | - Without transparent encryption, an attacker     |
|                  |   could inspect traffic between workloads in both |
|                  |   overlay and native routing modes.               |
|                  | - An attacker with knowledge of pod network       |
|                  |   configuration (including pod IP addresses and   |
|                  |   ports) could inject traffic into a cluster by   |
|                  |   forging packets.                                |
|                  | - Denial of service could occur depending on the  |
|                  |   behavior of the attacker.                       |
+------------------+---------------------------------------------------+
| Observability    | - TLS is required for all connectivity between    |
| data             |   Cilium components, as well as for exporting     |
|                  |   data to other destinations, removing the scope  |
|                  |   for spoofing or tampering.                      |
|                  | - Without transparent encryption, the attacker    |
|                  |   could re-create the observability data as       |
|                  |   available on the network level.                 |
|                  | - Information leakage could occur via an attacker |
|                  |   scraping Hubble Prometheus metrics. These       |
|                  |   metrics are disabled by default, and can        |
|                  |   contain sensitive information on network flows. |
|                  | - Denial of service could occur depending on the  |
|                  |   behavior of the attacker.                       |
+------------------+---------------------------------------------------+

Recommended Controls
^^^^^^^^^^^^^^^^^^^^

- :ref:`gsg_encryption` should be configured to ensure the confidentiality
  of communication between workloads.
- TLS should be configured for communication between the Prometheus metrics
  endpoints and the Prometheus server.
- Network policies should be configured such that only the Prometheus server
  is allowed to scrape Hubble metrics in particular.

.. _network-attacker:

Network Attacker
~~~~~~~~~~~~~~~~

In our threat model, a generic network attacker has access to the same
underlying IP network as Kubernetes worker nodes, but is not inline between
the nodes. The assumption is that this attacker is still able to send IP
layer traffic that reaches a Kubernetes worker node. This is a weaker
variant of the man-in-the-middle attack described above,
as the attacker can only inject traffic to worker nodes, but not see the
replies.

.. image:: images/cilium_threat_model_network_attacker.png

For such an attacker, the threat matrix is as follows:

.. rst-class:: wrapped-table

+------------------+---------------------------------------------------+
| **Threat         | **Identified STRIDE threats**                     |
| surface**        |                                                   |
+==================+===================================================+
| Cilium agent     | None                                              |
+------------------+---------------------------------------------------+
| Cilium           | None                                              |
| configuration    |                                                   |
+------------------+---------------------------------------------------+
| Cilium eBPF      | None                                              |
| programs         |                                                   |
+------------------+---------------------------------------------------+
| Network data     | - An attacker with knowledge of pod network       |
|                  |   configuration (including pod IP addresses and   |
|                  |   ports) could inject traffic into a cluster by   |
|                  |   forging packets.                                |
|                  | - Denial of service could occur depending on the  |
|                  |   behavior of the attacker.                       |
+------------------+---------------------------------------------------+
| Observability    | - Denial of service could occur depending on the  |
| data             |   behavior of the attacker.                       |
|                  | - Information leakage could occur via an attacker |
|                  |   scraping Cilium or Hubble Prometheus metrics,   |
|                  |   depending on the specific metrics enabled.      |
+------------------+---------------------------------------------------+

Recommended Controls
^^^^^^^^^^^^^^^^^^^^

- :ref:`gsg_encryption` should be configured to ensure the confidentiality
  of communication between workloads.

.. _kubernetes-api-server-attacker:

Kubernetes API Server Attacker
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This type of attack could be carried out by any user or code with network
access to the Kubernetes API server and credentials that allow Kubernetes
API requests. Such permissions would allow the user to read or manipulate
the API server state (for example, by changing CRDs). This section is
intended to cover any attack that might be exposed via Kubernetes API
server access, regardless of whether the access is full or limited.

.. image:: images/cilium_threat_model_api_server_attacker.png

For such an attacker, our threat matrix is as follows:

.. rst-class:: wrapped-table

+------------------+---------------------------------------------------+
| **Threat         | **Identified STRIDE threats**                     |
| surface**        |                                                   |
+==================+===================================================+
| Cilium agent     | - A Kubernetes API user with ``kubectl exec``     |
|                  |   access to the pod running Cilium effectively    |
|                  |   becomes a :ref:`root-equivalent host attacker   |
|                  |   <root-equivalent-host-attacker>`, since Cilium  |
|                  |   runs as a privileged pod.                       |
|                  | - An attacker with permissions to configure       |
|                  |   workload settings effectively becomes a         |
|                  |   :ref:`kubernetes-workload-attacker`.            |
+------------------+---------------------------------------------------+
| Cilium           | The ability to modify the ``Cilium*``             |
| configuration    | CustomResourceDefinitions, as well as any         |
|                  | CustomResource from Cilium, in the cluster could  |
|                  | have the following effects:                       |
|                  |                                                   |
|                  | - The ability to create or modify CiliumIdentity  |
|                  |   and CiliumEndpoint or CiliumEndpointSlice       |
|                  |   resources would allow an attacker to tamper     |
|                  |   with the identities of pods.                    |
|                  | - The ability to delete Kubernetes or Cilium      |
|                  |   NetworkPolicies would remove policy             |
|                  |   enforcement.                                    |
|                  | - Creating a large number of CiliumIdentity       |
|                  |   resources could result in denial of service.    |
|                  | - Workloads external to the cluster could be      |
|                  |   added to the network.                           |
|                  | - Traffic routing settings between workloads      |
|                  |   could be modified.                              |
|                  |                                                   |
|                  | The cumulative effect of such actions could       |
|                  | result in the escalation of a single-node         |
|                  | compromise into a multi-node compromise.          |
+------------------+---------------------------------------------------+
| Cilium eBPF      | An attacker with ``kubectl exec`` access to the   |
| programs         | Cilium agent pod will be able to modify eBPF      |
|                  | programs.                                         |
+------------------+---------------------------------------------------+
| Network data     | Privileged Kubernetes API server access (``exec`` |
|                  | access to Cilium pods or access to view           |
|                  | Kubernetes secrets) could allow an attacker to    |
|                  | access the pre-shared key used for IPsec. When    |
|                  | used by a :ref:`man-in-the-middle attacker        |
|                  | <mitm-attacker>`, this could undermine the        |
|                  | confidentiality and integrity of workload         |
|                  | communication.                                    |
|                  | |br| |br|                                         |
|                  | Depending on the attacker's level of access, the  |
|                  | ability to spoof identities or tamper with policy |
|                  | enforcement could also allow them to view network |
|                  | data.                                             |
+------------------+---------------------------------------------------+
| Observability    | Users with permissions to configure workload      |
| data             | settings could cause denial of service.           |
+------------------+---------------------------------------------------+

Recommended Controls
^^^^^^^^^^^^^^^^^^^^

- `Kubernetes RBAC`_ should be configured to only grant necessary
  permissions to users and service accounts. Access to resources in the
  ``kube-system`` and ``cilium`` namespaces in particular should be highly
  limited.
- Kubernetes audit logs should be used to automatically review requests
  made to the API server, and detections should be configured to alert on
  suspicious activity.

.. _Kubernetes RBAC: https://kubernetes.io/docs/reference/access-authn-authz/rbac/

.. _kv-store-attacker:

Cilium Key-value Store Attacker
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Cilium can use an external key-value store such as etcd to store state. In
this scenario, we consider a user with network access to the Cilium etcd
endpoints and credentials to access those etcd endpoints. The credentials
to the etcd endpoints are stored as Kubernetes secrets; any attacker would
first have to compromise these secrets before gaining access to the
key-value store.

.. image:: images/cilium_threat_model_etcd_attacker.png

.. rst-class:: wrapped-table

+------------------+---------------------------------------------------+
| **Threat         | **Identified STRIDE threats**                     |
| surface**        |                                                   |
+==================+===================================================+
| Cilium agent     | None                                              |
+------------------+---------------------------------------------------+
| Cilium           | The ability to create or modify Identities or     |
| configuration    | Endpoints in etcd would allow an attacker to      |
|                  | "give" any pod any identity. The ability to spoof |
|                  | identities in this manner might be used to        |
|                  | escalate a single node compromise to a multi-node |
|                  | compromise, for example by spoofing identities to |
|                  | undermine ingress segmentation rules that would   |
|                  | be applied on remote nodes.                       |
+------------------+---------------------------------------------------+
| Cilium eBPF      | None                                              |
| programs         |                                                   |
+------------------+---------------------------------------------------+
| Network data     | An attacker would be able to modify the routing   |
|                  | of traffic within a cluster, and as a consequence |
|                  | gain the privileges of a :ref:`mitm-attacker`.    |
+------------------+---------------------------------------------------+
| Observability    | None                                              |
| data             |                                                   |
+------------------+---------------------------------------------------+

Recommended Controls
^^^^^^^^^^^^^^^^^^^^

- The ``etcd`` instance deployed to store Cilium configuration should be
  independent of the instance that is typically deployed as part of
  configuring a Kubernetes cluster. This separation reduces the risk of a
  Cilium ``etcd`` compromise leading to further cluster-wide impact.
- Kubernetes RBAC controls should be applied to restrict access to
  Kubernetes secrets.
- Kubernetes audit logs should be used to detect access to secret data and
  alert
  if such access is suspicious.

Hubble Data Attacker
~~~~~~~~~~~~~~~~~~~~

This is an attacker with network reachability to Kubernetes worker nodes,
or other systems that store or expose Hubble data, with the goal of gaining
access to potentially sensitive Hubble flow or process data.

.. image:: images/cilium_threat_model_hubble_attacker.png

.. rst-class:: wrapped-table

+------------------+---------------------------------------------------+
| **Threat         | **Identified STRIDE threats**                     |
| surface**        |                                                   |
+==================+===================================================+
| Cilium pods      | None                                              |
+------------------+---------------------------------------------------+
| Cilium           | None                                              |
| configuration    |                                                   |
+------------------+---------------------------------------------------+
| Cilium eBPF      | None                                              |
| programs         |                                                   |
+------------------+---------------------------------------------------+
| Network data     | None                                              |
+------------------+---------------------------------------------------+
| Observability    | None, assuming correct configuration of the       |
| data             | following:                                        |
|                  |                                                   |
|                  | - Network policy to limit access to               |
|                  |   ``hubble-relay`` or ``hubble-ui`` services      |
|                  | - Limited access to ``cilium``,                   |
|                  |   ``hubble-relay``, or ``hubble-ui`` pods         |
|                  | - TLS for external data export                    |
|                  | - Security controls at the destination of any     |
|                  |   exported data                                   |
+------------------+---------------------------------------------------+

Recommended Controls
^^^^^^^^^^^^^^^^^^^^

- Network policies should limit access to the ``hubble-relay`` and
  ``hubble-ui`` services.
- Kubernetes RBAC should be used to limit access to any ``cilium-*`` or
  ``hubble-*`` pods.
- TLS should be configured for access to the Hubble Relay API and Hubble
  UI.
- TLS should be correctly configured for any data export.
- The destination data stores for exported data should be secured (for
  example, by applying encryption at rest and cloud provider specific RBAC
  controls).

Overall Recommendations
-----------------------

To summarize the recommended controls to be used when configuring a
production Kubernetes cluster with Cilium:

#. Ensure that Kubernetes roles are scoped correctly to the requirements of
   your users, and that service account permissions for pods are tightly
   scoped to the needs of the workloads. In particular, access to sensitive
   namespaces, ``exec`` actions, and Kubernetes secrets should all be
   highly controlled.
#. Use resource limits for workloads where possible to reduce the chance of
   denial of service attacks.
#. Ensure that workload privileges and capabilities are only granted when
   essential to the functionality of the workload, and ensure that specific
   controls to limit and monitor the behavior of the workload are in place.
#. Use network policies to ensure that network traffic in Kubernetes is
   segregated.
#. Use :ref:`gsg_encryption` in Cilium to ensure that communication between
   workloads is secured.
#. Enable Kubernetes audit logging, forward the audit logs to a centralized
   monitoring platform, and define alerting for suspicious activity.
#. Enable TLS for access to any externally-facing services, such as Hubble
   Relay and Hubble UI.
#. Use `Tetragon`_ as a runtime security solution to rapidly detect
   unexpected behavior within your Kubernetes cluster.

If you have questions, suggestions, or would like to help improve Cilium's
security posture, reach out to security@cilium.io.

.. |br| raw:: html

   <br />
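The network-policy recommendation above can be made concrete for the Hubble
services. Purely as a hedged sketch, a ``CiliumNetworkPolicy`` restricting
ingress to ``hubble-relay`` might look like the following (the selector
labels shown are illustrative and may differ between installations):

.. code-block:: yaml

   apiVersion: "cilium.io/v2"
   kind: CiliumNetworkPolicy
   metadata:
     name: restrict-hubble-relay   # illustrative name
     namespace: kube-system
   spec:
     endpointSelector:
       matchLabels:
         k8s-app: hubble-relay     # assumed label; verify against your install
     ingress:
       - fromEndpoints:
           - matchLabels:
               k8s-app: hubble-ui  # only the Hubble UI may connect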
https://github.com/cilium/cilium/blob/main//Documentation/security/threat-model.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

.. _encryption_ztunnel:

*************************************
Ztunnel Transparent Encryption (Beta)
*************************************

.. include:: ../../beta.rst

This guide explains how to configure Cilium to use ztunnel for transparent
encryption and mutual TLS (mTLS) authentication between Cilium-managed
endpoints. ztunnel is a purpose-built per-node proxy that provides
transparent Layer 4 mTLS encryption and authentication for pod-to-pod
communication.

When ztunnel is enabled in Cilium, the agent running on each cluster node
establishes a control plane connection with the local ztunnel proxy. Cilium
enrolls pods into the mesh on a per-namespace basis, allowing fine-grained
control over which workloads participate in mTLS encryption. Enrolled pods
have their traffic transparently redirected to the ztunnel proxy using
iptables rules configured in their network namespace, where the traffic is
encrypted and authenticated using mutual TLS before being sent to the
destination.

Generating secrets for authentication
=====================================

Cilium's ztunnel integration requires a set of private keys and
accompanying certificates to be present as Kubernetes secrets. This follows
the same pattern as IPsec key injection. These keys can be generated with
the following bash script prior to deploying Cilium.

.. literalinclude:: ../../../examples/kubernetes-ztunnel/generate-secrets.sh
   :language: bash

The 'bootstrap' keys are used to secure the connection between ztunnel and
Cilium's xDS and certificate server implementation. The 'ca' keys are used
as the root certificate for creating in-memory and ephemeral client
certificates on ztunnel's request.
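For orientation, keys of this kind typically end up as TLS-type Kubernetes
secrets. The following is only a sketch of the shape of one such secret;
the secret name here is hypothetical, and the authoritative names and
contents are produced by the ``generate-secrets.sh`` script above:

.. code-block:: yaml

   apiVersion: v1
   kind: Secret
   metadata:
     name: ztunnel-bootstrap-keys   # hypothetical name; see generate-secrets.sh
     namespace: kube-system
   type: kubernetes.io/tls
   data:
     ca.crt: <base64-encoded CA certificate>
     tls.crt: <base64-encoded certificate>
     tls.key: <base64-encoded private key>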
Enable ztunnel in Cilium
========================

Before you install Cilium with ztunnel enabled, ensure that:

* The necessary Kubernetes secrets are available.
* Cluster Mesh is not enabled (ztunnel is currently not compatible with
  Cluster Mesh).

.. tabs::

   .. group-tab:: Cilium CLI

      If you are deploying Cilium with the Cilium CLI, pass the following
      options:

      .. parsed-literal::

         cilium install |CHART_VERSION| \\
           --set encryption.enabled=true \\
           --set encryption.type=ztunnel

   .. group-tab:: Helm

      If you are deploying Cilium with Helm by following
      :ref:`k8s_install_helm`, pass the following options:

      .. parsed-literal::

         helm install cilium |CHART_RELEASE| \\
           --namespace kube-system \\
           --set encryption.enabled=true \\
           --set encryption.type=ztunnel

Enrolling Namespaces
====================

After enabling ztunnel in Cilium, you need to explicitly enroll namespaces
to enable mTLS encryption for their workloads. This is done by applying a
label to the namespace:

.. code-block:: shell-session

   kubectl label namespace <namespace> io.cilium/mtls-enabled=true

To verify that a namespace is enrolled:

.. code-block:: shell-session

   kubectl get namespace --show-labels

When a namespace is enrolled:

* All existing pods in the namespace (except ztunnel pods themselves) are
  enrolled
* Iptables rules are configured in each pod's network namespace for traffic
  redirection
* Pod metadata is sent to the ztunnel proxy via the ZDS protocol
* Future pods created in the namespace are automatically enrolled

To disenroll a namespace:

.. code-block:: shell-session

   kubectl label namespace <namespace> io.cilium/mtls-enabled-

This will:

#. Disenroll all pods in the namespace from ztunnel
#. Remove the iptables rules from each pod's network namespace
#. Notify ztunnel to stop processing traffic for those workloads

Validate the Setup
==================

#. Check that ztunnel has been enabled:

   .. code-block:: shell-session

      kubectl -n kube-system describe cm cilium-config | grep enable-ztunnel -A2

   You should see output indicating that ztunnel encryption is enabled.

#. Check which namespaces are enrolled:

   .. code-block:: shell-session

      kubectl get namespaces -l io.cilium/mtls-enabled=true

   This shows all namespaces labeled for ztunnel enrollment. To verify that
   these namespaces are
   actually enrolled in the StateDB table:

   .. code-block:: shell-session

      kubectl exec -n kube-system ds/cilium -- cilium-dbg statedb dump | jq '.["mtls-enrolled-namespaces"]'

   The results of this query should show which namespaces have been
   successfully processed by the enrollment reconciler.

#. Run a ``bash`` shell in one of the Cilium pods hosting an mTLS-enrolled
   pod with ``kubectl -n kube-system exec -ti pod/<pod-name> -- bash`` and
   execute the following commands:

   Install tcpdump:

   .. code-block:: shell-session

      $ apt-get update
      $ apt-get -y install tcpdump

   Check that traffic is encrypted. In the example below, this can be
   verified by the fact that packets will have a destination port of 15008
   (HBONE). Here, ``eth0`` is the interface used for pod-to-pod
   communication; replace this interface with e.g. ``cilium_vxlan`` if
   tunneling is enabled.

   .. code-block:: shell-session

      $ tcpdump -i eth0 port 15008
      tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
      listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
      13:00:06.982499 IP 10.244.1.95.15008 > 10.244.2.3.33446: ...
      13:00:06.982536 IP 10.244.2.3.33446 > 10.244.1.95.15008: ...
      13:00:06.982675 IP 10.244.2.3.33446 > 10.244.1.95.15008: ...

Limitations
===========

* The ztunnel integration currently only supports enrollment via namespace
  labels. Pod-level enrollment is not supported.
* Only TCP traffic is currently supported for mTLS encryption. UDP and
  other protocols are not redirected to ztunnel.
* The integration requires iptables support in the kernel and cannot be
  used in environments that do not support iptables (such as some minimal
  container runtimes).
* Ztunnel interferes with Cilium network policy: because traffic is
  encrypted before it leaves the pod, L4 policies won't work except for
  directly targeting the ztunnel HBONE port (15008).

Known Issues
============

* Cluster Mesh is not currently supported when ztunnel is enabled.
  Attempting to enable both will result in a validation error.
* Pods without a network namespace path (such as host-networked pods)
  cannot be enrolled in ztunnel and will be skipped during enrollment.
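Given the policy limitation described above, an L4 rule for enrolled
workloads has to target the HBONE port rather than the application port. A
hedged sketch (the ``app`` label and policy name are illustrative):

.. code-block:: yaml

   apiVersion: "cilium.io/v2"
   kind: CiliumNetworkPolicy
   metadata:
     name: allow-hbone-ingress   # illustrative name
   spec:
     endpointSelector:
       matchLabels:
         app: enrolled-app       # illustrative label
     ingress:
       - toPorts:
           - ports:
               # enrolled traffic arrives already encrypted on the HBONE port
               - port: "15008"
                 protocol: TCP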
https://github.com/cilium/cilium/blob/main//Documentation/security/network/encryption-ztunnel.rst
.. _gsg_encryption:

**********************
Transparent Encryption
**********************

Cilium supports the transparent encryption of Cilium-managed host traffic
and traffic between Cilium-managed endpoints using IPsec, WireGuard®, or
ztunnel:

.. toctree::
   :maxdepth: 1
   :glob:

   encryption-ipsec
   encryption-wireguard
   encryption-ztunnel

.. admonition:: Video
   :class: attention

   You can also see a demo of Cilium Transparent Encryption in `eCHO
   episode 79: Transparent Encryption with IPsec and WireGuard`__.

Known Issues and Workarounds
============================

Egress traffic to not yet discovered remote endpoints may be unencrypted
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To determine whether a packet needs to be encrypted, transparent encryption
relies on the same mechanisms as policy enforcement to decide if the
destination of an outgoing packet belongs to a Cilium-managed endpoint on a
remote node. This means that if an endpoint is allowed to initiate traffic
to targets outside of the cluster, it is possible for that endpoint to send
packets to arbitrary IP addresses before Cilium learns that a particular IP
address belongs to a remote Cilium-managed endpoint or newly joined remote
Cilium host in the cluster. In such a case there is a time window during
which Cilium will send out the initial packets unencrypted, as it has to
assume the destination IP address is outside of the cluster. Once the
information about the newly created endpoint has propagated in the cluster
and Cilium knows that the IP address is an endpoint on a remote node, it
will start encrypting packets using the encryption key of the remote node.

One workaround for this issue is to ensure that the endpoint is not allowed
to send unencrypted traffic to arbitrary targets outside of the cluster.
This can be achieved by defining an egress policy which either completely
disallows traffic to ``reserved:world`` identities, or only allows egress
traffic to addresses outside of the cluster to a certain subset of trusted
IP addresses using ``toCIDR``, ``toCIDRSet``, and ``toFQDN`` rules. See
:ref:`policy_examples` for more details about how to write network policies
that restrict egress traffic to certain endpoints.

Another way to mitigate this issue is to set
``encryption.strictMode.egress.enabled`` to ``true`` and the expected pod
CIDR as ``encryption.strictMode.egress.cidr``. This encryption strict mode
egress enforces that traffic exiting a node to the set CIDR is always
encrypted. Be aware that information about new pod endpoints must propagate
to the node before the node can send traffic to them.

Encryption strict mode egress has the following limitations:

- The pod CIDR and therefore the encryption strict mode egress CIDR must be
  IPv4. IPv6 traffic is not protected by the strict mode and can be leaked.
- To disable all dynamic lookups, you must use direct routing mode and the
  node CIDR and pod CIDR must not overlap. Otherwise,
  ``encryption.strictMode.egress.allowRemoteNodeIdentities`` must be set to
  ``true``. This allows unencrypted traffic sent from or to an IP address
  associated with a node identity.

Encryption strict mode ingress is currently not supported when chaining
Cilium on top of other CNI plugins. For more information, see GitHub issue
15596.
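The first workaround described above can be sketched as an egress policy
that allows traffic to cluster-managed endpoints plus a trusted external
CIDR only. This is a hedged illustration; the labels, policy name, and CIDR
are placeholders:

.. code-block:: yaml

   apiVersion: "cilium.io/v2"
   kind: CiliumNetworkPolicy
   metadata:
     name: egress-to-trusted-only   # illustrative name
   spec:
     endpointSelector:
       matchLabels:
         app: sensitive-workload    # illustrative label
     egress:
       # allow egress to any Cilium-managed endpoint in the cluster
       - toEndpoints:
           - {}
       # allow egress outside the cluster only to a trusted range
       - toCIDRSet:
           - cidr: 203.0.113.0/24   # documentation-range CIDR, replace as needed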
https://github.com/cilium/cilium/blob/main//Documentation/security/network/encryption.rst
************
Introduction
************

Cilium provides security on multiple levels. Each can be used individually or
combined together.

* :ref:`arch_id_security`: Connectivity policies between endpoints (Layer 3),
  e.g. any endpoint with the label ``role=frontend`` can connect to any
  endpoint with the label ``role=backend``.
* Restriction of accessible ports (Layer 4) for both incoming and outgoing
  connections, e.g. an endpoint with the label ``role=frontend`` can only make
  outgoing connections on port 443 (https), and an endpoint with the label
  ``role=backend`` can only accept connections on port 443 (https).
* Fine-grained access control at the application protocol level to secure HTTP
  and remote procedure call (RPC) protocols, e.g. the endpoint with the label
  ``role=frontend`` can only perform the REST API call
  ``GET /userdata/[0-9]+``; all other API interactions with ``role=backend``
  are restricted.
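The three levels above can be combined in a single rule. As a hedged sketch
(the policy name, namespace handling, and the choice of port are illustrative
assumptions; this assumes the CiliumNetworkPolicy CRD from a standard Cilium
installation), an L3+L4+L7 policy for the frontend/backend example might look
like:

```shell
# Illustrative CiliumNetworkPolicy combining the three levels described
# above. Name and comments are example values, not from this document.
policy=$(cat <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-l7-example
spec:
  endpointSelector:
    matchLabels:
      role: backend            # applies to backend endpoints (L3)
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: frontend     # only frontend may connect (L3)
      toPorts:
        - ports:
            - port: "443"      # only this port (L4)
              protocol: TCP
          rules:
            http:              # only this REST call (L7)
              - method: GET
                path: "/userdata/[0-9]+"
EOF
)
echo "$policy"
# Apply with: echo "$policy" | kubectl apply -f -
```

The sketch only prints the manifest; applying it requires a running cluster.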
https://github.com/cilium/cilium/blob/main//Documentation/security/network/intro.rst
.. _encryption_ipsec:

****************************
IPsec Transparent Encryption
****************************

This guide explains how to configure Cilium to use IPsec-based transparent
encryption, using Kubernetes secrets to distribute the IPsec keys. After this
configuration is complete, all traffic between Cilium-managed endpoints will
be encrypted using IPsec. This guide uses Kubernetes secrets to distribute
keys. Alternatively, keys may be manually distributed, but that is not shown
here.

Packets are not encrypted when they are destined to the same node from which
they were sent. This behavior is intended. Encryption would provide no
benefits in that case, given that the raw traffic can be observed on the node
anyway.

v1.18 Encrypted Overlay
=======================

Prior to v1.18, IPsec encryption was performed before tunnel encapsulation.
From Cilium v1.18 onward, Cilium's IPsec encryption datapath sends traffic for
overlay encapsulation prior to IPsec encryption when tunnel mode is enabled.
With this change, the security identities used for policy enforcement are
encrypted on the wire, which is a security benefit.

A disruption-less upgrade from v1.17 to v1.18 can only be achieved by fully
patching v1.17 to its latest version. Migration-specific code was added to
newer v1.17 releases to support a disruption-less upgrade to v1.18. Once
patched to the newest v1.17 stable release, a normal upgrade to v1.18 can be
performed.

.. note::

   Because VXLAN traffic is encrypted before being sent, operators see ESP
   traffic between Kubernetes nodes. This may result in the need to update
   firewall rules to allow ESP traffic between nodes. This is especially
   important in Google Cloud GKE environments.
The default firewall rules for the cluster's subnet may not allow ESP.

Generate & Import the PSK
=========================

First, create a Kubernetes secret for the IPsec configuration to be stored.
The example below demonstrates generation of the necessary IPsec
configuration, which will be distributed as a Kubernetes secret called
``cilium-ipsec-keys``. A Kubernetes secret should consist of one key-value
pair where the key is the name of the file to be mounted as a volume in
cilium-agent pods, and the value is an IPsec configuration in the following
format::

    key-id encryption-algorithms PSK-in-hex-format key-size

.. note::

   ``Secret`` resources need to be deployed in the same namespace as Cilium!
   In our example, we use ``kube-system``.

In the example below, GCM-128-AES is used. However, any of the algorithms
supported by Linux may be used. To generate the secret, you may use the
following command:

.. tabs::

    .. group-tab:: Cilium CLI

        .. parsed-literal::

            $ cilium encrypt create-key --auth-algo rfc4106-gcm-aes

    .. group-tab:: Kubectl CLI

        .. parsed-literal::

            $ kubectl create -n kube-system secret generic cilium-ipsec-keys \
                --from-literal=keys="3+ rfc4106(gcm(aes)) $(dd if=/dev/urandom count=20 bs=1 2> /dev/null | xxd -p -c 64) 128"

.. attention::

   The ``+`` sign in the secret is strongly recommended. It will force the use
   of per-tunnel IPsec keys. The former global IPsec keys are considered
   insecure (cf. `GHSA-pwqm-x5x6-5586`_) and were deprecated in v1.16. When
   using ``+``, the per-tunnel keys will be derived from the secret you
   generated.

.. _GHSA-pwqm-x5x6-5586: https://github.com/cilium/cilium/security/advisories/GHSA-pwqm-x5x6-5586

The secret can be seen with ``kubectl -n kube-system get secrets`` and will be
listed as ``cilium-ipsec-keys``.

.. code-block:: shell-session

    $ kubectl -n kube-system get secrets cilium-ipsec-keys
    NAME                TYPE     DATA   AGE
    cilium-ipsec-keys   Opaque   1      176m

Enable Encryption in Cilium
===========================

.. tabs::
https://github.com/cilium/cilium/blob/main//Documentation/security/network/encryption-ipsec.rst
    .. group-tab:: Cilium CLI

        If you are deploying Cilium with the Cilium CLI, pass the following
        options:

        .. parsed-literal::

            cilium install |CHART_VERSION| \
                --set encryption.enabled=true \
                --set encryption.type=ipsec

    .. group-tab:: Helm

        If you are deploying Cilium with Helm by following
        :ref:`k8s_install_helm`, pass the following options:

        .. cilium-helm-install::
            :namespace: kube-system
            :set: encryption.enabled=true encryption.type=ipsec

``encryption.enabled`` enables encryption of the traffic between
Cilium-managed pods. ``encryption.type`` specifies the encryption method and
can be omitted, as it defaults to ``ipsec``.

.. attention::

   When using Cilium in any direct routing configuration, ensure that the
   native routing CIDR is set properly. This is done using
   ``--ipv4-native-routing-cidr=CIDR`` with the CLI or
   ``--set ipv4NativeRoutingCIDR=CIDR`` with Helm.

At this point the Cilium-managed nodes will be using IPsec for all traffic.
For further information on Cilium's transparent encryption, see
:ref:`ebpf_datapath`.

Dependencies
============

When L7 proxy support is enabled (``--enable-l7-proxy=true``), IPsec requires
that the DNS proxy operates in transparent mode
(``--dnsproxy-enable-transparent-mode=true``).

Encryption interface
--------------------

An additional argument can be used to identify the network-facing interface.
If direct routing is used and no interface is specified, the default route
link is chosen by inspecting the routing tables. This will work in many cases,
but depending on routing rules, users may need to specify the encryption
interface as follows:

.. tabs::

    .. group-tab:: Cilium CLI

        .. parsed-literal::

            cilium install |CHART_VERSION| \
                --set encryption.enabled=true \
                --set encryption.type=ipsec \
                --set encryption.ipsec.interface=ethX

    .. group-tab:: Helm
        .. code-block:: shell-session

            --set encryption.ipsec.interface=ethX

Validate the Setup
==================

Run a ``bash`` shell in one of the Cilium pods with
``kubectl -n kube-system exec -ti ds/cilium -- bash`` and execute the
following commands:

1. Install tcpdump

   .. code-block:: shell-session

       $ apt-get update
       $ apt-get -y install tcpdump

2. Check that traffic is encrypted. In the example below, this can be verified
   by the fact that packets carry the IP Encapsulating Security Payload (ESP).
   In the example below, ``eth0`` is the interface used for pod-to-pod
   communication. Replace this interface with e.g. ``cilium_vxlan`` if
   tunneling is enabled.

   .. code-block:: shell-session

       tcpdump -l -n -i eth0 esp
       tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
       listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
       15:16:21.626416 IP 10.60.1.1 > 10.60.0.1: ESP(spi=0x00000001,seq=0x57e2), length 180
       15:16:21.626473 IP 10.60.1.1 > 10.60.0.1: ESP(spi=0x00000001,seq=0x57e3), length 180
       15:16:21.627167 IP 10.60.0.1 > 10.60.1.1: ESP(spi=0x00000001,seq=0x579d), length 100
       15:16:21.627296 IP 10.60.0.1 > 10.60.1.1: ESP(spi=0x00000001,seq=0x579e), length 100
       15:16:21.627523 IP 10.60.0.1 > 10.60.1.1: ESP(spi=0x00000001,seq=0x579f), length 180
       15:16:21.627699 IP 10.60.1.1 > 10.60.0.1: ESP(spi=0x00000001,seq=0x57e4), length 100
       15:16:21.628408 IP 10.60.1.1 > 10.60.0.1: ESP(spi=0x00000001,seq=0x57e5), length 100

.. _ipsec_key_rotation:

Key Rotation
============

.. attention::

   Key rotations should not be performed during upgrades and downgrades. That
   is, all nodes in the cluster (or clustermesh) should be on the same Cilium
   version before rotating keys.

.. attention::

   It is not recommended to change algorithms that involve different
   authentication key lengths during key rotations. If this is attempted,
   Cilium will delay the application of the new key until the agent restarts
   and will continue using the previous key.
This is designed to maintain uninterrupted IPv6 pod-to-pod connectivity.

To replace the ``cilium-ipsec-keys`` secret with a new key:

.. code-block:: shell-session

    KEYID=$(kubectl get secret -n kube-system cilium-ipsec-keys -o go-template --template={{.data.keys}} | base64 -d | grep -oP "^\d+")
    if [[ $KEYID -ge 15 ]]; then KEYID=0; fi
    data=$(echo "{\"stringData\":{\"keys\":\"$((($KEYID+1)))+ "rfc4106\(gcm\(aes\)\)" $(dd if=/dev/urandom count=20 bs=1 2> /dev/null | xxd -p -c 64) 128\"}}")
    kubectl patch secret -n kube-system cilium-ipsec-keys -p="${data}" -v=1

During the transition, the new and old keys will be in use. The Cilium agent
keeps per-endpoint data on which key is used by each endpoint and will use the
correct key if either side has not
yet been updated. In this way, encryption will work as new keys are rolled
out.

The ``KEYID`` environment variable in the above example stores the current key
ID used by Cilium. The key ID is a uint8 with a value between 1 and 15
inclusive, and should be monotonically increased on every re-key, with a
rollover from 15 to 1. The Cilium agent will default to a ``KEYID`` of zero if
it is not specified in the secret.

If you are using Cluster Mesh, you must apply the key rotation procedure to
all clusters in the mesh. You might need to increase the transition time to
allow for the new keys to be deployed and applied across all clusters, which
you can do with the agent flag ``ipsec-key-rotation-duration``.

Monitoring
==========

When monitoring network traffic on a node with IPsec enabled, it is normal to
observe on the same interface both the outer packet (node-to-node) carrying
the ESP-encrypted payload and the decrypted inner packet (pod-to-pod). This
occurs because, once a packet is decrypted, it is recirculated back to the
same interface for further processing. Therefore, depending on the ``tcpdump``
filter applied, the capture might differ, but this **does not** indicate that
encryption is not functioning correctly. In particular, to observe:

1. Only the encrypted packets: use the filter ``esp``.
2. Only the decrypted packets: use a specific filter for the protocol used by
   the pods (such as ``icmp`` for ping).
3. Both encrypted and decrypted packets: use no filter, or combine the filters
   for both (such as ``esp or icmp``).

The following capture was taken on a Kind cluster with no filter applied
(replace ``eth0`` with ``cilium_vxlan`` if tunneling is enabled).
The nodes have IP addresses ``10.244.2.92`` and ``10.244.1.148``, while the
pods have IP addresses ``10.244.2.189`` and ``10.244.1.7``, using ping (ICMP)
for communication.

.. code-block:: shell-session

    tcpdump -l -n -i eth0
    tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
    listening on cilium_vxlan, link-type EN10MB (Ethernet), snapshot length 262144 bytes
    09:22:16.379908 IP 10.244.2.92 > 10.244.1.148: ESP(spi=0x00000003,seq=0x8), length 120
    09:22:16.379908 IP 10.244.2.189 > 10.244.1.7: ICMP echo request, id 33, seq 1, length 64

Troubleshooting
===============

* If the ``cilium`` Pods fail to start after enabling encryption, double-check
  that the IPsec ``Secret`` and Cilium are deployed in the same namespace.
* Check for ``level=warning`` and ``level=error`` messages in the Cilium log
  files.
* If there is a warning message similar to ``Device eth0 does not exist``, use
  ``--set encryption.ipsec.interface=ethX`` to set the encryption interface.
* Run ``cilium-dbg encrypt status`` in the Cilium Pod:

  .. code-block:: shell-session

      $ cilium-dbg encrypt status
      Encryption: IPsec
      Decryption interface(s): eth0, eth1, eth2
      Keys in use: 4
      Max Seq. Number: 0x1e3/0xffffffffffffffff
      Errors: 0

  If the error counter is non-zero, additional information will be displayed
  with the specific errors the kernel encountered. The number of keys in use
  should be 2 per remote node per enabled IP family. During a key rotation, it
  can double to 4 per remote node per IP family. For example, in a 3-node
  cluster, if both IPv4 and IPv6 are enabled and no key rotation is ongoing,
  there should be 8 keys in use on each node. The list of decryption
  interfaces should include all native devices that may receive pod traffic
  (for example, ENI interfaces). All XFRM errors correspond to a packet drop
  in the
kernel. The following details operational mistakes and expected behaviors that
can cause those errors.

* When a node reboots, the key used to communicate with it is expected to
  change on other nodes. You may notice the ``XfrmInNoStates`` and
  ``XfrmOutNoStates`` counters increase while the new node key is being
  deployed.
* After a key rotation, if the old key is cleaned up before the configuration
  of the new key is installed on all nodes, it results in ``XfrmInNoStates``
  errors. The old key is removed from nodes after a default interval of 5
  minutes. By default, all agents watch for key updates and update their
  configuration within 1 minute after the key is changed, leaving plenty of
  time before the old key is removed. If you expect the key rotation to take
  longer for some reason (for example, in the case of Cluster Mesh where
  several clusters need to be updated), you can increase the delay before
  cleanup with the agent flag ``ipsec-key-rotation-duration``.
* ``XfrmInStateProtoError`` errors can happen for the following reasons:

  1. If the key is updated without incrementing the SPI (also called ``KEYID``
     in the :ref:`ipsec_key_rotation` instructions above). This can be fixed
     by properly performing a new key rotation.
  2. If the source node encrypts the packets using a different anti-replay seq
     from the anti-replay oseq on the destination node. This can be fixed by
     properly performing a new key rotation.

* ``XfrmFwdHdrError`` and ``XfrmInError`` happen when the kernel fails to look
  up the route for a packet it decrypted. This can legitimately happen when a
  pod was deleted but some packets are still in transit.
  Note these errors can also happen under memory pressure, when the kernel
  fails to allocate memory.

* ``XfrmInStateInvalid`` can happen on rare occasions if packets are received
  while an XFRM state is being deleted. XFRM states get deleted as part of
  node scale-downs and for some upgrades and downgrades.
* The following table documents the known explanations for several XFRM errors
  that were observed in the past. Many other error types exist, but they are
  usually for Linux subfeatures that Cilium doesn't use (e.g., XFRM
  expiration).

  ======================= ==================================================
  Error                   Known explanation
  ======================= ==================================================
  XfrmInError             The kernel (1) decrypted and tried to route a
                          packet for a pod that was deleted or (2) failed
                          to allocate memory.
  XfrmInNoStates          Bug in the XFRM configuration for decryption.
  XfrmInStateProtoError   There is a key or anti-replay seq mismatch
                          between nodes.
  XfrmInStateInvalid      A received packet matched an XFRM state that is
                          being deleted.
  XfrmInTmplMismatch      Bug in the XFRM configuration for decryption.
  XfrmInNoPols            Bug in the XFRM configuration for decryption.
  XfrmInPolBlock          Explicit drop, not used by Cilium.
  XfrmOutNoStates         Bug in the XFRM configuration for encryption.
  XfrmOutStateSeqError    The sequence number of an encryption XFRM
                          configuration reached its maximum value.
  XfrmOutPolBlock         Cilium dropped packets that would have otherwise
                          left the node in plain text.
  XfrmFwdHdrError         The kernel (1) decrypted and tried to route a
                          packet for a pod that was deleted or (2) failed
                          to allocate memory.
  ======================= ==================================================

* In addition to the above XFRM errors, packet drops of type
  ``No node ID found`` (code 197) may also occur under normal operations.
  These drops can happen if a pod attempts to send traffic to a pod on a new
  node for which the Cilium agent didn't yet receive the CiliumNode object or
  to
  a pod on a node that was recently deleted. It can also happen if the IP
  address of the destination node changed and the agent didn't receive the
  updated CiliumNode object yet. In both cases, the IPsec configuration in the
  kernel isn't ready yet, so Cilium drops the packets at the source. These
  drops will stop once the CiliumNode information is propagated across the
  cluster.

.. _xfrm_state_staling_in_cilium:

XFRM State Staling in Cilium
============================

Control plane disruptions can lead to connectivity issues due to stale XFRM
states with out-of-sync IPsec anti-replay counters. This typically results in
permanent connectivity disruptions between pods managed by Cilium. This
section explains how these issues occur and what you can do about them.

Identified Causes
-----------------

In KVStore mode (e.g., etcd), you might encounter stale XFRM states:

* If a Cilium agent is down for a prolonged time, the corresponding node entry
  in the kvstore will be deleted due to lease expiration (see
  :ref:`kvstore_leases`), resulting in stale XFRM states.
* If you manually recreate your key-value store, a Cilium agent might connect
  too late to the new instance. This delay can cause the agent to miss crucial
  node delete and create events, leading Cilium to retain outdated XFRM states
  for those nodes.

In CRD mode, stale XFRM states can occur if you delete a CiliumNode resource
and restart the Cilium agent DaemonSet. While other agents create fresh XFRM
states for the new CiliumNode, the agent on that new node may retain obsolete
XFRM states for all the other peer nodes.

Mitigation
----------

To restore connectivity in those cases, perform a key rotation (see
:ref:`ipsec_key_rotation`).
This action ensures new, consistent, and valid XFRM states across all your
nodes.

Disabling Encryption
====================

To disable the encryption, regenerate the YAML with the option
``encryption.enabled=false``.

Limitations
===========

* Transparent encryption is not currently supported when chaining Cilium on
  top of other CNI plugins. For more information, see :gh-issue:`15596`.
* :ref:`HostPolicies` are not currently supported with IPsec encryption.
* IPsec encryption is not supported on clusters or clustermeshes with more
  than 65535 nodes.
* Decryption with Cilium IPsec is limited to a single CPU core per IPsec
  tunnel. This may affect performance in case of high throughput between two
  nodes.
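The key-rotation snippet in the Key Rotation section above increments the key
ID with a rollover from 15 back to 1. That arithmetic can be isolated as a
small sketch (the function name ``next_keyid`` is illustrative, not a Cilium
command):

```shell
# Compute the next IPsec key ID the same way as the rotation snippet:
# valid IDs are 1..15; when the current ID is 15 (or more), roll over to 1.
next_keyid() {
  keyid="$1"
  if [ "$keyid" -ge 15 ]; then keyid=0; fi
  echo $((keyid + 1))
}

next_keyid 3    # prints 4
next_keyid 15   # prints 1
```

This mirrors why a skipped increment (reusing the same SPI) shows up as
``XfrmInStateProtoError`` in the troubleshooting table above.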
.. _arch_id_security:

**************
Identity-Based
**************

Container management systems such as Kubernetes deploy a networking model
which assigns an individual IP address to each pod (group of containers). This
ensures simplicity in architecture, avoids unnecessary network address
translation (NAT), and provides each individual container with a full range of
port numbers to use. The logical consequence of this model is that depending
on the size of the cluster and total number of pods, the networking layer has
to manage a large number of IP addresses.

Traditionally, security enforcement architectures have been based on IP
address filters. Let's walk through a simple example: if all pods with the
label ``role=frontend`` should be allowed to initiate connections to all pods
with the label ``role=backend``, then each cluster node which runs at least
one pod with the label ``role=backend`` must have a corresponding filter
installed which allows all IP addresses of all ``role=frontend`` pods to
initiate a connection to the IP addresses of all local ``role=backend`` pods.
All other connection requests should be denied. This could look like this: if
the destination address is *10.1.1.2*, then allow the connection only if the
source address is one of the following: *[10.1.2.2, 10.1.2.3, 20.4.9.1]*.

Every time a new pod with the label ``role=frontend`` or ``role=backend`` is
either started or stopped, the rules on every cluster node which runs any such
pods must be updated by either adding or removing the corresponding IP address
from the list of allowed IP addresses. In large distributed applications, this
could imply updating thousands of cluster nodes multiple times per second,
depending on the churn rate of deployed pods.
Worse, the starting of new ``role=frontend`` pods must be delayed until all
servers running ``role=backend`` pods have been updated with the new security
rules, as otherwise connection attempts from the new pod could be mistakenly
dropped. This makes it difficult to scale efficiently.

In order to avoid these complications, which can limit scalability and
flexibility, Cilium entirely separates security from network addressing.
Instead, security is based on the identity of a pod, which is derived through
labels. This identity can be shared between pods. This means that when the
first ``role=frontend`` pod is started, Cilium assigns an identity to that pod
which is then allowed to initiate connections to the identity of the
``role=backend`` pod. The subsequent start of additional ``role=frontend``
pods only requires resolving this identity via a key-value store; no action
has to be performed on any of the cluster nodes hosting ``role=backend`` pods.
The starting of a new pod must only be delayed until the identity of the pod
has been resolved, which is a much simpler operation than updating the
security rules on all other cluster nodes.

.. image:: _static/identity.png
    :align: center
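The scaling argument above boils down to one idea: identity allocation is
keyed by the label set, not by pod IPs. The toy sketch below illustrates only
that idea (the starting identity number, the registry format, and the
``allocate`` helper are all invented for illustration; real identities are
allocated through the key-value store):

```shell
# Toy sketch: pods with the same label set share one numeric identity, so
# starting another role=frontend pod allocates nothing new and no per-node
# IP filter needs updating.
next_id=256          # hypothetical first identity number
ids=""               # registry formatted as |<labels>=<id>|<labels>=<id>...

allocate() {
  labels="$1"
  case "$ids" in
    *"|$labels="*) : ;;   # identity already exists for this label set
    *) ids="$ids|$labels=$next_id"; next_id=$((next_id + 1)) ;;
  esac
  identity="${ids##*"|$labels="}"   # strip everything up to our entry
  identity="${identity%%|*}"        # strip any later entries
}

allocate "role=frontend"; frontend_id=$identity
allocate "role=frontend"; frontend_id2=$identity   # reuses the identity
allocate "role=backend";  backend_id=$identity
echo "frontend=$frontend_id backend=$backend_id"
```

The second ``role=frontend`` allocation is a no-op, which is the property that
lets policy scale with the number of label sets rather than the number of
pods.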
https://github.com/cilium/cilium/blob/main//Documentation/security/network/identity.rst
******************
Policy Enforcement
******************

All security policies are described assuming stateful policy enforcement for
session-based protocols. This means that the intent of the policy is to
describe the allowed direction of connection establishment. If the policy
allows ``A => B``, then reply packets from ``B`` to ``A`` are automatically
allowed as well. However, ``B`` is not automatically allowed to initiate
connections to ``A``. If that outcome is desired, then both directions must be
explicitly allowed.

Security policies may be enforced at *ingress* or *egress*. For *ingress*,
this means that each cluster node verifies all incoming packets and determines
whether the packet is allowed to be transmitted to the intended endpoint.
Correspondingly, for *egress*, each cluster node verifies outgoing packets and
determines whether the packet is allowed to be transmitted to its intended
destination.

In order to enforce identity-based security in a multi-host cluster, the
identity of the transmitting endpoint is embedded into every network packet
that is transmitted between cluster nodes. The receiving cluster node can then
extract the identity and verify whether a particular identity is allowed to
communicate with any of the local endpoints.

Default Security Policy
=======================

If no policy is loaded, the default behavior is to allow all communication
unless policy enforcement has been explicitly enabled. As soon as the first
policy rule is loaded, policy enforcement is enabled automatically and any
communication must then be whitelisted, or the relevant packets will be
dropped. Similarly, if an endpoint is not subject to an *L4* policy,
communication from and to all ports is permitted.
Associating at least one *L4* policy with an endpoint will block all
connectivity to ports unless explicitly allowed.
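The directional semantics described above (allowing ``A => B`` permits
replies, but not connections initiated by ``B``) can be sketched as a single
ingress rule. This is a hedged illustration (the policy name is invented; it
assumes the CiliumNetworkPolicy CRD from a standard Cilium install):

```shell
# Illustrative policy: frontend may initiate connections to backend.
# Reply packets are allowed statefully; backend => frontend initiation
# would require a separate, explicit rule.
policy=$(cat <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: frontend
EOF
)
echo "$policy"
# Apply with: echo "$policy" | kubectl apply -f -
```

Note that, per the default-policy behavior above, loading this first rule
switches the selected endpoints from allow-all to default-deny for ingress.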
https://github.com/cilium/cilium/blob/main//Documentation/security/network/policyenforcement.rst
.. _encryption_wg:

********************************
WireGuard Transparent Encryption
********************************

This guide explains how to configure Cilium with transparent encryption of
traffic between Cilium-managed endpoints using WireGuard®.

.. admonition:: Video
   :class: attention

   Aside from this guide, you can also watch eCHO episode 3: WireGuard on how
   WireGuard can encrypt network traffic.

When WireGuard is enabled in Cilium, the agent running on each cluster node
will establish a secure WireGuard tunnel between it and all other known nodes
in the cluster. Each node automatically creates its own encryption key-pair
and distributes its public key via the ``network.cilium.io/wg-pub-key``
annotation in the Kubernetes ``CiliumNode`` custom resource object. Each
node's public key is then used by other nodes to decrypt and encrypt traffic
from and to Cilium-managed endpoints running on that node.

Packets are not encrypted when they are destined to the same node from which
they were sent. This behavior is intended. Encryption would provide no
benefits in that case, given that the raw traffic can be observed on the node
anyway.

The WireGuard tunnel endpoint is exposed on UDP port ``51871`` on each node.
If you run Cilium in an environment that requires firewall rules to enable
connectivity, you will have to ensure that all Cilium cluster nodes can reach
each other via that port.

.. note::

   When running in tunnel routing mode, pod-to-pod traffic is encapsulated
   twice. It is first sent to the VXLAN / Geneve tunnel interface, and then
   subsequently also encapsulated by the WireGuard tunnel.
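Since the tunnel endpoint listens on UDP port 51871, node firewalls must admit
that port between all nodes. As a sketch (assuming an iptables-based host
firewall and example peer addresses; in cloud environments you would adjust
security groups instead):

```shell
# WireGuard's tunnel endpoint listens on UDP 51871 (see above); every node
# must accept that port from its peers. This only prints the commands,
# since applying them requires root on each node.
WG_PORT=51871
for peer in 192.0.2.10 192.0.2.11; do   # example peer node IPs
  echo "iptables -A INPUT -p udp -s ${peer} --dport ${WG_PORT} -j ACCEPT"
done
```

The peer list would normally be derived from your cluster's node addresses
rather than hard-coded.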
Enable WireGuard in Cilium
==========================

Before you enable WireGuard in Cilium, please ensure that the Linux
distribution running on your cluster nodes has support for WireGuard in kernel
mode (i.e. ``CONFIG_WIREGUARD=m`` on Linux 5.6 and newer, or via the
out-of-tree WireGuard module on older kernels). See the WireGuard installation
documentation for details on how to install the kernel module on your Linux
distribution.

.. tabs::

    .. group-tab:: Cilium CLI

        If you are deploying Cilium with the Cilium CLI, pass the following
        options:

        .. parsed-literal::

            cilium install |CHART_VERSION| \
                --set encryption.enabled=true \
                --set encryption.type=wireguard

    .. group-tab:: Helm

        If you are deploying Cilium with Helm by following
        :ref:`k8s_install_helm`, pass the following options:

        .. cilium-helm-install::
            :namespace: kube-system
            :set: encryption.enabled=true encryption.type=wireguard

WireGuard may also be enabled manually by setting the
``enable-wireguard: true`` option in the Cilium ``ConfigMap`` and restarting
each Cilium agent instance.

.. note::

   When running with CNI chaining (e.g., :ref:`chaining_aws_cni`), set the
   Helm option ``cni.enableRouteMTUForCNIChaining`` to ``true`` to force
   Cilium to set a correct MTU for Pods. Otherwise, Pod traffic encrypted with
   WireGuard might get fragmented, which can lead to network performance
   degradation.

Validate the Setup
==================

Run a ``bash`` shell in one of the Cilium pods with
``kubectl -n kube-system exec -ti ds/cilium -- bash`` and execute the
following commands:

1. Check that WireGuard has been enabled (the number of peers should
   correspond to the number of nodes minus one):

   .. code-block:: shell-session

       cilium-dbg status | grep Encryption
       Encryption: Wireguard   [cilium_wg0 (Pubkey: <..>, Port: 51871, Peers: 2)]

2. Install tcpdump

   .. code-block:: shell-session

       apt-get update
       apt-get -y install tcpdump

3. Check that traffic is sent via the ``cilium_wg0`` tunnel device:
   .. code-block:: shell-session

       tcpdump -n -i cilium_wg0
       tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
       listening on cilium_wg0, link-type RAW (Raw IP), capture size 262144 bytes
       15:05:24.643427 IP 10.244.1.35.51116 > 10.244.3.78.8080: Flags [S], seq 476474887, win 64860, options [mss 1410,sackOK,TS val 648097391 ecr 0,nop,wscale 7], length 0
       15:05:24.644185 IP 10.244.3.78.8080 > 10.244.1.35.51116: Flags [S.], seq 4032860634, ack 476474888, win 64308, options [mss 1410,sackOK,TS val 4004186138 ecr 648097391,nop,wscale 7], length 0
       15:05:24.644238 IP 10.244.1.35.51116 > 10.244.3.78.8080: Flags [.], ack 1, win 507, options [nop,nop,TS val 648097391 ecr 4004186138], length 0
       15:05:24.644277 IP 10.244.1.35.51116 > 10.244.3.78.8080: Flags [P.], seq 1:81, ack 1, win 507, options [nop,nop,TS val 648097392 ecr 4004186138], length 80: HTTP: GET / HTTP/1.1
       15:05:24.644370 IP 10.244.3.78.8080 > 10.244.1.35.51116: Flags [.], ack 81, win 502, options [nop,nop,TS val 4004186139 ecr 648097392], length 0
       15:05:24.645536 IP 10.244.3.78.8080 > 10.244.1.35.51116: Flags [.], seq 1:1369, ack 81, win 502, options [nop,nop,TS val 4004186140 ecr 648097392], length 1368: HTTP: HTTP/1.1 200 OK
       15:05:24.645569 IP 10.244.1.35.51116 > 10.244.3.78.8080: Flags [.], ack 1369, win 502, options [nop,nop,TS val 648097393 ecr 4004186140], length 0
       15:05:24.645578 IP 10.244.3.78.8080 > 10.244.1.35.51116: Flags [P.], seq 1369:2422, ack 81, win 502, options [nop,nop,TS val 4004186140 ecr 648097392], length 1053: HTTP
       15:05:24.645644 IP 10.244.1.35.51116 > 10.244.3.78.8080: Flags [.], ack 2422, win 494, options [nop,nop,TS val 648097393 ecr 4004186140], length 0
       15:05:24.645752 IP 10.244.1.35.51116 > 10.244.3.78.8080: Flags [F.], seq 81, ack 2422, win 502, options [nop,nop,TS val 648097393 ecr 4004186140], length 0
       15:05:24.646431 IP 10.244.3.78.8080 > 10.244.1.35.51116: Flags [F.], seq 2422, ack 82, win 502, options [nop,nop,TS val 4004186141 ecr 648097393], length 0
       15:05:24.646484 IP 10.244.1.35.51116 > 10.244.3.78.8080: Flags [.], ack
       2423, win 502, options [nop,nop,TS val 648097394 ecr 4004186141], length 0

Troubleshooting
===============

When troubleshooting dropped or unencrypted packets between pods, the
following commands can be helpful:

.. code-block:: shell-session

    # From node A:
    cilium-dbg debuginfo --output json | jq .encryption
    {
      "wireguard": {
        "interfaces": [
          {
            "listen-port": 51871,
            "name": "cilium_wg0",
            "peer-count": 1,
            "peers": [
              {
                "allowed-ips": [
                  "10.154.1.107/32",
                  "10.154.1.195/32"
                ],
                "endpoint": "192.168.61.12:51871",
                "last-handshake-time": "2021-05-05T12:31:24.418Z",
                "public-key": "RcYfs/GEkcnnv6moK5A1pKnd+YYUue21jO9I08Bv0zo="
              }
            ],
            "public-key": "DrAc2EloK45yqAcjhxerQKwoYUbLDjyrWgt9UXImbEY="
          }
        ]
      }
    }

    # From node B:
    cilium-dbg debuginfo --output json | jq .encryption
    {
      "wireguard": {
        "interfaces": [
          {
            "listen-port": 51871,
            "name": "cilium_wg0",
            "peer-count": 1,
            "peers": [
              {
                "allowed-ips": [
                  "10.154.2.103/32",
                  "10.154.2.142/32"
                ],
                "endpoint": "192.168.61.11:51871",
                "last-handshake-time": "2021-05-05T12:31:24.631Z",
                "public-key": "DrAc2EloK45yqAcjhxerQKwoYUbLDjyrWgt9UXImbEY="
              }
            ],
            "public-key": "RcYfs/GEkcnnv6moK5A1pKnd+YYUue21jO9I08Bv0zo="
          }
        ]
      }
    }

For pod-to-pod packets to be successfully encrypted and decrypted, the
following must hold:

- The WireGuard public key of a remote node in the ``peers[*].public-key``
  section matches the actual public key of the remote node (the
  ``public-key`` retrieved via the same command on the remote node).
- ``peers[*].allowed-ips`` contains the list of IP addresses of the pods
  running on the remote node.

Cluster Mesh
============

WireGuard-enabled Cilium clusters can be connected via :ref:`Cluster Mesh`.
The ``clustermesh-apiserver`` automatically forwards the necessary WireGuard
public keys to remote clusters. In such a setup, it is important to note
that all participating clusters must have WireGuard encryption enabled, i.e.
mixed mode is currently not supported.
In addition, UDP traffic on port ``51871`` between nodes of different
clusters must be allowed.

.. _node-node-wg:

Node-to-Node Encryption (beta)
==============================

By default, WireGuard-based encryption only encrypts traffic between
Cilium-managed pods. To enable node-to-node encryption, which additionally
encrypts node-to-node, pod-to-node and node-to-pod traffic, use the
following configuration options:

.. tabs::

    .. group-tab:: Cilium CLI

        If you are deploying Cilium with the Cilium CLI, pass the following
        options:

        .. parsed-literal::

            cilium install |CHART_VERSION| \
              --set encryption.enabled=true \
              --set encryption.type=wireguard \
              --set encryption.nodeEncryption=true

    .. group-tab:: Helm

        If you are deploying Cilium with Helm by following
        :ref:`k8s_install_helm`, pass the following options:

        .. cilium-helm-install::
            :namespace: kube-system
            :set: encryption.enabled=true encryption.type=wireguard encryption.nodeEncryption=true

.. warning::

    Cilium automatically disables node-to-node encryption from and to
    Kubernetes control-plane nodes, i.e. any node with the
    ``node-role.kubernetes.io/control-plane`` label opts out of node-to-node
    encryption. This is done to ensure that worker nodes are always able to
    communicate with the Kubernetes API to update their WireGuard public
    keys.

    With node-to-node encryption enabled, the connection to the
    kube-apiserver would also be encrypted with WireGuard. This creates a
    bootstrapping problem where the connection used to update the WireGuard
    public key is itself encrypted with the public key about to be replaced.
    This is problematic if a node needs to change its public key, for
    example because it generated a new private key after a node reboot or
    node re-provisioning. Therefore, by not encrypting the connection from
    and to the kube-apiserver host network with WireGuard, we ensure that
    worker nodes are never accidentally locked out from the control plane.

    Note that even if WireGuard node-to-node encryption is disabled on those
    nodes, the Kubernetes control plane itself is usually still encrypted by
    Kubernetes using mTLS, and pod-to-pod traffic for any Cilium-managed
    pods on the control-plane nodes is also still encrypted via Cilium's
    WireGuard implementation.

    The label selector for matching the control-plane nodes which shall not
    participate in node-to-node encryption can be configured using the
    ``node-encryption-opt-out-labels`` ConfigMap option.
    It defaults to ``node-role.kubernetes.io/control-plane``. You may force
    node-to-node encryption from and to control-plane nodes by using an
    empty label selector with that option. Note that doing so is not
    recommended, as it requires you to always manually update a node's
    public key in its corresponding ``CiliumNode`` CRD whenever a worker
    node's public key changes, given that the worker node will be unable to
    do so itself.

N/S load balancer traffic is not encrypted when an intermediate node
redirects a request to a different node with the following load balancer
configuration:

- LoadBalancer & NodePort XDP Acceleration
- Direct Server Return (DSR) in non-Geneve dispatch mode

Egress Gateway replies are not encrypted when XDP Acceleration is enabled.

Which traffic is encrypted
==========================

The following table denotes which packets are encrypted with WireGuard
depending on the mode. Configurations or communication pairs not present in
the table are not subject to encryption with WireGuard and should therefore
be assumed to be unencrypted.
+----------------+-------------------+----------------------+-----------------+
| Origin         | Destination       | Configuration        | Encryption mode |
+================+===================+======================+=================+
| Pod            | remote Pod        | any                  | default         |
+----------------+-------------------+----------------------+-----------------+
| Pod            | remote Node       | any                  | node-to-node    |
+----------------+-------------------+----------------------+-----------------+
| Node           | remote Pod        | any                  | node-to-node    |
+----------------+-------------------+----------------------+-----------------+
| Node           | remote Node       | any                  | node-to-node    |
+----------------+-------------------+----------------------+-----------------+
| **Services**                                                                |
+----------------+-------------------+----------------------+-----------------+
| Pod            | remote Pod via    | any                  | default         |
|                | ClusterIP Service |                      |                 |
+----------------+-------------------+----------------------+-----------------+
| Pod            | remote Pod via    | Socket LB            | default         |
|                | non ClusterIP     |                      |                 |
|                | Service (e.g.,    |                      |                 |
|                | NodePort)         |                      |                 |
+----------------+-------------------+----------------------+-----------------+
| Pod            | remote Pod via    | kube-proxy           | node-to-node    |
|                | non ClusterIP     |                      |                 |
|                | Service           |                      |                 |
+----------------+-------------------+----------------------+-----------------+
| Client outside | remote Pod via    | KPR,                 | default         |
| cluster        | Service           | overlay routing,     |                 |
|                |                   | without DSR,         |                 |
|                |                   | without XDP          |                 |
+----------------+-------------------+----------------------+-----------------+
| Client outside | remote Pod via    | native routing,      | node-to-node    |
| cluster        | Service           | without XDP          |                 |
+----------------+-------------------+----------------------+-----------------+
| Client outside | remote Pod or     | DSR in Geneve mode,  | default         |
| cluster        | remote Node via   | without XDP          |                 |
|                | Service           |                      |                 |
+----------------+-------------------+----------------------+-----------------+
| Pod            | remote Pod via L7 | L7 Proxy / Ingress   | default         |
|                | Proxy or L7       |                      |                 |
|                | Ingress Service   |                      |                 |
+----------------+-------------------+----------------------+-----------------+
| **Egress Gateway**                                                          |
+----------------+-------------------+----------------------+-----------------+
| Pod            | Egress Gateway    | Egress Gateway       | default         |
|                | node              |                      |                 |
+----------------+-------------------+----------------------+-----------------+
| Egress Gateway | Pod               | Egress Gateway       | default         |
| node           |                   | without XDP          |                 |
+----------------+-------------------+----------------------+-----------------+

* **Pod**: Cilium-managed K8s Pod running in a non-host network namespace.
* **Node**: K8s host running Cilium, or a Pod running in a host network
  namespace managed by Cilium.
* **Service**: K8s Service (ClusterIP, NodePort, LoadBalancer, ExternalIP).
* **Client outside cluster**: Any client which runs outside the K8s cluster.
  The request between the client and the Node is not encrypted. Depending on
  the Cilium configuration (see the table at the beginning of this section),
  it might be encrypted only between the intermediate Node (which received
  the client request first) and the destination Node.
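The troubleshooting guidance earlier on this page notes that
``peers[*].allowed-ips`` must cover the remote node's pod IPs for pod-to-pod
encryption to work. This can be checked offline with a few lines of Go; the
helper function is hypothetical, and the example IPs are taken from the
troubleshooting output above:

```go
package main

import (
	"fmt"
	"net/netip"
)

// coveredByAllowedIPs reports whether ip falls within any of the peer's
// allowed-ips prefixes, as listed by "cilium-dbg debuginfo".
func coveredByAllowedIPs(ip string, allowedIPs []string) bool {
	addr, err := netip.ParseAddr(ip)
	if err != nil {
		return false
	}
	for _, cidr := range allowedIPs {
		prefix, err := netip.ParsePrefix(cidr)
		if err != nil {
			continue // skip malformed entries
		}
		if prefix.Contains(addr) {
			return true
		}
	}
	return false
}

func main() {
	// allowed-ips of the node-B peer, from the example output above.
	allowed := []string{"10.154.1.107/32", "10.154.1.195/32"}

	fmt.Println(coveredByAllowedIPs("10.154.1.107", allowed)) // true
	fmt.Println(coveredByAllowedIPs("10.154.2.103", allowed)) // false
}
```

A pod IP that prints ``false`` against every peer would be sent unencrypted
or dropped, which is exactly the symptom the troubleshooting section
describes.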
Known Issues
============

- Packets may be dropped while the WireGuard device is being reconfigured,
  leading to connectivity issues. This happens when endpoints are added or
  removed, or when node updates occur. In some cases this may lead to failed
  ``sendmsg`` and ``sendto`` calls. See :gh-issue:`33159` for more details.

Legal
=====

"WireGuard" is a registered trademark of Jason A. Donenfeld.
.. _envoy:

=====
Envoy
=====

The Envoy proxy shipped with Cilium is built with minimal Envoy extensions
and custom policy enforcement filters. Cilium uses this minimal distribution
as its host proxy for enforcing HTTP and other L7 policies as specified in
network policies for the cluster. The Cilium proxy is distributed within the
Cilium images. For more information on the version compatibility matrix, see
the Cilium Proxy documentation.

***********************
Deployment as DaemonSet
***********************

Background
==========

When Cilium L7 functionality (Ingress, Gateway API, Network Policies with L7
functionality, L7 Protocol Visibility) is enabled or installed in a
Kubernetes cluster, the Cilium agent starts an Envoy proxy as a separate
process within the Cilium agent pod. That Envoy proxy instance becomes
responsible for proxying all matching L7 requests on that node. As a result,
L7 traffic targeted by policies depends on the availability of the Cilium
agent pod.

Alternatively, it is possible to deploy the Envoy proxy as an independently
life-cycled DaemonSet called ``cilium-envoy`` instead of running it from
within the Cilium agent pod.

In both deployment modes, the communication between the Cilium agent and the
Envoy proxy takes place via UNIX domain sockets, be it streaming the access
logs (e.g. L7 Protocol Visibility), updating the configuration via xDS, or
accessing the admin interface.

Due to the use of UNIX domain sockets, the Envoy DaemonSet and the Cilium
agent need to have compatible SELinux types when SELinux is enabled on the
host. If not specified otherwise, this is the case: both use the highly
privileged type ``spc_t``. SELinux is enabled by default on Red Hat
OpenShift Container Platform.
Enable and configure Envoy DaemonSet
====================================

To enable the dedicated Envoy proxy DaemonSet, install Cilium with the Helm
value ``envoy.enabled`` set to ``true``. Please see the
:ref:`helm_reference` (keys with ``envoy.*``) for detailed information on
how to configure the Envoy proxy DaemonSet.

Potential Benefits
==================

- Cilium agent restarts (e.g. for upgrades) without impact on the live
  traffic proxied via Envoy.
- Envoy patch release upgrades without impact on the Cilium agent.
- Separate CPU and memory limits for Envoy and the Cilium agent for
  performance isolation.
- Envoy application logs not mixed with those of the Cilium agent.
- Dedicated health probes for the Envoy proxy.
- Explicit deployment of the Envoy proxy during Cilium installation
  (compared to on demand in the embedded mode).

.. admonition:: Video
  :class: attention

  If you'd like to see Cilium Envoy in action, check out eCHO episode 127:
  Cilium & Envoy.

*************
Go Extensions
*************

.. note::

    This feature is currently in beta phase.

.. note::

    The Go extensions proxylib framework resides in the cilium/proxy
    repository.

This is a guide for developers who are interested in writing a Go extension
to the Envoy proxy as part of Cilium.

.. image:: _static/proxylib_logical_flow.png

As depicted above, this framework allows a developer to write a small amount
of Go code (green box) focused on parsing a new API protocol, and this Go
code is able to take full advantage of Cilium features including
high-performance redirection to/from Envoy, a rich L7-aware policy language
and access logging, and visibility into encrypted traffic via kTLS (coming
soon!). In sum, you as the developer need only worry about the logic of
parsing the protocol, and Cilium + Envoy + eBPF do the heavy lifting.
This guide uses simple examples based on a hypothetical "r2d2" protocol (see
``proxylib/r2d2/r2d2parser.go``) that might be used to talk to a simple
protocol droid a long time ago in a galaxy far, far away. But it also points
to other real protocols like Memcached and Cassandra that already exist in
the cilium/proxylib directory.

Step 1: Decide on a Basic Policy Model
======================================

To get started, take some time to think about what it means to provide
protocol-aware security in the context of your chosen protocol. Most
protocols follow a common pattern of a client who performs an *operation* on
a *resource*. For example:

- A standard RESTful HTTP request has GET/POST/PUT/DELETE methods
  (operation) and URLs (resource).
- A database protocol like MySQL has SELECT/INSERT/UPDATE/DELETE actions
  (operation) on a combined database + table name (resource).
- A queueing protocol like Kafka has produce/consume (operation) on a
  particular queue (resource).

A common policy model is to allow the user to whitelist certain operations
on one or more resources. In some cases, the resources need to support
regexes to avoid explicit matching on variable content like IDs (e.g.,
``/users/`` would match ``/users/.*``).

In our "r2d2" example, we'll use a basic set of operations
(READ/WRITE/HALT/RESET). The READ and WRITE commands also support a
'filename' resource, while HALT and RESET have no resource.

Step 2: Understand Protocol, Encoding, Framing and Types
========================================================

Next, get your head wrapped around how the protocol looks in terms of raw
data, as this is what you'll be parsing. Try looking for official
definitions of the protocol or API.
Official docs will not only help you quickly learn how the protocol works,
but will also document tricky corner cases that wouldn't be obvious just
from regular use of the protocol. For example, there are official specs for
the Redis Protocol, the Cassandra Protocol, and AWS SQS.

These specs help you understand protocol aspects like:

- **encoding / framing**: how to recognize the beginning/end of individual
  requests/replies within a TCP stream. This typically involves reading a
  header that encodes the overall request length, though some simple
  protocols use a delimiter like ``\r\n`` to separate messages.
- **request/reply fields**: for most protocols, you will need to parse out
  fields at various offsets into the request data in order to extract
  security-relevant values for visibility + filtering. In some cases, access
  control requires filtering requests from clients to servers, but parsing
  replies will also be required if reply data is needed to understand future
  requests (e.g., prepared statements in database protocols).
- **message flow**: specs often describe various dependencies between
  different requests. Basic protocols tend to follow a simple serial
  request/reply model, but more advanced protocols will support pipelining
  (i.e., sending multiple requests before any replies have been received).
- **protocol errors**: when a Cilium proxy denies a request based on policy,
  it should return a protocol-specific error to the client (e.g., in HTTP, a
  proxy should return a "403 Access Denied" error). Looking at the protocol
  spec will typically indicate how you should return an equivalent "Access
  Denied" error.

Sometimes, the protocol spec does not give you a full sense of the set of
commands that can be sent over the protocol. In that case, looking at
higher-level user documentation can fill in some of these knowledge gaps;
the Redis Commands and Cassandra CQL Commands references are good examples.
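To make the framing bullet concrete, here is a small standalone sketch (not
Cilium code) of the two common framing styles: splitting a byte stream on a
``\r\n`` delimiter, and reading a length-prefixed binary header. The 4-byte
big-endian length prefix is an assumption chosen for illustration:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// frameByDelimiter returns the first complete message in buf (without the
// trailing "\r\n") and the number of bytes it consumed, or ok=false when
// the delimiter has not arrived yet and more data is needed.
func frameByDelimiter(buf []byte) (msg []byte, consumed int, ok bool) {
	i := bytes.Index(buf, []byte("\r\n"))
	if i < 0 {
		return nil, 0, false
	}
	return buf[:i], i + 2, true
}

// frameByLengthPrefix assumes a hypothetical binary protocol whose header
// is a 4-byte big-endian length of the body that follows it.
func frameByLengthPrefix(buf []byte) (body []byte, consumed int, ok bool) {
	const headerLen = 4
	if len(buf) < headerLen {
		return nil, 0, false // need at least a full header
	}
	n := int(binary.BigEndian.Uint32(buf))
	if len(buf) < headerLen+n {
		return nil, 0, false // header seen, but body incomplete
	}
	return buf[headerLen : headerLen+n], headerLen + n, true
}

func main() {
	msg, n, ok := frameByDelimiter([]byte("READ file1\r\nWRI"))
	fmt.Printf("%q consumed=%d ok=%v\n", msg, n, ok) // "READ file1" consumed=12 ok=true

	_, _, ok = frameByDelimiter([]byte("READ fi")) // no delimiter yet
	fmt.Println(ok)                                // false

	body, n, ok := frameByLengthPrefix([]byte{0, 0, 0, 2, 'h', 'i', 'x'})
	fmt.Printf("%q consumed=%d ok=%v\n", body, n, ok) // "hi" consumed=6 ok=true
}
```

The "incomplete" results map naturally onto the MORE return value discussed
in Step 6 below, and the consumed byte counts onto PASS/DROP.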
Another great trick is to use Wireshark to capture raw packet data between a
client and server. For many protocols, the Wireshark Sample Captures page
has already saved captures for us. Otherwise, you can easily use tcpdump to
capture a file. For example, for MySQL traffic on port 3306, you could run
the following in a container running the MySQL client or server:
``tcpdump -s 0 port 3306 -w mysql.pcap``.

In our example r2d2 protocol, we'll keep the spec as simple as possible. It
is a text-only protocol, with each request being a line terminated by
``\r\n``. A request starts with a case-insensitive string command ("READ",
"WRITE", "HALT", "RESET"). If the command is "READ" or "WRITE", the command
must be followed by a space and a non-empty filename that contains only
non-whitespace ASCII characters.

Step 3: Search for Existing Parser Code / Libraries
===================================================

Look for open source Go libraries or code that can help. Is there existing
open source Go code that parses your protocol that you can leverage, either
directly as a library or as a motivating example? For example, the
``tidwall/redcon`` library parses Redis in Go, and Vitess parses MySQL in
Go. Wireshark dissectors are also a wealth of protocol parsers written in C
that can serve as useful guidance.

Note: finding client-only protocol parsing code is typically less helpful
than finding a proxy implementation or a full parser library. This is
because the set of requests a client parses is typically the inverse of the
set of requests a Cilium proxy needs to parse, since the proxy mimics the
server rather than the client.
Still, viewing a Go client can give you a general idea of how to parse the
general serialization format of the protocol.

Step 4: Follow the Cilium Developer Guide
=========================================

It is easiest to start Cilium development by following the :ref:`dev_guide`.
After cloning the cilium/proxy repo:

.. code-block:: shell-session

    $ cd proxy
    $ vagrant up
    $ cd proxylib

While this dev VM is running, you can open additional terminals to the
cilium/proxy dev VM by running ``vagrant ssh`` from within the cilium/proxy
source directory.

Step 5: Create New Proxy Skeleton
=================================

From inside the proxylib directory, copy the r2d2 directory and rename the
files. Replace ``newproto`` with your protocol:

.. code-block:: shell-session

    $ mkdir newproto
    $ cd newproto
    $ cp ../r2d2/r2d2parser.go newproto.go
    $ cp ../r2d2/r2d2parser_test.go newproto_test.go

Within both newproto.go and newproto_test.go, update references to r2d2 with
your protocol name. Search for both ``r2d2`` and ``R2D2``. Also, edit
proxylib.go and add the following import line:

::

    _ "github.com/cilium/proxy/proxylib/newproto"

Step 6: Update OnData Method
============================

Implementing a parser requires you as the developer to implement three
primary functions, shown in blue in the diagram below. We will cover
OnData() in this section, and the other functions in `Step 9: Add Policy
Loading and Matching`_.

.. image:: _static/proxylib_key_functions.png

The beating heart of your parsing is implementing the OnData function. You
can think of any proxy as having two data streams, one in the request
direction (i.e., client to server) and one in the reply direction (i.e.,
server to client). OnData is called when there is data to process, and the
value of the boolean 'reply' parameter indicates the direction of the
stream for a given call to OnData. The data passed to OnData is a slice of
byte slices (i.e., an array of byte arrays).
The return values of the OnData function tell the Go framework how data in
the stream should be processed, with four primary outcomes:

- **PASS x**: The next x bytes in the data stream passed to OnData represent
  a request/reply that should be passed on to the server/client. The common
  case here is that this is a request that should be allowed by policy, or
  that no policy is applied. Note: x bytes may be less than the total amount
  of data passed to OnData, in which case the remaining bytes will still be
  in the data stream when OnData is invoked next. x bytes may also be more
  than the data that has been passed to OnData. For example, in the case of
  a protocol where the parser filters only on values in a protocol header,
  it is often possible to make a filtering decision, and then pass (or drop)
  the size of the full request/reply without having the entire request
  passed to Go.

- **MORE x**: The buffers passed to OnData do not represent all of the data
  required to frame and filter the request/reply. Instead, the parser needs
  to see at least x additional bytes beyond the current data to make a
  decision. In some cases, the full request must be read to understand
  framing and filtering, but in others a decision can be made simply by
  reading a protocol header. When parsing data, be defensive, and recognize
  that it is technically possible that data arrives one byte at a time. Two
  common scenarios exist here:

  - **Text-based protocols**: For text-based protocols that use a delimiter
    like ``\r\n``, it is common to simply check whether the delimiter
    exists, and return MORE 1 if it does not, as technically one more
    character could result in the delimiter being present. See the sample
    r2d2 parser as a basic example of this.

  - **Binary-based protocols**: Many binary protocols have a fixed header
    length, which contains a field that indicates the remaining length of
    the request. In the binary case, first check to make sure a full header
    has been received. Typically the header will indicate both the full
    request length (i.e., framing), as well as the request type, which
    indicates how much of the full request must be read in order to perform
    filtering (in many cases, this is less than the full request). A binary
    parser will typically return MORE if the data passed to OnData is less
    than the header length. After reading a full header, the simple approach
    is for the parser to return MORE to wait for the full request to be
    received and parsed (see the existing CassandraParser as an example).
    However, as an optimization, the parser can attempt to request only the
    minimum number of bytes required beyond the header to make a policy
    decision, and then PASS or DROP the remaining bytes without requiring
    them to be passed to the Go parser.

- **DROP x**: Remove the first x bytes from the data stream passed to
  OnData, as they represent a request/reply that should not be forwarded to
  the client or server based on policy. Don't worry about making OnData
  return a drop right away, as we'll return to DROP in a later step below.
- **ERROR y**: The connection contains data that does not match the protocol
  spec, and prevents you from further parsing the data stream. The framework
  will terminate the connection. An example would be a request length that
  falls outside the min/max specified by the protocol spec, or values for a
  field that fall outside the values indicated by the spec (e.g., wrong
  versions, unknown commands). If you are still able to properly frame the
  requests, you can also choose to simply drop the request and return a
  protocol error (e.g., similar to an "HTTP 400 Bad Request" error). But in
  all cases, you should write your parser defensively, such that you never
  forward a request that you do not understand, as such a request could
  become an avenue for subverting the intended security visibility and
  filtering policies. See proxylib/types.h for the set of valid error codes.

See proxylib/proxylib/parserfactory.go for the official OnData interface
definition.

Keep it simple, and work iteratively. Start out just getting the framing
right. Can you write a parser that just prints out the length and contents
of a request, and then PASSes each request with no policy enforcement? One
simple trick is to comment out the r2d2 parsing logic in OnData, but leave
it in the file as a reference, as your protocol will likely require similar
code as we add more functionality below.

Step 7: Use Unit Testing To Drive Development
=============================================

Use unit tests to drive your development. It's tempting to want to first
test your parser by firing up a client and server and developing on the
fly.
But in our experience you’ll iterate faster by using the great unit test framework created along with the Go proxy framework. This framework lets you pass in an example set of requests as byte arrays to a CheckOnDataOK method, which are passed to the parser's OnData method. CheckOnDataOK takes a set of expected return values, and compares them to the actual return values from OnData processing the byte arrays. Take some time to look at the unit tests for the r2d2 parser, and then for more complex parsers like Cassandra and Memcached. For simple text-based protocols, you can simply write ASCII strings to represent protocol messages, and convert them to []byte arrays and pass them to CheckOnDataOK. For binary protocols, one can either create byte arrays directly, or use a mechanism to convert a hex string to byte[] array using a helper function like hexData in cassandra/cassandraparser\_test.go A great way to get the exact data to pass in is to copy the data from the Wireshark captures mentioned above in Step #2. You can see the full application layer data streams in Wireshark by right-clicking on a packet and selecting “Follow As… TCP Stream”. If the protocol is text-based, you can copy the data as ASCII (see r2d2/r2d2parser\_test.go as an example of this). For binary data, it can be easier to instead select “raw” in the drop-down, and use a basic utility to convert from ascii strings to binary raw data (see cassandra/cassandraparser\_test.go for an example of this). To run the unit tests, go to proxylib/newproto and run: .. code-block:: shell-session $ go test This will build the latest version of your parser and unit test files and run the unit tests. Step 8: Add More
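For binary protocols, a hex-string helper similar in spirit to the hexData
function in cassandra/cassandraparser_test.go can be sketched as follows. The
helper name and error handling here are illustrative, not Cilium's actual
code:

.. code-block:: go

   package main

   import (
       "encoding/hex"
       "fmt"
   )

   // hexData concatenates hex strings (e.g., copied from Wireshark's "Raw"
   // view) and decodes them into a single byte slice for use in unit tests.
   // Illustrative sketch only.
   func hexData(chunks ...string) []byte {
       var joined string
       for _, c := range chunks {
           joined += c
       }
       data, err := hex.DecodeString(joined)
       if err != nil {
           panic(err)
       }
       return data
   }

   func main() {
       // "READ" followed by CRLF, expressed as hex.
       frame := hexData("52454144", "0d0a")
       fmt.Printf("%q\n", frame) // "READ\r\n"
   }

A helper like this keeps binary test fixtures readable while still exercising
the parser with exact wire bytes.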
https://github.com/cilium/cilium/blob/main//Documentation/security/network/proxy/envoy.rst
Step 8: Add More Advanced Parsing
=================================

Thinking back to step #1, what are the critical fields to parse out of the
request in order to understand the "operation" and "resource" of each
request? Can you print those out for each request? Use the unit test
framework to pass in increasingly complex requests, and confirm that the
parser prints out the right values, and that the unit tests are properly
slicing the data stream into requests and parsing out the required fields.

A couple of scenarios to make sure your parser handles properly via unit
tests:

- data chunks that are less than a full request (return MORE)
- requests that are spread across multiple data chunks (return MORE, then
  PASS)
- multiple requests that are bundled into a single data chunk (return PASS,
  then another PASS)
- rejection of malformed requests (return ERROR)

For certain advanced cases, a parser is required to store state across
requests. In this case, data can be stored using data structures that are
included as part of the main parser struct. See CassandraParser in
cassandra/cassandraparser.go as an example of how the parser uses a string to
store the current 'keyspace' in use, and uses Go maps to keep state required
for handling prepared queries.

Step 9: Add Policy Loading and Matching
=======================================

Once you have the parsing of most protocol messages ironed out, it's time to
start enforcing policy. First, create a Go object that will represent a
single rule in the policy language. For example, this is the rule for the
r2d2 protocol, which performs an exact match on the command string and a
regex match on the filename:

.. code-block:: go

   type R2d2Rule struct {
       cmdExact          string
       fileRegexCompiled *regexp.Regexp
   }

There are two key methods to update:

- Matches: This function implements the basic logic of comparing data from a
  single request against a single policy rule, and returns true if that rule
  matches (i.e., allows) that request.
- RuleParser: Reads key-value pairs from policy, validates those entries, and
  stores them as a Rule object.

See r2d2/r2d2parser.go for examples of both functions for the r2d2 protocol.

You'll also need to update OnData to call p.connection.Matches(), and if this
function returns false, return DROP for the request. Note: despite the
similar names between the Matches() function you create in your
newprotoparser.go and p.connection.Matches(), do not confuse the two. Your
OnData function should always call p.connection.Matches() rather than
invoking your own Matches() directly, as p.connection.Matches() calls the
parser's Matches() function only on the subset of L7 rules that apply for the
given Cilium source identity for this particular connection.

Once you add the logic to call Matches() and return DROP in OnData, you will
need to update unit tests to have policies that allow the traffic you expect
to be passed. The following is an example of how r2d2/r2d2parser_test.go adds
an allow-all policy for a given test:

.. code-block:: go

   s.ins.CheckInsertPolicyText(c, "1", []string{`
   name: "cp1"
   policy: 2
   ingress_per_port_policies: <
     port: 80
     rules: <
       l7_proto: "r2d2"
     >
   >
   `})
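The per-rule Matches method described above can be sketched as follows. The
``requestData`` type is a hypothetical stand-in for the parsed fields of one
request, and the wildcard handling is illustrative rather than Cilium's
actual r2d2 implementation:

.. code-block:: go

   package main

   import (
       "fmt"
       "regexp"
   )

   // R2d2Rule mirrors the rule shown above: exact match on the command,
   // regex match on the filename.
   type R2d2Rule struct {
       cmdExact          string
       fileRegexCompiled *regexp.Regexp
   }

   // requestData is a hypothetical stand-in for one parsed request.
   type requestData struct {
       cmd  string
       file string
   }

   // Matches returns true if the rule allows the request. An empty cmdExact
   // or nil regex acts as a wildcard for that field in this sketch.
   func (r *R2d2Rule) Matches(req requestData) bool {
       if r.cmdExact != "" && r.cmdExact != req.cmd {
           return false
       }
       if r.fileRegexCompiled != nil && !r.fileRegexCompiled.MatchString(req.file) {
           return false
       }
       return true
   }

   func main() {
       rule := &R2d2Rule{cmdExact: "READ", fileRegexCompiled: regexp.MustCompile(".*")}
       fmt.Println(rule.Matches(requestData{cmd: "READ", file: "droid.txt"}))  // true
       fmt.Println(rule.Matches(requestData{cmd: "WRITE", file: "droid.txt"})) // false
   }

Remember that in a real parser, OnData calls p.connection.Matches(), which in
turn invokes a per-rule function like this one only for the rules that apply
to the connection's source identity.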
The following is an example of a policy that would allow READ commands with a
file regex of ".*":

.. code-block:: go

   s.ins.CheckInsertPolicyText(c, "1", []string{`
   name: "cp2"
   policy: 2
   ingress_per_port_policies: <
     port: 80
     rules: <
       l7_proto: "r2d2"
       l7_rules: <
         rule: <
           key: "cmd"
           value: "READ"
         >
         rule: <
           key: "file"
           value: ".*"
         >
       >
     >
   >
   `})

Step 10: Inject Error Response
==============================

Simply dropping the request from the request data stream prevents the request
from reaching the server, but it would leave the client hanging, waiting for
a response that will never come, since the server never saw the request.
Instead, the proxy should return an application-layer reply indicating that
access was denied, similar to how an HTTP proxy returns a "403 Access Denied"
error. Look back at the protocol spec discussed in Step 2 to understand what
an access-denied message looks like for this protocol, and use the
p.connection.Inject() method to send this error reply back to the client. See
r2d2/r2d2parser.go for an example:

.. code-block:: go

   p.connection.Inject(true, []byte("ERROR\r\n"))

Note: p.connection.Inject() will inject the data it is passed into the reply
data stream. In order for the client to parse this data correctly, it must be
injected at a proper framing boundary (i.e., in between other reply messages
that may be in the reply data stream). If the client follows a basic serial
request/reply model per connection, this is essentially guaranteed, as at the
time of a denied request there are no other replies potentially in the reply
data stream. But if the protocol supports pipelining (i.e., multiple requests
in flight), replies must be properly framed and PASSed on a per-request
basis, and the timing of the call to p.connection.Inject() must be controlled
such that the client will properly match the error response with the correct
request. See the Memcached parser as an example of how to accomplish this.
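The framing concerns above can be illustrated with a minimal sketch, assuming
a simple CRLF-delimited text protocol. The ``Verdict`` type and ``frame``
function are illustrative only, not proxylib's actual API: the parser reports
MORE when a chunk holds less than a full request, and PASS with the frame
length when one complete request is buffered:

.. code-block:: go

   package main

   import (
       "bytes"
       "fmt"
   )

   // Verdict mimics the spirit of proxylib's OnData return values
   // (illustrative only; not Cilium's actual types).
   type Verdict string

   const (
       MORE Verdict = "MORE" // need more data before a full request is framed
       PASS Verdict = "PASS" // forward the next n bytes as one complete request
   )

   // frame inspects buffered data and returns a verdict plus the number of
   // bytes making up the next complete CRLF-terminated request, if any.
   func frame(data []byte) (Verdict, int) {
       i := bytes.Index(data, []byte("\r\n"))
       if i < 0 {
           return MORE, 0 // partial request: wait for more data
       }
       return PASS, i + 2 // one complete request, including the CRLF
   }

   func main() {
       v, n := frame([]byte("READ droid.txt\r\nWRITE "))
       fmt.Println(v, n) // first request is complete; trailing bytes wait
       v, n = frame([]byte("WRITE "))
       fmt.Println(v, n) // partial request
   }

Only data that has been framed this way should ever be PASSed, DROPped, or
have an error reply injected in front of it.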
Step 11: Add Access Logging
===========================

Cilium also has the notion of an "access log", which records each request
handled by the proxy and indicates whether the request was allowed or denied.
A call to ``p.connection.Log()`` implements access logging. See the OnData
function in r2d2/r2d2parser.go as an example:

.. code-block:: go

   p.connection.Log(access_log_entry_type,
       &cilium.LogEntry_GenericL7{
           &cilium.L7LogEntry{
               Proto: "r2d2",
               Fields: map[string]string{
                   "cmd":  reqData.cmd,
                   "file": reqData.file,
               },
           },
       })

Step 12: Manual Testing
=======================

Find the standard Docker container for running the protocol server. Often the
same image also includes a CLI client that you can use. Start both a server
and a client container in the Cilium dev VM, and attach them to the already
created "cilium-net". For example, with Cassandra, we run:

.. code-block:: shell-session

   docker run --name cass-server -l id=cass-server -d --net cilium-net cassandra
   docker run --name cass-client -l id=cass-client -d --net cilium-net cassandra sh -c 'sleep 3000'

Note that we run both containers with labels that will make it easy to refer
to these containers in a Cilium network policy, and that the client container
runs the sleep command, as we will use ``docker exec`` to access the client
CLI.

Use ``cilium-dbg endpoint list`` to identify the IP address of the protocol
server:

.. code-block:: shell-session

   $ cilium-dbg endpoint list
   ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])   IPv6                 IPv4            STATUS
              ENFORCEMENT        ENFORCEMENT
   2987       Disabled           Disabled          31423      container:id=cass-server      f00d::a0b:0:0:bab    10.11.51.247    ready
   27333      Disabled           Disabled          4          reserved:health               f00d::a0b:0:0:6ac5   10.11.92.46     ready
   50923      Disabled           Disabled          18253      container:id=cass-client      f00d::a0b:0:0:c6eb   10.11.175.191   ready
One can then invoke the client CLI using that server IP address (10.11.51.247
in the above example):

.. code-block:: shell-session

   docker exec -it cass-client sh -c 'cqlsh 10.11.51.247 -e "select * from system.local"'

Note that in the above example, ingress policy is not enforced for the
Cassandra server endpoint, so no data will flow through the Cassandra parser.
A simple "allow all" L7 Cassandra policy can be used to send all data to the
Cassandra server through the Go Cassandra parser. Such a policy has a single
empty rule, which matches all requests.

When performing manual testing, remember that each time you change your Go
proxy code, you must re-run ``make`` and ``sudo make install`` and then
restart the cilium-agent process. If the only changes you have made since
last compiling Cilium are in your cilium/proxylib directory, you can safely
just run ``make`` and ``sudo make install`` in that directory, which saves
time. For example:

.. code-block:: shell-session

   $ cd proxylib   # only safe if this is the only directory that has changed
   $ make
   $ sudo make install

If you rebase or other files change, you need to run both commands from the
top-level directory.

The Cilium agent defaults to running as a service in the development VM.
However, the default options do not include the ``--debug-verbose=flow``
flag, which is critical for getting visibility when troubleshooting Go proxy
frameworks. So it is easiest to stop the cilium service and instead run
cilium-agent directly as a command in a terminal window, adding the
``--debug-verbose=flow`` flag:

.. code-block:: shell-session

   $ sudo service cilium stop
   $ sudo /usr/bin/cilium-agent --debug --ipv4-range 10.11.0.0/16 --kvstore-opt etcd.address=192.168.60.11:4001 --kvstore etcd -t vxlan --fixed-identity-mapping=128=kv-store --fixed-identity-mapping=129=kube-dns --debug-verbose=flow

Step 13: Add Runtime Tests
==========================

Before submitting this change to the Cilium community, it is recommended that
you add runtime tests that will run as part of Cilium's continuous
integration testing. Usually these runtime tests can be based on the same
container images and test commands you used for manual testing.

The best approach for adding runtime tests is typically to start by copying
and pasting an existing L7 protocol runtime test and then updating it to run
the container images and CLI commands specific to the new protocol. See
cilium/test/runtime/cassandra.go as an example that matches the use of
Cassandra described above in the manual testing section. Note that the JSON
policy files used by the runtime tests are stored in
cilium/test/runtime/manifests, and the Cassandra example policies in those
directories are easy to use as a basis for similar policies you may create
for your new protocol.

Step 14: Review Spec for Corner Cases
=====================================

Many protocols have advanced features or corner cases that will not manifest
themselves as part of basic testing. Once you have written a first rev of the
parser, it is a good idea to go back and review the protocol's spec or list
of commands to see which aspects, if any, may fall outside the scope of your
initial parser. For example, corner cases like the handling of empty or nil
lists may not show up in your testing, but may cause your parser to fail. Add
more unit tests to cover these corner cases. It is OK for the first rev of
your parser not to handle all types of requests, or to have a simplified
policy structure in terms of which fields can be matched.
However, it is important to know what aspects of the protocol you are not
parsing, and to ensure that this does not lead to any security concerns. For
example, failing to parse prepared statements in a database protocol and
instead just passing PREPARE and EXECUTE commands through would leave a
gaping security hole that would render your other filtering meaningless in
the face of a sophisticated attacker.

Step 15: Write Docs or Getting Started Guide (optional)
=======================================================

At a minimum, the policy examples included as part of the runtime tests serve
as basic documentation of the policy and its expected behavior. But we also
encourage adding more user-friendly examples and documentation, for example,
Getting Started Guides. For a good example to follow, see :gh-issue:`5661`.
Also be sure to update ``Documentation/security/index.rst`` with a link to
this new Getting Started Guide.

With that, you are ready to post this change for feedback from the Cilium
community. Congrats!
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

Using Kubernetes Constructs In Policy
=====================================

This section covers Kubernetes specific network policy aspects.

.. _k8s_namespaces:

Namespaces
----------

`Namespaces `_ are used to create virtual clusters within a Kubernetes
cluster. All Kubernetes objects, including `NetworkPolicy` and
`CiliumNetworkPolicy`, belong to a particular namespace.

Known Pitfalls
--------------

This section covers known pitfalls when using Kubernetes constructs in
policy.

Considerations Of Namespace Boundaries
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Depending on how a policy is defined and created, Kubernetes namespaces are
automatically taken into account. Network policies imported directly with the
:ref:`api_ref` apply to all namespaces unless a namespace selector is
specified as described in :ref:`example_cnp_ns_boundaries`.

.. _example_cnp_ns_boundaries:

Example
^^^^^^^

This example demonstrates how to enforce Kubernetes namespace-based
boundaries for the namespaces ``ns1`` and ``ns2`` by enabling default-deny on
all pods of either namespace and then allowing communication from all pods
within the same namespace.

.. note:: The example locks down ingress of the pods in ``ns1`` and ``ns2``.
   This means that the pods can still communicate egress to anywhere unless
   the destination is in either ``ns1`` or ``ns2``, in which case both source
   and destination have to be in the same namespace. In order to enforce
   namespace boundaries at egress, the same example can be used by specifying
   the rules at egress in addition to ingress.

.. literalinclude:: ../../../examples/policies/kubernetes/namespace/isolate-namespaces.yaml
   :language: yaml

Policies Only Apply Within The Namespace
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Network policies created and imported as `CiliumNetworkPolicy` CRD and
`NetworkPolicy` apply within the namespace. In other words, the policy
**only** applies to pods within that namespace. It's possible, however, to
grant access to and from pods in other namespaces as described in
:ref:`example_cnp_across_ns`.

.. _example_cnp_across_ns:

Example
^^^^^^^

The following example exposes all pods with the label ``name=leia`` in the
namespace ``ns1`` to all pods with the label ``name=luke`` in the namespace
``ns2``. Refer to the :git-tree:`example YAML files ` for a fully functional
example including pods deployed to different namespaces.

.. literalinclude:: ../../../examples/policies/kubernetes/namespace/namespace-policy.yaml
   :language: yaml

Specifying Namespace In EndpointSelector, FromEndpoints, ToEndpoints
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Specifying the namespace by way of the label
``k8s:io.kubernetes.pod.namespace`` in the ``fromEndpoints`` and
``toEndpoints`` fields is supported as described in
:ref:`example_cnp_egress_to_kube_system`. However, Kubernetes prohibits
specifying the namespace in the ``endpointSelector``, as it would violate the
namespace isolation principle of Kubernetes. The ``endpointSelector`` always
applies to pods in the namespace associated with the `CiliumNetworkPolicy`
resource itself.

.. _example_cnp_egress_to_kube_system:

Example
^^^^^^^

The following example allows all pods in the ``public`` namespace in which
the policy is created to communicate with kube-dns on port 53/UDP in the
``kube-system`` namespace.

.. literalinclude:: ../../../examples/policies/kubernetes/namespace/kubedns-policy.yaml
   :language: yaml

Namespace Specific Information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Using namespace-specific information like ``io.cilium.k8s.namespace.labels``
within a ``fromEndpoints`` or ``toEndpoints`` field is supported only for a
:ref:`CiliumClusterwideNetworkPolicy` and not a :ref:`CiliumNetworkPolicy`.
Hence, ``io.cilium.k8s.namespace.labels`` will be ignored in
:ref:`CiliumNetworkPolicy` resources.

Match Expressions
~~~~~~~~~~~~~~~~~

When using ``matchExpressions`` in a :ref:`CiliumNetworkPolicy` or a
:ref:`CiliumClusterwideNetworkPolicy`, the list values are treated as a
logical AND. If you want to match multiple keys with a logical OR, you must
use multiple ``matchExpressions``.

.. _example_multiple_match_expressions:

Example
^^^^^^^

This example demonstrates how to enforce a policy with multiple
``matchExpressions`` that achieves a logical OR between the keys and their
values.

.. literalinclude:: ../../../examples/policies/l3/match-expressions/or-statement.yaml
   :language: yaml

The following example shows a logical AND using a single
``matchExpression``.

.. literalinclude:: ../../../examples/policies/l3/match-expressions/and-statement.yaml
   :language: yaml
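Since the included example files are not reproduced here, the OR-vs-AND
distinction can be sketched as follows. The label keys and values below are
hypothetical, and this is a sketch of the pattern rather than the bundled
``or-statement.yaml``/``and-statement.yaml`` files:

.. code-block:: yaml

   # Logical OR: two selector entries; traffic is allowed if EITHER matches.
   apiVersion: cilium.io/v2
   kind: CiliumNetworkPolicy
   metadata:
     name: or-sketch
   spec:
     endpointSelector:
       matchLabels:
         app: server
     ingress:
     - fromEndpoints:
       - matchExpressions:
         - {key: team, operator: In, values: [alpha]}
       - matchExpressions:
         - {key: team, operator: In, values: [beta]}
   ---
   # Logical AND: one selector with two expressions; BOTH must match.
   apiVersion: cilium.io/v2
   kind: CiliumNetworkPolicy
   metadata:
     name: and-sketch
   spec:
     endpointSelector:
       matchLabels:
         app: server
     ingress:
     - fromEndpoints:
       - matchExpressions:
         - {key: team, operator: In, values: [alpha]}
         - {key: env, operator: In, values: [prod]}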
https://github.com/cilium/cilium/blob/main//Documentation/security/policy/kubernetes.rst
ServiceAccounts
~~~~~~~~~~~~~~~

Kubernetes `Service Accounts `_ are used to associate an identity with a pod
or process managed by Kubernetes, and to grant identities access to
Kubernetes resources and secrets. Cilium supports the specification of
network security policies based on the service account identity of a pod. The
service account of a pod is either defined via the `service account admission
controller `_ or can be directly specified in the Pod, Deployment, or
ReplicationController resource like this:

.. code-block:: yaml

   apiVersion: v1
   kind: Pod
   metadata:
     name: my-pod
   spec:
     serviceAccountName: leia
     # ...

Example
^^^^^^^

The following example allows any pod running under the service account of
"luke" to issue an ``HTTP GET /public`` request on TCP port 80 to all pods
running under the service account of "leia". Refer to the :git-tree:`example
YAML files ` for a fully functional example including deployment and service
account resources.

.. literalinclude:: ../../../examples/policies/kubernetes/serviceaccount/serviceaccount-policy.yaml
   :language: yaml

Multi-Cluster
~~~~~~~~~~~~~

When operating multiple clusters with Cluster Mesh, the cluster name is
exposed via the label ``io.cilium.k8s.policy.cluster`` and can be used to
restrict policies to a particular cluster.

.. literalinclude:: ../../../examples/policies/kubernetes/clustermesh/cross-cluster-policy.yaml
   :language: yaml

Note the ``io.kubernetes.pod.namespace: default`` in the policy rule. It
ensures the policy applies to ``rebel-base`` in the ``default`` namespace of
``cluster2``, regardless of the namespace in ``cluster1`` where ``x-wing`` is
deployed. If the namespace label of a policy rule is omitted, it defaults to
the same namespace where the policy itself is applied, which may not be what
is wanted when deploying cross-cluster policies.

To allow access from/to any namespace, use ``matchExpressions`` combined with
an ``Exists`` operator.

.. literalinclude:: ../../../examples/policies/kubernetes/clustermesh/cross-cluster-any-namespace-policy.yaml
   :language: yaml

Clusterwide Policies
~~~~~~~~~~~~~~~~~~~~

`CiliumNetworkPolicy` only allows binding a policy restricted to a particular
namespace. There can be situations where one wants a cluster-scoped effect
for a policy, which can be achieved using Cilium's
`CiliumClusterwideNetworkPolicy` Kubernetes custom resource. The
specification of the policy is the same as that of `CiliumNetworkPolicy`,
except that it is not namespaced. In the cluster, this policy will allow
ingress traffic from pods matching the label ``name=luke`` in any namespace
to pods matching the label ``name=leia`` in any namespace.

.. literalinclude:: ../../../examples/policies/kubernetes/clusterwide/clusterscope-policy.yaml
   :language: yaml

Allow All Cilium Managed Endpoints To Communicate With Kube-dns
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following example allows all Cilium managed endpoints in the cluster to
communicate with kube-dns on port 53/UDP in the ``kube-system`` namespace.

.. literalinclude:: ../../../examples/policies/kubernetes/clusterwide/wildcard-from-endpoints.yaml
   :language: yaml

.. _health_endpoint:

Example: Add Health Endpoint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following example adds the health entity to all Cilium managed endpoints
in order to check cluster connectivity health.

.. literalinclude:: ../../../examples/policies/kubernetes/clusterwide/health.yaml
   :language: yaml
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

.. _network_policy:
.. _Network Policies:
.. _Network Policy:

Overview of Network Policy
--------------------------

This page documents the policy language used to configure network policies in
Cilium. Security policies can be specified and imported via the following
mechanisms:

* Using Kubernetes `NetworkPolicy`, `CiliumNetworkPolicy` and
  `CiliumClusterwideNetworkPolicy` resources. See the section
  :ref:`k8s_policy` for more details. In this mode, Kubernetes will
  automatically distribute the policies to all agents.

* Directly imported into the agent via the CLI or the :ref:`api_ref` of the
  agent. This method does not automatically distribute policies to all
  agents; it is the responsibility of the user to import the policy into all
  required agents. (This method is deprecated as of v1.18 and will be removed
  in v1.19.)

.. toctree::
   :maxdepth: 2
   :glob:

   intro
   language
   kubernetes
   lifecycle
   troubleshooting
   caveats
https://github.com/cilium/cilium/blob/main//Documentation/security/policy/index.rst
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

.. _policy_guide:
.. _policy_enforcement_modes:

Policy Enforcement Modes
========================

The configuration of the Cilium agent and the Cilium Network Policy
determines whether an endpoint accepts traffic from a source or not. The
agent can be put into the following three policy enforcement modes:

default
  This is the default behavior for policy enforcement. In this mode,
  endpoints have unrestricted network access until selected by policy. Upon
  being selected by a policy, the endpoint permits only allowed traffic. This
  state is per-direction and can be adjusted on a per-policy basis. For more
  details, :ref:`see the dedicated section on default mode`.

always
  With always mode, policy enforcement is enabled on all endpoints, even if
  no rules select specific endpoints. If you want to configure the health
  entity to check cluster-wide connectivity when you start cilium-agent with
  ``enable-policy: always``, you will likely want to enable communication to
  and from the health endpoint. See :ref:`health_endpoint`.

never
  With never mode, policy enforcement is disabled on all endpoints, even if
  rules do select specific endpoints. In other words, all traffic is allowed
  from any source (on ingress) or to any destination (on egress).

To :ref:`configure ` the policy enforcement mode, adjust the Helm value
``policyEnforcementMode`` or the corresponding configuration flag
``enable-policy``.

.. _policy_mode_default:

Endpoint default policy
-----------------------

By default, all egress and ingress traffic is allowed for all endpoints. When
an endpoint is selected by a network policy, it transitions to a default-deny
state, where only **explicitly allowed** traffic is permitted. This state is
per-direction:

* If any rule selects an :ref:`endpoint` and the rule has an ingress section,
  the endpoint goes into default-deny mode for ingress.
* If any rule selects an :ref:`endpoint` and the rule has an egress section,
  the endpoint goes into default-deny mode for egress.

This means that endpoints start without any restrictions, and the first
policy will switch the endpoint's default enforcement mode (per direction).

It is possible to create policies that do not enable default-deny mode for
selected endpoints. The field ``EnableDefaultDeny`` configures this. Rules
with ``EnableDefaultDeny`` disabled are ignored when determining the default
mode.

For example, the policy below causes all DNS traffic to be intercepted, but
does not block any traffic, even if it is the first policy to apply to an
endpoint. An administrator can safely apply this policy cluster-wide, without
the risk that it transitions an endpoint into default-deny and causes
legitimate traffic to be dropped.

.. warning::

   ``EnableDefaultDeny`` does not apply to :ref:`layer-7 policy `. Adding a
   layer-7 rule that does not include a layer-7 allow-all will cause drops,
   even when default-deny is explicitly disabled.

.. code-block:: yaml

   apiVersion: cilium.io/v2
   kind: CiliumClusterwideNetworkPolicy
   metadata:
     name: intercept-all-dns
   spec:
     endpointSelector:
       matchExpressions:
         - key: "io.kubernetes.pod.namespace"
           operator: "NotIn"
           values:
             - "kube-system"
         - key: "k8s-app"
           operator: "NotIn"
           values:
             - kube-dns
     enableDefaultDeny:
       egress: false
       ingress: false
     egress:
       - toEndpoints:
           - matchLabels:
               io.kubernetes.pod.namespace: kube-system
               k8s-app: kube-dns
         toPorts:
           - ports:
               - port: "53"
                 protocol: TCP
               - port: "53"
                 protocol: UDP
             rules:
               dns:
                 - matchPattern: "*"

Policy Deny Response Handling
=============================

By default, when network policy denies egress traffic from a pod, Cilium
silently drops the packets. This means applications experience connection
timeouts rather than immediate connection failures when attempting to reach
forbidden destinations.
https://github.com/cilium/cilium/blob/main//Documentation/security/policy/intro.rst
main
cilium
[ 0.044219911098480225, 0.012201203964650631, -0.0899362787604332, -0.049721609801054, 0.03335287421941757, -0.032607629895210266, 0.0029916835483163595, -0.014842798002064228, 0.04480715095996857, 0.007568429224193096, 0.05728057399392128, -0.030562935397028923, -0.005305171012878418, -0.00...
0.178794
However, some applications may benefit from receiving explicit rejection notifications instead of experiencing connection timeouts. This can provide faster feedback to applications and improve user experience by reducing wait times. This behavior can be configured with the ``--policy-deny-response`` option: \*\*none\*\* (default) Silently drop denied packets. Applications will experience connection timeouts when policy denies traffic. \*\*icmp\*\* (experimental) Send an ICMP Destination Unreachable response back to the source pod when egress traffic is denied by policy. This provides immediate feedback to applications that the connection was rejected. .. note:: This is an experimental feature and only applies to ipv4 egress pod traffic denied by network policy. Ingress traffic denial behavior and ipv6 are not supported currently. Check :gh-issue:`41859` for updates .. warning:: When using ``--policy-deny-response=icmp``, ensure that ICMP ingress traffic is allowed by your network policies. If ICMP traffic is blocked by ingress policies, applications will not receive the rejection notifications and will still experience connection timeouts. .. \_policy\_rule: Rule Basics =========== All policy rules are based upon a whitelist model, that is, each rule in the policy allows traffic that matches the rule. If two rules exist, and one would match a broader set of traffic, then all traffic matching the broader rule will be allowed. If there is an intersection between two or more rules, then traffic matching the union of those rules will be allowed. Finally, if traffic does not match any of the rules, it will be dropped pursuant to the `policy\_enforcement\_modes`. Policy rules share a common base type which specifies which endpoints the rule applies to and common metadata to identify the rule. Each rule is split into an ingress section and an egress section. 
The ingress section contains the rules which must be applied to traffic
entering the endpoint, and the egress section contains rules applied to
traffic coming from the endpoint matching the endpoint selector. Either
ingress, egress, or both can be provided. If both ingress and egress are
omitted, the rule has no effect.

.. code-block:: go

    type Rule struct {
            // EndpointSelector selects all endpoints which should be subject to
            // this rule. EndpointSelector and NodeSelector cannot be both empty and
            // are mutually exclusive.
            //
            // +optional
            EndpointSelector EndpointSelector `json:"endpointSelector,omitempty"`

            // NodeSelector selects all nodes which should be subject to this rule.
            // EndpointSelector and NodeSelector cannot be both empty and are mutually
            // exclusive. Can only be used in CiliumClusterwideNetworkPolicies.
            //
            // +optional
            NodeSelector EndpointSelector `json:"nodeSelector,omitempty"`

            // Ingress is a list of IngressRule which are enforced at ingress.
            // If omitted or empty, this rule does not apply at ingress.
            //
            // +optional
            Ingress []IngressRule `json:"ingress,omitempty"`

            // Egress is a list of EgressRule which are enforced at egress.
            // If omitted or empty, this rule does not apply at egress.
            //
            // +optional
            Egress []EgressRule `json:"egress,omitempty"`

            // Labels is a list of optional strings which can be used to
            // re-identify the rule or to store metadata. It is possible to lookup
            // or delete strings based on labels. Labels are not required to be
            // unique, multiple rules can have overlapping or identical labels.
            //
            // +optional
            Labels labels.LabelArray `json:"labels,omitempty"`

            // Description is a free form string, it can be used by the creator of
            // the rule to store human readable explanation of the purpose of this
            // rule. Rules cannot be identified by comment.
            //
            // +optional
            Description string `json:"description,omitempty"`
    }

----

endpointSelector / nodeSelector
    Selects the endpoints or nodes which the policy rules apply to. The
    policy rules will be applied to all endpoints which match the labels
    specified in the selector. For additional details, see the
    :ref:`EndpointSelector` and :ref:`NodeSelector` sections.

ingress
    List of rules which must apply at ingress of the endpoint, i.e. to all
    network packets which are entering the endpoint.

egress
    List of rules which must apply at egress of the endpoint, i.e. to all
    network packets which are leaving the endpoint.

labels
    Labels are used to identify the rule. Rules can be listed and deleted by
    labels. Policy rules which are imported via :ref:`kubernetes`
    automatically get the label ``io.cilium.k8s.policy.name=NAME`` assigned
    where ``NAME`` corresponds to the name specified in the `NetworkPolicy`
    or `CiliumNetworkPolicy` resource.

description
    Description is a string which is not interpreted by Cilium. It can be
    used to describe the intent and scope of the rule in a human readable
    form.

.. _EndpointSelector:

Endpoint Selector
-----------------

The Endpoint Selector is based on the `Kubernetes LabelSelector`_. It is
called Endpoint Selector because it only applies to labels associated with
an :ref:`Endpoint <endpoint>`.

.. _NodeSelector:

Node Selector
-------------

Like the :ref:`Endpoint Selector <EndpointSelector>`, the Node Selector is
based on the `Kubernetes LabelSelector`_, although rather than matching on
labels associated with Endpoints, it applies to labels associated with Nodes
in the cluster. Node Selectors can only be used in
CiliumClusterwideNetworkPolicies. For details on the scope of node-level
policies, see :ref:`HostPolicies`.

.. _Kubernetes LabelSelector: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
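To make the selector distinction concrete, the following is a minimal sketch
of a ``CiliumClusterwideNetworkPolicy`` that uses ``nodeSelector`` rather
than ``endpointSelector``. The policy name and the ingress rule are
illustrative only, not taken from the official examples:

.. code-block:: yaml

    apiVersion: cilium.io/v2
    kind: CiliumClusterwideNetworkPolicy
    metadata:
      name: control-plane-example   # hypothetical name
    spec:
      # Applies to nodes, not pods; only valid in clusterwide policies.
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/control-plane: ""
      ingress:
        # Illustrative: allow traffic from endpoints inside the cluster.
        - fromEntities:
            - cluster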
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _policy_caveats:

*******
Caveats
*******

Security Identity for N/S Service Traffic
=========================================

When accessing a Kubernetes service from outside the cluster, the
:ref:`arch_id_security` assignment depends on the routing mode. In the
tunneling mode (i.e., ``--tunnel-protocol=vxlan`` or
``--tunnel-protocol=geneve``), the request to the service will have the
``reserved:world`` security identity. In the native-routing mode (i.e.,
``--routing-mode=native``), the security identity will be set to
``reserved:world`` if the LB sent the request to the node which runs the
selected endpoint. Otherwise, i.e. if the request needs to be forwarded to
another node after the service endpoint selection, it will have the
``reserved:remote-node`` identity. The latter traffic will match
``fromEntities: cluster`` policies.

Differences From Kubernetes Network Policies
============================================

When creating Cilium Network Policies it is important to keep in mind that
Cilium Network Policies do not perfectly replicate the functionality of
`Kubernetes Network Policies
<https://kubernetes.io/docs/concepts/services-networking/network-policies/>`_.
See the table of differences in the :ref:`kubernetes` documentation.
.. _policy_examples:

Layer 3 Examples
================

The layer 3 policy establishes the base connectivity rules regarding which
endpoints can talk to each other. Layer 3 policies can be specified using the
following methods:

* `Endpoints based`: This is used to describe the relationship if both
  endpoints are managed by Cilium and are thus assigned labels. The advantage
  of this method is that IP addresses are not encoded into the policies and
  the policy is completely decoupled from the addressing.

* `Services based`: This is an intermediate form between Labels and CIDR and
  makes use of the services concept in the orchestration system. A good
  example of this is the Kubernetes concept of Service endpoints which are
  automatically maintained to contain all backend IP addresses of a service.
  This makes it possible to avoid hardcoding IP addresses into the policy
  even if the destination endpoint is not controlled by Cilium.

* `Entities based`: Entities are used to describe remote peers which can be
  categorized without knowing their IP addresses. This includes connectivity
  to the local host serving the endpoints or all connectivity to outside of
  the cluster.

* `Node based`: This is an extension of the ``remote-node`` entity.
  Optionally, nodes can have a unique identity that can be used to allow or
  block access only from specific nodes.

* `CIDR based`: This is used to describe the relationship to or from external
  services if the remote peer is not an endpoint. This requires hardcoding
  either IP addresses or subnets into the policies. This construct should be
  used as a last resort as it requires stable IP or subnet assignments.

* `DNS based`: Selects remote, non-cluster, peers using DNS names converted
  to IPs via DNS lookups. It shares all limitations of the `CIDR based` rules
  above.
DNS information is acquired by routing DNS traffic via `DNS Proxy` with a
separate policy rule. DNS TTLs are respected.

.. _Endpoints based:

Endpoints based
---------------

Endpoints-based L3 policy is used to establish rules between endpoints inside
the cluster managed by Cilium. Endpoints-based L3 policies are defined by
using an `EndpointSelector` inside a rule to select what kind of traffic can
be received (on ingress), or sent (on egress). An empty `EndpointSelector`
allows all traffic. The examples below demonstrate this in further detail.

.. note:: **Kubernetes:** See section :ref:`k8s_namespaces` for details on
   how the `EndpointSelector` applies in a Kubernetes environment with regard
   to namespaces.

Ingress
~~~~~~~

An endpoint is allowed to receive traffic from another endpoint if at least
one ingress rule exists which selects the destination endpoint with the
`EndpointSelector` in the ``endpointSelector`` field. To restrict traffic
upon ingress to the selected endpoint, the rule selects the source endpoint
with the `EndpointSelector` in the ``fromEndpoints`` field.

Simple Ingress Allow
~~~~~~~~~~~~~~~~~~~~

The following example illustrates how to use a simple ingress rule to allow
communication from endpoints with the label ``role=frontend`` to endpoints
with the label ``role=backend``.

.. literalinclude:: ../../../examples/policies/l3/simple/l3.yaml
   :language: yaml

Ingress Allow All Endpoints
~~~~~~~~~~~~~~~~~~~~~~~~~~~

An empty `EndpointSelector` will select all endpoints, thus writing a rule
that will allow all ingress traffic to an endpoint may be done as follows:

.. literalinclude:: ../../../examples/policies/l3/ingress-allow-all/ingress-allow-all.yaml
   :language: yaml

Note that while the above examples allow all ingress traffic to an endpoint,
this does not mean that all endpoints are allowed to send traffic to this
endpoint per their policies. In other words, policy must be configured on
both sides (sender and receiver).
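The example files referenced above are included from the Cilium source tree
and are not reproduced in this extract. Based on the description of the
simple ingress rule, such a policy would be shaped roughly as follows (the
policy name is illustrative):

.. code-block:: yaml

    apiVersion: cilium.io/v2
    kind: CiliumNetworkPolicy
    metadata:
      name: allow-frontend-to-backend   # hypothetical name
    spec:
      # The rule applies to endpoints labeled role=backend ...
      endpointSelector:
        matchLabels:
          role: backend
      ingress:
        # ... and allows ingress from endpoints labeled role=frontend.
        - fromEndpoints:
            - matchLabels:
                role: frontend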
Egress
~~~~~~

An endpoint is allowed to send traffic to another endpoint if at least one
egress rule exists which selects the destination endpoint with the
`EndpointSelector` in the ``endpointSelector`` field. To restrict traffic
upon egress to the selected endpoint, the rule selects the destination
endpoint with the `EndpointSelector` in the ``toEndpoints`` field.

Simple Egress Allow
~~~~~~~~~~~~~~~~~~~

The following example illustrates how to use a simple egress rule to allow
communication to endpoints with the label ``role=backend`` from endpoints
with the label ``role=frontend``.

.. literalinclude:: ../../../examples/policies/l3/simple/l3_egress.yaml
   :language: yaml

Egress Allow All Endpoints
~~~~~~~~~~~~~~~~~~~~~~~~~~

An empty `EndpointSelector` will select all egress endpoints from an
endpoint based on the `CiliumNetworkPolicy` namespace (``default`` by
default). The following rule allows all egress traffic from endpoints with
the label ``role=frontend`` to all other endpoints in the same namespace:

.. literalinclude:: ../../../examples/policies/l3/egress-allow-all/egress-allow-all.yaml
   :language: yaml

Note that while the above examples allow all egress traffic from an
endpoint, the receivers of the egress traffic may have ingress rules that
deny the traffic. In other words, policy must be configured on both sides
(sender and receiver).

Simple Egress Deny
~~~~~~~~~~~~~~~~~~

The following example illustrates how to deny communication to endpoints
with the label ``role=backend`` from endpoints with the label
``role=frontend``. If an ``egressDeny`` rule matches, egress traffic is
denied even if the policy contains ``egress`` rules that would otherwise
allow it.
.. literalinclude:: ../../../examples/policies/l3/egress-deny/egress-deny.yaml
   :language: yaml

Ingress/Egress Default Deny
~~~~~~~~~~~~~~~~~~~~~~~~~~~

An endpoint can be put into the default-deny mode at ingress or egress if a
rule selects the endpoint and contains the respective rule section, ingress
or egress.

.. note:: Any rule selecting the endpoint will have this effect. This
   example illustrates how to put an endpoint into default-deny mode without
   whitelisting other peers at the same time.

.. literalinclude:: ../../../examples/policies/l3/egress-default-deny/egress-default-deny.yaml
   :language: yaml

Additional Label Requirements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. warning:: The ``fromRequires`` and ``toRequires`` fields are deprecated
   as of Cilium 1.17.x. They have been removed as of Cilium 1.19.

It is often required to apply the principle of *separation of concern* when
defining policies. For this reason, an additional construct exists which
allows establishing base requirements for any connectivity to happen.

For this purpose, the ``fromRequires`` field can be used to establish label
requirements which serve as a foundation for any ``fromEndpoints``
relationship. ``fromRequires`` is a list of additional constraints which
must be met in order for the selected endpoints to be reachable. These
additional constraints do not grant access privileges by themselves, so to
allow traffic there must also be rules which match ``fromEndpoints``. The
same applies for egress policies, with ``toRequires`` and ``toEndpoints``.

The purpose of this rule is to allow establishing base requirements such as:
any endpoint in ``env=prod`` can only be accessed if the source endpoint
also carries the label ``env=prod``.

.. warning:: ``toRequires`` and ``fromRequires`` apply to all rules that
   share the same endpoint selector and are not limited by other egress or
   ingress rules. As a result, ``toRequires`` and ``fromRequires`` limit all
   ingress and egress traffic that applies to their endpoint selector. An
   important implication of this is that the other egress and ingress rules
   (such as ``fromEndpoints``, ``fromPorts``, ``toEntities``,
   ``toServices``, and the rest) do not limit the scope of the
   ``toRequires`` or ``fromRequires`` fields. Pairing other ingress and
   egress rules with a ``toRequires`` or ``fromRequires`` will result in a
   valid policy, but the requirements set in ``toRequires`` and
   ``fromRequires`` stay in effect no matter what would otherwise be allowed
   by the other rules.

This example shows how to require every endpoint with the label ``env=prod``
to be only accessible if the source endpoint also has the label
``env=prod``.

.. literalinclude:: ../../../examples/policies/l3/requires/requires.yaml
   :language: yaml

This ``fromRequires`` rule doesn't allow anything on its own and needs to be
combined with other rules to allow traffic. For example, when combined with
the example policy below, the endpoint with label ``env=prod`` will become
accessible from endpoints that have both labels ``env=prod`` and
``role=frontend``.

.. literalinclude:: ../../../examples/policies/l3/requires/endpoints.yaml
   :language: yaml

.. _Services based:

Services based
--------------

Traffic from endpoints to services running in your cluster can be allowed
via ``toServices`` statements in Egress rules. Policies can reference
`Kubernetes Services
<https://kubernetes.io/docs/concepts/services-networking/service/>`_ by name
or label selector. This feature uses the discovered services' `label
selector
<https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors>`_
as an :ref:`endpoint selector <EndpointSelector>` within the policy.

.. note:: `Services without selectors
   <https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors>`_
   are handled differently. The IPs in the service's EndpointSlices are
   converted to :ref:`CIDR <policy_cidr>` selectors. CIDR selectors cannot
   select pods, and that limitation applies here as well.

   The special Kubernetes Service ``default/kubernetes`` does not use a
   label selector. It is **not recommended** to grant access to the
   Kubernetes API server with a ``toServices``-based policy. Use instead the
   :ref:`kube-apiserver entity <kube_apiserver_entity>`.
This example shows how to allow all endpoints with the label ``id=app2`` to
talk to all endpoints of the Kubernetes Service ``myservice`` in the
Kubernetes namespace ``default``, as well as to all services with the label
``env=staging`` in the namespace ``another-namespace``.

.. literalinclude:: ../../../examples/policies/l3/service/service.yaml
   :language: yaml

.. _Entities based:

Entities based
--------------

``fromEntities`` is used to describe the entities that can access the
selected endpoints. ``toEntities`` is used to describe the entities that can
be accessed by the selected endpoints.

The following entities are defined:

host
    The host entity includes the local host. This also includes all
    containers running in host networking mode on the local host.

remote-node
    Any node in any of the connected clusters other than the local host.
    This also includes all containers running in host-networking mode on
    remote nodes.

kube-apiserver
    The kube-apiserver entity represents the kube-apiserver in a Kubernetes
    cluster. This entity represents both deployments of the kube-apiserver:
    within the cluster and outside of the cluster.

ingress
    The ingress entity represents the Cilium Envoy instance that handles
    ingress L7 traffic. Be aware that this also applies to pod-to-pod
    traffic within the same cluster when using ingress endpoints (also known
    as *hairpinning*).

cluster
    Cluster is the logical group of all network endpoints inside of the
    local cluster. This includes all Cilium-managed endpoints of the local
    cluster, unmanaged endpoints in the local cluster, as well as the host,
    remote-node, and init identities. This also includes all remote nodes in
    a clustermesh scenario.

init
    The init entity contains all endpoints in the bootstrap phase for which
    the security identity has not been resolved yet. This is typically only
    observed in non-Kubernetes environments. See section
    :ref:`endpoint_lifecycle` for details.

health
    The health entity represents the health endpoints, used to check cluster
    connectivity health. Each node managed by Cilium hosts a health
    endpoint. See `cluster_connectivity_health` for details on health
    checks.

unmanaged
    The unmanaged entity represents endpoints not managed by Cilium.
    Unmanaged endpoints are considered part of the cluster and are included
    in the cluster entity.

world
    The world entity corresponds to all endpoints outside of the cluster.
    Allowing to world is identical to allowing to CIDR 0.0.0.0/0. An
    alternative to allowing from and to world is to define fine-grained DNS
    or CIDR based policies.

all
    The all entity represents the combination of all known clusters as well
    as world and whitelists all communication.

.. note:: The ``kube-apiserver`` entity may not work for *ingress traffic*
   in some Kubernetes distributions, such as Azure AKS and GCP GKE. This is
   due to the fact that ingress control-plane traffic is tunneled through
   worker nodes, which does not preserve the original source IP. You may be
   able to use a broader ``fromEntities: cluster`` rule instead. Restricting
   *egress traffic* via ``toEntities: kube-apiserver``, however, is expected
   to work on these Kubernetes distributions.

.. _kube_apiserver_entity:

Access to/from kube-apiserver
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Allow all endpoints with the label ``env=dev`` to access the
kube-apiserver.

.. literalinclude:: ../../../examples/policies/l3/entities/apiserver.yaml
   :language: yaml

Access to/from local host
~~~~~~~~~~~~~~~~~~~~~~~~~

Allow all endpoints with the label ``env=dev`` to access the host that is
serving the particular endpoint.

.. note:: Kubernetes will automatically allow all communication from the
   local host of all local endpoints. You can run the agent with the option
   ``--allow-localhost=policy`` to disable this behavior which will give you
   control over this via policy.

.. literalinclude:: ../../../examples/policies/l3/entities/host.yaml
   :language: yaml
.. _policy-remote-node:

Access to/from all nodes in the cluster (or clustermesh)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Allow all endpoints with the label ``env=dev`` to receive traffic from any
host in the cluster that Cilium is running on.

.. literalinclude:: ../../../examples/policies/l3/entities/nodes.yaml
   :language: yaml

Access to/from outside cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This example shows how to enable access from outside of the cluster to all
endpoints that have the label ``role=public``.

.. literalinclude:: ../../../examples/policies/l3/entities/world.yaml
   :language: yaml

.. _policy_node_based:
.. _Node based:

Node based
----------

.. note:: The example below with the ``fromNodes``/``toNodes`` fields will
   only take effect when the ``enable-node-selector-labels`` flag is set to
   true (or the equivalent Helm value ``nodeSelectorLabels: true``).

When ``--enable-node-selector-labels=true`` is specified, every cilium-agent
allocates a different local security identity for all other nodes. But
instead of using the local scoped identity range it uses the remote-node
scoped identity range. By default, all labels attached to the ``Node``
object are taken into account, which might result in the allocation of a
**unique** identity for each remote node. For these cases it is also
possible to filter only security relevant labels with the ``--node-labels``
flag.

This example shows how to allow all endpoints with the label ``env=prod`` to
receive traffic **only** from control plane nodes (labeled
``node-role.kubernetes.io/control-plane=""``) in the cluster (or
clustermesh).

Note that by default policies automatically select nodes from all the
clusters in a Cluster Mesh environment unless it is explicitly specified. To
restrict node selection to the local cluster by default you can enable the
option ``--policy-default-local-cluster`` via the ConfigMap option
``policy-default-local-cluster`` or the Helm value
``clustermesh.policyDefaultLocalCluster``.

.. literalinclude:: ../../../examples/policies/l3/entities/customnodes.yaml
   :language: yaml

.. _policy_cidr:
.. _CIDR based:

IP/CIDR based
-------------

CIDR policies are used to define policies to and from endpoints which are
not managed by Cilium and thus do not have labels associated with them.
These are typically external services, VMs or metal machines running in
particular subnets. CIDR policy can also be used to limit access to external
services, for example to limit external access to a particular IP range.

CIDR policies can be applied at ingress or egress. CIDR rules apply if
Cilium cannot map the source or destination to an identity derived from
endpoint labels, i.e. the `reserved_labels`. For example, CIDR rules will
apply to traffic where one side of the connection is:

* A network endpoint outside the cluster
* The host network namespace where the pod is running
* Within the cluster prefix but the IP's networking is not provided by
  Cilium
* (:ref:`optional <cidr_select_nodes>`) Node IPs within the cluster

Conversely, CIDR rules do not apply to traffic where both sides of the
connection are either managed by Cilium or use an IP belonging to a node in
the cluster (including host networking pods). This traffic may be allowed
using labels, services or entities based policies as described above.

Ingress
~~~~~~~

fromCIDR
    List of source prefixes/CIDRs that are allowed to talk to all endpoints
    selected by the ``endpointSelector``.

fromCIDRSet
    List of source prefixes/CIDRs that are allowed to talk to all endpoints
    selected by the ``endpointSelector``, along with an optional list of
    prefixes/CIDRs per source prefix/CIDR that are subnets of the source
    prefix/CIDR from which communication is not allowed. ``fromCIDRSet`` may
    also reference prefixes/CIDRs indirectly via a :ref:`CiliumCIDRGroup`.

Egress
~~~~~~

toCIDR
    List of destination prefixes/CIDRs that endpoints selected by
    ``endpointSelector`` are allowed to talk to. Note that endpoints which
    are selected by a ``fromEndpoints`` are automatically allowed to reply
    back to the respective destination endpoints.
toCIDRSet
    List of destination prefixes/CIDRs that endpoints selected by
    ``endpointSelector`` are allowed to talk to, along with an optional list
    of prefixes/CIDRs per destination prefix/CIDR that are subnets of the
    destination prefix/CIDR to which communication is not allowed.
    ``toCIDRSet`` may also reference prefixes/CIDRs indirectly via a
    :ref:`CiliumCIDRGroup`.

Allow to external CIDR block
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This example shows how to allow all endpoints with the label
``app=myService`` to talk to the external IP ``20.1.1.1``, as well as the
CIDR prefix ``10.0.0.0/8``, but not the CIDR prefix ``10.96.0.0/12``.

.. literalinclude:: ../../../examples/policies/l3/cidr/cidr.yaml
   :language: yaml

.. _cidr_select_nodes:

Selecting nodes with CIDR / ipBlock
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. include:: ../../beta.rst

By default, CIDR-based selectors do not match in-cluster entities (pods or
nodes). Optionally, you can direct the policy engine to select nodes by
CIDR / ipBlock. This requires you to configure Cilium with
``--policy-cidr-match-mode=nodes`` or the equivalent Helm value
``policyCIDRMatchMode: nodes``. It is safe to toggle this option on a
running cluster, and toggling the option affects neither upgrades nor
downgrades.

When ``--policy-cidr-match-mode=nodes`` is specified, every agent allocates
a distinct local security identity for all other nodes. This slightly
increases memory usage -- approximately 1 MB for every 1000 nodes in the
cluster.

This is particularly relevant to self-hosted clusters -- that is, clusters
where the apiserver is hosted on in-cluster nodes. Because CIDR-based
selectors ignore nodes by default, you must ordinarily use the
``kube-apiserver`` :ref:`entity <kube_apiserver_entity>` as part of a
CiliumNetworkPolicy. Setting ``--policy-cidr-match-mode=nodes`` permits
selecting the apiserver via an ``ipBlock`` peer in a
KubernetesNetworkPolicy.
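The ``cidr.yaml`` example file is not reproduced in this extract. Following
the description under "Allow to external CIDR block" above, such a policy
combines ``toCIDR`` and ``toCIDRSet`` roughly as follows (the policy name is
illustrative):

.. code-block:: yaml

    apiVersion: cilium.io/v2
    kind: CiliumNetworkPolicy
    metadata:
      name: cidr-rule-sketch   # hypothetical name
    spec:
      endpointSelector:
        matchLabels:
          app: myService
      egress:
        # Allow the single external IP ...
        - toCIDR:
            - 20.1.1.1/32
        # ... and 10.0.0.0/8, excluding the 10.96.0.0/12 subnet.
        - toCIDRSet:
            - cidr: 10.0.0.0/8
              except:
                - 10.96.0.0/12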
.. _DNS based:

DNS based
---------

DNS policies are used to define Layer 3 policies to endpoints that are not
managed by Cilium, but have DNS queryable domain names. The IP addresses
provided in DNS responses are allowed by Cilium in a similar manner to IPs
in `CIDR based`_ policies. They are an alternative when the remote IPs may
change or are not known beforehand, or when DNS is more convenient. To
enforce policy on DNS requests themselves, see `Layer 7 Examples`_.

.. note:: In order to associate domain names with IP addresses, Cilium
   intercepts DNS responses per-Endpoint using a `DNS Proxy`_. This requires
   Cilium to be configured with ``--enable-l7-proxy=true`` and an L7 policy
   allowing DNS requests. For more details, see :ref:`DNS Obtaining Data`.

An L3 `CIDR based`_ rule is generated for every ``toFQDNs`` rule and applies
to the same endpoints. The IP information is selected for insertion by
``matchName`` or ``matchPattern`` rules, and is collected from all DNS
responses seen by Cilium on the node. Multiple selectors may be included in
a single egress rule.

.. note:: The DNS Proxy is provided in each Cilium agent. As a result, DNS
   requests targeted by policies depend on the availability of the Cilium
   agent pod. This includes DNS policies (:ref:`proxy_visibility`).

``toFQDNs`` egress rules cannot contain any other L3 rules, such as
``toEndpoints`` (under `Endpoints based`_) and ``toCIDRs`` (under `CIDR
based`_). They may contain L4/L7 rules, such as ``toPorts`` (see `Layer 4
Examples`_) with, optionally, ``HTTP`` and ``Kafka`` sections (see `Layer 7
Examples`_).

.. note:: DNS based rules are intended for external connections and behave
   similarly to `CIDR based`_ rules. See `Services based`_ and `Endpoints
   based`_ for cluster-internal traffic.

IPs to be allowed are selected via:

``toFQDNs.matchName``
    Inserts IPs of domains that match ``matchName`` exactly. Multiple
    distinct names may be included in separate ``matchName`` entries and IPs
    for domains that match any ``matchName`` will be inserted.
``toFQDNs.matchPattern``
  Inserts IPs of domains that match the pattern in ``matchPattern``,
  accounting for wildcards. Patterns are composed of literal characters
  that are allowed in domain names: a-z, 0-9, ``.`` and ``-``.

  ``*`` is allowed as a wildcard with a number of convenience behaviors:

  * ``*`` within a domain allows 0 or more valid DNS characters, except
    for the ``.`` separator. ``*.cilium.io`` will match
    ``sub.cilium.io`` but not ``cilium.io`` or ``sub.sub.cilium.io``.
    ``part*ial.com`` will match ``partial.com`` and
    ``part-extra-ial.com``.
  * ``*`` alone matches all names, and inserts all cached DNS IPs into
    this rule.
  * ``**.`` is a special prefix supported in DNS match patterns to
    wildcard all cascaded (nested) subdomains. For example, the
    ``**.cilium.io`` pattern will match both ``app.cilium.io`` and
    ``test.app.cilium.io`` but not ``cilium.io``.

The example below allows all DNS traffic on port 53 to the DNS service
and intercepts it via the `DNS Proxy`_. If using a non-standard DNS
port for a DNS application behind a Kubernetes Service, the port must
match the backend port. When the application makes a request for
my-remote-service.com, Cilium learns the IP address and will allow
traffic due to the match on the name under the ``toFQDNs.matchName``
rule.

Example
~~~~~~~

.. literalinclude:: ../../../examples/policies/l3/fqdn/fqdn.yaml
   :language: yaml

Managing Short-Lived Connections & Maximum IPs per FQDN/endpoint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Many short-lived connections can grow the number of IPs mapping to an
FQDN quickly. In order to limit the number of IP addresses that map to
a particular FQDN, each FQDN has a per-endpoint maximum capacity of IPs
that will be retained (default: 50). Once this limit is exceeded, the
oldest IP entries are automatically expired from the cache. This
capacity can be changed using the
``--tofqdns-endpoint-max-ip-per-hostname`` option.
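The ``matchPattern`` wildcard semantics described earlier can be
sketched as a translation to an anchored regular expression. This is an
illustrative model only -- not Cilium's actual matcher, which lives in
the agent -- and the ``**.`` prefix form is omitted:

.. code-block:: python

    import re

    def pattern_to_regex(pattern: str) -> "re.Pattern[str]":
        """Model of toFQDNs matchPattern: '*' matches zero or more DNS
        characters except the '.' separator; all other characters are
        matched literally. Illustrative sketch only."""
        parts = (re.escape(p) for p in pattern.split("*"))
        return re.compile("^" + "[-a-z0-9]*".join(parts) + "$")

    # '*.cilium.io' matches sub.cilium.io but not cilium.io or sub.sub.cilium.io
    assert pattern_to_regex("*.cilium.io").match("sub.cilium.io")
    assert not pattern_to_regex("*.cilium.io").match("cilium.io")
    assert not pattern_to_regex("*.cilium.io").match("sub.sub.cilium.io")
    # 'part*ial.com' matches partial.com and part-extra-ial.com
    assert pattern_to_regex("part*ial.com").match("partial.com")
    assert pattern_to_regex("part*ial.com").match("part-extra-ial.com")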
As with long-lived connections above, live connections are not expired
until they terminate. It is safe to mix long- and short-lived
connections from the same Pod. IPs above the limit described above will
only be removed if unused by a connection.

.. _l4_policy:

Layer 4 Examples
================

Limit ingress/egress ports
--------------------------

Layer 4 policy can be specified in addition to layer 3 policies or
independently. It restricts the ability of an endpoint to emit and/or
receive packets on a particular port using a particular protocol. If no
layer 4 policy is specified for an endpoint, the endpoint is allowed to
send and receive on all layer 4 ports and protocols including ICMP. If
any layer 4 policy is specified, then ICMP will be blocked unless it's
related to a connection that is otherwise allowed by the policy. Layer
4 policies apply to ports after service port mapping has been applied.

Layer 4 policy can be specified at both ingress and egress using the
``toPorts`` field. The ``toPorts`` field takes a ``PortProtocol``
structure which is defined as follows:

.. code-block:: go

    // PortProtocol specifies an L4 port with an optional transport protocol
    type PortProtocol struct {
        // Port can be an L4 port number, or a name in the form of "http"
        // or "http-8080". EndPort is ignored if Port is a named port.
        Port string `json:"port"`

        // EndPort can only be an L4 port number. It is ignored when
        // Port is a named port.
        //
        // +optional
        EndPort int32 `json:"endPort,omitempty"`

        // Protocol is the L4 protocol. If omitted or empty, any protocol
        // matches. Accepted values: "TCP", "UDP", ""/"ANY"
        //
        // Matching on ICMP is not supported.
        //
        // +optional
        Protocol string `json:"protocol,omitempty"`
    }

Example (L4)
~~~~~~~~~~~~

The following rule limits all endpoints with the label ``app=myService``
to only be able to emit packets using TCP on port 80, to any layer 3
destination:

.. literalinclude:: ../../../examples/policies/l4/l4.yaml
   :language: yaml

Example Port Ranges
~~~~~~~~~~~~~~~~~~~

The following rule limits all endpoints with the label ``app=myService``
to only be able to emit packets using TCP on ports 80-444, to any layer
3 destination:

.. literalinclude:: ../../../examples/policies/l4/l4_port_range.yaml
   :language: yaml

.. note:: Layer 7 rules support port ranges, except for DNS rules.

Labels-dependent Layer 4 rule
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This example enables all endpoints with the label ``role=frontend`` to
communicate with all endpoints with the label ``role=backend``, but
they must communicate using TCP on port 80. Endpoints with other labels
will not be able to communicate with the endpoints with the label
``role=backend``, and endpoints with the label ``role=frontend`` will
not be able to communicate with ``role=backend`` on ports other than
80.

.. literalinclude:: ../../../examples/policies/l4/l3_l4_combined.yaml
   :language: yaml

CIDR-dependent Layer 4 Rule
~~~~~~~~~~~~~~~~~~~~~~~~~~~

This example enables all endpoints with the label ``role=crawler`` to
communicate with all remote destinations inside the CIDR
``192.0.2.0/24``, but they must communicate using TCP on port 80. The
policy does not allow endpoints without the label ``role=crawler`` to
communicate with destinations in the CIDR ``192.0.2.0/24``.
Furthermore, endpoints with the label ``role=crawler`` will not be able
to communicate with destinations in the CIDR ``192.0.2.0/24`` on ports
other than port 80.
.. literalinclude:: ../../../examples/policies/l4/cidr_l4_combined.yaml
   :language: yaml

Limit ICMP/ICMPv6 types
-----------------------

ICMP policy can be specified in addition to layer 3 policies or
independently. It restricts the ability of an endpoint to emit and/or
receive packets on a particular ICMP/ICMPv6 type (both the type
(integer) and the corresponding CamelCase message (string) are
supported). If any ICMP policy is specified, layer 4 and ICMP
communication will be blocked unless it's related to a connection that
is otherwise allowed by the policy.

ICMP policy can be specified at both ingress and egress using the
``icmps`` field. The ``icmps`` field takes an ``ICMPField`` structure
which is defined as follows:

.. code-block:: go

    // ICMPField is a ICMP field.
    //
    // +deepequal-gen=true
    // +deepequal-gen:private-method=true
    type ICMPField struct {
        // Family is a IP address version.
        // Currently, we support `IPv4` and `IPv6`.
        // `IPv4` is set as default.
        //
        // +kubebuilder:default=IPv4
        // +kubebuilder:validation:Optional
        // +kubebuilder:validation:Enum=IPv4;IPv6
        Family string `json:"family,omitempty"`

        // Type is a ICMP-type.
        // It should be an 8bit code (0-255), or its CamelCase name (for example, "EchoReply").
        // Allowed ICMP types are:
        //     Ipv4: EchoReply | DestinationUnreachable | Redirect | Echo | EchoRequest |
        //           RouterAdvertisement | RouterSelection | TimeExceeded | ParameterProblem |
        //           Timestamp | TimestampReply | Photuris | ExtendedEchoRequest | ExtendedEchoReply
        //     Ipv6: DestinationUnreachable | PacketTooBig | TimeExceeded | ParameterProblem |
        //           EchoRequest | EchoReply | MulticastListenerQuery | MulticastListenerReport |
        //           MulticastListenerDone | RouterSolicitation | RouterAdvertisement | NeighborSolicitation |
        //           NeighborAdvertisement | RedirectMessage | RouterRenumbering | ICMPNodeInformationQuery |
        //           ICMPNodeInformationResponse | InverseNeighborDiscoverySolicitation | InverseNeighborDiscoveryAdvertisement |
        //           HomeAgentAddressDiscoveryRequest | HomeAgentAddressDiscoveryReply | MobilePrefixSolicitation |
        //           MobilePrefixAdvertisement | DuplicateAddressRequestCodeSuffix | DuplicateAddressConfirmationCodeSuffix |
        //           ExtendedEchoRequest | ExtendedEchoReply
        //
        // +deepequal-gen=false
        // +kubebuilder:validation:XIntOrString
        // +kubebuilder:validation:Pattern="^([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5]|EchoReply|DestinationUnreachable|Redirect|Echo|RouterAdvertisement|RouterSelection|TimeExceeded|ParameterProblem|Timestamp|TimestampReply|Photuris|ExtendedEchoRequest|ExtendedEchoReply|PacketTooBig|ParameterProblem|EchoRequest|MulticastListenerQuery|MulticastListenerReport|MulticastListenerDone|RouterSolicitation|RouterAdvertisement|NeighborSolicitation|NeighborAdvertisement|RedirectMessage|RouterRenumbering|ICMPNodeInformationQuery|ICMPNodeInformationResponse|InverseNeighborDiscoverySolicitation|InverseNeighborDiscoveryAdvertisement|HomeAgentAddressDiscoveryRequest|HomeAgentAddressDiscoveryReply|MobilePrefixSolicitation|MobilePrefixAdvertisement|DuplicateAddressRequestCodeSuffix|DuplicateAddressConfirmationCodeSuffix)$"
        Type *intstr.IntOrString `json:"type"`
    }

Example (ICMP/ICMPv6)
~~~~~~~~~~~~~~~~~~~~~

The following rule limits all endpoints with the label
``app=myService`` to only be able to emit packets using ICMP with type
8 and ICMPv6 with message EchoRequest, to any layer 3 destination:

.. literalinclude:: ../../../examples/policies/l4/icmp.yaml
   :language: yaml

Limit TLS Server Name Indication (SNI)
--------------------------------------

When multiple websites are hosted on the same server with a shared IP
address, Server Name Indication (SNI), an extension of the TLS
protocol, ensures that the client receives the correct SSL certificate
for the website they are trying to access. SNI allows the hostname or
domain name of the website to be specified during the TLS handshake,
rather than after the handshake when the HTTP connection is
established.

Cilium Network Policy can limit an endpoint's ability to establish a
TLS handshake to a specified list of SNIs. The SNI policy is always
configured at the egress level and is usually set up alongside port
policies.

Example (TLS SNI)
~~~~~~~~~~~~~~~~~

.. note:: TLS SNI policy enforcement requires the L7 proxy to be
          enabled.

The following rule limits all endpoints with the label ``app=myService``
to only be able to establish TLS connections with the
``one.one.one.one`` SNI. Any attempt to reach another SNI (for example,
``cilium.io``) will be rejected.

.. literalinclude:: ../../../examples/policies/l4/l4_sni.yaml
   :language: yaml

Below is the resulting SSL error when trying to connect to ``cilium.io``
from curl.

.. code-block:: shell-session

    $ kubectl exec -- curl -v https://cilium.io
    * Host cilium.io:443 was resolved.
    * IPv6: (none)
    * IPv4: 104.198.14.52
    *   Trying 104.198.14.52:443...
    * Connected to cilium.io (104.198.14.52) port 443
    * ALPN: curl offers h2,http/1.1
    * TLSv1.3 (OUT), TLS handshake, Client hello (1):
    *  CAfile: /etc/ssl/certs/ca-certificates.crt
    *  CApath: /etc/ssl/certs
    * Recv failure: Connection reset by peer
    * OpenSSL SSL_connect: Connection reset by peer in connection to cilium.io:443
    * Closing connection
    curl: (35) Recv failure: Connection reset by peer
    command terminated with exit code 35

.. _l7_policy:

Layer 7 Examples
================

Layer 7 policy rules are embedded into Layer 4 rules and can be
specified for ingress and egress. The ``L7Rules`` structure is a base
type containing an enumeration of protocol specific fields.

.. code-block:: go

    // L7Rules is a union of port level rule types. Mixing of different port
    // level rule types is disallowed, so exactly one of the following must be set.
    // If none are specified, then no additional port level rules are applied.
    type L7Rules struct {
        // HTTP specific rules.
        //
        // +optional
        HTTP []PortRuleHTTP `json:"http,omitempty"`

        // Kafka-specific rules.
        //
        // +optional
        Kafka []PortRuleKafka `json:"kafka,omitempty"`

        // DNS-specific rules.
        //
        // +optional
        DNS []PortRuleDNS `json:"dns,omitempty"`
    }

The structure is implemented as a union, i.e. only one member field can
be used per port. If multiple ``toPorts`` rules with identical
``PortProtocol`` select an overlapping list of endpoints, then the
layer 7 rules are combined together if they are of the same type. If
the type differs, the policy is rejected.

Each member consists of a list of application protocol rules. A layer 7
request is permitted if at least one of the rules matches. If no rules
are specified, then all traffic is permitted. If a layer 4 rule is
specified in the policy, and a similar layer 4 rule with layer 7 rules
is also specified, then the layer 7 portions of the latter rule will
have no effect.

.. note:: Unlike layer 3 and layer 4 policies, violation of layer 7
          rules does not result in packet drops. Instead, if possible,
          an application protocol specific access denied message is
          crafted and returned, e.g. an *HTTP 403 access denied* is
          sent back for HTTP requests which violate the policy, or a
          *DNS REFUSED* response for DNS requests.

.. note:: Layer 7 rules support port ranges, except for DNS rules.
.. note:: In `HostPolicies`, i.e. policies that use :ref:`NodeSelector`,
          only DNS layer 7 rules are currently functional. Other types
          of layer 7 rules cannot be specified in `HostPolicies`. Host
          layer 7 DNS policies are a beta feature. Please provide
          feedback and file a GitHub issue if you experience any
          problems.

.. note:: Layer 7 policies will proxy traffic through a node-local
          :ref:`envoy` instance, which will either be deployed as a
          DaemonSet or embedded in the agent pod. When Envoy is
          embedded in the agent pod, Layer 7 traffic targeted by
          policies will therefore depend on the availability of the
          Cilium agent pod.

.. note:: L7 policies for SNATed IPv6 traffic (e.g., pod-to-world)
          require a kernel with the fix applied. The stable kernel
          versions with the fix are 6.14.1, 6.12.22, 6.6.86, 6.1.133,
          5.15.180, and 5.10.236. See :gh-issue:`37932` for the
          reference.

HTTP
----

The following fields can be matched on:

Path
  Path is an extended POSIX regex matched against the path of a
  request. Currently it can contain characters disallowed from the
  conventional "path" part of a URL as defined by RFC 3986. Paths must
  begin with a ``/``. If omitted or empty, all paths are allowed.

Method
  Method is an extended POSIX regex matched against the method of a
  request, e.g. ``GET``, ``POST``, ``PUT``, ``PATCH``, ``DELETE``, ...
  If omitted or empty, all methods are allowed.

Host
  Host is an extended POSIX regex matched against the host header of a
  request, e.g. ``foo.com``. If omitted or empty, the value of the host
  header is ignored.

Headers
  Headers is a list of HTTP headers which must be present in the
  request. If omitted or empty, requests are
  allowed regardless of headers present.

It's also possible to do some more advanced header matching against
header values. ``HeaderMatches`` is a list of HTTP headers which must
be present and match against the given values. The ``Mismatch`` field
can be used to specify what to do when there is no match.

Allow GET /public
~~~~~~~~~~~~~~~~~

The following example allows ``GET`` requests to the URL ``/public``
from the endpoints with the labels ``env=prod`` to endpoints with the
labels ``app=service``, but requests to any other URL, or using another
method, will be rejected. Requests on ports other than port 80 will be
dropped.

.. literalinclude:: ../../../examples/policies/l7/http/simple/l7.yaml
   :language: yaml

Allow GET /path1 and PUT /path2 when header set
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following example limits all endpoints which carry the labels
``app=myService`` to only be able to receive packets on port 80 using
TCP. While communicating on this port, the only API endpoints allowed
will be ``GET /path1``, and ``PUT /path2`` with the HTTP header
``X-My-Header`` set to ``true``:

.. literalinclude:: ../../../examples/policies/l7/http/http.yaml
   :language: yaml

.. _kafka_policy:

Kafka (beta)
------------

.. include:: ../../deprecated.rst

PortRuleKafka is a list of Kafka protocol constraints. All fields are
optional; if all fields are empty or missing, the rule will match all
Kafka messages.

There are two ways to specify the Kafka rules. We can choose to specify
a high-level "produce" or "consume" role for a topic, or choose to
specify more low-level, Kafka protocol specific apiKeys.
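As a quick illustration of the role-based form (a sketch only -- the
labels, port, and metadata name here are hypothetical, and the bundled
example files below are authoritative):

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: kafka-produce-sketch   # hypothetical name
    spec:
      endpointSelector:
        matchLabels:
          app: kafka-producer      # hypothetical label
      egress:
      - toPorts:
        - ports:
          - port: "9092"           # typical Kafka port; adjust as needed
            protocol: TCP
          rules:
            kafka:
            - role: "produce"
              topic: "empire-announce"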
Writing rules based on Kafka roles is easier and covers most common use
cases. However, if more granularity is needed, users can alternatively
write rules using specific apiKeys.

The following fields can be matched on:

Role
  Role is a case-insensitive string which describes a group of API keys
  necessary to perform certain higher-level Kafka operations such as
  "produce" or "consume". A Role automatically expands into all APIKeys
  required to perform the specified higher-level operation. The
  following roles are supported:

  - "produce": Allow producing to the topics specified in the rule.
  - "consume": Allow consuming from the topics specified in the rule.

  This field is incompatible with the APIKey field, i.e. APIKey and
  Role cannot both be specified in the same rule. If omitted or empty,
  and if APIKey is not specified, then all keys are allowed.

APIKey
  APIKey is a case-insensitive string matched against the key of a
  request, for example "produce", "fetch", "createtopic", or
  "deletetopic". For a more extensive list, see the
  `Kafka protocol reference`_. This field is incompatible with the Role
  field.

APIVersion
  APIVersion is the version matched against the api version of the
  Kafka message. If set, it must be a string representing a positive
  integer. If omitted or empty, all versions are allowed.

ClientID
  ClientID is the client identifier as provided in the request. From
  the Kafka protocol documentation: This is a user supplied identifier
  for the client application. The user can use any identifier they like
  and it will be used when logging errors, monitoring aggregates, etc.
  For example, one might want to monitor not just the requests per
  second overall, but the number coming from each client application
  (each of which could reside on multiple servers). This id acts as a
  logical grouping across all requests from a particular client. If
  omitted or empty, all client identifiers are allowed.

Topic
  Topic is the topic name contained in the message.
  If a Kafka request contains multiple topics, then all topics in the
  message must be allowed by the policy or the message will be
  rejected. This constraint is ignored if the matched request message
  type does not contain any topic. The maximum length of the Topic is
  249 characters, which must be either ``a-z``, ``A-Z``, ``0-9``,
  ``-``, ``.`` or ``_``. If omitted or empty, all topics are allowed.

Allow producing to topic empire-announce using Role
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. literalinclude:: ../../../examples/policies/l7/kafka/kafka-role.yaml
   :language: yaml

Allow producing to topic empire-announce using apiKeys
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. literalinclude:: ../../../examples/policies/l7/kafka/kafka.yaml
   :language: yaml

.. _dns_discovery:

DNS Policy and IP Discovery
---------------------------

Policy may be applied to DNS traffic, allowing or disallowing specific
DNS query names or patterns of names (other DNS fields, such as query
type, are not considered). This policy is effected via a `DNS Proxy`_,
which is also used to collect IPs used to populate L3 `DNS based`_
``toFQDNs`` rules.

.. note:: While Layer 7 DNS policy can be applied without any other
          Layer 3 rules, the presence of a Layer 7 rule (with its
          Layer 3 and 4 components) will block other traffic.

DNS policy may be applied via:

``matchName``
  Allows queries for domains that match ``matchName`` exactly. Multiple
  distinct names may be included in separate ``matchName`` entries and
  queries for domains that match any ``matchName`` will be allowed.
``matchPattern``
  Allows queries for domains that match the pattern in
  ``matchPattern``, accounting for wildcards. Patterns are composed of
  literal characters that are allowed in domain names: a-z, 0-9, ``.``
  and ``-``.

  ``*`` is allowed as a wildcard with a number of convenience behaviors:

  * ``*`` within a domain allows 0 or more valid DNS characters, except
    for the ``.`` separator. ``*.cilium.io`` will match
    ``sub.cilium.io`` but not ``cilium.io``. ``part*ial.com`` will
    match ``partial.com`` and ``part-extra-ial.com``.
  * ``*`` alone matches all names, and inserts all IPs in DNS responses
    into the cilium-agent DNS cache.

In this example, L7 DNS policy allows queries for ``cilium.io``, any
subdomains of ``cilium.io``, and any subdomains of ``api.cilium.io``.
No other DNS queries will be allowed.

The separate L3 ``toFQDNs`` egress rule allows connections to any IPs
returned in DNS queries for ``cilium.io``, ``sub.cilium.io``,
``service1.api.cilium.io`` and any matches of
``special*service.api.cilium.io``, such as
``special-region1-service.api.cilium.io`` but not
``region1-service.api.cilium.io``. DNS queries to
``anothersub.cilium.io`` are allowed but connections to the returned
IPs are not, as there is no L3 ``toFQDNs`` rule selecting them. L4 and
L7 policy may also be applied (see `DNS based`_), restricting
connections to TCP port 80 in this case.

.. literalinclude:: ../../../examples/policies/l7/dns/dns.yaml
   :language: yaml

.. note:: When applying DNS policy in Kubernetes, queries for
          service.namespace.svc.cluster.local. must be explicitly
          allowed with ``matchPattern: *.*.svc.cluster.local.``.

          Similarly, queries that rely on the DNS search list to
          complete the FQDN must be allowed in their entirety. e.g. a
          query for ``servicename`` that succeeds with
          ``servicename.namespace.svc.cluster.local.`` must have the
          latter allowed with ``matchName`` or ``matchPattern``. See
          `Alpine/musl deployments and DNS Refused`_.

.. note:: DNS policies do not support port ranges.

.. _DNS Obtaining Data:

Obtaining DNS Data for use by ``toFQDNs``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

IPs are obtained via intercepting DNS requests with a proxy. These IPs
can be selected with ``toFQDN`` rules. DNS responses are cached within
the Cilium agent, respecting TTL.

.. _DNS Proxy:

DNS Proxy
~~~~~~~~~

A DNS Proxy in the agent intercepts egress DNS traffic and records IPs
seen in the responses. This interception is, itself, a separate policy
rule governing DNS requests, and must be specified separately. For
details on how to enforce policy on DNS requests and configuring the
DNS proxy, see `Layer 7 Examples`_.

Only IPs in intercepted DNS responses to an application will be allowed
in the Cilium policy rules. For a given domain name, IPs from responses
to all pods managed by a Cilium instance are allowed by policy
(respecting TTLs). This ensures that allowed IPs are consistent with
those returned to applications.

The DNS Proxy is the only method to allow IPs from responses allowed by
wildcard L7 DNS ``matchPattern`` rules for use in ``toFQDNs`` rules.

The following example obtains DNS data by interception without blocking
any DNS requests. It allows L3 connections to ``cilium.io``,
``sub.cilium.io`` and any subdomains of ``sub.cilium.io``.

.. literalinclude:: ../../../examples/policies/l7/dns/dns-visibility.yaml
   :language: yaml

.. note:: DNS policies do not support port ranges.

Alpine/musl deployments and DNS Refused
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Some common container images treat the DNS ``Refused`` response, sent
when the `DNS Proxy`_ rejects a query, as a more general failure. This
stops traversal of the search list defined in ``/etc/resolv.conf``. It
is common for pods to search by appending ``.svc.cluster.local.`` to
DNS queries. When this occurs, a lookup for ``cilium.io`` may first be
attempted as ``cilium.io.namespace.svc.cluster.local.`` and rejected by
the proxy. Instead of continuing and eventually attempting
``cilium.io.`` alone, the Pod treats the DNS lookup as failed.
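To make the failure mode concrete, consider a typical pod resolver
configuration (the values shown are illustrative):

.. code-block:: text

    search namespace.svc.cluster.local svc.cluster.local cluster.local
    nameserver 10.96.0.10
    options ndots:5

With ``ndots:5``, a lookup for ``cilium.io`` (fewer than five dots) is
first tried with each search suffix, e.g. as
``cilium.io.namespace.svc.cluster.local.``; if the proxy answers
``Refused`` and the resolver treats that as fatal, the bare
``cilium.io.`` query is never attempted.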
This can be mitigated with the ``--tofqdns-dns-reject-response-code`` option. The default is ``refused`` but ``nameError`` can be selected, causing the proxy to return a NXDomain response to refused queries. A more pod-specific solution is to configure ``ndots`` appropriately for each Pod, via ``dnsConfig``, so that the search list is not used for DNS lookups that do not need it. See the `Kubernetes documentation `\_ for instructions. .. \_deny\_policies: Deny Policies ============= Deny policies, available and enabled by default since Cilium 1.9, allows to explicitly restrict certain traffic to and from a Pod. Deny policies take precedence over allow policies, regardless of whether they are a Cilium Network Policy, a Clusterwide Cilium Network Policy or even a Kubernetes Network Policy. Similarly to "allow" policies, Pods will enter default-deny mode as soon a single policy selects it. If multiple allow and deny policies are applied to the same pod, the following table represents the expected enforcement for that Pod: +--------------------------------------------------------------------------------------------+ | \*\*Set of Ingress Policies Deployed to Server Pod\*\* | +---------------------+-----------------------+---------+---------+--------+--------+--------+ | | Layer 7 (HTTP) | ✓ | ✓ | ✓ | ✓ | | | +-----------------------+---------+---------+--------+--------+--------+ | | Layer 4 (80/TCP) | ✓ | ✓ | ✓ | ✓ | | | \*\*Allow Policies\*\* +-----------------------+---------+---------+--------+--------+--------+ | | Layer 4 (81/TCP) | ✓ | ✓ | ✓ | ✓ | | | +-----------------------+---------+---------+--------+--------+--------+ | | Layer 3 (Pod: Client) | ✓ | ✓ | ✓ | ✓ | | +---------------------+-----------------------+---------+---------+--------+--------+--------+ | | Layer 4 (80/TCP) | | ✓ | | ✓ | ✓ | | \*\*Deny Policies\*\* +-----------------------+---------+---------+--------+--------+--------+ | | Layer 3 (Pod: Client) | | | ✓ | ✓ | | 
+---------------------+-----------------------+---------+---------+--------+--------+--------+ | \*\*Result for Traffic Connections (Allowed / Denied)\*\* | +---------------------+-----------------------+---------+---------+--------+--------+--------+ | | curl server:81 | Allowed | Allowed | Denied | Denied | Denied | | +-----------------------+---------+---------+--------+--------+--------+ | \*\*Client → Server\*\* | curl server:80 | Allowed | Denied | Denied | Denied |
If we pick the second column in the above table, the bottom section shows the
forwarding behaviour for a policy that selects curl or ping traffic between the
client and server:

* Curl to port 81 is allowed because there is an allow policy on port 81, and
  no deny policy on that port.
* Curl to port 80 is denied because there is a deny policy on that port.
* Ping to the server is allowed because there is a Layer 3 allow policy and no
  deny.

The following policy will deny ingress from "world" on all namespaces on all
Pods managed by Cilium. Existing inter-cluster policies will still be allowed,
as this policy allows traffic from everywhere except from "world".

.. literalinclude:: ../../../examples/policies/l3/entities/from_world_deny.yaml
   :language: yaml

Deny policies do not support:

* policy enforcement at L7, i.e., specifically denying a URL, and
* ``toFQDNs``, i.e., specifically denying traffic to a specific domain name.

.. _disk_policies:

Disk based Cilium Network Policies
==================================

This functionality enables users to place network policy YAML files directly
into the node's filesystem, bypassing the need for definition via k8s CRD.
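Such a file is an ordinary policy manifest of one of the supported kinds. As a
minimal sketch of what might be placed on the node (the file path, policy name,
and labels below are hypothetical, purely for illustration):

.. code-block:: yaml

   # Hypothetical policy file, e.g. placed on the node's filesystem.
   # The name and labels are illustrative only.
   apiVersion: "cilium.io/v2"
   kind: CiliumNetworkPolicy
   metadata:
     name: allow-frontend-to-backend
   spec:
     endpointSelector:
       matchLabels:
         app: backend
     ingress:
       - fromEndpoints:
           - matchLabels:
               app: frontend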
By setting the config field ``static-cnp-path``, users specify the directory
from which policies will be loaded. The Cilium agent then processes all policy
YAML files present in this directory, transforming them into rules that are
incorporated into the policy engine. Additionally, the Cilium agent monitors
this directory for any new policy YAML files, as well as any updates or
deletions, making corresponding updates to the policy engine's rules. It is
important to note that this feature only supports CiliumNetworkPolicy and
CiliumClusterwideNetworkPolicy.

The directory that the Cilium agent monitors should be mounted from the host
using volume mounts. For users deploying via Helm, this can be enabled via
``extraArgs`` and ``extraHostPathMounts`` as follows:

.. code-block:: yaml

   extraArgs:
     - --static-cnp-path=/policies
   extraHostPathMounts:
     - name: static-policies
       mountPath: /policies
       hostPath: /policies
       hostPathType: Directory

To determine whether a policy was established via Kubernetes CRD or directly
from a directory, run ``cilium policy get`` and examine the ``source``
attribute within the policy: policies that have been sourced from a directory
have the ``source`` field set to ``directory``. Additionally,
``cilium endpoint get <endpoint-id>`` also has fields that show the source of
the policy associated with that endpoint.

Previous limitations and known issues
-------------------------------------

For Cilium versions prior to 1.14, deny policies for peers outside the cluster
sometimes did not work because of :gh-issue:`15198`. Make sure that you are
using version 1.14 or later if you are relying on deny policies to manage
external traffic to your cluster.

.. _HostPolicies:

Host Policies
=============

Host policies take the form of a :ref:`CiliumClusterwideNetworkPolicy` with a
:ref:`NodeSelector` instead of an :ref:`EndpointSelector`. Host policies can
have layer 3 and layer 4 rules on both ingress and egress.
They can also have layer 7 DNS rules, but no other kinds of layer 7 rules.
.. note::

   Host L7 DNS policies are a beta feature. Please provide feedback and file a
   GitHub issue if you experience any problems.

.. attention::

   Adding layer 7 DNS rules to a host policy enables :ref:`DNS based` host
   policies at the cost of making all host DNS requests go through the
   :ref:`DNS Proxy` provided in each Cilium agent. This includes DNS requests
   for kube-apiserver if it is configured as a FQDN (e.g. in managed
   Kubernetes clusters) by critical processes such as kubelet. This has
   important implications for the proper functioning of the node, because
   while the Cilium agent is restarting, the :ref:`DNS Proxy` is not
   available, and all DNS requests redirected to it will time out.

   - When upgrading the Cilium agent image on a set of nodes, the new image
     must be :ref:`pre-pulled`, because kubelet will not be able to contact
     the container registry after it stops the old Cilium agent pod.
   - If the Kubernetes feature gate `KubeletEnsureSecretPulledImages`_ is
     enabled and kubelet is configured with `image credential providers`_
     relying on remote authentication and authorization services (common in
     managed Kubernetes), the image pull credentials verification policy must
     be configured in such a way that the Cilium agent image is exempted from
     image credential verification. Otherwise kubelet may be unable to verify
     image pull credentials for the new Cilium agent pod, and it will fail to
     start (rendering the node unusable) despite the new agent image having
     been pre-pulled.

.. _KubeletEnsureSecretPulledImages: https://kubernetes.io/docs/concepts/containers/images/#ensureimagepullcredentialverification
.. _image credential providers: https://kubernetes.io/docs/tasks/administer-cluster/kubelet-credential-provider

Host policies apply to all the nodes selected by their :ref:`NodeSelector`. In
each selected node, they apply only to the host namespace, including
host-networking pods.
They don't apply to communications between non-host-networking pods and
locations outside of the cluster.

Installation of Host Policies requires the addition of the following ``helm``
flags when installing Cilium:

* ``--set devices='{interface}'`` where ``interface`` refers to the network
  device Cilium is configured on, for example ``eth0``. If you omit this
  option, Cilium auto-detects the interface the host firewall applies to.
* ``--set hostFirewall.enabled=true``

As an example, the following policy allows ingress traffic for any node with
the label ``type=ingress-worker`` on TCP ports 22, 6443 (kube-apiserver), 2379
(etcd), and 4240 (health checks), as well as UDP port 8472 (VXLAN).

.. literalinclude:: ../../../examples/policies/host/lock-down-ingress.yaml
   :language: yaml

To reuse this policy, replace the ``port:`` values with ports used in your
environment.

In order to allow protocols such as VRRP and IGMP that don't have any
transport-layer ports, set the ``--enable-extended-ip-protocols`` flag to
true. By default, such traffic is dropped with a ``DROP_CT_UNKNOWN_PROTO``
error. As an example, the following policy allows egress traffic on any node
with the label ``type=egress-worker`` on TCP ports 22, 6443/443
(kube-apiserver), 2379 (etcd), and 4240 (health checks), UDP port 8472
(VXLAN), and traffic with the VRRP protocol.

.. literalinclude:: ../../../examples/policies/host/allow-extended-protocols.yaml
   :language: yaml

.. _troubleshooting_host_policies:

Troubleshooting Host Policies
-----------------------------

If you have trouble with Host Policies, try the following steps:

- Ensure the ``helm`` options listed in
  :ref:`the Host Policies description <HostPolicies>` were applied during
  installation.
- To verify that your policy has been accepted and applied by the Cilium
  agent, run ``kubectl get CiliumClusterwideNetworkPolicy -o yaml`` and make
  sure the policy is listed.
- If policies don't seem to be applied to your nodes, verify that the
  ``nodeSelector`` matches the node labels in your environment. In the example
  configuration, you can run
  ``kubectl get nodes -o custom-columns=NAME:.metadata.name,LABELS:.metadata.labels | grep type:ingress-worker``
  to verify labels match the policy.
To troubleshoot policies for a given node, try the following steps. For all
steps, run ``cilium-dbg`` in the relevant namespace, on the Cilium agent pod
for the node, for example with:

.. code-block:: shell-session

   $ kubectl exec -n $CILIUM_NAMESPACE $CILIUM_POD_NAME -- cilium-dbg ...

Retrieve the endpoint ID for the host endpoint on the node with
``cilium-dbg endpoint get -l reserved:host -o jsonpath='{[0].id}'``. Use this
ID to replace ``$HOST_EP_ID`` in the next steps:

- If policies are applied, but not enforced for the node, check the status of
  the policy audit mode with
  ``cilium-dbg endpoint config $HOST_EP_ID | grep PolicyAuditMode``. If
  necessary, :ref:`disable the audit mode`.
- Run ``cilium-dbg endpoint list``, and look for the host endpoint, with
  ``$HOST_EP_ID`` and the ``reserved:host`` label. Ensure that policy is
  enabled in the selected direction.
- Run ``cilium-dbg status list`` and check the devices listed in the
  ``Host firewall`` field. Verify that traffic actually reaches the listed
  devices.
- Use ``cilium-dbg monitor`` with ``--related-to $HOST_EP_ID`` to examine
  traffic for the host endpoint.

.. _host_policies_known_issues:

Host Policies known issues
--------------------------

- The first time Cilium enforces Host Policies in the cluster, it may drop
  reply traffic for legitimate connections that should be allowed by the
  policies in place. Connections should stabilize again after a few seconds.
  One workaround is to enable, disable, then re-enable Host Policies
  enforcement. For details, see :gh-issue:`25448`.
- In the context of ClusterMesh, the following combination of options is not
  supported:

  - Cilium operating in CRD mode (as opposed to KVstore mode),
  - Host Policies enabled,
  - tunneling enabled,
  - kube-proxy-replacement enabled, and
  - WireGuard enabled.
  This combination results in a failure to connect to the
  clustermesh-apiserver. For details, refer to :gh-issue:`31209`.

- Host Policies do not work on host WireGuard interfaces. For details, see
  :gh-issue:`17636`.
- When Host Policies are enabled, hosts drop traffic from layer-2 protocols
  that they consider unknown, even if no Host Policies are loaded. For
  example, this affects LLC traffic (see :gh-issue:`17877`) and VRRP traffic
  (see :gh-issue:`18347`).
- When kube-proxy-replacement is disabled, or configured not to implement
  services for the native device (such as NodePort), hosts will enforce Host
  Policies on service addresses rather than on the service endpoints. For
  details, refer to :gh-issue:`12545`.
- Host Firewall, and thus Host Policies, do not work together with IPsec. For
  details, refer to :gh-issue:`41854`.
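When reasoning about the issues above, it can help to keep in mind the basic
shape of a host policy: a CiliumClusterwideNetworkPolicy selecting nodes rather
than endpoints. A minimal sketch (the node label and port below are
hypothetical, not taken from the shipped examples):

.. code-block:: yaml

   # Hypothetical host policy sketch; node label and port are illustrative.
   apiVersion: "cilium.io/v2"
   kind: CiliumClusterwideNetworkPolicy
   metadata:
     name: allow-ssh-on-ingress-workers
   spec:
     nodeSelector:
       matchLabels:
         type: ingress-worker
     ingress:
       - toPorts:
           - ports:
               - port: "22"
                 protocol: TCP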
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

.. _endpoint_lifecycle:
.. _Endpoint Lifecycle:

Endpoint Lifecycle
==================

This section specifies the lifecycle of Cilium endpoints.

Every endpoint in Cilium is in one of the following states:

* ``restoring``: The endpoint was started before Cilium started, and Cilium is
  restoring its networking configuration.
* ``waiting-for-identity``: Cilium is allocating a unique identity for the
  endpoint.
* ``waiting-to-regenerate``: The endpoint received an identity and is waiting
  for its networking configuration to be (re)generated.
* ``regenerating``: The endpoint's networking configuration is being
  (re)generated. This includes programming eBPF for that endpoint.
* ``ready``: The endpoint's networking configuration has been successfully
  (re)generated.
* ``disconnecting``: The endpoint is being deleted.
* ``disconnected``: The endpoint has been deleted.

.. image:: ../../images/cilium-endpoint-lifecycle.png
   :scale: 50 %
   :align: center

The state of an endpoint can be queried using the ``cilium-dbg endpoint list``
and ``cilium-dbg endpoint get`` CLI commands. While an endpoint is running, it
transitions between the ``waiting-for-identity``, ``waiting-to-regenerate``,
``regenerating``, and ``ready`` states. A transition into the
``waiting-for-identity`` state indicates that the endpoint changed its
identity. A transition into the ``waiting-to-regenerate`` or ``regenerating``
state indicates that the policy to be enforced on the endpoint has changed
because of a change in identity, policy, or configuration.

An endpoint transitions into the ``disconnecting`` state when it is being
deleted, regardless of its current state.
.. _init_identity:

Init Identity
-------------

In some situations, Cilium can't determine the labels of an endpoint
immediately when the endpoint is created, and therefore can't allocate an
identity for the endpoint at that point. Until the endpoint's labels are
known, Cilium temporarily associates the special single label
``reserved:init`` with the endpoint. When the endpoint's labels become known,
Cilium then replaces that special label with the endpoint's labels and
allocates a proper identity to the endpoint. This may occur during endpoint
creation in the following cases:

* Running Cilium with docker via libnetwork
* With Kubernetes, when the Kubernetes API server is not available
* In etcd mode, when the corresponding kvstore is not available

To allow traffic to/from endpoints while they are initializing, you can create
policy rules that select the ``reserved:init`` label, and/or rules that allow
traffic to/from the special ``init`` entity. For instance, writing a rule that
allows all initializing endpoints to receive connections from the host and to
perform DNS queries may be done as follows:

.. literalinclude:: ../../../examples/policies/l4/init.yaml
   :language: yaml

Likewise, writing a rule that allows an endpoint to receive DNS queries from
initializing endpoints may be done as follows:

.. literalinclude:: ../../../examples/policies/l4/from_init.yaml
   :language: yaml

If any ingress (resp. egress) policy rule selects the ``reserved:init`` label,
all ingress (resp. egress) traffic to (resp. from) initializing endpoints that
is not explicitly allowed by those rules will be dropped. Otherwise, if the
policy enforcement mode is ``never`` or ``default``, all ingress (resp.
egress) traffic is allowed to (resp. from) initializing endpoints. Otherwise,
all ingress (resp. egress) traffic is dropped.
.. _lockdown_mode:

Lockdown Mode
-------------

If the Cilium agent option ``enable-lockdown-endpoint-on-policy-overflow`` is
set to ``true``, Cilium will put an endpoint into "lockdown" if the policy map
cannot accommodate all of the policy map entries required (that is, the policy
map for the endpoint is overflowing). Cilium will take the endpoint out of
"lockdown" when it detects that the policy map is no longer overflowing. When
an endpoint is locked down, all network traffic, both egress and ingress, will
be dropped. Cilium will log a warning that the endpoint has been locked down.

If this option is enabled, cluster operators should closely
monitor the bpf map pressure metric of the ``cilium_policy_*`` maps. See
`Policymap pressure and overflow`_ for more details. They can use this metric
to create an alert for increased memory pressure on the policy map, as well as
an alert for a lockdown if ``enable-lockdown-endpoint-on-policy-overflow`` is
set to ``true`` (any ``bpf_map_pressure`` above a value of ``1.0``).

.. _Policymap pressure and overflow: /operations/troubleshooting.html#policymap-pressure-and-overflow
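For Helm-based installations, this agent flag can be supplied like any other
extra argument. A sketch of the corresponding values, assuming the standard
``extraArgs`` mechanism shown earlier in this document:

.. code-block:: yaml

   # Sketch: enable endpoint lockdown when an endpoint's policy map overflows.
   extraArgs:
     - --enable-lockdown-endpoint-on-policy-overflow=true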
.. _policy_troubleshooting:

***************
Troubleshooting
***************

Policy Rule to Endpoint Mapping
===============================

To determine which policy rules are currently in effect for an endpoint, the
data from ``cilium-dbg endpoint list`` and ``cilium-dbg endpoint get`` can be
paired with the data from ``cilium-dbg policy get``.
``cilium-dbg endpoint get`` will list the labels of each rule that applies to
an endpoint. The list of labels can be passed to ``cilium-dbg policy get`` to
show that exact source policy. Note that rules that have no labels cannot be
fetched alone (a no-label ``cilium-dbg policy get`` returns the complete
policy on the node). Rules with the same labels will be returned together.

In the above example, for one of the ``deathstar`` pods the endpoint id is
568. We can print all policies applied to it with:

.. code-block:: shell-session

   $ # Get a shell on the Cilium pod
   $ kubectl exec -ti cilium-88k78 -n kube-system -- /bin/bash

   $ # print out the ingress labels
   $ # clean up the data
   $ # fetch each policy via each set of labels
   $ # (Note that while the structure is "...l4.ingress...", it reflects all L3, L4 and L7 policy.)
   $ cilium-dbg endpoint get 568 -o jsonpath='{range ..status.policy.realized.l4.ingress[*].derived-from-rules}{@}{"\n"}{end}' | tr -d '][' | xargs -I{} bash -c 'echo "Labels: {}"; cilium-dbg policy get {}'
   Labels: k8s:io.cilium.k8s.policy.name=rule1 k8s:io.cilium.k8s.policy.namespace=default
   [
     {
       "endpointSelector": {
         "matchLabels": {
           "any:class": "deathstar",
           "any:org": "empire",
           "k8s:io.kubernetes.pod.namespace": "default"
         }
       },
       "ingress": [
         {
           "fromEndpoints": [
             {
               "matchLabels": {
                 "any:org": "empire",
                 "k8s:io.kubernetes.pod.namespace": "default"
               }
             }
           ],
           "toPorts": [
             {
               "ports": [
                 { "port": "80", "protocol": "TCP" }
               ],
               "rules": {
                 "http": [
                   { "path": "/v1/request-landing", "method": "POST" }
                 ]
               }
             }
           ]
         }
       ],
       "labels": [
         { "key": "io.cilium.k8s.policy.name", "value": "rule1", "source": "k8s" },
         { "key": "io.cilium.k8s.policy.namespace", "value": "default", "source": "k8s" }
       ]
     }
   ]
   Revision: 217

   $ # repeat for egress
   $ cilium-dbg endpoint get 568 -o jsonpath='{range ..status.policy.realized.l4.egress[*].derived-from-rules}{@}{"\n"}{end}' | tr -d '][' | xargs -I{} bash -c 'echo "Labels: {}"; cilium-dbg policy get {}'

Troubleshooting ``toFQDNs`` rules
=================================

``toFQDNs`` rules do nothing if there is no :ref:`L7 DNS rule` covering the
endpoint. The effect of ``toFQDNs`` may change long after a policy is applied,
as DNS data changes. This can make it difficult to debug unexpectedly blocked
connections, or transient failures. Cilium provides CLI tools to introspect
the state of applying FQDN policy in multiple layers of the daemon:

#. ``cilium-dbg policy get`` should show the FQDN policy that was imported:
   .. code-block:: json

      {
        "endpointSelector": {
          "matchLabels": {
            "any:class": "mediabot",
            "any:org": "empire",
            "k8s:io.kubernetes.pod.namespace": "default"
          }
        },
        "egress": [
          {
            "toFQDNs": [
              { "matchName": "api.github.com" }
            ]
          },
          {
            "toEndpoints": [
              {
                "matchLabels": {
                  "k8s:io.kubernetes.pod.namespace": "kube-system",
                  "k8s:k8s-app": "kube-dns"
                }
              }
            ],
            "toPorts": [
              {
                "ports": [
                  { "port": "53", "protocol": "ANY" }
                ],
                "rules": {
                  "dns": [
                    { "matchPattern": "*" }
                  ]
                }
              }
            ]
          }
        ],
        "labels": [
          { "key": "io.cilium.k8s.policy.derived-from", "value": "CiliumNetworkPolicy", "source": "k8s" },
          { "key": "io.cilium.k8s.policy.name", "value": "fqdn", "source": "k8s" },
          { "key": "io.cilium.k8s.policy.namespace", "value": "default", "source": "k8s" },
          { "key": "io.cilium.k8s.policy.uid", "value": "f213c6b2-c87b-449c-a66c-e19a288062ba", "source": "k8s" }
        ]
      }

#. After making a DNS request, the FQDN to IP mapping should be available via
   ``cilium-dbg fqdn cache list``:

   .. code-block:: shell-session

      # cilium-dbg fqdn cache list
      Endpoint   Source   FQDN                  TTL    ExpirationTime             IPs
      725        lookup   api.github.com.       3600   2023-02-10T18:16:05.842Z   140.82.121.6
      725        lookup   support.github.com.   3600   2023-02-10T18:16:09.371Z   185.199.111.133,185.199.109.133,185.199.110.133,185.199.108.133
#. If the traffic is allowed, then these IPs should have corresponding local
   identities via ``cilium-dbg ip list | grep <IP>``:

   .. code-block:: shell-session

      # cilium-dbg ip list | grep -A 1 140.82.121.6
      140.82.121.6/32   fqdn:api.github.com
                        reserved:world

Monitoring ``toFQDNs`` identity usage
-------------------------------------

When using ``toFQDNs`` selectors, every IP observed by a matching DNS lookup
will be labeled with that selector. As a DNS name might be matched by multiple
selectors, and because an IP might map to multiple names, an IP might be
labeled by multiple selectors. As with regular cluster identities, every
unique combination of labels will allocate its own numeric security identity.
This can lead to many different identities being allocated, as described in
:ref:`identity-relevant-labels`.

To detect potential identity exhaustion for ``toFQDNs`` identities, the number
of allocated FQDN identities can be monitored using the
``identity_label_sources{type="fqdn"}`` metric. As a comparative reference,
the ``fqdn_selectors`` metric monitors the number of registered ``toFQDNs``
selectors. For more details on metrics, please refer to :ref:`metrics`.
.. _xfrm_guide:

********************
XFRM Reference Guide
********************

.. note::

   This documentation section is targeted at developers and users who want to
   understand the Linux XFRM subsystem. While reading this reference guide may
   help broaden your understanding of Cilium, it is not a requirement to use
   Cilium. Please refer to the :ref:`getting_started` guide and
   :ref:`ebpf_datapath` for a higher level introduction.

Overview
========

IPsec encryption in the Linux kernel relies on `XFRM`_. XFRM is an IP
framework intended for packet transformations, from encryption to compression.
It is configured via a set of *policy* and *state* objects, which for IPsec
correspond to Security Policies and Security Associations.

.. _XFRM: https://man7.org/linux/man-pages/man8/ip-xfrm.8.html

XFRM Policies and States
------------------------

At a high level, XFRM policies define what traffic to accept and reject,
whereas states define how to perform the encryption and decryption. Policies
can match on the direction (``out``, ``in``, or ``fwd``), the source and
destination IP addresses with CIDRs, and the packet mark. As an example, the
following policy matches egressing packets with any source IP address,
10.56.1.X destination IP addresses, and ``0xcb93eXX`` packet marks. Policies
default to allowing traffic, as done here.

.. code-block:: text

   src 0.0.0.0/0 dst 10.56.1.0/24
           dir out priority 0
           mark 0xcb93e00/0xffffff00
           [...]

States are relatively similar, except that they are agnostic to the direction
and can only match on exact IP addresses (or 0.0.0.0 to match all). The
following state will apply to packets with IP addresses 10.56.0.17 ->
10.56.1.238, and the same packet marks as above.
In the case of tunnel-mode IPsec, these IP addresses correspond to the outer
IP addresses. For ingressing, encrypted packets, the SPI will also be used
(discussed below).

.. code-block:: text

   src 10.56.0.17 dst 10.56.1.238
           proto esp spi 0x00000003 reqid 1 mode tunnel
           replay-window 0
           mark 0xcb93e00/0xffffff00 output-mark 0xe00/0xffffff00
           aead rfc4106(gcm(aes)) 0x6254fced5f7a5ea9401b9015ecf10d65eac51a69 128
           anti-replay context: seq 0x0, oseq 0x36, bitmap 0x00000000
           sel src 0.0.0.0/0 dst 0.0.0.0/0

You may notice that nothing specifies whether this state should perform
encryption or decryption. That's because it can actually do both. As said
above, states are agnostic to the direction of traffic, so the same state may
theoretically be used for both encryption and decryption. What to do will be
determined based on where in the stack the state is matched (ex., decryption
on ingress).

Policy Templates
----------------

XFRM policies also typically define a template, as below:

.. code-block:: text

   src 0.0.0.0/0 dst 10.56.1.0/24
           dir out priority 0
           mark 0xcb93e00/0xffffff00
           tmpl src 10.56.0.17 dst 10.56.1.238
                   proto esp spi 0x00000003 reqid 1 mode tunnel

How this template is used depends on the direction. For egressing traffic, the
template defines the encoding to perform. For example, the above template will
encapsulate packets with an IP header and an ESP header. The IP header will
have IP addresses 10.56.0.17 and 10.56.1.238. The ESP header will have SPI 3.

For ingressing and forwarded traffic however, the template acts as an
additional filter. The following XFRM policy, for example, will only allow
packets if they are ESP packets with outer IP addresses 10.56.1.238 and
10.56.0.17, in addition to having a packet mark matching ``0xd00/0xf00``.
.. code-block:: text

   src 0.0.0.0/0 dst 10.56.0.0/24
           dir in priority 0
           mark 0xd00/0xf00
           tmpl src 10.56.1.238 dst 10.56.0.17
                   proto esp reqid 1 mode tunnel
Note that when using tunnel mode, as is the case here, we should always see
XFRM states matching the template of XFRM OUT policies. That is because, on
egress, the states are matched after the template is applied. The IP
addresses, the SPI, the protocol, the mode, and the reqid should all match
between the XFRM state and the template in that case.

XFRM Packet Flows
=================

The following diagram represents the usual Netfilter packet flow with the XFRM
elements in purple:

.. image:: /images/netfilter-with-xfrm.png
   :align: center

Egress Packet Flow
------------------

On egress, packets will first hit one of the "XFRM OUT policy" blocks. At this
point, a lookup is performed against the XFRM OUT policies. If a match is
found, the packet goes to the "XFRM encode" block, any template is applied
(ex., encapsulation), and the packet is then matched against XFRM states. If a
state is found, its information is used to encrypt the packet. The encrypted
packet will then navigate again through the OUTPUT and POSTROUTING chains.

Ingress Packet Flow
-------------------

On ingress, encrypted packets (ex., ESP packets) will hit the "XFRM decode"
block after they navigate through the INPUT chain. In tunnel mode, encrypted
packets will typically have one of the server's IP addresses as the outer
destination address, so they should automatically be routed through the INPUT
chain. If not, it may be necessary to add IP routes to redirect packets to the
INPUT chain. As an example, Cilium identifies IPsec traffic on tc-bpf ingress
and marks it with a special value which is then used to reroute those packets
to the INPUT chain.

At "XFRM decode", if packets match an XFRM state, they will be decoded (i.e.,
decapsulated and decrypted) using the state's information. The match is based
on the source & destination addresses, the mark, the SPI, and the protocol.
In case of any decode error (ex., wrong key), the packet is dropped and
an error counter is increased. As illustrated on the diagram, an XFRM
policy matching the packet isn't required for the decoding to happen
(the packet goes directly to "XFRM decode"), but it is required for the
packet to proceed to a local process or through the FORWARD chain. An
XFRM policy with an optional template (i.e., ``level use``) will allow
all decoded packets through. Traffic that was never encrypted, and
therefore does not come from "XFRM decode", is allowed by default.

After a packet is decoded, it is recirculated in the stack, as if
coming from the interface it was initially received on. More
specifically, packets are recirculated before the tc layer, such that
they are visible on the tc-bpf hook a second time (once before
decryption, once after). The packet mark is preserved when
recirculated, so it's possible to identify and trace packets that have
been decrypted and recirculated.

Output Description of ``ip xfrm``
=================================

Outputs are from iproute2-6.1.0. More fields will likely appear in
newer versions. For example, XFRM states have a ``dir`` field in newer
kernels (v6.10+), which will likely appear in the ``ip xfrm state``
output at some point.

In the ``ip xfrm`` output, policies are ordered by date of creation,
with newer policies at the top. This is important because, in case two
policies match a packet and have the same priority, the newest one is
used.
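The tie-breaking rule just described — lowest ``priority`` number wins,
and among equal priorities the newest policy is preferred — can be
sketched as below. This is an illustrative model, not the kernel's data
structures, and the CIDRs and priorities are invented for the example.

```python
# Illustrative sketch (not kernel code) of XFRM policy selection:
# lowest `priority` value wins; ties go to the newest policy.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network


@dataclass
class Policy:
    src: str        # source CIDR selector
    dst: str        # destination CIDR selector
    priority: int   # 0 is the highest priority
    created: int    # creation order; higher means newer

    def matches(self, src_ip: str, dst_ip: str) -> bool:
        return (ip_address(src_ip) in ip_network(self.src)
                and ip_address(dst_ip) in ip_network(self.dst))


def select_policy(policies, src_ip, dst_ip):
    candidates = [p for p in policies if p.matches(src_ip, dst_ip)]
    if not candidates:
        return None
    # Lowest priority number first; ties broken by newest creation.
    return min(candidates, key=lambda p: (p.priority, -p.created))


policies = [
    Policy("0.0.0.0/0", "0.0.0.0/0", priority=2975, created=1),
    Policy("0.0.0.0/0", "10.92.0.0/16", priority=2975, created=2),
]
best = select_policy(policies, "10.92.1.189", "10.92.0.164")
assert best.created == 2    # both match at priority 2975; newest wins
```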
https://github.com/cilium/cilium/blob/main//Documentation/reference-guides/xfrm/index.rst
.. code-block:: bash

    $ ip xfrm policy
    # - `src 0.0.0.0/0` is the CIDR to match against the source IP address
    # - `dst 0.0.0.0/0` is the CIDR to match against the destination IP address
    src 0.0.0.0/0 dst 0.0.0.0/0
            uid 0
    # - `dir fwd` states the direction. It defines where in the Linux stack this policy will
    #   be used, between ingress, egress, and forwarding.
    # - `action allow` is the action to take on matching packets. Packets can only be allowed
    #   through (by default) or dropped.
    # - `index 18` is used to differentiate between different policies which might have the
    #   same or overlapping selectors. If not given or if it already exists, it is
    #   automatically (re-)generated (cf., `xfrm_gen_index`). The three LSBs encode the
    #   direction (ex., 1 for `XFRM_POLICY_OUT`). The MSBs are simply incremented by one
    #   (that is, the index is incremented by 8) until a free index is found.
    # - `priority 2975` states the priority for this policy in case multiple could match the
    #   packet. 0 is the highest priority.
    # - `share any` is always set to `any` and unused today
    #   (https://elixir.bootlin.com/linux/v6.9.5/source/net/xfrm/xfrm_user.c#L1914).
    # - `flag (0x00000000)` set of flags for XFRM policies. Only `XFRM_POLICY_ICMP` (0x2) is
    #   supported at the moment; `XFRM_POLICY_LOCALOK` (0x1) is not implemented (anymore?).
    #   When `XFRM_POLICY_ICMP` is given, the policy will also apply to ICMP packets with a
    #   payload packet that matches the policy's selector.
    dir fwd action allow index 18 priority 2975 share any flag (0x00000000)
    lifetime config:
    # Various limits and expiration times for the policy, based on the number of bytes
    # received, the number of packets received, the time since the policy was added, or the
    # time since the policy was last matched by a packet. When a soft limit or expiration
    # time is reached, a notification is sent to userspace via netlink
    # (`struct xfrm_user_expire`).
.. code-block:: bash

    # When a hard limit or expiration time is reached, the policy is deleted.
         limit: soft (INF)(bytes), hard (INF)(bytes)
         limit: soft (INF)(packets), hard (INF)(packets)
         expire add: soft 0(sec), hard 0(sec)
         expire use: soft 0(sec), hard 0(sec)
    lifetime current:
    # Counters for bytes and packets matched by this policy, to be used if limits have
    # been set.
         0(bytes), 0(packets)
    # Timestamps for when the policy was added and when it was last matched by a packet,
    # to be used if expiration times have been set.
         add 2024-06-17 11:24:49 use 2024-06-17 11:25:01
    # - `src 0.0.0.0` See Policy Templates for how this field is used.
    # - `dst 10.92.0.164` See Policy Templates for how this field is used.
    tmpl src 0.0.0.0 dst 10.92.0.164
    # - `proto esp` See Policy Templates for how this field is used.
    # - `spi 0x00000000(0)` See Policy Templates for how this field is used.
    # - `reqid 1(0x00000001)` See Policy Templates for how this field is used.
    # - `mode tunnel` See Policy Templates for how this field is used.
         proto esp spi 0x00000000(0) reqid 1(0x00000001) mode tunnel
    # - `level use` is the nonsensical way to indicate this template is optional, the
    #   alternative being `level required`. If no XFRM state matching the template is
    #   found, the template will be skipped if optional. Otherwise, the packet will be
    #   dropped with `XfrmInTmplMismatch`.
    # - `share any` is not implemented and will always be `any`.
         level use share any
.. code-block:: bash

    # - `enc-mask ffffffff` Bit mask defining the list of allowed encryption algorithms.
    #   See Encryption algorithms in include/uapi/linux/pfkeyv2.h for the list of possible
    #   values.
    # - `auth-mask ffffffff` Bit mask defining the list of allowed authentication
    #   algorithms. See Authentication algorithms in include/uapi/linux/pfkeyv2.h for the
    #   list of possible values.
    # - `comp-mask ffffffff` Non-implemented bit mask (was probably defined for compression
    #   algorithms).
         enc-mask ffffffff auth-mask ffffffff comp-mask ffffffff

.. code-block:: bash

    $ ip xfrm state
    # - `src 10.92.1.189` is the IP address to match against the packets' source IP addresses.
    # - `dst 10.92.0.164` is the IP address to match against the packets' destination IP addresses.
    src 10.92.1.189 dst 10.92.0.164
    # - `proto esp` states the IPsec protocol to use.
    # - `spi 0x00000000(0)` is the Security Parameter Index. A tag to distinguish between
    #   multiple IPsec streams that may be using different algorithms and/or keys.
    #   Particularly useful during key rotations.
    # - `reqid 1(0x00000001)` is an ID only used to ensure the XFRM policy template and the
    #   state match. It doesn't seem to be used for anything else in the kernel.
    # - `mode tunnel` states whether the packet is encapsulated (`tunnel`) or if the ESP
    #   header is simply added to the existing packet (`transport`).
         proto esp spi 0x00000003(3) reqid 1(0x00000001) mode tunnel
    # - `replay-window 0` size of the replay window used for the anti-replay checks (i.e.,
    #   toleration setting).
    # - `seq 0x00000000`
    # - `flag (0x00000000)` holds various flags including `XFRM_STATE_ESN` (0x80) for ESN
    #   mode.
         replay-window 0 seq 0x00000000 flag (0x00000000)
    # - `mark 0x4db50d00/0xffff0f00` are the value and mask used to match against the
    #   packets' marks.
.. code-block:: bash

    # - `output-mark 0xd00/0xffffff00` are the value and mask to apply to the packets'
    #   marks after they have been encrypted or decrypted.
         mark 0x4db50d00/0xffff0f00 output-mark 0xd00/0xffffff00
    # - `aead rfc4106(gcm(aes))` are the type and name of the algorithm in use.
    # - `0x856f15d0ccabe682286b4286bccf5d595b88b168 (160 bits)` is the key and its size.
    #   It's of course sensitive information that should be treated as such.
    # - `128` is the ICV length. Which lengths are supported depends on the algorithm in use.
         aead rfc4106(gcm(aes)) 0x856f15d0ccabe682286b4286bccf5d595b88b168 (160 bits) 128
    # - `seq 0x0` holds the current receive-side sequence number, for the anti-replay check.
    # - `oseq 0x0` is the last emitted sequence number. If this number overflows (on 32
    #   bits), packets are dropped and the error counter `XfrmOutStateSeqError` is
    #   increased. In ESN mode, this sequence number is coded on 64 bits.
    # - `bitmap 0x00000000` tracks the sequence numbers that have already been seen in the
    #   replay window.
         anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
    # - `sel src 0.0.0.0/0 dst 0.0.0.0/0` is an additional filter applying to the decrypted
    #   packets, to ensure the inner packets are coming and going where you expect.
    # - `uid 0` this field appears to be unused (`user` in `struct xfrm_selector`).
         sel src 0.0.0.0/0 dst 0.0.0.0/0
                uid 0
    lifetime config:
    # Various limits and expiration times for the state, based on the number of bytes
    # received, the number of packets received, the time since the state was added, or the
    # time since the state was last used for a packet. When a soft limit or expiration time
    # is reached, a notification is sent to userspace via netlink
    # (`struct xfrm_user_expire`). When a hard limit or expiration time is reached, the
    # state is deleted.
.. code-block:: bash

         limit: soft (INF)(bytes), hard (INF)(bytes)
         limit: soft (INF)(packets), hard (INF)(packets)
         expire add: soft 0(sec), hard 0(sec)
         expire use: soft 0(sec), hard 0(sec)
    lifetime current:
    # Counters for bytes and packets matched by this policy, to be used if limits have
    # been set.
         20124(bytes), 83(packets)
    # Timestamps for when the policy was added and when it was last matched by a packet,
    # to be used if expiration times have been set.
         add 2024-06-17 11:15:48 use 2024-06-17 11:16:02
    stats:
    # - `replay-window 0` is incremented whenever a packet is received with a sequence
    #   number outside the window.
    # - `replay 0` is incremented whenever a packet is received with a sequence number in
    #   the replay window that was already observed.
    # - `failed 0` (full name `integrity_failed` on the kernel's side) is incremented when
    #   the checksums for authentication or encryption headers are incorrect.
    #   `XfrmInStateProtoError` is always incremented when this counter is incremented.
         replay-window 0 replay 0 failed 0

XFRM Errors
===========

All XFRM errors correspond to packet drops. Some of them may also be
associated with per-state counters increasing. ``CONFIG_XFRM_STATISTICS``
is required to see these error counters in ``/proc/net/xfrm_stat``.

- **XfrmInError:** If the kernel fails to allocate memory during
  encryption.
- **XfrmInBufferError:**

  - If a packet is going through too many XFRM states. The maximum is
    set to ``XFRM_MAX_DEPTH`` (6).
  - If too many XFRM policy templates apply to a packet. The maximum is
    also set to ``XFRM_MAX_DEPTH`` (6).

- **XfrmInHdrError:**

  - If the SPI portion of the packet is malformed.
  - If the outer IP header is malformed.
- **XfrmInNoStates:** If no XFRM IN state was found that matches the AH
  or ESP packet ingressing on the INPUT chain.
- **XfrmInStateProtoError:**

  - If the AH or ESP checksum is incorrect.
  - If the packet's IPsec protocol (ex., AH, ESP) doesn't match the
    protocol specified by the XFRM state.
  - Also includes all protocol-specific errors (ex., from
    ``esp_input``) listed below:

    - If decryption/encryption fails (ex., because the key specified in
      the XFRM IN state doesn't match the key with which the packet was
      encrypted).
    - If the protocol headers (ex., ESP) or trailers are malformed.
    - If there is not enough memory to perform encryption/decryption.

- **XfrmInStateModeError:** If the packet is in IPsec tunnel mode, but
  the matched XFRM state is in transport mode.
- **XfrmInStateSeqError:** If the anti-replay check rejected the
  packet. If the check failed because the sequence number was outside
  the window, the ``replay-window`` counter of the associated XFRM
  state will be incremented. If it failed because the sequence number
  was seen already, the ``replay`` counter is incremented instead.
- **XfrmInStateExpired:** There can be a delay between when a state
  expires (hard limits) and when it's actually deleted. During that
  time, matching packets are dropped with ``XfrmInStateExpired`` on
  ingress.
- **XfrmInStateMismatch:**

  - If the encapsulation protocol of the XFRM state (ex., ``espinudp``
    in the ``encap`` field of ``ip xfrm state``) doesn't match the
    encapsulation protocol of the packet.
  - If the decrypted packet doesn't match the selector (``sel`` field)
    of the used XFRM state.
- **XfrmInStateInvalid:** If the received packet matched an XFRM state
  that is being deleted or that expired.
- **XfrmInTmplMismatch:**

  - If a packet matches an XFRM policy with a non-optional template,
    but the template doesn't match any of the XFRM states used to
    decrypt the packet (yes, a packet can be decoded multiple times).
  - If an XFRM state with ``mode tunnel`` was used on the packet and it
    doesn't match any XFRM policy template.

- **XfrmInNoPols:** If the ingressing packet doesn't match any XFRM
  policy and the default action is set to ``block``. See ``ip xfrm
  policy {get,set}default`` to view and set the default XFRM policy
  actions.
- **XfrmInPolBlock:** If the packet matches an XFRM IN policy with
  ``action block``.
- **XfrmOutError:**

  - If the kernel fails to allocate memory during encryption.
  - In some cases, if the packet to encrypt is malformed.

- **XfrmOutBundleCheckError:** Unused.
- **XfrmOutNoStates:** If the packet matched an XFRM OUT policy, but no
  XFRM state was found that matches the policy's template.
- **XfrmOutStateProtoError:** If a protocol-specific (ex., ESP)
  encryption error happens.
- **XfrmOutStateModeError:** If the packet exceeds the MTU once
  encapsulated and it shouldn't be fragmented.
- **XfrmOutStateSeqError:** The output sequence number (``oseq``) of an
  XFRM state reached its maximum value, ``UINT32_MAX`` when not using
  ESN mode.
- **XfrmOutStateExpired:** There can be a delay between when a state
  expires (hard limits) and when it's actually deleted. During that
  time, matching packets are dropped with ``XfrmOutStateExpired`` on
  egress.
- **XfrmOutPolBlock:** If the packet matches an XFRM OUT policy with
  ``action block``.
- **XfrmOutPolDead:** Unused. ``XfrmOutStateInvalid`` is reported
  instead for XFRM states that are in the process of being deleted.
- **XfrmOutPolError:**

  - If too many XFRM policy templates apply to a packet.
    The maximum is also set to ``XFRM_MAX_DEPTH`` (6).

  - If no XFRM state is found for a non-optional template of the
    matching XFRM policy.

- **XfrmFwdHdrError:** If the packet is malformed when going through
  the FWD policy check.
- **XfrmOutStateInvalid:** If the egressing packet matched an XFRM
  state that is being deleted or that expired.
- **XfrmOutStateDirError:** If the direction of the XFRM state found
  during the lookup is defined and isn't ``XFRM_SA_DIR_OUT``. Only on
  kernels v6.10 and newer.
- **XfrmInStateDirError:** If the direction of the XFRM state found
  during the lookup is defined and isn't ``XFRM_SA_DIR_IN``. Only on
  kernels v6.10 and newer.

Performance Considerations
==========================

This section describes the data structures used to hold the XFRM
policies and states. This is useful to understand when dealing with a
large number of states and policies, as the information they hold can
help improve indexing and speed up the lookups. When dealing with
thousands of policies and states, the lookup cost can become
non-negligible, even when compared to the encryption/decryption cost.

Data Structure for XFRM Policies
--------------------------------

XFRM policies are stored in a rather complex data structure made of
multiple red-black trees and hash tables. At the root, everything is
contained in a `resizable hashtable`_ indexed by network namespace, IP
family, direction, and interface (in case XFRM interfaces are used).
Each entry in this resizable hash table contains several red-black
trees, which themselves hold the XFRM policies. Those entries are
represented by the structure ``xfrm_pol_inexact_bin``.

.. _resizable hashtable: https://lwn.net/Articles/751974

.. image:: /images/xfrm_policies_data_structure.png
    :align: center

Once ``xfrm_pol_inexact_bin`` has been retrieved (based on the current
IP family, namespace, and direction), each of its red-black trees is
looked up using the source and destination IP addresses.
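The two-level structure described above can be sketched with plain
Python dicts standing in for the kernel's resizable hashtable and
red-black trees. The namespace name, CIDRs, and bin layout below are
invented for the example; only the lookup shape (bin key first, then
address selectors) mirrors the real structure.

```python
# Simplified sketch of the XFRM policy store: a hash table keyed by
# (namespace, family, direction) whose entries -- the "bins" -- hold
# policies looked up by source/destination address selectors.
from ipaddress import ip_address, ip_network

# One bin per (netns, family, direction).
bins = {
    ("netns0", "ipv4", "out"): [
        {"src": "10.92.0.0/16", "dst": "0.0.0.0/0", "action": "allow"},
        {"src": "0.0.0.0/0", "dst": "10.92.0.0/16", "action": "allow"},
    ],
}


def lookup(netns, family, direction, src_ip, dst_ip):
    # Step 1: find the bin for this namespace/family/direction.
    bin_ = bins.get((netns, family, direction), [])
    # Step 2: keep the policies whose selectors match the packet.
    return [p for p in bin_
            if ip_address(src_ip) in ip_network(p["src"])
            and ip_address(dst_ip) in ip_network(p["dst"])]


candidates = lookup("netns0", "ipv4", "out", "10.92.1.189", "10.92.0.164")
assert len(candidates) == 2   # both selectors match this packet
```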
The ``root_s`` tree contains policies sorted by source IP addresses;
the ``root_d`` tree contains policies sorted by destination IP
addresses. In addition, leaf nodes of the ``root_d`` tree also contain
another tree with policies sorted by source IP addresses. That allows
the lookups into ``root_s`` and ``root_d`` to return three lists of
candidate ``(src_ip; dst_ip)`` policies from the leaf nodes:

- A list of ``(src_ip; any)`` candidates from ``root_s``.
- A list of ``(any; dst_ip)`` candidates from ``root_d``.
- A list of ``(src_ip; dst_ip)`` candidates from the trees pointed to
  by the leaf nodes of ``root_d``.

These three lists of candidate XFRM policies are completed by a list of
``(any; any)`` candidates directly stored in the
``xfrm_pol_inexact_bin`` entry. Note that an XFRM policy will only be
present in one of the four candidate lists, according to its source and
destination CIDRs.

These four lists of candidate XFRM policies are then evaluated. The
kernel iterates through each list, looking for the highest-priority
(lowest ``priority`` number) candidate that matches the packet. If two
policies match and have the same priority, the newest one is preferred.
It's also only during this linear evaluation of candidates that the
packet mark is compared with the policy marks.

Data Structure for XFRM States
------------------------------

XFRM states are organized in four hash tables, with different XFRM
fields used for indexing and different purposes:

- ``net->xfrm.state_bydst`` is indexed by source and destination IP
  addresses as well as reqid.
- ``net->xfrm.state_bysrc`` is indexed only by source and destination
  IP addresses.
- ``net->xfrm.state_byspi`` is indexed by destination IP address, SPI,
  and protocol.
- ``net->xfrm.state_byseq`` is indexed by sequence number only.
``net->xfrm.state_byspi`` is used when looking up an XFRM state for
ingressing packets. This makes sense as a way to speed up the search:
each XFRM state is encouraged to have its own SPI (cf., `RFC4301`_,
section 4.1), and the encrypted packets carry the SPI.

.. _RFC4301: https://datatracker.ietf.org/doc/html/rfc4301

When searching for the XFRM state that corresponds to an XFRM policy
template (before encryption), ``net->xfrm.state_bydst`` is used. That
makes sense because the indexing information is what the XFRM policy
template provides. That hash table is typically also the one used when
iterating through all XFRM states (ex., when flushing them), but any
hash table would do the job for that.

``net->xfrm.state_bysrc`` and ``net->xfrm.state_byseq`` are used for
various other management tasks, such as looking up an XFRM state to
update, answering a netlink query from the user, or checking for
existing states before adding a new one.
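The idea of keeping one set of states reachable through several indexes
can be sketched as follows. This is an illustrative Python model (the
kernel uses hash tables over the same ``struct xfrm_state`` objects);
the addresses and SPI values are taken from the example output earlier
in this document.

```python
# Illustrative sketch: one list of XFRM states, indexed twice.
# `by_spi` answers ingress lookups (the packet carries dst, SPI, and
# protocol); `by_dst` resolves policy templates before encryption
# (the template provides src, dst, and reqid).
states = [
    {"src": "10.92.1.189", "dst": "10.92.0.164",
     "spi": 3, "proto": "esp", "reqid": 1},
]

by_spi = {(s["dst"], s["spi"], s["proto"]): s for s in states}
by_dst = {(s["src"], s["dst"], s["reqid"]): s for s in states}

# Ingress: look up by what the encrypted packet carries.
assert by_spi[("10.92.0.164", 3, "esp")]["reqid"] == 1

# Egress: look up by what the matching policy template provides.
assert by_dst[("10.92.1.189", "10.92.0.164", 1)]["spi"] == 3
```

Both indexes point at the same state objects, so updating a state
through one index is visible through the other, just as in the kernel.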
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _bpf_guide:

***************************
BPF and XDP Reference Guide
***************************

.. note:: This documentation section is targeted at developers and
          users who want to understand BPF and XDP in great technical
          depth. While reading this reference guide may help broaden
          your understanding of Cilium, it is not a requirement to use
          Cilium. Please refer to the :ref:`getting_started` guide and
          :ref:`ebpf_datapath` for a higher-level introduction.

BPF is a highly flexible and efficient virtual-machine-like construct
in the Linux kernel that allows bytecode to be executed at various hook
points in a safe manner. It is used in a number of Linux kernel
subsystems, most prominently networking, tracing, and security (e.g.
sandboxing).

Although BPF has existed since 1992, this document covers the extended
Berkeley Packet Filter (eBPF) version, which first appeared in kernel
3.18 and renders the original version, these days referred to as
"classic" BPF (cBPF), mostly obsolete. cBPF is known to many as the
packet filter language used by tcpdump. Nowadays, the Linux kernel runs
eBPF only, and loaded cBPF bytecode is transparently translated into an
eBPF representation in the kernel before program execution. This
documentation will generally refer to the term BPF unless explicit
differences between eBPF and cBPF are being pointed out.

Even though the name Berkeley Packet Filter hints at a
packet-filtering-specific purpose, the instruction set is generic and
flexible enough these days that there are many use cases for BPF apart
from networking. See :ref:`bpf_users` for a list of projects which use
BPF. Cilium uses BPF heavily in its data path; see
:ref:`ebpf_datapath` for further information.
The goal of this chapter is to provide a BPF reference guide in order
to gain an understanding of BPF, its networking-specific use including
loading BPF programs with tc (traffic control) and XDP (eXpress Data
Path), and to aid with developing Cilium's BPF templates.

.. toctree::
    :maxdepth: 2

    architecture
    toolchain
    debug_and_test
    progtypes
    resources
https://github.com/cilium/cilium/blob/main//Documentation/reference-guides/bpf/index.rst
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _bpf_dev:

Development Tools
=================

Current user space tooling, introspection facilities, and kernel
control knobs around BPF are discussed in this section.

.. note:: The tooling and infrastructure around BPF is still rapidly
          evolving and thus may not provide a complete picture of all
          available tools.

Development Environment
-----------------------

A step-by-step guide for setting up a development environment for BPF
can be found below for both Fedora and Ubuntu. This will guide you
through building, installing, and testing a development kernel as well
as building and installing iproute2.

The step of manually building iproute2 and the Linux kernel is usually
not necessary, given that major distributions already ship recent
enough kernels by default, but it would be needed for testing
bleeding-edge versions or contributing BPF patches to iproute2 and to
the Linux kernel, respectively. Similarly, for debugging and
introspection purposes, building bpftool is optional but recommended.

.. tabs::

    .. group-tab:: Fedora

        The following applies to Fedora 25 or later:

        .. code-block:: shell-session

            $ sudo dnf install -y git gcc ncurses-devel elfutils-libelf-devel bc \
              openssl-devel libcap-devel clang llvm graphviz bison flex glibc-static

        .. note:: If you are running some other Fedora derivative and
                  ``dnf`` is missing, try using ``yum`` instead.

    .. group-tab:: Ubuntu

        The following applies to Ubuntu 17.04 or later:

        .. code-block:: shell-session

            $ sudo apt-get install -y make gcc libssl-dev bc libelf-dev libcap-dev \
              clang gcc-multilib llvm libncurses5-dev git pkg-config libmnl-dev bison flex \
              graphviz

    .. group-tab:: openSUSE Tumbleweed

        The following applies to openSUSE Tumbleweed and openSUSE Leap
        15.0 or later:

        .. code-block:: shell-session

            $ sudo zypper install -y git gcc ncurses-devel libelf-devel bc libopenssl-devel \
              libcap-devel clang llvm graphviz bison flex glibc-devel-static

Compiling the Kernel
````````````````````

Development of new BPF features for the Linux kernel happens inside the
``net-next`` git tree, and the latest BPF fixes land in the ``net``
tree. The following command will obtain the kernel source for the
``net-next`` tree through git:

.. code-block:: shell-session

    $ git clone git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git

If the git commit history is not of interest, then ``--depth 1`` will
clone the tree much faster by truncating the git history to only the
most recent commit.

In case the ``net`` tree is of interest, it can be cloned from this
url:

.. code-block:: shell-session

    $ git clone git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git

There are dozens of tutorials on the Internet on how to build Linux
kernels; one good resource is the Kernel Newbies website
(https://kernelnewbies.org/KernelBuild) that can be followed with one
of the two git trees mentioned above.

Make sure that the generated ``.config`` file contains the following
``CONFIG_*`` entries for running BPF. These entries are also needed for
Cilium.

::

    CONFIG_CGROUP_BPF=y
    CONFIG_BPF=y
    CONFIG_BPF_SYSCALL=y
    CONFIG_NET_SCH_INGRESS=m
    CONFIG_NET_CLS_BPF=m
    CONFIG_NET_CLS_ACT=y
    CONFIG_BPF_JIT=y
    CONFIG_LWTUNNEL_BPF=y
    CONFIG_HAVE_EBPF_JIT=y
    CONFIG_BPF_EVENTS=y
    CONFIG_TEST_BPF=m

Some of the entries cannot be adjusted through ``make menuconfig``. For
example, ``CONFIG_HAVE_EBPF_JIT`` is selected automatically if a given
architecture comes with an eBPF JIT. In this specific case,
``CONFIG_HAVE_EBPF_JIT`` is optional but highly recommended. An
architecture without an eBPF JIT compiler will need to fall back to the
in-kernel interpreter, with the cost of being less efficient at
executing BPF instructions.
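A quick way to verify a ``.config`` against the list above is to parse
it and check each required option. The helper below is a hypothetical
script (not part of the kernel tree or Cilium); the subset of options
it checks and the sample config are chosen for illustration.

```python
# Hypothetical helper: check a kernel .config text for the
# BPF-related CONFIG_* entries listed above.
REQUIRED = {
    "CONFIG_BPF": {"y"},
    "CONFIG_BPF_SYSCALL": {"y"},
    "CONFIG_NET_SCH_INGRESS": {"y", "m"},   # built-in or module
    "CONFIG_NET_CLS_BPF": {"y", "m"},
    "CONFIG_NET_CLS_ACT": {"y"},
    "CONFIG_BPF_JIT": {"y"},
    "CONFIG_HAVE_EBPF_JIT": {"y"},
    "CONFIG_BPF_EVENTS": {"y"},
}


def missing_options(config_text: str) -> list[str]:
    """Return the required options absent or misconfigured."""
    present = {}
    for line in config_text.splitlines():
        line = line.strip()
        if line.startswith("CONFIG_") and "=" in line:
            key, _, value = line.partition("=")
            present[key] = value
    return [k for k, ok in REQUIRED.items() if present.get(k) not in ok]


# Truncated sample config: everything not listed here is missing.
sample = "CONFIG_BPF=y\nCONFIG_BPF_SYSCALL=y\nCONFIG_BPF_JIT=y\n"
assert "CONFIG_NET_CLS_BPF" in missing_options(sample)
assert "CONFIG_BPF" not in missing_options(sample)
```

In practice you would feed it the contents of your kernel ``.config``
(or a decompressed ``/proc/config.gz``) instead of the sample string.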
Verifying the Setup
```````````````````

After you have booted into the newly compiled kernel, navigate to the
BPF selftest suite in order to test BPF functionality (the current
working directory points to the root of the cloned git tree):

.. code-block:: shell-session

    $ cd tools/testing/selftests/bpf/
    $ make
    $ sudo ./test_verifier

The verifier tests print out all the current checks being performed.
https://github.com/cilium/cilium/blob/main//Documentation/reference-guides/bpf/toolchain.rst
The summary at the end of running all tests will dump information of
test successes and failures:

::

    Summary: 847 PASSED, 0 SKIPPED, 0 FAILED

.. note:: For kernel releases 4.16+ the BPF selftest has a dependency
          on LLVM 6.0+ caused by the BPF function calls which do not
          need to be inlined anymore. See section
          :ref:`bpf_to_bpf_calls` or the cover letter mail from the
          kernel patch (https://lwn.net/Articles/741773/) for more
          information. Not every BPF program has a dependency on LLVM
          6.0+ if it does not use this new feature. If your
          distribution does not provide LLVM 6.0+ you may compile it by
          following the instructions in the :ref:`tooling_llvm`
          section.

In order to run through all BPF selftests, the following command is
needed:

.. code-block:: shell-session

    $ sudo make run_tests

If you see any failures, please contact us on `Cilium Slack`_ with the
full test output.

Compiling iproute2
``````````````````

Similar to the ``net`` (fixes only) and ``net-next`` (new features)
kernel trees, iproute2 is split into two separate git trees, namely
``iproute2`` and ``iproute2-next``. The ``iproute2`` repository is
based on the ``net`` tree and the ``iproute2-next`` repository is based
against the ``net-next`` kernel tree. This is necessary so that changes
in header files can be synchronized in the iproute2 tree.

To clone the stable ``iproute2`` repository:

.. code-block:: shell-session

    $ git clone https://git.kernel.org/pub/scm/network/iproute2/iproute2.git

Similarly, to clone the mentioned development ``iproute2-next`` tree:

.. code-block:: shell-session

    $ git clone https://git.kernel.org/pub/scm/network/iproute2/iproute2-next.git

After that, proceed with the build and installation:

.. code-block:: shell-session

    $ cd iproute2/
    $ ./configure --prefix=/usr
    TC schedulers
     ATM    no

    libc has setns: yes
    SELinux support: yes
    ELF support: yes
    libmnl support: no
    Berkeley DB: no

    docs: latex: no
     WARNING: no docs can be built from LaTeX files
     sgml2html: no
     WARNING: no HTML docs can be built from SGML

    $ make
    [...]
    $ sudo make install

Ensure that the ``configure`` script shows ``ELF support: yes``, so
that iproute2 can process ELF files from LLVM's BPF back end. libelf
was listed earlier in the instructions for installing the dependencies
in the case of Fedora and Ubuntu.

Compiling bpftool
`````````````````

bpftool is an essential tool for debugging and introspection of BPF
programs and maps. It is part of the kernel tree and available under
``tools/bpf/bpftool/``. Make sure to have cloned either the ``net`` or
``net-next`` kernel tree as described earlier. In order to build and
install bpftool, the following steps are required:

.. code-block:: shell-session

    $ cd tools/bpf/bpftool/
    $ make
    Auto-detecting system features:
    ...                        libbfd: [ on  ]
    ...        disassembler-four-args: [ OFF ]

      CC       xlated_dumper.o
      CC       prog.o
      CC       common.o
      CC       cgroup.o
      CC       main.o
      CC       json_writer.o
      CC       cfg.o
      CC       map.o
      CC       jit_disasm.o
      CC       disasm.o
    make[1]: Entering directory '/home/foo/trees/net/tools/lib/bpf'

    Auto-detecting system features:
    ...                        libelf: [ on  ]
    ...                           bpf: [ on  ]

      CC       libbpf.o
      CC       bpf.o
      CC       nlattr.o
      LD       libbpf-in.o
      LINK     libbpf.a
    make[1]: Leaving directory '/home/foo/trees/bpf/tools/lib/bpf'
      LINK     bpftool

    $ sudo make install

.. _tooling_llvm:

LLVM
----

LLVM is currently the only compiler suite providing a BPF back end. gcc
does not support BPF at this point. The BPF back end was merged into
LLVM's 3.7 release. Major distributions enable the BPF back end by
default when they package LLVM, therefore installing clang and llvm is
sufficient on most recent distributions to start compiling C into BPF
object files.
The typical workflow is that BPF programs are written in C, compiled by LLVM into object / ELF files, which are parsed
by user space BPF ELF loaders (such as iproute2 or others) and pushed into
the kernel through the BPF system call. The kernel verifies the BPF
instructions and JITs them, returning a new file descriptor for the
program, which then can be attached to a subsystem (e.g. networking). If
supported, the subsystem could then further offload the BPF program to
hardware (e.g. NIC).

For LLVM, BPF target support can be checked, for example, through the
following:

.. code-block:: shell-session

    $ llc --version
    LLVM (http://llvm.org/):
      LLVM version 3.8.1
      Optimized build.
      Default target: x86_64-unknown-linux-gnu
      Host CPU: skylake

      Registered Targets:
        [...]
        bpf      - BPF (host endian)
        bpfeb    - BPF (big endian)
        bpfel    - BPF (little endian)
        [...]

By default, the ``bpf`` target uses the endianness of the CPU it compiles
on, meaning that if the CPU's endianness is little endian, the program is
represented in little endian format as well, and if the CPU's endianness is
big endian, the program is represented in big endian. This also matches the
runtime behavior of BPF, which is generic and uses the endianness of the
CPU it runs on, so that no architecture is disadvantaged by either format.

For cross-compilation, the two targets ``bpfeb`` and ``bpfel`` were
introduced; thanks to them, BPF programs can be compiled on a node running
in one endianness (e.g. little endian on x86) and run on a node in another
endianness format (e.g. big endian on arm). Note that the front end (clang)
needs to run in the target endianness as well.

Using ``bpf`` as a target is the preferred way in situations where no
mixture of endianness applies.
For example, compilation on ``x86_64`` results in the same output for the
targets ``bpf`` and ``bpfel`` due to being little endian, therefore scripts
triggering a compilation also do not have to be endian aware.

A minimal, stand-alone XDP drop program might look like the following
example (``xdp-example.c``):

.. code-block:: c

    #include <linux/bpf.h>

    #ifndef __section
    # define __section(NAME)                  \
       __attribute__((section(NAME), used))
    #endif

    __section("prog")
    int xdp_drop(struct xdp_md *ctx)
    {
        return XDP_DROP;
    }

    char __license[] __section("license") = "GPL";

It can then be compiled and loaded into the kernel as follows:

.. code-block:: shell-session

    $ clang -O2 -Wall --target=bpf -c xdp-example.c -o xdp-example.o
    # ip link set dev em1 xdp obj xdp-example.o

.. note:: Attaching an XDP BPF program to a network device as above requires
          Linux 4.11 with a device that supports XDP, or Linux 4.12 or
          later.

For the generated object file LLVM (>= 3.9) uses the official BPF machine
value, that is, ``EM_BPF`` (decimal: ``247`` / hex: ``0xf7``). In this
example, the program has been compiled with the ``bpf`` target under
``x86_64``, therefore ``LSB`` (as opposed to ``MSB``) is shown regarding
endianness:

.. code-block:: shell-session

    $ file xdp-example.o
    xdp-example.o: ELF 64-bit LSB relocatable, *unknown arch 0xf7* version 1 (SYSV), not stripped

``readelf -a xdp-example.o`` will dump further information about the ELF
file, which can sometimes be useful for introspecting generated section
headers, relocation entries and the symbol table.

In the unlikely case where clang and LLVM need to be compiled from scratch,
the following commands can be used:

.. code-block:: shell-session

    $ git clone https://github.com/llvm/llvm-project.git
    $ cd llvm-project
    $ mkdir build
    $ cd build
    $ cmake -DLLVM_ENABLE_PROJECTS=clang -DLLVM_TARGETS_TO_BUILD="BPF;X86" -DBUILD_SHARED_LIBS=OFF -DCMAKE_BUILD_TYPE=Release -DLLVM_BUILD_RUNTIME=OFF -G "Unix Makefiles" ../llvm
    $ make -j $(getconf _NPROCESSORS_ONLN)
    $ ./bin/llc --version
    LLVM (http://llvm.org/):
      LLVM version x.y.zsvn
      Optimized build.
      Default target: x86_64-unknown-linux-gnu
      Host CPU: skylake

      Registered Targets:
        bpf    - BPF (host endian)
        bpfeb  - BPF (big endian)
        bpfel  - BPF (little endian)
        x86    - 32-bit X86: Pentium-Pro and above
        x86-64 - 64-bit X86: EM64T and AMD64

    $ export PATH=$PWD/bin:$PATH   # add to ~/.bashrc

Make sure that ``--version`` mentions ``Optimized build.``, otherwise the
compilation time for programs when having LLVM in debugging mode will
significantly increase (e.g. by 10x or more).

For debugging, clang can generate the assembler output as follows:

.. code-block:: shell-session

    $ clang -O2 -S -Wall --target=bpf -c xdp-example.c -o xdp-example.S
    $ cat xdp-example.S
        .text
        .section    prog,"ax",@progbits
        .globl      xdp_drop
        .p2align    3
    xdp_drop:                             # @xdp_drop
    # BB#0:
        r0 = 1
        exit

        .section    license,"aw",@progbits
        .globl    __license               # @__license
    __license:
        .asciz    "GPL"

Starting from LLVM's release 6.0, there is also assembler parser support.
You can program using BPF assembler directly, then use llvm-mc to assemble
it into an object file. For example, you can assemble the xdp-example.S
listed above back into an object file using:

.. code-block:: shell-session

    $ llvm-mc -triple bpf -filetype=obj -o xdp-example.o xdp-example.S

Furthermore, more recent LLVM versions (>= 4.0) can also store debugging
information in DWARF format into the object file. This can be done through
the usual workflow by adding ``-g`` for compilation.

.. code-block:: shell-session

    $ clang -O2 -g -Wall --target=bpf -c xdp-example.c -o xdp-example.o
    $ llvm-objdump -S --no-show-raw-insn xdp-example.o

    xdp-example.o:        file format ELF64-BPF

    Disassembly of section prog:
    xdp_drop:
    ; {
        0:        r0 = 1
    ; return XDP_DROP;
        1:        exit

The ``llvm-objdump`` tool can then annotate the assembler output with the
original C code used in the compilation. The trivial example in this case
does not contain much C code, however, the line numbers shown as ``0:`` and
``1:`` correspond directly to the kernel's verifier log.

This means that in case BPF programs get rejected by the verifier,
``llvm-objdump`` can help to correlate the instructions back to the
original C code, which is highly useful for analysis.

.. code-block:: shell-session

    # ip link set dev em1 xdp obj xdp-example.o verb

    Prog section 'prog' loaded (5)!
     - Type:         6
     - Instructions: 2 (0 over limit)
     - License:      GPL

    Verifier analysis:

    0: (b7) r0 = 1
    1: (95) exit
    processed 2 insns

As can be seen in the verifier analysis, the ``llvm-objdump`` output dumps
the same BPF assembler code as the kernel. Leaving out the
``--no-show-raw-insn`` option will also dump the raw ``struct bpf_insn`` as
hex in front of the assembly:

.. code-block:: shell-session

    $ llvm-objdump -S xdp-example.o

    xdp-example.o:        file format ELF64-BPF

    Disassembly of section prog:
    xdp_drop:
    ; {
       0:       b7 00 00 00 01 00 00 00     r0 = 1
    ; return XDP_DROP;
       1:       95 00 00 00 00 00 00 00     exit

For LLVM IR debugging, the compilation process for BPF can be split into
two steps, generating a binary LLVM IR intermediate file
``xdp-example.bc``, which can later on be passed to llc:

.. code-block:: shell-session

    $ clang -O2 -Wall --target=bpf -emit-llvm -c xdp-example.c -o xdp-example.bc
    $ llc xdp-example.bc -march=bpf -filetype=obj -o xdp-example.o

The generated LLVM IR can also be dumped in a human readable format
through:

.. code-block:: shell-session

    $ clang -O2 -Wall -emit-llvm -S -c xdp-example.c -o -

LLVM can attach debug information such as the description of used data
types in the program to the generated BPF
object file. By default, this is in DWARF format. A heavily simplified
version used by BPF is called BTF (BPF Type Format). The resulting DWARF
can be converted into BTF and is later on loaded into the kernel through
BPF object loaders. The kernel will then verify the BTF data for
correctness and keep track of the data types the BTF data is containing.

BPF maps can then be annotated with key and value types out of the BTF data
such that a later dump of the map exports the map data along with the
related type information. This allows for better introspection, debugging
and value pretty printing. Note that BTF data is a generic debugging data
format and as such any DWARF to BTF converted data can be loaded (e.g. the
kernel's vmlinux DWARF data could be converted to BTF and loaded). The
latter is in particular useful for BPF tracing in the future.

In order to generate BTF from DWARF debugging information, elfutils
(>= 0.173) is needed. If that is not available, then adding the
``-mattr=dwarfris`` option to the ``llc`` command is required during
compilation:

.. code-block:: shell-session

    $ llc -march=bpf -mattr=help |& grep dwarfris
    dwarfris - Disable MCAsmInfo DwarfUsesRelocationsAcrossSections.
    [...]

The reason for using ``-mattr=dwarfris`` is that the flag ``dwarfris``
(``dwarf relocation in section``) disables DWARF cross-section relocations
between DWARF and the ELF's symbol table since libdw does not have proper
BPF relocation support, and therefore tools like ``pahole`` would otherwise
not be able to properly dump structures from the object.
elfutils (>= 0.173) implements proper BPF relocation support and therefore
the same can be achieved without the ``-mattr=dwarfris`` option. Dumping
the structures from the object file could be done from either DWARF or BTF
information. ``pahole`` uses the LLVM emitted DWARF information at this
point, however, future ``pahole`` versions could rely on BTF if available.

For converting DWARF into BTF, a recent pahole version (>= 1.12) is
required. A recent pahole version can also be obtained from its official
git repository if not available from one of the distribution packages:

.. code-block:: shell-session

    $ git clone https://git.kernel.org/pub/scm/devel/pahole/pahole.git

``pahole`` comes with the option ``-J`` to convert DWARF into BTF from an
object file. ``pahole`` can be probed for BTF support as follows (note that
the ``llvm-objcopy`` tool is required for ``pahole`` as well, so check its
presence, too):

.. code-block:: shell-session

    $ pahole --help | grep BTF
    -J, --btf_encode           Encode as BTF

Generating debugging information also requires the front end to generate
source level debug information by passing ``-g`` to the ``clang`` command
line. Note that ``-g`` is needed independently of whether ``llc``'s
``dwarfris`` option is used. Full example for generating the object file:

.. code-block:: shell-session

    $ clang -O2 -g -Wall --target=bpf -emit-llvm -c xdp-example.c -o xdp-example.bc
    $ llc xdp-example.bc -march=bpf -mattr=dwarfris -filetype=obj -o xdp-example.o

Alternatively, by using clang only to build a BPF program with debugging
information (again, the dwarfris flag can be omitted when having a proper
elfutils version):

.. code-block:: shell-session

    $ clang --target=bpf -O2 -g -c -Xclang -target-feature -Xclang +dwarfris -c xdp-example.c -o xdp-example.o

After successful compilation ``pahole`` can be used to properly dump
structures of the BPF program based on the DWARF information:

.. code-block:: shell-session

    $ pahole xdp-example.o
    struct xdp_md {
            __u32                      data;                 /*     0     4 */
            __u32                      data_end;             /*     4     4 */
            __u32                      data_meta;            /*     8     4 */

            /* size: 12, cachelines: 1, members: 3 */
            /* last cacheline: 12 bytes */
    };

Through the option ``-J`` ``pahole`` can eventually generate the BTF from
DWARF. In the object file the DWARF data will still be retained alongside
the newly added BTF data. Full ``clang`` and ``pahole`` example combined:

.. code-block:: shell-session

    $ clang --target=bpf -O2 -Wall -g -c -Xclang -target-feature -Xclang +dwarfris -c xdp-example.c -o xdp-example.o
    $ pahole -J xdp-example.o

The presence of a ``.BTF`` section can be seen through the ``readelf``
tool:

.. code-block:: shell-session

    $ readelf -a xdp-example.o
    [...]
      [18] .BTF              PROGBITS         0000000000000000  00000671
    [...]

BPF loaders such as iproute2 will detect and load the BTF section, so that
BPF maps can be annotated with type information.

LLVM by default uses the BPF base instruction set for generating code in
order to make sure that the generated object file can also be loaded with
older kernels such as long-term stable kernels (e.g. 4.9+).

However, LLVM has a ``-mcpu`` selector for the BPF back end in order to
select different versions of the BPF instruction set, namely instruction
set extensions on top of the BPF base instruction set in order to generate
more efficient and smaller code.

Available ``-mcpu`` options can be queried through:

.. code-block:: shell-session

    $ llc -march bpf -mcpu=help
    Available CPUs for this target:

      generic - Select the generic processor.
      probe   - Select the probe processor.
      v1      - Select the v1 processor.
      v2      - Select the v2 processor.
    [...]

The ``generic`` processor is the default processor, which is also the base
instruction set ``v1`` of BPF.
Options ``v1`` and ``v2`` are typically useful in an environment where the
BPF program is being cross compiled and the target host where the program
is loaded differs from the one where it is compiled (and thus available BPF
kernel features might differ as well).

The recommended ``-mcpu`` option, which is also used by Cilium internally,
is ``-mcpu=probe``! Here, the LLVM BPF back end queries the kernel for the
availability of BPF instruction set extensions and when found available,
LLVM will use them for compiling the BPF program whenever appropriate.

A full command line example with llc's ``-mcpu=probe``:

.. code-block:: shell-session

    $ clang -O2 -Wall --target=bpf -emit-llvm -c xdp-example.c -o xdp-example.bc
    $ llc xdp-example.bc -march=bpf -mcpu=probe -filetype=obj -o xdp-example.o

Generally, LLVM IR generation is architecture independent. There are,
however, a few differences when using ``clang --target=bpf`` versus leaving
``--target=bpf`` out and thus using clang's default target which, depending
on the underlying architecture, might be ``x86_64``, ``arm64`` or others.

Quoting from the kernel's ``Documentation/bpf/bpf_devel_QA.txt``:

* BPF programs may recursively include header file(s) with file scope
  inline assembly codes. The default target can handle this well, while
  bpf target may fail if bpf backend assembler does not understand these
  assembly codes, which is true in most cases.

* When compiled without -g, additional elf sections, e.g., ``.eh_frame``
  and ``.rela.eh_frame``, may be present in the object file with default
  target, but not with bpf target.

* The default target may turn a C switch statement into a switch table
  lookup and jump operation. Since the switch table is placed in the
  global read-only section, the bpf program will fail to load. The bpf
  target does not support switch table optimization. The clang option
  ``-fno-jump-tables`` can be used to disable
  switch table generation.

* For clang ``--target=bpf``, it is guaranteed that pointer or long /
  unsigned long types will always have a width of 64 bit, no matter
  whether underlying clang binary or default target (or kernel) is 32
  bit. However, when native clang target is used, then it will compile
  these types based on the underlying architecture's conventions, meaning
  in case of 32 bit architecture, pointer or long / unsigned long types
  e.g. in BPF context structure will have width of 32 bit while the BPF
  LLVM back end still operates in 64 bit.

The native target is mostly needed in tracing for the case of walking the
kernel's ``struct pt_regs`` that maps CPU registers, or other kernel
structures where the CPU's register width matters. In all other cases such
as networking, the use of ``clang --target=bpf`` is the preferred choice.

Also, LLVM started to support 32-bit subregisters and BPF ALU32
instructions since LLVM's release 7.0. A new code generation attribute
``alu32`` is added. When it is enabled, LLVM will try to use 32-bit
subregisters whenever possible, typically when there are operations on
32-bit types. The associated ALU instructions with 32-bit subregisters will
become ALU32 instructions. For example, for the following sample code:

.. code-block:: shell-session

    $ cat 32-bit-example.c
        void cal(unsigned int *a, unsigned int *b, unsigned int *c)
        {
          unsigned int sum = *a + *b;
          *c = sum;
        }

At default code generation, the assembler looks like:

.. code-block:: shell-session

    $ clang --target=bpf -emit-llvm -S 32-bit-example.c
    $ llc -march=bpf 32-bit-example.ll
    $ cat 32-bit-example.s
    cal:
      r1 = *(u32 *)(r1 + 0)
      r2 = *(u32 *)(r2 + 0)
      r2 += r1
      *(u32 *)(r3 + 0) = r2
      exit

64-bit registers are used, hence the addition means 64-bit addition. Now,
if you enable the new 32-bit subregisters support by specifying
``-mattr=+alu32``, then the assembler looks like:

.. code-block:: shell-session

    $ llc -march=bpf -mattr=+alu32 32-bit-example.ll
    $ cat 32-bit-example.s
    cal:
      w1 = *(u32 *)(r1 + 0)
      w2 = *(u32 *)(r2 + 0)
      w2 += w1
      *(u32 *)(r3 + 0) = w2
      exit

The ``w`` register, meaning 32-bit subregister, will be used instead of the
64-bit ``r`` register.

Enabling 32-bit subregisters might help reduce type extension instruction
sequences. It could also help the kernel eBPF JIT compiler for 32-bit
architectures, for which register pairs are used to model the 64-bit eBPF
registers and extra instructions are needed for manipulating the high 32
bits. Given that a read from a 32-bit subregister is guaranteed to read
from the low 32 bits only, even though a write still needs to clear the
high 32 bits, if the JIT compiler knows that the definition of one register
only has subregister reads, then the instructions for setting the high 32
bits of the destination could be eliminated.

When writing C programs for BPF, there are a couple of pitfalls to be aware
of, compared to usual application development with C. The following items
describe some of the differences for the BPF model:

1. **Everything needs to be inlined, there are no function calls (on older
   LLVM versions) or shared library calls available.**

   Shared libraries, etc cannot be used with BPF. However, common library
   code used in BPF programs can be placed into header files and included
   in the main programs. For example, Cilium makes heavy use of it (see
   ``bpf/lib/``). However, this still allows for including header files,
   for example, from
   the kernel or other libraries and reuse their static inline functions or
   macros / definitions.

   Unless a recent kernel (4.16+) and LLVM (6.0+) are used, where BPF to
   BPF function calls are supported, LLVM needs to compile and inline the
   entire code into a flat sequence of BPF instructions for a given program
   section. In such a case, best practice is to use an annotation like
   ``__inline`` for every library function as shown below. The use of
   ``always_inline`` is recommended, since the compiler could still decide
   to uninline large functions that are only annotated as ``inline``.

   In case the latter happens, LLVM will generate a relocation entry into
   the ELF file, which BPF ELF loaders such as iproute2 cannot resolve and
   will thus produce an error since only BPF maps are valid relocation
   entries which loaders can process.

   .. code-block:: c

       #include <linux/bpf.h>

       #ifndef __section
       # define __section(NAME)                  \
          __attribute__((section(NAME), used))
       #endif

       #ifndef __inline
       # define __inline                         \
          inline __attribute__((always_inline))
       #endif

       static __inline int foo(void)
       {
           return XDP_DROP;
       }

       __section("prog")
       int xdp_drop(struct xdp_md *ctx)
       {
           return foo();
       }

       char __license[] __section("license") = "GPL";

2. **Multiple programs can reside inside a single C file in different
   sections.**

   C programs for BPF make heavy use of section annotations. A C file is
   typically structured into 3 or more sections. BPF ELF loaders use these
   names to extract and prepare the relevant information in order to load
   the programs and maps through the bpf system call.
   For example, iproute2 uses ``maps`` and ``license`` as default section
   names to find metadata needed for map creation and the license for the
   BPF program, respectively. On program creation time the latter is pushed
   into the kernel as well, and enables some of the helper functions which
   are exposed as GPL only in case the program also holds a GPL compatible
   license, for example ``bpf_ktime_get_ns()``, ``bpf_probe_read()`` and
   others.

   The remaining section names are specific for BPF program code, for
   example, the below code has been modified to contain two program
   sections, ``ingress`` and ``egress``. The toy example code demonstrates
   that both can share a map and common static inline helpers such as the
   ``account_data()`` function.

   The ``xdp-example.c`` example has been modified to a ``tc-example.c``
   example that can be loaded with tc and attached to a netdevice's ingress
   and egress hook. It accounts the transferred bytes into a map called
   ``acc_map``, which has two map slots, one for traffic accounted on the
   ingress hook, one on the egress hook.

   .. code-block:: c

       #include <stdint.h>
       #include <linux/bpf.h>
       #include <linux/pkt_cls.h>

       #include "bpf_elf.h"

       #ifndef __section
       # define __section(NAME)                  \
          __attribute__((section(NAME), used))
       #endif

       #ifndef __inline
       # define __inline                         \
          inline __attribute__((always_inline))
       #endif

       #ifndef lock_xadd
       # define lock_xadd(ptr, val)              \
          ((void)__sync_fetch_and_add(ptr, val))
       #endif

       #ifndef BPF_FUNC
       # define BPF_FUNC(NAME, ...)              \
          (*NAME)(__VA_ARGS__) = (void *)BPF_FUNC_##NAME
       #endif

       static void *BPF_FUNC(map_lookup_elem, void *map, const void *key);

       struct bpf_elf_map acc_map __section("maps") = {
           .type           = BPF_MAP_TYPE_ARRAY,
           .size_key       = sizeof(uint32_t),
           .size_value     = sizeof(uint32_t),
           .pinning        = PIN_GLOBAL_NS,
           .max_elem       = 2,
       };

       static __inline int account_data(struct __sk_buff *skb, uint32_t dir)
       {
           uint32_t *bytes;

           bytes = map_lookup_elem(&acc_map, &dir);
           if (bytes)
                   lock_xadd(bytes, skb->len);

           return TC_ACT_OK;
       }

       __section("ingress")
       int tc_ingress(struct __sk_buff *skb)
       {
           return account_data(skb, 0);
       }

       __section("egress")
       int tc_egress(struct __sk_buff *skb)
       {
           return account_data(skb, 1);
       }

       char __license[] __section("license") = "GPL";

   The example also demonstrates a couple of other things which are useful
   to be aware of
   when developing programs. The code includes kernel headers, standard C
   headers and an iproute2 specific header containing the definition of
   ``struct bpf_elf_map``. iproute2 has a common BPF ELF loader and as such
   the definition of ``struct bpf_elf_map`` is the very same for XDP and tc
   typed programs.

   A ``struct bpf_elf_map`` entry defines a map in the program and contains
   all relevant information (such as key / value size, etc) needed to
   generate a map which is used from the two BPF programs. The structure
   must be placed into the ``maps`` section, so that the loader can find
   it. There can be multiple map declarations of this type with different
   variable names, but all must be annotated with ``__section("maps")``.

   The ``struct bpf_elf_map`` is specific to iproute2. Different BPF ELF
   loaders can have different formats, for example, the libbpf in the
   kernel source tree, which is mainly used by ``perf``, has a different
   specification. iproute2 guarantees backwards compatibility for
   ``struct bpf_elf_map``. Cilium follows the iproute2 model.

   The example also demonstrates how BPF helper functions are mapped into
   the C code and being used. Here, ``map_lookup_elem()`` is defined by
   mapping this function into the ``BPF_FUNC_map_lookup_elem`` enum value
   which is exposed as a helper in ``uapi/linux/bpf.h``. When the program
   is later loaded into the kernel, the verifier checks whether the passed
   arguments are of the expected type and re-points the helper call into a
   real function call.
   Moreover, ``map_lookup_elem()`` also demonstrates how maps can be passed
   to BPF helper functions. Here, ``&acc_map`` from the ``maps`` section is
   passed as the first argument to ``map_lookup_elem()``.

   Since the defined array map is global, the accounting needs to use an
   atomic operation, which is defined as ``lock_xadd()``. LLVM maps
   ``__sync_fetch_and_add()`` as a built-in function to the BPF atomic add
   instruction, that is, ``BPF_STX | BPF_XADD | BPF_W`` for word sizes.

   Last but not least, the ``struct bpf_elf_map`` tells that the map is to
   be pinned as ``PIN_GLOBAL_NS``. This means that tc will pin the map into
   the BPF pseudo file system as a node. By default, it will be pinned to
   ``/sys/fs/bpf/tc/globals/acc_map`` for the given example. Due to the
   ``PIN_GLOBAL_NS``, the map will be placed under
   ``/sys/fs/bpf/tc/globals/``. ``globals`` acts as a global namespace that
   spans across object files. If the example used ``PIN_OBJECT_NS``, then
   tc would create a directory that is local to the object file. For
   example, different C files with BPF code could have the same ``acc_map``
   definition as above with a ``PIN_GLOBAL_NS`` pinning. In that case, the
   map will be shared among BPF programs originating from various object
   files. ``PIN_NONE`` would mean that the map is not placed into the BPF
   file system as a node, and as a result, will not be accessible from user
   space after tc quits. It would also mean that tc creates two separate
   map instances for each program, since it cannot retrieve a previously
   pinned map under that name. The ``acc_map`` part from the mentioned path
   is the name of the map as specified in the source code.

   Thus, upon loading of the ``ingress`` program, tc will find that no such
   map exists in the BPF file system and creates a new one. On success, the
   map will also be pinned, so that when the ``egress`` program is loaded
   through tc, it will find that such map already
   exists in the BPF file system and will reuse that for the ``egress``
   program. The loader also makes sure in case maps exist with the same
   name that also their properties (key / value size, etc) match.

   Just like tc can retrieve the same map, also third party applications
   can use the ``BPF_OBJ_GET`` command from the bpf system call in order to
   create a new file descriptor pointing to the same map instance, which
   can then be used to lookup / update / delete map elements.

   The code can be compiled and loaded via iproute2 as follows:

   .. code-block:: shell-session

       $ clang -O2 -Wall --target=bpf -c tc-example.c -o tc-example.o

       # tc qdisc add dev em1 clsact
       # tc filter add dev em1 ingress bpf da obj tc-example.o sec ingress
       # tc filter add dev em1 egress bpf da obj tc-example.o sec egress

       # tc filter show dev em1 ingress
       filter protocol all pref 49152 bpf
       filter protocol all pref 49152 bpf handle 0x1 tc-example.o:[ingress] direct-action id 1 tag c5f7825e5dac396f

       # tc filter show dev em1 egress
       filter protocol all pref 49152 bpf
       filter protocol all pref 49152 bpf handle 0x1 tc-example.o:[egress] direct-action id 2 tag b2fd5adc0f262714

       # mount | grep bpf
       sysfs on /sys/fs/bpf type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
       bpf on /sys/fs/bpf type bpf (rw,relatime,mode=0700)

       # tree /sys/fs/bpf/
       /sys/fs/bpf/
       +-- ip -> /sys/fs/bpf/tc/
       +-- tc
       |   +-- globals
       |       +-- acc_map
       +-- xdp -> /sys/fs/bpf/tc/

       4 directories, 1 file

   As soon as packets pass the ``em1`` device, counters from the BPF map
   will be increased.

3. **There are no global variables allowed.**

   For the reasons already mentioned in point 1, BPF cannot have global
   variables as often used in normal C programs.
However, there is a work-around in that the program can simply use a BPF map of type ``BPF_MAP_TYPE_PERCPU_ARRAY`` with just a single slot of arbitrary value size. This works because, during execution, BPF programs are guaranteed to never get preempted by the kernel and can therefore use the single map entry as a scratch buffer for temporary data, for example, to extend beyond the stack limitation. This also functions across tail calls, since a tail call has the same guarantees with regards to preemption. Otherwise, for holding state across multiple BPF program runs, normal BPF maps can be used.

4. **There are no const strings or arrays allowed.**

Defining ``const`` strings or other arrays in the BPF C program does not work for the same reasons as pointed out in points 1 and 3: relocation entries would be generated in the ELF file, which loaders reject since they are not part of the ABI towards loaders (loaders also cannot fix up such entries, as this would require large rewrites of the already compiled BPF sequence). In the future, LLVM might detect these occurrences and throw an early error to the user.

Helper functions such as ``trace_printk()`` can be worked around as follows:

.. code-block:: c

    static void BPF_FUNC(trace_printk, const char *fmt, int fmt_size, ...);

    #ifndef printk
    # define printk(fmt, ...)                                      \
       ({                                                          \
          char ____fmt[] = fmt;                                    \
          trace_printk(____fmt, sizeof(____fmt), ##__VA_ARGS__);   \
       })
    #endif

The program can then use the macro naturally like ``printk("skb len:%u\n", skb->len);``. The output will then be written to the trace pipe. ``tc exec bpf dbg`` can be used to retrieve the
https://github.com/cilium/cilium/blob/main//Documentation/reference-guides/bpf/toolchain.rst
messages from there.

The use of the ``trace_printk()`` helper function has a couple of disadvantages and is thus not recommended for production usage. Constant strings like the ``"skb len:%u\n"`` need to be loaded into the BPF stack each time the helper function is called, and BPF helper functions are also limited to a maximum of 5 arguments. This leaves room for only 3 additional variables which can be passed for dumping.

Therefore, despite being helpful for quick debugging, it is recommended (for networking programs) to use the ``skb_event_output()`` or the ``xdp_event_output()`` helper, respectively. They allow for passing custom structs from the BPF program to the perf event ring buffer along with an optional packet sample. For example, Cilium's monitor makes use of these helpers in order to implement a debugging framework, notifications for network policy violations, etc. These helpers pass the data through a lockless memory mapped per-CPU ``perf`` ring buffer, and are thus significantly faster than ``trace_printk()``.

5. **Use of LLVM built-in functions for memset()/memcpy()/memmove()/memcmp().**

Since BPF programs cannot perform any function calls other than those to BPF helpers, common library code needs to be implemented as inline functions. In addition, LLVM also provides some built-ins that the programs can use for constant sizes (here: ``n``) which will then always get inlined:
.. code-block:: c

    #ifndef memset
    # define memset(dest, chr, n)   __builtin_memset((dest), (chr), (n))
    #endif

    #ifndef memcpy
    # define memcpy(dest, src, n)   __builtin_memcpy((dest), (src), (n))
    #endif

    #ifndef memmove
    # define memmove(dest, src, n)  __builtin_memmove((dest), (src), (n))
    #endif

The ``memcmp()`` built-in had some corner cases where inlining did not take place due to an LLVM issue in the back end, and is therefore not recommended to be used until the issue is fixed.

6. **There are no loops available (yet).**

The BPF verifier in the kernel checks that a BPF program does not contain loops by performing a depth first search of all possible program paths, besides other control flow graph validations. The purpose is to make sure that the program is always guaranteed to terminate. A very limited form of looping is available for constant upper loop bounds by using the ``#pragma unroll`` directive. Example code that is compiled to BPF:

.. code-block:: c

    #pragma unroll
    for (i = 0; i < IPV6_MAX_HEADERS; i++) {
        switch (nh) {
        case NEXTHDR_NONE:
            return DROP_INVALID_EXTHDR;
        case NEXTHDR_FRAGMENT:
            return DROP_FRAG_NOSUPPORT;
        case NEXTHDR_HOP:
        case NEXTHDR_ROUTING:
        case NEXTHDR_AUTH:
        case NEXTHDR_DEST:
            if (skb_load_bytes(skb, l3_off + len, &opthdr, sizeof(opthdr)) < 0)
                return DROP_INVALID;

            nh = opthdr.nexthdr;
            if (nh == NEXTHDR_AUTH)
                len += ipv6_authlen(&opthdr);
            else
                len += ipv6_optlen(&opthdr);
            break;
        default:
            *nexthdr = nh;
            return len;
        }
    }

Another possibility is to use tail calls by calling into the same program again and using a ``BPF_MAP_TYPE_PERCPU_ARRAY`` map for having a local scratch space. While being dynamic, this form of looping is however limited to a maximum of 34 iterations (the initial program, plus 33 iterations from the tail calls). In the future, BPF may have some native, but limited form of implementing loops.
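The constant-size built-ins from point 5 can also be exercised in ordinary userspace C. The following is a minimal sketch (plain C, not BPF; the ``my_`` names are illustrative and only chosen here to avoid shadowing the libc functions):

.. code-block:: c

    #include <stdio.h>

    /* Userspace sketch of the constant-size built-ins from the text.
     * With a compile-time constant n, compilers expand these inline,
     * which is what makes the pattern usable from BPF C where calls
     * into a libc memset()/memcpy() are not possible. */
    #ifndef my_memset
    # define my_memset(dest, chr, n)  __builtin_memset((dest), (chr), (n))
    #endif
    #ifndef my_memcpy
    # define my_memcpy(dest, src, n)  __builtin_memcpy((dest), (src), (n))
    #endif

    int main(void)
    {
        char src[8] = "abcdefg";
        char dst[8];

        my_memset(dst, 0, sizeof(dst));   /* inline expansion, constant n */
        my_memcpy(dst, src, sizeof(src)); /* likewise */
        printf("%s\n", dst);              /* prints abcdefg */
        return 0;
    }

Compiling with ``-O2`` and inspecting the object code shows no calls to ``memset``/``memcpy``; the operations are expanded in place.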
7. **Partitioning programs with tail calls.**

Tail calls provide the flexibility to atomically alter program behavior during runtime by jumping from one BPF program into another. In order to select the next program, tail calls make use of program array maps (``BPF_MAP_TYPE_PROG_ARRAY``), and pass the map as well as the index to the next program to
jump to. There is no return to the old program after the jump has been performed, and in case there was no program present at the given map index, execution continues on the original program.

For example, this can be used to implement various stages of a parser, where such stages could be updated with new parsing features during runtime. Another use case are event notifications, for example, Cilium can opt in packet drop notifications during runtime, where the ``skb_event_output()`` call is located inside the tail called program. Thus, during normal operations, the fall-through path will always be executed unless a program is added to the related map index, where the program then prepares the metadata and triggers the event notification to a user space daemon.

Program array maps are quite flexible, enabling also individual actions to be implemented for programs located in each map index. For example, the root program attached to XDP or tc could perform an initial tail call to index 0 of the program array map, performing traffic sampling, then jumping to index 1 of the program array map, where firewalling policy is applied and the packet either dropped or further processed in index 2 of the program array map, where it is mangled and sent out of an interface again. Jumps in the program array map can, of course, be arbitrary. The kernel will eventually execute the fall-through path when the maximum tail call limit has been reached.

Minimal example extract of using tail calls:

.. code-block:: c

    [...]
    #ifndef __stringify
    # define __stringify(X)   #X
    #endif

    #ifndef __section
    # define __section(NAME)                  \
       __attribute__((section(NAME), used))
    #endif

    #ifndef __section_tail
    # define __section_tail(ID, KEY)          \
       __section(__stringify(ID) "/" __stringify(KEY))
    #endif

    #ifndef BPF_FUNC
    # define BPF_FUNC(NAME, ...)              \
       (*NAME)(__VA_ARGS__) = (void *)BPF_FUNC_##NAME
    #endif

    #define BPF_JMP_MAP_ID   1

    static void BPF_FUNC(tail_call, struct __sk_buff *skb, void *map,
                         uint32_t index);

    struct bpf_elf_map jmp_map __section("maps") = {
        .type           = BPF_MAP_TYPE_PROG_ARRAY,
        .id             = BPF_JMP_MAP_ID,
        .size_key       = sizeof(uint32_t),
        .size_value     = sizeof(uint32_t),
        .pinning        = PIN_GLOBAL_NS,
        .max_elem       = 1,
    };

    __section_tail(BPF_JMP_MAP_ID, 0)
    int looper(struct __sk_buff *skb)
    {
        printk("skb cb: %u\n", skb->cb[0]++);
        tail_call(skb, &jmp_map, 0);
        return TC_ACT_OK;
    }

    __section("prog")
    int entry(struct __sk_buff *skb)
    {
        skb->cb[0] = 0;
        tail_call(skb, &jmp_map, 0);
        return TC_ACT_OK;
    }

    char __license[] __section("license") = "GPL";

When loading this toy program, tc will create the program array and pin it to the BPF file system in the global namespace under ``jmp_map``. The BPF ELF loader in iproute2 will also recognize sections that are marked as ``__section_tail()``. The provided ``id`` in ``struct bpf_elf_map`` will be matched against the id marker in ``__section_tail()``, that is, ``BPF_JMP_MAP_ID``, and the program is therefore loaded at the user specified program array map index, which is ``0`` in this example. As a result, all provided tail call sections will be populated by the iproute2 loader into the corresponding maps. This mechanism is not specific to tc, but can be applied with any other BPF program type that iproute2 supports (such as XDP, lwt).

The generated elf contains section headers describing the map id and the entry within that map:
.. code-block:: shell-session

    $ llvm-objdump -S --no-show-raw-insn prog_array.o | less
    prog_array.o:   file format ELF64-BPF

    Disassembly of section 1/0:
    looper:
           0:       r6 = r1
           1:       r2 = *(u32 *)(r6 + 48)
           2:       r1 = r2
           3:       r1 += 1
           4:       *(u32 *)(r6 + 48) = r1
           5:       r1 = 0 ll
           7:       call -1
           8:       r1 = r6
           9:       r2 = 0 ll
          11:       r3 = 0
          12:       call 12
          13:       r0 = 0
          14:       exit
    Disassembly of section prog:
    entry:
           0:       r2 = 0
           1:       *(u32 *)(r1 + 48) = r2
           2:       r2 = 0 ll
           4:       r3 = 0
           5:       call 12
           6:       r0 = 0
           7:       exit

In this case, ``section 1/0`` indicates that the ``looper()`` function resides in map id ``1`` at position ``0``.

The pinned map can be retrieved by user space applications (e.g. the Cilium daemon), but also by tc itself in order to update the map with new programs. Updates happen atomically; the initial entry programs that are triggered first from the various subsystems are also updated atomically.

Example for tc to perform tail call map updates:

.. code-block:: shell-session

    # tc exec bpf graft m:globals/jmp_map key 0 obj new.o sec foo

In case iproute2 needs to update the pinned program array, the ``graft`` command can be used. By pointing it to ``globals/jmp_map``, tc will update the map at index / key ``0`` with a new program residing in the object file ``new.o`` under section ``foo``.

8. **Limited stack space of maximum 512 bytes.**

Stack space in BPF programs is limited to only 512 bytes, which needs to be taken into careful consideration when implementing BPF programs in C. However, as mentioned earlier in point 3, a ``BPF_MAP_TYPE_PERCPU_ARRAY`` map with a single entry can be used in order to enlarge scratch buffer space.

9. **Use of BPF inline assembly possible.**

LLVM 6.0 or later allows use of inline assembly for BPF for the rare cases where it might be needed. The following (nonsense) toy example shows a 64 bit atomic add.
Due to lack of documentation, LLVM source code in ``lib/Target/BPF/BPFInstrInfo.td`` as well as ``test/CodeGen/BPF/`` might be helpful for providing some additional examples. Test code:

.. code-block:: c

    #include <linux/bpf.h>

    #ifndef __section
    # define __section(NAME)                  \
       __attribute__((section(NAME), used))
    #endif

    __section("prog")
    int xdp_test(struct xdp_md *ctx)
    {
        __u64 a = 2, b = 3, *c = &a;
        /* just a toy xadd example to show the syntax */
        asm volatile("lock *(u64 *)(%0+0) += %1"
                     : "=r"(c)
                     : "r"(b), "0"(c));
        return a;
    }

    char __license[] __section("license") = "GPL";

The above program is compiled into the following sequence of BPF instructions:

::

    Verifier analysis:

    0: (b7) r1 = 2
    1: (7b) *(u64 *)(r10 -8) = r1
    2: (b7) r1 = 3
    3: (bf) r2 = r10
    4: (07) r2 += -8
    5: (db) lock *(u64 *)(r2 +0) += r1
    6: (79) r0 = *(u64 *)(r10 -8)
    7: (95) exit
    processed 8 insns (limit 131072), stack depth 8

10. **Remove struct padding with aligning members by using #pragma pack.**

In modern compilers, data structures are aligned by default to access memory efficiently. Structure members are packed to memory addresses and padding is added for proper alignment with the processor word size (e.g. 8-byte for 64-bit processors, 4-byte for 32-bit processors). Because of this, the size of a struct may often grow larger than expected.

.. code-block:: c

    struct called_info {
        u64 start;  // 8-byte
        u64 end;    // 8-byte
        u32 sector; // 4-byte
    }; // size of 20-byte ?

    printf("size of %d-byte\n", sizeof(struct called_info)); // size of 24-byte

    // Actual compiled composition of struct called_info
    // 0x0(0)                   0x8(8)
    //  ↓________________________↓
    //  |        start (8)       |
    //  |________________________|
    //  |         end  (8)       |
    //  |________________________|
    //  |  sector(4) |  PADDING  | <= address aligned to 8
    //  |____________|___________|    with 4-byte PADDING.

The BPF verifier in the kernel checks the stack boundary so that a BPF program does not access outside of the boundary or an uninitialized stack area. Using a struct with padding as a map value will cause an ``invalid indirect read from stack`` failure on ``bpf_prog_load()``.

Example code:

.. code-block:: c

    struct called_info {
        u64 start;
        u64 end;
        u32 sector;
    };

    struct bpf_map_def SEC("maps") called_info_map = {
        .type = BPF_MAP_TYPE_HASH,
        .key_size = sizeof(long),
        .value_size = sizeof(struct called_info),
        .max_entries = 4096,
    };

    SEC("kprobe/submit_bio")
    int submit_bio_entry(struct pt_regs *ctx)
    {
        char fmt[] = "submit_bio(bio=0x%lx) called: %llu\n";
        u64 start_time = bpf_ktime_get_ns();
        long bio_ptr = PT_REGS_PARM1(ctx);
        struct called_info called_info = {
            .start = start_time,
            .end = 0,
            .sector = 0
        };

        bpf_map_update_elem(&called_info_map, &bio_ptr, &called_info, BPF_ANY);
        bpf_trace_printk(fmt, sizeof(fmt), bio_ptr, start_time);
        return 0;
    }

Corresponding output on ``bpf_load_program()``::

    bpf_load_program() err=13
    0: (bf) r6 = r1
    ...
    19: (b7) r1 = 0
    20: (7b) *(u64 *)(r10 -72) = r1
    21: (7b) *(u64 *)(r10 -80) = r7
    22: (63) *(u32 *)(r10 -64) = r1
    ...
    30: (85) call bpf_map_update_elem#2
    invalid indirect read from stack off -80+20 size 24

At ``bpf_prog_load()``, the eBPF verifier ``bpf_check()`` is called, and it checks the stack boundary by calling ``check_func_arg() -> check_stack_boundary()``. The above error shows that ``struct called_info`` is compiled to a 24-byte size, and the message says that reading data from +20 is an invalid indirect read. As discussed earlier, the address 0x14(20) is the place where the PADDING is.

.. code-block:: c

    // Actual compiled composition of struct called_info
    // 0x10(16)    0x14(20)    0x18(24)
    //  ↓____________↓___________↓
    //  |  sector(4) |  PADDING  | <= address aligned to 8
    //  |____________|___________|    with 4-byte PADDING.

``check_stack_boundary()`` internally loops through every ``access_size`` (24) byte from the start pointer to make sure that it is within the stack boundary and that all elements of the stack are initialized. Since the padding isn't supposed to be used, it triggers the 'invalid indirect read from stack' failure. To avoid this kind of failure, the padding must be removed from the struct.

Removing the padding by using the ``#pragma pack(n)`` directive:

.. code-block:: c

    #pragma pack(4)
    struct called_info {
        u64 start;  // 8-byte
        u64 end;    // 8-byte
        u32 sector; // 4-byte
    }; // size of 20-byte ?

    printf("size of %d-byte\n", sizeof(struct called_info)); // size of 20-byte

    // Actual compiled composition of packed struct called_info
    // 0x0(0)                   0x8(8)
    //  ↓________________________↓
    //  |        start (8)       |
    //  |________________________|
    //  |         end  (8)       |
    //  |________________________|
    //  |  sector(4) |             <= address aligned to 4
    //  |____________|                with no PADDING.

By locating ``#pragma pack(4)`` before ``struct called_info``, the compiler will align members of the struct to the lesser of 4 bytes and their natural alignment.
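The layout arithmetic above can be reproduced in ordinary userspace C. The following sketch (plain C, not BPF code; the struct names are illustrative) mirrors ``called_info`` under the default and the ``#pragma pack(4)`` layouts:

.. code-block:: c

    #include <stdio.h>
    #include <stdint.h>

    /* Default layout: tail-padded to 24 bytes on typical 64-bit ABIs,
     * since the struct alignment is 8 due to the u64 members. */
    struct called_info_padded {
        uint64_t start;
        uint64_t end;
        uint32_t sector;
    };

    /* Packed layout: #pragma pack(4) caps member alignment at 4 bytes,
     * so the tail padding after 'sector' disappears (20 bytes). */
    #pragma pack(4)
    struct called_info_packed {
        uint64_t start;
        uint64_t end;
        uint32_t sector;
    };
    #pragma pack()

    int main(void)
    {
        printf("padded: %zu bytes\n", sizeof(struct called_info_padded)); /* 24 */
        printf("packed: %zu bytes\n", sizeof(struct called_info_packed)); /* 20 */
        return 0;
    }

Note that ``#pragma pack()`` restores the default packing so later struct definitions are unaffected.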
As you can see, the size of ``struct called_info`` has shrunk to 20 bytes and the padding no longer exists. But removing the padding has downsides too. For example, the compiler will generate less optimized code: since the padding has been removed, processors will perform unaligned accesses to the structure, which might lead to performance degradation. Unaligned accesses might also get rejected by the verifier on some architectures. However, there is a way to avoid the downsides of packed
structures. Simply adding an explicit padding member ``u32 pad`` at the end resolves the same problem without packing the structure.

.. code-block:: c

    struct called_info {
        u64 start;  // 8-byte
        u64 end;    // 8-byte
        u32 sector; // 4-byte
        u32 pad;    // 4-byte
    }; // size of 24-byte ?

    printf("size of %d-byte\n", sizeof(struct called_info)); // size of 24-byte

    // Actual compiled composition of struct called_info with explicit padding
    // 0x0(0)                   0x8(8)
    //  ↓________________________↓
    //  |        start (8)       |
    //  |________________________|
    //  |         end  (8)       |
    //  |________________________|
    //  |  sector(4) |  pad (4)  | <= address aligned to 8
    //  |____________|___________|    with explicit PADDING.

11. **Accessing packet data via invalidated references**

Some networking BPF helper functions such as ``bpf_skb_store_bytes`` might change the size of a packet's data. As the verifier is not able to track such changes, any a priori reference to the data will be invalidated by the verifier. Therefore, the reference needs to be updated before accessing the data, to avoid the verifier rejecting the program. To illustrate this, consider the following snippet:
.. code-block:: c

    struct iphdr *ip4 = (struct iphdr *) skb->data + ETH_HLEN;

    skb_store_bytes(skb, l3_off + offsetof(struct iphdr, saddr), &new_saddr, 4, 0);

    if (ip4->protocol == IPPROTO_TCP) {
        // do something
    }

The verifier will reject the snippet due to the dereference of the invalidated ``ip4->protocol``:

::

    R1=pkt_end(id=0,off=0,imm=0) R2=pkt(id=0,off=34,r=34,imm=0) R3=inv0
    R6=ctx(id=0,off=0,imm=0) R7=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff))
    R8=inv4294967162 R9=pkt(id=0,off=0,r=34,imm=0) R10=fp0,call_-1
    ...
    18: (85) call bpf_skb_store_bytes#9
    19: (7b) *(u64 *)(r10 -56) = r7
    R0=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=inv(id=0,umax_value=2,var_off=(0x0; 0x3))
    R8=inv4294967162 R9=inv(id=0) R10=fp0,call_-1 fp-48=mmmm???? fp-56=mmmmmmmm
    21: (61) r1 = *(u32 *)(r9 +23)
    R9 invalid mem access 'inv'

To fix this, the reference to ``ip4`` has to be updated:

.. code-block:: c

    struct iphdr *ip4 = (struct iphdr *) skb->data + ETH_HLEN;

    skb_store_bytes(skb, l3_off + offsetof(struct iphdr, saddr), &new_saddr, 4, 0);

    ip4 = (struct iphdr *) skb->data + ETH_HLEN;

    if (ip4->protocol == IPPROTO_TCP) {
        // do something
    }

iproute2
--------

There are various front ends for loading BPF programs into the kernel such as bcc, perf, iproute2 and others. The Linux kernel source tree also provides a user space library under ``tools/lib/bpf/``, which is mainly used and driven by perf for loading BPF tracing programs into the kernel. However, the library itself is generic and not limited to perf only. bcc is a toolkit providing many useful BPF programs, mainly for tracing, which are loaded ad-hoc through a Python interface embedding the BPF C code. Syntax and semantics for implementing BPF programs slightly differ among front ends in general, though. Additionally, there are also BPF samples in the kernel source tree (``samples/bpf/``) which parse the generated object files and load the code directly through the system call interface.
This and previous sections mainly focus on the iproute2 suite's BPF front end for loading networking programs of XDP, tc or lwt type, since Cilium's programs are implemented against this BPF loader. In the future, Cilium will be equipped with a native BPF loader, but programs will remain compatible with loading through the iproute2 suite in order to facilitate development and debugging. All BPF program types supported by iproute2 share the same BPF loader logic due to having a common loader back end implemented as a library (``lib/bpf.c`` in the iproute2 source tree). The previous section on LLVM also covered some iproute2 parts related to writing BPF C programs, and later
sections in this document cover tc and XDP specific aspects of writing programs. Therefore, this section will rather focus on usage examples for loading object files with iproute2, as well as some of the generic mechanics of the loader. It does not try to provide complete coverage of all details, but enough to get started.

**1. Loading of XDP BPF object files.**

Given a BPF object file ``prog.o`` has been compiled for XDP, it can be loaded through ``ip`` to an XDP-supported netdevice called ``em1`` with the following command:

.. code-block:: shell-session

    # ip link set dev em1 xdp obj prog.o

The above command assumes that the program code resides in the default section, which is called ``prog`` in the XDP case. Should this not be the case, and the section be named differently, for example ``foobar``, then the program needs to be loaded as:

.. code-block:: shell-session

    # ip link set dev em1 xdp obj prog.o sec foobar

Note that it is also possible to load the program out of the ``.text`` section. Changing the minimal, stand-alone XDP drop program by removing the ``__section()`` annotation from the ``xdp_drop`` entry point would look like the following:

.. code-block:: c

    #include <linux/bpf.h>

    #ifndef __section
    # define __section(NAME)                  \
       __attribute__((section(NAME), used))
    #endif

    int xdp_drop(struct xdp_md *ctx)
    {
        return XDP_DROP;
    }

    char __license[] __section("license") = "GPL";

And it can be loaded as follows:

.. code-block:: shell-session

    # ip link set dev em1 xdp obj prog.o sec .text

By default, ``ip`` will throw an error in case an XDP program is already attached to the networking interface, to prevent it from being overridden by accident.
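The ``__section()`` annotation that loaders key off is plain ELF machinery and can be tried in ordinary userspace C as well. The following sketch (not BPF code; reusing the section name ``license`` purely for illustration) places a symbol into a custom ELF section, which tools like ``objdump -h`` on the resulting binary, or a loader, can then locate by name:

.. code-block:: c

    #include <stdio.h>

    #ifndef __section
    # define __section(NAME) __attribute__((section(NAME), used))
    #endif

    /* The 'used' attribute keeps the symbol even when it looks
     * unreferenced, which matters for BPF entry points that only
     * the external loader references. */
    static char my_license[] __section("license") = "GPL";

    int main(void)
    {
        /* The data is accessible like any other object; the only
         * difference is the ELF section it ends up in. */
        printf("%s\n", my_license); /* prints GPL */
        return 0;
    }

Running ``objdump -h`` on the compiled binary shows the extra ``license`` section alongside the usual ``.text`` and ``.data`` sections.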
In order to replace the currently running XDP program with a new one, the ``-force`` option must be used:

.. code-block:: shell-session

    # ip -force link set dev em1 xdp obj prog.o

Most XDP-enabled drivers today support an atomic replacement of the existing program with a new one without traffic interruption. There is always only a single program attached to an XDP-enabled driver due to performance reasons, hence a chain of programs is not supported. However, as described in the previous section, partitioning of programs can be performed through tail calls to achieve a similar use case when necessary.

The ``ip link`` command will display an ``xdp`` flag if the interface has an XDP program attached. ``ip link | grep xdp`` can thus be used to find all interfaces that have XDP running. Further introspection facilities are provided through the detailed view with ``ip -d link``, and ``bpftool`` can be used to retrieve information about the attached program based on the BPF program ID shown in the ``ip link`` dump.

In order to remove the existing XDP program from the interface, the following command must be issued:

.. code-block:: shell-session

    # ip link set dev em1 xdp off

In the case of switching a driver's operation mode from non-XDP to native XDP or vice versa, the driver typically needs to reconfigure its receive (and transmit) rings in order to ensure received packets are set up linearly within a single page for BPF to read and write into. However, once that is completed, most drivers then only need to perform an atomic replacement of the program itself when a BPF program is requested to be swapped. In total, XDP supports three operation
modes which iproute2 implements as well: ``xdpdrv``, ``xdpoffload`` and ``xdpgeneric``.

``xdpdrv`` stands for native XDP, meaning the BPF program is run directly in the driver's receive path at the earliest possible point in software. This is the normal / conventional XDP mode and requires drivers to implement XDP support, which all major 10G/40G/+ networking drivers in the upstream Linux kernel already provide.

``xdpgeneric`` stands for generic XDP and is intended as an experimental test bed for drivers which do not yet support native XDP. Given the generic XDP hook in the ingress path comes at a much later point in time, when the packet has already entered the stack's main receive path as a ``skb``, the performance is significantly lower than with processing in ``xdpdrv`` mode. ``xdpgeneric`` is therefore for the most part only interesting for experimenting, less so for production environments.

Last but not least, the ``xdpoffload`` mode is implemented by SmartNICs such as those supported by Netronome's nfp driver and allows for offloading the entire BPF/XDP program into hardware, so that the program is run on each packet reception directly on the card. This provides even higher performance than running in native XDP, although not all BPF map types or BPF helper functions are available for use compared to native XDP. The BPF verifier will reject the program in such a case and report the unsupported feature to the user. Other than staying in the realm of supported BPF features and helper functions, no special precautions have to be taken when writing BPF C programs.
When a command like ``ip link set dev em1 xdp obj [...]`` is used, the kernel will first attempt to load the program as native XDP, and in case the driver does not support native XDP, it will automatically fall back to generic XDP. Thus, by explicitly using ``xdpdrv`` instead of ``xdp``, the kernel will only attempt to load the program as native XDP and fail in case the driver does not support it, which guarantees that generic XDP is avoided altogether.

Example for enforcing a BPF/XDP program to be loaded in native XDP mode, dumping the link details and unloading the program again:

.. code-block:: shell-session

    # ip -force link set dev em1 xdpdrv obj prog.o
    # ip link show
    [...]
    6: em1: mtu 1500 xdp qdisc mq state UP mode DORMANT group default qlen 1000
        link/ether be:08:4d:b6:85:65 brd ff:ff:ff:ff:ff:ff
        prog/xdp id 1 tag 57cd311f2e27366b
    [...]
    # ip link set dev em1 xdpdrv off

Same example now for forcing generic XDP, even if the driver would support native XDP, and additionally dumping the BPF instructions of the attached dummy program through bpftool:

.. code-block:: shell-session

    # ip -force link set dev em1 xdpgeneric obj prog.o
    # ip link show
    [...]
    6: em1: mtu 1500 xdpgeneric qdisc mq state UP mode DORMANT group default qlen 1000
        link/ether be:08:4d:b6:85:65 brd ff:ff:ff:ff:ff:ff
        prog/xdp id 4 tag 57cd311f2e27366b                <-- BPF program ID 4
    [...]
    # bpftool prog dump xlated id 4                       <-- Dump of instructions running on em1
    0: (b7) r0 = 1
    1: (95) exit
    # ip link set dev em1 xdpgeneric off

And last but not least offloaded XDP, where we additionally dump program information via bpftool for retrieving general metadata:

.. code-block:: shell-session

    # ip -force link set dev em1 xdpoffload obj prog.o
And last but not least offloaded XDP, where we additionally dump program information via bpftool for retrieving general metadata:

.. code-block:: shell-session

    # ip -force link set dev em1 xdpoffload obj prog.o
    # ip link show
    [...]
    6: em1: mtu 1500 xdpoffload qdisc mq state UP mode DORMANT group default qlen 1000
        link/ether be:08:4d:b6:85:65 brd ff:ff:ff:ff:ff:ff
        prog/xdp id 8 tag 57cd311f2e27366b
    [...]
    # bpftool prog show id 8
    8: xdp tag 57cd311f2e27366b dev em1    <-- Also indicates a BPF program offloaded to em1
        loaded_at Apr 11/20:38 uid 0
        xlated 16B not jited memlock 4096B
    # ip link set dev em1 xdpoffload off

Note that it is not possible to use ``xdpdrv`` and ``xdpgeneric`` or other modes at the same time, meaning only one of the XDP operation modes must be picked.

A switch between different XDP modes, e.g. from generic to native or vice versa, is not atomically possible. Only switching programs within a specific operation mode is:

.. code-block:: shell-session

    # ip -force link set dev em1 xdpgeneric obj prog.o
    # ip -force link set dev em1 xdpoffload obj prog.o
    RTNETLINK answers: File exists
    # ip -force link set dev em1 xdpdrv obj prog.o
    RTNETLINK answers: File exists
    # ip -force link set dev em1 xdpgeneric obj prog.o    <-- Succeeds due to xdpgeneric

Switching between modes requires to first leave the current operation mode in order to then enter the new one:

.. code-block:: shell-session

    # ip -force link set dev em1 xdpgeneric obj prog.o
    # ip -force link set dev em1 xdpgeneric off
    # ip -force link set dev em1 xdpoffload obj prog.o
    # ip l
    [...]
    6: em1: mtu 1500 xdpoffload qdisc mq state UP mode DORMANT group default qlen 1000
        link/ether be:08:4d:b6:85:65 brd ff:ff:ff:ff:ff:ff
        prog/xdp id 17 tag 57cd311f2e27366b
    [...]
    # ip -force link set dev em1 xdpoffload off

**2. Loading of tc BPF object files.**

Given a BPF object file ``prog.o`` has been compiled for tc, it can be loaded through the tc command to a netdevice. Unlike XDP, there is no driver dependency for supporting attaching BPF programs to the device. Here, the netdevice is called ``em1``, and with the following command the program can be attached to the networking ``ingress`` path of ``em1``:

.. code-block:: shell-session

    # tc qdisc add dev em1 clsact
    # tc filter add dev em1 ingress bpf da obj prog.o

The first step is to set up a ``clsact`` qdisc (Linux queueing discipline). ``clsact`` is a dummy qdisc similar to the ``ingress`` qdisc, which can only hold classifier and actions, but does not perform actual queueing. It is needed in order to attach the ``bpf`` classifier. The ``clsact`` qdisc provides two special hooks called ``ingress`` and ``egress``, where the classifier can be attached to. Both ``ingress`` and ``egress`` hooks are located in central receive and transmit locations in the networking data path, where every packet on the device passes through. The ``ingress`` hook is called from ``__netif_receive_skb_core() -> sch_handle_ingress()`` in the kernel and the ``egress`` hook from ``__dev_queue_xmit() -> sch_handle_egress()``.

The equivalent for attaching the program to the ``egress`` hook looks as follows:

.. code-block:: shell-session

    # tc filter add dev em1 egress bpf da obj prog.o

The ``clsact`` qdisc is processed lockless from ``ingress`` and ``egress`` direction and can also be attached to virtual, queue-less devices such as ``veth`` devices connecting containers.

Next to the hook, the ``tc filter`` command selects ``bpf`` to be used in ``da`` (direct-action) mode. ``da`` mode is recommended and should always be specified.
It basically means that the ``bpf`` classifier does not need to call into external tc action modules, which are not necessary for ``bpf`` anyway, since all packet mangling, forwarding or other kinds of actions can already be performed inside a single BPF program, which is significantly faster.

At this point, the program has been attached and is executed once packets traverse the device. Like in XDP, should the default section name not be used, then it can be specified during load, for example, in case of section ``foobar``:

.. code-block:: shell-session

    # tc filter add dev em1 egress bpf da obj prog.o sec foobar

iproute2's BPF loader allows for using the same command line syntax across program types, hence the ``obj prog.o sec foobar`` is the same syntax as with XDP mentioned earlier.

The attached programs can be listed through the following commands:

.. code-block:: shell-session

    # tc filter show dev em1 ingress
    filter protocol all pref 49152 bpf
    filter protocol all pref 49152 bpf handle 0x1 prog.o:[ingress] direct-action id 1 tag c5f7825e5dac396f

    # tc filter show dev em1 egress
    filter protocol all pref 49152 bpf
    filter protocol all pref 49152 bpf handle 0x1 prog.o:[egress] direct-action id 2 tag b2fd5adc0f262714

The output of ``prog.o:[ingress]`` tells that program section ``ingress`` was loaded from the file ``prog.o``, and ``bpf`` operates in ``direct-action`` mode. The program ``id`` and ``tag`` are appended for each case, where the latter denotes a hash over the instruction stream which can be correlated with the object file or ``perf`` reports with stack traces, etc. Last but not least, the ``id`` represents the system-wide unique BPF program identifier that can be used along with ``bpftool`` to further inspect or dump the attached BPF program.

tc can attach more than just a single BPF program; it provides various other classifiers which can be chained together. However, attaching a single BPF program is fully sufficient since all packet operations can be contained in the program itself thanks to ``da`` (``direct-action``) mode, meaning the BPF program itself will already return the tc action verdict such as ``TC_ACT_OK``, ``TC_ACT_SHOT`` and others. For optimal performance and flexibility, this is the recommended usage.

In the above ``show`` command, tc also displays ``pref 49152`` and ``handle 0x1`` next to the BPF related output. Both are auto-generated in case they are not explicitly provided through the command line. ``pref`` denotes a priority number, which means that in case multiple classifiers are attached, they will be executed based on ascending priority, and ``handle`` represents an identifier in case multiple instances of the same classifier have been loaded under the same ``pref``. Since in case of BPF a single program is fully sufficient, ``pref`` and ``handle`` can typically be ignored.

Only in the case where it is planned to atomically replace the attached BPF programs, it would be recommended to explicitly specify ``pref`` and ``handle`` a priori on initial load, so that they do not have to be queried at a later point in time for the ``replace`` operation.
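As noted above, in ``da`` mode the program's return value is the final tc action verdict. A minimal classifier of that shape might look as follows; the ``TC_ACT_*`` values and the ``struct __sk_buff`` stand-in are inlined here purely so the sketch compiles on its own (normally they come from ``linux/pkt_cls.h`` and ``linux/bpf.h``), and the drop policy is hypothetical:

```c
/* Verdict values as defined in linux/pkt_cls.h. */
#define TC_ACT_OK    0   /* continue processing, pass the packet on */
#define TC_ACT_SHOT  2   /* drop the packet */

/* Stand-in for the kernel's struct __sk_buff context; only the
 * field this sketch touches is declared. */
struct __sk_buff {
    unsigned int len;
};

#define __section(NAME) __attribute__((section(NAME), used))

/* Hypothetical policy for illustration: shoot down runt packets,
 * pass everything else. In da mode this return value is already
 * the tc action verdict, no external action module needed. */
__section("ingress")
int tc_ingress(struct __sk_buff *skb)
{
    if (skb->len < 14)   /* smaller than an Ethernet header */
        return TC_ACT_SHOT;
    return TC_ACT_OK;
}

char __license[] __section("license") = "GPL";
```

Compiled with clang's BPF target, the section name ``ingress`` is what the ``prog.o:[ingress]`` output above refers to.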
Thus, creation becomes:

.. code-block:: shell-session

    # tc filter add dev em1 ingress pref 1 handle 1 bpf da obj prog.o sec foobar
    # tc filter show dev em1 ingress
    filter protocol all pref 1 bpf
    filter protocol all pref 1 bpf handle 0x1 prog.o:[foobar] direct-action id 1 tag c5f7825e5dac396f

And for the atomic replacement, the following can be issued for updating the existing program at the ``ingress`` hook with the new BPF program from the file ``prog.o`` in section ``foobar``:

.. code-block:: shell-session

    # tc filter replace dev em1 ingress pref 1 handle 1 bpf da obj prog.o sec foobar

Last but not least, in order to remove all attached programs from the ``ingress`` respectively ``egress`` hook, the following can be used:

.. code-block:: shell-session

    # tc filter del dev em1 ingress
    # tc filter del dev em1 egress

For removing the entire ``clsact`` qdisc from the netdevice, which implicitly also removes all attached programs from the ``ingress`` and ``egress`` hooks, the below command is provided:

.. code-block:: shell-session

    # tc qdisc del dev em1 clsact

tc BPF programs can also be offloaded if the NIC and driver have support for it, like XDP BPF programs. Netronome's nfp supported NICs offer both types of BPF offload.

.. code-block:: shell-session

    # tc qdisc add dev em1 clsact
    # tc filter replace dev em1 ingress pref 1 handle 1 bpf skip_sw da obj prog.o
    Error: TC offload is disabled on net device.
    We have an error talking to the kernel

If the above error is shown, then tc hardware offload first needs to be enabled for the device through ethtool's ``hw-tc-offload`` setting:

.. code-block:: shell-session

    # ethtool -K em1 hw-tc-offload on
    # tc qdisc add dev em1 clsact
    # tc filter replace dev em1 ingress pref 1 handle 1 bpf skip_sw da obj prog.o
    # tc filter show dev em1 ingress
    filter protocol all pref 1 bpf
    filter protocol all pref 1 bpf handle 0x1 prog.o:[classifier] direct-action skip_sw in_hw id 19 tag 57cd311f2e27366b

The ``in_hw`` flag confirms that the program has been offloaded to the NIC.

Note that BPF offloads for both tc and XDP cannot be loaded at the same time; either the tc or XDP offload option must be selected.

**3. Testing BPF offload interface via netdevsim driver.**

The netdevsim driver, which is part of the Linux kernel, provides a dummy driver which implements offload interfaces for XDP BPF and tc BPF programs and facilitates testing kernel changes or low-level user space programs implementing a control plane directly against the kernel's UAPI.

A netdevsim device can be created as follows:

.. code-block:: shell-session

    # modprobe netdevsim
    // [ID] [PORT_COUNT]
    # echo "1 1" > /sys/bus/netdevsim/new_device
    # devlink dev
    netdevsim/netdevsim1
    # devlink port
    netdevsim/netdevsim1/0: type eth netdev eth0 flavour physical
    # ip l
    [...]
    4: eth0: mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
        link/ether 2a:d5:cd:08:d1:3f brd ff:ff:ff:ff:ff:ff

After that step, XDP BPF or tc BPF programs can be test loaded as shown in the various examples earlier:

.. code-block:: shell-session

    # ip -force link set dev eth0 xdpoffload obj prog.o
    # ip l
    [...]
    4: eth0: mtu 1500 xdpoffload qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
        link/ether 2a:d5:cd:08:d1:3f brd ff:ff:ff:ff:ff:ff
        prog/xdp id 16 tag a04f5eef06a7f555

These two workflows are the basic operations to load XDP BPF and tc BPF programs with iproute2, respectively. There are various other advanced options for the BPF loader that apply both to XDP and tc; some of them are listed here. In the examples only XDP is presented for simplicity.
**1. Verbose log output even on success.**

The option ``verb`` can be appended when loading programs in order to dump the verifier log, even if no error occurred:

.. code-block:: shell-session

    # ip link set dev em1 xdp obj xdp-example.o verb

    Prog section 'prog' loaded (5)!
     - Type:         6
     - Instructions: 2 (0 over limit)
     - License:      GPL

    Verifier analysis:

    0: (b7) r0 = 1
    1: (95) exit
    processed 2 insns

**2. Load program that is already pinned in BPF file system.**

Instead of loading a program from an object file, iproute2 can also retrieve the program from the BPF file system in case some external entity pinned it there, and attach it to the device:

.. code-block:: shell-session

    # ip link set dev em1 xdp pinned /sys/fs/bpf/prog

iproute2 can also use the short form that is relative to the detected mount point of the BPF file system:

.. code-block:: shell-session

    # ip link set dev em1 xdp pinned m:prog

When loading BPF programs, iproute2 will automatically detect the mounted file system instance in order to perform pinning of nodes. In case no mounted BPF file system instance was found, then tc will automatically mount it to the default location under ``/sys/fs/bpf/``. In case an instance has already been found, then it will be used and no additional mount will be performed:

.. code-block:: shell-session

    # mkdir /var/run/bpf
    # mount --bind /var/run/bpf /var/run/bpf
    # mount -t bpf bpf /var/run/bpf
    # tc filter add dev em1 ingress bpf da obj tc-example.o sec prog
    # tree /var/run/bpf
    /var/run/bpf
    +-- ip -> /run/bpf/tc/
    +-- tc
    |   +-- globals
    |       +-- jmp_map
    +-- xdp -> /run/bpf/tc/

    4 directories, 1 file

By default tc will create an initial directory structure as shown above, where all subsystem users will point to the same location through symbolic links for the ``globals`` namespace, so that pinned BPF maps can be reused among various BPF program types in iproute2. In case the file system instance has already been mounted and an existing structure already exists, then tc will not override it. This could be the case for separating ``lwt``, ``tc`` and ``xdp`` maps in order to not share ``globals`` among all.

As briefly covered in the previous LLVM section, iproute2 installs a header file which can be included through the standard include path by BPF programs:

.. code-block:: c

    #include <iproute2/bpf_elf.h>

The purpose of this header file is to provide an API for maps and default section names used by programs. It's a stable contract between iproute2 and BPF programs. The map definition for iproute2 is ``struct bpf_elf_map``. Its members have been covered earlier in the LLVM section of this document.

When parsing the BPF object file, the iproute2 loader will walk through all ELF sections. It initially fetches ancillary sections like ``maps`` and ``license``. For ``maps``, the ``struct bpf_elf_map`` array will be checked for validity and, whenever needed, compatibility workarounds are performed. Subsequently all maps are created with the user provided information, either retrieved as a pinned object, or newly created and then pinned into the BPF file system.
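As a reminder of that contract, the map definition has roughly the following shape. This is a self-contained sketch with a local ``__u32`` typedef and an example map that is hypothetical; the authoritative definition lives in iproute2's ``bpf_elf.h``:

```c
/* Local stand-in for the kernel's __u32 so the sketch compiles
 * on its own; real programs get it from the included headers. */
typedef unsigned int __u32;

/* Map ABI shared between BPF programs and the iproute2 loader,
 * mirroring iproute2's struct bpf_elf_map. */
struct bpf_elf_map {
    __u32 type;        /* BPF_MAP_TYPE_* */
    __u32 size_key;    /* key size in bytes */
    __u32 size_value;  /* value size in bytes */
    __u32 max_elem;    /* maximum number of elements */
    __u32 flags;       /* BPF_F_* map creation flags */
    __u32 id;          /* user-chosen id for map-in-map setups */
    __u32 pinning;     /* pinning mode, see below */
    __u32 inner_id;    /* id of the inner map template */
    __u32 inner_idx;   /* slot index to populate with inner map fd */
};

/* Pinning modes understood by the loader. */
#define PIN_NONE       0  /* no pinning */
#define PIN_OBJECT_NS  1  /* pin under the object's own namespace */
#define PIN_GLOBAL_NS  2  /* pin under the shared "globals" namespace */

/* Hypothetical example: a hash map (BPF_MAP_TYPE_HASH = 1) pinned
 * into the shared globals namespace shown in the tree output above. */
struct bpf_elf_map count_map __attribute__((section("maps"), used)) = {
    .type       = 1,
    .size_key   = sizeof(__u32),
    .size_value = sizeof(__u32),
    .max_elem   = 64,
    .pinning    = PIN_GLOBAL_NS,
};
```

With ``PIN_GLOBAL_NS``, a second program loaded later that declares the same map would be handed the already-pinned object instead of a fresh one, which is what makes map sharing across tc, XDP and lwt programs work.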
Next, the loader will handle all program sections that contain ELF relocation entries for maps, meaning that BPF instructions loading map file descriptors into registers are rewritten so that the corresponding map file descriptors are encoded into the instructions' immediate value, in order for the kernel to be able to convert them later on into map kernel pointers. After that, all the programs themselves are created through the BPF system call, and tail called maps, if present, are updated with the programs' file descriptors.
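The relocation step can be illustrated with a small user space sketch: a map load is represented by a ``ld_imm64`` instruction pair, and the loader patches the map file descriptor into the immediate field while marking the source register as ``BPF_PSEUDO_MAP_FD`` so that the kernel knows to translate it into a map pointer. The ``struct bpf_insn`` layout and the two constants mirror the kernel UAPI in ``linux/bpf.h``; the ``relocate_map_load()`` helper is a hypothetical name for illustration:

```c
/* Mirror of the kernel UAPI struct bpf_insn (linux/bpf.h). */
struct bpf_insn {
    unsigned char  code;       /* opcode */
    unsigned char  dst_reg:4;  /* destination register */
    unsigned char  src_reg:4;  /* source register */
    short          off;        /* signed offset */
    int            imm;        /* signed immediate */
};

/* From the kernel UAPI: a ld_imm64 (BPF_LD | BPF_DW | BPF_IMM) with
 * src_reg = BPF_PSEUDO_MAP_FD tells the verifier that imm holds a
 * map file descriptor rather than a plain 64-bit constant. */
#define BPF_LD_IMM64_OPCODE 0x18
#define BPF_PSEUDO_MAP_FD   1

/* Hypothetical loader step: patch the fd obtained from map creation
 * (or from a pinned object) into the two-instruction ld_imm64 pair. */
static void relocate_map_load(struct bpf_insn *insn, int map_fd)
{
    insn[0].code    = BPF_LD_IMM64_OPCODE;
    insn[0].src_reg = BPF_PSEUDO_MAP_FD;
    insn[0].imm     = map_fd;  /* low 32 bits carry the fd */
    insn[1].imm     = 0;       /* high 32 bits unused for fds */
}
```

At load time the kernel then resolves the fd in the immediate into the actual map kernel pointer, which is why the program only verifies and runs with valid map references.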
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _bpf_users:

Further Reading
===============

The lists of docs, projects, talks, papers, and further reading materials mentioned here are likely not complete. Thus, feel free to open pull requests to complete the list.

Kernel Developer FAQ
--------------------

Under ``Documentation/bpf/``, the Linux kernel provides two FAQ files that are mainly targeted for kernel developers involved in the BPF subsystem.

* **BPF Devel FAQ:** this document provides mostly information around the patch submission process as well as the BPF kernel tree, stable tree and bug reporting workflows, questions around BPF's extensibility and interaction with LLVM, and more.

  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/bpf/bpf_devel_QA.rst

* **BPF Design FAQ:** this document tries to answer frequently asked questions around BPF design decisions related to the instruction set, verifier, calling convention, JITs, etc.

  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/bpf/bpf_design_QA.rst

Projects using BPF
------------------

The following list includes a selection of open source projects making use of BPF or providing tooling for BPF. In this context the eBPF instruction set is specifically meant, as opposed to projects utilizing the legacy cBPF:

**Tracing**

* **BCC**

  BCC stands for BPF Compiler Collection, and its key feature is to provide a set of easy to use and efficient kernel tracing utilities all based upon BPF programs hooking into kernel infrastructure based upon kprobes, kretprobes, tracepoints, uprobes, uretprobes as well as USDT probes. The collection provides close to a hundred tools targeting different layers across the stack, from applications and system libraries to the various kernel subsystems, in order to analyze a system's performance characteristics or problems. Additionally, BCC provides an API in order to be used as a library for other projects.

  https://github.com/iovisor/bcc

* **bpftrace**

  bpftrace is a DTrace-style dynamic tracing tool for Linux that uses LLVM as a back end to compile scripts to BPF bytecode and makes use of BCC for interacting with the kernel's BPF tracing infrastructure. It provides a higher-level language for implementing tracing scripts compared to native BCC.

  https://github.com/ajor/bpftrace

* **perf**

  The perf tool, which is developed by the Linux kernel community as part of the kernel source tree, provides a way to load tracing BPF programs through the conventional perf record subcommand, where the aggregated data from BPF can be retrieved and post-processed in perf.data, for example through perf script and other means.

  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/perf

* **ply**

  ply is a tracing tool that follows the 'Little Language' approach of yore, and compiles ply scripts into Linux BPF programs that are attached to kprobes and tracepoints in the kernel. The scripts have a C-like syntax, heavily inspired by DTrace and by extension awk. ply keeps dependencies to a minimum and only requires flex and bison at build time, and only libc at runtime.

  https://github.com/wkz/ply

* **systemtap**

  systemtap is a scripting language and tool for extracting, filtering and summarizing data in order to diagnose and analyze performance or functional problems. It comes with a BPF back end called stapbpf which translates the script directly into BPF without the need of an additional compiler and injects the probe into the kernel. Thus, unlike stap's kernel modules, this neither has external dependencies nor requires loading kernel modules.

  https://sourceware.org/git/gitweb.cgi?p=systemtap.git;a=summary

* **PCP**

  Performance Co-Pilot (PCP) is a system performance and analysis framework which is able to collect metrics through a variety of agents as well as analyze collected systems' performance metrics in real-time or by using historical data. With pmdabcc, PCP has a BCC based performance metrics domain agent which extracts data from the kernel via BPF and BCC.

  https://github.com/performancecopilot/pcp
https://github.com/cilium/cilium/blob/main//Documentation/reference-guides/bpf/resources.rst
* **Weave Scope**

  Weave Scope is a cloud monitoring tool collecting data about processes, networking connections or other system data by making use of BPF in combination with kprobes. Weave Scope works on top of the gobpf library in order to load BPF ELF files into the kernel, and comes with a tcptracer-bpf tool which monitors connect, accept and close calls in order to trace TCP events.

  https://github.com/weaveworks/scope

**Networking**

* **Cilium**

  Cilium provides and transparently secures network connectivity and load-balancing between application workloads such as application containers or processes. Cilium operates at Layer 3/4 to provide traditional networking and security services as well as Layer 7 to protect and secure use of modern application protocols such as HTTP, gRPC and Kafka. It is integrated into orchestration frameworks such as Kubernetes. BPF is the foundational part of Cilium that operates in the kernel's networking data path.

  https://github.com/cilium/cilium

* **Suricata**

  Suricata is a network IDS, IPS and NSM engine, and utilizes BPF as well as XDP in three different areas: as a BPF filter in order to process or bypass certain packets, as a BPF based load balancer in order to allow for programmable load balancing, and as an XDP-based bypass or dropping mechanism at high packet rates.

  https://suricata.readthedocs.io/en/suricata-5.0.2/capture-hardware/ebpf-xdp.html

  https://github.com/OISF/suricata

* **systemd**

  systemd allows for IPv4/v6 accounting as well as implementing network access control for its systemd units based on BPF's cgroup ingress and egress hooks. Accounting is based on packets / bytes, and ACLs can be specified as address prefixes for allow / deny rules. More information can be found at:

  http://0pointer.net/blog/ip-accounting-and-access-lists-with-systemd.html

  https://github.com/systemd/systemd

* **iproute2**

  iproute2 offers the ability to load BPF programs as LLVM generated ELF files into the kernel. iproute2 supports both XDP BPF programs and tc BPF programs through a common BPF loader backend. The tc and ip command line utilities enable loader and introspection functionality for the user.

  https://git.kernel.org/pub/scm/network/iproute2/iproute2.git/

* **p4c-xdp**

  p4c-xdp presents a P4 compiler backend targeting BPF and XDP. P4 is a domain specific language describing how packets are processed by the data plane of a programmable network element such as NICs, appliances or switches, and with the help of p4c-xdp P4 programs can be translated into BPF C programs which can be compiled by clang / LLVM and loaded as BPF programs into the kernel at the XDP layer for high performance packet processing.

  https://github.com/vmware/p4c-xdp

**Others**

* **LLVM**

  clang / LLVM provides the BPF back end in order to compile C BPF programs into BPF instructions contained in ELF files. The LLVM BPF back end is developed alongside the BPF core infrastructure in the Linux kernel and maintained by the same community. clang / LLVM is a key part in the toolchain for developing BPF programs.

  https://llvm.org/

* **libbpf**

  libbpf is a generic BPF library which is developed by the Linux kernel community as part of the kernel source tree and allows for loading and attaching BPF programs from LLVM generated ELF files into the kernel. The library is used by other kernel projects such as perf and bpftool.

  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/lib/bpf
* **bpftool**

  bpftool is the main tool for introspecting and debugging BPF programs and BPF maps, and like libbpf is developed by the Linux kernel community. It allows for dumping all active BPF programs and maps in the system, dumping and disassembling BPF or JITed BPF instructions from a program, as well as dumping and manipulating BPF maps in the system. bpftool supports interaction with the BPF filesystem, loading various program types from an object file into the kernel, and much more.

  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/bpf/bpftool

* **cilium/ebpf**

  ``cilium/ebpf`` (ebpf-go) is a pure Go library that provides utilities for loading, compiling, and debugging eBPF programs. It has minimal external dependencies and is intended to be used in long-running processes. Its ``bpf2go`` utility automates away compiling eBPF C programs and embedding them into Go binaries. It implements attaching programs to various kernel hooks, as well as kprobes and uprobes for tracing arbitrary kernel and user space functions. It also features a complete assembler that allows constructing eBPF programs at runtime using Go, or modifying them after they've been loaded from an ELF object.

  https://github.com/cilium/ebpf

* **ebpf_asm**

  ebpf_asm provides an assembler for BPF programs written in an Intel-like assembly syntax, and therefore offers an alternative for writing BPF programs directly in assembly for cases where programs are rather small and simple, without needing the clang / LLVM toolchain.

  https://github.com/Xilinx-CNS/ebpf_asm

XDP Newbies
-----------

There are a couple of walk-through posts by David S. Miller to the xdp-newbies mailing list (http://vger.kernel.org/vger-lists.html#xdp-newbies), which explain various parts of XDP and BPF:

4. May 2017, BPF Verifier Overview, David S. Miller, https://www.spinics.net/lists/xdp-newbies/msg00185.html

3. May 2017, Contextually speaking..., David S. Miller, https://www.spinics.net/lists/xdp-newbies/msg00181.html

2. May 2017, bpf.h and you..., David S. Miller, https://www.spinics.net/lists/xdp-newbies/msg00179.html

1. Apr 2017, XDP example of the day, David S. Miller, https://www.spinics.net/lists/xdp-newbies/msg00009.html

BPF Newsletter
--------------

Alexander Alemayhu initiated a newsletter around BPF, roughly once per week, covering the latest developments around BPF in Linux kernel land and its surrounding ecosystem in user space. All BPF update newsletters (01 - 12) can be found here:

https://cilium.io/blog/categories/technology/5/

And for news on the latest resources and developments in the eBPF world, please refer to the link here:

https://ebpf.io/blog

Podcasts
--------

There have been a number of technical podcasts partially covering BPF. An incomplete list:

5. Feb 2017, Linux Networking Update from Netdev Conference, Thomas Graf, Software Gone Wild, Show 71, https://blog.ipspace.net/2017/02/linux-networking-update-from-netdev.html https://www.ipspace.net/nuggets/podcast/Show_71-NetDev_Update.mp3

4. Jan 2017, The IO Visor Project, Brenden Blanco, OVS Orbit, Episode 23, https://ovsorbit.org/#e23 https://ovsorbit.org/episode-23.mp3

3. Oct 2016, Fast Linux Packet Forwarding, Thomas Graf, Software Gone Wild, Show 64, https://blog.ipspace.net/2016/10/fast-linux-packet-forwarding-with.html https://www.ipspace.net/nuggets/podcast/Show_64-Cilium_with_Thomas_Graf.mp3

2. Aug 2016, P4 on the Edge, John Fastabend, OVS Orbit, Episode 11, https://ovsorbit.org/#e11 https://ovsorbit.org/episode-11.mp3

1. May 2016, Cilium, Thomas Graf, OVS Orbit, Episode 4, https://ovsorbit.org/#e4 https://ovsorbit.org/episode-4.mp3

Blog posts
----------

The following (incomplete) list includes blog posts around BPF, XDP and related projects:

34. May 2017, An entertaining eBPF XDP adventure, Suchakra Sharma, https://suchakra.wordpress.com/2017/05/23/an-entertaining-ebpf-xdp-adventure/

33. May 2017, eBPF, part 2: Syscall and Map Types, Ferris Ellis, https://ferrisellis.com/posts/ebpf_syscall_and_maps/

32. May 2017, Monitoring the Control Plane, Gary Berger, https://www.firstclassfunc.com/2018/07/monitoring-the-control-plane/

31. Apr 2017, USENIX/LISA 2016 Linux bcc/BPF Tools, Brendan Gregg, http://www.brendangregg.com/blog/2017-04-29/usenix-lisa-2016-bcc-bpf-tools.html

30. Apr 2017, Liveblog: Cilium for Network and Application Security with BPF and XDP, Scott Lowe, https://blog.scottlowe.org/2017/04/18/black-belt-cilium/

29. Apr 2017, eBPF, part 1: Past, Present, and Future, Ferris Ellis, https://ferrisellis.com/posts/ebpf_past_present_future/

28. Mar 2017, Analyzing KVM Hypercalls with eBPF Tracing, Suchakra Sharma, https://suchakra.wordpress.com/2017/03/31/analyzing-kvm-hypercalls-with-ebpf-tracing/

27. Jan 2017, Golang bcc/BPF Function Tracing, Brendan Gregg, http://www.brendangregg.com/blog/2017-01-31/golang-bcc-bpf-function-tracing.html

26. Dec 2016, Give me 15 minutes and I'll change your view of Linux tracing, Brendan Gregg, http://www.brendangregg.com/blog/2016-12-27/linux-tracing-in-15-minutes.html
25. Nov 2016, Cilium: Networking and security for containers with BPF and
    XDP, Daniel Borkmann,
    https://opensource.googleblog.com/2016/11/cilium-networking-and-security.html

24. Nov 2016, Linux bcc/BPF tcplife: TCP Lifespans, Brendan Gregg,
    http://www.brendangregg.com/blog/2016-11-30/linux-bcc-tcplife.html

23. Oct 2016, DTrace for Linux 2016, Brendan Gregg,
    http://www.brendangregg.com/blog/2016-10-27/dtrace-for-linux-2016.html

22. Oct 2016, Linux 4.9's Efficient BPF-based Profiler, Brendan Gregg,
    http://www.brendangregg.com/blog/2016-10-21/linux-efficient-profiler.html

21. Oct 2016, Linux bcc tcptop, Brendan Gregg,
    http://www.brendangregg.com/blog/2016-10-15/linux-bcc-tcptop.html

20. Oct 2016, Linux bcc/BPF Node.js USDT Tracing, Brendan Gregg,
    http://www.brendangregg.com/blog/2016-10-12/linux-bcc-nodejs-usdt.html

19. Oct 2016, Linux bcc/BPF Run Queue (Scheduler) Latency, Brendan Gregg,
    http://www.brendangregg.com/blog/2016-10-08/linux-bcc-runqlat.html

18. Oct 2016, Linux bcc ext4 Latency Tracing, Brendan Gregg,
    http://www.brendangregg.com/blog/2016-10-06/linux-bcc-ext4dist-ext4slower.html

17. Oct 2016, Linux MySQL Slow Query Tracing with bcc/BPF, Brendan Gregg,
    http://www.brendangregg.com/blog/2016-10-04/linux-bcc-mysqld-qslower.html

16. Oct 2016, Linux bcc Tracing Security Capabilities, Brendan Gregg,
    http://www.brendangregg.com/blog/2016-10-01/linux-bcc-security-capabilities.html
15. Sep 2016, Suricata bypass feature, Eric Leblond,
    https://www.stamus-networks.com/blog/2016/09/28/suricata-bypass-feature

14. Aug 2016, Introducing the p0f BPF compiler, Gilberto Bertin,
    https://blog.cloudflare.com/introducing-the-p0f-bpf-compiler/

13. Jun 2016, Ubuntu Xenial bcc/BPF, Brendan Gregg,
    http://www.brendangregg.com/blog/2016-06-14/ubuntu-xenial-bcc-bpf.html

12. Mar 2016, Linux BPF/bcc Road Ahead, March 2016, Brendan Gregg,
    http://www.brendangregg.com/blog/2016-03-28/linux-bpf-bcc-road-ahead-2016.html

11. Mar 2016, Linux BPF Superpowers, Brendan Gregg,
    http://www.brendangregg.com/blog/2016-03-05/linux-bpf-superpowers.html

10. Feb 2016, Linux eBPF/bcc uprobes, Brendan Gregg,
    http://www.brendangregg.com/blog/2016-02-08/linux-ebpf-bcc-uprobes.html

9. Feb 2016, Who is waking the waker? (Linux chain graph prototype),
   Brendan Gregg,
   http://www.brendangregg.com/blog/2016-02-05/ebpf-chaingraph-prototype.html

8. Feb 2016, Linux Wakeup and Off-Wake Profiling, Brendan Gregg,
   http://www.brendangregg.com/blog/2016-02-01/linux-wakeup-offwake-profiling.html

7. Jan 2016, Linux eBPF Off-CPU Flame Graph, Brendan Gregg,
   http://www.brendangregg.com/blog/2016-01-20/ebpf-offcpu-flame-graph.html

6. Jan 2016, Linux eBPF Stack Trace Hack, Brendan Gregg,
   http://www.brendangregg.com/blog/2016-01-18/ebpf-stack-trace-hack.html

1. Sep 2015, Linux Networking, Tracing and IO Visor, a New Systems
   Performance Tool for a Distributed World, Suchakra Sharma,
   https://thenewstack.io/comparing-dtrace-iovisor-new-systems-performance-platform-advance-linux-networking-virtualization/

5. Aug 2015, BPF Internals - II, Suchakra Sharma,
   https://suchakra.wordpress.com/2015/08/12/bpf-internals-ii/

4. May 2015, eBPF: One Small Step, Brendan Gregg,
   http://www.brendangregg.com/blog/2015-05-15/ebpf-one-small-step.html

3. May 2015, BPF Internals - I, Suchakra Sharma,
   https://suchakra.wordpress.com/2015/05/18/bpf-internals-i/
2. Jul 2014, Introducing the BPF Tools, Marek Majkowski,
   https://blog.cloudflare.com/introducing-the-bpf-tools/

1. May 2014, BPF - the forgotten bytecode, Marek Majkowski,
   https://blog.cloudflare.com/bpf-the-forgotten-bytecode/

Books
-----

BPF Performance Tools (Gregg, Addison Wesley, 2019)

Talks
-----

The following (incomplete) list includes talks and conference papers related
to BPF and XDP:

46. July 2021, eBPF & Cilium Office Hours episode 13: XDP Hands-on Tutorial,
    with Liz Rice, https://www.youtube.com/watch?v=YUI78vC4qSQ&t=300s

45. June 2021, eBPF & Cilium Office Hours episode 9: XDP and Load Balancing,
    with Daniel Borkmann, https://www.youtube.com/watch?v=OIyPm6K4ooY&t=308s

44. May 2017, PyCon 2017, Portland, Executing python functions in the linux
    kernel by transpiling to bpf, Alex Gartrell,
    https://www.youtube.com/watch?v=CpqMroMBGP4

43. May 2017, gluecon 2017, Denver, Cilium + BPF: Least Privilege Security on
    API Call Level for Microservices, Dan Wendlandt, http://gluecon.com/#agenda

42. May 2017, Lund Linux Con, Lund, XDP - eXpress Data Path, Jesper Dangaard
    Brouer,
    http://people.netfilter.org/hawk/presentations/LLC2017/XDP_DDoS_protecting_LLC2017.pdf

41. May 2017, Polytechnique Montreal, Trace Aggregation and Collection with
    eBPF, Suchakra Sharma,
    https://hsdm.dorsal.polymtl.ca/system/files/eBPF-5May2017%20(1).pdf

40. Apr 2017, DockerCon, Austin, Cilium - Network and Application Security
    with BPF and XDP, Thomas Graf,
    https://www.slideshare.net/ThomasGraf5/dockercon-2017-cilium-network-and-application-security-with-bpf-and-xdp

39. Apr 2017, NetDev 2.1, Montreal, XDP Mythbusters, David S. Miller,
    https://netdevconf.info/2.1/slides/apr7/miller-XDP-MythBusters.pdf

38. Apr 2017, NetDev 2.1, Montreal, Droplet: DDoS countermeasures powered by
    BPF + XDP, Huapeng Zhou, Doug Porter, Ryan Tierney, Nikita Shirokov,
    https://netdevconf.info/2.1/slides/apr6/zhou-netdev-xdp-2017.pdf
37. Apr 2017, NetDev 2.1, Montreal, XDP in practice: integrating XDP in our
    DDoS mitigation pipeline, Gilberto Bertin,
    https://netdevconf.info/2.1/slides/apr6/bertin_Netdev-XDP.pdf

36. Apr 2017, NetDev 2.1, Montreal, XDP for the Rest of Us, Andy Gospodarek,
    Jesper Dangaard Brouer,
    https://netdevconf.info/2.1/slides/apr7/gospodarek-Netdev2.1-XDP-for-the-Rest-of-Us_Final.pdf

35. Mar 2017, SCALE15x, Pasadena, Linux 4.x Tracing: Performance Analysis
    with bcc/BPF, Brendan Gregg,
    https://www.slideshare.net/brendangregg/linux-4x-tracing-performance-analysis-with-bccbpf

34. Mar 2017, XDP Inside and Out, David S. Miller,
    https://raw.githubusercontent.com/iovisor/bpf-docs/master/XDP_Inside_and_Out.pdf
33. Mar 2017, OpenSourceDays, Copenhagen, XDP - eXpress Data Path, Used for
    DDoS protection, Jesper Dangaard Brouer,
    http://people.netfilter.org/hawk/presentations/OpenSourceDays2017/XDP_DDoS_protecting_osd2017.pdf

32. Mar 2017, source{d}, Infrastructure 2017, Madrid, High-performance Linux
    monitoring with eBPF, Alfonso Acosta,
    https://www.youtube.com/watch?v=k4jqTLtdrxQ

31. Feb 2017, FOSDEM 2017, Brussels, Stateful packet processing with eBPF, an
    implementation of OpenState interface, Quentin Monnet,
    https://archive.fosdem.org/2017/schedule/event/stateful_ebpf/

30. Feb 2017, FOSDEM 2017, Brussels, eBPF and XDP walkthrough and recent
    updates, Daniel Borkmann, http://borkmann.ch/talks/2017_fosdem.pdf

29. Feb 2017, FOSDEM 2017, Brussels, Cilium - BPF & XDP for containers,
    Thomas Graf, https://archive.fosdem.org/2017/schedule/event/cilium/

28. Jan 2017, linuxconf.au, Hobart, BPF: Tracing and more, Brendan Gregg,
    https://www.slideshare.net/brendangregg/bpf-tracing-and-more

27. Dec 2016, USENIX LISA 2016, Boston, Linux 4.x Tracing Tools: Using BPF
    Superpowers, Brendan Gregg,
    https://www.slideshare.net/brendangregg/linux-4x-tracing-tools-using-bpf-superpowers

26. Nov 2016, Linux Plumbers, Santa Fe, Cilium: Networking & Security for
    Containers with BPF & XDP, Thomas Graf,
    https://www.slideshare.net/ThomasGraf5/clium-container-networking-with-bpf-xdp
25. Nov 2016, OVS Conference, Santa Clara, Offloading OVS Flow Processing
    using eBPF, William (Cheng-Chun) Tu,
    http://www.openvswitch.org/support/ovscon2016/7/1120-tu.pdf

24. Oct 2016, One.com, Copenhagen, XDP - eXpress Data Path, Intro and future
    use-cases, Jesper Dangaard Brouer,
    http://people.netfilter.org/hawk/presentations/xdp2016/xdp_intro_and_use_cases_sep2016.pdf

23. Oct 2016, Docker Distributed Systems Summit, Berlin, Cilium: Networking &
    Security for Containers with BPF & XDP, Thomas Graf,
    https://www.slideshare.net/Docker/cilium-bpf-xdp-for-containers-66969823

22. Oct 2016, NetDev 1.2, Tokyo, Data center networking stack, Tom Herbert,
    https://netdevconf.info/1.2/session.html?tom-herbert

21. Oct 2016, NetDev 1.2, Tokyo, Fast Programmable Networks & Encapsulated
    Protocols, David S. Miller,
    https://netdevconf.info/1.2/session.html?david-miller-keynote

20. Oct 2016, NetDev 1.2, Tokyo, XDP workshop - Introduction, experience, and
    future development, Tom Herbert,
    https://netdevconf.info/1.2/session.html?herbert-xdp-workshop

19. Oct 2016, NetDev1.2, Tokyo, The adventures of a Suricate in eBPF land,
    Eric Leblond, https://netdevconf.info/1.2/slides/oct6/10_suricata_ebpf.pdf

18. Oct 2016, NetDev1.2, Tokyo, cls_bpf/eBPF updates since netdev 1.1, Daniel
    Borkmann, http://borkmann.ch/talks/2016_tcws.pdf

17. Oct 2016, NetDev1.2, Tokyo, Advanced programmability and recent updates
    with tc's cls_bpf, Daniel Borkmann,
    http://borkmann.ch/talks/2016_netdev2.pdf
    https://netdevconf.info/1.2/papers/borkmann.pdf

16. Oct 2016, NetDev 1.2, Tokyo, eBPF/XDP hardware offload to SmartNICs,
    Jakub Kicinski, Nic Viljoen,
    https://netdevconf.info/1.2/papers/eBPF_HW_OFFLOAD.pdf

15. Aug 2016, LinuxCon, Toronto, What Can BPF Do For You?, Brenden Blanco,
    https://events.static.linuxfound.org/sites/events/files/slides/iovisor-lc-bof-2016.pdf
14. Aug 2016, LinuxCon, Toronto, Cilium - Fast IPv6 Container Networking with
    BPF and XDP, Thomas Graf,
    https://www.slideshare.net/ThomasGraf5/cilium-fast-ipv6-container-networking-with-bpf-and-xdp

13. Aug 2016, P4, EBPF and Linux TC Offload, Dinan Gunawardena, Jakub
    Kicinski, https://de.slideshare.net/Open-NFP/p4-epbf-and-linux-tc-offload

12. Jul 2016, Linux Meetup, Santa Clara, eXpress Data Path, Brenden Blanco,
    https://www.slideshare.net/IOVisor/express-data-path-linux-meetup-santa-clara-july-2016

11. Jul 2016, Linux Meetup, Santa Clara, CETH for XDP, Yan Chan, Yunsong Lu,
    https://www.slideshare.net/IOVisor/ceth-for-xdp-linux-meetup-santa-clara-july-2016

10. May 2016, P4 workshop, Stanford, P4 on the Edge, John Fastabend,
    https://schd.ws/hosted_files/2016p4workshop/1d/Intel%20Fastabend-P4%20on%20the%20Edge.pdf

9. Mar 2016, Performance @Scale 2016, Menlo Park, Linux BPF Superpowers,
   Brendan Gregg,
   https://www.slideshare.net/brendangregg/linux-bpf-superpowers

8. Mar 2016, eXpress Data Path, Tom Herbert, Alexei Starovoitov,
   https://raw.githubusercontent.com/iovisor/bpf-docs/master/Express_Data_Path.pdf

7. Feb 2016, NetDev1.1, Seville, On getting tc classifier fully programmable
   with cls_bpf, Daniel Borkmann, http://borkmann.ch/talks/2016_netdev.pdf
   https://netdevconf.info/1.1/proceedings/papers/On-getting-tc-classifier-fully-programmable-with-cls-bpf.pdf

6. Jan 2016, FOSDEM 2016, Brussels, Linux tc and eBPF, Daniel Borkmann,
   http://borkmann.ch/talks/2016_fosdem.pdf

5. Oct 2015, LinuxCon Europe, Dublin, eBPF on the Mainframe, Michael Holzheu,
   https://events.static.linuxfound.org/sites/events/files/slides/ebpf_on_the_mainframe_lcon_2015.pdf

4. Aug 2015, Tracing Summit, Seattle, LTTng's Trace Filtering and beyond
   (with some eBPF goodness, of course!), Suchakra Sharma,
   https://raw.githubusercontent.com/iovisor/bpf-docs/master/ebpf_excerpt_20Aug2015.pdf
3. Jun 2015, LinuxCon Japan, Tokyo, Exciting Developments in Linux Tracing,
   Elena Zannoni,
   https://events.static.linuxfound.org/sites/events/files/slides/tracing-linux-ezannoni-linuxcon-ja-2015_0.pdf

2. Feb 2015, Collaboration Summit, Santa Rosa, BPF: In-kernel Virtual
   Machine, Alexei Starovoitov,
   https://events.static.linuxfound.org/sites/events/files/slides/bpf_collabsummit_2015feb20.pdf

1. Feb 2015, NetDev 0.1, Ottawa, BPF: In-kernel Virtual Machine, Alexei
   Starovoitov, https://netdevconf.info/0.1/sessions/15.html

0. Feb 2014, DevConf.cz, Brno, tc and cls_bpf: lightweight packet classifying
   with BPF, Daniel Borkmann, http://borkmann.ch/talks/2014_devconf.pdf

Further Documents
-----------------

- Dive into BPF: a list of reading material, Quentin Monnet
  (https://qmonnet.github.io/whirl-offload/2016/09/01/dive-into-bpf/)

- XDP - eXpress Data Path, Jesper Dangaard Brouer
  (https://prototype-kernel.readthedocs.io/en/latest/networking/XDP/index.html)
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _bpf_architect:

BPF Architecture
================

BPF does not define itself by only providing its instruction set, but also by
offering further infrastructure around it, such as maps which act as efficient
key / value stores, helper functions to interact with and leverage kernel
functionality, tail calls for calling into other BPF programs, security
hardening primitives, a pseudo file system for pinning objects (maps,
programs), and infrastructure for allowing BPF to be offloaded, for example,
to a network card.

LLVM provides a BPF back end, so that tools like clang can be used to compile
C into a BPF object file, which can then be loaded into the kernel. BPF is
deeply tied to the Linux kernel and allows for full programmability without
sacrificing native kernel performance.

Last but not least, the kernel subsystems making use of BPF are also part of
BPF's infrastructure. The two main subsystems discussed throughout this
document are tc and XDP, to which BPF programs can be attached. XDP BPF
programs are attached at the earliest networking driver stage and trigger a
run of the BPF program upon packet reception. By definition, this achieves the
best possible packet processing performance, since packets cannot get
processed at an even earlier point in software. However, since this processing
occurs so early in the networking stack, the stack has not yet extracted
metadata out of the packet. On the other hand, tc BPF programs are executed
later in the kernel stack, so they have access to more metadata and core
kernel functionality. Apart from tc and XDP programs, various other kernel
subsystems use BPF, such as tracing (via kprobes, uprobes and tracepoints, for
example).

The following subsections provide further details on individual aspects of
the BPF architecture.
Instruction Set
---------------

BPF is a general purpose RISC instruction set and was originally designed for
the purpose of writing programs in a subset of C which can be compiled into
BPF instructions through a compiler back end (e.g. LLVM), so that the kernel
can later on map them through an in-kernel JIT compiler into native opcodes
for optimal execution performance inside the kernel.

The advantages of pushing these instructions into the kernel include:

* Making the kernel programmable without having to cross kernel / user space
  boundaries. For example, BPF programs related to networking, as in the case
  of Cilium, can implement flexible container policies, load balancing and
  other means without having to move packets to user space and back into the
  kernel. State between BPF programs and kernel / user space can still be
  shared through maps whenever needed.

* Given the flexibility of a programmable data path, programs can be heavily
  optimized for performance also by compiling out features that are not
  required for the use cases the program solves. For example, if a container
  does not require IPv4, then the BPF program can be built to only deal with
  IPv6 in order to save resources in the fast-path.

* In the case of networking (e.g. tc and XDP), BPF programs can be updated
  atomically without having to restart the kernel, system services or
  containers, and without traffic interruptions. Furthermore, any program
  state can also be maintained throughout updates via BPF maps.
* BPF provides a stable ABI towards user space, and does not require any third
  party kernel modules. BPF is a core part of the Linux kernel that is shipped
  everywhere, and guarantees that existing BPF programs keep running with
  newer kernel versions. This guarantee is the same guarantee that the kernel
  provides for system calls with regard to user space applications. Moreover,
  BPF programs are portable across different architectures.

* BPF programs work in concert with the kernel: they make use of existing
  kernel infrastructure (e.g. drivers, netdevices, tunnels, protocol stack,
  sockets) and tooling (e.g. iproute2) as well as the safety guarantees which
  the kernel provides. Unlike kernel modules, BPF programs are verified
  through an in-kernel verifier in order to ensure that they cannot crash the
  kernel, always terminate, etc. XDP programs, for example, reuse the existing
  in-kernel drivers and operate on the provided DMA buffers containing the
  packet frames without exposing them or an entire driver to user space as in
  other models. Moreover, XDP programs reuse the existing stack instead of
  bypassing it. BPF can be considered a generic "glue code" between kernel
  facilities for crafting programs to solve specific use cases.

The execution of a BPF program inside the kernel is always event-driven!
Examples:

* A networking device which has a BPF program attached on its ingress path
  will trigger the execution of the program once a packet is received.

* A kernel address which has a kprobe with a BPF program attached will trap
  once the code at that address gets executed, which will then invoke the
  kprobe's callback function for instrumentation, subsequently triggering the
  execution of the attached BPF program.

BPF consists of eleven 64 bit registers with 32 bit subregisters, a program
counter and a 512 byte large BPF stack space. Registers are named ``r0`` -
``r10``.
The operating mode is 64 bit by default; the 32 bit subregisters can only be
accessed through special ALU (arithmetic logic unit) operations. The 32 bit
lower subregisters zero-extend into 64 bit when they are being written to.

Register ``r10`` is the only register which is read-only and contains the
frame pointer address in order to access the BPF stack space. The remaining
``r0`` - ``r9`` registers are general purpose and of read/write nature. A BPF
program can call into a predefined helper function, which is defined by the
core kernel (never by modules). The BPF calling convention is defined as
follows:

* ``r0`` contains the return value of a helper function call.
* ``r1`` - ``r5`` hold arguments from the BPF program to the kernel helper
  function.
* ``r6`` - ``r9`` are callee saved registers that will be preserved on helper
  function call.

The BPF calling convention is generic enough to map directly to ``x86_64``,
``arm64`` and other ABIs, thus all BPF registers map one to one to HW CPU
registers, so that a JIT only needs to issue a call instruction, but no
additional extra moves for placing function arguments. This calling
convention was modeled to cover common call situations without having a
performance penalty. Calls with 6 or more arguments are currently not
supported. The helper functions in the kernel which are dedicated to BPF
(``BPF_CALL_0()`` to ``BPF_CALL_5()`` functions) are specifically designed
with this convention in mind.

Register ``r0`` is also the register containing the exit value for the BPF
program. The semantics of the exit value are defined by the type of program.
Furthermore, when handing execution back to the kernel, the exit value is
passed as a 32 bit value.
Registers ``r1`` - ``r5`` are scratch registers, meaning the BPF program
needs to either spill them to the BPF stack or move them to callee saved
registers if these arguments are to be reused across multiple helper function
calls. Spilling means that the variable in the register is moved to the BPF
stack. The reverse operation of moving the variable from the BPF stack to the
register is called filling. The reason for spilling/filling is the limited
number of registers.

Upon entering execution of a BPF program, register ``r1`` initially contains
the context for the program. The context is the input argument for the
program (similar to the ``argc/argv`` pair for a typical C program). BPF is
restricted to work on a single context. The context is defined by the program
type, for example, a networking program can have a kernel representation of
the network packet (``skb``) as the input argument.

The general operation of BPF is 64 bit to follow the natural model of 64 bit
architectures in order to perform pointer arithmetics, pass pointers but also
pass 64 bit values into helper functions, and to allow for 64 bit atomic
operations.

The maximum instruction limit per program is restricted to 4096 BPF
instructions, which, by design, means that any program will terminate
quickly. For kernels newer than 5.1 this limit was lifted to 1 million BPF
instructions. Although the instruction set contains forward as well as
backward jumps, the in-kernel BPF verifier will forbid loops so that
termination is always guaranteed. Since BPF programs run inside the kernel,
the verifier's job is to make sure that these are safe to run, not affecting
the system's stability.
This means that, from an instruction set point of view, loops can be
implemented, but the verifier will restrict that. However, there is also a
concept of tail calls that allows one BPF program to jump into another one.
This, too, comes with an upper nesting limit of 33 calls, and is usually used
to decouple parts of the program logic, for example, into stages.

The instruction format is modeled as two operand instructions, which helps
mapping BPF instructions to native instructions during the JIT phase. The
instruction set is of fixed size, meaning every instruction has 64 bit
encoding. Currently, 87 instructions have been implemented and the encoding
also allows to extend the set with further instructions when needed. The
instruction encoding of a single 64 bit instruction on a big-endian machine
is defined as a bit sequence from most significant bit (MSB) to least
significant bit (LSB) of ``op:8``, ``dst_reg:4``, ``src_reg:4``, ``off:16``,
``imm:32``. ``off`` and ``imm`` are of signed type. The encodings are part of
the kernel headers and defined in the ``linux/bpf.h`` header, which also
includes ``linux/bpf_common.h``.

``op`` defines the actual operation to be performed. Most of the encoding for
``op`` has been reused from cBPF. The operation can be based on register or
immediate operands. The encoding of ``op`` itself provides information on
which mode to use (``BPF_X`` for denoting register-based operations, and
``BPF_K`` for immediate-based operations respectively). In the latter case,
the destination operand is always a register. Both ``dst_reg`` and
``src_reg`` provide additional information about the register operands to be
used (e.g. ``r0`` - ``r9``) for the operation.
``off`` is used in some instructions to provide a relative offset, for
example, for addressing the stack or other buffers available to BPF (e.g. map
values, packet data, etc.), or jump targets in jump instructions. ``imm``
contains a constant / immediate value.

The available ``op`` instructions can be categorized into various instruction
classes. These classes are also encoded inside the ``op`` field. The ``op``
field is divided into (from MSB to LSB) ``code:4``, ``source:1`` and
``class:3``. ``class`` is the more generic instruction class, ``code``
denotes a specific operational code inside that class, and ``source`` tells
whether the source operand is a register or an immediate value. Possible
instruction classes include:

* ``BPF_LD``, ``BPF_LDX``: Both classes are for load operations. ``BPF_LD``
  is used for loading a double word as a special instruction spanning two
  instructions due to the ``imm:32`` split, and for byte / half-word / word
  loads of packet data. The latter was carried over from cBPF mainly in order
  to keep cBPF to BPF translations efficient, since they have optimized JIT
  code. For native BPF these packet load instructions are less relevant
  nowadays. The ``BPF_LDX`` class holds instructions for byte / half-word /
  word / double-word loads out of memory. Memory in this context is generic
  and could be stack memory, map value data, packet data, etc.

* ``BPF_ST``, ``BPF_STX``: Both classes are for store operations. Similar to
  ``BPF_LDX``, ``BPF_STX`` is the store counterpart and is used to store the
  data from a register into memory, which, again, can be stack memory, map
  value, packet data, etc.
  ``BPF_STX`` also holds special instructions for performing word and
  double-word based atomic add operations, which can be used for counters,
  for example. The ``BPF_ST`` class is similar to ``BPF_STX`` by providing
  instructions for storing data into memory, only that the source operand is
  an immediate value.

* ``BPF_ALU``, ``BPF_ALU64``: Both classes contain ALU operations. Generally,
  ``BPF_ALU`` operations are in 32 bit mode and ``BPF_ALU64`` in 64 bit mode.
  Both ALU classes have basic operations with a register-based source operand
  as well as an immediate-based counterpart. Supported by both are add
  (``+``), sub (``-``), and (``&``), or (``|``), left shift (``<<``), right
  shift (``>>``), xor (``^``), mul (``*``), div (``/``), mod (``%``) and neg
  (``~``) operations. Also mov (``:=``) was added as a special ALU operation
  for both classes in both operand modes. ``BPF_ALU64`` also contains a
  signed right shift. ``BPF_ALU`` additionally contains endianness conversion
  instructions for half-word / word / double-word on a given source register.

* ``BPF_JMP``: This class is dedicated to jump operations. Jumps can be
  unconditional and conditional. Unconditional jumps simply move the program
  counter forward, so that the next instruction to be executed relative to
  the current instruction is ``off + 1``, where ``off`` is the constant
  offset encoded in the instruction. Since ``off`` is signed, the jump can
  also be performed backwards as long as it does not create a loop and is
  within program bounds. Conditional jumps operate on both register-based and
  immediate-based source operands. If the condition in the jump operation
  results in ``true``, then a relative jump to ``off + 1`` is performed,
  otherwise the next instruction (``0 + 1``) is performed. This fall-through
  jump logic differs compared to cBPF and allows for better branch prediction
  as it fits the CPU branch predictor logic more naturally.
  Available conditions are jeq (``==``), jne (``!=``), jgt (``>``), jge
  (``>=``),
  jsgt (signed ``>``), jsge (signed ``>=``), jlt (``<``), jle (``<=``), jslt
  (signed ``<``), jsle (signed ``<=``) and jset (jump if ``DST & SRC``).
  Apart from that, there are three special jump operations within this class:
  the exit instruction, which will leave the BPF program and return the
  current value in ``r0`` as a return code, the call instruction, which will
  issue a function call into one of the available BPF helper functions, and a
  hidden tail call instruction, which will jump into a different BPF program.

The Linux kernel is shipped with a BPF interpreter which executes programs
assembled in BPF instructions. Even cBPF programs are translated into eBPF
programs transparently in the kernel, except for architectures that still
ship with a cBPF JIT and have not yet migrated to an eBPF JIT. Currently, the
``x86_64``, ``arm64``, ``ppc64``, ``s390x``, ``mips64``, ``sparc64`` and
``arm`` architectures come with an in-kernel eBPF JIT compiler.

All BPF handling, such as loading programs into the kernel or creating BPF
maps, is managed through a central ``bpf()`` system call. It is also used for
managing map entries (lookup / update / delete), and for making programs as
well as maps persistent in the BPF file system through pinning.

Helper Functions
----------------

Helper functions are a concept which enables BPF programs to consult a core
kernel defined set of function calls in order to retrieve data from, or push
data to, the kernel. Available helper functions may differ for each BPF
program type; for example, BPF programs attached to sockets are only allowed
to call into a subset of helpers compared to BPF programs attached to the tc
layer.
Encapsulation and decapsulation helpers for lightweight tunneling constitute
an example of functions which are only available to lower tc layers, whereas
event output helpers for pushing notifications to user space are available to
tc and XDP programs.

Each helper function is implemented with a commonly shared function signature
similar to system calls. The signature is defined as:

.. code-block:: c

    u64 fn(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)

The calling convention as described in the previous section applies to all
BPF helper functions.

The kernel abstracts helper functions into macros ``BPF_CALL_0()`` to
``BPF_CALL_5()`` which are similar to those of system calls. The following
example is an extract from a helper function which updates map elements by
calling into the corresponding map implementation callbacks:

.. code-block:: c

    BPF_CALL_4(bpf_map_update_elem, struct bpf_map *, map, void *, key,
               void *, value, u64, flags)
    {
        WARN_ON_ONCE(!rcu_read_lock_held());
        return map->ops->map_update_elem(map, key, value, flags);
    }

    const struct bpf_func_proto bpf_map_update_elem_proto = {
        .func      = bpf_map_update_elem,
        .gpl_only  = false,
        .ret_type  = RET_INTEGER,
        .arg1_type = ARG_CONST_MAP_PTR,
        .arg2_type = ARG_PTR_TO_MAP_KEY,
        .arg3_type = ARG_PTR_TO_MAP_VALUE,
        .arg4_type = ARG_ANYTHING,
    };

There are various advantages of this approach: while cBPF overloaded its
load instructions in order to fetch data at an impossible packet offset to
invoke auxiliary helper functions, each cBPF JIT needed to implement support
for such a cBPF extension. In case of eBPF, each newly added helper function
will be JIT compiled in a transparent and efficient way, meaning that the
JIT compiler only needs to emit a call instruction since the register
mapping is made in such a way that BPF register assignments already match
the underlying architecture's calling convention. This allows for easily
extending the core kernel
https://github.com/cilium/cilium/blob/main//Documentation/reference-guides/bpf/architecture.rst
with new helper functionality. All BPF helper functions are part of the core
kernel and cannot be extended or added through kernel modules.

The aforementioned function signature also allows the verifier to perform
type checks. The above ``struct bpf_func_proto`` is used to hand all the
necessary information that needs to be known about the helper to the
verifier, so that the verifier can make sure that the expected types from
the helper match the current contents of the BPF program's analyzed
registers.

Argument types can range from passing in any kind of value up to restricted
contents such as a pointer / size pair for the BPF stack buffer, which the
helper should read from or write to. In the latter case, the verifier can
also perform additional checks, for example, whether the buffer was
previously initialized.

The list of available BPF helper functions is rather long and constantly
growing, for example, at the time of this writing, tc BPF programs can
choose from 38 different BPF helpers. The kernel's
``struct bpf_verifier_ops`` contains a ``get_func_proto`` callback function
that provides the mapping of a specific ``enum bpf_func_id`` to one of the
available helpers for a given BPF program type.

Maps
----

.. image:: /images/bpf_map.png
    :align: center

Maps are efficient key / value stores that reside in kernel space. They can
be accessed from a BPF program in order to keep state among multiple BPF
program invocations. They can also be accessed through file descriptors from
user space and can be arbitrarily shared with other BPF programs or user
space applications.
BPF programs which share maps with each other are not required to be of the
same program type, for example, tracing programs can share maps with
networking programs. A single BPF program can currently access up to 64
different maps directly.

Map implementations are provided by the core kernel. There are generic maps
with per-CPU and non-per-CPU flavor that can read / write arbitrary data,
but there are also a few non-generic maps that are used along with helper
functions.

Generic maps currently available are ``BPF_MAP_TYPE_HASH``,
``BPF_MAP_TYPE_ARRAY``, ``BPF_MAP_TYPE_PERCPU_HASH``,
``BPF_MAP_TYPE_PERCPU_ARRAY``, ``BPF_MAP_TYPE_LRU_HASH``,
``BPF_MAP_TYPE_LRU_PERCPU_HASH`` and ``BPF_MAP_TYPE_LPM_TRIE``. They all use
the same common set of BPF helper functions in order to perform lookup,
update or delete operations while implementing a different backend with
differing semantics and performance characteristics.

Non-generic maps that are currently in the kernel are
``BPF_MAP_TYPE_PROG_ARRAY``, ``BPF_MAP_TYPE_PERF_EVENT_ARRAY``,
``BPF_MAP_TYPE_CGROUP_ARRAY``, ``BPF_MAP_TYPE_STACK_TRACE``,
``BPF_MAP_TYPE_ARRAY_OF_MAPS``, ``BPF_MAP_TYPE_HASH_OF_MAPS``. For example,
``BPF_MAP_TYPE_PROG_ARRAY`` is an array map which holds other BPF programs,
``BPF_MAP_TYPE_ARRAY_OF_MAPS`` and ``BPF_MAP_TYPE_HASH_OF_MAPS`` both hold
pointers to other maps such that entire BPF maps can be atomically replaced
at runtime. These types of maps tackle a specific issue which was unsuitable
to be implemented solely through a BPF helper function since additional
(non-data) state is required to be held across BPF program invocations.

Object Pinning
--------------

.. image:: /images/bpf_fs.png
    :align: center

BPF maps and programs act as a kernel resource and can only be accessed
through file descriptors, backed by anonymous inodes in the kernel.
Advantages, but also a number of disadvantages come along with them: user
space applications can make use of most file descriptor related APIs, file
descriptor passing for Unix domain sockets works transparently, etc, but at
the same time, file descriptors are limited to a process's lifetime, which
makes options like map sharing rather cumbersome to carry out.

Thus, it brings a number of complications for certain use cases such as
iproute2, where tc or XDP sets up and loads the program into the kernel and
terminates itself eventually. With that, also access to maps is unavailable
from user space side, where it could otherwise be useful, for example, when
maps are shared between ingress and egress locations of the data path. Also,
third party applications may wish to monitor or update map contents during
BPF program runtime.

To overcome this limitation, a minimal kernel space BPF file system has been
implemented, where BPF maps and programs can be pinned to, a process called
object pinning. The BPF system call has therefore been extended with two new
commands which can pin (``BPF_OBJ_PIN``) or retrieve (``BPF_OBJ_GET``) a
previously pinned object. For instance, tools such as tc make use of this
infrastructure for sharing maps on ingress and egress. The BPF related file
system is not a singleton, it does support multiple mount instances, hard
and soft links, etc.

Tail Calls
----------

.. image:: /images/bpf_tailcall.png
    :align: center

Another concept that can be used with BPF is called tail calls. Tail calls
can be seen as a mechanism that allows one BPF program to call another,
without returning to the old program. Such a call has minimal overhead:
unlike function calls, it is implemented as a long jump, reusing the same
stack frame. Such programs are verified independently of each other, thus
for transferring state, either per-CPU maps as scratch buffers or, in case
of tc programs, ``skb`` fields such as the ``cb[]`` area must be used.
Only programs of the same type can be tail called, and they also need to
match in terms of JIT compilation, thus either JIT compiled or only
interpreted programs can be invoked, but not mixed together.

There are two components involved for carrying out tail calls: the first
part needs to set up a specialized map called program array
(``BPF_MAP_TYPE_PROG_ARRAY``) that can be populated by user space with key /
values, where values are the file descriptors of the tail called BPF
programs; the second part is a ``bpf_tail_call()`` helper to which the
context, a reference to the program array and the lookup key are passed.
Then the kernel inlines this helper call directly into a specialized BPF
instruction. Such a program array is currently write-only from user space
side.

The kernel looks up the related BPF program from the passed file descriptor
and atomically replaces program pointers at the given map slot. When no map
entry has been found at the provided key, the kernel will just "fall
through" and continue execution of the old program with the instructions
following after the ``bpf_tail_call()``. Tail calls are a powerful utility,
for example, parsing network headers could be structured through tail calls.
During runtime, functionality can be added or replaced atomically, and thus
alter the BPF program's execution behavior.

.. _bpf_to_bpf_calls:

BPF to BPF Calls
----------------

.. image:: /images/bpf_call.png
    :align: center

Aside from BPF helper calls and BPF tail calls, a more recent feature that
has been added to the BPF core infrastructure is BPF to BPF calls. Before
this feature was introduced into the kernel, a typical BPF C program had to
declare any reusable code that, for example, resides in
headers as ``always_inline`` such that when LLVM compiles and generates the
BPF object file all these functions were inlined and therefore duplicated
many times in the resulting object file, artificially inflating its code
size:

.. code-block:: c

    #include <linux/bpf.h>

    #ifndef __section
    # define __section(NAME)                  \
       __attribute__((section(NAME), used))
    #endif

    #ifndef __inline
    # define __inline                         \
       inline __attribute__((always_inline))
    #endif

    static __inline int foo(void)
    {
        return XDP_DROP;
    }

    __section("prog")
    int xdp_drop(struct xdp_md *ctx)
    {
        return foo();
    }

    char __license[] __section("license") = "GPL";

The main reason why this was necessary was due to lack of function call
support in the BPF program loader as well as verifier, interpreter and JITs.
Starting with Linux kernel 4.16 and LLVM 6.0 this restriction got lifted and
BPF programs no longer need to use ``always_inline`` everywhere. Thus, the
previously shown BPF example code can then be rewritten more naturally as:

.. code-block:: c

    #include <linux/bpf.h>

    #ifndef __section
    # define __section(NAME)                  \
       __attribute__((section(NAME), used))
    #endif

    static int foo(void)
    {
        return XDP_DROP;
    }

    __section("prog")
    int xdp_drop(struct xdp_md *ctx)
    {
        return foo();
    }

    char __license[] __section("license") = "GPL";

Mainstream BPF JIT compilers like ``x86_64`` and ``arm64`` support BPF to
BPF calls today with others following in the near future. BPF to BPF calls
are an important performance optimization since they heavily reduce the
generated BPF code size and therefore become friendlier to a CPU's
instruction cache.
The calling convention known from BPF helper functions applies to BPF to BPF
calls just as well, meaning ``r1`` up to ``r5`` are for passing arguments to
the callee and the result is returned in ``r0``. ``r1`` to ``r5`` are
scratch registers whereas ``r6`` to ``r9`` are preserved across calls the
usual way. The maximum number of nested calls, respectively allowed call
frames, is ``8``. A caller can pass pointers (e.g. to the caller's stack
frame) down to the callee, but never vice versa.

BPF JIT compilers emit separate images for each function body and later fix
up the function call addresses in the image in a final JIT pass. This has
proven to require minimal changes to the JITs in that they can treat BPF to
BPF calls as conventional BPF helper calls.

Up to kernel 5.9, BPF tail calls and BPF subprograms excluded each other.
BPF programs that utilized tail calls couldn't take the benefit of reducing
program image size and faster load times. Linux kernel 5.10 finally allows
users to bring the best of both worlds and adds the ability to combine the
BPF subprograms with tail calls.

This improvement comes with some restrictions, though. Mixing these two
features can cause a kernel stack overflow. To get an idea of what might
happen, see the picture below that illustrates the mix of bpf2bpf calls and
tail calls:

.. image:: /images/bpf_tailcall_subprograms.png
    :align: center

Before the actual jump to the target program, a tail call will unwind only
its current stack frame. As we can see in the example above, if a tail call
occurs from within a sub-function, the function's (func1) stack frame will
still be present on the stack when program execution is at func2. Once the
final function (func3) terminates, all the previous stack frames will be
unwound and control will get back to the caller of the BPF program.

The kernel introduced additional logic for detecting this feature
combination. There is a limit on
the stack size throughout the whole call chain down to 256 bytes per
subprogram (note that if the verifier detects a bpf2bpf call, then the main
function is treated as a sub-function as well). In total, with this
restriction, the BPF program's call chain can consume at most 8KB of stack
space. This limit comes from the 256 bytes per stack frame multiplied by the
tail call count limit (33). Without this, the BPF programs would operate on
a 512-byte stack size, yielding 16KB in total for the maximum count of tail
calls, which would overflow the stack on some architectures. One more thing
to mention is that this feature combination is currently supported only on
the x86-64 architecture.

JIT
---

.. image:: /images/bpf_jit.png
    :align: center

The 64 bit ``x86_64``, ``arm64``, ``ppc64``, ``s390x``, ``mips64``,
``sparc64`` and 32 bit ``arm``, ``x86_32`` architectures are all shipped
with an in-kernel eBPF JIT compiler, also all of them are feature equivalent
and can be enabled through:

.. code-block:: shell-session

    # echo 1 > /proc/sys/net/core/bpf_jit_enable

The 32 bit ``mips``, ``ppc`` and ``sparc`` architectures currently have a
cBPF JIT compiler. The mentioned architectures still having a cBPF JIT as
well as all remaining architectures supported by the Linux kernel which do
not have a BPF JIT compiler at all need to run eBPF programs through the
in-kernel interpreter.

In the kernel's source tree, eBPF JIT support can be easily determined
through issuing a grep for ``HAVE_EBPF_JIT``:

.. code-block:: shell-session

    # git grep HAVE_EBPF_JIT arch/
    arch/arm/Kconfig:       select HAVE_EBPF_JIT if !CPU_ENDIAN_BE32
    arch/arm64/Kconfig:     select HAVE_EBPF_JIT
    arch/powerpc/Kconfig:   select HAVE_EBPF_JIT if PPC64
    arch/mips/Kconfig:      select HAVE_EBPF_JIT if (64BIT && !CPU_MICROMIPS)
    arch/s390/Kconfig:      select HAVE_EBPF_JIT if PACK_STACK && HAVE_MARCH_Z196_FEATURES
    arch/sparc/Kconfig:     select HAVE_EBPF_JIT if SPARC64
    arch/x86/Kconfig:       select HAVE_EBPF_JIT if X86_64

JIT compilers speed up execution of the BPF program significantly since they
reduce the per instruction cost compared to the interpreter. Often
instructions can be mapped 1:1 with native instructions of the underlying
architecture. This also reduces the resulting executable image size and is
therefore more instruction cache friendly to the CPU. In particular in case
of CISC instruction sets such as ``x86``, the JITs are optimized for
emitting the shortest possible opcodes for a given instruction to shrink the
total necessary size for the program translation.

Hardening
---------

BPF locks the entire BPF interpreter image (``struct bpf_prog``) as well as
the JIT compiled image (``struct bpf_binary_header``) in the kernel as
read-only during the program's lifetime in order to prevent the code from
potential corruptions. Any corruption happening at that point, for example,
due to some kernel bugs will result in a general protection fault and thus
crash the kernel instead of allowing the corruption to happen silently.

Architectures that support setting the image memory as read-only can be
determined through:

.. code-block:: shell-session

    $ git grep ARCH_HAS_SET_MEMORY | grep select
    arch/arm/Kconfig:    select ARCH_HAS_SET_MEMORY
    arch/arm64/Kconfig:  select ARCH_HAS_SET_MEMORY
    arch/s390/Kconfig:   select ARCH_HAS_SET_MEMORY
    arch/x86/Kconfig:    select ARCH_HAS_SET_MEMORY

The option ``CONFIG_ARCH_HAS_SET_MEMORY`` is not configurable, thanks to
which this protection is always built in. Other architectures might follow
in the future.

In case of the ``x86_64`` JIT compiler, the JITing of the indirect jump from
the use of tail calls is realized through a retpoline in case
``CONFIG_RETPOLINE`` has been set, which is the default at the time of
writing in most modern Linux distributions.

In case of ``/proc/sys/net/core/bpf_jit_harden`` set