In case of the ``x86_64`` JIT compiler, the JITing of the indirect jump from
the use of tail calls is realized through a retpoline in case
``CONFIG_RETPOLINE`` has been set, which is the default at the time of writing
in most modern Linux distributions.

In case of ``/proc/sys/net/core/bpf_jit_harden`` set to ``1``, additional
hardening steps for the JIT compilation take effect for unprivileged users.
This slightly trades off performance in order to decrease a (potential)
attack surface in case of untrusted users operating on the system. The
decrease in program execution speed still results in better performance
compared to switching to the interpreter entirely.

Currently, enabling hardening will blind all user provided 32 bit and 64 bit
constants from the BPF program when it gets JIT compiled in order to prevent
JIT spraying attacks which inject native opcodes as immediate values. This is
problematic as these immediate values reside in executable kernel memory,
therefore a jump that could be triggered from some kernel bug would jump to
the start of the immediate value and then execute these as native
instructions.

JIT constant blinding prevents this by randomizing the actual instruction,
which means the operation is transformed from an immediate based source
operand to a register based one through rewriting the instruction by
splitting the actual load of the value into two steps: 1) load of a blinded
immediate value ``rnd ^ imm`` into a register, 2) xoring that register with
``rnd`` such that the original ``imm`` immediate then resides in the register
and can be used for the actual operation. The example was provided for a load
operation, but really all generic operations are blinded.

Example of JITing a program with hardening disabled:

.. code-block:: shell-session

    # echo 0 > /proc/sys/net/core/bpf_jit_harden

    ffffffffa034f5e9 + :
    [...]
    39: mov    $0xa8909090,%eax
    3e: mov    $0xa8909090,%eax
    43: mov    $0xa8ff3148,%eax
    48: mov    $0xa89081b4,%eax
    4d: mov    $0xa8900bb0,%eax
    52: mov    $0xa810e0c1,%eax
    57: mov    $0xa8908eb4,%eax
    5c: mov    $0xa89020b0,%eax
    [...]

The same program gets constant blinded when loaded through BPF as an
unprivileged user in the case hardening is enabled:

.. code-block:: shell-session

    # echo 1 > /proc/sys/net/core/bpf_jit_harden

    ffffffffa034f1e5 + :
    [...]
    39: mov    $0xe1192563,%r10d
    3f: xor    $0x4989b5f3,%r10d
    46: mov    %r10d,%eax
    49: mov    $0xb8296d93,%r10d
    4f: xor    $0x10b9fd03,%r10d
    56: mov    %r10d,%eax
    59: mov    $0x8c381146,%r10d
    5f: xor    $0x24c7200e,%r10d
    66: mov    %r10d,%eax
    69: mov    $0xeb2a830e,%r10d
    6f: xor    $0x43ba02ba,%r10d
    76: mov    %r10d,%eax
    79: mov    $0xd9730af,%r10d
    7f: xor    $0xa5073b1f,%r10d
    86: mov    %r10d,%eax
    89: mov    $0x9a45662b,%r10d
    8f: xor    $0x325586ea,%r10d
    96: mov    %r10d,%eax
    [...]

Both programs are semantically the same, only that none of the original
immediate values are visible anymore in the disassembly of the second
program. At the same time, hardening also disables any JIT kallsyms exposure
for privileged users, meaning that JIT image addresses are no longer exposed
to ``/proc/kallsyms``.

Moreover, the Linux kernel provides the option ``CONFIG_BPF_JIT_ALWAYS_ON``
which removes the entire BPF interpreter from the kernel and permanently
enables the JIT compiler. This has been developed as part of a mitigation in
the context of Spectre v2 such that when used in a VM-based setting, the
guest kernel is not going to reuse the host kernel's BPF interpreter when
mounting an attack anymore. For container-based environments, the
``CONFIG_BPF_JIT_ALWAYS_ON`` configuration option is optional, but in case
JITs are enabled there anyway, the interpreter may as well be compiled out to
reduce the kernel's complexity. Thus, it is also generally recommended for
widely used JITs in case of mainstream architectures such as ``x86_64`` and
``arm64``.
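The two-step blinding scheme described earlier (load ``rnd ^ imm`` into a register, then xor that register with ``rnd``) can be sketched with a few lines of Python. This is only an illustration of the xor identity the rewrite relies on, not the kernel's implementation; the helper names and the use of Python's ``secrets`` module are choices made for this example.

```python
import secrets

def blind_imm(imm: int) -> tuple[int, int]:
    # Step 1 of the rewrite: instead of embedding imm directly, a fresh
    # random value rnd is chosen and the blinded value rnd ^ imm is loaded.
    rnd = secrets.randbits(32)
    return rnd ^ imm, rnd

def unblind(blinded: int, rnd: int) -> int:
    # Step 2: xoring the register holding (rnd ^ imm) with rnd recovers the
    # original immediate, which is then used for the actual operation.
    return blinded ^ rnd

imm = 0xA8909090
blinded, rnd = blind_imm(imm)
assert unblind(blinded, rnd) == imm
```

Because ``rnd`` differs on every load, the raw bytes of the JIT image no longer contain attacker-chosen constants at predictable positions.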
Last but not least, the kernel offers an option to disable the use of the
``bpf(2)`` system call for unprivileged users through the
``/proc/sys/kernel/unprivileged_bpf_disabled`` sysctl knob.
This is on purpose a one-time kill switch, meaning once set to ``1``, there
is no option to reset it back to ``0`` until the next kernel reboot. When
set, only ``CAP_SYS_ADMIN`` privileged processes out of the initial namespace
are allowed to use the ``bpf(2)`` system call from that point onwards. Upon
start, Cilium sets this knob to ``1`` as well.

.. code-block:: shell-session

    # echo 1 > /proc/sys/kernel/unprivileged_bpf_disabled

Offloads
--------

.. image:: /images/bpf_offload.png
    :align: center

Networking programs in BPF, in particular for tc and XDP, do have an
offload-interface to hardware in the kernel in order to execute BPF code
directly on the NIC. Currently, the ``nfp`` driver from Netronome has support
for offloading BPF through a JIT compiler which translates BPF instructions
to an instruction set implemented against the NIC. This includes offloading
of BPF maps to the NIC as well, thus the offloaded BPF program can perform
map lookups, updates and deletions.

BPF sysctls
-----------

The Linux kernel provides a few sysctls that are BPF related and covered in
this section.

* ``/proc/sys/net/core/bpf_jit_enable``: Enables or disables the BPF JIT
  compiler.
  +-------+-------------------------------------------------------------------+
  | Value | Description                                                       |
  +-------+-------------------------------------------------------------------+
  | 0     | Disable the JIT and use only interpreter (kernel's default value) |
  +-------+-------------------------------------------------------------------+
  | 1     | Enable the JIT compiler                                           |
  +-------+-------------------------------------------------------------------+
  | 2     | Enable the JIT and emit debugging traces to the kernel log        |
  +-------+-------------------------------------------------------------------+

  As described in subsequent sections, the ``bpf_jit_disasm`` tool can be
  used to process debugging traces when the JIT compiler is set to debugging
  mode (option ``2``).

* ``/proc/sys/net/core/bpf_jit_harden``: Enables or disables BPF JIT
  hardening. Note that enabling hardening trades off performance, but can
  mitigate JIT spraying by blinding out the BPF program's immediate values.
  For programs processed through the interpreter, blinding of immediate
  values is not needed / performed.

  +-------+--------------------------------------------------+
  | Value | Description                                      |
  +-------+--------------------------------------------------+
  | 0     | Disable JIT hardening (kernel's default value)   |
  +-------+--------------------------------------------------+
  | 1     | Enable JIT hardening for unprivileged users only |
  +-------+--------------------------------------------------+
  | 2     | Enable JIT hardening for all users               |
  +-------+--------------------------------------------------+

* ``/proc/sys/net/core/bpf_jit_kallsyms``: Enables or disables export of
  JITed programs as kernel symbols to ``/proc/kallsyms`` so that they can be
  used together with ``perf`` tooling as well as making these addresses aware
  to the kernel for stack unwinding, for example, used in dumping stack
  traces.
  The symbol names contain the BPF program tag (``bpf_prog_<tag>``). If
  ``bpf_jit_harden`` is enabled, then this feature is disabled.

  +-------+------------------------------------------------------+
  | Value | Description                                          |
  +-------+------------------------------------------------------+
  | 0     | Disable JIT kallsyms export (kernel's default value) |
  +-------+------------------------------------------------------+
  | 1     | Enable JIT kallsyms export for privileged users only |
  +-------+------------------------------------------------------+

* ``/proc/sys/kernel/unprivileged_bpf_disabled``: Enables or disables
  unprivileged use of the ``bpf(2)`` system call. The Linux kernel has
  unprivileged use of ``bpf(2)`` enabled by default. Once the value is set to
  ``1``, unprivileged use will be permanently disabled until the next reboot;
  neither an application nor an admin can reset the value anymore. The value
  can also be set to ``2``, which means it can still be changed to ``0`` or
  ``1`` at runtime later on, while disabling unprivileged use for now. This
  value was added in Linux 5.13. If ``BPF_UNPRIV_DEFAULT_OFF`` is enabled in
  the kernel config, then this knob will default to ``2`` instead of ``0``.
  This knob does not affect any cBPF programs such as seccomp or traditional
  socket filters that do not use the ``bpf(2)`` system call for loading the
  program into the kernel.

  +-------+---------------------------------------------------------------------+
  | Value | Description                                                         |
  +-------+---------------------------------------------------------------------+
  | 0     | Unprivileged use of bpf syscall enabled (kernel's default value)    |
  +-------+---------------------------------------------------------------------+
  | 1     | Unprivileged use of bpf syscall disabled (until reboot)             |
  +-------+---------------------------------------------------------------------+
  | 2     | Unprivileged use of bpf syscall disabled                            |
  |       | (default if ``BPF_UNPRIV_DEFAULT_OFF`` is enabled in kernel config) |
  +-------+---------------------------------------------------------------------+
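As a small illustration of the semantics in the table above, the raw value read from the knob can be mapped back to a human-readable description. The helper name and the mapping dictionary below are hypothetical, written only for this sketch.

```python
# Hypothetical helper for illustration: map the raw content of
# /proc/sys/kernel/unprivileged_bpf_disabled to the semantics described above.
UNPRIV_BPF_STATES = {
    0: "unprivileged bpf() enabled",
    1: "unprivileged bpf() disabled until reboot (one-time kill switch)",
    2: "unprivileged bpf() disabled, but may still be set to 0 or 1 at runtime",
}

def describe_unpriv_bpf(raw: str) -> str:
    value = int(raw.strip())
    return UNPRIV_BPF_STATES.get(value, f"unknown value {value}")

# On a Linux host this could be used as:
#   with open("/proc/sys/kernel/unprivileged_bpf_disabled") as f:
#       print(describe_unpriv_bpf(f.read()))
```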
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _bpf_debug:

Debugging and Testing
=====================

bpftool
-------

bpftool is the main introspection and debugging tool around BPF and developed
and shipped along with the Linux kernel tree under ``tools/bpf/bpftool/``.

The tool can dump all BPF programs and maps that are currently loaded in the
system, or list and correlate all BPF maps used by a specific program.
Furthermore, it allows to dump the entire map's key / value pairs, or lookup,
update, delete individual ones as well as retrieve a key's neighbor key in
the map. Such operations can be performed based on BPF program or map IDs or
by specifying the location of a BPF file system pinned program or map. The
tool additionally also offers an option to pin maps or programs into the BPF
file system.

For a quick overview of all BPF programs currently loaded on the host invoke
the following command:

.. code-block:: shell-session

    # bpftool prog
    398: sched_cls tag 56207908be8ad877
       loaded_at Apr 09/16:24  uid 0
       xlated 8800B  jited 6184B  memlock 12288B  map_ids 18,5,17,14
    399: sched_cls tag abc95fb4835a6ec9
       loaded_at Apr 09/16:24  uid 0
       xlated 344B  jited 223B  memlock 4096B  map_ids 18
    400: sched_cls tag afd2e542b30ff3ec
       loaded_at Apr 09/16:24  uid 0
       xlated 1720B  jited 1001B  memlock 4096B  map_ids 17
    401: sched_cls tag 2dbbd74ee5d51cc8
       loaded_at Apr 09/16:24  uid 0
       xlated 3728B  jited 2099B  memlock 4096B  map_ids 17
    [...]

Similarly, to get an overview of all active maps:
.. code-block:: shell-session

    # bpftool map
    5: hash  flags 0x0
        key 20B  value 112B  max_entries 65535  memlock 13111296B
    6: hash  flags 0x0
        key 20B  value 20B  max_entries 65536  memlock 7344128B
    7: hash  flags 0x0
        key 10B  value 16B  max_entries 8192  memlock 790528B
    8: hash  flags 0x0
        key 22B  value 28B  max_entries 8192  memlock 987136B
    9: hash  flags 0x0
        key 20B  value 8B  max_entries 512000  memlock 49352704B
    [...]

Note that for each command, bpftool also supports json based output by
appending ``--json`` at the end of the command line. An additional
``--pretty`` improves the output to be more human readable.

.. code-block:: shell-session

    # bpftool prog --json --pretty

For dumping the post-verifier BPF instruction image of a specific BPF
program, one starting point could be to inspect a specific program, e.g.
attached to the tc ingress hook:

.. code-block:: shell-session

    # tc filter show dev cilium_host egress
    filter protocol all pref 1 bpf chain 0
    filter protocol all pref 1 bpf chain 0 handle 0x1 bpf_host.o:[from-netdev] direct-action not_in_hw id 406 tag e0362f5bd9163a0a jited

The program from the object file ``bpf_host.o``, section ``from-netdev``, has
a BPF program ID of ``406`` as denoted in ``id 406``. Based on this
information bpftool can provide some high-level metadata specific to the
program:

.. code-block:: shell-session

    # bpftool prog show id 406
    406: sched_cls tag e0362f5bd9163a0a
         loaded_at Apr 09/16:24  uid 0
         xlated 11144B  jited 7721B  memlock 12288B  map_ids 18,20,8,5,6,14

The program of ID 406 is of type ``sched_cls`` (``BPF_PROG_TYPE_SCHED_CLS``),
has a ``tag`` of ``e0362f5bd9163a0a`` (SHA sum over the instruction
sequence), and it was loaded by root ``uid 0`` on ``Apr 09/16:24``. The BPF
instruction sequence is ``11,144 bytes`` long and the JITed image ``7,721
bytes``. The program itself (excluding maps) consumes ``12,288 bytes`` that
are accounted / charged against user ``uid 0``.
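The ``tag`` shown above is derived from a digest over the program's instruction sequence. A minimal sketch of the idea, assuming the tag corresponds to the first 8 bytes of a SHA-1 digest over the raw instruction image (the kernel additionally sanitizes map pointers in the instructions before hashing, which this sketch omits):

```python
import hashlib

def bpf_tag(insns: bytes) -> str:
    # Sketch: hash the raw instruction image and keep the first 8 bytes
    # (16 hex characters), matching the length of tags printed by bpftool.
    return hashlib.sha1(insns).hexdigest()[:16]

# Two raw 8-byte eBPF instructions: "r0 = 0" (0xb7) followed by "exit" (0x95).
prog = bytes.fromhex("b700000000000000") + bytes.fromhex("9500000000000000")
tag = bpf_tag(prog)
assert len(tag) == 16  # same shape as e.g. `tag e0362f5bd9163a0a` above
```

Since the tag depends only on the instructions, two identical programs loaded separately report the same tag even though their program IDs differ.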
And the BPF program uses the BPF maps with IDs ``18``, ``20``, ``8``, ``5``,
``6`` and ``14``. The latter IDs can further be used to get information on or
dump the maps themselves.
Additionally, bpftool can issue a dump request of the BPF instructions the
program runs:

.. code-block:: shell-session

    # bpftool prog dump xlated id 406
     0: (b7) r7 = 0
     1: (63) *(u32 *)(r1 +60) = r7
     2: (63) *(u32 *)(r1 +56) = r7
     3: (63) *(u32 *)(r1 +52) = r7
    [...]
    47: (bf) r4 = r10
    48: (07) r4 += -40
    49: (79) r6 = *(u64 *)(r10 -104)
    50: (bf) r1 = r6
    51: (18) r2 = map[id:18]                    <-- BPF map id 18
    53: (b7) r5 = 32
    54: (85) call bpf_skb_event_output#5656112  <-- BPF helper call
    55: (69) r1 = *(u16 *)(r6 +192)
    [...]

bpftool correlates BPF map IDs into the instruction stream as shown above as
well as calls to BPF helpers or other BPF programs. The instruction dump
reuses the same 'pretty-printer' as the kernel's BPF verifier.

Since the program was JITed and therefore the actual JIT image that was
generated out of above ``xlated`` instructions is executed, it can be dumped
as well through bpftool:

.. code-block:: shell-session

    # bpftool prog dump jited id 406
     0: push   %rbp
     1: mov    %rsp,%rbp
     4: sub    $0x228,%rsp
     b: sub    $0x28,%rbp
     f: mov    %rbx,0x0(%rbp)
    13: mov    %r13,0x8(%rbp)
    17: mov    %r14,0x10(%rbp)
    1b: mov    %r15,0x18(%rbp)
    1f: xor    %eax,%eax
    21: mov    %rax,0x20(%rbp)
    25: mov    0x80(%rdi),%r9d
    [...]

Mainly for BPF JIT developers, the option also exists to interleave the
disassembly with the actual native opcodes:

.. code-block:: shell-session

    # bpftool prog dump jited id 406 opcodes
     0: push   %rbp
        55
     1: mov    %rsp,%rbp
        48 89 e5
     4: sub    $0x228,%rsp
        48 81 ec 28 02 00 00
     b: sub    $0x28,%rbp
        48 83 ed 28
     f: mov    %rbx,0x0(%rbp)
        48 89 5d 00
    13: mov    %r13,0x8(%rbp)
        4c 89 6d 08
    17: mov    %r14,0x10(%rbp)
        4c 89 75 10
    1b: mov    %r15,0x18(%rbp)
        4c 89 7d 18
    [...]

The same interleaving can be done for the normal BPF instructions which can
sometimes be useful for debugging in the kernel:
.. code-block:: shell-session

    # bpftool prog dump xlated id 406 opcodes
     0: (b7) r7 = 0
        b7 07 00 00 00 00 00 00
     1: (63) *(u32 *)(r1 +60) = r7
        63 71 3c 00 00 00 00 00
     2: (63) *(u32 *)(r1 +56) = r7
        63 71 38 00 00 00 00 00
     3: (63) *(u32 *)(r1 +52) = r7
        63 71 34 00 00 00 00 00
     4: (63) *(u32 *)(r1 +48) = r7
        63 71 30 00 00 00 00 00
     5: (63) *(u32 *)(r1 +64) = r7
        63 71 40 00 00 00 00 00
    [...]

The basic blocks of a program can also be visualized with the help of
``graphviz``. For this purpose, bpftool has a ``visual`` dump mode that
generates a dot file instead of the plain BPF ``xlated`` instruction dump
that can later be converted to a png file:

.. code-block:: shell-session

    # bpftool prog dump xlated id 406 visual &> output.dot
    $ dot -Tpng output.dot -o output.png

Another option would be to pass the dot file to dotty as a viewer, that is
``dotty output.dot``, where the result for the ``bpf_host.o`` program looks
as follows (small extract):

.. image:: /images/bpf_dot.png
    :align: center

Note that the ``xlated`` instruction dump provides the post-verifier BPF
instruction image which means that it dumps the instructions as if they were
to be run through the BPF interpreter. In the kernel, the verifier performs
various rewrites of the original instructions provided by the BPF loader.
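Each line pair in the opcodes dump above shows one 8-byte eBPF instruction next to its raw encoding. A minimal sketch of decoding such an instruction, assuming the standard little-endian layout (8-bit opcode, 4-bit destination and 4-bit source register fields, 16-bit offset, 32-bit immediate); the helper name is made up for this example:

```python
import struct

def decode_insn(raw: bytes):
    # One eBPF instruction: u8 opcode, u8 regs (dst in the low nibble,
    # src in the high nibble), s16 offset, s32 immediate.
    opcode, regs, off, imm = struct.unpack("<BBhi", raw)
    return opcode, regs & 0x0F, regs >> 4, off, imm

# First pair from the dump above: "0: (b7) r7 = 0" / b7 07 00 00 00 00 00 00
opcode, dst, src, off, imm = decode_insn(bytes.fromhex("b707000000000000"))
assert (opcode, dst, imm) == (0xB7, 7, 0)   # 64-bit mov of imm 0 into r7

# Second pair: "1: (63) *(u32 *)(r1 +60) = r7" / 63 71 3c 00 00 00 00 00
opcode, dst, src, off, imm = decode_insn(bytes.fromhex("63713c0000000000"))
assert (dst, src, off) == (1, 7, 60)        # store r7 at offset +60 from r1
```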
One example of rewrites is the inlining of helper functions in order to
improve runtime performance, here in the case of a map lookup for hash
tables:

.. code-block:: shell-session

    # bpftool prog dump xlated id 3
     0: (b7) r1 = 2
     1: (63) *(u32 *)(r10 -4) = r1
     2: (bf) r2 = r10
     3: (07) r2 += -4
     4: (18) r1 = map[id:2]                      <-- BPF map id 2
     6: (85) call __htab_map_lookup_elem#77408   <-+ BPF helper inlined rewrite
     7: (15) if r0 == 0x0 goto pc+2                |
     8: (07) r0 += 56                              |
     9: (79) r0 = *(u64 *)(r0 +0)                <-+
    10: (15) if r0 == 0x0 goto pc+24
    11: (bf) r2 = r10
    12: (07) r2 += -4
    [...]

bpftool correlates calls to helper functions or BPF to BPF calls through
kallsyms. Therefore, make sure that JITed BPF programs are exposed to
kallsyms (``bpf_jit_kallsyms``) and that kallsyms addresses are not
obfuscated (calls are otherwise shown as ``call bpf_unspec#0``):

.. code-block:: shell-session

    # echo 0 > /proc/sys/kernel/kptr_restrict
    # echo 1 > /proc/sys/net/core/bpf_jit_kallsyms

BPF to BPF calls are correlated as well for both, interpreter as well as JIT
case. In the latter, the tag of the subprogram is shown as call target. In
each case, the ``pc+2`` is the pc-relative offset of the call target, which
denotes the subprogram.

.. code-block:: shell-session

    # bpftool prog dump xlated id 1
    0: (85) call pc+2#__bpf_prog_run_args32
    1: (b7) r0 = 1
    2: (95) exit
    3: (b7) r0 = 2
    4: (95) exit

JITed variant of the dump:
.. code-block:: shell-session

    # bpftool prog dump xlated id 1
    0: (85) call pc+2#bpf_prog_3b185187f1855c4c_F
    1: (b7) r0 = 1
    2: (95) exit
    3: (b7) r0 = 2
    4: (95) exit

In the case of tail calls, the kernel maps them into a single instruction
internally; bpftool will still correlate them as a helper call for ease of
debugging:

.. code-block:: shell-session

    # bpftool prog dump xlated id 2
    [...]
    10: (b7) r2 = 8
    11: (85) call bpf_trace_printk#-41312
    12: (bf) r1 = r6
    13: (18) r2 = map[id:1]
    15: (b7) r3 = 0
    16: (85) call bpf_tail_call#12
    17: (b7) r1 = 42
    18: (6b) *(u16 *)(r6 +46) = r1
    19: (b7) r0 = 0
    20: (95) exit

    # bpftool map show id 1
    1: prog_array  flags 0x0
        key 4B  value 4B  max_entries 1  memlock 4096B

Dumping an entire map is possible through the ``map dump`` subcommand which
iterates through all present map elements and dumps the key / value pairs.

If no BTF (BPF Type Format) data is available for a given map, then the
key / value pairs are dumped as hex:

.. code-block:: shell-session

    # bpftool map dump id 5
    key: f0 0d 00 00 00 00 00 00 0a 66 00 00 00 00 8a d6 02 00 00 00
    value: 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    key: 0a 66 1c ee 00 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00
    value: 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    [...]
    Found 6 elements

However, with BTF, the map also holds debugging information about the key
and value structures. For example, BTF in combination with BPF maps and the
BPF_ANNOTATE_KV_PAIR() macro from iproute2 will result in the following dump
(``test_xdp_noinline.o`` from kernel selftests):

.. code-block:: shell-session

    # cat tools/testing/selftests/bpf/test_xdp_noinline.c
    [...]
    struct ctl_value {
        union {
            __u64 value;
            __u32 ifindex;
            __u8 mac[6];
        };
    };

    struct bpf_map_def __attribute__ ((section("maps"), used)) ctl_array = {
        .type        = BPF_MAP_TYPE_ARRAY,
        .key_size    = sizeof(__u32),
        .value_size  = sizeof(struct ctl_value),
        .max_entries = 16,
        .map_flags   = 0,
    };

    BPF_ANNOTATE_KV_PAIR(ctl_array, __u32, struct ctl_value);
    [...]

The BPF_ANNOTATE_KV_PAIR() macro forces a map-specific ELF section containing
an empty key and value. This enables the iproute2 BPF loader to correlate BTF
data with that section and thus allows to choose the corresponding types out
of the BTF for loading the map.

Compiling through LLVM and generating BTF through debugging information by
``pahole``:

.. code-block:: shell-session

    # clang [...] -O2 --target=bpf -g -emit-llvm -c test_xdp_noinline.c -o - | llc -march=bpf -mcpu=probe -mattr=dwarfris -filetype=obj -o test_xdp_noinline.o
    # pahole -J test_xdp_noinline.o

Now loading into kernel and dumping the map via bpftool:
.. code-block:: shell-session

    # ip -force link set dev lo xdp obj test_xdp_noinline.o sec xdp-test
    # ip a
    1: lo: mtu 65536 xdpgeneric/id:227 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    [...]
    # bpftool prog show id 227
    227: xdp  tag a85e060c275c5616  gpl
         loaded_at 2018-07-17T14:41:29+0000  uid 0
         xlated 8152B  not jited  memlock 12288B  map_ids 381,385,386,382,384,383
    # bpftool map dump id 386
    [{
         "key": 0,
         "value": {
             "": {
                 "value": 0,
                 "ifindex": 0,
                 "mac": []
             }
         }
     },{
         "key": 1,
         "value": {
             "": {
                 "value": 0,
                 "ifindex": 0,
                 "mac": []
             }
         }
     },{
    [...]

Lookup, update, delete, and 'get next key' operations on the map for specific
keys can be performed through bpftool as well.

If the BPF program has been successfully loaded with BTF debugging
information, the BTF ID will be shown in the ``prog show`` command result,
denoted in ``btf_id``.

.. code-block:: shell-session

    # bpftool prog show id 72
    72: xdp  name balancer_ingres  tag acf44cabb48385ed  gpl
        loaded_at 2020-04-13T23:12:08+0900  uid 0
        xlated 19104B  jited 10732B  memlock 20480B  map_ids 126,130,131,127,129,128
        btf_id 60

This can also be confirmed with the ``btf show`` command, which dumps all BTF
objects loaded on a system.

.. code-block:: shell-session

    # bpftool btf show
    60: size 12243B  prog_ids 72  map_ids 126,130,131,127,129,128

And the subcommand ``btf dump`` can be used to check which debugging
information is included in the BTF.
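The structured ``map dump`` output shown above is plain JSON when bpftool runs with ``--json``, so it is straightforward to consume from scripts. A small sketch, with sample data abridged from the dump above:

```python
import json

# Sample shaped like the `bpftool map dump id 386` output above (abridged).
sample = """
[{"key": 0, "value": {"": {"value": 0, "ifindex": 0, "mac": []}}},
 {"key": 1, "value": {"": {"value": 0, "ifindex": 0, "mac": []}}}]
"""

entries = json.loads(sample)
# Pull out the array indices and the typed fields named by the BTF data.
keys = [entry["key"] for entry in entries]
ifindexes = [entry["value"][""]["ifindex"] for entry in entries]
assert keys == [0, 1]
assert ifindexes == [0, 0]
```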
With this command, the BTF dump can be formatted either 'raw' or 'c', the
latter being the style used in C code.

.. code-block:: shell-session

    # bpftool btf dump id 60 format c
    [...]
    struct ctl_value {
        union {
            __u64 value;
            __u32 ifindex;
            __u8 mac[6];
        };
    };

    typedef unsigned int u32;
    [...]

.. admonition:: Video
   :class: attention

   To learn more about bpftool, check out `eCHO episode 11: Exploring
   bpftool`__ with Quentin Monnet, maintainer of bpftool.

Kernel Testing
--------------

The Linux kernel ships a BPF selftest suite, which can be found in the kernel
source tree under ``tools/testing/selftests/bpf/``.

.. code-block:: shell-session

    $ cd tools/testing/selftests/bpf/
    $ make
    # make run_tests

The test suite contains test cases against the BPF verifier, program tags,
and various tests against the BPF map interface and map types. It contains
various runtime tests from C code for checking the LLVM back end, and eBPF
as well as cBPF asm code that is run in the kernel for testing the
interpreter and JITs.

JIT Debugging
-------------

For JIT developers performing audits or writing extensions, each compile run
can output the generated JIT image into the kernel log through:
.. code-block:: shell-session

    # echo 2 > /proc/sys/net/core/bpf_jit_enable

Whenever a new BPF program is loaded, the JIT compiler will dump the output,
which can then be inspected with ``dmesg``, for example:

::

    [ 3389.935842] flen=6 proglen=70 pass=3 image=ffffffffa0069c8f from=tcpdump pid=20583
    [ 3389.935847] JIT code: 00000000: 55 48 89 e5 48 83 ec 60 48 89 5d f8 44 8b 4f 68
    [ 3389.935849] JIT code: 00000010: 44 2b 4f 6c 4c 8b 87 d8 00 00 00 be 0c 00 00 00
    [ 3389.935850] JIT code: 00000020: e8 1d 94 ff e0 3d 00 08 00 00 75 16 be 17 00 00
    [ 3389.935851] JIT code: 00000030: 00 e8 28 94 ff e0 83 f8 01 75 07 b8 ff ff 00 00
    [ 3389.935852] JIT code: 00000040: eb 02 31 c0 c9 c3

``flen`` is the length of the BPF program (here, 6 BPF instructions), and
``proglen`` tells the number of bytes generated by the JIT for the opcode
image (here, 70 bytes in size). ``pass`` means that the image was generated
in 3 compiler passes, for example, ``x86_64`` can have various optimization
passes to further reduce the image size when possible. ``image`` contains the
address of the generated JIT image, ``from`` and ``pid`` the user space
application name and PID respectively, which triggered the compilation
process. The dump output for eBPF and cBPF JITs is the same format.

In the kernel tree under ``tools/bpf/``, there is a tool called
``bpf_jit_disasm``. It reads out the latest dump and prints the disassembly
for further inspection:
code-block:: shell-session # ./bpf\_jit\_disasm 70 bytes emitted from JIT compiler (pass:3, flen:6) ffffffffa0069c8f + : 0: push %rbp 1: mov %rsp,%rbp 4: sub $0x60,%rsp 8: mov %rbx,-0x8(%rbp) c: mov 0x68(%rdi),%r9d 10: sub 0x6c(%rdi),%r9d 14: mov 0xd8(%rdi),%r8 1b: mov $0xc,%esi 20: callq 0xffffffffe0ff9442 25: cmp $0x800,%eax 2a: jne 0x0000000000000042 2c: mov $0x17,%esi 31: callq 0xffffffffe0ff945e 36: cmp $0x1,%eax 39: jne 0x0000000000000042 3b: mov $0xffff,%eax 40: jmp 0x0000000000000044 42: xor %eax,%eax 44: leaveq 45: retq Alternatively, the tool can also dump related opcodes along with the disassembly. .. code-block:: shell-session # ./bpf\_jit\_disasm -o 70 bytes emitted from JIT compiler (pass:3, flen:6) ffffffffa0069c8f + : 0: push %rbp 55 1: mov %rsp,%rbp 48 89 e5 4: sub $0x60,%rsp 48 83 ec 60 8: mov %rbx,-0x8(%rbp) 48 89 5d f8 c: mov 0x68(%rdi),%r9d 44 8b 4f 68 10: sub 0x6c(%rdi),%r9d 44 2b 4f 6c 14: mov 0xd8(%rdi),%r8 4c 8b 87 d8 00 00 00 | https://github.com/cilium/cilium/blob/main//Documentation/reference-guides/bpf/debug_and_test.rst | main | cilium | [
    1b: mov    $0xc,%esi
        be 0c 00 00 00
    20: callq  0xffffffffe0ff9442
        e8 1d 94 ff e0
    25: cmp    $0x800,%eax
        3d 00 08 00 00
    2a: jne    0x0000000000000042
        75 16
    2c: mov    $0x17,%esi
        be 17 00 00 00
    31: callq  0xffffffffe0ff945e
        e8 28 94 ff e0
    36: cmp    $0x1,%eax
        83 f8 01
    39: jne    0x0000000000000042
        75 07
    3b: mov    $0xffff,%eax
        b8 ff ff 00 00
    40: jmp    0x0000000000000044
        eb 02
    42: xor    %eax,%eax
        31 c0
    44: leaveq
        c9
    45: retq
        c3

More recently, ``bpftool`` adapted the same feature of dumping the BPF JIT
image based on a given BPF program ID already loaded in the system (see
bpftool section).

For performance analysis of JITed BPF programs, ``perf`` can be used as
usual. As a prerequisite, JITed programs need to be exported through the
kallsyms infrastructure.

.. code-block:: shell-session

    # echo 1 > /proc/sys/net/core/bpf_jit_enable
    # echo 1 > /proc/sys/net/core/bpf_jit_kallsyms

Enabling or disabling ``bpf_jit_kallsyms`` does not require a reload of the
related BPF programs.

Next, a small workflow example is provided for profiling BPF programs. A
crafted tc BPF program is used for demonstration purposes, where perf records
a failed allocation inside the ``bpf_clone_redirect()`` helper. Due to the
use of direct write, ``bpf_try_make_head_writable()`` failed, which would
then release the cloned ``skb`` again and return with an error message.
``perf`` thus records all ``kfree_skb`` events.

.. code-block:: shell-session

    # tc qdisc add dev em1 clsact
    # tc filter add dev em1 ingress bpf da obj prog.o sec main
    # tc filter show dev em1 ingress
    filter protocol all pref 49152 bpf
    filter protocol all pref 49152 bpf handle 0x1 prog.o:[main] direct-action id 1 tag 8227addf251b7543

    # cat /proc/kallsyms
    [...]
ffffffffc00349e0 t fjes\_hw\_init\_command\_registers [fjes] ffffffffc003e2e0 d \_\_tracepoint\_fjes\_hw\_stop\_debug\_err [fjes] ffffffffc0036190 t fjes\_hw\_epbuf\_tx\_pkt\_send [fjes] ffffffffc004b000 t bpf\_prog\_8227addf251b7543 # perf record -a -g -e skb:kfree\_skb sleep 60 # perf script --kallsyms=/proc/kallsyms [...] ksoftirqd/0 6 [000] 1004.578402: skb:kfree\_skb: skbaddr=0xffff9d4161f20a00 protocol=2048 location=0xffffffffc004b52c 7fffb8745961 bpf\_clone\_redirect (/lib/modules/4.10.0+/build/vmlinux) 7fffc004e52c bpf\_prog\_8227addf251b7543 (/lib/modules/4.10.0+/build/vmlinux) 7fffc05b6283 cls\_bpf\_classify (/lib/modules/4.10.0+/build/vmlinux) 7fffb875957a tc\_classify (/lib/modules/4.10.0+/build/vmlinux) 7fffb8729840 \_\_netif\_receive\_skb\_core (/lib/modules/4.10.0+/build/vmlinux) 7fffb8729e38 \_\_netif\_receive\_skb (/lib/modules/4.10.0+/build/vmlinux) 7fffb872ae05 process\_backlog (/lib/modules/4.10.0+/build/vmlinux) 7fffb872a43e net\_rx\_action (/lib/modules/4.10.0+/build/vmlinux) 7fffb886176c \_\_do\_softirq (/lib/modules/4.10.0+/build/vmlinux) 7fffb80ac5b9 run\_ksoftirqd (/lib/modules/4.10.0+/build/vmlinux) 7fffb80ca7fa smpboot\_thread\_fn (/lib/modules/4.10.0+/build/vmlinux) 7fffb80c6831 kthread (/lib/modules/4.10.0+/build/vmlinux) 7fffb885e09c ret\_from\_fork (/lib/modules/4.10.0+/build/vmlinux) The stack trace recorded by ``perf`` will then show the ``bpf\_prog\_8227addf251b7543()`` symbol as part of the call trace, meaning that the BPF program with the tag ``8227addf251b7543`` was related to the ``kfree\_skb`` event, and such program was attached to netdevice ``em1`` on the ingress hook as shown by tc. Introspection ------------- The Linux kernel provides various tracepoints around BPF and XDP which can be used for additional introspection, for example, to trace interactions of user space programs with the bpf system call. Tracepoints for BPF: .. 
code-block:: shell-session # perf list | grep bpf: bpf:bpf\_map\_create [Tracepoint event] bpf:bpf\_map\_delete\_elem [Tracepoint event] bpf:bpf\_map\_lookup\_elem [Tracepoint event] bpf:bpf\_map\_next\_key [Tracepoint event] bpf:bpf\_map\_update\_elem [Tracepoint event] bpf:bpf\_obj\_get\_map [Tracepoint event] bpf:bpf\_obj\_get\_prog [Tracepoint event] bpf:bpf\_obj\_pin\_map [Tracepoint event] bpf:bpf\_obj\_pin\_prog [Tracepoint event] bpf:bpf\_prog\_get\_type [Tracepoint event] bpf:bpf\_prog\_load [Tracepoint event] bpf:bpf\_prog\_put\_rcu [Tracepoint event] Example usage with ``perf`` (alternatively to ``sleep`` example used here, a specific application like ``tc`` could be used here instead, of course): .. code-block:: shell-session # perf record -a -e bpf:\* sleep 10 # perf script sock\_example 6197 [005] 283.980322: bpf:bpf\_map\_create: map type=ARRAY ufd=4 key=4 val=8 max=256 flags=0 sock\_example 6197 [005] 283.980721: bpf:bpf\_prog\_load: prog=a5ea8fa30ea6849c type=SOCKET\_FILTER ufd=5 sock\_example 6197 [005] 283.988423: bpf:bpf\_prog\_get\_type: prog=a5ea8fa30ea6849c type=SOCKET\_FILTER sock\_example 6197 [005] 283.988443: bpf:bpf\_map\_lookup\_elem: map type=ARRAY ufd=4 | https://github.com/cilium/cilium/blob/main//Documentation/reference-guides/bpf/debug_and_test.rst | main | cilium | [
    sock_example  6197 [005]   283.988443: bpf:bpf_map_lookup_elem: map type=ARRAY ufd=4 key=[06 00 00 00] val=[00 00 00 00 00 00 00 00]
    [...]
    sock_example  6197 [005]   288.990868: bpf:bpf_map_lookup_elem: map type=ARRAY ufd=4 key=[01 00 00 00] val=[14 00 00 00 00 00 00 00]
         swapper     0 [005]   289.338243: bpf:bpf_prog_put_rcu: prog=a5ea8fa30ea6849c type=SOCKET_FILTER

For the BPF programs, their individual program tag is displayed.

For debugging, XDP also has a tracepoint that is triggered when exceptions are
raised:

.. code-block:: shell-session

    # perf list | grep xdp:
      xdp:xdp_exception                                  [Tracepoint event]

Exceptions are triggered in the following scenarios:

* The BPF program returned an invalid / unknown XDP action code.
* The BPF program returned with ``XDP_ABORTED`` indicating a non-graceful exit.
* The BPF program returned with ``XDP_TX``, but there was an error on
  transmit, for example, due to the port not being up, due to the transmit
  ring being full, due to allocation failures, etc.

Both tracepoint classes can also be inspected with a BPF program itself
attached to one or more tracepoints, collecting further information in a map
or punting such events to a user space collector through the
``bpf_perf_event_output()`` helper, for example.

Tracing pipe
------------

When a BPF program makes a call to ``bpf_trace_printk()``, the output is sent
to the kernel tracing pipe. Users may read from this file to consume events
that are traced to this buffer:

.. code-block:: shell-session

    # tail -f /sys/kernel/debug/tracing/trace_pipe
    ...
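The pipe can also be consumed programmatically. Below is a minimal C sketch of
such a reader; ``read_pipe_once()`` is a hypothetical helper name, and reading
the real ``trace_pipe`` requires a mounted debugfs plus sufficient privileges
(and blocks while the pipe is empty), so the demonstration below reads from
``/dev/null`` and only names the real path in a comment.

.. code-block:: c

    #include <stdio.h>
    #include <string.h>

    /* Read at most one buffered line from the given pipe/file path.
     * Returns the number of bytes stored in buf, or -1 if the path
     * could not be opened. Hypothetical helper for illustration;
     * note that the real trace_pipe blocks until data is available. */
    static int read_pipe_once(const char *path, char *buf, size_t len)
    {
        FILE *f = fopen(path, "r");
        if (!f)
            return -1;
        if (!fgets(buf, (int)len, f)) {
            fclose(f);
            return 0;   /* nothing to read right now */
        }
        fclose(f);
        return (int)strlen(buf);
    }

    int main(void)
    {
        char line[256];
        /* For a real reader, the path would be
         * "/sys/kernel/debug/tracing/trace_pipe" (debugfs + root). */
        int n = read_pipe_once("/dev/null", line, sizeof(line));
        if (n < 0)
            fprintf(stderr, "pipe not accessible here\n");
        else if (n > 0)
            printf("trace: %s", line);
        return 0;
    }

In practice, tooling such as ``tc exec bpf dbg`` wraps exactly this kind of
loop around the tracing pipe.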
Miscellaneous
-------------

BPF programs and maps are memory accounted against ``RLIMIT_MEMLOCK`` similar
to ``perf``. The currently available size in unit of system pages which may be
locked into memory can be inspected through ``ulimit -l``. The setrlimit
system call man page provides further details. The default limit is usually
insufficient to load more complex programs or larger BPF maps, so that the BPF
system call will return with ``errno`` of ``EPERM``. In such situations a
workaround with ``ulimit -l unlimited`` or with a sufficiently large limit
could be performed. The ``RLIMIT_MEMLOCK`` is mainly enforcing limits for
unprivileged users. Depending on the setup, setting a higher limit for
privileged users is often acceptable.
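The same adjustment can also be made programmatically before issuing BPF
system calls, using the standard ``getrlimit(2)``/``setrlimit(2)`` interface.
A small sketch; raising the hard limit typically requires
``CAP_SYS_RESOURCE``, so the example treats failure as non-fatal:

.. code-block:: c

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* Inspect the current RLIMIT_MEMLOCK, the same value that
         * `ulimit -l` reports (kilobytes there, bytes here). */
        if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("memlock cur=%llu max=%llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);

        /* Equivalent of `ulimit -l unlimited`; raising the hard limit
         * needs CAP_SYS_RESOURCE, so treat failure as non-fatal. */
        rl.rlim_cur = RLIM_INFINITY;
        rl.rlim_max = RLIM_INFINITY;
        if (setrlimit(RLIMIT_MEMLOCK, &rl) != 0)
            perror("setrlimit (continuing with default limit)");

        return 0;
    }

Loaders such as libbpf-based applications commonly perform this bump once at
startup before creating maps or loading programs.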
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _bpf_program:

Program Types
=============

At the time of this writing, there are eighteen different BPF program types
available, two of the main types for networking are further explained in below
subsections, namely XDP BPF programs as well as tc BPF programs. Extensive
usage examples for the two program types for LLVM, iproute2 or other tools are
spread throughout the toolchain section and not covered here. Instead, this
section focuses on their architecture, concepts and use cases.

XDP
---

XDP stands for eXpress Data Path and provides a framework for BPF that enables
high-performance programmable packet processing in the Linux kernel. It runs
the BPF program at the earliest possible point in software, namely at the
moment the network driver receives the packet.

At this point in the fast-path the driver just picked up the packet from its
receive rings, without having done any expensive operations such as allocating
an ``skb`` for pushing the packet further up the networking stack, without
having pushed the packet into the GRO engine, etc. Thus, the XDP BPF program
is executed at the earliest point when it becomes available to the CPU for
processing.

XDP works in concert with the Linux kernel and its infrastructure, meaning the
kernel is not bypassed as in various networking frameworks that operate in
user space only. Keeping the packet in kernel space has several major
advantages:

* XDP is able to reuse all the upstream developed kernel networking drivers,
  user space tooling, or even other available in-kernel infrastructure such as
  routing tables, sockets, etc in BPF helper calls itself.
* Residing in kernel space, XDP has the same security model as the rest of the
  kernel for accessing hardware.
* There is no need for crossing kernel / user space boundaries since the
  processed packet already resides in the kernel and can therefore flexibly
  forward packets into other in-kernel entities like namespaces used by
  containers or the kernel's networking stack itself. This is particularly
  relevant in times of Meltdown and Spectre.
* Punting packets from XDP to the kernel's robust, widely used and efficient
  TCP/IP stack is trivially possible, allows for full reuse and does not
  require maintaining a separate TCP/IP stack as with user space frameworks.
* The use of BPF allows for full programmability, keeping a stable ABI with
  the same 'never-break-user-space' guarantees as with the kernel's system
  call ABI and compared to modules it also provides safety measures thanks to
  the BPF verifier that ensures the stability of the kernel's operation.
* XDP trivially allows for atomically swapping programs during runtime without
  any network traffic interruption or even kernel / system reboot.
* XDP allows for flexible structuring of workloads integrated into the kernel.
  For example, it can operate in "busy polling" or "interrupt driven" mode.
  Explicitly dedicating CPUs to XDP is not required. There are no special
  hardware requirements and it does not rely on hugepages.
* XDP does not require any third-party kernel modules or licensing. It is a
  long-term architectural solution, a core part of the Linux kernel, and
  developed by the kernel community.
* XDP is already enabled and shipped everywhere with major distributions
  running a kernel equivalent to 4.8 or higher and supports most major 10G or
  higher networking drivers.

As a framework for running BPF in the driver, XDP additionally ensures that
packets are laid out linearly and fit into a single DMA'ed page which is
readable and writable by the BPF program.
XDP also ensures that additional headroom of 256 bytes is available to the
program for implementing custom encapsulation headers with the help of the
``bpf_xdp_adjust_head()`` BPF helper or adding custom metadata in front of the
packet through ``bpf_xdp_adjust_meta()``.

The framework contains XDP action codes further described in the section below
which a BPF program can return in order to instruct the driver how to proceed
with the packet, and it enables the possibility to atomically replace BPF
programs running at the XDP layer. XDP is tailored for high-performance by
design. BPF allows access to the packet data through 'direct packet access'
which means that the program holds data pointers directly in registers, loads
the content into registers, respectively writes from there into the packet.

The packet representation in XDP that is passed to the BPF program as the BPF
context looks as follows:

.. code-block:: c

    struct xdp_buff {
        void *data;
        void *data_end;
        void *data_meta;
        void *data_hard_start;
        struct xdp_rxq_info *rxq;
    };

``data`` points to the start of the packet data in the page, and as the name
suggests, ``data_end`` points to the end of the packet data. Since XDP allows
for a headroom, ``data_hard_start`` points to the maximum possible headroom
start in the page, meaning, when the packet should be encapsulated, then
``data`` is moved closer towards ``data_hard_start`` via
``bpf_xdp_adjust_head()``. The same BPF helper function also allows for
decapsulation in which case ``data`` is moved further away from
``data_hard_start``.

``data_meta`` initially points to the same location as ``data`` but
``bpf_xdp_adjust_meta()`` is able to move the pointer towards
``data_hard_start`` as well in order to provide room for custom metadata which
is invisible to the normal kernel networking stack but can be read by tc BPF
programs since it is transferred from XDP to the ``skb``. Vice versa, it can
remove or reduce the size of the custom metadata through the same BPF helper
function by moving ``data_meta`` away from ``data_hard_start`` again.
``data_meta`` can also be used solely for passing state between tail calls
similarly to the ``skb->cb[]`` control block case that is accessible in tc BPF
programs.

This gives the following relation respectively invariant for the
``struct xdp_buff`` packet pointers:
``data_hard_start`` <= ``data_meta`` <= ``data`` < ``data_end``.

The ``rxq`` field points to some additional per receive queue metadata which
is populated at ring setup time (not at XDP runtime):

.. code-block:: c

    struct xdp_rxq_info {
        struct net_device *dev;
        u32 queue_index;
        u32 reg_state;
    } ____cacheline_aligned;

The BPF program can retrieve ``queue_index`` as well as additional data from
the netdevice itself such as ``ifindex``, etc.

**BPF program return codes**

After running the XDP BPF program, a verdict is returned from the program in
order to tell the driver how to process the packet next. In the
``linux/bpf.h`` system header file all available return verdicts are
enumerated:

.. code-block:: c

    enum xdp_action {
        XDP_ABORTED = 0,
        XDP_DROP,
        XDP_PASS,
        XDP_TX,
        XDP_REDIRECT,
    };

``XDP_DROP`` as the name suggests will drop the packet right at the driver
level without wasting any further resources. This is in particular useful for
BPF programs implementing DDoS mitigation mechanisms or firewalling in
general. The ``XDP_PASS`` return code means that the packet is allowed to be
passed up to the kernel's networking stack.
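The bounds-check discipline that direct packet access imposes, together with
the verdict codes above, can be illustrated in plain userspace C. This is only
a sketch: the flat buffer and the ``parse_verdict()`` helper are hypothetical
stand-ins for a real XDP context and program, not kernel code.

.. code-block:: c

    #include <assert.h>
    #include <stdint.h>

    /* Mirror of the kernel's enum xdp_action values, for illustration. */
    enum xdp_action { XDP_ABORTED = 0, XDP_DROP, XDP_PASS, XDP_TX, XDP_REDIRECT };

    #define ETH_HLEN        14
    #define ETH_P_IP        0x0800
    #define IP_PROTO_UDP    17

    /* Hypothetical verdict logic: drop UDP, pass everything else. Every
     * access is preceded by a data/data_end bounds check, just as the BPF
     * verifier would demand from a real XDP program. */
    static int parse_verdict(const uint8_t *data, const uint8_t *data_end)
    {
        /* Ethernet header fits? */
        if (data + ETH_HLEN > data_end)
            return XDP_DROP;
        uint16_t proto = (uint16_t)(data[12] << 8 | data[13]);
        if (proto != ETH_P_IP)
            return XDP_PASS;          /* not IPv4, let the stack decide */

        /* Minimal IPv4 header fits? */
        const uint8_t *ip = data + ETH_HLEN;
        if (ip + 20 > data_end)
            return XDP_DROP;
        if (ip[9] == IP_PROTO_UDP)    /* IP protocol field */
            return XDP_DROP;          /* e.g. node only serves TCP */
        return XDP_PASS;
    }

    int main(void)
    {
        uint8_t pkt[64] = {0};
        pkt[12] = 0x08; pkt[13] = 0x00;  /* EtherType IPv4 */
        pkt[ETH_HLEN + 9] = IP_PROTO_UDP;
        assert(parse_verdict(pkt, pkt + sizeof(pkt)) == XDP_DROP);

        pkt[ETH_HLEN + 9] = 6;           /* TCP */
        assert(parse_verdict(pkt, pkt + sizeof(pkt)) == XDP_PASS);
        return 0;
    }

In a real XDP program the same checks operate on ``ctx->data`` and
``ctx->data_end`` from ``struct xdp_md``, and the returned value is the
verdict the driver acts upon.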
With ``XDP_PASS``, the current CPU that was processing this packet then
allocates an ``skb``, populates it, and passes it onwards into the GRO engine.
This would be equivalent to the default packet handling behavior without XDP.
With ``XDP_TX`` the BPF program has an efficient option to transmit the
network packet out of the same NIC it just arrived on again. This is typically
useful when few nodes are implementing, for example, firewalling with
subsequent load balancing in a cluster and thus act as a hairpinned load
balancer pushing the incoming packets back into the switch after rewriting
them in XDP BPF. ``XDP_REDIRECT`` is similar to ``XDP_TX`` in that it is able
to transmit the XDP packet, but through another NIC. Another option for the
``XDP_REDIRECT`` case is to redirect into a BPF cpumap, meaning, the CPUs
serving XDP on the NIC's receive queues can continue to do so and push the
packet for processing the upper kernel stack to a remote CPU. This is similar
to ``XDP_PASS``, but with the ability that the XDP BPF program can keep
serving the incoming high load as opposed to temporarily spending work on the
current packet for pushing into upper layers. Last but not least,
``XDP_ABORTED`` denotes an exception-like state from the program and has the
same behavior as ``XDP_DROP``, only that ``XDP_ABORTED`` passes the
``trace_xdp_exception`` tracepoint which can be additionally monitored to
detect misbehavior.

**Use cases for XDP**

Some of the main use cases for XDP are presented in this subsection. The list
is non-exhaustive and given the programmability and efficiency XDP and BPF
enables, it can easily be adapted to solve very specific use cases.

* **DDoS mitigation, firewalling**

  One of the basic XDP BPF features is to tell the driver to drop a packet
  with ``XDP_DROP`` at this early stage which allows for any kind of efficient
  network policy enforcement with having an extremely low per-packet cost.
  This is ideal in situations when needing to cope with any sort of DDoS
  attacks, but more generally also allows implementing any sort of firewalling
  policies with close to no overhead in BPF e.g. in either case as stand alone
  appliance (e.g. scrubbing 'clean' traffic through ``XDP_TX``) or widely
  deployed on nodes protecting end hosts themselves (via ``XDP_PASS`` or
  cpumap ``XDP_REDIRECT`` for good traffic). Offloaded XDP takes this even one
  step further by moving the already small per-packet cost entirely into the
  NIC with processing at line-rate.

..

* **Forwarding and load-balancing**

  Another major use case of XDP is packet forwarding and load-balancing
  through either ``XDP_TX`` or ``XDP_REDIRECT`` actions. The packet can be
  arbitrarily mangled by the BPF program running in the XDP layer, even BPF
  helper functions are available for increasing or decreasing the packet's
  headroom in order to arbitrarily encapsulate respectively decapsulate the
  packet before sending it out again. With ``XDP_TX`` hairpinned
  load-balancers can be implemented that push the packet out of the same
  networking device it originally arrived on, or with the ``XDP_REDIRECT``
  action it can be forwarded to another NIC for transmission. The latter
  return code can also be used in combination with BPF's cpumap to
  load-balance packets for passing up the local stack, but on remote, non-XDP
  processing CPUs.

..
* **Pre-stack filtering / processing**

  Besides policy enforcement, XDP can also be used for hardening the kernel's
  networking stack with the help of the ``XDP_DROP`` case, meaning, it can
  drop irrelevant packets for a local node right at the earliest possible
  point before the networking stack sees them e.g. given we know that a node
  only serves TCP traffic, any UDP, SCTP or other L4 traffic can be dropped
  right away. This has the advantage that packets do not need to traverse
  various entities like GRO engine, the kernel's flow dissector and others
  before it can be determined to drop them and thus, this allows for reducing
  the kernel's attack surface. Thanks to XDP's early processing stage, this
  effectively 'pretends' to the kernel's networking stack that these packets
  have never been seen by the networking device. Additionally, if a potential
  bug in the stack's receive path got uncovered and would cause a 'ping of
  death' like scenario, XDP can be utilized to drop such packets right away
  without having to reboot the kernel or restart any services. Due to the
  ability to atomically swap such programs to enforce a drop of bad packets,
  no network traffic is even interrupted on a host.

  Another use case for pre-stack processing is that given the kernel has not
  yet allocated an ``skb`` for the packet, the BPF program is free to modify
  the packet and, again, have it 'pretend' to the stack that it was received
  by the networking device this way. This allows for cases such as having
  custom packet mangling and encapsulation protocols where the packet can be
  decapsulated prior to entering GRO aggregation in which GRO otherwise would
  not be able to perform any sort of aggregation due to not being aware of the
  custom protocol. XDP also allows to push metadata (non-packet data) in front
  of the packet. This is 'invisible' to the normal kernel stack, can be GRO
  aggregated (for matching metadata) and later on processed in coordination
  with a tc ingress BPF program where it has the context of a ``skb``
  available for e.g. setting various skb fields.

..

* **Flow sampling, monitoring**

  XDP can also be used for cases such as packet monitoring, sampling or any
  other network analytics, for example, as part of an intermediate node in the
  path or on end hosts in combination also with prior mentioned use cases. For
  complex packet analysis, XDP provides a facility to efficiently push network
  packets (truncated or with full payload) and custom metadata into a fast
  lockless per CPU memory mapped ring buffer provided from the Linux perf
  infrastructure to a user space application. This also allows for cases where
  only a flow's initial data can be analyzed and once determined as good
  traffic having the monitoring bypassed. Thanks to the flexibility brought by
  BPF, this allows for implementing any sort of custom monitoring or sampling.

..

One example of XDP BPF production usage is Facebook's SHIV and Droplet
infrastructure which implements their L4 load-balancing and DDoS
countermeasures. Migrating their production infrastructure away from
netfilter's IPVS (IP Virtual Server) over to XDP BPF allowed for a 10x speedup
compared to their previous IPVS setup. This was first presented at the netdev
2.1 conference:

* Slides: https://netdevconf.info/2.1/slides/apr6/zhou-netdev-xdp-2017.pdf
* Video: https://youtu.be/YEU2ClcGqts
Another example is the integration of XDP into Cloudflare's DDoS mitigation
pipeline, which originally was using cBPF instead of eBPF for attack signature
matching through iptables' ``xt_bpf`` module. Due to use of iptables this
caused severe performance problems under attack where a user space bypass
solution was deemed necessary but came with drawbacks as well such as needing
to busy poll the NIC and expensive packet re-injection into the kernel's
stack. The migration over to eBPF and XDP combined best of both worlds by
having high-performance programmable packet processing directly inside the
kernel:

* Slides: https://netdevconf.info/2.1/slides/apr6/bertin_Netdev-XDP.pdf
* Video: https://youtu.be/7OuOukmuivg

**XDP operation modes**

XDP has three operation modes where 'native' XDP is the default mode. When XDP
is discussed, this mode is typically implied.

* **Native XDP**

  This is the default mode where the XDP BPF program is run directly out of
  the networking driver's early receive path. Most widely used NICs for 10G
  and higher support native XDP already.

..

* **Offloaded XDP**

  In the offloaded XDP mode the XDP BPF program is directly offloaded into the
  NIC instead of being executed on the host CPU. Thus, the already extremely
  low per-packet cost is pushed off the host CPU entirely and executed on the
  NIC, providing even higher performance than running in native XDP. This
  offload is typically implemented by SmartNICs containing multi-threaded,
  multicore flow processors where an in-kernel JIT compiler translates BPF
  into native instructions for the latter. Drivers supporting offloaded XDP
  usually also support native XDP for cases where some BPF helpers may not yet
  or only be available for the native mode.

..

* **Generic XDP**

  For drivers not implementing native or offloaded XDP yet, the kernel
  provides an option for generic XDP which does not require any driver changes
  since it runs at a much later point out of the networking stack. This
  setting is primarily targeted at developers who want to write and test
  programs against the kernel's XDP API, and will not operate at the
  performance rate of the native or offloaded modes. For XDP usage in a
  production environment either the native or offloaded mode is better suited
  and the recommended way to run XDP.

.. _xdp_drivers:

**Driver support**

**Drivers supporting native XDP**

A list of drivers supporting native XDP can be found in the table below. The
corresponding network driver name of an interface can be determined as
follows:

.. code-block:: shell-session

    # ethtool -i eth0
    driver: nfp
    [...]

+------------------------+-------------------------+-------------+
| Vendor                 | Driver                  | XDP Support |
+========================+=========================+=============+
| Amazon                 | ena                     | >= 5.6      |
+------------------------+-------------------------+-------------+
| Aquantia               | atlantic                | >= 5.19     |
+------------------------+-------------------------+-------------+
| Broadcom               | bnxt_en                 | >= 4.11     |
+------------------------+-------------------------+-------------+
| Cavium                 | thunderx                | >= 4.12     |
+------------------------+-------------------------+-------------+
| Engleder               | tsnep                   | >= 6.3      |
|                        | (TSN Express Path)      |             |
+------------------------+-------------------------+-------------+
| Freescale              | dpaa                    | >= 5.11     |
|                        +-------------------------+-------------+
|                        | dpaa2                   | >= 5.0      |
|                        +-------------------------+-------------+
|                        | enetc                   | >= 5.13     |
|                        +-------------------------+-------------+
|                        | fec_enet                | >= 6.2      |
+------------------------+-------------------------+-------------+
| Fungible               | fun                     | >= 5.18     |
+------------------------+-------------------------+-------------+
| Google                 | gve                     | >= 6.4      |
+------------------------+-------------------------+-------------+
| Intel                  | ice                     | >= 5.5      |
|                        +-------------------------+-------------+
|                        | igb                     | >= 5.10     |
|                        +-------------------------+-------------+
|                        | igc                     | >= 5.13     |
|                        +-------------------------+-------------+
|                        | i40e                    | >= 4.13     |
|                        +-------------------------+-------------+
|                        | ixgbe                   | >= 4.12     |
|                        +-------------------------+-------------+
|                        | ixgbevf                 | >= 4.17     |
+------------------------+-------------------------+-------------+
| Marvell                | mvneta                  | >= 5.5      |
|                        +-------------------------+-------------+
|                        | mvpp2                   | >= 5.9      |
igc | >= 5.13 | | +-------------------------+-------------+ | | i40e | >= 4.13 | | +-------------------------+-------------+ | | ixgbe | >= 4.12 | | +-------------------------+-------------+ | | ixgbevf | >= 4.17 | +------------------------+-------------------------+-------------+ | Marvell | mvneta | >= 5.5 | | +-------------------------+-------------+ | | mvpp2 | >= 5.9 | | +-------------------------+-------------+ | | otx2 | >= 5.16 | +------------------------+-------------------------+-------------+ | Mediatek | mtk | >= 6.0 | +------------------------+-------------------------+-------------+ | Mellanox | mlx4 | >= 4.8 | | +-------------------------+-------------+ | | mlx5 | >= 4.9 | +------------------------+-------------------------+-------------+ | Microchip | lan966x | >= 6.2 | +------------------------+-------------------------+-------------+ | Microsoft | hv\_netvsc (Hyper-V) | >= 5.6 | | +-------------------------+-------------+ | | mana | >= 5.17 | +------------------------+-------------------------+-------------+ | Netronome | nfp | >= 4.10 | +------------------------+-------------------------+-------------+ | Others | bonding | >= 5.15 | | +-------------------------+-------------+ | | netdevsim | >= 4.16 | | +-------------------------+-------------+ | | tun/tap | >= 4.14 | | +-------------------------+-------------+ | | virtio\_net | >= 4.10 | | +-------------------------+-------------+ | | xen-netfront | >= 5.9 | | +-------------------------+-------------+ | | veth | >= 4.19 | +------------------------+-------------------------+-------------+ | QLogic | qede | >= 4.10 | +------------------------+-------------------------+-------------+ | Socionext | netsec | >= 5.3 | +------------------------+-------------------------+-------------+ | Solarflare | SFC Efx | >= 5.5 | +------------------------+-------------------------+-------------+ | STMicro | stmmac | >= 5.13 | +------------------------+-------------------------+-------------+ | Texas Instruments | 
cpsw | >= 5.3 | +------------------------+-------------------------+-------------+ | VMware | vmxnet3 | >= 6.6 | +------------------------+-------------------------+-------------+ \*\*Drivers supporting offloaded XDP\*\* \* \*\*Netronome\*\* \* nfp [2]\_ .. note:: Examples for writing and loading XDP programs are included in the `bpf\_dev` section under the respective tools. .. [2] Some BPF helper functions such as retrieving the current CPU number will not be available in an offloaded setting. tc (traffic control) -------------------- Aside from other program types such as XDP, BPF can also be used out of the kernel's tc (traffic control) layer in the networking data path. On a high-level there are three major differences when comparing XDP BPF programs to tc BPF ones: \* The BPF input context is a ``sk\_buff`` not a ``xdp\_buff``. When the kernel's networking stack receives a packet, after the XDP layer, it allocates a buffer and parses the packet to store metadata about the packet. This representation is known as the ``sk\_buff``. This structure is then exposed in the BPF input context so that BPF programs from the tc ingress layer can use the metadata that the stack extracts from the packet. This can be useful, but comes with an associated cost of the stack performing this allocation and metadata extraction, and handling the packet until it hits the tc hook. By definition, the ``xdp\_buff`` doesn't have access to this metadata because the XDP hook is called before this work is done. This is a significant contributor to the performance difference between the XDP and tc hooks. Therefore, BPF programs attached to the tc BPF hook can, for instance, read or write the skb's ``mark``, ``pkt\_type``, ``protocol``, ``priority``, ``queue\_mapping``, ``napi\_id``, ``cb[]`` array, ``hash``, ``tc\_classid`` or ``tc\_index``, vlan metadata, the XDP transferred custom metadata and various other information. 
  All members of the ``struct __sk_buff`` BPF context used in tc BPF are
  defined in the ``linux/bpf.h`` system header.

  Generally, the ``sk_buff`` is of a completely different nature than the
  ``xdp_buff``, where both come with advantages and disadvantages. For
  example, the ``sk_buff`` case has the advantage that it is rather
  straightforward to mangle its associated metadata, however, it also
  contains a lot of protocol specific information (e.g. GSO related state)
  which makes it difficult to simply switch protocols by solely rewriting
  the packet data. This is due to the stack processing
  the packet based on the metadata rather than having the cost of accessing
  the packet contents each time. Thus, additional conversion is required
  from BPF helper functions taking care that ``sk_buff`` internals are
  properly converted as well. The ``xdp_buff`` case however does not face
  such issues since it comes at such an early stage where the kernel has not
  even allocated an ``sk_buff`` yet, thus packet rewrites of any kind can be
  realized trivially. However, the ``xdp_buff`` case has the disadvantage
  that ``sk_buff`` metadata is not available for mangling at this stage. The
  latter is overcome by passing custom metadata from XDP BPF to tc BPF,
  though. In this way, the limitations of each program type can be overcome
  by operating complementary programs of both types as the use case
  requires.

..

* Compared to XDP, tc BPF programs can be triggered out of ingress and also
  egress points in the networking data path as opposed to ingress only in
  the case of XDP.

  The two hook points ``sch_handle_ingress()`` and ``sch_handle_egress()``
  in the kernel are triggered out of ``__netif_receive_skb_core()`` and
  ``__dev_queue_xmit()``, respectively. The latter two are the main receive
  and transmit functions in the data path that, setting XDP aside, are
  triggered for every network packet going in or coming out of the node,
  allowing for full visibility for tc BPF programs at these hook points.

..

* The tc BPF programs do not require any driver changes since they are run
  at hook points in generic layers in the networking stack. Therefore, they
  can be attached to any type of networking device.
  While this provides flexibility, it also trades off performance compared
  to running at the native XDP layer. However, tc BPF programs still come at
  the earliest point in the generic kernel's networking data path after GRO
  has been run but **before** any protocol processing, traditional iptables
  firewalling such as iptables PREROUTING or nftables ingress hooks or other
  packet processing takes place. Likewise on egress, tc BPF programs execute
  at the latest point before handing the packet to the driver itself for
  transmission, meaning **after** traditional iptables firewalling hooks
  like iptables POSTROUTING, but still before handing the packet to the
  kernel's GSO engine.

  One exception which does require driver changes, however, is offloaded tc
  BPF programs, typically provided by SmartNICs in a similar way as
  offloaded XDP, just with a differing set of features due to the
  differences in the BPF input context, helper functions and verdict codes.

..

BPF programs run in the tc layer are run from the ``cls_bpf`` classifier.
While the tc terminology describes the BPF attachment point as a
"classifier", this is a bit misleading since it under-represents what
``cls_bpf`` is capable of. That is to say, a fully programmable packet
processor being able not only to read the ``skb`` metadata and packet data,
but to also arbitrarily mangle both and terminate the tc processing with an
action verdict. ``cls_bpf`` can thus be regarded as a self-contained entity
that manages and executes tc BPF programs.

``cls_bpf`` can hold one or more tc BPF programs. In the case where Cilium
deploys ``cls_bpf`` programs, it attaches only a single program for a given
hook in ``direct-action`` mode. Typically, in the traditional tc scheme,
there is a split between classifier and action modules, where the classifier
has one
or more actions attached to it that are triggered once the classifier has a
match. In the modern world for using tc in the software data path this model
does not scale well for complex packet processing. Given tc BPF programs
attached to ``cls_bpf`` are fully self-contained, they effectively fuse the
parsing and action process together into a single unit. Thanks to
``cls_bpf``'s ``direct-action`` mode, it will just return the tc action
verdict and terminate the processing pipeline immediately. This allows for
implementing scalable programmable packet processing in the networking data
path by avoiding linear iteration of actions. ``cls_bpf`` is the only such
"classifier" module in the tc layer capable of such a fast-path.

Like XDP BPF programs, tc BPF programs can be atomically updated at runtime
via ``cls_bpf`` without interrupting any network traffic or having to
restart services.

Both the tc ingress and the egress hook to which ``cls_bpf`` itself can be
attached are managed by a pseudo qdisc called ``sch_clsact``. This is a
drop-in replacement and proper superset of the ingress qdisc since it is
able to manage both ingress and egress tc hooks. For tc's egress hook in
``__dev_queue_xmit()`` it is important to stress that it is not executed
under the kernel's qdisc root lock. Thus, both tc ingress and egress hooks
are executed in a lockless manner in the fast-path. In either case,
preemption is disabled and execution happens under RCU read side.
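As an illustrative sketch of the attachment scheme described above, a single
program can be attached in ``direct-action`` mode through iproute2. The
device name ``em1`` and the object file ``prog.o`` with its ``ingress`` and
``egress`` sections are placeholders, not names mandated by the kernel:

.. code-block:: shell-session

    # tc qdisc add dev em1 clsact
    # tc filter add dev em1 ingress bpf da obj prog.o sec ingress
    # tc filter add dev em1 egress bpf da obj prog.o sec egress

The ``da`` flag selects ``direct-action`` mode. An atomic program update at
runtime can then be performed with ``tc filter replace`` against the same
priority and handle of the already attached instance.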
Typically, on egress there are qdiscs attached to netdevices such as
``sch_mq``, ``sch_fq``, ``sch_fq_codel`` or ``sch_htb`` where some of them
are classful qdiscs that contain subclasses and thus require a packet
classification mechanism to determine a verdict where to demux the packet.
This is handled by a call to ``tcf_classify()`` which calls into tc
classifiers if present. ``cls_bpf`` can also be attached and used in such
cases. Such operation usually happens under the qdisc root lock and can be
subject to lock contention. The ``sch_clsact`` qdisc's egress hook comes at
a much earlier point, however, which does not fall under that lock and
operates completely independently from conventional egress qdiscs. Thus, for
cases like ``sch_htb``, the ``sch_clsact`` qdisc could perform the heavy
lifting packet classification through tc BPF outside of the qdisc root lock,
setting the ``skb->mark`` or ``skb->priority`` from there such that
``sch_htb`` only requires a flat mapping without expensive packet
classification under the root lock, thus reducing contention.

Offloaded tc BPF programs are supported for the case of ``sch_clsact`` in
combination with ``cls_bpf`` where the previously loaded BPF program was
JITed from a SmartNIC driver to be run natively on the NIC. Only ``cls_bpf``
programs operating in ``direct-action`` mode can be offloaded. ``cls_bpf``
only supports offloading a single program and cannot offload multiple
programs. Furthermore, only the ingress hook supports offloading BPF
programs.

One ``cls_bpf`` instance is able to hold multiple tc BPF programs
internally. If this is the case, then the ``TC_ACT_UNSPEC`` program return
code will continue execution with the next tc BPF program in that list.
However, this has the drawback that several programs would need to parse the
packet over and over again, resulting in degraded performance.
**BPF program return codes**

Both the tc ingress and egress hook share the same action return verdicts
that tc BPF programs can use. They are defined in the ``linux/pkt_cls.h``
system header:

.. code-block:: c

    #define TC_ACT_UNSPEC         (-1)
    #define TC_ACT_OK               0
    #define TC_ACT_SHOT             2
    #define TC_ACT_STOLEN           4
    #define TC_ACT_REDIRECT         7

There are a few more action ``TC_ACT_*`` verdicts available in the system
header file which are also used in the two hooks. However, they share the
same semantics with the ones above. Meaning, from a tc BPF perspective,
``TC_ACT_OK`` and ``TC_ACT_RECLASSIFY`` have the same semantics, as well as
the three ``TC_ACT_STOLEN``, ``TC_ACT_QUEUED`` and ``TC_ACT_TRAP`` opcodes.
Therefore, for these cases we only describe ``TC_ACT_OK`` and the
``TC_ACT_STOLEN`` opcode for the two groups.

Starting out with ``TC_ACT_UNSPEC``: it has the meaning of "unspecified
action" and is used in three cases, i) when an offloaded tc BPF program is
attached and the tc ingress hook is run where the ``cls_bpf`` representation
for the offloaded program will return ``TC_ACT_UNSPEC``, ii) in order to
continue with the next tc BPF program in ``cls_bpf`` for the multi-program
case. The latter also works in combination with offloaded tc BPF programs
from point i) where the ``TC_ACT_UNSPEC`` from there continues with a next
tc BPF program solely running in the non-offloaded case. Last but not least,
iii) ``TC_ACT_UNSPEC`` is also used for the single program case to simply
tell the kernel to continue with the ``skb`` without additional
side-effects. ``TC_ACT_UNSPEC`` is very similar to the ``TC_ACT_OK`` action
code in the sense that both pass the ``skb`` onwards, either to upper layers
of the stack on ingress or down to the networking device driver for
transmission on egress, respectively.
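The grouping of verdicts described above can be summarized in a small,
self-contained userspace C sketch. This is deliberately not a BPF program:
the constant values mirror ``linux/pkt_cls.h`` (redefined locally so no
kernel headers are needed), and the ``verdict_semantics()`` helper is purely
illustrative, not a kernel API:

.. code-block:: c

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Verdict values as defined in linux/pkt_cls.h, redefined here so
     * the example is self-contained. */
    #define TC_ACT_UNSPEC     (-1)
    #define TC_ACT_OK           0
    #define TC_ACT_RECLASSIFY   1
    #define TC_ACT_SHOT         2
    #define TC_ACT_STOLEN       4
    #define TC_ACT_QUEUED       5
    #define TC_ACT_REDIRECT     7
    #define TC_ACT_TRAP         8

    /* Illustrative mapping of a verdict to its effective semantics. */
    static const char *verdict_semantics(int verdict)
    {
        switch (verdict) {
        case TC_ACT_UNSPEC:         /* continue with skb / next program */
        case TC_ACT_OK:             /* pass skb, also sets skb->tc_index */
        case TC_ACT_RECLASSIFY:     /* same semantics as TC_ACT_OK */
            return "pass";
        case TC_ACT_SHOT:           /* kfree_skb(), NET_XMIT_DROP */
            return "drop";
        case TC_ACT_STOLEN:         /* consume_skb(), NET_XMIT_SUCCESS */
        case TC_ACT_QUEUED:
        case TC_ACT_TRAP:
            return "consume";
        case TC_ACT_REDIRECT:       /* forward to another device's path */
            return "redirect";
        default:
            return "unknown";
        }
    }

    int main(void)
    {
        /* The "stolen" group shares the same semantics ... */
        assert(strcmp(verdict_semantics(TC_ACT_STOLEN),
                      verdict_semantics(TC_ACT_QUEUED)) == 0);
        /* ... and so do TC_ACT_OK and TC_ACT_RECLASSIFY. */
        assert(strcmp(verdict_semantics(TC_ACT_OK),
                      verdict_semantics(TC_ACT_RECLASSIFY)) == 0);
        printf("TC_ACT_SHOT -> %s\n", verdict_semantics(TC_ACT_SHOT));
        return 0;
    }

Note that "pass" collapses ``TC_ACT_UNSPEC`` and ``TC_ACT_OK`` into one
group for brevity; their one difference regarding ``skb->tc_index`` is
described below.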
The only difference to ``TC_ACT_OK`` is that ``TC_ACT_OK`` sets
``skb->tc_index`` based on the classid the tc BPF program set. The latter is
set out of the tc BPF program itself through ``skb->tc_classid`` from the
BPF context.

``TC_ACT_SHOT`` instructs the kernel to drop the packet, meaning, upper
layers of the networking stack will never see the ``skb`` on ingress and
similarly the packet will never be submitted for transmission on egress.
``TC_ACT_SHOT`` and ``TC_ACT_STOLEN`` are both similar in nature with few
differences: ``TC_ACT_SHOT`` will indicate to the kernel that the ``skb``
was released through ``kfree_skb()`` and return ``NET_XMIT_DROP`` to the
callers for immediate feedback, whereas ``TC_ACT_STOLEN`` will release the
``skb`` through ``consume_skb()`` and pretend to upper layers that the
transmission was successful through ``NET_XMIT_SUCCESS``. The perf drop
monitor which records traces of ``kfree_skb()`` will therefore also not see
any drop indications from ``TC_ACT_STOLEN`` since its semantics are such
that the ``skb`` has been "consumed" or queued but certainly not "dropped".

Last but not least, there is the ``TC_ACT_REDIRECT`` action which is
available for tc BPF programs as well. This allows the ``skb`` to be
redirected to the ingress or egress path of the same or another device,
together with the ``bpf_redirect()`` helper. Being able to inject the packet
into another device's ingress or egress direction allows for full
flexibility in packet forwarding with BPF. There are no requirements on the
target networking device other than being a networking device itself; there
is no need to run another instance of ``cls_bpf`` on the target device or
other such restrictions.

**tc BPF FAQ**

This section contains a few miscellaneous question and answer pairs related
to tc BPF programs that are asked from time to time.

* **Question:** What about ``act_bpf`` as a tc action module, is it still
  relevant?
* **Answer:** Not really.
  Although ``cls_bpf`` and ``act_bpf`` share the same functionality for tc
  BPF programs, ``cls_bpf`` is more flexible since it is a proper superset
  of ``act_bpf``. The way tc works is that tc actions need to be attached to
  tc classifiers. In order to achieve the same flexibility as ``cls_bpf``,
  ``act_bpf`` would need to be attached to the ``cls_matchall`` classifier.
  As the name says, this will match on every packet in order to pass them
  through for attached tc action processing. For ``act_bpf``, this will
  result in less efficient packet processing than using ``cls_bpf`` in
  ``direct-action`` mode directly. If ``act_bpf`` is used in a setting with
  other classifiers than ``cls_bpf`` or ``cls_matchall`` then this will
  perform even worse due to the nature of operation of tc classifiers.
  Meaning, if classifier A has a mismatch, then the packet is passed to
  classifier B, reparsing the packet, etc., thus in the typical case there
  will be linear processing where the packet would need to traverse N
  classifiers in the worst case to find a match and execute ``act_bpf`` on
  that. Therefore, ``act_bpf`` has never been largely relevant.
  Additionally, ``act_bpf`` does not provide a tc offloading interface
  either, compared to ``cls_bpf``.

..

* **Question:** Is it recommended to use ``cls_bpf`` not in
  ``direct-action`` mode?
* **Answer:** No. The answer is similar to the one above in that this is
  otherwise unable to scale for more complex processing. tc BPF can already
  do everything needed by itself in an efficient manner and thus there is no
  need for anything other than ``direct-action`` mode.

..
* **Question:** Is there any performance difference in offloaded ``cls_bpf``
  and offloaded XDP?
* **Answer:** No. Both are JITed through the same compiler in the kernel
  which handles the offloading to the SmartNIC, and the loading mechanism
  for both is very similar as well. Thus, the BPF program gets translated
  into the same target instruction set in order to be able to run on the NIC
  natively. The two tc BPF and XDP BPF program types have a differing set of
  features, so depending on the use case one might be picked over the other
  due to availability of certain helper functions in the offload case, for
  example.

**Use cases for tc BPF**

Some of the main use cases for tc BPF programs are presented in this
subsection. Also here, the list is non-exhaustive and given the
programmability and efficiency of tc BPF, it can easily be tailored and
integrated into orchestration systems in order to solve very specific use
cases. While some use cases with XDP may overlap, tc BPF and XDP BPF are
mostly complementary to each other and both can also be used at the same
time or one over the other depending on which is most suitable for a given
problem to solve.

* **Policy enforcement for containers**

  One application which tc BPF programs are suitable for is to implement
  policy enforcement, custom firewalling or similar security measures for
  containers or pods, respectively. In the conventional case, container
  isolation is implemented through network namespaces with veth networking
  devices connecting the host's initial namespace with the dedicated
  container's namespace. Since one end of the veth pair has been moved into
  the container's namespace whereas the other end remains in the initial
  namespace of the host, all network traffic from the container has to pass
  through
  the host-facing veth device, allowing for attaching tc BPF programs on the
  tc ingress and egress hook of the veth. Network traffic going into the
  container will pass through the host-facing veth's tc egress hook, whereas
  network traffic coming from the container will pass through the
  host-facing veth's tc ingress hook.

  For virtual devices like veth devices, XDP is unsuitable in this case
  since the kernel operates solely on a ``skb`` here and generic XDP has a
  few limitations where it does not operate with cloned ``skb``'s. The
  latter is heavily used by the TCP/IP stack in order to hold data segments
  for retransmission where the generic XDP hook would simply get bypassed
  instead. Moreover, generic XDP needs to linearize the entire ``skb``,
  resulting in heavily degraded performance. tc BPF on the other hand is
  more flexible as it specializes on the ``skb`` input context case and thus
  does not need to cope with the limitations from generic XDP.

..

* **Forwarding and load-balancing**

  The forwarding and load-balancing use case is quite similar to XDP,
  although slightly more targeted towards east-west container workloads
  rather than north-south traffic (though both technologies can be used in
  either case). Since XDP is only available on the ingress side, tc BPF
  programs allow for further use cases that apply in particular on egress,
  for example, container based traffic can already be NATed and
  load-balanced on the egress side through BPF out of the initial namespace
  such that this is done transparently to the container itself.
  Egress traffic is already based on the ``sk_buff`` structure due to the
  nature of the kernel's networking stack, so packet rewrites and redirects
  are suitable out of tc BPF. By utilizing the ``bpf_redirect()`` helper
  function, BPF can take over the forwarding logic to push the packet either
  into the ingress or egress path of another networking device. Thus, any
  bridge-like devices become unnecessary to use as well by utilizing tc BPF
  as forwarding fabric.

..

* **Flow sampling, monitoring**

  As in the XDP case, flow sampling and monitoring can be realized through a
  high-performance lockless per-CPU memory mapped perf ring buffer where the
  BPF program is able to push custom data, the full or truncated packet
  contents, or both up to a user space application. From the tc BPF program
  this is realized through the ``bpf_skb_event_output()`` BPF helper
  function which has the same function signature and semantics as
  ``bpf_xdp_event_output()``. Given tc BPF programs can be attached to
  ingress and egress as opposed to only ingress in the XDP BPF case, plus
  the two tc hooks are at the lowest layer in the (generic) networking
  stack, this allows for bidirectional monitoring of all network traffic
  from a particular node. This might be somewhat related to the cBPF case
  which tcpdump and Wireshark make use of, though without having to clone
  the ``skb`` and with being a lot more flexible in terms of programmability
  where, for example, BPF can already perform in-kernel aggregation rather
  than pushing everything up to user space, as well as custom annotations
  for packets pushed into the ring buffer. The latter is also heavily used
  in Cilium where packet drops can be further annotated to correlate
  container labels and reasons for why a given packet had to be dropped
  (such as due to policy violation) in order to provide a
  richer context.

..

* **Packet scheduler pre-processing**

  The ``sch_clsact``'s egress hook which is called ``sch_handle_egress()``
  runs right before taking the kernel's qdisc root lock, thus tc BPF
  programs can be utilized to perform all the heavy lifting packet
  classification and mangling before the packet is transmitted into a real
  full-blown qdisc such as ``sch_htb``. This type of interaction of
  ``sch_clsact`` with a real qdisc like ``sch_htb`` coming later in the
  transmission phase allows to reduce the lock contention on transmission
  since ``sch_clsact``'s egress hook is executed without taking locks.

..

One concrete example user of tc BPF but also XDP BPF programs is Cilium.
Cilium is open source software for transparently securing the network
connectivity between application services deployed using Linux container
management platforms like Docker and Kubernetes and operates at Layer 3/4 as
well as Layer 7. At the heart of Cilium operates BPF in order to implement
the policy enforcement as well as load balancing and monitoring.

* Slides: https://www.slideshare.net/ThomasGraf5/dockercon-2017-cilium-network-and-application-security-with-bpf-and-xdp
* Video: https://youtu.be/ilKlmTDdFgk
* Github: https://github.com/cilium/cilium

**Driver support**

Since tc BPF programs are triggered from the kernel's networking stack and
not directly out of the driver, they do not require any extra driver
modification and therefore can run on any networking device. The only
exception listed below is for offloading tc BPF programs to the NIC.

**Drivers supporting offloaded tc BPF**

* **Netronome**

  * nfp [2]_

.. note::

    Examples for writing and loading tc BPF programs are included in the
    `bpf_dev` section under the respective tools.
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _dev_guide:

Development
-----------

We're happy you're interested in contributing to the Cilium project. This
section of the Cilium documentation will help you make sure you have an
environment capable of testing changes to the Cilium source code, and that
you understand the workflow of getting these changes reviewed and merged
upstream.

.. toctree::
   :maxdepth: 2

   contributing_guide
   reviewers_committers/index
   dev_setup
   images
   codeoverview
   datapath_config
   bpf_tests
   hive
   statedb
   debugging
   hubble
   introducing_new_crds
   bgp_cplane
   renovate

The best way to get help if you get stuck is to ask a question on `Cilium
Slack`_. With Cilium contributors across the globe, there is almost always
someone available to help.
.. _bpf_lvh_tests:

####################################
Run eBPF Tests with Little VM Helper
####################################

Prerequisites
-------------

- Install ``qemu-utils``:

  .. code-block:: shell-session

     $ sudo apt-get install qemu-utils

Build Little VM Helper CLI
--------------------------

- Checkout the LVH repo:

  .. code-block:: shell-session

     $ gh repo clone cilium/little-vm-helper

- Build the CLI:

  .. code-block:: shell-session

     $ make little-vm-helper

VM image selection and preparation
----------------------------------

- You can find all available image types `here `_. In this tutorial we use
  the ``complexity-test`` image.
- Pull the image:

  .. code-block:: shell-session

     $ ./lvh images pull quay.io/lvh-images/complexity-test:bpf-net-main --dir /var/tmp/

- Resize the image (optional):

  .. code-block:: shell-session

     $ qemu-img resize /var/tmp/images/complexity-test_bpf-net.qcow2 +16G

VM preparation
--------------

- Run the VM:

  .. code-block:: shell-session

     $ ./lvh run --image /var/tmp/images/complexity-test_bpf-net.qcow2 \
         --host-mount \
         --cpu-kind=host \
         --cpu=2 \
         --mem=8G \
         -p 2222:22 \
         --console-log-file=/tmp/lvh-console.log

- SSH to the VM:

  .. code-block:: shell-session

     $ ssh -p 2222 root@localhost
     $ resize2fs /dev/vda
     $ git config --global --add safe.directory /host
     $ apt update && apt install -y -o Dpkg::Options::="--force-confold" xxd docker-buildx-plugin

Run tests
---------

- All tests:

  .. code-block:: shell-session

     $ cd /host
     $ make run_bpf_tests

- Specific test:

  .. code-block:: shell-session

     $ cd /host
     $ make run_bpf_tests BPF_TEST="xdp_nodeport_lb4_nat_lb"

- Verbose mode:

  .. code-block:: shell-session

     $ cd /host
     $ make run_bpf_tests BPF_TEST_VERBOSE=1

- Dump context:

  .. code-block:: shell-session

     $ cd /host
     $ make run_bpf_tests BPF_TEST_DUMP_CTX=1 BPF_TEST_VERBOSE=1
.. _howto_contribute:

How To Contribute
=================

This document shows how to contribute as a community contributor.
:ref:`Guidance for reviewers and committers ` is also available.

Cilium Feature Proposals
~~~~~~~~~~~~~~~~~~~~~~~~

Before you start working on a significant code change, it's a good idea to
make sure that your approach is likely to be accepted. The best way to do
this is to create a `Cilium issue of type "Feature Request" on GitHub `_
where you describe your plans. For longer proposals, you might like to
include a link to an external doc (e.g. a Google doc) where it's easier for
reviewers to make comments and suggestions in-line. The GitHub feature
request template includes a link to the `Cilium Feature Proposal template `_
which you are welcome to use to help structure your proposal. Please make a
copy of that template, fill it in with your ideas, and ensure it's publicly
visible, before adding the link into the GitHub issue.

After the initial discussion, CFPs should be added to the `design-cfps repo
`_ so the design and discussion can be stored for future reference.

.. _issue_lifecycle:

Issue Lifecycle
~~~~~~~~~~~~~~~

Cilium uses automated tools to manage issue lifecycle. Understanding how
these work can help you keep important issues active and prevent them from
being closed inadvertently.

Stale Issue Management
^^^^^^^^^^^^^^^^^^^^^^

Issues are automatically managed by a stale bot with the following behavior:

- **Issues become stale after 60 days** of inactivity (no comments, commits,
  or other activity)
- **Stale issues are closed after 14 additional days** (74 days total from
  last activity)
- **Pull requests become stale after 30 days** and are closed after 14
  additional days

**Preventing Issues from Becoming Stale**

There are several ways to prevent an issue from being automatically marked
as stale and closed:

1. **Assign the issue**: Issues with any assignees are automatically exempt
   from stale marking
2. **Add exempt labels**: Issues with any of these labels are exempt:

   - ``pinned`` - for issues that should never be closed automatically
   - ``security`` - for security-related issues
   - ``good-first-issue`` - for newcomer-friendly issues
   - ``help-wanted`` - for issues seeking community contributions

3. **Regular activity**: Any comment, commit reference, or other activity
   will reset the stale timer
4. **Convert to discussions**: For open-ended topics, consider moving to
   `GitHub Discussions `_

**If Your Issue Was Closed**

If an issue was closed automatically but you believe it's still relevant:

1. Add a comment explaining why the issue should remain open
2. Reopen the issue (if you have permissions) or ask a maintainer to reopen
   it
3. Consider adding appropriate labels (e.g., ``pinned``) or asking for
   someone to be assigned to prevent future auto-closure

This system helps keep the issue tracker focused on active work while
preserving important long-term issues and community contributions.

.. _provision_environment:

Clone and Provision Environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Make sure you have a `GitHub account `_.
#. Fork the `Cilium repository `_ to your GitHub user or organization.
#. Turn off GitHub actions for your fork as described in the `GitHub Docs `_.
   This is recommended to avoid unnecessary CI notification failures on the
   fork.
#. Clone your ``${YOUR_GITHUB_USERNAME_OR_ORG}/cilium`` fork and set up the
   base repository as ``upstream`` remote:

   .. code-block:: shell-session

      git clone https://github.com/${YOUR_GITHUB_USERNAME_OR_ORG}/cilium.git
      cd cilium
      git remote add upstream https://github.com/cilium/cilium.git

#. Set up your :ref:`dev_env`.
#. Check the GitHub issues for `good tasks to get started `_.
#. Follow the steps in :ref:`making_changes` to start contributing :)

.. _submit_pr:

Submitting a pull request
~~~~~~~~~~~~~~~~~~~~~~~~~
-0.07580626755952835,
0.001793297124095261,
-0.04184701293706894,
0.026448136195540428,
0.06474370509386063,
-0.002014181576669216,
-0.08002741634845734,
0.08413781970739365,
-0.044303059577941895,
0.03519479185342789,
0.010277758352458477,
-0.06307331472635269,
0.015888698399066925,
-0.04... | 0.118654 |
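With the ``upstream`` remote configured as above, a common follow-up (not
mandated by this guide, shown here as a sketch) is to refresh your fork's
``main`` branch from upstream before starting a new topic branch:

```shell
# Sketch: update local main from upstream and mirror it to your fork.
# Assumes the origin (your fork) and upstream remotes set up above.
git fetch upstream
git checkout main
git rebase upstream/main
# --force-with-lease safely updates the fork after a rebase.
git push --force-with-lease origin main
```

Topic branches created from a freshly rebased ``main`` tend to produce fewer
merge conflicts at review time.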
.. _submit_pr:

Submitting a pull request
~~~~~~~~~~~~~~~~~~~~~~~~~

Contributions must be submitted in the form of pull requests against the
upstream GitHub repository at https://github.com/cilium/cilium.

#. Fork the Cilium repository.

#. Push your changes to the topic branch in your fork of the repository.

#. Submit a pull request on https://github.com/cilium/cilium.

Before hitting the submit button, please make sure that the following
requirements have been met:

#. Take some time to describe your change in the PR description! A
   well-written description about the motivation of the change and the
   choices you made during the implementation can go a long way to help the
   reviewers understand why you've made the change and why it's a good way to
   solve your problem. If it helps you to explain something, use pictures or
   `Mermaid diagrams`_.

#. Each commit must compile and be functional on its own to allow for
   bisecting of commits in the event of a bug affecting the tree.

#. All code is covered by unit and/or runtime tests where feasible.

#. All changes have been tested and checked for regressions by running the
   existing testsuite against your changes. See the :ref:`testsuite-legacy`
   section for additional details.

#. All commits contain a well written commit description including a title,
   description and a ``Fixes: #XXX`` line if the commit addresses a
   particular GitHub issue. Note that the GitHub issue will be automatically
   closed when the commit is merged.

   ::

       apipanic: Log stack at debug level

       Previously, it was difficult to debug issues when the API panicked
       because only a single line like the following was printed:

       level=warning msg="Cilium API handler panicked" client=@ method=GET
       panic_message="write unix /var/run/cilium/cilium.sock->@: write:
       broken pipe"

       This patch logs the stack at this point at debug level so that it
       can at least be determined in developer environments.

       Fixes: #4191

       Signed-off-by: Joe Stringer

   .. note::

      Make sure to include a blank line in between commit title and commit
      description.

#. If any of the commits fixes a particular commit already in the tree, that
   commit is referenced in the commit message of the bugfix. This ensures
   that whoever performs a backport will pull in all required fixes:

   ::

       daemon: use endpoint RLock in HandleEndpoint

       Fixes: a804c7c7dd9a ("daemon: wait for endpoint to be in ready state
       if specified via EndpointChangeRequest")

       Signed-off-by: André Martins

   .. note::

      The proper format for the ``Fixes:`` tag referring to commits is to
      use the first 12 characters of the git SHA followed by the full commit
      title as seen above without breaking the line.

#. If you change CLI arguments of any binaries in this repo, the CI will
   reject your PR if you don't also update the command reference docs. To do
   so, make sure to run the ``postcheck`` make target.

   .. code-block:: shell-session

       $ make postcheck
       $ git add Documentation/cmdref
       $ git commit

#. All commits are signed off. See the section :ref:`dev_coo`.

   .. note::

      Passing the ``-s`` option to ``git commit`` will add the
      ``Signed-off-by:`` line to your commit message automatically.

#. Document any user-facing or breaking changes in
   ``Documentation/operations/upgrade.rst``.
#. (optional) Pick the appropriate milestone for which this PR is being
   targeted, e.g. ``1.6``, ``1.7``. This is in particular important in the
   time frame between the feature freeze and final release date.

#. If you have permissions to do so, pick the right release-note label.
   These labels will be used to generate the release notes, which will
   primarily be read by users.

   +------------------------+--------------------------------------------------------------+
   | Labels                 | When to set                                                  |
   +========================+==============================================================+
   | ``release-note/bug``   | This is a non-trivial bugfix and is a user-facing bug        |
   +------------------------+--------------------------------------------------------------+
   | ``release-note/major`` | This is a major feature addition, e.g. Add MongoDB support   |
   +------------------------+--------------------------------------------------------------+
   | ``release-note/minor`` | This is a minor feature addition, e.g. Add support for a     |
   |                        | Kubernetes version                                           |
   +------------------------+--------------------------------------------------------------+
   | ``release-note/misc``  | This is not a user-facing change, e.g. Refactor endpoint     |
   |                        | package, a bug fix of a non-released feature                 |
   +------------------------+--------------------------------------------------------------+
   | ``release-note/ci``    | This is a CI feature or bug fix.                             |
   +------------------------+--------------------------------------------------------------+

#. Verify the release note text. If not explicitly changed, the title of the
   PR will be used for the release notes. If you want to change this, you can
   add a special section to the description of the PR. These release notes
   are primarily going to be read by users, so it is important that release
   notes for bugs and for major and minor features do not contain internal
   details of Cilium functionality, which are sometimes irrelevant for users.

   Example of a bad release note:

   ::

       ```release-note
       Fix concurrent access in k8s watchers structures
       ```

   Example of a good release note:

   ::

       ```release-note
       Fix panic when Cilium received an invalid Cilium Network Policy from Kubernetes
       ```

   .. note::

      If multiple lines are provided, then the first line serves as the high
      level bullet point item and any additional line will be added as a sub
      item to the first line.

#. If you have permissions, pick the right labels for your PR:

   +--------------------------+--------------------------------------------------------------+
   | Labels                   | When to set                                                  |
   +==========================+==============================================================+
   | ``kind/bug``             | This is a bugfix worth mentioning in the release notes       |
   +--------------------------+--------------------------------------------------------------+
   | ``kind/enhancement``     | This enhances existing functionality in Cilium               |
   +--------------------------+--------------------------------------------------------------+
   | ``kind/feature``         | This is a feature                                            |
   +--------------------------+--------------------------------------------------------------+
   | ``release-blocker/X.Y``  | This PR should block the next X.Y release                    |
   +--------------------------+--------------------------------------------------------------+
   | ``needs-backport/X.Y``   | PR needs to be backported to these stable releases           |
   +--------------------------+--------------------------------------------------------------+
   | ``backport/X.Y``         | This is a backport PR, may only be set as part of            |
   |                          | :ref:`backport_process`                                      |
   +--------------------------+--------------------------------------------------------------+
   | ``upgrade-impact``       | The code changes have a potential upgrade impact             |
   +--------------------------+--------------------------------------------------------------+
   | ``area/*`` (Optional)    | Code area this PR covers                                     |
   +--------------------------+--------------------------------------------------------------+

   .. note::

      If you do not have permissions to set labels on your pull request,
      leave a comment and a core team member will add the labels for you.
      Most reviewers will do this automatically without prior request.

#. Open a draft pull request. GitHub provides the ability to create a Pull
   Request in "draft" mode. On the "New Pull Request" page, below the pull
   request description box there is a button for creating the pull request.
   Click the arrow and choose "Create draft pull request". If your PR is
   still a work in progress, please select this mode. You will still be able
   to run the CI against it.

   .. image:: https://i1.wp.com/user-images.githubusercontent.com/3477155/52671177-5d0e0100-2ee8-11e9-8645-bdd923b7d93b.gif
       :align: center

#. To notify reviewers that the PR is ready for review, click
   **Ready for review** at the bottom of the page.

#. Engage in any discussions raised by reviewers and address any changes
   requested. Set the PR to draft mode while you address changes, then click
   **Ready for review** to re-request review.

   .. image:: /images/cilium_request_review.png
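To check the requirement above that every commit compiles and is functional on
its own (so the tree stays bisectable), one option is to replay your branch
and run the build at each commit. This is a sketch; ``make build`` stands in
for whatever build or test command applies to your change:

```shell
# Run the given command after applying each commit of the branch;
# the rebase stops at the first commit where the command fails,
# letting you fix that commit in place.
git rebase --exec "make build" upstream/main
```

``git rebase --exec`` uses the interactive rebase machinery but runs
non-interactively, so it also works in scripts.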
Getting a pull request merged
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Once you have submitted the pull request as described in the section
   :ref:`submit_pr`, one of the reviewers will start a CI run by replying
   with a comment ``/test`` as described in :ref:`trigger_phrases`. If you
   are an `organization member`_, you can trigger the CI run yourself. CI
   consists of:

   #. Static code analysis by GitHub Actions and Travis CI. Golang linter
      suggestions are added in-line on PRs. For other failed jobs, please
      refer to the build log for the required action (e.g. Please run
      ``go mod tidy && go mod vendor`` and submit your changes, etc.).

   #. :ref:`ci_gha`: Will run a series of tests:

      #. Unit tests
      #. Single node runtime tests
      #. Multi node Kubernetes tests

   If a CI test fails which seems unrelated to your PR, it may be a flaky
   test. Follow the process described in :ref:`ci_failure_triage`.

#. As part of the submission, GitHub will have requested a review from the
   respective code owners according to the ``CODEOWNERS`` file in the
   repository.

#. Address any feedback received from the reviewers:

   #. You can push individual commits to address feedback and then rebase
      your branch at the end before merging.

   #. Once you have addressed the feedback, re-request a review from the
      reviewers that provided feedback by clicking on the button next to
      their name in the list of reviewers. This ensures that the reviewers
      are notified again that your PR is ready for subsequent review.

#. Owners of the repository will automatically adjust the labels on the pull
   request to track its state and progress towards merging.

#. Once the PR has been reviewed and the CI tests have passed, the PR will be
   merged by one of the repository owners. In case this does not happen, ping
   us on `Cilium Slack`_ in the ``#development`` channel.

.. _organization member: https://github.com/cilium/community/blob/main/CONTRIBUTOR-LADDER.md#organization-member

Handling large pull requests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the PR is considerably large (e.g. with more than 200 lines changed and/or
more than 6 commits), consider whether there is a good way to split the PR
into smaller PRs that can be merged more incrementally.

Reviewers are often more hesitant to review large PRs due to the level of
complexity involved in understanding the changes and the amount of time
required to provide constructive review comments. By making smaller logical
PRs, you make it easier for the reviewer to provide comments and to engage in
dialogue on the PR, and there should also be fewer overall pieces of feedback
that you need to address as a contributor. Tighter feedback cycles like this
make it easier to get your contributions into the tree, which also helps with
reducing conflicts with other contributions. Good candidates for smaller PRs
may be individual bugfixes, or self-contained refactoring that adjusts the
code in order to make it easier to build subsequent functionality on top.

While handling review on larger PRs, consider creating a new commit to
address the feedback from each review round that you receive on your PR. This
will make the review process smoother, as GitHub limitations prevent
reviewers from seeing only the new changes added since the last time they
reviewed a PR. Once all reviews are addressed, those commits should be
squashed against the commit that introduced those changes.
This can be accomplished by using ``git rebase -i upstream/main`` and, in the
editor window, moving these new commits below the commit that introduced the
changes and replacing the word ``pick`` with ``fixup``. In the following
example, commit ``d2cb02265`` will be combined into ``9c62e62d8`` and commit
``146829b59`` will be combined into ``9400fed20``.

::

    pick 9c62e62d8 docs: updating contribution guide process
    fixup d2cb02265 joe + paul + chris changes
    pick 9400fed20 docs: fixing typo
    fixup 146829b59 Quentin and Maciej reviews

Once this is done, you can force-push your branch and request for your PR to
be merged.

Reviewers should apply the documented :ref:`review_process` when providing
feedback to a PR.

.. _dev_coo:

Developer's Certificate of Origin
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To improve tracking of who did what, we've introduced a "sign-off" procedure.

The sign-off is a simple line at the end of the explanation for the commit,
which certifies that you wrote it or otherwise have the right to pass it on
as open-source work. The rules are pretty simple: if you can certify the
below:

::

    Developer Certificate of Origin
    Version 1.1

    Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
    1 Letterman Drive
    Suite D4700
    San Francisco, CA, 94129

    Everyone is permitted to copy and distribute verbatim copies of this
    license document, but changing it is not allowed.

    Developer's Certificate of Origin 1.1

    By making a contribution to this project, I certify that:

    (a) The contribution was created in whole or in part by me and I
        have the right to submit it under the open source license
        indicated in the file; or

    (b) The contribution is based upon previous work that, to the best
        of my knowledge, is covered under an appropriate open source
        license and I have the right under that license to submit that
        work with modifications, whether created in whole or in part
        by me, under the same open source license (unless I am
        permitted to submit under a different license), as indicated
        in the file; or

    (c) The contribution was provided directly to me by some other
        person who certified (a), (b) or (c) and I have not modified
        it.

    (d) I understand and agree that this project and the contribution
        are public and that a record of the contribution (including all
        personal information I submit with it, including my sign-off) is
        maintained indefinitely and may be redistributed consistent with
        this project or the open source license(s) involved.

then you just add a line saying:

::

    Signed-off-by: Random J Developer

If you need to add your sign-off to a commit you have already made, please
see `this article`_.

Cilium follows the real names policy described in the CNCF
`DCO Guidelines v1.0`_:

::

    The DCO requires the use of a real name that can be used to identify
    someone in case there is an issue about a contribution they made. A real
    name does not require a legal name, nor a birth name, nor any name that
    appears on an official ID (e.g. a passport). Your real name is the name
    you convey to people in the community for them to use to identify you as
    you. The key concern is that your identification is sufficient enough to
    contact you if an issue were to arise in the future about your
    contribution.

    Your real name should not be an anonymous id or false name that
    misrepresents who you are.
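In practice, git can add the required ``Signed-off-by:`` trailer for you. A
short sketch of the common cases (the ``HEAD~3`` range and commit message are
illustrative):

```shell
# Sign off a new commit as it is created:
git commit -s -m "daemon: fix example bug"

# Add a sign-off to the most recent commit after the fact:
git commit --amend --no-edit --signoff

# Add sign-offs to the last three commits (illustrative range):
git rebase --signoff HEAD~3
```

The trailer uses the name and email from your ``user.name`` and
``user.email`` git configuration, so make sure those are set to your real
name as described above.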
.. _contributor_ladder:

Contributor Ladder
~~~~~~~~~~~~~~~~~~

To help contributors grow in both privileges and responsibilities for the
project, Cilium also has a `contributor ladder`_. The ladder lays out how
contributors can go from community contributor to committer and what is
expected at each level. Community members generally start at the first levels
of the "ladder" and advance up it as their involvement in the project grows.
Our contributors are happy to help you advance along the contributor ladder.
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

.. _dpconfig:

Configuring the Datapath
========================

Introduction
~~~~~~~~~~~~

In order for the Cilium datapath to function, it needs access to
configuration data such as feature flags, addresses, timeouts, security IDs
and all sorts of tunables and user configuration. These values are provided
by the agent at the time of loading the BPF program. This page outlines the
configuration mechanism, some recommendations, how to migrate legacy
configuration, as well as practical examples.

Getting Started
~~~~~~~~~~~~~~~

First, let's look at a practical example to illustrate the configuration API
and see the configuration process in action. This will help you understand
how to declare, assign, and use configuration variables effectively in the
Cilium datapath.

Declaring a C Variable
^^^^^^^^^^^^^^^^^^^^^^

To start off, let's take a look at a straightforward example of a
configuration value used in the datapath. This is an example from
``bpf/include/bpf/config/lxc.h``, included by ``bpf_lxc.c``:

.. code-block:: c

    DECLARE_CONFIG(__u16, endpoint_id, "The endpoint's security ID")

This invokes the ``DECLARE_CONFIG`` macro, which declares a 16-bit unsigned
integer config value named ``endpoint_id``, followed by a description. We'll
see why the description is useful later on.

With our variable declared, ``make`` the ``bpf/`` directory to rebuild the
datapath and run ``dpgen`` to generate Go code:

.. code-block:: bash

    make -C bpf -j$(nproc)

This will emit our variable to one of the Go config scaffoldings in the
``pkg/datapath/config`` Go package.

Wiring up Go Values
^^^^^^^^^^^^^^^^^^^

One of the files in package ``config`` will now contain a new struct field
that can be populated at BPF load time.

.. code-block:: go

    type BPFLXC struct {
        ...
        // The endpoint's security ID.
        EndpointID uint16 `config:"endpoint_id"`
        ...
    }

As shown in the preceding snippet, the new struct field carries the helpful
comment we provided in the C code and refers to the ``endpoint_id`` variable
we declared.

.. note::

   At the time of writing, populating Go configuration scaffolding still
   mostly happens in ``pkg/datapath/loader`` and is scattered between a few
   places. The goal is to create StateDB tables for each configuration
   object. These can be managed from Hive Cells and automatically trigger a
   reload of the necessary BPF programs when any of the values change. This
   document will be updated along with these changes.

Now, we need to wire up the field with an actual value. Depending on which
object you're adding configuration to and on whether the value is "node
configuration" (more below) or object-specific, you may need to look in
different places. For example, when adding a value to ``bpf_lxc.c`` like in
this example, the value is typically set in ``endpointRewrites()``:

.. code-block:: go

    func endpointRewrites(...) ... {
        ...
        cfg.InterfaceIfindex = uint32(ep.GetIfIndex())
        ...
    }

.. warning::

   This plumbing needs to be done for every object that needs access to the
   variable! For example, if you declare a variable in a header common to
   both ``bpf_lxc.c`` and ``bpf_host.c``, you'll need to make sure the agent
   supplies the value to both structs. If this document no longer matches the
   codebase, grep around for uses of the various structs and their fields,
   and extend the existing code. Over time, Hive Cells will be able to write
   to these structs using StateDB tables.

Reading the Variable in C
^^^^^^^^^^^^^^^^^^^^^^^^^
We've declared our global config variable. We've generated Go code and wired
up a value from the agent. Now, we need to put the variable to use! In
datapath BPF code, we can refer to it using the ``CONFIG()`` macro. This
macro resolves to a special variable name representing our configuration
value, which could change in the future. The macro is there to avoid
cross-cutting code changes if we ever need to make changes here.

.. note::

   The variable is not a compile-time constant, so it cannot be used to
   control things like BPF map sizes or to initialize other global ``const``
   variables at compile time.

.. code-block:: c

    CONFIG(endpoint_id)

Use the macro like you would typically use a variable:

.. code-block:: c

    __u16 endpoint_id = CONFIG(endpoint_id);

or in a branch:

.. code-block:: c

    if (CONFIG(endpoint_id) != 0) { ... }

Node Configuration
~~~~~~~~~~~~~~~~~~

.. warning::

   Historically, most of the agent's configuration was presented to the
   datapath as "node configuration" (in ``node_config.h``), but this pattern
   is discouraged going forward and may go away at some point in the future.
   More on this in :ref:`guidelines`.

To make migration from ``#define``-style configuration more straightforward,
we've kept the concept of node configuration, albeit with runtime-provided
values instead of ``#ifdef``. Node configuration can be declared in
``bpf/include/bpf/config/node.h``:

.. code-block:: c

    NODE_CONFIG(__u64, foo, "The foo value")

This will show up in the Go scaffolding as:

.. code-block:: go

    type Node struct {
        // The foo value.
        Foo uint64 `config:"foo"`
    }

Populate it in the agent through ``pkg/datapath/config.NodeConfig()``:

.. code-block:: go

    func NodeConfig(lnc *datapath.LocalNodeConfiguration) Node {
        ...
        node.Foo = 42
        ...
    }

It behaves identically with regards to ``CONFIG()``.

.. _guidelines:

Guidelines and Recommendations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A few guiding principles:

- Avoid dead code in the form of variables that are never set by the agent.
  For example, if only ``bpf_lxc.c`` uses your variable, don't put it in a
  shared header across multiple BPF objects. To share types with other
  objects, put those in a separate header instead.
- Declare variables close to where they're used, e.g. in header files
  implementing a feature.
- Avoid conditional ``#include`` statements.

Use the following procedure to determine where to declare your configuration:

1. For new features, use ``DECLARE_CONFIG()`` in the header implementing your
   feature. Only import the header in the BPF object(s) where the feature is
   utilized.
2. For new config in existing features, ``DECLARE_CONFIG()`` as close as
   possible to the code that consumes it.
3. For porting over node configuration from ``node_config.h``
   (``WriteNodeConfig``), try narrowing down where the config is used and
   see if it can use ``DECLARE_CONFIG()`` in a header imported by a small
   number of BPF objects instead. Refactoring is worth it here, since it
   avoids dead code in objects that don't use the node config.
4. If none of the above cases apply, use ``NODE_CONFIG()``.

.. _defaults:

Defaults
~~~~~~~~

To assign a default value other than 0 to a configuration variable directly
from C, the ``ASSIGN_CONFIG()`` macro can be used after declaring the
variable. This can be useful for setting sane defaults that will
automatically apply even when the agent doesn't supply a value. For example,
the agent uses this for the device MTU:

.. code-block:: c

    DECLARE_CONFIG(__u16, device_mtu, "MTU of the device the bpf program is attached to")
    ASSIGN_CONFIG(__u16, device_mtu, MTU)

.. warning::

   ``ASSIGN_CONFIG()`` can only be used once per variable per compilation
   unit. This makes it so the variable cannot be overridden from tests
   without a workaround, so use sparingly. See :ref:`testing` for more
   details.
be used once per variable per compilation unit. This makes it so the variable cannot be overridden from tests without a workaround, so use sparingly. See :ref:`testing` for more details. .. \_testing: Testing ~~~~~~~ When writing tests, you may need to override configuration values to test different code paths. This can be done by using the ``ASSIGN\_CONFIG()`` macro in a test file as described in :ref:`defaults` after importing the main object under test, e.g. ``bpf\_lxc.c``. See the test suite itself for the most up-to-date examples. Note that there are some restrictions, primarily that the literal passed to ``ASSSIGN\_CONFIG()`` must be compile-time constant, and can't e.g. be the name of another variable. Occasionally, you may need to override a config that already has a default value set using ``ASSIGN\_CONFIG()``, in which case a workaround is needed: .. code-block:: c #ifndef OVERRIDABLE\_CONFIG DECLARE\_CONFIG(\_\_u8, overridable, "Config with a default and an override from tests") ASSIGN\_CONFIG(\_\_u8, overridable, 42) #define OVERRIDABLE\_CONFIG CONFIG(overridable) #endif Then, from the test file, set ``#define OVERRIDABLE\_CONFIG`` before including the object under test to make the override take precedence. .. code-block:: c #define OVERRIDABLE\_CONFIG 1337 #include "bpf\_lxc.c" This is somewhat surprising, so use sparingly and consider refactoring the code to avoid the need for this. Known Limitations ~~~~~~~~~~~~~~~~~ - Runtime-based configuration cannot currently be set during verifier tests. This means that if you have a branch behind a (boolean) config, it will currently not be evaluated by the verifier, and there may be latent verifier errors that pop up when enabled through agent configuration. However, with the new configuration mechanism, we can now fully automate testing all permutations of config flags, without having to maintain them manually going forward. Hold off on migrating ``ENABLE\_`` defines until this is resolved. 
- Generating Go scaffolding for struct variables is not yet supported. Background ~~~~~~~~~~ Historically, configuration was fed into the datapath using ``#define`` statements generated at runtime, with sections of optional code cordoned off by ``#ifdef`` and similar mechanisms. This has served us well over the years, but with the increasing complexity of the agent and the datapath, it has become clear that we need a more structured and maintainable way to configure the datapath. Linux kernels 5.2 and later support read-only maps to store config data that cannot be changed after the kernel verified the program. If these values are used in branches, the verifier can then perform dead code elimination, eliminating branches it deems unreachable. This minimizes the amount of work the verifier needs to do in subsequent verification steps and ensures the BPF program image is as lean as possible. This also means we no longer need to conditionally compile out parts of code we don't need, so we can adopt an approach where the datapath's BPF code is built and embedded into the agent at compile time. This, in turn, means we no longer need to ship LLVM with the agent (maybe you've heard of the term ``clang-free``), reducing the size of the agent container image and significantly cutting down on agent startup time and CPU usage. Endpoints will also regenerate faster during configuration changes. | https://github.com/cilium/cilium/blob/main//Documentation/contributing/development/datapath_config.rst | main | cilium | [
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _bgp_cplane_contributing:

=================
BGP Control Plane
=================

This section is specific to :ref:`bgp_control_plane` contributions.

Development Environment
=======================

BGP Control Plane requires a BGP peer for testing. This section describes a `ContainerLab`_ and `Kind`_-based development environment. The following diagram shows the topology:

.. _ContainerLab: https://containerlab.dev/
.. _Kind: https://kind.sigs.k8s.io/

.. image:: _static/bgp-lab.drawio.png
    :align: center

The following describes the role of each node:

* ``router0`` is an `FRRouting (FRR)`_ router. It is pre-configured with minimal peering settings with ``server0`` and ``server1``.
* ``server0`` and ``server1`` are ``nicolaka/netshoot`` containers that each share a network namespace with their own Kind node.
* ``server2`` is a non-Cilium ``nicolaka/netshoot`` node useful for testing traffic connectivity from outside of the k8s cluster.

.. _FRRouting (FRR): https://frrouting.org/

Prerequisites
-------------

* ContainerLab v0.45.1 or later
* Kind v0.20.0 or later
* Your container runtime networks must not use ``10.0.0.0/8`` and ``fd00::/16``

Deploy Lab
----------

The following example deploys a lab with the latest stable version of Cilium:

.. code-block:: shell-session

    $ make kind-bgp-service

.. note::

    The prior example sets up an environment showcasing k8s service advertisements over BGP. Please refer to the container lab directory in the Cilium repository under ``contrib/containerlab`` for more labs.

If you want to install a locally built version of Cilium instead of the stable version, pass ``local`` as the ``VERSION`` environment variable value:
.. code-block:: shell-session

    $ make VERSION=local kind-bgp-service

Peering with Router
-------------------

Peer Cilium nodes with FRR by applying BGP configuration resources:

.. code-block:: shell-session

    $ make kind-bgp-service-apply-bgp

To deploy some example k8s services, run the following commands:

.. code-block:: shell-session

    $ make kind-bgp-service-apply-service

Validating Peering Status
-------------------------

You can validate the peering status with the following command. Confirm that the session state is established and the Received and Advertised counters are non-zero.

.. code-block:: shell-session

    $ cilium bgp peers
    Node                                   Local AS   Peer AS   Peer Address   Session State   Uptime   Family         Received   Advertised
    bgp-cplane-dev-service-control-plane   65001      65000     fd00:10::1     established     51s      ipv4/unicast   6          4
                                                                                                        ipv6/unicast   4          3
    bgp-cplane-dev-service-worker          65001      65000     fd00:10::1     established     51s      ipv4/unicast   6          6
                                                                                                        ipv6/unicast   4          4

Destroy Lab
-----------

.. code-block:: shell-session

    $ make kind-bgp-service-down
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _hubble_contributing:

Hubble
======

This section is specific to Hubble contributions.

Bumping the vendored Cilium dependency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Hubble vendors Cilium using Go modules. You can bump the dependency by first running:

.. code-block:: shell-session

    go get github.com/cilium/cilium@main

However, Cilium's ``go.mod`` contains ``replace`` directives, which are ignored by ``go get`` and ``go mod``. Therefore you must also manually copy any updated ``replace`` directives from Cilium's ``go.mod`` to Hubble's ``go.mod``. Once you have done this you can tidy up, vendor the modules, and verify them:

.. code-block:: shell-session

    go mod tidy
    go mod vendor
    go mod verify

The bumped dependency should be committed as a single commit containing all the changes to ``go.mod``, ``go.sum``, and the ``vendor`` directory.
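For illustration, a copied ``replace`` directive in Hubble's ``go.mod`` could look like the following. The module paths and version here are placeholders, not actual Cilium entries; consult Cilium's current ``go.mod`` for the real directives to mirror.

```
// Mirrored from Cilium's go.mod; go get/go mod do not propagate this,
// so it must be copied by hand. Paths and version are hypothetical.
replace github.com/example/upstream => github.com/example/fork v1.2.3
```

The key point is that ``replace`` only takes effect in the main module's ``go.mod``, which is why Cilium's directives must be duplicated rather than inherited.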
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _gs_debugging:

#########
Debugging
#########

Attaching a Debugger
--------------------

Cilium comes with a set of Makefile targets for quickly deploying development builds to a local :ref:`Kind ` cluster. One of these targets is ``kind-debug-agent``, which generates a container image that wraps the Cilium agent with a `Delve (dlv) `_ invocation. This causes the agent process to listen for connections from a debugger front-end on port 2345.

To build and push a debug image to your local Kind cluster, run:

.. code-block:: shell-session

    $ make kind-debug-agent

.. note::

    The image is automatically pushed to the Kind nodes, but running Cilium Pods are not restarted. To do so, run:

    .. code-block:: shell-session

        $ kubectl delete pods -n kube-system -l app.kubernetes.io/name=cilium-agent

If your Kind cluster was set up using ``make kind``, it will automatically be configured with the following port mappings:

- ``23401``: ``kind-control-plane-1``
- ``2340*``: Subsequent ``kind-control-plane-*`` nodes, if defined
- ``23411``: ``kind-worker-1``
- ``2341*``: Subsequent ``kind-worker-*`` nodes, if defined

The Delve listener supports multiple debugging protocols, so any IDEs or debugger front-ends that understand either the `Debug Adapter Protocol `_ or Delve API v2 are supported.

~~~~~~~~~~~~~~~~~~
Visual Studio Code
~~~~~~~~~~~~~~~~~~

The Cilium repository contains a VS Code launch configuration (``.vscode/launch.json``) that includes debug targets for the Kind control plane, the first two ``kind-worker`` nodes and the :ref:`Cilium Operator `.

.. image:: _static/vscode-run-and-debug.png
    :align: center

| The preceding screenshot is taken from the 'Run And Debug' section in VS Code. The default shortcut to access this section is ``Shift+Ctrl+D``.
Select a target to attach to, start the debug session, and set a breakpoint to halt the agent or operator on a specific code statement. This only works for Go code; BPF C code cannot be debugged this way. See `the VS Code debugging guide `_ for more details.

~~~~~~
Neovim
~~~~~~

The Cilium repository contains a `.nvim directory `_ containing a DAP configuration as well as a README on how to configure ``nvim-dap``.

toFQDNs and DNS Debugging
-------------------------

The interactions of L3 toFQDNs and L7 DNS rules can be difficult to debug. Unlike many other policy rules, these are resolved at runtime with unknown data. Pods may create large numbers of IPs in the cache, or the IPs returned may not be compatible with our datapath implementation. Sometimes we also just have bugs.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Isolating the source of toFQDNs issues
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

While there is no common culprit when debugging, the DNS Proxy shares the least code with other systems and so is likely the least audited in this chain. The cascading caching scheme is also complex in its behaviour. Determining whether an issue is caused by the DNS components, the policy layer, or the datapath is often the first step when debugging toFQDNs related issues. Generally, working top-down is easiest, as the information needed to verify low-level correctness can be collected in the initial debug invocations.

REFUSED vs NXDOMAIN responses
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The proxy uses REFUSED DNS responses to indicate a denied request. Some libc implementations, notably musl, which is common in Alpine Linux images, terminate the whole DNS search in these cases. This often manifests as a connect error in applications, as the libc lookup returns no data. To work around this, denied responses can be configured to be NXDOMAIN by setting ``--tofqdns-dns-reject-response-code=nameError`` on the command line.
Monitor Events
~~~~~~~~~~~~~~

The DNS Proxy emits multiple L7 DNS monitor events: one for the request and one for the response (if allowed). Often the L7 DNS rules are paired with L3 toFQDNs rules, and events relating to those rules are also relevant.

.. Note:: Be sure to run ``cilium-dbg monitor`` on the same node as the pod being debugged!

.. code-block:: shell-session

    $ kubectl exec pod/cilium-sbp8v -n kube-system -- cilium-dbg monitor --related-to 3459
    Listening for events on 4 CPUs with 64x4096 of shared memory
    Press Ctrl-C to quit
    level=info msg="Initializing dissection cache..." subsys=monitor
    -> Request dns from 3459 ([k8s:org=alliance k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.cilium.k8s.policy.cluster=default k8s:class=xwing]) to 0 ([k8s:io.cilium.k8s.policy.serviceaccount=kube-dns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns k8s:io.cilium.k8s.policy.cluster=default]), identity 323->15194, verdict Forwarded DNS Query: cilium.io. A
    -> endpoint 3459 flow 0xe6866e21 identity 15194->323 state reply ifindex lxc84b58cbdabfe orig-ip 10.60.1.115: 10.63.240.10:53 -> 10.60.0.182:42132 udp
    -> Response dns to 3459 ([k8s:org=alliance k8s:io.kubernetes.pod.namespace=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.cilium.k8s.policy.cluster=default k8s:class=xwing]) from 0 ([k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=kube-dns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns]), identity 323->15194, verdict Forwarded DNS Query: cilium.io. A TTL: 486 Answer: '104.198.14.52'
    -> endpoint 3459 flow 0xe6866e21 identity 15194->323 state reply ifindex lxc84b58cbdabfe orig-ip 10.60.1.115: 10.63.240.10:53 -> 10.60.0.182:42132 udp
    Policy verdict log: flow 0x614e9723 local EP ID 3459, remote ID 16777217, proto 6, egress, action allow, match L3-Only, 10.60.0.182:41510 -> 104.198.14.52:80 tcp SYN
    -> stack flow 0x614e9723 identity 323->16777217 state new ifindex 0 orig-ip 0.0.0.0: 10.60.0.182:41510 -> 104.198.14.52:80 tcp SYN
    -> 0: 10.60.0.182:41510 -> 104.198.14.52:80 tcp SYN
    -> endpoint 3459 flow 0x7388921 identity 16777217->323 state reply ifindex lxc84b58cbdabfe orig-ip 104.198.14.52: 104.198.14.52:80 -> 10.60.0.182:41510 tcp SYN, ACK
    -> stack flow 0x614e9723 identity 323->16777217 state established ifindex 0 orig-ip 0.0.0.0: 10.60.0.182:41510 -> 104.198.14.52:80 tcp ACK
    -> 0: 10.60.0.182:41510 -> 104.198.14.52:80 tcp ACK
    -> stack flow 0x614e9723 identity 323->16777217 state established ifindex 0 orig-ip 0.0.0.0: 10.60.0.182:41510 -> 104.198.14.52:80 tcp ACK
    -> 0: 10.60.0.182:41510 -> 104.198.14.52:80 tcp ACK
    -> endpoint 3459 flow 0x7388921 identity 16777217->323 state reply ifindex lxc84b58cbdabfe orig-ip 104.198.14.52: 104.198.14.52:80 -> 10.60.0.182:41510 tcp ACK
    -> 0: 10.60.0.182:41510 -> 104.198.14.52:80 tcp ACK
    -> stack flow 0x614e9723 identity 323->16777217 state established ifindex 0 orig-ip 0.0.0.0: 10.60.0.182:41510 -> 104.198.14.52:80 tcp ACK, FIN
    -> 0: 10.60.0.182:41510 -> 104.198.14.52:80 tcp ACK, FIN
    -> endpoint 3459 flow 0x7388921 identity 16777217->323 state reply ifindex lxc84b58cbdabfe orig-ip 104.198.14.52: 104.198.14.52:80 -> 10.60.0.182:41510 tcp ACK, FIN
    -> stack flow 0x614e9723 identity 323->16777217 state established ifindex 0 orig-ip 0.0.0.0: 10.60.0.182:41510 -> 104.198.14.52:80 tcp ACK

The above is for a simple ``curl cilium.io`` in a pod. The L7 DNS request is the first set of messages, and the subsequent L3 connection is the HTTP component.
AAAA DNS lookups commonly happen but were removed to simplify the example.

- If no L7 DNS requests appear, the proxy redirect is not in place. This may mean that the policy does not select this endpoint or there is an issue with the proxy redirection. Whether any redirects exist can be checked with ``cilium-dbg status --all-redirects``. In the past, a bug occurred with more permissive L3 rules overriding the proxy redirect, causing the proxy to never see the requests.
- If the L7 DNS request is blocked, with an explicit denied message, then the requests are not allowed by the proxy. This may be due to a typo in the network policy, or the matchPattern rule not allowing this domain. It may also be due to a bug in policy propagation to the DNS Proxy.
- If the DNS request is allowed, with an explicit message, and it should not be, this may be because a more general policy is in place that allows the request. ``matchPattern: "*"`` visibility policies are commonly in place and would supersede all other, more restrictive, policies. If no other policies are in place, incorrect allows may indicate a bug when passing policy information to the proxy. There is no way to dump the rules in the proxy, but a debug log is printed when a rule is added. Look for ``DNS Proxy updating matchNames in allowed list during UpdateRules``. The ``pkg/proxy/dns.go`` file contains the DNS proxy implementation.

If L7 DNS behaviour seems correct, see the sections below to further isolate the issue. This can be verified with ``cilium-dbg fqdn cache list``. The IPs in the response should appear in the cache for the appropriate endpoint. The lookup time is included in the JSON output of the command.

.. code-block:: shell-session

    $ kubectl exec pod/cilium-sbp8v -n kube-system -- cilium-dbg fqdn cache list
    Endpoint   Source   FQDN         TTL    ExpirationTime             IPs
    3459       lookup   cilium.io.   3600   2020-04-21T15:04:27.146Z   104.198.14.52

As of Cilium 1.16, the ``ExpirationTime`` represents the next time that the entry will be evaluated for staleness. If the entry ``Source`` is ``lookup``, then the entry will expire at that time. An equivalent entry with source ``connection`` may be established when a ``lookup`` entry expires. If the corresponding Endpoint continues to communicate with this domain via one of the related IP addresses, then Cilium will continue to keep the ``connection`` entry alive. When the expiration time for a ``connection`` entry is reached, the entry will be re-evaluated to determine whether it is still used by active connections, and at that time may expire or be renewed with a new target expiration time.
DNS Proxy Errors
~~~~~~~~~~~~~~~~

REFUSED responses are returned when the proxy encounters an error during processing. This can be confusing to debug, as that is also the response when a DNS request is denied. An error log is always printed in these cases. Some are callbacks provided by other packages via the daemon in cilium-agent.

- ``Rejecting DNS query from endpoint due to error``: This is the "normal" policy-reject message. It is a debug log.
- ``cannot extract endpoint IP from DNS request``: The proxy cannot read the socket information to read the source endpoint IP. This could mean an issue with the datapath routing and information passing.
- ``cannot extract endpoint ID from DNS request``: The proxy cannot use the source endpoint IP to get the cilium-internal ID for that endpoint. This is different from the Security Identity. This could mean that cilium is not managing this endpoint and that something has gone awry. It could also mean a routing problem where a packet has arrived at the proxy incorrectly.
- ``cannot extract destination IP:port from DNS request``: The proxy cannot read the socket information of the original request to obtain the intended target IP:Port. This could mean an issue with the datapath routing and information passing.
- ``cannot find server ip in ipcache``: The proxy cannot resolve a Security Identity for the target IP of the DNS request. This should always succeed, as world catches all IPs not set by more specific entries. This can mean a broken ipcache BPF table.
- ``Rejecting DNS query from endpoint due to error``: While checking if the DNS request was allowed (based on Endpoint ID, destination IP:Port and the DNS query) an error occurred. These errors would come from the internal rule lookup in the proxy, the ``allowed`` field.
- ``Timeout waiting for response to forwarded proxied DNS lookup``: The proxy forwards requests 1:1 and does not cache. It applies a 10s timeout on responses to those requests, as the client will (usually) retry within this period. Bursts of these errors can happen if the DNS target server misbehaves and many pods see DNS timeouts. This isn't an actual problem with cilium or the proxy, although it can be caused by policy blocking the DNS target server if it is in-cluster.
- ``Timed out waiting for datapath updates of FQDN IP information; returning response``: When the proxy updates the DNS caches with response data, it needs to allow some time for that information to get into the datapath. Otherwise, pods would attempt to make the outbound connection (the thing that caused the DNS lookup) before the datapath is ready. Many stacks retry the SYN in such cases, but some return an error, and some apps further crash as a response. This delay is configurable by setting the ``--tofqdns-proxy-response-max-delay`` command line argument but defaults to 100ms. It can be exceeded if the system is under load.

.. _isolating-source-toFQDNs-issues-identities-policy:

Identities and Policy
~~~~~~~~~~~~~~~~~~~~~

Once a DNS response has been passed back through the proxy and is placed in the DNS cache, ``toFQDNs`` rules can begin using the IPs in the cache. There are multiple layers of cache:

- A per-Endpoint ``DNSCache`` stores the lookups for this endpoint. It is restored on cilium startup with the endpoint.
  Limits are applied here for ``--tofqdns-endpoint-max-ip-per-hostname`` and TTLs are tracked. The ``--tofqdns-min-ttl`` is not used here.
- A per-Endpoint ``DNSZombieMapping`` list of IPs that have expired from the per-Endpoint cache but are waiting for the Connection Tracking GC to mark them in-use or not. This can take up to 12 hours to occur. This list is size-limited by ``--tofqdns-max-deferred-connection-deletes``.
- A global ``DNSCache`` where all endpoint and poller DNS data is collected. It does apply the ``--tofqdns-min-ttl`` value but not the ``--tofqdns-endpoint-max-ip-per-hostname`` value.

If an IP exists in the FQDN cache (check with ``cilium-dbg fqdn cache list``) then ``toFQDNs`` rules that select a domain name, either explicitly via ``matchName`` or via ``matchPattern``, should cause IPs for that domain to have allocated Security Identities. These can be listed with:

.. code-block:: shell-session

    $ kubectl exec pod/cilium-sbp8v -n kube-system -- cilium-dbg identity list
    ID         LABELS
    1          reserved:host
    2          reserved:world
    3          reserved:unmanaged
    4          reserved:health
    5          reserved:init
    6          reserved:remote-node
    323        k8s:class=xwing
               k8s:io.cilium.k8s.policy.cluster=default
               k8s:io.cilium.k8s.policy.serviceaccount=default
               k8s:io.kubernetes.pod.namespace=default
               k8s:org=alliance
    ...
    16777217   fqdn:* reserved:world

Note that FQDN identities are allocated locally on the node and have a high bit set, so they are often in the 16-million range. Note that this is the identity in the monitor output for the HTTP connection.

In cases where there is no matching identity for an IP in the fqdn cache, it may simply be because no policy selects an associated domain. The policy system represents each ``toFQDNs:`` rule with a ``FQDNSelector`` instance. These receive updates from a global ``NameManager`` in the daemon. They can be listed along with other selectors (roughly corresponding to any L3 rule):
.. code-block:: shell-session

    $ kubectl exec pod/cilium-sbp8v -n kube-system -- cilium-dbg policy selectors
    SELECTOR                                                                                                         USERS   IDENTITIES
    MatchName: , MatchPattern: *                                                                                     1       16777217
    &LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[]LabelSelectorRequirement{},}                   2       1
                                                                                                                             2
                                                                                                                             3
                                                                                                                             4
                                                                                                                             5
                                                                                                                             6
                                                                                                                             323
                                                                                                                             6188
                                                                                                                             15194
                                                                                                                             18892
                                                                                                                             25379
                                                                                                                             29200
                                                                                                                             32255
                                                                                                                             33831
                                                                                                                             16777217
    &LabelSelector{MatchLabels:map[string]string{reserved.none: ,},MatchExpressions:[]LabelSelectorRequirement{},}   1

In this example 16777217 is used by two selectors, one with ``matchPattern: "*"`` and another empty one. This is because of the policy in use:

.. code-block:: yaml

    apiVersion: cilium.io/v2
    kind: CiliumNetworkPolicy
    metadata:
      name: "tofqdn-dns-visibility"
    spec:
      endpointSelector:
        matchLabels:
          any:org: alliance
      egress:
        - toPorts:
            - ports:
                - port: "53"
                  protocol: ANY
              rules:
                dns:
                  - matchPattern: "*"
        - toFQDNs:
            - matchPattern: "*"

The L7 DNS rule has an implicit L3 allow-all because it defines only L4 and L7 sections. This is the second selector in the list, and includes all possible L3 identities known in the system. In contrast, the first selector, which corresponds to the ``toFQDNs: matchPattern: "*"`` rule, would list all identities for IPs that came from the DNS Proxy.

Unintended DNS Policy Drops
~~~~~~~~~~~~~~~~~~~~~~~~~~~

``toFQDNs`` policy enforcement relies on the source pod performing a DNS query before using an IP address returned in the DNS response. Sometimes pods may hold on to a DNS response and start new connections to the same IP address at a later time. This may trigger policy drops if the DNS response has expired, as requested by the DNS server in the time-to-live (TTL) value in the response. When DNS is used for service load balancing, the advertised TTL value may be short (e.g., 60 seconds).

Cilium honors the TTL values returned by the DNS server by default, but you can override them by setting a minimum TTL using the ``--tofqdns-min-ttl`` flag. This setting overrides short TTLs and allows the pod to use the IP address in the DNS response for a longer duration. Existing connections also keep the IP address as allowed in the policy.

Any new connections opened by the pod using the same IP address without performing a new DNS query after the (possibly extended) DNS TTL has expired are dropped by Cilium policy enforcement. To allow pods to use the DNS response after TTL expiry for new connections, the command line option ``--tofqdns-idle-connection-grace-period`` may be used to keep the IP address / name mapping valid in the policy for an extended time after DNS TTL expiry. This option takes effect only if the pod has opened at least one connection during the DNS TTL period.

Datapath Plumbing
~~~~~~~~~~~~~~~~~

For a policy to be fully realized, the datapath for an Endpoint must be updated. In the case of a new DNS-source IP, the FQDN identity associated with it must propagate from the selectors to the Endpoint specific policy. Unless a new policy is being added, this often only involves updating the Policy Map of the Endpoint with the new FQDN Identity of the IP. This can be verified:

.. code-block:: shell-session

    $ kubectl exec pod/cilium-sbp8v -n kube-system -- cilium-dbg bpf policy get 3459
    DIRECTION   LABELS (source:key[=value])   PORT/PROTO   PROXY PORT   BYTES   PACKETS
    Ingress     reserved:unknown              ANY          NONE         1367    7
    Ingress     reserved:host                 ANY          NONE         0       0
    Egress      reserved:unknown              53/TCP       36447        0       0
    Egress      reserved:unknown              53/UDP       36447        138     2
    Egress      fqdn:*                        ANY          NONE         477     6
                reserved:world

Note that the labels for identities are resolved here. This can be skipped, or there may be cases where this doesn't occur:
.. code-block:: shell-session

    $ kubectl exec pod/cilium-sbp8v -n kube-system -- cilium-dbg bpf policy get -n 3459
    DIRECTION   IDENTITY   PORT/PROTO   PROXY PORT   BYTES   PACKETS
    Ingress     0          ANY          NONE         1367    7
    Ingress     1          ANY          NONE         0       0
    Egress      0          53/TCP       36447        0       0
    Egress      0          53/UDP       36447        138     2
    Egress      16777217   ANY          NONE         477     6

L3 ``toFQDNs`` rules are egress only, so we would expect to see an ``Egress`` entry with Security Identity ``16777217``. The L7 rule, used to redirect to the DNS Proxy, is also present with a populated ``PROXY PORT``. It has a 0 ``IDENTITY`` as it is an L3 wildcard, i.e. the policy allows any peer on the specified port.

An identity missing here can be an error in various places:

- Policy doesn't actually allow this Endpoint to connect. A sanity check is to use ``cilium-dbg endpoint list`` to see if cilium thinks it should have policy enforcement.
- Endpoint regeneration is slow and the Policy Map has not been updated yet. This can occur in cases where we have leaked IPs from the DNS cache (i.e. they were never deleted correctly) or when there are legitimately many IPs. It can also simply mean an overloaded node or even a deadlock within cilium.
- A more permissive policy has removed the need to include this identity. This is likely a bug, however, as the IP would still have an identity allocated and it would be included in the Policy Map. In the past, a similar bug occurred with the L7 redirect, and that would stop this whole process at the beginning.

Mutexes / Locks and Data Races
------------------------------

.. Note:: This section only applies to Golang code.

There are a few options available to debug Cilium data races and deadlocks. To debug data races, Golang allows ``-race`` to be passed to the compiler to compile Cilium with race detection.
Additionally, the flag can be provided to ``go test`` to detect data races in a testing context.

.. _compile-cilium-with-race-detection:

~~~~~~~~~~~~~~
Race detection
~~~~~~~~~~~~~~

To compile a Cilium binary with race detection, you can do:

.. code-block:: shell-session

    $ make RACE=1

.. Note:: For building the Operator with race detection, you must also provide ``BASE_IMAGE``, which can be the ``cilium/cilium-runtime`` image from the root Dockerfile found in the Cilium repository.

To run integration tests with race detection, you can do:

.. code-block:: shell-session

    $ make RACE=1 integration-tests

~~~~~~~~~~~~~~~~~~
Deadlock detection
~~~~~~~~~~~~~~~~~~

Cilium can be compiled with the build tag ``lockdebug``, which provides a seamless wrapper over the standard mutex types in Golang, via the `sasha-s/go-deadlock library `_. No action is required besides building the binary with this tag. For example:

.. code-block:: shell-session

    $ make LOCKDEBUG=1
    $ # Deadlock detection during integration tests:
    $ make LOCKDEBUG=1 integration-tests

Moreover, you can enable mutex contention and blocked goroutine profiling with ``pprof`` (see below for more ``pprof`` examples). These features can be enabled with the ``--pprof-block-profile-rate`` and ``--pprof-mutex-profile-fraction`` flags. Note that the block profiler is `not recommended `_ for production due to performance overhead.

CPU Profiling and Memory Leaks
------------------------------

Cilium bundles ``gops``, a standard tool for Golang applications, which provides the ability
to collect CPU and memory profiles using ``pprof``. Inspecting profiles can help identify CPU bottlenecks and memory leaks.

To capture a profile, take a :ref:`sysdump ` of the cluster with the Cilium CLI or, more directly, use the ``cilium-bugtool`` command that is included in the Cilium image after enabling ``pprof`` in the Cilium ConfigMap:

.. code-block:: shell-session

    $ kubectl exec -ti -n kube-system -- cilium-bugtool --get-pprof --pprof-trace-seconds N
    $ kubectl cp -n kube-system :/tmp/cilium-bugtool-.tar ./cilium-pprof.tar
    $ tar xf ./cilium-pprof.tar

Be mindful that the profile window is the number of seconds passed to ``--pprof-trace-seconds``. Ensure that the number of seconds is enough to capture Cilium while it is exhibiting the problematic behavior to debug.

There are 6 files that encompass the tar archive:

.. code-block:: shell-session

    Permissions Size User  Date Modified Name
    .rw-r--r--   940 chris  6 Jul 14:04  gops-memstats-$(pidof-cilium-agent).md
    .rw-r--r--  211k chris  6 Jul 14:04  gops-stack-$(pidof-cilium-agent).md
    .rw-r--r--    58 chris  6 Jul 14:04  gops-stats-$(pidof-cilium-agent).md
    .rw-r--r--   212 chris  6 Jul 14:04  pprof-cpu
    .rw-r--r--  2.3M chris  6 Jul 14:04  pprof-heap
    .rw-r--r--   25k chris  6 Jul 14:04  pprof-trace

The files prefixed with ``pprof-`` are profiles. For more information on each one, see `Julia Evan's blog`_ on ``pprof``.

To view the CPU or memory profile, simply execute the following command:

.. code-block:: shell-session

    $ go tool pprof -http localhost:9090 pprof-cpu   # for CPU
    $ go tool pprof -http localhost:9090 pprof-heap  # for memory

This opens a browser window for profile inspection.

.. _Julia Evan's blog: https://jvns.ca/blog/2017/09/24/profiling-go-with-pprof/
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please
   use the official rendered version released here:
   https://docs.cilium.io

.. _dev_env:

Development Setup
=================

This page provides an overview of different methods for efficient development on Cilium. Depending on your needs, you can choose the most suitable method.

Quick Start
-----------

The following commands install Cilium in a `Kind`_-based Kubernetes cluster. Run them in the root directory of the Cilium repository. The ``make`` targets are described in section `Kind-based Setup <#kind-based-setup-preferred>`_.

.. _Kind: https://kind.sigs.k8s.io/

.. note::

   The command output informs you of any missing dependencies. In particular, if you get the message ``'cilium' not found``, it means you are missing the Cilium CLI.

On Linux:

.. code-block:: shell-session

   make kind
   make kind-image-fast
   make kind-install-cilium-fast

On any OS:

.. code-block:: shell-session

   make kind
   make kind-image
   make kind-install-cilium

Detailed Instructions
---------------------

Depending on your specific development environment and requirements, you can follow the detailed instructions below.

Verifying Your Development Setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Assuming you have Go installed, you can quickly verify many elements of your development setup by running:

.. code-block:: shell-session

   $ make dev-doctor

Depending on your end goal, not all of the dependencies listed are required to develop on Cilium. For example, "Ginkgo" is not required if you want to improve our documentation. You therefore do not need to have every tool installed.
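For illustration, the kind of minimum-version comparison a dev-doctor-style check performs can be sketched in a few lines of Go (a toy example with hypothetical ``parse``/``atLeast`` helpers; the real ``make dev-doctor`` script checks much more than this):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits a version such as "v1.26.3" or "18.1" into numeric fields.
func parse(v string) []int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	out := make([]int, len(parts))
	for i, p := range parts {
		n, _ := strconv.Atoi(p)
		out[i] = n
	}
	return out
}

// atLeast reports whether version `have` satisfies the constraint ">= want".
func atLeast(have, want string) bool {
	h, w := parse(have), parse(want)
	for i := 0; i < len(w); i++ {
		hv := 0
		if i < len(h) {
			hv = h[i]
		}
		if hv != w[i] {
			return hv > w[i]
		}
	}
	return true
}

func main() {
	// Constraints taken from the version requirements below.
	fmt.Println(atLeast("18.1.8", "18.1"))     // clang: true
	fmt.Println(atLeast("v1.25.4", "v1.26.0")) // kubectl: false
}
```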
Version Requirements
~~~~~~~~~~~~~~~~~~~~

To contribute to Cilium effectively, you need at least the following versions of these tools:

+-----------------+------------------------------+-------------------------------------------------------+
| Dependency      | Version / Commit ID          | Download Command                                      |
+=================+==============================+=======================================================+
| git             | latest                       | N/A (OS-specific)                                     |
+-----------------+------------------------------+-------------------------------------------------------+
| clang           | >= 18.1 (latest recommended) | N/A (OS-specific)                                     |
+-----------------+------------------------------+-------------------------------------------------------+
| llvm            | >= 18.1 (latest recommended) | N/A (OS-specific)                                     |
+-----------------+------------------------------+-------------------------------------------------------+
| go              | |GO_RELEASE|                 | N/A (OS-specific)                                     |
+-----------------+------------------------------+-------------------------------------------------------+
| ginkgo          | >= 1.4.0 and < 2.0.0         | ``go install github.com/onsi/ginkgo/ginkgo@v1.16.5``  |
+-----------------+------------------------------+-------------------------------------------------------+
| golangci-lint   | >= v1.27                     | N/A (OS-specific)                                     |
+-----------------+------------------------------+-------------------------------------------------------+
| Docker          | OS-Dependent                 | N/A (OS-specific)                                     |
+-----------------+------------------------------+-------------------------------------------------------+
| Docker-Compose  | OS-Dependent                 | N/A (OS-specific)                                     |
+-----------------+------------------------------+-------------------------------------------------------+
| python3-pip     | latest                       | N/A (OS-specific)                                     |
+-----------------+------------------------------+-------------------------------------------------------+
| helm            | >= v3.13.0                   | N/A (OS-specific)                                     |
+-----------------+------------------------------+-------------------------------------------------------+
| kind            | >= v0.7.0                    | ``go install sigs.k8s.io/kind@v0.19.0``               |
+-----------------+------------------------------+-------------------------------------------------------+
| kubectl         | >= v1.26.0                   | N/A (OS-specific)                                     |
+-----------------+------------------------------+-------------------------------------------------------+
| cilium-cli      | Cilium-Dependent             | N/A (OS-specific)                                     |
+-----------------+------------------------------+-------------------------------------------------------+

For :ref:`integration_testing`, you will need to run ``docker`` without privileges. You can usually achieve this by adding your current user to the ``docker`` group.

Kind-based Setup (preferred)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can find the setup for a `Kind`_ environment in ``contrib/scripts/kind.sh``. This setup doesn't require any VMs and/or VirtualBox on Linux, but does require Docker for Mac on macOS.
Makefile targets automate the task of spinning up an environment:

* ``make kind``: Creates a Kind cluster based on the configuration passed in. For more information, see :ref:`configurations_for_clusters`.
* ``make kind-down``: Tears down and deletes the cluster.

Depending on your environment, you can build Cilium by using the following Makefile targets:

For Linux and Mac OS
^^^^^^^^^^^^^^^^^^^^

Makefile targets automate building and installing Cilium images:

* ``make kind-image``: Builds all Cilium images and loads them into the cluster.
* ``make kind-image-agent``: Builds only the Cilium Agent image and loads it into the cluster.
* ``make kind-image-operator``: Builds only the Cilium Operator (generic) image and loads it into the cluster.
* ``make kind-debug``: Builds all Cilium images with optimizations disabled and ``dlv`` embedded for live debugging, and loads the images into the cluster.

.. Source: https://github.com/cilium/cilium/blob/main//Documentation/contributing/development/dev_setup.rst
* ``make kind-debug-agent``: Like ``kind-debug``, but for the agent image only. Use it if only the agent image needs to be rebuilt, for faster iteration.
* ``make kind-install-cilium``: Installs Cilium into the cluster using the Cilium CLI.

The preceding list includes the most used commands for **convenience**. For more targets, see the ``Makefile`` (or simply run ``make help``).

For Linux only - with shorter development workflow time
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

On Linux environments, or on environments where you can compile and run Cilium, it is possible to use "fast" targets. These fast targets build Cilium in the local environment and mount that binary, as well as the bpf source code, into a pre-existing running Cilium container.

* ``make kind-install-cilium-fast``: Installs Cilium into the cluster using the Cilium CLI with the volume mounts defined.
* ``make kind-image-fast``: Builds all Cilium binaries and loads them into all Kind clusters available on the host.

Configuration for Cilium
^^^^^^^^^^^^^^^^^^^^^^^^

The Makefile targets that install Cilium pass the following list of Helm values (YAML files) to the Cilium CLI:

* ``contrib/testing/kind-common.yaml``: Shared between normal and fast installation modes.
* ``contrib/testing/kind-values.yaml``: Used by normal installation mode.
* ``contrib/testing/kind-fast.yaml``: Used by fast installation mode.
* ``contrib/testing/kind-custom.yaml``: User-defined custom values that are applied if the file is present. The file is ignored by Git as specified in ``contrib/testing/.gitignore``.
.. _configurations_for_clusters:

Configuration for clusters
^^^^^^^^^^^^^^^^^^^^^^^^^^

``make kind`` takes a few environment variables to modify the configuration of the clusters it creates. The following parameters are the most commonly used:

* ``CONTROLPLANES``: How many control-plane nodes are created.
* ``WORKERS``: How many worker nodes are created.
* ``CLUSTER_NAME``: The name of the Kubernetes cluster.
* ``IMAGE``: The image for Kind, for example: ``kindest/node:v1.11.10``.
* ``KUBEPROXY_MODE``: Passed directly as ``kubeProxyMode`` to the Kind configuration Custom Resource Definition (CRD).

For more environment variables, see ``contrib/scripts/kind.sh``.

.. _making_changes:

Making Changes
--------------

#. Make sure the ``main`` branch of your fork is up-to-date:

   .. code-block:: shell-session

      git fetch upstream main:main

#. Create a PR branch with a descriptive name, branching from ``main``:

   .. code-block:: shell-session

      git switch -c pr/changes-to-something main

#. Make the changes you want.
#. Separate the changes into logical commits.
#. Describe the changes in the commit messages. Focus on answering the question of why the change is required, and document anything that might be unexpected.
#. If any description is required to understand your code changes, those instructions should be code comments instead of statements in the commit description.

   .. note::

      For submitting PRs, all commits need to be signed off (``git commit -s``). See the section :ref:`dev_coo`.

#. Make sure your changes meet the following criteria:

   #. New code is covered by :ref:`integration_testing`.
   #. End-to-end integration / runtime tests have been extended or added. If not required, mention in the commit message what existing test covers the new code.
   #. Follow-up commits are squashed together nicely. Commits should separate logical chunks of code and not represent a chronological list of changes.

#. Run ``git diff --check`` to catch obvious white space violations.
#. Run ``make`` to build your changes. This will also run ``make lint`` and error out on any Go linting errors. The rules are configured in ``.golangci.yaml``.
#. Run ``make -C bpf checkpatch`` to validate your changes against the coding style and commit message checks.
#. See :ref:`integration_testing` for how to run integration tests.
#. See :ref:`testsuite` for information on how to run the end-to-end integration tests.
#. If you are making documentation changes, you can generate documentation files and serve them locally on ``http://localhost:9081`` by running ``make render-docs``. This make target assumes that ``docker`` is running in the environment.

Dev Container
-------------

Cilium provides a Dev Container configuration for Visual Studio Code Remote Containers and GitHub Codespaces. This allows you to use a preconfigured development environment in the cloud or locally. The container is based on the official Cilium builder image and provides all the dependencies required to build Cilium.

You can also install common packages, such as ``kind``, ``kubectl``, and ``cilium-cli``, with ``contrib/scripts/devcontainer-setup.sh``:

.. code-block:: shell-session

   $ ./contrib/scripts/devcontainer-setup.sh

Package versions can be modified to fit your requirements. This only needs to be set up once, when the ``devcontainer`` is first created.

.. note::

   The current Dev Container runs as root. Non-root user support requires a non-root user in the Cilium builder image, which is related to :gh-issue:`23217`.

Update a golang version
-----------------------

Minor version
~~~~~~~~~~~~~

Each Cilium release is tied to a specific version of Golang via an explicit constraint in our Renovate configuration. We aim to build and release all maintained Cilium branches using a Golang version that is actively supported. This needs to be balanced against the desire to avoid regressions in Golang that may impact Cilium.

Golang supports two minor versions at any given time; when updating the version used by a Cilium branch, you should choose the older of the two supported versions.

To update the minor version of Golang used by a release, you will first need to update the Renovate configuration found in ``.github/renovate.json5``. For each minor release, there is a section that looks like this:

.. code-block:: json

   {
     "matchPackageNames": [
       "docker.io/library/golang",
       "go"
     ],
     "allowedVersions": "<1.21",
     "matchBaseBranches": [
       "v1.14"
     ]
   }

To allow Renovate to create a pull request that updates the minor Golang version, bump the ``allowedVersions`` constraint to include the desired minor version. Once this change has been merged, Renovate will create a pull request that updates the Golang version.

Minor version updates may require further changes to ensure that all Cilium features work correctly. Use the CI to identify any issues that require further changes, and bring them to the attention of the Cilium maintainers in the pull request. Once the CI is passing, the PR will be merged as part of the standard version upgrade process.

Patch version
~~~~~~~~~~~~~

New patch versions of Golang are picked up automatically by the CI; there should normally be no need to update the version manually.

Add/update a golang dependency
------------------------------

Let's assume we want to add ``github.com/containernetworking/cni`` version ``v0.5.2``:

.. code-block:: shell-session

   $ go get github.com/containernetworking/cni@v0.5.2
   $ go mod tidy
   $ go mod vendor
   $ git add go.mod go.sum vendor/

The first run can take a while, as it downloads all dependencies to your local cache, but the remaining runs will be faster.
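The k8s library upgrade instructions rely on the convention that a client library tag maps one-to-one to a Kubernetes release (library ``v0.17.3`` corresponds to k8s ``v1.17.3``). A tiny sketch of that mapping (``k8sLibTag`` is a hypothetical helper for illustration, not part of the Cilium tree):

```go
package main

import (
	"fmt"
	"strings"
)

// k8sLibTag converts a Kubernetes release tag (e.g. "v1.17.3") to the
// matching k8s.io client library tag (e.g. "v0.17.3"), following the
// convention noted in the upgrade instructions.
func k8sLibTag(k8sVersion string) string {
	return strings.Replace(k8sVersion, "v1.", "v0.", 1)
}

func main() {
	fmt.Println(k8sLibTag("v1.17.3")) // v0.17.3
	fmt.Println(k8sLibTag("v1.28.0")) // v0.28.0
}
```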
Updating k8s is a special case which requires updating the k8s libraries in a single change:

.. code-block:: shell-session

   $ # get the tag we are updating (for example ``v0.17.3`` corresponds to k8s ``v1.17.3``)
   $ # open go.mod and search and replace all ``v0.17.3`` with the version
   $ # that we are trying to upgrade with, for example: ``v0.17.4``.
   $ # Close the file and run:
   $ go mod tidy
   $ go mod vendor
   $ make generate-k8s-api
   $ git add go.mod go.sum vendor/

Add/update a cilium/kindest-node image
--------------------------------------

Cilium might use its own fork of kindest-node so that it can use k8s versions that have not been released by the Kind maintainers yet. Another reason for using a fork is that the base image used by kindest-node may not have been released yet. For example, as of this writing, Cilium requires Debian Bookworm (yet to be released), because the glibc version available in Cilium's base Docker image is the same as the one used in the Bookworm Docker image, which is relevant for testing with Go's race detector.

Currently, only maintainers can publish an image on ``quay.io/cilium/kindest-node``. However, anyone can build a kindest-node image and try it out.

To build a cilium/kindest-node image, first build the base Docker image:

.. code-block:: shell-session

   git clone https://github.com/kubernetes-sigs/kind.git
   cd kind
   make -C images/base/ quick

Take note of the resulting image tag for that command; it should be the last tag built for the ``gcr.io/k8s-staging-kind/base`` repository in ``docker ps -a``.

Secondly, change into the directory with Kubernetes' source code, which will be used for the kindest-node image. In this example, we will build a kindest-node image with Kubernetes version ``v1.28.3`` using the recently built base image ``gcr.io/k8s-staging-kind/base:v20231108-a9fbf702``:

.. code-block:: shell-session

   $ # Change to k8s' source code directory.
   $ git clone https://github.com/kubernetes/kubernetes.git
   $ cd kubernetes
   $ tag=v1.28.3
   $ git fetch origin --tags
   $ git checkout tags/${tag}
   $ kind build node-image \
       --image=quay.io/cilium/kindest-node:${tag} \
       --base-image=gcr.io/k8s-staging-kind/base:v20231108-a9fbf702

Finally, publish the image to a public repository. If you are a maintainer and have permissions to publish on ``quay.io/cilium/kindest-node``, the Renovate bot will automatically pick up the new version and create a new pull request with this update. If you are not a maintainer, you will have to update the image manually in Cilium's repository.

Add/update a new Kubernetes version
-----------------------------------

Let's assume we want to add a new Kubernetes version ``v1.19.0``:

#. Follow the above instructions to update the Kubernetes libraries.
#. Follow the next instructions depending on whether it is a minor update or a patch update.

Minor version
~~~~~~~~~~~~~

#. Check if it is possible to remove the last supported Kubernetes version from :ref:`k8scompatibility` and :ref:`k8s_requirements`, and add the new Kubernetes version to that list.
#. If the minimal supported version changed, leave a note in the upgrade guide stating the minimal supported Kubernetes version.
#. If the minimal supported version changed, search over the code, most likely under ``pkg/k8s``, for code that can be removed which specifically exists for compatibility with the previous minimal supported Kubernetes version.
#. If the minimal supported version changed, update the field ``MinimalVersionConstraint`` in ``pkg/k8s/version/version.go``.
#. Sync all "``slim``" types by following the instructions in ``pkg/k8s/slim/README.md``. The overall goal is to update changed fields or deprecated fields from the upstream code. New functions / fields / structs added upstream that are not used in Cilium can be removed.
#. Make sure the workflows used on all PRs are running with the new Kubernetes version by default, and make sure the files ``contributing/testing/{ci,e2e}.rst`` are up to date with these changes.
#. Update documentation files:

   - ``Documentation/contributing/testing/e2e.rst``
   - ``Documentation/network/kubernetes/compatibility.rst``
   - ``Documentation/network/kubernetes/requirements.rst``

#. Update the Kubernetes version to the newer version in:

   - ``test/test_suite_test.go``
   - ``.github/actions/ginkgo/main-prs.yaml``
   - ``.github/actions/ginkgo/main-scheduled.yaml``
   - ``.github/actions/set-env-variables/action.yml``
   - ``contrib/scripts/devcontainer-setup.sh``
   - ``.github/actions/ginkgo/main-focus.yaml``

#. Bump the kindest/node version in ``.github/actions/ginkgo/main-k8s-versions.yaml``.
#. Run ``./contrib/scripts/check-k8s-code-gen.sh``.
#. Check ``controller-runtime`` compatibility with the new Kubernetes version. If any changes are required, update the controller-runtime version in ``go.mod``. See https://github.com/kubernetes-sigs/controller-runtime?tab=readme-ov-file#compatibility.
#. Run ``go mod vendor && go mod tidy``.
#. Run ``./contrib/scripts/check-k8s-code-gen.sh`` (again).
#. Run ``make -C Documentation update-helm-values``.
#. Compile the code locally to make sure the library updates didn't remove any used code.
#. Provision a new dev VM to check that the provisioning scripts work correctly with the new k8s version.
#. Run ``git add vendor/ test/provision/manifest/ Documentation/ && git commit -sam "Update k8s tests and libraries to v1.28.0-rc.0"``.
#. Submit all your changes in a new PR. Ensure the PR is opened against a branch in ``cilium/cilium`` and *not* a fork; otherwise, CI is not triggered properly. Please open a thread on #development if you do not have permissions to create a branch in ``cilium/cilium``.
#. Ensure that the target CI workflows are running and passing after updating the target k8s versions in the GitHub Actions workflows.
#. Once CI is green and the PR has been merged, ping the CI team again so that they update the `Cilium CI matrix`_, ``.github/maintainers-little-helper.yaml``, and the GitHub required PR checks accordingly.

.. _Cilium CI matrix: https://docs.google.com/spreadsheets/d/1TThkqvVZxaqLR-Ela4ZrcJ0lrTJByCqrbdCjnI32_X0

Patch version
~~~~~~~~~~~~~

#. Submit all your changes in a new PR.

Making changes to the Helm chart
--------------------------------

The Helm chart is located in the ``install/kubernetes`` directory. The ``values.yaml.tmpl`` file contains the values for the Helm chart, which are used to generate the ``values.yaml`` file.

To prepare your changes, run the make scripts for the chart:

.. code-block:: shell-session

   $ make -C install/kubernetes

This performs all the needed steps in one command. Your change to the Helm chart is now ready to be submitted!

You can also run the steps one by one using the individual targets below. When updating or adding a value, it can be synced to the ``values.yaml`` file by running:

.. code-block:: shell-session

   $ make -C install/kubernetes cilium/values.yaml

Before submitting the changes, the ``README.md`` file needs to be updated; this can be done using the ``docs`` target:

.. code-block:: shell-session

   $ make -C install/kubernetes docs

Finally, you may want to check the chart using the ``lint`` target:

.. code-block:: shell-session

   $ make -C install/kubernetes lint

Optional: Docker and IPv6
-------------------------

These instructions are useful if you care about having IPv6 addresses for your Docker containers. If you'd like IPv6 addresses, follow these steps:

1) Edit ``/etc/docker/daemon.json`` and set the ``ipv6`` key to ``true``.

   .. code-block:: json

      {
        "ipv6": true
      }

   If that doesn't work alone, try assigning a fixed range; many people have reported trouble with IPv6 and Docker.

   .. code-block:: json

      {
        "ipv6": true,
        "fixed-cidr-v6": "2001:db8:1::/64"
      }

   And then:

   .. code-block:: shell-session

      ip -6 route add 2001:db8:1::/64 dev docker0
      sysctl net.ipv6.conf.default.forwarding=1
      sysctl net.ipv6.conf.all.forwarding=1
2) Restart the docker daemon to pick up the new configuration.

3) The new command for creating a network managed by Cilium:

   .. code-block:: shell-session

      $ docker network create --ipv6 --driver cilium --ipam-driver cilium cilium-net

Now new containers will have an IPv6 address assigned to them.

Debugging
---------

Datapath code
~~~~~~~~~~~~~

The tool ``cilium-dbg monitor`` can also be used to retrieve debugging information from the eBPF-based datapath. To enable all log messages:

- Start the ``cilium-agent`` with ``--debug-verbose=datapath``, or
- Run ``cilium-dbg config debug=true debugLB=true`` from an already running agent.

These options enable the logging functions in the datapath: ``cilium_dbg()``, ``cilium_dbg_lb()`` and ``printk()``.

.. note::

   The ``printk()`` logging function is used by the developer to debug the datapath outside of ``cilium monitor``. In this case, ``bpftool prog tracelog`` can be used to retrieve debugging information from the eBPF-based datapath. Both ``cilium_dbg()`` and ``printk()`` functions are available from the ``bpf/lib/dbg.h`` header file.

The image below shows the options that can be used as startup options by ``cilium-agent`` (see upper blue box) or changed at runtime by running ``cilium-dbg config `` for an already running agent (see lower blue box). Along with each option, there are one or more logging functions associated with it: ``cilium_dbg()`` and ``printk()`` for ``DEBUG``, and ``cilium_dbg_lb()`` for ``DEBUG_LB``.

.. image:: _static/cilium-debug-datapath-options.svg
   :align: center
   :alt: Cilium debug datapath options

.. note::

   If you need to enable ``LB_DEBUG`` for an already running agent by running ``cilium-dbg config debugLB=true``, you must pass the option ``debug=true`` along.

Debugging of an individual endpoint can be enabled by running ``cilium-dbg endpoint config ID debug=true``. Running ``cilium-dbg monitor -v`` will print the normal form of monitor output along with debug messages:

.. code-block:: shell-session

   $ cilium-dbg endpoint config 731 debug=true
   Endpoint 731 configuration updated successfully
   $ cilium-dbg monitor -v
   Press Ctrl-C to quit
   level=info msg="Initializing dissection cache..." subsys=monitor
   <- endpoint 745 flow 0x6851276 identity 4->0 state new ifindex 0 orig-ip 0.0.0.0: 8e:3c:a3:67:cc:1e -> 16:f9:cd:dc:87:e5 ARP
   -> lxc_health: 16:f9:cd:dc:87:e5 -> 8e:3c:a3:67:cc:1e ARP
   CPU 00: MARK 0xbbe3d555 FROM 0 DEBUG: Inheriting identity=1 from stack
   <- host flow 0xbbe3d555 identity 1->0 state new ifindex 0 orig-ip 0.0.0.0: 10.11.251.76:57896 -> 10.11.166.21:4240 tcp ACK
   CPU 00: MARK 0xbbe3d555 FROM 0 DEBUG: Successfully mapped addr=10.11.251.76 to identity=1
   CPU 00: MARK 0xbbe3d555 FROM 0 DEBUG: Attempting local delivery for container id 745 from seclabel 1
   CPU 00: MARK 0xbbe3d555 FROM 745 DEBUG: Conntrack lookup 1/2: src=10.11.251.76:57896 dst=10.11.166.21:4240
   CPU 00: MARK 0xbbe3d555 FROM 745 DEBUG: Conntrack lookup 2/2: nexthdr=6 flags=0
   CPU 00: MARK 0xbbe3d555 FROM 745 DEBUG: CT entry found lifetime=21925, revnat=0
   CPU 00: MARK 0xbbe3d555 FROM 745 DEBUG: CT verdict: Established, revnat=0
   -> endpoint 745 flow 0xbbe3d555 identity 1->4 state established ifindex lxc_health orig-ip 10.11.251.76: 10.11.251.76:57896 -> 10.11.166.21:4240 tcp ACK

Passing ``-v -v`` supports deeper detail, for example:

.. code-block:: shell-session

   $ cilium-dbg endpoint config 3978 debug=true
   Endpoint 3978 configuration updated successfully
   $ cilium-dbg monitor -v -v --hex
   Listening for events on 2 CPUs with 64x4096 of shared memory
   Press Ctrl-C to quit
   ------------------------------------------------------------------------------
   CPU 00: MARK 0x1c56d86c FROM 3978 DEBUG: 70 bytes Incoming packet from container ifindex 85
   00000000 33 33 00 00 00 02 ae 45 75 73 11 04 86 dd 60 00 |33.....Eus....`.|
   00000010 00 00 00 10 3a ff fe 80 00 00 00 00 00 00 ac 45 |....:..........E|
   00000020 75 ff fe 73 11 04 ff 02 00 00 00 00 00 00 00 00 |u..s............|
   00000030 00 00 00 00 00 02 85 00 15 b4 00 00 00 00 01 01 |................|
   00000040 ae 45 75 73 11 04 00 00 00 00 00 00 |.Eus........|
.. code-block:: shell-session

    00 00 00 00 00 ac 45                            |....:..........E|
    00000020 75 ff fe 73 11 04 ff 02 00 00 00 00 00 00 00 00 |u..s............|
    00000030 00 00 00 00 00 02 85 00 15 b4 00 00 00 00 01 01 |................|
    00000040 ae 45 75 73 11 04 00 00 00 00 00 00             |.Eus........|
    CPU 00: MARK 0x1c56d86c FROM 3978 DEBUG: Handling ICMPv6 type=133
    ------------------------------------------------------------------------------
    CPU 00: MARK 0x1c56d86c FROM 3978 Packet dropped 131 (Invalid destination mac) 70 bytes ifindex=0 284->0
    00000000 33 33 00 00 00 02 ae 45 75 73 11 04 86 dd 60 00 |33.....Eus....`.|
    00000010 00 00 00 10 3a ff fe 80 00 00 00 00 00 00 ac 45 |....:..........E|
    00000020 75 ff fe 73 11 04 ff 02 00 00 00 00 00 00 00 00 |u..s............|
    00000030 00 00 00 00 00 02 85 00 15 b4 00 00 00 00 01 01 |................|
    00000040 00 00 00 00                                     |....|
    ------------------------------------------------------------------------------
    CPU 00: MARK 0x7dc2b704 FROM 3978 DEBUG: 86 bytes Incoming packet from container ifindex 85
    00000000 33 33 ff 00 8a d6 ae 45 75 73 11 04 86 dd 60 00 |33.....Eus....`.|
    00000010 00 00 00 20 3a ff fe 80 00 00 00 00 00 00 ac 45 |... :..........E|
    00000020 75 ff fe 73 11 04 ff 02 00 00 00 00 00 00 00 00 |u..s............|
    00000030 00 01 ff 00 8a d6 87 00 20 40 00 00 00 00 fd 02 |........ @......|
    00000040 00 00 00 00 00 00 c0 a8 21 0b 00 00 8a d6 01 01 |........!.......|
    00000050 ae 45 75 73 11 04 00 00 00 00 00 00             |.Eus........|
    CPU 00: MARK 0x7dc2b704 FROM 3978 DEBUG: Handling ICMPv6 type=135
    CPU 00: MARK 0x7dc2b704 FROM 3978 DEBUG: ICMPv6 neighbour soliciation for address b21a8c0:d68a0000

One of the most common issues when developing datapath code is that the eBPF
code cannot be loaded into the kernel. This frequently manifests as the
endpoints appearing in the "not-ready" state and never switching out of it:

.. code-block:: shell-session

    $ cilium-dbg endpoint list
    ENDPOINT   POLICY     IDENTITY   LABELS (source:key[=value])   IPv6                     IPv4            STATUS
               ENFORCEMENT
    48896      Disabled   266        container:id.server           fd02::c0a8:210b:0:bf00   10.11.13.37     not-ready
    60670      Disabled   267        container:id.client           fd02::c0a8:210b:0:ecfe   10.11.167.158   not-ready

Running ``cilium-dbg endpoint get`` for one of the endpoints will provide a
description of known state about it, which includes eBPF verification logs.

The files under ``/var/run/cilium/state`` provide context about how the eBPF
datapath is managed and set up. The .h files describe specific configurations
used for eBPF program compilation. The numbered directories describe
endpoint-specific state, including header configuration files and eBPF
binaries.

Current eBPF map state for particular programs is held under ``/sys/fs/bpf/``,
and the `bpf-map `_ utility can be useful for debugging what is going on
inside them, for example:

.. code-block:: shell-session

    # ls /sys/fs/bpf/tc/globals/
    cilium_calls_15124  cilium_calls_48896        cilium_ct4_global       cilium_lb4_rr_seq       cilium_lb6_services     cilium_policy_v2_25729  cilium_policy_v2_60670       cilium_tunnel_map
    cilium_calls_25729  cilium_calls_60670        cilium_ct6_global       cilium_lb4_services     cilium_lxc              cilium_policy_v2_3978   cilium_policy_v2_reserved_1
    cilium_calls_3978   cilium_calls_netdev_ns_1  cilium_events           cilium_lb6_reverse_nat  cilium_policy           cilium_policy_v2_4314   cilium_policy_v2_reserved_2
    cilium_calls_4314   cilium_calls_overlay_2    cilium_lb4_reverse_nat  cilium_lb6_rr_seq       cilium_policy_v2_15124  cilium_policy_v2_48896  cilium_reserved_policy

    # bpf-map info /sys/fs/bpf/tc/globals/cilium_policy_v2_15124
    Type:           Hash
    Key size:       8
    Value size:     24
    Max entries:    1024
    Flags:          0x0

    # bpf-map dump /sys/fs/bpf/tc/globals/cilium_policy_v2_15124
    Key:
    00000000  6a 01 00 00 82 23 06 00  |j....#..|
    Value:
    00000000  01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  |................|
    00000010  00 00 00 00 00 00 00 00  |........|

Source: https://github.com/cilium/cilium/blob/main//Documentation/contributing/development/dev_setup.rst
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _code_overview:

Code Overview
=============

This section provides an overview of the Cilium & Hubble source code directory
structure. It is useful to get an initial overview of where to find what.

High-level
----------

Top-level directories of `github.com/cilium/cilium <https://github.com/cilium/cilium>`_:

api
    The Cilium & Hubble API definition.

bpf
    The eBPF datapath code.

bugtool
    CLI for collecting agent & system information for bug reporting.

cilium
    Cilium CLI client.

contrib, tools
    Additional tooling and resources used for development.

daemon
    The cilium-agent running on each node.

examples
    Various example resources and manifests. Typically these need to be
    modified before they can be used.

hubble-relay
    Hubble Relay server.

install
    Helm deployment manifests for all components.

pkg
    Common Go packages shared between all components.

operator
    Operator responsible for centralized tasks that do not need to be
    performed on each node.

plugins
    Plugins to integrate with Kubernetes and Docker.

test
    End-to-end integration tests run in the :ref:`testsuite-legacy`.

Cilium
------

api/v1/openapi.yaml
    API specification of the Cilium API. Used for code generation.

api/v1/models/
    Go code generated from openapi.yaml representing all API resources.

bpf
    The eBPF datapath code.

cilium
    Cilium CLI client.

cilium-health
    Cilium cluster connectivity CLI client.

daemon
    cilium-agent specific code.

plugins/cilium-cni
    The CNI plugin to integrate with Kubernetes.

plugins/cilium-docker
    The Docker integration plugin.

Hubble
------

The server-side code of Hubble is integrated into the Cilium repository. The
Hubble CLI can be found in the separate repository
`github.com/cilium/hubble <https://github.com/cilium/hubble>`_. The Hubble UI
can be found in the separate repository
`github.com/cilium/hubble-ui <https://github.com/cilium/hubble-ui>`_.

api/v1/external, api/v1/flow, api/v1/observer, api/v1/peer, api/v1/relay
    API specifications of the Hubble APIs.

hubble-relay
    Hubble Relay agent.

pkg/hubble
    All Hubble specific code.

pkg/hubble/container
    Ring buffer implementation.

pkg/hubble/filters
    Flow filtering capabilities.

pkg/hubble/metrics
    Metrics plugins providing Prometheus metrics based on Hubble's visibility.

pkg/hubble/observe
    Layer running on top of the Cilium datapath monitoring, feeding the
    metrics and the ring buffer.

pkg/hubble/parser
    Network flow parsers.

pkg/hubble/peer
    Peer service implementation.

pkg/hubble/relay
    Hubble Relay service implementation.

pkg/hubble/server
    The server providing the API for the Hubble client and UI.

Important common packages
-------------------------

pkg/allocator
    Security identity allocation.

pkg/bpf
    Abstraction layer to interact with the eBPF runtime.

pkg/client
    Go client to access the Cilium API.

pkg/clustermesh
    Multi-cluster implementation including control plane and global services.

pkg/controller
    Base controller implementation for any background operation that requires
    retries or interval-based invocation.

pkg/datapath
    Abstraction layer for datapath interaction.

pkg/defaults
    All default values.

pkg/elf
    ELF abstraction library for the eBPF loader.

pkg/endpoint
    Abstraction of a Cilium endpoint, representing all workloads.

pkg/endpointmanager
    Manager of all endpoints.

pkg/envoy
    Envoy proxy interactions.

pkg/fqdn
    FQDN proxy and FQDN policy implementation.

pkg/health
    Network connectivity health checking.

pkg/hive
    A dependency injection framework for modular composition of applications.

pkg/identity
    Representation of a security identity for workloads.

pkg/ipam
    IP address management.

pkg/ipcache
    Global cache mapping IPs to endpoints and security identities.

pkg/k8s
    All interactions with Kubernetes.

pkg/kvstore
    Key-value store abstraction layer with backends for etcd.

pkg/labels
    Base metadata type to describe all label/metadata requirements for
    workload identity specification and policy matching.

pkg/loadbalancer
    Control plane for load-balancing functionality.

pkg/maps
    eBPF map representations.

pkg/metrics
    Prometheus metrics implementation.

pkg/monitor
    eBPF datapath monitoring abstraction.

pkg/node
    Representation of a network node.

pkg/option
    All available configuration options.

pkg/policy
    Policy enforcement specification & implementation.

pkg/proxy
    Layer 7 proxy abstraction.

pkg/trigger
    Implementation of trigger functionality to implement event-driven
    functionality.

Source: https://github.com/cilium/cilium/blob/main//Documentation/contributing/development/codeoverview.rst
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _guide-to-the-hive:

Guide to the Hive
=================

Introduction
~~~~~~~~~~~~

Cilium uses dependency injection (via ``pkg/hive``) to wire up the
initialization, starting and stopping of its components.

`Dependency injection `_ (DI) is a technique for separating the use of objects
from their creation and initialization. Essentially dependency injection is
about automating the manual management of dependencies. Object constructors
only need to declare their dependencies as function parameters and the rest is
handled by the library.

This helps with building a loosely-coupled modular architecture as it removes
the need for centralization of initialization and configuration. It also
reduces the inclination to use global variables over explicit passing of
objects, which is often a source of bugs (due to unexpected initialization
order) and difficult to deal with in tests (as the state needs to be restored
for the next test). With dependency injection, components are described as
plain values (``Cell`` in our flavor of DI), enabling visualization of
inter-component dependencies and opening the internal architecture up for
inspection.

Dependency injection and the machinery described here are only a tool to help
us towards the real goal: a modular software architecture that can be easily
understood, extended, repurposed, tested and refactored by a large group of
developers with minimal overlap between modules. To achieve this we also need
to have modularity in mind when designing the architecture and APIs.

Hive and Cells
~~~~~~~~~~~~~~

Cilium applications are composed using runtime dependency injection from a set
of modular components called cells that compose together to form a hive (as in
bee hive). A hive can then be supplied with configuration and executed.

To provide a feel for what this is about, here is how a simple modular HTTP
server application would leverage hive:

.. code-block:: go

    package server

    // The server cell implements a generic HTTP server. Provides the 'Server' API
    // for registering request handlers.
    //
    // Module() creates a named collection of cells.
    var Cell = cell.Module(
        "http-server", // Module identifier (for e.g. logging and tracing)
        "HTTP Server", // Module title (for documentation)

        // Provide the application the constructor for the server.
        cell.Provide(New),

        // Config registers a configuration when provided with the defaults
        // and an implementation of Flags() for registering the configuration flags.
        cell.Config(defaultServerConfig),
    )

    // Server allows registering request handlers with the HTTP server
    type Server interface {
        ListenAddress() string
        RegisterHandler(path string, fn http.HandlerFunc)
    }

    func New(lc cell.Lifecycle, cfg ServerConfig) Server {
        // Initialize http.Server, register Start and Stop hooks to Lifecycle
        // for starting and stopping the server and return an implementation of
        // 'Server' for other cells for registering handlers.
        // ...
    }

    type ServerConfig struct {
        ServerPort uint16
    }

    var defaultServerConfig = ServerConfig{
        ServerPort: 8080,
    }

    func (def ServerConfig) Flags(flags *pflag.FlagSet) {
        // Register the "server-port" flag. Hive by convention maps the flag to
        // the ServerPort field.
        flags.Uint16("server-port", def.ServerPort, "Sets the HTTP server listen port")
    }

With the above generic HTTP server in the ``server`` package, we can now
implement a simple handler for /hello in the ``hello`` package:

.. code-block:: go

    package hello

    // The hello cell implements and registers a hello handler to the HTTP server.
    //
    // This cell isn't a Module, but rather just a plain Invoke. An Invoke
    // is a cell that, unlike Provide, is always executed. Invoke functions
    // can depend on values that constructors registered with Provide() can
    // return. These constructors are then called and their results remembered.
    var Cell = cell.Invoke(registerHelloHandler)

Source: https://github.com/cilium/cilium/blob/main//Documentation/contributing/development/hive.rst
.. code-block:: go

    func helloHandler(w http.ResponseWriter, req *http.Request) {
        w.Write([]byte("hello"))
    }

    func registerHelloHandler(srv server.Server) {
        srv.RegisterHandler("/hello", helloHandler)
    }

And then put the two together into a simple application:

.. code-block:: go

    package main

    var (
        // exampleHive is an application with an HTTP server and a handler
        // at /hello.
        exampleHive = hive.New(
            server.Cell,
            hello.Cell,
        )

        // cmd is the root command for this application. Runs
        // exampleHive when executed.
        cmd *cobra.Command = &cobra.Command{
            Use: "example",
            Run: func(cmd *cobra.Command, args []string) {
                // Run() will execute all invoke functions, followed by start hooks
                // and will then wait for interrupt signal before executing stop hooks
                // and returning.
                exampleHive.Run()
            },
        }
    )

    func main() {
        // Register all command-line flags from each config cell to the
        // flag-set of our command.
        exampleHive.RegisterFlags(cmd.Flags())

        // Add the "hive" sub-command for inspecting the application.
        cmd.AddCommand(exampleHive.Command())

        // Execute the root command.
        cmd.Execute()
    }

If you prefer to learn by example you can find a more complete and runnable
example application in ``pkg/hive/example``. Try running it with ``go run .``
and also try ``go run . hive``. And if you're interested in how all this is
implemented internally, see ``pkg/hive/example/mini``, a minimal example of
how to do dependency injection with reflection.

The Hive API
~~~~~~~~~~~~

With the example hopefully having now whetted the appetite, we'll take a
proper look at the hive API. `hive `_ provides the Hive type and the
`hive.New `_ constructor. The ``hive.Hive`` type can be thought of as an
application container, composed from cells:

.. code-block:: go

    var myHive = hive.New(foo.Cell, bar.Cell)

    // Call Run() to run the hive.
    myHive.Run() // Start(), wait for signal (ctrl-c) and then Stop()

    // Hive can also be started and stopped directly. Useful in tests.
    if err := myHive.Start(ctx); err != nil { /* ... */ }
    if err := myHive.Stop(ctx); err != nil { /* ... */ }

    // Hive's configuration can be registered with a Cobra command:
    hive.RegisterFlags(cmd.Flags())

    // Hive also provides a sub-command for inspecting it:
    cmd.AddCommand(hive.Command())

`hive/cell `_ defines the Cell interface that ``hive.New()`` consumes and the
following functions for creating cells:

- :ref:`api_module`: A named set of cells.
- :ref:`api_provide`: Provides constructor(s) to the hive. Lazy and only
  invoked if referenced by an Invoke function (directly or indirectly via
  another constructor).
- :ref:`ProvidePrivate `: Provides private constructor(s) to a module and its
  sub-modules.
- :ref:`api_decorate`: Wraps a set of cells with a decorator function to
  provide these cells with augmented objects.
- :ref:`api_config`: Provides a configuration struct to the hive.
- :ref:`api_invoke`: Registers an invoke function to instantiate and
  initialize objects.
- :ref:`api_metric`: Provides metrics to the hive.

Hive also by default provides the following globally available objects:

- :ref:`api_lifecycle`: Methods for registering Start and Stop functions that
  are executed when the Hive is started and stopped. The hooks are appended to
  it in dependency order (since the constructors are invoked in dependency
  order).
- :ref:`api_shutdowner`: Allows gracefully shutting down the hive from
  anywhere in case of a fatal error post-start.
- ``logrus.FieldLogger``: Interface to the logger. Module() decorates it with
  ``subsys=``.
.. _api_provide:

Provide
^^^^^^^

We'll now take a look at each of the different kinds of cells, starting with
Provide(), which registers one or more constructors with the hive:

.. code-block:: go

    // func Provide(ctors any...) Cell

    type A struct{}
    func NewA() A { return A{} }

    type B struct{}
    func NewB(A) B { return B{} }

    // simpleCell provides A and B
    var simpleCell cell.Cell = cell.Provide(NewA, NewB)

If the constructors take many parameters, we'll want to group them into a
struct with ``cell.In``, and conversely if there are many return values, into
a struct with ``cell.Out``. This tells hive to unpack them:

.. code-block:: go

    type params struct {
        cell.In

        A         A
        B         B
        Lifecycle cell.Lifecycle
    }

    type out struct {
        cell.Out

        C C
        D D
        E E
    }

    func NewCDE(params params) out { ... }

    var Cell = cell.Provide(NewCDE)

Sometimes we want to depend on a group of values sharing the same type, e.g.
to collect API handlers or metrics. This can be done with
`value groups `_ by combining ``cell.In`` and ``cell.Out`` with the ``group``
struct tag:

.. code-block:: go

    type HandlerOut struct {
        cell.Out

        Handler Handler `group:"handlers"`
    }

    func NewHelloHandler() HandlerOut { ... }
    func NewEventHandler(src events.Source) HandlerOut { ... }

    type ServerParams struct {
        cell.In

        Handlers []Handler `group:"handlers"`
    }

    func NewServer(params ServerParams) Server {
        // params.Handlers will have the "Handlers" from NewHelloHandler and
        // NewEventHandler.
    }

    var Hive = hive.New(
        cell.Provide(NewHelloHandler, NewEventHandler, NewServer),
    )

For a working example of value groups, see ``hive/example``.

Use ``Provide()`` when you want to expose an object or an interface to the
application. If there is nothing meaningful to expose, consider instead using
``Invoke()`` to register lifecycle hooks for an unexported object.

.. _api_invoke:

Invoke
^^^^^^

Invoke is used to invoke a function to initialize some part of the
application. The provided constructors won't be called unless an invoke
function references them, either directly or indirectly via another
constructor:

.. code-block:: go

    // func Invoke(funcs ...any) Cell

    cell.Invoke(
        // Construct both B and C and then introduce them to each other.
        func(b B, c C) {
            b.SetHandler(c)
            c.SetOwner(b)
        },

        // Construct D for its side-effects only (e.g. start and stop hooks).
        // Avoid this if you can and use Invoke() to register hooks instead of
        // Provide() if there's no API to provide.
        func(D) {},
    )

.. _api_module:

Module
^^^^^^

Cells can be grouped into modules (a named set of cells):

.. code-block:: go

    // func Module(id, title string, cells ...Cell) Cell

    var Cell = cell.Module(
        "example",           // short identifier (for use in e.g. logging and tracing)
        "An example module", // one-line description (for documentation)

        cell.Provide(New),
        innerModule, // modules can contain other modules
    )

    var innerModule cell.Cell = cell.Module(
        "example-inner",
        "An inner module",

        cell.Provide(newInner),
    )

Module() also provides the wrapped cells with a personalized
``logrus.FieldLogger`` with the ``subsys`` field set to the module identifier
("example" above).

The scope created by Module() is useful when combined with ProvidePrivate():

.. code-block:: go

    var Cell = cell.Module(
        "example",
        "An example module",

        cell.ProvidePrivate(NewA), // A only accessible from this module (or sub-modules)
        cell.Provide(NewB),        // B is accessible from anywhere
    )

.. _api_decorate:

Decorate
^^^^^^^^

Sometimes one may want to use a modified object inside a module, for example
how above Module() provided the cells with a personalized logger. This can be
done with a decorator:

.. code-block:: go

    // func Decorate(dtor any, cells ...Cell) Cell

    var Cell = cell.Decorate(
        myLogger, // The decoration function

        // These cells will see the objects returned by the 'myLogger' decorator
        // rather than the objects on the outside.
.. code-block:: go

        foo.Cell,
        bar.Cell,
    )

    // myLogger is a decorator that can depend on one or more objects in the
    // application and return one or more objects. The input parameters don't
    // necessarily need to match the output types.
    func myLogger(log logrus.FieldLogger) logrus.FieldLogger {
        return log.WithField("lasers", "stun")
    }

.. _api_config:

Config
^^^^^^

Cilium applications use the `cobra `_ and `pflag `_ libraries for implementing
the command-line interface. With Cobra, one defines a ``Command``, with
optional sub-commands. Each command has an associated FlagSet which must be
populated before a command is executed in order to parse flags or to produce
usage documentation.

Hive bridges to Cobra with ``cell.Config``, which takes a value that
implements ``cell.Flagger`` for adding flags to a command's FlagSet and
returns a cell that "provides" the parsed configuration to the application:

.. code-block:: go

    // type Flagger interface {
    //     Flags(flags *pflag.FlagSet)
    // }
    // func Config[Cfg Flagger](defaultConfig Cfg) cell.Cell

    type MyConfig struct {
        MyOption    string
        SliceOption []string
        MapOption   map[string]string
    }

    func (def MyConfig) Flags(flags *pflag.FlagSet) {
        // Register the "my-option" flag. This is matched against the MyOption
        // field by removing any dashes and doing a case insensitive comparison.
        flags.String("my-option", def.MyOption, "My config option")

        // Flags are supported for representing complex types such as slices and maps.
        // * Slices are obtained splitting the input string on commas.
        // * Maps support different formats based on how they are provided:
        //   - CLI: key=value format, separated by commas; the flag can be
        //     repeated multiple times.
        //   - Environment variable or configuration file: either JSON encoded
        //     or comma-separated key=value format.
        flags.StringSlice("slice-option", def.SliceOption, "My slice config option")
        flags.StringToString("map-option", def.MapOption, "My map config option")
    }

    var defaultMyConfig = MyConfig{
        MyOption: "the default value",
    }

    func New(cfg MyConfig) MyThing

    var Cell = cell.Module(
        "module-with-config",
        "A module with a config",

        cell.Config(defaultMyConfig),
        cell.Provide(New),
    )

Every field in the default configuration structure must be explicitly
populated. When selecting defaults for an option, consider which value will
introduce the minimal disruption to existing users during upgrade. For
instance, if the flag retains existing behavior from a previous release, then
the default flag value should retain that behavior. If you are introducing a
new optional feature, consider disabling the option by default.

In tests the configuration can be populated in various ways:

.. code-block:: go

    func TestCell(t *testing.T) {
        h := hive.New(Cell)

        // Options can be set via Viper
        h.Viper().Set("my-option", "test-value")

        // Or via pflags
        flags := pflag.NewFlagSet("", pflag.ContinueOnError)
        h.RegisterFlags(flags)
        flags.Set("my-option", "test-value")
        flags.Parse([]string{"--my-option=test-value"})

        // Or the preferred way with a config override:
        h = hive.New(Cell)
        AddConfigOverride(
            h,
            func(cfg *MyConfig) {
                cfg.MyOption = "test-override"
            })

        // To validate that the Cell can be instantiated and the configuration
        // struct is well-formed without starting you can call Populate():
        if err := h.Populate(); err != nil {
            t.Fatalf("Failed to populate: %s", err)
        }
    }

.. _api_metric:

Metric
^^^^^^

The metric cell allows you to define a collection of metrics near a feature
you would like to instrument. Like the :ref:`api_provide` cell, you define a
new type and a constructor. In the case of a metric cell the type should be a
struct with only public fields. The types of these fields should implement
both `metric.WithMetadata `_ and `prometheus.Collector `_. The easiest way to
get such metrics is to use the types defined in `pkg/metrics/metric `_.
The metric collection struct type returned by the given constructor is made
available in the hive just like a normal provide. In addition, all of the
metrics are made available via the ``hive-metrics`` `value group `_. This
value group is consumed by the metrics package, so any metrics defined via a
metric cell are automatically registered.

.. code-block:: go

    var Cell = cell.Module("my-feature", "My Feature",
        cell.Metric(NewFeatureMetrics),
        cell.Provide(NewMyFeature),
    )

    type FeatureMetrics struct {
        Calls   metric.Vec[metric.Counter]
        Latency metric.Histogram
    }

    func NewFeatureMetrics() FeatureMetrics {
        return FeatureMetrics{
            Calls: metric.NewCounterVec(metric.CounterOpts{
                ConfigName: metrics.Namespace + "_my_feature_calls_total",
                Subsystem:  "my_feature",
                Namespace:  metrics.Namespace,
                Name:       "calls_total",
            }, []string{"caller"}),

            Latency: metric.NewHistogram(metric.HistogramOpts{
                ConfigName: metrics.Namespace + "_my_feature_latency_seconds",
                Namespace:  metrics.Namespace,
                Subsystem:  "my_feature",
                Name:       "latency_seconds",
            }),
        }
    }

    type MyFeature struct {
        metrics FeatureMetrics
    }

    func NewMyFeature(metrics FeatureMetrics) *MyFeature {
        return &MyFeature{
            metrics: metrics,
        }
    }

    func (mf *MyFeature) SomeFunction(caller string) {
        mf.metrics.Calls.With(prometheus.Labels{"caller": caller}).Inc()

        span := spanstat.Start()
        // Normally we would do some actual work here
        time.Sleep(time.Second)
        span.End(true)
        mf.metrics.Latency.Observe(span.Seconds())
    }

.. _api_lifecycle:

Lifecycle
^^^^^^^^^

In addition to cells, an important building block in hive is the lifecycle. A
lifecycle is a list of start and stop hook pairs that are executed in order
(reverse order when stopping) when running the hive.

.. code-block:: go

    package hive

    type Lifecycle interface {
        Append(HookInterface)
    }

    type HookContext context.Context

    type HookInterface interface {
        Start(HookContext) error
        Stop(HookContext) error
    }

    type Hook struct {
        OnStart func(HookContext) error
        OnStop  func(HookContext) error
    }

    func (h Hook) Start(ctx HookContext) error { ... }
    func (h Hook) Stop(ctx HookContext) error { ... }

The lifecycle hooks can be implemented either by implementing the
HookInterface methods, or by using the Hook struct. Lifecycle is accessible
from any cell:

.. code-block:: go

    var ExampleCell = cell.Module(
        "example",
        "Example module",

        cell.Provide(New),
    )

    type Example struct { /* ... */ }

    func (e *Example) Start(ctx HookContext) error { /* ... */ }
    func (e *Example) Stop(ctx HookContext) error { /* ... */ }

    func New(lc cell.Lifecycle) *Example {
        e := &Example{}
        lc.Append(e)
        return e
    }

These hooks are executed when hive.Run() is called. The HookContext given to
these hooks is there to allow graceful aborting of the starting or stopping,
either due to the user pressing ``Control-C`` or due to a timeout. By default
Hive has a 5 minute start timeout and a 1 minute stop timeout, but these are
configurable with SetTimeouts(). A grace time of 5 seconds is given on top of
the timeout, after which the application is forcefully terminated, regardless
of whether the hook has finished or not.

.. _api_shutdowner:

Shutdowner
^^^^^^^^^^

Sometimes there's nothing else to do but crash. If a fatal error is
encountered in a ``Start()`` hook it's easy: just return the error and abort
the start. After starting, one can initiate a shutdown using the
``hive.Shutdowner``:

.. code-block:: go

    package hive

    type Shutdowner interface {
        Shutdown(...ShutdownOption)
    }

    func ShutdownWithError(err error) ShutdownOption { /* ... */ }

    package example

    type Example struct {
        /* ... */
        Shutdowner hive.Shutdowner
    }

    func (e *Example) eventLoop() {
        for {
            /* ... */
            if err != nil {
                // Uh oh, this is really bad, we've got to crash.
                e.Shutdowner.Shutdown(hive.ShutdownWithError(err))
            }
        }
    }
e.Shutdowner.Shutdown(hive.ShutdownWithError(err)) } } } Creating and running a hive ~~~~~~~~~~~~~~~~~~~~~~~~~~~ A hive is created using ``hive.New()``: .. code-block:: go // func New(cells ...cell.Cell) \*Hive var myHive = hive.New(FooCell, BarCell) ``New()`` creates a new hive and registers all providers to it. Invoke functions are not yet executed as our application may have multiple hives and we need to delay object instantiation to until we know which hive to use. However ``New`` does execute an invoke function to gather all command-line flags from all configuration cells. These can be then registered with a Cobra command: .. code-block:: go var cmd \*cobra.Command = /\* ... \*/ myHive.RegisterFlags(cmd.Flags()) After that the hive can be started with ``myHive.Run()``. Run() will first construct the parsed configurations and will then execute all invoke functions to instantiate all needed objects. As part of this the lifecycle hooks will have been appended (in dependency order). After that the start hooks can be executed one after the other to start the hive. Once started, Run() waits for SIGTERM and SIGINT signals and upon receiving one will execute the stop hooks in reverse order to bring the hive down. Now would be a good time to try this out in practice. You'll find a small example application in `hive/example `\_. Try running it with ``go run .`` and exploring the implementation (try what happens if a provider is commented out!). Inspecting a hive ~~~~~~~~~~~~~~~~~ The ``hive.Hive`` can be inspected with the 'hive' command after it's been registered with cobra: .. code-block:: go var rootCmd \*cobra.Command = /\* ... \*/ rootCmd.AddCommand(myHive.Command()) .. 
.. code-block:: shell-session

    cilium$ go run ./daemon hive
    Cells:

      Ⓜ️ agent (Cilium Agent):

          Ⓜ️ infra (Infrastructure):

              Ⓜ️ k8s-client (Kubernetes Client):

                  ⚙️ (client.Config) {
                      K8sAPIServer: (string) "",
                      K8sKubeConfigPath: (string) "",
                      K8sClientQPS: (float32) 0,
                      K8sClientBurst: (int) 0,
                      K8sHeartbeatTimeout: (time.Duration) 30s,
                      EnableK8sAPIDiscovery: (bool) false
                  }

                  🚧 client.newClientset (cell.go:109):
                      ⇨ client.Config, cell.Lifecycle, logrus.FieldLogger
                      ⇦ client.Clientset
    ...

    Start hooks:
      • gops.registerGopsHooks.func1 (cell.go:44)
      • cmd.newDatapath.func1 (daemon_main.go:1625)
      ...

    Stop hooks:
      ...

The hive command prints out the cells, showing what modules, providers,
configurations etc. exist and what they're requiring and providing. Finally
the command prints out all registered start and stop hooks. Note that these
hooks often depend on the configuration (e.g. k8s-client will not insert a
hook unless e.g. ``--k8s-kubeconfig-path`` is given). The hive command takes
the same command-line flags as the root command.

The provider dependencies in a hive can also be visualized as a graphviz
dot-graph:

.. code-block:: bash

    cilium$ go run ./daemon hive dot-graph | dot -Tx11

Guidelines
~~~~~~~~~~

A few guidelines one should strive to follow when implementing larger cells:

* A constructor function should only do validation and allocation. Spawning
  of goroutines or I/O operations must not be performed from constructors,
  but rather via the Start hook. This is required as we want to inspect the
  object graph (e.g. ``hive.PrintObjects``) and side-effectful constructors
  would cause undesired effects.

* Stop functions should make sure to block until all resources (goroutines,
  file handles, …) created by the module have been cleaned up (with e.g.
  ``sync.WaitGroup``). This makes sure that independent tests in the same
  test suite are not affecting each other. Use `goleak`_ to check that
  goroutines are not leaked.
* Preferably each non-trivial cell would come with a test that validates
  that it implements its public API correctly. The test also serves as an
  example of how the cell's API is used, and it validates the correctness of
  the cells it depends on, which helps with refactoring.

* Utility cells should not ``Invoke()``. Since cells may be used in many
  applications, it makes sense to make them lazy to allow bundling useful
  utilities into one collection. If a utility cell has an invoke, it may be
  instantiated even if it is never used.

* For large cells, provide interfaces and not struct pointers. A cell can be
  thought of as providing a service to the rest of the application. To make
  it accessible, one should think about what APIs the module provides and
  express these as well-documented interface types. If the interface is
  large, try breaking it up into multiple small ones. Interface types also
  allow integration testing with mock implementations. The rationale here is
  the same as with "return structs, accept interfaces": since hive works
  with the names of types, we want to "inject" interfaces into the object
  graph and not struct pointers. An extra benefit is that by separating the
  API implemented by a module into one or more interfaces, it is easier to
  document and easier to inspect, as all public method declarations are in
  one place.

* Use parameter (``cell.In``) and result (``cell.Out``) objects liberally.
  If a constructor takes more than two parameters, consider using a
  parameter struct instead.

Testing with hive script
~~~~~~~~~~~~~~~~~~~~~~~~

The hive library comes with `script`_, a simple scripting engine for writing
tests. It is a fork of the `internal/script`_ library used by the Go
compiler for testing the compiler CLI usage. For usage with hive it has been
extended with support for interactive use, retrying of failures and the
ability to inject commands from Hive cells. The same scripting language and
the commands provided by cells are available via the ``cilium-dbg shell``
command for live inspection of the Cilium Agent.
Hive scripts are `txtar`_ (text archive) files that contain a sequence of
commands and a set of embedded input files. When the script is executed, a
temporary directory (``$WORK``) is created and the input files are extracted
there.

To understand how this is put together, let's take a look at a minimal
example:

.. literalinclude:: ../../../contrib/examples/script/example.go
   :caption: contrib/examples/script/example.go
   :language: go
   :tab-width: 4

We've now defined a module providing the ``Example`` object and some
commands for interacting with it. We can now define our test runner:

.. literalinclude:: ../../../contrib/examples/script/example_test.go
   :caption: contrib/examples/script/example_test.go
   :language: go
   :tab-width: 4

And with the test runner in place we can now write our test script:

.. literalinclude:: ../../../contrib/examples/script/testdata/example.txtar
   :caption: contrib/examples/script/testdata/example.txtar
   :language: shell

With everything in place we can now run the tests:

.. code-block:: shell-session

    $ cd contrib/examples/script
    $ go test .
    === RUN   TestScript
    === RUN   TestScript/example.txtar
        scripttest.go:251: 2025-02-26T08:32:25Z
        scripttest.go:253: $WORK=/tmp/TestScriptexample.txtar2477299450/001
        scripttest.go:72:
            DATADIR=/home/jussi/go/src/github.com/cilium/cilium/contrib/examples/script/testdata
            PWD=/tmp/TestScriptexample.txtar2477299450/001
            WORK=/tmp/TestScriptexample.txtar2477299450/001
            TMPDIR=/tmp/TestScriptexample.txtar2477299450/001/tmp
        scripttest.go:72:
            #! --enable-example=true
            # ^ an (optional) shebang can be used to configure cells
            # This is a comment that starts a section of commands (0.000s)
            > echo 'hello'
            [stdout]
            hello
        logger.go:256: level=INFO msg="Starting hive"
        logger.go:256: level=INFO msg="Started hive" duration=1.53µs
        scripttest.go:72:
            # The test hive has not been started yet, let's start it!
            (0.000s)
            > hive/start
        logger.go:256: level=INFO msg="SayHello() called" module=example name=foo greeting=Hello,
        scripttest.go:72:
            # Cells can provide custom commands (0.000s)
            > example/hello foo
            calling SayHello(foo, Hello,)
            [stdout]
            Hello, foo
            > stdout 'Hello, foo'
            matched: Hello, foo
        scripttest.go:72:
            # Check that call count equals 1 (0.000s)
            > example/counts
            [stdout]
            1 SayHello()
            > stdout '1 SayHello()'
            matched: 1 SayHello()
        scripttest.go:72:
            # The file 'foo' should not be the same as 'bar' (0.000s)
            > ! cmp foo bar
            diff foo bar
            --- foo
            +++ bar
            @@ -1,2 +1,1 @@
            -foo
            -
            +bar
    --- PASS: TestScript/example.txtar (0.00s)
    ok      github.com/cilium/cilium/contrib/examples/script        0.003s

In the test execution we can see that a temporary working directory
(``$WORK``) was created and our test files from ``example.txtar`` were
extracted there. Each command was then executed in order.

As many of the cells bring a rich set of commands, it's important that
they're easy to discover. Use the ``help`` command to interactively explore
the commands available for use in tests. Try for example adding ``break`` as
the last command in ``example.txtar``:

.. code-block:: shell-session

    $ go test .
    ....
            @@ -1,2 +1,1 @@
            -foo
            -
            +bar
            > break
            Break! Control-d to continue.
            debug> help example
            [stdout]
            example/counts  Show the call counts of the example module
            example/hello [--greeting=string] name  Say hello
            Flags:
              --greeting string   Greeting to use (default "Hello,")
            debug> example/hello --greeting=Hei Jussi
            calling SayHello(Jussi, Hei)
            [stdout]
            Hei Jussi
        logger.go:256: level=INFO msg="SayHello() called" module=example name=Jussi greeting=Hei

Command reference
^^^^^^^^^^^^^^^^^

The important default commands are:

- ``help``: List available commands. Takes an optional regex to filter.
- ``hive``: Dump the hive object graph
- ``hive/start``: Start the test hive
- ``stdout regex``: Grep the stdout buffer
- ``cmp file1 file2``: Compare two files
- ``exec cmd args...``: Execute an external program (``$PATH`` needs to be set!)
- ``replace old new file``: Replace text in a file
- ``empty``: Check if file is empty

The commands can be modified with prefixes:

- ``! cmd args...``: Fail if the command succeeds
- ``* cmd args...``: Retry all commands in the section until this succeeds
- ``!* cmd args...``: Retry all commands in the section until this fails

A section is defined by a ``# comment`` line and consists of all commands
between the comment and the next comment.

New commands should use the naming scheme ``/``, e.g. ``hive/start``, and
not build sub-commands. This makes ``help`` more useful and makes it easier
to discover the commands.

Cells with script support
^^^^^^^^^^^^^^^^^^^^^^^^^

These cells, when included in the test hive, bring useful commands that can
be used in tests:

- `FakeClientCell`_: Commands for interacting with the fake client to add or
  delete objects. See ``help k8s``.
- `StateDB`_: Commands for inspecting and manipulating StateDB. Also
  available via ``cilium-dbg shell``. See ``help db``.
- `metrics.Cell`_: Commands for dumping and plotting metrics. See
  ``help metrics`` and ``pkg/metrics/testdata``.

Note that StateDB and metrics are part of Cilium's Hive wrapper defined in
``pkg/hive``, so if you use ``(pkg/hive).New()`` they will be included
automatically.

Example tests
^^^^^^^^^^^^^

To find existing tests to use as reference you can grep for usage of
``scripttest.Test``:

.. code-block:: shell-session

    $ git grep 'scripttest.Test'
    contrib/examples/script/example_test.go:        scripttest.Test(
    ...

Here are a few scripts that are worth calling out:

- ``daemon/k8s/testdata/pod.txtar``: Tests populating ``Table[LocalPod]``
  from K8s objects defined in YAML. Good reference for the ``k8s/*`` and
  ``db/*`` commands.
- ``pkg/ciliumenvoyconfig/testdata``: Complex component integration tests
  that go from K8s objects down to BPF maps.
- ``pkg/datapath/linux/testdata/device-detection.txtar``: Low-level test
  that manipulates network devices in a new network namespace.
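Putting the pieces above together, a small hypothetical txtar script might look as follows. It assumes the ``example/hello`` and ``example/counts`` commands from the example module earlier, and demonstrates the shebang, section comments, the ``*`` retry prefix, and embedded files; it is a sketch, not a file from the repository:

```shell
#! --enable-example=true

# Start the hive before interacting with cells
hive/start

# Say hello and check the output
example/hello foo
stdout 'Hello, foo'

# Retry this section until the count check succeeds
* example/counts
* stdout '1 SayHello()'

# The embedded files below are extracted into $WORK
! cmp foo bar

-- foo --
foo
-- bar --
bar
```

Files are embedded after ``-- name --`` markers in the txtar format; everything before the first marker is the command script.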
Internals: Dependency injection with reflection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Hive is built on top of `uber/dig`_, a reflection based library for building
dependency injection frameworks. In dig, you create a container, add in your
constructors and then "invoke" to create objects:

.. code-block:: go

    func NewA() (A, error) { /* ... */ }
    func NewB() B { /* ... */ }
    func NewC(A, B) (C, error) { /* ... */ }
    func setupC(C) error

    // Create a new container for our constructors.
    c := dig.New(dig.DeferAcyclicVerification())

    // Add in the constructors. Order does not matter.
    c.Provide(NewC)
    c.Provide(NewB)
    c.Provide(NewA)

    // Invoke a function that can depend on any of the values supplied by
    // the registered constructors.
    // Since this depends on "C", dig will construct first A and B
    // (as C depends on them), and then C.
    c.Invoke(func(c *C) {
        // Do something with C
    })

This is the basis on top of which Hive is built. Hive calls dig's
``Provide()`` for each of the constructors registered with ``cell.Provide``
and then calls invoke functions to construct the needed objects. The results
from the constructors are cached, so each constructor is called only once.

``uber/dig`` uses Go's "reflect" package, which provides access to the type
information of the provide and invoke functions. For example, the `Provide`_
method does something akin to this under the hood:

.. code-block:: go

    // 'constructor' has type "func(...) ..."
    typ := reflect.TypeOf(constructor)
    if typ.Kind() != reflect.Func { /* error */ }
    in := make([]reflect.Type, 0, typ.NumIn())
    for i := 0; i < typ.NumIn(); i++ {
        in = append(in, typ.In(i))
    }
    out := make([]reflect.Type, 0, typ.NumOut())
    for i := 0; i < typ.NumOut(); i++ {
        out = append(out, typ.Out(i))
    }
    container.providers = append(container.providers, &provider{constructor, in, out})

`Invoke`_ will similarly reflect on the function value to find out what the
required inputs are, and then find the required constructors for the input
objects and, recursively, their inputs.

While building this on reflection is flexible, the downside is that missing
dependencies lead to runtime errors. Luckily dig produces excellent errors
and suggests closely matching object types in case of typos. Due to the
desire to avoid these runtime errors, the constructed hive should be as
static as possible: the set of constructors and invoke functions should be
determined at compile time and not be dependent on runtime configuration.
This way the hive can be validated once with a simple unit test
(``daemon/cmd/cells_test.go``).

Cell showcase
~~~~~~~~~~~~~

Logging
^^^^^^^

Logging is provided to all cells by default with the ``*slog.Logger``. The
log lines will include the attribute ``module=``.

.. code-block:: go

    cell.Module(
        "example",
        "log example module",

        cell.Provide(
            func(log *slog.Logger) Example {
                log.Info("Hello") // module=example msg=Hello
                return Example{log: log}
            },
        ),
    )

Kubernetes client
^^^^^^^^^^^^^^^^^

The `client package`_ provides the ``Clientset`` API that combines the
different clientsets used by Cilium into one composite value. It also
provides ``FakeClientCell`` for writing integration tests for cells that
interact with the K8s api-server.

.. code-block:: go

    var Cell = cell.Provide(New)

    func New(cs client.Clientset) Example {
        return Example{cs: cs}
    }

    func (e Example) CreateIdentity(id *ciliumv2.CiliumIdentity) error {
        return e.cs.CiliumV2().CiliumIdentities().Create(e.ctx, id, metav1.CreateOptions{})
    }

Resource and the store (see below) are the preferred way of accessing
Kubernetes object state in order to minimize traffic to the api-server. The
Clientset should usually only be used for creating and updating objects.
Kubernetes Resource and Store
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. note:: The ``Resource[T]`` pattern is being phased out in the Cilium
   Agent and new code should use StateDB. See `daemon/k8s/tables.go`_,
   `pkg/k8s/statedb.go`_ and `PR 34060`_.

While not a cell by itself, `pkg/k8s/resource`_ provides a useful
abstraction for shared event-driven access to Kubernetes objects.
Implemented on top of the client-go informer, ``workqueue`` and store, it
codifies the suggested pattern for controllers in a type-safe way. This
shared abstraction provides a simpler API to write and test against, and it
allows central control over what data (and at what rate) is pulled from the
api-server and how it's stored (in-memory or persisted).

The resources are usually made available centrally for the application; e.g.
in cilium-agent they're provided from `pkg/k8s/resource.go`_. See also the
runnable example in `pkg/k8s/resource/example`_.

.. code-block:: go

    import "github.com/cilium/cilium/pkg/k8s/resource"

    var nodesCell = cell.Provide(
        func(lc cell.Lifecycle, cs client.Clientset) resource.Resource[*v1.Node] {
            lw := utils.ListerWatcherFromTyped[*v1.NodeList](cs.CoreV1().Nodes())
            return resource.New[*v1.Node](lc, lw)
        },
    )

    var Cell = cell.Module(
        "resource-example",
        "Example of how to use Resource",

        nodesCell,
        cell.Invoke(printNodeUpdates),
    )

    func printNodeUpdates(nodes resource.Resource[*v1.Node]) {
        // Store() returns a typed locally synced store of the objects.
        // This call blocks until the store has been synchronized.
        store, err := nodes.Store()
        ...
        obj, exists, err := store.Get("my-node")
        ...
        objs, err := store.List()
        ...

        // Events() returns a channel of object change events. Closes
        // when 'ctx' is cancelled.
        // type Event[T] struct { Kind Kind; Key Key; Object T; Done func(err error) }
        for ev := range nodes.Events(ctx) {
            switch ev.Kind {
            case resource.Sync:
                // The store has now synced with the api-server and
                // the set of observed upsert events forms a coherent
                // snapshot. Usually some sort of garbage collection or
                // reconciliation is performed here.
            case resource.Upsert:
                fmt.Printf("Node %s has updated: %v\n", ev.Key, ev.Object)
            case resource.Delete:
                fmt.Printf("Node %s has been deleted\n", ev.Key)
            }
            // Each event must be marked as handled. If a non-nil error
            // is given, the processing for this key is retried later
            // according to the rate-limiting and retry policy. The
            // built-in retrying is often used if we perform I/O operations
            // (like API client calls) from the handler and retrying makes
            // sense. It should not be used on parse errors and similar.
            ev.Done(nil)
        }
    }

Job groups
^^^^^^^^^^

The `job package`_ contains logic that makes it easy to manage units of work
that the package refers to as "jobs". These jobs are scheduled as part of a
job group. Every job is a callback function provided by the user plus
additional logic which differs slightly for each job type. The jobs and
groups manage a lot of the boilerplate surrounding lifecycle management; the
callbacks are called from the job to perform the actual work.

The jobs themselves come in several varieties:

- The ``OneShot`` job invokes its callback just once. This job type can be
  used for initialization after cell startup, routines that run for the full
  lifecycle of the cell, or for any other task you would normally use a
  plain goroutine for.
- The ``Timer`` job invokes its callback periodically. This job type can be
  used for periodic tasks such as synchronization or garbage collection.
  Timer jobs can also be triggered externally in addition to the periodic
  invocations.
- The ``Observer`` job invokes its callback for every message sent on a
  ``stream.Observable``. This job type can be used to react to a data stream
  or events created by other cells.
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _statedb:

StateDB in Cilium
=================

.. warning:: StateDB and the reconciler are still under active development
   and the APIs & metrics documented here are not guaranteed to be stable
   yet.

Introduction
~~~~~~~~~~~~

`StateDB`__ is an in-memory database developed for the Cilium project to
manage control-plane state. It aims to simplify access and indexing of state
and to increase resilience, modularity and testability by separating the
control-plane state from the controllers that operate on it.

This document focuses on how StateDB is leveraged by Cilium and how to
develop new features using it. For a detailed guide on the StateDB API
itself, see the `StateDB documentation`__. We assume familiarity with the
Hive framework. If you're not familiar with it, consider reading through
:ref:`guide-to-the-hive` first.

Motivation
~~~~~~~~~~

StateDB is a project born from lessons learned from development and
production struggles. It aims to be a tool to systematically improve the
resilience, testability and inspectability of the Cilium agent. For
developers it aims to offer simpler and safer ways to extend the agent by
giving a unified API (``Table[Obj]``) for accessing shared state.

The immutable data structures backing StateDB allow for lockless readers,
which improves resiliency compared to the RWMutex+hashmap+callback pattern,
where a bug in a controller observing the state may cause critical functions
to either stop or significantly decrease throughput. Additionally, having
flexible ways to access and index the state creates opportunities to
deduplicate state. Many components of the agent have historically functioned
through callback-based subscriptions and have maintained their own copies of
state, which has a significant impact on memory usage and GC overhead.
Unifying state storage behind a database-like abstraction allows building
reusable utilities for inspecting the state (``cilium-dbg shell -- db``),
reconciling state (the StateDB reconciler) and observing operations on state
(StateDB metrics). At scale this leads to an architecture that is easier to
understand (smaller API surface), operate (state can be inspected) and
extend (easy to access data).

The separation of state from the logic operating on it (e.g. moving away
from the kitchen-sink "Manager" pattern) also opens up the ability to do
wider and more meaningful integration testing on components of the agent.
When most of the inputs and outputs of a component are tables, we can
combine multiple components into an integration test that is solely defined
in terms of test inputs and expected outputs. This allows more validation to
be performed with fairly simple integration tests rather than with slower
and costly end-to-end tests.

Architecture vision
~~~~~~~~~~~~~~~~~~~

.. image:: _static/statedb-arch.svg
   :align: center

The agent in this architectural style can be broadly considered to consist
of:

- *User intent tables*: objects from external data sources that tell the
  agent what it should do. These would be for example the Kubernetes core
  objects like Pods, the Cilium specific CRDs such as CiliumNetworkPolicy,
  or data ingested from other sources such as the kvstore.
- *Controllers*: control-loops that observe the user intent tables and
  compute the contents of the desired state tables.
- *Desired state tables*: the internal state that the controllers produce to
  succinctly describe what should be done. For example a desired state table
  could describe what the contents of a BPF map should be or what routes
  should be installed.
- *Reconcilers*: control-loops that observe the desired state tables and
  reconcile them against a target such as a BPF map or the Linux routing
  table. The reconciler is usually an instance of the StateDB reconciler,
  which is defined in terms
  of a table of objects with a status field and the operations *Update*,
  *Delete* and *Prune*.

Dividing the agent this way, we achieve a nice separation of concerns:

* Separating the user intent into its own tables keeps the parsing and
  validation apart from the computation we'll perform on the data. It also
  makes it nicer to reuse, as it's purely about representing the outside
  intent internally in an efficient way without tying it too much to the
  implementation details of a specific feature.
* By defining the controller as essentially a function from input tables to
  output tables, it becomes easy to understand and test.
* By separating the reconciliation from the desired state computation, the
  complex logic of dealing with low-level errors and retrying is kept apart
  from the pure "business logic" computation.
* Using the generic reconcilers allows using a tried-and-tested and
  instrumented retry implementation.
* The control-plane of the agent is essentially everything outside the
  reconcilers. This allows us to integration test, simulate or benchmark the
  control-plane code without an unreasonable amount of scaffolding. The
  easier it is to write reliable integration tests, the more resilient the
  codebase becomes.

What we're trying to achieve is well summarized by Fred Brooks in "The
Mythical Man-Month":

| Show me your flowchart and conceal your tables, and I shall continue to
  be mystified.
| Show me your tables, and I won't usually need your flowchart; it'll be
  obvious.

Defining tables
~~~~~~~~~~~~~~~

`StateDB documentation`__ gives a good introduction into how to create a
table and its indexes, so we won't repeat that here, but instead focus on
Cilium specific details.
Let's start off with some guidelines that you might want to consider:

* By default, publicly provide ``Table[Obj]`` so new features can build on
  it and it can be used in tests. Also export the table's indexes or the
  query functions (``var ByName = nameIndex.Query``).
* Do not export ``RWTable[Obj]`` if outside modules do not need to directly
  write into the table. If other modules do write into the table, consider
  defining "writer functions" that validate that the writes are well-formed.
* If the table is closely associated with a specific feature, define it
  alongside the implementation of the feature. If the table is shared by
  many modules, consider defining it in ``daemon/k8s`` or
  ``pkg/datapath/tables`` so it is easy to discover.
* Make sure the object can be JSON marshalled so it can be inspected. If you
  need to store non-marshallable data (e.g. functions), make the fields
  private or mark them with the ``json:"-"`` struct tag.
* If the object contains a map or set and it is often mutated, consider
  using the immutable ``part.Map`` or ``part.Set`` from ``cilium/statedb``.
  Since these are immutable, they don't need to be deep-copied when
  modifying the object and there's no risk of accidentally mutating them
  in-place.
* When designing a table, consider how it can be used in tests outside your
  module. It's a good idea to export your table constructor (``New*Table``)
  so it can be used by itself in an integration test of a module that
  depends on it.
* Take into account the fact that objects must be immutable by designing
  them to be cheap to shallow-clone. For example this could mean splitting
  off fields that are constant from creation into their own struct that's
  referenced from the object.
an integration test of a module that depends on it. \* Take into account the fact that objects be immutable by designing them to be cheap to shallow-clone. For example this could mean splitting off fields that are constant from creation into their own struct that's referenced from the object. \* Write benchmarks for your table to understand the cost of the indexing and storage use. See ``benchmarks\_test.go`` in ``cilium/statedb`` for examples. \* If the object is small (<100 bytes) prefer storing it by value instead of by reference, e.g. ``Table[MyObject]`` instead of ``Table[\*MyObject]``. This reduces memory fragmentation and makes it safer to use since the fields can't be accidentally mutated (anything inside that's by reference of course can be mutated accidentally). Note though that each index will store a separate copy of the object. Measure if needed. With that out of the way, let's get concrete with a code example of a simple table and a controller that populates it: .. literalinclude:: ../../../contrib/examples/statedb/example.go :language: go To understand how the table defined by our example module can be consumed, we can construct a small mini-application: .. literalinclude:: ../../../contrib/examples/statedb/main.go :language: go You can find and run the above examples in ``contrib/examples/statedb``: .. code-block:: shell-session $ cd contrib/examples/statedb && go run . Pitfalls ^^^^^^^^ Here are some common mistakes to be aware of: \* Object is mutated after insertion to database. Since StateDB queries do not return copies, all readers will see the modifications. \* Object (stored by reference, e.g. ``\*T``) returned from a query is mutated and then inserted. StateDB will catch this and panic. Objects stored by reference must be (shallow) cloned before mutating. \* Query is made with ReadTxn and results are used in a WriteTxn. The results may have changed between the ReadTxn and WriteTxn! 
If you want optimistic concurrency control, then use CompareAndSwap in the write transaction. Inspecting with cilium-dbg ^^^^^^^^^^^^^^^^^^^^^^^^^^ StateDB comes with script commands to inspect the tables. These can be invoked via ``cilium-dbg shell``. The ``db`` command lists all registered tables: .. code-block:: shell-session root@kind-worker:/home/cilium# cilium-dbg shell -- db Name Object count Deleted objects Indexes Initializers Go type Last WriteTxn health 61 0 identifier, level [] types.Status health (107.3us ago, locked for 43.7us) sysctl 20 0 name, status [] \*tables.Sysctl sysctl (9.4m ago, locked for 12.8us) mtu 2 0 cidr [] mtu.RouteMTU mtu (19.4m ago, locked for 5.4us) ... The ``show`` command prints out the table using the \*TableRow\* and \*TableHeader\* methods: .. code-block:: shell-session root@kind-worker:/home/cilium# cilium-dbg shell -- db/show mtu Prefix DeviceMTU RouteMTU RoutePostEncryptMTU ::/0 1500 1450 1450 0.0.0.0/0 1500 1450 1450 The ``db/get``, ``db/prefix``, ``db/list`` and ``db/lowerbound`` allow querying a table, provided that the ``Index.FromString`` method has been defined: .. code-block:: shell-session root@kind-worker:/home/cilium# cilium-dbg shell -- db prefix --index=name devices cilium Name Index Selected Type MTU HWAddr Flags Addresses cilium\_host 3 false veth 1500 c2:f6:99:50:af:71 up|broadcast|multicast 10.244.1.105, fe80::c0f6:99ff:fe50:af71 cilium\_net 2 false veth 1500 5e:70:20:4d:8a:bc up|broadcast|multicast fe80::5c70:20ff:fe4d:8abc cilium\_vxlan 4 false vxlan 1500 b2:c6:10:14:48:47 up|broadcast|multicast fe80::b0c6:10ff:fe14:4847 The shell session can also be run interactively: .. code-block:: shell-session # cilium-dbg shell /¯¯\ /¯¯\\_\_/¯¯\ \\_\_/¯¯\\_\_/ Cilium 1.17.0-dev a5b41b93507e 2024-08-08T13:18:08+02:00 go version go1.23.1 linux/amd64 /¯¯\\_\_/¯¯\ Welcome to the Cilium Shell! Type 'help' for list of commands. 
   cilium> help db
   db               Describe StateDB configuration

   The 'db' command describes the StateDB configuration, showing ...

   cilium> db
   Name     Object count   Zombie objects   Indexes             Initializers   Go type          Last WriteTxn
   health   65             0                identifier, level   []             types.Status     health (993.6ms ago, locked for 25.7us)
   sysctl   20             0                name, status        []             *tables.Sysctl   sysctl (5.3s ago, locked for 8.6us)
   mtu      2              0                cidr                []             mtu.RouteMTU     mtu (4.4s ago, locked for 3.1us)
   ...
   cilium>
   cilium> db/show mtu
   Prefix      DeviceMTU   RouteMTU   RoutePostEncryptMTU
   ::/0        1500        1450       1450
   0.0.0.0/0   1500        1450       1450
   cilium> db/show --out=/tmp/devices.json --format=json devices
   ...

Kubernetes reflection
^^^^^^^^^^^^^^^^^^^^^

To reflect Kubernetes objects from the API server into a table, the reflector
utility in ``pkg/k8s`` can be used to automate this. For example, we can
define a table of pods and reflect them from Kubernetes into the table:

.. literalinclude:: ../../../contrib/examples/statedb_k8s/pods.go
   :language: go
   :caption: contrib/examples/statedb_k8s/pods.go
   :tab-width: 4

As earlier, we can then construct a small application to try this out:

.. literalinclude:: ../../../contrib/examples/statedb_k8s/main.go
   :language: go
   :caption: contrib/examples/statedb_k8s/main.go
   :tab-width: 4

You can run the example in ``contrib/examples/statedb_k8s`` to watch the pods
in your current cluster:

.. code-block:: shell-session

   $ cd contrib/examples/statedb_k8s && go run . --k8s-kubeconfig-path ~/.kube/config
   level=info msg=Starting
   time="2024-09-05T11:22:15+02:00" level=info msg="Establishing connection to apiserver" host="https://127.0.0.1:44261" subsys=k8s-client
   time="2024-09-05T11:22:15+02:00" level=info msg="Connected to apiserver" subsys=k8s-client
   level=info msg=Started duration=9.675917ms
   Pod(default/nginx): Running (revision: 1, deleted: false)
   Pod(kube-system/cilium-envoy-8xwp7): Running (revision: 2, deleted: false)
   ...

Reconcilers
~~~~~~~~~~~

The StateDB reconciler can be used to reconcile changes in a table against a
target system. To set up the reconciler you will need the following.
Add ``reconciler.Status`` as a field to your object (there can be multiple):

.. code-block:: go

   type MyObject struct {
       ID uint64
       // ...
       Status reconciler.Status
   }

Implement the reconciliation operations (``reconciler.Operations``):

.. code-block:: go

   type myObjectOps struct { ... }

   var _ reconciler.Operations[*MyObject] = &myObjectOps{}

   // Update reconciles the changed [obj] with the target.
   func (ops *myObjectOps) Update(ctx context.Context, txn statedb.ReadTxn, obj *MyObject) error {
       // Synchronize the target state with [obj]. [obj] is a clone and can be updated from here.
       // [txn] can be used to access other tables, but note that Update() is only called when [obj] is
       // marked pending.
       ...
       // Return nil or an error. If not nil, the operation will be repeated with exponential backoff.
       // If the object changes, the retrying resets and Update() is called with the latest object.
       return err
   }

   // Delete removes the [obj] from the target.
   func (ops *myObjectOps) Delete(ctx context.Context, txn statedb.ReadTxn, obj *MyObject) error {
       ...
       // If the error is not nil, the delete is retried until it succeeds or an object is recreated
       // with the same primary key.
       return err
   }

   // Prune removes any stale/unexpected state in the target.
   func (ops *myObjectOps) Prune(ctx context.Context, txn statedb.ReadTxn, objs iter.Seq2[*MyObject, statedb.Revision]) error {
       // Compute the difference between [objs] and the target and remove anything unexpected in the target.
       ...
       // If the returned error is not nil, the error is logged and metrics incremented. Failed pruning is
       // currently not retried, but called periodically according to config.
       return err
   }

Register the reconciler:

.. code-block:: go

   func registerReconciler(
       params reconciler.Params,
       ops reconciler.Operations[*MyObject],
       tbl statedb.RWTable[*MyObject],
   ) error {
       // Reconciler[..] is an API the reconciler provides. Often not needed.
       // Currently only contains the Prune() method to trigger immediate pruning.
       var r reconciler.Reconciler[*MyObject]
       r, err := reconciler.Register(
           params,
           tbl,
           (*MyObject).Clone,
           (*MyObject).SetStatus,
           (*MyObject).GetStatus,
           ops,
           nil, /* optional batch operations */
       )
       return err
   }

   var Cell = cell.Module(
       "example",
       "Example module",
       ...,
       cell.Invoke(registerReconciler),
   )

Insert objects with the ``Status`` set to pending:

.. code-block:: go

   var myObjects statedb.RWTable[*MyObject]

   wtxn := db.WriteTxn(myObjects)
   myObjects.Insert(wtxn, &MyObject{ID: 123, Status: reconciler.StatusPending()})
   wtxn.Commit()

The reconciler watches the tables (using ``Changes()``) and calls ``Update``
for each changed object that is ``Pending`` or ``Delete`` for each
"example", "Example module", ..., cell.Invoke(registerReconciler), ) Insert objects with the ``Status`` set to pending: .. code-block:: go var myObjects statedb.RWTable[\*MyObject] wtxn := db.WriteTxn(myObjects) myObjects.Insert(wtxn, &MyObject{ID: 123, Status: reconciler.StatusPending()}) wtxn.Commit() The reconciler watches the tables (using ``Changes()``) and calls ``Update`` for each changed object that is ``Pending`` or ``Delete`` for each deleted object. On errors the object will be retried (with configurable backoff) until the operation succeeds. See the full runnable example in the `StateDB repository `\_\_. The reconciler runs a background job which reports the health status of the reconciler. The status is degraded if any objects failed to be reconciled and queued for retries. Health can be inspected either with ``cilium-dbg status --all-health`` or ``cilium-dbg statedb health``. BPF maps ^^^^^^^^ BPF maps can be reconciled with the operations returned by ``bpf.NewMapOps``. The target object needs to implement the ``BinaryKey`` and ``BinaryValue`` to construct the BPF key and value respectively. These can either construct the binary value on the fly, or reference a struct defining the value. The example below uses a struct as this is the prevalent style in Cilium. .. code-block:: go // MyKey defines the raw BPF key type MyKey struct { ... } // MyValue defines the raw BPF key type MyValue struct { ... 
   }

   type MyObject struct {
       Key    MyKey
       Value  MyValue
       Status reconciler.Status
   }

   func (m *MyObject) BinaryKey() encoding.BinaryMarshaler {
       return bpf.StructBinaryMarshaler{&m.Key}
   }

   func (m *MyObject) BinaryValue() encoding.BinaryMarshaler {
       return bpf.StructBinaryMarshaler{&m.Value}
   }

   func registerReconciler(params reconciler.Params, objs statedb.RWTable[*MyObject], m *bpf.Map) error {
       ops := bpf.NewMapOps[*MyObject](m)
       _, err := reconciler.Register(
           params,
           objs,
           func(obj *MyObject) *MyObject { return obj },
           func(obj *MyObject, s reconciler.Status) *MyObject {
               obj.Status = s
               return obj
           },
           func(obj *MyObject) reconciler.Status { return obj.Status },
           ops,
           nil,
       )
       return err
   }

For a real-world example see ``pkg/maps/bwmap/cell.go``.

Script commands
~~~~~~~~~~~~~~~

StateDB comes with a rich set of script commands for inspecting and
manipulating tables:

.. code-block:: shell-session
   :caption: example.txtar

   # Show the registered tables
   db

   # Insert an object
   db/insert my-table example.yaml

   # Compare the contents of 'my-table' with a file. Retries until it matches.
   db/cmp my-table expected.table

   # Show the contents of the table
   db/show

   # Write the object to a file
   db/get my-table 'Foo' --format=yaml --out=foo.yaml

   # Delete the object and assert that the table is empty.
   db/delete my-table example.yaml
   db/empty my-table

   -- expected.table --
   Name   Color
   Foo    Red

   -- example.yaml --
   name: Foo
   color: Red

See ``help db`` for the full reference in ``cilium-dbg shell`` or in the
``break`` prompt in tests. A good reference is also the existing tests. These
can be found with ``git grep db/insert``.

Metrics
~~~~~~~

Metrics are available for both StateDB and the reconciler, but they are
disabled by default due to their fine granularity. These are defined in
``pkg/hive/statedb_metrics.go`` and ``pkg/hive/reconciler_metrics.go``. As
this documentation is manually maintained it may be out-of-date, so if things
are not working, check the source code.
The metrics can be enabled by adding them to the helm ``prometheus.metrics``
option with the syntax ``+cilium_<name>``, where ``<name>`` is the name of
the metric in the table below. For example, here is how to turn on all the
metrics:

.. code-block:: yaml

   prometheus:
     enabled: true
     metrics:
       - +cilium_statedb_write_txn_duration_seconds
       - +cilium_statedb_write_txn_acquisition_seconds
       - +cilium_statedb_table_contention_seconds
       - +cilium_statedb_table_objects
       - +cilium_statedb_table_revision
       - +cilium_statedb_table_delete_trackers
       - +cilium_statedb_table_graveyard_objects
       - +cilium_statedb_table_graveyard_low_watermark
       - +cilium_statedb_table_graveyard_cleaning_duration_seconds
       - +cilium_reconciler_count
       - +cilium_reconciler_duration_seconds
       - +cilium_reconciler_errors_total
       - +cilium_reconciler_errors_current
       - +cilium_reconciler_prune_count
       - +cilium_reconciler_prune_errors_total
       - +cilium_reconciler_prune_duration_seconds

These are still under development and the metric names may change. The
metrics can be inspected even when disabled with the ``metrics`` and
``metrics/plot`` script commands, as Cilium keeps samples of all metrics for
the past 2 hours. These metrics are also available in sysdump in
HTML form (look for ``cilium-dbg-shell----metrics-html.html``).

.. code-block:: shell-session

   # kubectl exec -it -n kube-system ds/cilium -- cilium-dbg shell
   Cilium 1.17.0-dev a5b41b93507e 2024-08-08T13:18:08+02:00 go version go1.23.1 linux/amd64
   Welcome to the Cilium Shell! Type 'help' for list of commands.

   # Dump the sampled StateDB metrics from the last 2 hours
   cilium> metrics --sampled statedb
   Metric                                    Labels                                    5min           30min          60min          120min
   cilium_statedb_table_contention_seconds   handle=devices-controller table=devices   0s / 0s / 0s   0s / 0s / 0s   0s / 0s / 0s   0s / 0s / 0s
   ...

   # Plot the rate of change in the "health" table
   # (indicative of the number of object writes per second)
   cilium> metrics/plot --rate statedb_table_revision.*health
   [terminal plot: cilium_statedb_table_revision (rate per second), table=health,
    y-axis 0.0 to 2.4 writes/s, x-axis -120min to now]

   # Plot the write transaction duration for the "devices" table
   # (indicative of how long the table is locked during writes)
   cilium> metrics/plot statedb_write_txn_duration.*devices
   ... omitted p50 and p90 plots ...
   [terminal plot: cilium_statedb_write_txn_duration_seconds (p99), handle=devices-controller,
    y-axis peaks at 47.2ms, x-axis -120min to now]
   [remainder of the p99 plot: baseline around 0.5ms, x-axis -120min to now]

   # Plot the reconciliation errors for sysctl
   cilium> metrics/plot reconciler_errors_current.*sysctl
   [terminal plot: cilium_reconciler_errors_current, module_id=agent.datapath.sysctl,
    flat at 0.0 errors, x-axis -120min to now]

StateDB
^^^^^^^

========================================================= ======================== =============================================
Name                                                      Labels                   Description
========================================================= ======================== =============================================
``statedb_write_txn_duration_seconds``                    ``tables``, ``handle``   Duration of the write transaction
``statedb_write_txn_acquisition_seconds``                 ``tables``, ``handle``   How long it took to lock target tables
``statedb_table_contention_seconds``                      ``table``                How long it took to lock a table for writing
``statedb_table_objects``                                 ``table``                Number of objects in a table
``statedb_table_revision``                                ``table``                The current revision
``statedb_table_delete_trackers``                         ``table``                Number of delete trackers (e.g. Changes())
``statedb_table_graveyard_objects``                       ``table``                Number of deleted objects in graveyard
``statedb_table_graveyard_low_watermark``                 ``table``                Low watermark revision for deleting objects
``statedb_table_graveyard_cleaning_duration_seconds``     ``table``                How long it took to GC the graveyard
========================================================= ======================== =============================================

The label ``handle`` is the database handle name (created with
``(*DB).NewHandle``). The default handle is named ``DB``.
The labels ``table`` and ``tables`` (formatted as ``tableA+tableB``) are the
StateDB tables which the metric concerns.

Reconciler
^^^^^^^^^^

========================================================= ======================== =========================================
Name                                                      Labels                   Description
========================================================= ======================== =========================================
``reconciler_count``                                      ``module_id``            Number of reconciliation rounds performed
``reconciler_duration_seconds``                           ``module_id``, ``op``    Histogram of operation durations
``reconciler_errors_total``                               ``module_id``            Total number of errors (update/delete)
``reconciler_errors_current``                             ``module_id``            Current errors
``reconciler_prune_count``                                ``module_id``            Number of pruning rounds
``reconciler_prune_errors_total``                         ``module_id``            Total number of errors during pruning
``reconciler_prune_duration_seconds``                     ``module_id``            Histogram of operation durations
========================================================= ======================== =========================================

The label ``module_id`` is the identifier for the Hive module under which the
reconciler was registered. ``op`` is the operation performed, either
``update`` or ``delete``.
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use
   the official rendered version released here: https://docs.cilium.io

.. _container_images:

Building Container Images
=========================

Two make targets exist to build container images automatically based on the
locally checked out branch:

Developer images
~~~~~~~~~~~~~~~~

Run ``make dev-docker-image`` to build a cilium-agent Docker image that
contains your local changes.

.. code-block:: shell-session

   ARCH=amd64 DOCKER_DEV_ACCOUNT=quay.io/myaccount DOCKER_IMAGE_TAG=jane-developer-my-fix make dev-docker-image

Run ``make docker-operator-generic-image`` (respectively,
``docker-operator-aws-image`` or ``docker-operator-azure-image``) to build
the cilium-operator Docker image:

.. code-block:: shell-session

   ARCH=amd64 DOCKER_DEV_ACCOUNT=quay.io/myaccount DOCKER_IMAGE_TAG=jane-developer-my-fix make docker-operator-generic-image

The commands above assume that your username for ``quay.io`` is
``myaccount``.

Set ``BASE_IMAGE_REGISTRY`` to redirect the pull of base images
(``cilium-builder``, ``cilium-runtime``, ``cilium-envoy``). This keeps the
tags/digests pinned in the Dockerfile while using a custom registry for
builds.

Race detection
~~~~~~~~~~~~~~

See the section on :ref:`compiling Cilium with race detection `.

Official release images
~~~~~~~~~~~~~~~~~~~~~~~

Anyone can build official release images using the make target below.

.. code-block:: shell-session

   DOCKER_IMAGE_TAG=v1.4.0 make docker-images-all

Official Cilium repositories
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following list contains the main container image repositories managed by
the Cilium team. All images are built via GitHub Actions from their
corresponding GitHub repositories. All images are `multi-platform `_ with
support for both ``linux/amd64`` and ``linux/arm64`` platforms.
github.com/cilium/cilium:

* ``images/builder/Dockerfile``: ``quay.io/cilium/cilium-builder``
* ``images/cilium-docker-plugin/Dockerfile``: ``[quay|docker].io/cilium/docker-plugin``
* ``images/cilium/Dockerfile``: ``[quay|docker].io/cilium/cilium``
* ``images/clustermesh-apiserver/Dockerfile``: ``[quay|docker].io/cilium/clustermesh-apiserver``
* ``images/hubble-relay/Dockerfile``: ``[quay|docker].io/cilium/hubble-relay``
* ``images/operator/Dockerfile``: ``[quay|docker].io/cilium/operator``,
  ``[quay|docker].io/cilium/operator-alibabacloud``,
  ``[quay|docker].io/cilium/operator-aws``,
  ``[quay|docker].io/cilium/operator-azure``,
  ``[quay|docker].io/cilium/operator-generic``
* ``images/runtime/Dockerfile``: ``quay.io/cilium/cilium-runtime``

github.com/cilium/cilium-cli:

* ``Dockerfile``: ``quay.io/cilium/cilium-cli``

github.com/cilium/image-tools:

* ``images/bpftool/Dockerfile``: ``quay.io/cilium/cilium-bpftool``
* ``images/compilers/Dockerfile``: ``quay.io/cilium/image-compilers``
* ``images/llvm/Dockerfile``: ``quay.io/cilium/cilium-llvm``
* ``images/maker/Dockerfile``: ``quay.io/cilium/image-maker``
* ``images/startup-script/Dockerfile``: ``quay.io/cilium/startup-script``

github.com/cilium/proxy:

* ``Dockerfile.builder``: ``quay.io/cilium/cilium-envoy-builder``
* ``Dockerfile``: ``quay.io/cilium/cilium-envoy``

Images dependency:

::

   cilium/cilium
   └── cilium/cilium-builder
       └── cilium/cilium-runtime
           ├── cilium/cilium-bpftool
           └── cilium/cilium-llvm

   cilium/cilium-envoy
   └── cilium/cilium-envoy-builder
       └── cilium/cilium-builder
           └── cilium/cilium-runtime
               ├── cilium/cilium-bpftool
               └── cilium/cilium-llvm

   cilium/operator
   └── cilium/cilium-builder
       └── cilium/cilium-runtime
           ├── cilium/cilium-bpftool
           └── cilium/cilium-llvm
.. _update_cilium_builder_runtime_images:

Update cilium-builder and cilium-runtime images
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The steps described here, starting with a commit that updates the image
versions, build the necessary images and update all the appropriate locations
in the Cilium codebase. Hence, before executing the following steps, the user
should have such a commit (e.g., see `this commit `__) in their local tree.
After following the steps below, the result would be another commit with the
image updates (e.g., see `this commit `__). Please keep the two commits
separate to ease backporting.

If you only wish to update the packages in these images, then you can
manually update the ``FORCE_BUILD`` variable in ``images/runtime/Dockerfile``
to have a different value and then proceed with the steps below.

#. Commit your changes and create a PR in cilium/cilium.

   .. code-block:: shell-session

      $ git commit -sam "images: update cilium-{runtime,builder}"
#. Ping one of the members of `team/build `__ to approve the build that was
   created by GitHub Actions `here `__. Note that at this step a
   cilium-builder build failure is expected since we have yet to update the
   runtime digest.

#. Wait for the build to complete. If the PR was opened from an external
   fork, the build will fail while trying to push the changes; this is
   expected.

#. If the PR was opened from an external fork, run the following commands and
   re-push the changes to your branch. Once this is done the CI can be
   executed.

   .. code-block:: shell-session

      $ make -C images/ update-runtime-image
      $ git commit -sam "images: update cilium-{runtime,builder}" --amend
      $ make -C images/ update-builder-image
      $ git commit -sam "images: update cilium-{runtime,builder}" --amend

#. If the PR was opened from the main repository, the build will
   automatically generate one commit and push it to your branch with all the
   necessary changes across files in the repository.

#. Run the full CI and ensure that it passes.

#. Merge the PR.

Image Building Process
~~~~~~~~~~~~~~~~~~~~~~

Images are automatically created by a GitHub action: ``build-images``. This
action runs automatically for any Pull Request, including Pull Requests
submitted from forked repositories, and pushes the images into
``quay.io/cilium/*-ci``. They will be available there for 1 week before they
are removed by the ``ci-images-garbage-collect`` workflow. Once they are
removed, the developer must re-push the Pull Request into GitHub so that new
images are created.
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use
   the official rendered version released here: https://docs.cilium.io

Updating dependencies with Renovate
===================================

The Cilium project uses `Renovate Bot `__ to maintain and update dependencies
on a regular basis. This guide describes how to contribute a PR which
modifies the Renovate configuration. There are two complementary methods for
validating Renovate changes: linting with the "local" platform, and testing
the updates in your own fork.

Linting locally
~~~~~~~~~~~~~~~

Use the ``renovate/renovate`` docker image to perform a dry run of Renovate.
This step should complete in less than ten minutes, and it will report syntax
errors in the configuration.

#. Make some changes to the Renovate configuration in
   ``.github/renovate.json5``.

#. Run the renovate image against the new configuration.

   .. tabs::

      .. group-tab:: Simple

         .. code-block:: shell-session

            make renovate-local

      .. group-tab:: Advanced

         .. code-block:: shell-session

            docker run -ti -e LOG_LEVEL=debug \
              -e GITHUB_COM_TOKEN="$(gh auth token)" \
              -v /tmp:/tmp \
              -v $(pwd):/usr/src/app \
              docker.io/renovate/renovate:slim \
              renovate --platform=local \
              | tee renovate.log

This approach is based on the `Local platform guide `__ provided by Renovate.
See that guide for more details about usage and limitations.

Testing on a fork
~~~~~~~~~~~~~~~~~

For most changes to the Renovate configuration, you will likely need to test
the changes on your own fork of Cilium.

#. Make some changes to the Renovate configuration. Renovate is configured in
   ``.github/renovate.json5``.

#. (Optional) Disable unrelated configuration. For an example, see `this
   commit `__.

#. Push the branch to the default branch of your own fork.

#. `Enable the Renovate GitHub app `__ in your GitHub account.

#. Ensure that Renovate is enabled in the repository settings in the
   `Renovate Dashboard `__.
#. Trigger the Renovate app from the dashboard, or push a fresh commit to
   your fork's default branch to trigger Renovate again.

#. Use the dashboard to trigger Renovate to create a PR on your fork and
   validate that the proposed PRs are updating the correct parts of the
   codebase.

Once you have tested that the Renovate configuration works in your own fork,
create a PR against Cilium and provide links in the description to inform
reviewers about the testing you have performed on the changes.
Introducing New CRDs
====================

Cilium uses a combination of code generation tools to make it easy to add
CRDs to the Kubernetes instance it is installed on. These CRDs are made
available through the generated Kubernetes client that Cilium uses.

Defining And Generating CRDs
----------------------------

Currently, two API versions exist: ``v2`` and ``v2alpha1``.

Paths:

::

   pkg/k8s/apis/cilium.io/v2/
   pkg/k8s/apis/cilium.io/v2alpha1/

CRDs are defined via Golang structures, annotated with ``marks``, and
generated with Cilium make file targets.

Marks
~~~~~

Marks are used to tell ``controller-gen`` *how* to generate the CRD. This
includes defining the CRD's various names (singular, plural, group), its
scope (Cluster, Namespaced), shortnames, etc.

An example:

::

   // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
   // +kubebuilder:resource:categories={cilium},singular="ciliumendpointslice",path="ciliumendpointslices",scope="Cluster",shortName={ces}
   // +kubebuilder:storageversion

You can find CRD generation ``marks`` documentation `here `__.

Marks are also used to generate json-schema validation. You can define
validation criteria such as "format=cidr" and "required" via validation
``marks`` in your struct's comments.

An example:

.. code-block:: go

   type CiliumBGPPeeringConfiguration struct {
       // PeerAddress is the IP address of the peer.
       // This must be in CIDR notation and use a /32 to express
       // a single host.
       //
       // +kubebuilder:validation:Required
       // +kubebuilder:validation:Format=cidr
       PeerAddress string `json:"peerAddress"`

You can find CRD validation ``marks`` documentation `here `__.

Defining CRDs
~~~~~~~~~~~~~

Paths:

::

   pkg/k8s/apis/cilium.io/v2/
   pkg/k8s/apis/cilium.io/v2alpha1/

The portion of the directory after ``apis/`` makes up the CRD's ``Group`` and
``Version``.
See `KubeBuilder-GVK `__.

You can begin defining your ``CRD`` structure, making any subtypes you like
to adequately define your data model and using ``marks`` to control the CRD
generation process.

Here is a brief example, omitting any further definitions of sub-types needed
to express the CRD data model.

.. code-block:: go

   // +genclient
   // +genclient:nonNamespaced
   // +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
   // +kubebuilder:resource:categories={cilium,ciliumbgp},singular="ciliumbgppeeringpolicy",path="ciliumbgppeeringpolicies",scope="Cluster",shortName={bgpp}
   // +kubebuilder:printcolumn:JSONPath=".metadata.creationTimestamp",name="Age",type=date
   // +kubebuilder:storageversion

   // CiliumBGPPeeringPolicy is a Kubernetes third-party resource for instructing
   // Cilium's BGP control plane to create peers.
   type CiliumBGPPeeringPolicy struct {
       // +k8s:openapi-gen=false
       // +deepequal-gen=false
       metav1.TypeMeta `json:",inline"`

       // +k8s:openapi-gen=false
       // +deepequal-gen=false
       metav1.ObjectMeta `json:"metadata"`

       // Spec is a human readable description of a BGP peering policy
       //
       // +kubebuilder:validation:Required
       Spec CiliumBGPPeeringPolicySpec `json:"spec,omitempty"`
   }

Integrating CRDs Into Cilium
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Once you've coded your CRD data model you can use Cilium's ``make``
infrastructure to generate and integrate your CRD into Cilium. There are
several make targets and a script which revolve around generating CRDs and
the associated code gen (client, informers, ``DeepCopy`` implementations,
``DeepEqual`` implementations, etc). The next sections also detail the steps
you should take to integrate your CRD into Cilium.

Generating CRD YAML
~~~~~~~~~~~~~~~~~~~

To generate the CRDs and copy them into the correct location you must perform
two tasks:

* Update the ``Makefile`` to edit the ``CRDS_CILIUM_V2`` or
  ``CRDS_CILIUM_V2ALPHA1`` variable (depending on the version of your new
  CRD) to contain the plural name of your new CRD.
* Run ``make manifests``

This will generate CRD manifests from your Golang structs and copy them into the appropriate ``Version`` directory under ``./pkg/k8s/apis/cilium.io/client/crds/``. You can inspect the generated ``CRDs`` to confirm they look OK.

Additionally, ``./contrib/scripts/check-k8s-code-gen.sh`` is a script which will generate the CRD manifests along with the necessary K8s API changes to use your CRDs via the K8s client in Cilium source code.

Generating Client Code
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: shell-session

    make generate-k8s-api

This make target performs the necessary code generation to integrate your CRD into Cilium's ``client-go`` client and to create listers, watchers, and informers. Again, multiple steps must be taken to fully integrate your CRD into Cilium.
Register With API Scheme
~~~~~~~~~~~~~~~~~~~~~~~~

Paths:

::

    pkg/k8s/apis/cilium.io/v2alpha1/register.go

Make a change similar to this diff to register your CRDs with the API scheme.

.. code-block:: diff

    diff --git a/pkg/k8s/apis/cilium.io/v2alpha1/register.go b/pkg/k8s/apis/cilium.io/v2alpha1/register.go
    index 9650e32f8d..0d85c5a233 100644
    --- a/pkg/k8s/apis/cilium.io/v2alpha1/register.go
    +++ b/pkg/k8s/apis/cilium.io/v2alpha1/register.go
    @@ -55,6 +55,34 @@ const (
         // CESName is the full name of Cilium Endpoint Slice
         CESName = CESPluralName + "." + CustomResourceDefinitionGroup
    +
    +    // Cilium BGP Peering Policy (BGPP)
    +
    +    // BGPPPluralName is the plural name of Cilium BGP Peering Policy
    +    BGPPPluralName = "ciliumbgppeeringpolicies"
    +
    +    // BGPPKindDefinition is the kind name of Cilium BGP Peering Policy
    +    BGPPKindDefinition = "CiliumBGPPeeringPolicy"
    +
    +    // BGPPName is the full name of Cilium BGP Peering Policy
    +    BGPPName = BGPPPluralName + "." + CustomResourceDefinitionGroup
    +
    +    // Cilium BGP Load Balancer IP Pool (BGPPool)
    +
    +    // BGPPoolPluralName is the plural name of Cilium BGP Load Balancer IP Pool
    +    BGPPoolPluralName = "ciliumbgploadbalancerippools"
    +
    +    // BGPPoolKindDefinition is the kind name of Cilium BGP Load Balancer IP Pool
    +    BGPPoolKindDefinition = "CiliumBGPLoadBalancerIPPool"
    +
    +    // BGPPoolName is the full name of Cilium BGP Load Balancer IP Pool
    +    BGPPoolName = BGPPoolPluralName + "." + CustomResourceDefinitionGroup
     )

     // SchemeGroupVersion is group version used to register these objects
    @@ -102,6 +130,10 @@ func addKnownTypes(scheme *runtime.Scheme) error {
         &CiliumEndpointSlice{},
         &CiliumEndpointSliceList{},
    +    &CiliumBGPPeeringPolicy{},
    +    &CiliumBGPPeeringPolicyList{},
    +    &CiliumBGPLoadBalancerIPPool{},
    +    &CiliumBGPLoadBalancerIPPoolList{},
     )
     metav1.AddToGroupVersion(scheme, SchemeGroupVersion)

You should also bump the ``CustomResourceDefinitionSchemaVersion`` variable in ``register.go`` to instruct Cilium that new CRDs have been added to the system.

Register With Client
~~~~~~~~~~~~~~~~~~~~

``pkg/k8s/apis/cilium.io/client/register.go``

Make a change similar to the following to register CRD types with the client.

.. code-block:: diff

    diff --git a/pkg/k8s/apis/cilium.io/client/register.go b/pkg/k8s/apis/cilium.io/client/register.go
    index ede134d7d9..ec82169270 100644
    --- a/pkg/k8s/apis/cilium.io/client/register.go
    +++ b/pkg/k8s/apis/cilium.io/client/register.go
    @@ -60,6 +60,12 @@ const (
         // CESCRDName is the full name of the CES CRD.
         CESCRDName = k8sconstv2alpha1.CESKindDefinition + "/" + k8sconstv2alpha1.CustomResourceDefinitionVersion
    +
    +    // BGPPCRDName is the full name of the BGPP CRD.
    +    BGPPCRDName = k8sconstv2alpha1.BGPPKindDefinition + "/" + k8sconstv2alpha1.CustomResourceDefinitionVersion
    +
    +    // BGPPoolCRDName is the full name of the BGPPool CRD.
    +    BGPPoolCRDName = k8sconstv2alpha1.BGPPoolKindDefinition + "/" + k8sconstv2alpha1.CustomResourceDefinitionVersion
     )

     var (
    @@ -86,6 +92,7 @@ func CreateCustomResourceDefinitions(clientset apiextensionsclient.Interface) er
         synced.CRDResourceName(k8sconstv2.CLRPName):      createCRD(CLRPCRDName, k8sconstv2.CLRPName),
         synced.CRDResourceName(k8sconstv2.CEGPName):      createCRD(CEGPCRDName, k8sconstv2.CEGPName),
         synced.CRDResourceName(k8sconstv2alpha1.CESName): createCRD(CESCRDName, k8sconstv2alpha1.CESName),
    +    synced.CRDResourceName(k8sconstv2alpha1.BGPPName): createCRD(BGPPCRDName, k8sconstv2alpha1.BGPPName),
     }
     for _, r := range synced.AllCiliumCRDResourceNames() {
         fn, ok := resourceToCreateFnMapping[r]
    @@ -127,6 +134,12 @@ var (
         //go:embed crds/v2alpha1/ciliumendpointslices.yaml
         crdsv2Alpha1Ciliumendpointslices []byte
    +
    +    //go:embed crds/v2alpha1/ciliumbgppeeringpolicies.yaml
    +    crdsv2Alpha1Ciliumbgppeeringpolicies []byte
    +
    +    //go:embed crds/v2alpha1/ciliumbgploadbalancerippools.yaml
    +    crdsv2Alpha1Ciliumbgploadbalancerippools []byte
     )

     // GetPregeneratedCRD returns the pregenerated CRD based on the requested CRD

``pkg/k8s/watchers/watcher.go``

Also, configure the watcher for this resource (or tell the agent not to watch it):

.. code-block:: diff

    diff --git a/pkg/k8s/watchers/watcher.go b/pkg/k8s/watchers/watcher.go
    index eedf397b6b..8419eb90fd 100644
    --- a/pkg/k8s/watchers/watcher.go
    +++ b/pkg/k8s/watchers/watcher.go
    @@ -398,6 +398,7 @@ var ciliumResourceToGroupMapping = map[string]watcherInfo{
         synced.CRDResourceName(v2.CECName):           {afterNodeInit, k8sAPIGroupCiliumEnvoyConfigV2},
         synced.CRDResourceName(v2alpha1.BGPPName):    {skip, ""}, // Handled in BGP control plane
         synced.CRDResourceName(v2alpha1.BGPPoolName): {skip, ""}, // Handled in BGP control plane
    +    synced.CRDResourceName(v2.CCOName):           {skip, ""}, // Handled by init directly

Getting Your CRDs Installed
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Your new CRDs must be installed into Kubernetes.
This is controlled in the ``pkg/k8s/synced/crd.go`` file. Here is an example diff which installs the CRDs ``v2alpha1.BGPPName`` and ``v2alpha1.BGPPoolName``:

.. code-block:: diff

    diff --git a/pkg/k8s/synced/crd.go b/pkg/k8s/synced/crd.go
    index 52d975c449..10c554cf8a 100644
    --- a/pkg/k8s/synced/crd.go
    +++ b/pkg/k8s/synced/crd.go
    @@ -42,6 +42,11 @@ func agentCRDResourceNames() []string {
         CRDResourceName(v2.CCNPName),
         CRDResourceName(v2.CNName),
         CRDResourceName(v2.CIDName),
    +    CRDResourceName(v2.CIDName),
    +    // TODO(louis) make this a conditional install
    +    // based on --enable-bgp-control-plane flag
    +    CRDResourceName(v2alpha1.BGPPName),
    +    CRDResourceName(v2alpha1.BGPPoolName),
     }
Updating RBAC Roles
~~~~~~~~~~~~~~~~~~~

Cilium is installed with a service account, and this service account should be given RBAC permissions to access your new CRDs. The following files should be updated to include permissions to create, read, update, and delete your new CRD:

::

    install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml
    install/kubernetes/cilium/templates/cilium-operator/clusterrole.yaml
    install/kubernetes/cilium/templates/cilium-preflight/clusterrole.yaml

Here is a diff updating the Agent's cluster role template to include our new BGP CRDs:

.. code-block:: diff

    diff --git a/install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml b/install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml
    index 9878401a81..5ba6c30cd7 100644
    --- a/install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml
    +++ b/install/kubernetes/cilium/templates/cilium-agent/clusterrole.yaml
    @@ -102,6 +102,8 @@ rules:
       - ciliumlocalredirectpolicies/finalizers
       - ciliumendpointslices
    +  - ciliumbgppeeringpolicies
    +  - ciliumbgploadbalancerippools
       verbs:
       - '*'
     {{- end }}

It is important to note that neither the Agent nor the Operator installs these manifests into the Kubernetes cluster. This means that when testing your CRD, the updated ``clusterrole`` must be written to the cluster manually. Also note that you should be specific about which verbs are added to the Agent's cluster role; this ensures a good security posture and follows best practice.

A convenient script for this follows:

.. code-block:: bash

    createTemplate(){
        if [ -z "${1}" ]; then
            echo "Commit SHA not set"
            return
        fi
        ciliumVersion=${1}
        cd <CILIUM ROOT DIR>  # <----- MODIFY THIS LINE
        cd install/kubernetes
        CILIUM_CI_TAG="${1}"
        helm template cilium ./cilium \
            --namespace kube-system \
            --set image.repository=quay.io/cilium/cilium-ci \
            --set image.tag=$CILIUM_CI_TAG \
            --set operator.image.repository=quay.io/cilium/operator \
            --set operator.image.suffix=-ci \
            --set operator.image.tag=$CILIUM_CI_TAG \
            --set clustermesh.apiserver.image.repository=quay.io/cilium/clustermesh-apiserver-ci \
            --set clustermesh.apiserver.image.tag=$CILIUM_CI_TAG \
            --set hubble.relay.image.repository=quay.io/cilium/hubble-relay-ci \
            --set hubble.relay.image.tag=$CILIUM_CI_TAG > /tmp/cilium.yaml
        echo "run kubectl apply -f /tmp/cilium.yaml"
    }

The above script renders the Cilium manifests, including the newest ``clusterrole``; applying ``/tmp/cilium.yaml`` installs them wherever your ``kubectl`` is pointed.
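Following the advice above about being specific with verbs, a standalone ClusterRole rule for the new CRDs might look like the sketch below. This is a hypothetical excerpt, not Cilium's shipped template; the read-only verb list is an assumption you would tailor to what the Agent actually needs:

.. code-block:: yaml

    # Hypothetical excerpt: prefer an explicit verb list over '*'
    # for the new BGP CRDs (resource names taken from the diff above).
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: cilium-bgp-example   # illustrative name
    rules:
      - apiGroups:
          - cilium.io
        resources:
          - ciliumbgppeeringpolicies
          - ciliumbgploadbalancerippools
        verbs:
          - get
          - list
          - watch

Granting only ``get``, ``list``, and ``watch`` is sufficient for a component that merely observes the resources; add mutating verbs only if the component writes them.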
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _periodic_duties:

Periodic duties
===============

Some members of the Cilium organization have rotational duties that change periodically.

Release managers
----------------

Release managers take care of the patch releases for each supported stable branch of Cilium. They typically coordinate in ``#launchpad`` on `Cilium Slack`_.

Backporters
-----------

Backporters handle backports to Cilium's supported stable branches. They typically coordinate in ``#launchpad`` on `Cilium Slack`_. The :ref:`backport_process` provides some guidance on how to backport changes.

Triagers
--------

Triagers take care of several tasks:

- They push and merge contributions from community contributors
- They review updates to files without a dedicated code owner
- They triage bugs, which means they interact with reporters until the issue is clear and, when possible, apply the label associated with the corresponding working group
- They keep an eye on `Cilium Slack`_, to try and answer questions from the community

They are members of the `TopHat team`_ on GitHub.

.. _TopHat team: https://github.com/orgs/cilium/teams/tophat/members

CI Health managers
------------------

CI Health managers monitor the status of the CI, track down flakes, and ensure that CI checks keep running smoothly. They typically coordinate in ``#testing`` on `Cilium Slack`_.
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _reviewer_committer:

Contributing as a Reviewer or Committer
=======================================

Some contributors have specific roles, such as reviewers or committers. The following resources provide guidance for some specific tasks attached to those roles. Refer to `Cilium's Contributor Ladder`_ for details about the different roles.

.. _Cilium's Contributor Ladder: https://github.com/cilium/community/blob/main/CONTRIBUTOR-LADDER.md

.. toctree::
   :maxdepth: 2

   review_process
   review_docs
   review_vendor
   duties
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _review_process:

Pull requests review process for committers
===========================================

Review process
--------------

.. note::

   These instructions assume that reviewers are members of the Cilium GitHub organization. This is required to obtain the privileges to modify GitHub labels on the pull request. See `Cilium's Contributor Ladder`_ for details.

.. _Cilium's Contributor Ladder: https://github.com/cilium/community/blob/main/CONTRIBUTOR-LADDER.md

#. Find Pull Requests (PRs) needing a review `from you`_, or `from one of your teams`_.

#. If this PR was opened by a contributor who is not part of the Cilium organization, assign yourself to the PR and keep track of it to ensure it gets reviewed and merged. If the contributor is a Cilium committer, then they are responsible for getting the PR ready to be merged by addressing review comments and resolving all CI checks for "Required" workflows.

   If this PR is a backport PR (typically labeled ``kind/backport``) and no one else has reviewed it, review the changes as a sanity check. If any individual commits deviate from the original patch, request review from the original author to validate that the backport was correctly applied.

#. Review overall correctness of the PR according to the rules specified in the section :ref:`submit_pr`.

#. Set the labels accordingly. A bot called maintainer's little helper might automatically help you with this.
   +-------------------------------+----------------------------------------+
   | Labels                        | When to set                            |
   +===============================+========================================+
   | ``dont-merge/needs-sign-off`` | Some commits are not signed off        |
   +-------------------------------+----------------------------------------+
   | ``dont-merge/needs-rebase``   | PR is outdated and needs to be rebased |
   +-------------------------------+----------------------------------------+

#. Validate that bugfixes are marked with ``kind/bug`` and validate whether the assessment of backport requirements as requested by the submitter conforms to the :ref:`backport_criteria`.

   +------------------------+----------------------------------------------------+
   | Labels                 | When to set                                        |
   +========================+====================================================+
   | ``needs-backport/X.Y`` | PR needs to be backported to these stable releases |
   +------------------------+----------------------------------------------------+

#. If the PR is subject to backport, validate that the PR does not mix bugfix and refactoring of code, as mixing them heavily complicates the backport process. Ask for the PR to be split.

#. Validate the ``release-note/*`` label and check the release note suitability. Release notes are passed through the dedicated ``release-note`` block (see :ref:`submit_pr`), or through the PR title if this block is missing. To check whether the notes are suitable, put yourself into the perspective of a future release notes reader who lacks context, and ensure the title is precise but brief.
   +-----------------------------------+-------------------------------------------------------------------------------------------------------+
   | Labels                            | When to set                                                                                           |
   +===================================+=======================================================================================================+
   | ``dont-merge/needs-release-note`` | Do NOT merge PR, needs a release note                                                                 |
   +-----------------------------------+-------------------------------------------------------------------------------------------------------+
   | ``release-note/bug``              | This is a non-trivial bugfix and is a user-facing bug                                                 |
   +-----------------------------------+-------------------------------------------------------------------------------------------------------+
   | ``release-note/major``            | This is a major feature addition, e.g. Add MongoDB support                                            |
   +-----------------------------------+-------------------------------------------------------------------------------------------------------+
   | ``release-note/minor``            | This is a minor feature addition, e.g. Add support for a Kubernetes version                           |
   +-----------------------------------+-------------------------------------------------------------------------------------------------------+
   | ``release-note/misc``             | This is a non-user-facing change, e.g. Refactor endpoint package, a bug fix of a non-released feature |
   +-----------------------------------+-------------------------------------------------------------------------------------------------------+
   | ``release-note/ci``               | This is a CI feature or bug fix                                                                       |
   +-----------------------------------+-------------------------------------------------------------------------------------------------------+

#. Check for upgrade compatibility impact and, if in doubt, set the label ``upgrade-impact`` and discuss in `Cilium Slack`_'s ``#development`` channel or in the weekly meeting.
   +--------------------+--------------------------------------------------+
   | Labels             | When to set                                      |
   +====================+==================================================+
   | ``upgrade-impact`` | The code changes have a potential upgrade impact |
   +--------------------+--------------------------------------------------+

#. When submitting a review, provide explicit approval or request specific changes whenever possible. Clear feedback indicates whether contributors must take action before a PR can merge. If you need more information before you can approve or request changes, you can leave comments seeking clarity. If you do not explicitly approve or request changes, it is best practice to raise awareness about the discussion so that others can participate. Here are some ways you can raise awareness:

   - Re-request review from codeowners in the PR
   - Raise the topic for discussion in Slack or during community meetings

   When requesting changes, summarize your feedback for the PR, including overall issues for the contributor to consider and/or encouragement for what the contributor is already doing well.

#. When all review objectives for all ``CODEOWNERS`` are met, all CI tests have passed, and all reviewers have approved the requested changes, you can merge the PR by clicking on the "Rebase and merge" button.

Reviewer Teams
--------------

Every reviewer, including committers in the `committers team`_, belongs to one or more `teams in the Cilium organization <cilium_teams_>`_. If you would like to add or remove yourself from any team, please submit a PR against the `community repository`_. Once a contributor opens a PR, GitHub automatically picks which teams should review the PR using the ``CODEOWNERS`` file. Each reviewer can see the PRs they need to review by filtering by reviews requested. A good filter is provided in this `link <user_review_filter_>`_, so make sure to bookmark it.
Reviewers are expected to focus their review on the areas of the code where GitHub requested their review. For small PRs, it may make sense to simply review the entire PR. However, if the PR is quite large, it can help to narrow the focus to one particular aspect of the code. When leaving a review, share which areas you focused on and which areas you think other reviewers should look into. This helps others focus on aspects of the review that have not been covered as deeply.

Belonging to a team does not mean that a reviewer needs to know every single line of code the team is maintaining. Once you have reviewed a PR, if you feel that another pair of eyes is needed, re-request a review from the appropriate team. In the following example, the reviewer belonging to the CI team is re-requesting a review so that other team members can review the PR. This allows other members of the CI team to see the PR among the PRs that require review in the `filter <team_review_filter_>`_.

.. image:: ../../../images/re-request-review.png
   :align: center
   :scale: 50%

When all review objectives for all ``CODEOWNERS`` are met, all required CI tests have passed, and a proper release label is set, a PR may be merged by any committer with access rights to click the green merge button. Maintainer's little helper may set the ``ready-to-merge`` label automatically to recognize the state of the PR. Periodically, a rotating assigned committer will review the list of PRs that are marked ``ready-to-merge``.

.. _committers team: https://github.com/orgs/cilium/teams/committers/members
.. _community repository: https://github.com/cilium/community
.. _cilium_teams: https://github.com/orgs/cilium/teams/team/teams
.. _maintainers: https://github.com/orgs/cilium/teams/cilium-maintainers/members
.. _user_review_filter: https://github.com/cilium/cilium/pulls?q=is%3Apr+is%3Aopen+draft%3Afalse+user-review-requested%3A%40me+sort%3Aupdated-asc
.. _team_review_filter: https://github.com/cilium/cilium/pulls?q=is%3Apr+is%3Aopen+draft%3Afalse+review-requested%3A%40me+sort%3Aupdated-asc

Code owners
-----------

.. include:: ../../../codeowners.rst
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _review_vendor:

****************************
Reviewing for @cilium/vendor
****************************

What is @cilium/vendor?
=======================

Team `@cilium/vendor <vendor_team_>`_ is a GitHub team of Cilium contributors who are responsible for maintaining the good state of Go dependencies for Cilium and its related projects by reviewing Pull Requests (PRs) that update files related to dependency declaration:

* `go.mod <go_dot_mod_>`_
* `go.sum <go_dot_sum_>`_
* `vendor/ <vendor_slash_>`_

Each time a contributor opens a PR modifying these files, GitHub automatically assigns one member of the team for review. Open Pull Requests awaiting reviews from @cilium/vendor are `listed here <vendor_to_review_>`_.

To join the team, you must be a Cilium Reviewer. See `Cilium's Contributor Ladder <ladder_>`_ for details on the requirements and the application process.

The team has a dedicated Slack channel in the Cilium Community Slack Workspace named `#dev-vendor <dev_vendor_slack_>`_, which can be used for starting discussions and asking questions about dependency management for Cilium and its related projects.

.. _vendor_team: https://github.com/orgs/cilium/teams/vendor
.. _go_dot_mod: https://github.com/cilium/cilium/blob/main/go.mod
.. _go_dot_sum: https://github.com/cilium/cilium/blob/main/go.sum
.. _vendor_slash: https://github.com/cilium/cilium/blob/main/vendor
.. _vendor_to_review: https://github.com/pulls?q=is%3Aopen+is%3Apr+team-review-requested%3Acilium%2Fvendor+archived%3Afalse+org%3Acilium+
.. _ladder: https://github.com/cilium/community/blob/main/CONTRIBUTOR-LADDER.md
.. _dev_vendor_slack: https://cilium.slack.com/archives/C07GZTL0Z1P

Reviewing Pull Requests
=======================

This section describes some of the processes and expectations for reviewing PRs on behalf of @cilium/vendor. Note that :ref:`the generic PR review process for Committers <review_process>` still applies, even though it is not specific to dependencies.

Existing Dependencies
---------------------

Updates to existing dependencies most commonly occur through PRs opened by `Renovate <renovate_>`_, a third-party service used throughout the Cilium organization. Renovate continually checks repositories for out-of-date dependencies and opens new PRs to update any it finds.

When reviewing PRs that update an existing dependency, members of the @cilium/vendor team are required to ensure that the update does not include any breaking changes or licensing issues. These checks are facilitated via GitHub Actions CI workflows, which are triggered by commenting ``/test`` within a PR. See the CI / GitHub Actions documentation for more information on their use.

.. _renovate: https://docs.renovatebot.com

New Dependencies
----------------

When a new dependency is added as part of a PR, the @cilium/vendor team will be assigned to ensure the new dependency meets the following criteria:

1. The new dependency must add functionality that is not already provided, in order of preference, by Go's standard library, an internal package of the project, or an existing dependency.

2. The functionality provided by the new dependency must be non-trivial to re-implement manually.

3. The new dependency must be actively maintained, having new commits and/or releases within the past year.

4. The new dependency must appear to be of generally good quality, having a strong user base, automated testing with high code coverage, and documentation.

5. The new dependency must have a license which is allowed by the `CNCF <cncf_>`_, either one of the `generally approved licenses <allowed_licenses_>`_ or one that is allowed via `exception <license_exceptions_>`_.
   An automated CI check is in place to help verify this requirement, but it may need updating as the list of licenses allowed by the CNCF changes and as Cilium's dependencies change. The source for the license check tool can be found `here <licensecheck_>`_.

These criteria ensure the long-term success of the project by justifying the inclusion of the new dependency into the project's codebase.
.. _cncf: https://www.cncf.io
.. _allowed_licenses: https://github.com/cncf/foundation/blob/main/allowed-third-party-license-policy.md
.. _license_exceptions: https://github.com/cncf/foundation/tree/main/license-exceptions
.. _licensecheck: https://github.com/cilium/cilium/blob/main/tools/licensecheck/allowed.go

Cilium Imports
--------------

A subset of the repositories the @cilium/vendor team is responsible for import code from cilium/cilium as a dependency. A complication in this relationship is the usage of `replace directives <replace_directives_>`_ in the `cilium/cilium go.mod file <go_dot_mod_>`_. Replace directives are only applied to the main module's go.mod file and do not carry over when the module is imported by another module. This creates the need for replace directives used in the cilium/cilium go.mod file to be synced into any module which imports cilium/cilium as a dependency.

The vendor team is therefore responsible for explicitly discouraging the use of replace directives where possible, due to the extra maintenance burden that they incur. A replace directive may be used if a required change to an imported library is in the process of being upstreamed and a fork of the upstream library is used as a temporary alternative until the upstream library is released with the required change. The developer introducing the replace directive should ensure that the replace directive will be removed before the next release, even if it involves creating a fork of the upstream library and modifying import statements of the library to point to the fork.

When a replace directive is added to the go.mod file, the vendor team is responsible for the following:

1. A comment is added above the replace directive in the go.mod file describing the reason it was added.

2. An issue is created in the project's repository with a ``release-blocker`` label attached, tracking the removal of the replace directive before the next release of the project.
The issue should be assigned to the developer who added the replace directive. 3. Ensuring that replace directives are synced when reviewing PRs which update the version of a cilium/cilium dependency. If a change that is required to be made to an imported library cannot be upstreamed, the library's import in the go.mod file should be changed to directly use a fork of the library containing the change, avoiding the need for a replace directive. For an example of this change, see `cilium/cilium#27582 `\_. .. \_replace\_directives: https://go.dev/ref/mod#go-mod-file-replace .. \_cilium\_cilium\_27582: https://github.com/cilium/cilium/pull/27582 | https://github.com/cilium/cilium/blob/main//Documentation/contributing/development/reviewers_committers/review_vendor.rst | main | cilium | [
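To make the convention above concrete, here is a sketch of what a commented,
temporary replace directive might look like in a ``go.mod`` file. The module
paths, fork name, version, and issue number are hypothetical placeholders,
not real Cilium dependencies:

.. code-block:: go

    // Temporary fork carrying an unreleased upstream fix.
    // Tracked for removal before the next release in issue #NNN
    // (labeled release-blocker).
    replace github.com/example/somelib => github.com/examplefork/somelib v0.1.1

Because replace directives in a dependency's ``go.mod`` file are ignored, a
module that imports cilium/cilium would need to copy any such directive from
the cilium/cilium ``go.mod`` file into its own.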
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _review_docs:

************************************
Reviewing for @cilium/docs-structure
************************************

What is @cilium/docs-structure?
===============================

Team `@cilium/docs-structure`_ is a GitHub team of Cilium contributors who
are responsible for maintaining the good state of the project's
documentation, by reviewing Pull Requests (PRs) that update the
documentation. Each time a non-draft PR touching files owned by the team
opens, GitHub automatically assigns one member of the team for review. Open
Cilium Pull Requests awaiting reviews from @cilium/docs-structure are
`listed here`_.

To join the team, you must be a Cilium Reviewer. See `Cilium's Contributor
Ladder`_ for details on the requirements and the application process.

.. _@cilium/docs-structure: https://github.com/orgs/cilium/teams/docs-structure
.. _listed here: https://github.com/cilium/cilium/pulls?q=is%3Apr+is%3Aopen+draft%3Afalse+team-review-requested%3Acilium%2Fdocs-structure
.. _Cilium's Contributor Ladder: https://github.com/cilium/community/blob/main/CONTRIBUTOR-LADDER.md

Reviewing Pull Requests
=======================

This section describes some of the process and expectations for reviewing
PRs on behalf of cilium/docs-structure. Note that the generic PR review
process for Committers applies, even though it is not specific to
documentation.

Technical contents
------------------

You are not expected to review the technical aspects of the documentation
changes in a PR. However, if you do have knowledge of the topic and you find
elements that are incorrect or missing, do flag them.
Documentation structure
-----------------------

One essential part of a review is to ensure that the contribution maintains
a coherent structure for the documentation. Ask yourself if the changes are
located on the right page, at the right place. This is especially important
if pages are added, removed, or shuffled around. If the addition is large,
consider whether the page needs to be split.

Consider also whether new text comes with a satisfactory structure. For
example, does it fit well with the surrounding context, or did the author
simply use a "note" box instead of trying to integrate the new information
into the relevant paragraph? See also the recommendations on documentation
structure for contributors.

Specific items to look out for
------------------------------

Backport labels
~~~~~~~~~~~~~~~

See the backport criteria for documentation changes. Mark the PR for
backports by setting the labels for all supported branches to which the
changes apply, that is to say, all supported branches containing the parent
features to which the modified sections relate.

CODEOWNERS updates
~~~~~~~~~~~~~~~~~~

All documentation sources are assigned to cilium/docs-structure for review
by default. However, when a contributor creates a new page, consider whether
it should be covered by another team as well, so that this other team can
review the technical aspects. If this is the case, ask the author to update
the CODEOWNERS file.

Beta disclaimer
~~~~~~~~~~~~~~~

When a feature is advertised as Beta in the PR, make sure that the author
clearly indicates the Beta status in the documentation, both by mentioning
"(Beta)" in the heading of the section for the feature and by including the
dedicated banner, as follows:

.. code-block:: rst

    .. include:: /Documentation/beta.rst

Upgrade notes
~~~~~~~~~~~~~

When the PR introduces new user-facing options, metrics, or behavior that
affects upgrades or downgrades, ensure that the author summarizes the
changes with a note in ``Documentation/operations/upgrade.rst``.

Completeness
~~~~~~~~~~~~

Make sure that new or updated content is complete, with no TODOs.

Auto-generated reference documents
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When certain parts of the Cilium repository change, contributors may have to
update some auto-generated reference documents that are part of Cilium's
documentation, such as the command reference or the Helm reference. The CI
validates that these updates are present in the PR. If they are missing, you
may have to help contributors figure out what commands they need to run to
perform the updates.
These commands are usually provided in the logs of the GitHub workflows that
failed to pass.

Spell checker exceptions
~~~~~~~~~~~~~~~~~~~~~~~~

The Documentation checks include running a spell checker. This spell checker
uses a file, ``Documentation/spelling_wordlist.txt``, containing a list of
spelling exceptions to ignore. Team cilium/docs-structure is the owner of
this file. Usually, there is not much feedback to provide on updates to the
list of exceptions. However, it's useful for reviewers to know that:

- Entries are sorted alphabetically, with all words starting with uppercase
  letters coming before words starting with lowercase letters.
- Entries in the list of exceptions must be spelled correctly.
- Lowercase entries are case-insensitive for the spell checker, so reviewers
  should reject new entries with capital letters if the lowercase versions
  are already in the list.

Netlify preview
~~~~~~~~~~~~~~~

`Netlify`_ builds a new preview for each PR touching the documentation. You
are not expected to check the preview for each PR. However, if the PR
contains detailed formatting changes, such as nested blocks or directives,
or changes to tables or tabs, then it's good to validate that the changes
render as expected. Also check the preview if you have a doubt as to the
validity of the reStructuredText (RST) mark-up that the author uses.

The list of checks on the PR page contains a link to the Netlify preview. If
the preview build failed, the link leads to the build logs.

.. _Netlify: https://www.netlify.com/?attr=homepage-modal

Formatting
----------

Read Cilium's documentation style guide. Flag poor formatting or obvious
mistakes.
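For example, two frequent mark-up mistakes to flag are Markdown-style inline
code and Markdown-style links where RST syntax is needed. The snippets below
are illustrative, not taken from any Cilium page:

.. code-block:: rst

    Use ``double backquotes`` for inline code: single backquotes produce
    interpreted text in RST, not a literal. Write links as
    `Cilium docs <https://docs.cilium.io>`_ rather than the Markdown form
    [Cilium docs](https://docs.cilium.io), which RST renders verbatim.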
The syntax for RST is not always trivial, and some contributors make
mistakes, or they simply forget to use RST and employ Markdown mark-up
instead. Make sure authors fix such issues. Keep an eye on code-blocks: do
they include RST substitutions, and if so, do they use the right directive?
If not, do they use the right language? Beyond that, the amount of time you
spend on suggestions for improving formatting is up to you.

Grammar and style
-----------------

Read Cilium's documentation style guide. Flag obvious grammar mistakes. Try
to read the updated text as a user would. Ask the contributors to revise any
sentence that is too difficult to read or to understand.

@cilium/docs-structure aims to keep the documentation clean, consistent, and
in a clear and comprehensible state. User experience must always be as good
as possible. To achieve this objective, documentation updates must follow
best practices, such as the ones from the style guide. Reviewing PRs at
sufficient depth to flag all potential style improvements can be time
consuming, so the amount of effort that you put into style guidance is up to
you. There is no tooling in place to enforce particular style
recommendations.

Documentation build
===================

The build framework
-------------------

Here are the main resources involved or related to Cilium's documentation
build framework:

- Instructions for building the documentation locally
- ``Documentation/Makefile``, ``Documentation/Dockerfile``,
  ``Documentation/check-build.sh``
- Dependencies are in ``Documentation/requirements.txt``, which is generated
  from ``Documentation/requirements_min/requirements.txt``
- The Sphinx theme we use is `our own fork`_ of Read the Docs's theme

.. _our own fork: https://github.com/cilium/sphinx_rtd_theme

Relevant CI workflows
---------------------

Netlify preview
~~~~~~~~~~~~~~~

Documentation changes trigger the build of a new Netlify preview. If the
build fails, the PR authors or reviewers must investigate it. Ideally the
author should take care of this investigation.
In practice, contributors are not always familiar with RST or with our build
framework, so consider giving a hand.

Documentation build
~~~~~~~~~~~~~~~~~~~

Like the Netlify preview, the Documentation workflow runs on doc changes and
can raise missing updates on various generated pieces of documentation.

Checkpatch
~~~~~~~~~~

The Checkpatch workflow is part of the BPF tests and is not directly
relevant to documentation, but it may raise some patch formatting issues,
for example when the commit title is too long. So it should run on doc-only
PRs, like for any other PR.

Integration tests
~~~~~~~~~~~~~~~~~

Integration tests, be it on Travis or on GitHub Actions, are the only
workflows that rebuild the ``docs-builder`` image. Building this image is
necessary to validate changes to the ``Documentation/Dockerfile`` or to the
list of Python dependencies located in ``Documentation/requirements.txt``.
The GitHub workflow uses a pre-built image instead, and won't incorporate
changes to these files.

Integration tests also run a full build in the Cilium repository, including
the post-build checks, in particular ``Documentation/Makefile``'s ``check``
target. Therefore, integration tests are able to raise inconsistencies in
auto-generated files in the documentation.

Ready to merge
--------------

For PRs that only update documentation contents, the CI framework skips
tests that are not relevant to the changes. Authors or reviewers should
still trigger the CI suite by commenting with ``/test``, just like for any
other PR. Once all code owners for the PR have approved, and all tests have
passed, the PR should automatically receive the ``ready-to-merge`` label.
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _backport_process:

Backporting process
===================

.. _backport_criteria:

Backport Criteria
-----------------

Committers may nominate PRs that have been merged into ``main`` as
candidates for backport into stable releases if they affect the stable
production usage of community users.

Backport Criteria for Current Minor Release
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Criteria for inclusion into the next stable release of the current latest
minor version of Cilium, for example into a ``v1.2.z`` release prior to the
release of version ``v1.3.0``:

- All bugfixes
- Debug tool improvements

Backport Criteria for X.Y-1.Z and X.Y-2.Z
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Criteria for inclusion into the next stable release of the prior two minor
versions of Cilium, for example into a ``v1.0.z`` or ``v1.1.z`` release
prior to the release of version ``v1.3.0``:

- Security relevant fixes
- Major bugfixes relevant to the correct operation of Cilium
- Debug tool improvements

.. _backport_criteria_docs:

Backport Criteria for documentation changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Changes to Cilium's documentation should generally be backported to all
supported branches to which they apply, that is, all supported branches
containing the parent features to which the modified sections relate. The
motivation is that users can then simply look at the branch of the
documentation related to the version they are deploying, and find the latest
correct instructions for their version.

Proposing PRs for backporting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

PRs are proposed for backporting by adding a ``needs-backport/X.Y`` label to
them. Normally this is done by the author when the PR is created, or by one
of the maintainers when the PR is reviewed.
When proposing PRs that have already been merged, also add a comment to the
PR to ensure that the backporters are notified.

Marking PRs to be backported by the author
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For PRs which need to be backported, but are likely to run into conflicts or
other difficulties, the author has the option of adding the
``backport/author`` label. This will exclude the PR from backporting
automation, and the author is expected to perform the backport themselves.

Backporting Guide for the Backporter
------------------------------------

Cilium PRs that are marked with the label ``needs-backport/X.Y`` need to be
backported to the stable branch ``X.Y``. The following steps summarize the
process for backporting these PRs:

* One-time setup
* Preparing PRs for backport
* Cherry-picking commits into a backport branch
* Posting the PR and updating GitHub labels

.. _backport_setup:

One-time Setup
~~~~~~~~~~~~~~

#. Make sure you have a GitHub developer access token with the
   ``public_repos``, ``workflow``, and ``read:user`` scopes available. You
   can do this directly from https://github.com/settings/tokens or by
   opening GitHub and then navigating to: User Profile -> Settings ->
   Developer Settings -> Personal access token -> Generate new token.

#. The scripts referred to below need to be run on Linux; they do not work
   on macOS. It is recommended to create a container using
   ``contrib/backporting/Dockerfile``, as it will have all the correct
   versions of dependencies / libraries.

   .. code-block:: shell-session

       $ export GITHUB_TOKEN=<token>
       $ docker build -t cilium-backport contrib/backporting/.
       $ docker run -e GITHUB_TOKEN -v $(pwd):/cilium -v "$HOME/.ssh":/home/user/.ssh \
            -it cilium-backport /bin/bash

   .. note::

       If you are running on macOS and see a
       ``/home/user/.ssh/config: line 3: Bad configuration option: usekeychain``
       error message while running any of the backporting scripts, comment
       out the line ``UseKeychain yes``.

#. Once you have a setup ready, you need to configure git with your name and
   email address to be used in the commit messages:

   .. code-block:: shell-session

       $ git config --global user.name "John Doe"
       $ git config --global user.email johndoe@example.com

#. Add remotes for the Cilium upstream repository and your Cilium repository
   fork.
   .. code-block:: shell-session

       $ git remote add johndoe git@github.com:johndoe/cilium.git
       $ git remote add upstream https://github.com/cilium/cilium.git

#. Skip this step if you have created a setup using the pre-defined
   Dockerfile. This guide makes use of several tools to automate the
   backporting process. The basics require ``bash`` and ``git``, but to
   automate interactions with GitHub, further tools are required.

   +-----------------------------+-----------+----------------------------+
   | Dependency                  | Required? | Download Command           |
   +=============================+===========+============================+
   | bash                        | Yes       | N/A (OS-specific)          |
   +-----------------------------+-----------+----------------------------+
   | git                         | Yes       | N/A (OS-specific)          |
   +-----------------------------+-----------+----------------------------+
   | jq                          | Yes       | N/A (OS-specific)          |
   +-----------------------------+-----------+----------------------------+
   | python3                     | Yes       | Python Downloads           |
   +-----------------------------+-----------+----------------------------+
   | PyGithub                    | Yes       | ``pip3 install PyGithub``  |
   +-----------------------------+-----------+----------------------------+
   | GitHub hub CLI (>= 2.8.3)   | Yes       | N/A (OS-specific)          |
   +-----------------------------+-----------+----------------------------+

   Verify your machine is correctly configured by running:

   .. code-block:: shell-session

       $ go run ./tools/dev-doctor --backporting

Preparation
~~~~~~~~~~~

Pull requests that are candidates for backports to the X.Y stable release
are tracked through the following links:

* PRs with the needs-backport/X.Y label (\ |CURRENT_RELEASE|: :github-backport:`GitHub Link`)
* PRs with the backport-pending/X.Y label (\ |CURRENT_RELEASE|: :github-backport:`GitHub Link`)
* The X.Y GitHub project (\ |NEXT_RELEASE|: :github-project:`GitHub Link<>`)

Make sure that the GitHub labels are up-to-date, as this process will deal
with all commits from PRs that have the ``needs-backport/X.Y`` label set
(for a stable release version X.Y).

Creating the Backports Branch
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Check whether there are any outstanding backport PRs for the target
   branch. If there are already backports for that branch, create a thread
   in the #launchpad channel in Cilium Slack and reach out to the author to
   coordinate triage, review and merge of the existing PR into the target
   branch.

#. Run ``contrib/backporting/start-backport`` for the release version that
   you intend to backport PRs for. This will pull the latest repository
   commits from the Cilium repository (assumed to be the git remote
   ``origin``), create a new branch, and run the
   ``contrib/backporting/check-stable`` script to fetch the full set of PRs
   to backport.

   .. code-block:: shell-session

       $ GITHUB_TOKEN=xxx contrib/backporting/start-backport 1.0

   .. note::

       This command will leave behind a file in the current directory with a
       name based upon the release version and the current date, in the form
       ``vRELEASE-backport-YYYY-MM-DD.txt``, which contains a prepared
       backport pull-request description so you don't need to write one
       yourself.

#. Cherry-pick the commits using the ``main`` branch git SHAs listed,
   starting from the oldest (top), working your way down and fixing any
   merge conflicts as they appear. Note that for PRs that have multiple
   commits you will want to check that you are cherry-picking the oldest
   commits first. The ``cherry-pick`` script accepts multiple arguments, in
   which case it will attempt to apply each commit in the order specified on
   the command line until one cherry-pick fails or every commit is
   cherry-picked.

   .. code-block:: shell-session

       $ contrib/backporting/cherry-pick <oldest-sha> ...
       $ contrib/backporting/cherry-pick <newest-sha>

   Conflicts may be resolved by applying changes or by backporting other PRs
   to completely avoid conflicts. Backporting entire PRs is preferred if the
   changes in the dependent PRs are small. A stackoverflow.com question
   describes how to determine the original PR corresponding to a particular
   commit SHA in the GitHub UI. If a conflict is resolved by modifying a
   commit during backport, describe the changes made in the commit message.
   Collect these notes to add to the backport PR description when creating
   the PR. This helps to direct backport reviewers towards which changes may
   deviate from the original commits, to ensure that the changes are
   correctly backported. This can be fairly simple, for example inside the
   commit message of the modified commit::

       commit f0f09158ae7f84fc8d888605aa975ce3421e8d67
       Author: Joe Stringer
       Date:   Tue Apr 20 16:48:18 2021 -0700

           contrib: Automate digest PR creation

           [ upstream commit 893d0e7ec5766c03da2f0e7b8c548f7c4d89fcd7 ]

           [ Backporter's notes: Dropped conflicts in .github/ issue template ]

           There's still some interactive bits here just for safety, but one
           less step in the template.

           Signed-off-by: Joe Stringer

   **It is the backporter's responsibility to check that the backport
   commits they are preparing are identical to the original commits.** This
   can be achieved by preparing the commits, then running ``git show`` for
   both the original upstream commit and the prepared backport, and reading
   through the commits side-by-side, line-by-line, to check that the changes
   are the same. If there is any uncertainty about the backport, reach out
   to the original author directly to coordinate how to prepare the backport
   for the target branch.

#. For backporting commits that update cilium-builder and cilium-runtime
   images, the backporter builds new images as described in
   :ref:`update_cilium_builder_runtime_images`.

#. (Optional) If there are any commits or pull requests that are tricky or
   time-consuming to backport, consider reaching out for help on Cilium
   Slack. If a commit does not cherry-pick cleanly, please mention the
   necessary changes in the pull request description in the next section.

Creating the Backport Pull Request
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The backport pull-request may be created via CLI tools, or alternatively you
can use the GitHub web interface to achieve these steps.

Via Command-Line Tools (Recommended)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These steps require all of the tools described in the :ref:`backport_setup`
section above. The script pushes the git tree, creates the pull request, and
updates the labels for the PRs that are backported, based on the
``vRELEASE-backport-YYYY-MM-DD.txt`` file in the current directory.

.. code-block:: shell-session

    $ GITHUB_TOKEN=xxx contrib/backporting/submit-backport

The script takes up to three positional arguments::

    usage: submit-backport [branch version] [pr-summary] [your remote]

- The first parameter is the version of the branch against which the PR
  should be done, and defaults to the version passed to ``start-backport``.
- The second one is the name of the file containing the text summary to use
  for the PR, and defaults to the file created by ``start-backport``.
- The third one is the name of the git remote of your (forked) repository to
  which your changes will be pushed. It defaults to the git remote which
  matches ``github.com/<user>/cilium``.

Via GitHub Web Interface
^^^^^^^^^^^^^^^^^^^^^^^^

#. Push your backports branch to your fork of the Cilium repo.

   .. code-block:: shell-session

       $ git push -u <your remote> HEAD

#. Create a new PR from your branch towards the feature branch you are
   backporting to. Note that by default GitHub creates PRs against the
   ``main`` branch, so you will need to change it. The title and description
   for the pull request should be based upon the
   ``vRELEASE-backport-YYYY-MM-DD.txt`` file that was generated by the
   scripts above.

   .. note::

       The ``vRELEASE-backport-YYYY-MM-DD.txt`` file will include:

       .. code-block:: rst

           Once this PR is merged, a GitHub action will update the labels of these PRs:

           ```upstream-prs
           AAA
           BBB
           ```

       The ``upstream-prs`` tag is required, so add it if you manually write
       the message.
#. Label the new backport PR with the backport label for the stable branch,
   such as ``backport/X.Y``, as well as ``kind/backports``, so that it is
   easy to find backport PRs later.

#. Mark all PRs you backported with the backport pending label
   ``backport-pending/X.Y`` and clear the ``needs-backport/X.Y`` label. This
   can be done with the command printed out at the bottom of the output from
   the ``start-backport`` script above (``GITHUB_TOKEN`` needs to be set for
   this to work).

Running the CI Against the Pull Request
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To validate a cross-section of various tests against the PRs, backport PRs
should be validated in the CI by running all CI targets. This can be
triggered by adding a comment to the PR with exactly the text ``/test``, as
described in :ref:`trigger_phrases`. The comment must not contain any other
characters.

After the Backports are Merged
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After the backport PR is merged, the GitHub workflow "Call Backport Label
Updater" should take care of marking all backported PRs with the
``backport-done/X.Y`` label and clearing the ``backport-pending/X.Y``
label(s). Verify that the workflow succeeded by checking its runs in the
repository's GitHub Actions tab.

Backporting Guide for Others
----------------------------

Original Committers and Reviewers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Committers should mark PRs needing backport as ``needs-backport/X.Y``, based
on the backport criteria above. It is up to the reviewers to confirm that
the backport request is reasonable and, if not, to raise concerns on the PR
as comments.

In addition, if conflicts are foreseen or significant changes to the PR are
necessary for older branches, consider adding the ``backport/author`` label
to mark the PR to be backported by the author.

At some point, changes will be picked up in a backport PR and the committer
will be notified and asked to approve the backport commits. Confirm that:

#. All the commits from the original PR have indeed been backported.
#. In case of conflicts, the resulting changes look good.
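The identity check described above, which compares ``git show`` output for
the original and the backported commit, can be partially automated with
``git patch-id``, which hashes a commit's diff while ignoring commit
metadata. The following is a self-contained sketch in a throwaway
repository; the branch names, file, and commit messages are made up for the
demonstration and are not part of the Cilium workflow:

```shell
set -e
# Toy repository to illustrate the check (requires git).
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Backporter"
git config user.email "backporter@example.com"

# Simulate a development branch with a bugfix, and a stable branch
# taken before the fix landed.
echo base > file.txt
git add file.txt
git commit -qm "initial commit"
git branch stable

echo fix >> file.txt
git add file.txt
git commit -qm "bugfix on the development branch"
orig=$(git rev-parse HEAD)

# Backport: cherry-pick the fix onto the stable branch; -x records the
# original SHA in the commit message.
git checkout -q stable
git cherry-pick -x "$orig" >/dev/null

# patch-id hashes only the diff, so an unmodified backport yields the
# same ID as the original commit.
a=$(git show "$orig" | git patch-id --stable | cut -d' ' -f1)
b=$(git show HEAD | git patch-id --stable | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "backport matches original"
```

When a conflict forced the backport to deviate, the two patch IDs differ,
which is the cue to fall back to the side-by-side ``git show`` comparison
and to document the deviation in the commit message.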
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io Organization ============ Release Cadence --------------- New feature releases of Cilium are released on a cadence of around six months. Minor releases are typically designated by incrementing the ``Y`` in the version format ``X.Y.Z``. Three stable branches are maintained at a time: One for the most recent minor release, and two for the prior two minor releases. For each minor release that is currently maintained, the stable branch ``vX.Y`` on github contains the code for the next stable release. New patch releases for an existing stable version ``X.Y.Z`` are published incrementing the ``Z`` in the version format. New patch releases for stable branches are made periodically to provide security and bug fixes, based upon community demand and bugfix severity. Potential fixes for an upcoming release are first merged into the ``main`` branch, then backported to the relevant stable branches according to the :ref:`backport\_criteria`. The following sections describe in more detail the general guidelines that the release management team follows for Cilium. The team may diverge from this process at their discretion. Feature Releases ~~~~~~~~~~~~~~~~ There are several key dates during the feature development cycle of Cilium which are important for developers: \* Pre-release days: The Cilium release management team aims to publish a snapshot of the latest changes in the ``main`` branch on the first weekday of each month. This provides developers a target delivery date to incrementally ship functionality, and allows community members to get early access to upcoming features to test and provide feedback. Pre-releases may not be published when a release candidate or final stable release is being published. 
* Feature freeze: Around six weeks prior to a target feature release, the
  ``main`` branch is frozen for new feature contributions. The goal of the
  freeze is to focus community attention on stabilizing and hardening the
  upcoming release by prioritizing bugfixes, documentation improvements,
  and tests. In general, all new functionality that the community intends
  to distribute as part of the upcoming release must land in the ``main``
  branch prior to this date. Any bugfixes, docs changes, or testing
  improvements can continue to be merged as usual following this date.

* Release candidates: Following the feature freeze, the release management
  team publishes a series of release candidates. These candidates should
  represent the functionality and behaviour of the final release. The
  release management team encourages community participation in testing
  and providing feedback on the release candidates, as this feedback is
  crucial to identifying any issues that may not have been discovered
  during development. Problems identified during this period may be
  reported as known issues in the final release or fixed, subject to
  severity and community contributions towards solutions. Release
  candidates are typically published every two weeks until the final
  release is published.

* Branching and feature thaw: Within two weeks of the feature freeze, the
  release management team aims to create a new branch to manage updates
  for the new stable feature release series. After this, all pull requests
  for the upcoming feature release must be labeled with a
  ``needs-backport/X.Y`` label, with ``X.Y`` matching the target minor
  release version, to trigger the backporting process and ensure the
  changes are ported to the release branch. The ``main`` branch is then
  unfrozen for feature changes and refactoring.
  Until the final release date, it is better to avoid invasive refactoring
  or significant new feature additions, in order to minimize the impact on
  backporting for the upcoming release during that period.

* Stable release: The new feature release ``X.Y.0`` version is published.
  All restrictions on submissions are lifted, and the cycle begins again.

Stable Releases
~~~~~~~~~~~~~~~

The Cilium release management team typically aims to publish fresh
releases for all maintained stable branches around the middle of each
month. All changes that are merged into the target branch by the first
week of the month should typically be published in that month's patch
release. Changes which do not land in the target branch by that time may
be deferred to the following month's patch release.

For more information about how patches are merged into the ``main`` branch
and subsequently backported to stable branches, see the
:ref:`backport_process`.
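The release process above hinges on the ``X.Y.Z`` scheme: feature (minor)
releases bump ``Y``, patch releases bump ``Z``. As a small illustrative
sketch (not part of Cilium's release tooling; the function name and
behaviour are assumptions made only for illustration), the distinction can
be expressed as:

```python
# Hypothetical helper, not part of Cilium's release tooling: classify the
# bump between two "X.Y.Z" versions under the scheme described above.
def classify_bump(old: str, new: str) -> str:
    ox, oy, oz = (int(p) for p in old.split("."))
    nx, ny, nz = (int(p) for p in new.split("."))
    if (nx, ny) == (ox, oy) and nz == oz + 1:
        return "patch"   # e.g. a monthly stable release on a vX.Y branch
    if nx == ox and ny == oy + 1 and nz == 0:
        return "minor"   # e.g. a new feature release X.(Y+1).0
    return "other"

print(classify_bump("1.15.3", "1.15.4"))
print(classify_bump("1.15.3", "1.16.0"))
```
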
.. _docs_guide:

Documentation
-------------

This section provides guidance on the structure of Cilium documentation,
describes its style, and explains how to test it. Before contributing,
please review the structure recommendations and style guide. See the
:ref:`clone and provision environment section ` to learn how to fork and
clone the repository.

.. toctree::
   :maxdepth: 2
   :glob:

   docsstructure
   docsstyle
   docstest
   docsframework

The best way to get help if you get stuck is to ask a question on
`Cilium Slack`_. With Cilium contributors across the globe, there is
almost always someone available to help.
.. _docs_structure_recommendations:

******************************************
Recommendations on documentation structure
******************************************

This page contains recommendations to help contributors write better
documentation. The goal of better documentation is a better user
experience.

If you take only one thing away from this guide, let it be this: don't
document your feature. Instead, **document how your feature guides users
on their journey.**

Maintaining good information architecture
-----------------------------------------

When you add, update, or remove documentation, consider how the change
affects the site's `information architecture`_. Information architecture
is what shapes a user's experience and their ability to accomplish their
goals with Cilium. If an addition, change, or removal would significantly
alter a user's journey or prevent their success, make sure to flag the
change clearly in :ref:`upgrade notes `.

.. _information architecture: https://www.usability.gov/what-and-why/information-architecture.html

Adding a new page
-----------------

When you need to write completely new content, create one or more new
pages as one of the three following types:

- Concept (no steps, just knowledge)
- Task (how to do one discrete thing)
- Tutorial (how to combine multiple features to accomplish specific goals)

A *concept* explains some aspect of Cilium. Typically, concept pages don't
include sequences of steps. Instead, they link to tasks or tutorials. For
an example of a concept page, see :ref:`Routing `.

A *task* shows how to do one discrete thing with Cilium. Task pages give
readers a sequence of steps to perform.
A task page can be short or long, but must remain focused on the task's
singular goal. Task pages can blend brief explanations with the steps to
perform, but if you need to provide a lengthy explanation, write a
separate concept and link to it. Link related task and concept pages to
each other. For an example of a task page, see
:ref:`Migrating a Cluster to Cilium `.

A *tutorial* shows how to accomplish a goal using multiple Cilium
features. Tutorials are flexible: for example, a tutorial page could
provide several discrete sequences of steps to perform, or show how
related pieces of code could interact. Tutorials can blend brief
explanations with the steps to perform, but lengthy explanations should
link to related concept topics. For an example of a tutorial page, see
:ref:`Inspecting Network Flows with the CLI `.

.. note::

   You may need to add multiple pages to support a new feature. For
   example, if a new feature requires an explanation of its underlying
   ideas, add a concept page as well as a task page.

Updating an existing page
-------------------------

Consider whether you can update an existing page or whether to add a new
one. If adding or updating content to a page keeps it centered on a single
concept or task, then you can update an existing page. If adding or
updating content to a page expands it to include multiple concepts or
tasks, then add new pages for individual concepts and tasks.

If you're moving a page and changing its URL, make sure you update every
link to that page in the documentation. Ask on `Cilium Slack`_
(``#sig-docs``) for someone to set up an HTTP redirection from the old URL
to the new one, if necessary.

Removing content and entire pages
---------------------------------

Removing stale content is a part of maintaining healthy docs. Whether
you're removing stale content on a page or removing a page altogether,
make sure to consider the impact of removal on a user's journey.
Specific considerations include:

- Updating any links to removed content
- Ensuring users have clear guidance on what to do next

.. note::

   Without a clearly defined user journey, evaluation is largely
   qualitative. Practice empathy: would someone succeed if they had your
   skills but not your context?
.. _docs_framework:

***********************
Documentation framework
***********************

This page contains notes on the framework in use for Cilium
documentation. Its objective is to help contributors understand the tools
and build process for the documentation, and to help maintain it.

Alas, this sort of document goes quickly out of date. When in doubt of its
accuracy, double-check the codebase to verify information. If you find
discrepancies, please update this page.

Sphinx
======

Cilium relies on `Sphinx`_ to generate its documentation.

.. _Sphinx: https://www.sphinx-doc.org

Sphinx usage
------------

Contributors do not usually call Sphinx directly, but rather use the
Makefile targets defined in ``Documentation/Makefile``. For instructions
on how to quickly render the documentation, see
:ref:`testing documentation `.

Sphinx features
---------------

Here are some specific Sphinx features used in Cilium's documentation:

- `Tab groups`_
- `OpenAPI`_ documentation generation
- Mark-up languages: reStructuredText (rST) and Markdown (`MyST`_ flavor)
- Substitutions, for example:

  - ``|SCM_WEB|``
  - ``|CHART_VERSION|``

- Multiple versions (for all supported branches, plus two aliases:
  ``stable`` and ``latest``)

.. _OpenAPI: https://github.com/sphinx-contrib/openapi
.. _Tab groups: https://github.com/executablebooks/sphinx-tabs/
.. _MyST: https://myst-parser.readthedocs.io

Sphinx version
--------------

The version of Sphinx in use is defined in
``Documentation/requirements-min/requirements.txt``. For more details, see
the :ref:`section on requirements `.

Auto-generated contents
=======================

Some contents are automatically generated at build time.
File ``Documentation/Makefile`` contains the following target, shown here
in a simplified version, which regenerates a number of documents and then
checks that they are all up-to-date:

.. code-block:: makefile

   check: builder-image api-flaggen update-cmdref update-crdlist update-helm-values update-codeowners update-redirects
           ./check-cmdref.sh
           ./check-helmvalues.sh
           $(DOCKER_RUN) ./check-examples.sh # Runs "cilium policy validate" and "yamllint"
           ./check-codeowners.sh
           ./check-flaggen.sh
           ./check-crdlist.sh
           ./check-redirects.sh

Regeneration happens when the different dependency targets for ``check``
are run. They are:

- ``api-flaggen``

  - Runs ``go run tools/apiflaggen``
  - Generates ``Documentation/configuration/api-restrictions-table.rst``

- ``update-cmdref``

  - Runs ``./update-cmdref.sh``
  - Includes running various binaries with ``--cmdref``
  - Generates ``Documentation/cmdref/*``

- ``update-crdlist``

  - ``make -C ../ generate-crd-docs``
  - Runs ``tools/crdlistgen/main.go``
  - Parses docs to list CRDs
  - Generates ``Documentation/crdlist.rst``

- ``update-helm-values``

  - Generates from ``install/kubernetes``
  - Generates ``Documentation/helm-values.rst``

- ``update-codeowners``

  - ``./update-codeowners.sh``
  - Synchronizes teams description from ``CODEOWNERS``
  - Generates ``Documentation/codeowners.rst``

- ``update-redirects``

  - ``make -C Documentation update-redirects``
  - Automatically generates redirects for moved files, based on git
    history
  - Validates that all moved or deleted files have a redirect
  - Generates ``Documentation/redirects.txt``

Other auto-generated contents include:

- OpenAPI reference

  - YAML generated from the ``Makefile`` at the root of the repository
  - Relies on the contents of ``api``, linked as ``Documentation/_api``
  - Processed and included via a dedicated add-on, from
    ``Documentation/api.rst``: ``.. openapi:: ../api/v1/openapi.yaml``

- gRPC API reference

  - Markdown generated from the main ``Makefile`` at the root of the
    repository
  - Relies on the contents of ``api``, linked as ``Documentation/_api``
  - Included from ``Documentation/grpcapi.rst``

- SDP gRPC API reference

  - Markdown generated from the main ``Makefile`` at the root of the
    repository
  - Relies on the contents of ``api``, linked as ``Documentation/_api``
  - Included from ``Documentation/sdpapi.rst``

Build system
============

Makefile targets
----------------

Here are the main ``Makefile`` targets related to documentation to run
from the root of the Cilium repository, as well as some indications on
what they call:

- ``make`` -> ``all: ... postcheck`` -> ``make -C Documentation check``:
  Build Cilium and validate the documentation via the ``postcheck`` target
- ``make -C Documentation html``: Render the documentation as HTML
- ``make test-docs`` -> ``make -C Documentation html``: Render the
  documentation as HTML
- ``make -C Documentation live-preview``: Build the documentation and
  start a server for local preview
- ``make render-docs`` -> ``make -C Documentation live-preview``: Build
  the documentation and start a server for local preview
Generating documentation
------------------------

- The ``Makefile`` builds the documentation using the ``docs-builder``
  Docker image.
- The build includes running ``check-build.sh``. This script:

  a. Runs the linter (``rstcheck``), unless the environment variable
     ``SKIP_LINT`` is set
  b. Runs the spell checker
  c. Builds the HTML version of the documentation
  d. Exits with an error if any unexpected warning or error is found

Tweaks and tools
================

See also file ``Documentation/conf.py``.

Spell checker
-------------

The build system relies on Sphinx's `spell-checker module`_ (considered a
`builder`_ in Sphinx). The spell checker uses a list of known exceptions
contained in ``Documentation/spelling_wordlist.txt``. Words in the list
that are written exclusively in lowercase, or exclusively in uppercase,
are case-insensitive exceptions for spell-checking. Words with mixed case
are case-sensitive. Keep this file sorted alphabetically.

To add new entries to the list, run
``Documentation/update-spelling_wordlist.sh``. To clean up obsolete
entries, first make sure the spell checker reports no issue on the current
version of the documentation. Then remove all obsolete entries from the
file, run the spell checker, and re-add all reported exceptions.

Cilium's build framework uses a custom filter for the spell checker, for
spelling ``WireGuard`` correctly as ``WireGuard``, or ``wireguard`` in
some contexts, but never as ``Wireguard``. This filter is implemented in
``Documentation/_exts/cilium_spellfilters.py`` and registered in
``Documentation/conf.py``.

.. _spell-checker module: https://github.com/sphinx-contrib/spelling
.. _builder: https://www.sphinx-doc.org/en/master/usage/builders

Redirect checker/builder
------------------------

The build system relies on the Sphinx extension `sphinxext-rediraffe`_
(considered a `builder`_ in Sphinx) for redirects. The redirect checker
uses the git history to determine if a file has been moved or deleted, in
order to validate that a redirect for the file has been created in
``Documentation/redirects.txt``. Redirects are defined as a mapping from
the original source file location to the new location within the
``Documentation/`` directory.

The extension uses ``rediraffe_branch`` as the git ref to diff against to
determine which files have been moved or deleted. Any changes prior to the
ref specified by ``rediraffe_branch`` will not be detected.

To add new entries to ``redirects.txt``, run
``make -C Documentation update-redirects``. If a file has been deleted, or
has been moved and is not similar enough to the original source file, then
you must manually update ``redirects.txt`` with the correct mapping.

.. _sphinxext-rediraffe: https://github.com/wpilibsuite/sphinxext-rediraffe

:spelling:word:`rstcheck`
-------------------------

The documentation framework relies on `rstcheck`_ to validate the rST
formatting. There is a list of warnings to ignore, in part because the
linter has bugs. The call to the tool, and this list of exceptions, are
configured in ``Documentation/check-build.sh``.

.. _rstcheck: https://rstcheck.readthedocs.io

Link checker
------------

The documentation framework has a link checker under
``Documentation/check-links.sh``. However, due to some unsolved issues, it
does not run in CI. See :gh-issue:`27116` for details.

Web server for local preview
----------------------------

Launch a web server to preview the generated documentation locally with
``make render-docs``. For more information on this topic, see
:ref:`testing documentation `.

Custom Sphinx roles
-------------------

The documentation defines several custom roles:

- ``git-tree``
- ``github-project``
- ``github-backport``
- ``gh-issue``
- ``prev-docs``

Calling these roles helps insert links based on specific URL templates,
via the `extlinks`_ extension. They are all configured in
``Documentation/conf.py``. They should be used wherever relevant, to
ensure that formatting for all links to the related resources remains
consistent.

.. _extlinks: https://www.sphinx-doc.org/en/master/usage/extensions/extlinks.html

Custom Sphinx directives
------------------------

Cilium's documentation does not implement custom directives as of this
writing.
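Custom roles of this kind are typically defined with Sphinx's
``extlinks`` extension. As a hedged sketch (the URL template and caption
below are illustrative assumptions, not necessarily Cilium's actual
configuration in ``Documentation/conf.py``):

```python
# Sketch of an extlinks configuration for a Sphinx conf.py. Each entry
# maps a role name to a (URL template, caption template) pair; both
# templates are illustrative, not Cilium's actual values.
extensions = ["sphinx.ext.extlinks"]

extlinks = {
    # Usage in rST: :gh-issue:`27116` -> a link to issue 27116.
    "gh-issue": ("https://github.com/cilium/cilium/issues/%s",
                 "GitHub issue %s"),
}

# The role substitutes its argument into the URL template:
print(extlinks["gh-issue"][0] % "27116")
```
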
Custom extensions
-----------------

Cilium's documentation uses custom extensions for Sphinx, implemented
under ``Documentation/_exts``.

- One defines the custom filters for the spell checker.
- One patches Sphinx's HTML translator to open all external links in new
  tabs.

Google Analytics
----------------

The documentation uses Google Analytics to collect metrics. This is
configured in ``Documentation/conf.py``.

Customization
-------------

Here are additional elements of customization for Cilium's documentation
defined in the main repository:

- Some custom CSS; see also class ``wrapped-table`` in the related CSS
  file ``Documentation/_static/wrapped-table.css``
- A "Copy" button, including a button to copy only commands from
  console-code blocks, implemented in
  ``Documentation/_static/copybutton.js`` and
  ``Documentation/_static/copybutton.css``
- Custom header and footer definitions, for example to make the link to
  the Slack target available on all pages
- A warning banner on older branches, telling readers to check out the
  latest version (these may be handled directly in the ReadTheDocs
  configuration in the future, see also :gh-issue:`29969`)

Algolia search engine
---------------------

:spelling:word:`Algolia` provides a search engine for the documentation
website. See also the repository for the `DocSearch scraper`_.

.. _DocSearch scraper: https://github.com/cilium/docsearch-scraper-webhook

Build set up
============

.. _docs_requirements:

Requirements (dependencies)
---------------------------

The repository contains two files for requirements: one that declares and
pins the core dependencies for the documentation build system, and which
maintainers use to generate a second requirements file that includes all
sub-dependencies, via a dedicated Makefile target.

- The base requirements are defined in
  ``Documentation/requirements-min/requirements.txt``.
- Running ``make -C Documentation update-requirements`` uses this file as
  a base to generate ``Documentation/requirements.txt``.

Dependencies defined in
``Documentation/requirements-min/requirements.txt`` should never be
updated in ``Documentation/requirements.txt`` directly. Instead, update
the former and regenerate the latter.

File ``Documentation/requirements.txt`` is used to build the
``docs-builder`` Docker image. Dependencies defined in these requirements
files include the documentation's custom theme.

Docker set-up
-------------

The documentation build system relies on a Docker image,
``docs-builder``, to ensure the build environment is consistent across
different systems. Resources related to this image include
``Documentation/Dockerfile`` and the requirements files. Versions of this
image are automatically built and published to a registry when the
Dockerfile or the list of dependencies is updated. This is handled in CI
workflow ``.github/workflows/build-images-docs-builder.yaml``. If a Pull
Request updates the Dockerfile or its dependencies, have someone run the
two-step deployment described in this workflow to ensure that the CI
picks up an updated image.

ReadTheDocs
-----------

Cilium's documentation is hosted on ReadTheDocs. The main configuration
options are defined in ``Documentation/.readthedocs.yaml``. Some options,
however, are only configurable in the ReadTheDocs web interface. For
example:

- The location of the configuration file in the repository
- Redirects
- Triggers for deployment

Custom theme
============

The online documentation uses a custom theme based on `the ReadTheDocs
theme`_. This theme is defined in its `dedicated sphinx_rtd_theme fork
repository`_.

.. _the ReadTheDocs theme: https://github.com/readthedocs/sphinx_rtd_theme
.. _dedicated sphinx_rtd_theme fork repository: https://github.com/cilium/sphinx_rtd_theme/

Do not use the ``master`` branch of this repository. The commit or branch
to use is referenced in ``Documentation/requirements.txt``, generated from
``Documentation/requirements-min/requirements.txt``, in the Cilium
repository.

CI checks
=========

There are several workflows relating to the documentation in CI:
- Documentation workflow:

  - Defined in ``.github/workflows/documentation.yaml``
  - Tests the build, runs the linter, checks the spelling, ensures
    auto-generated contents are up-to-date
  - Runs ``./Documentation/check-build.sh html`` from the
    ``docs-builder`` image

- Netlify preview:

  - Hook defined at Netlify, configured in Netlify's web interface
  - Checks the build
  - Used for previews on Pull Requests, but *not* for deploying the
    documentation
  - Uses a separate Makefile target (``html-netlify``), and runs
    ``check-build.sh`` with ``SKIP_LINT=1``

- Runtime tests:

  - In the absence of updates to the Dockerfile or documentation
    dependencies, runtime tests are the only workflow that always
    rebuilds the ``docs-builder`` image before generating the docs.

- Image update workflow:

  - Rebuilds the ``docs-builder`` image, pushes it to Quay.io, and
    updates the image reference in the documentation workflow
  - Triggers when requirements or ``Documentation/Dockerfile`` are
    updated
  - Needs approval from one of the ``docs-structure`` team members

Redirects
=========

Some pages change location or name over time. To improve user experience,
there is a set of redirects in place. These redirects are configured from
the ReadTheDocs interface. They are a pain to maintain. Redirects could
possibly be configured from existing, dedicated Sphinx extensions, but
this option would require research to analyze and implement.
.. _testing-documentation:

Documentation testing
=====================

First, start a local document server that automatically refreshes when
you save files, for real-time preview. It relies on the
``cilium/docs-builder`` Docker container.

Set up your development environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To run Cilium's documentation locally, you need to install
`docker engine `_ and also the ``make`` package. To verify that ``make``
and ``docker`` are installed, run ``make --version`` and
``docker --version`` in your terminal.

.. code-block:: shell-session

   $ docker --version
   Docker version 20.10.22, build 3a2c30b
   $ make --version
   GNU Make 4.2.1

For Windows
~~~~~~~~~~~

.. note::

   The preferred method is to upgrade to Windows 10 version 1903 Build
   18362 or higher; you can then upgrade to Windows Subsystem for Linux
   (``WSL2``) and run ``make`` in Linux.

#. Verify you have access to the ``make`` command in your ``WSL2``
   terminal.
#. Download and install Docker Desktop.
#. Set up Docker to use `WSL2 `_ as backend.
#. Start Docker Desktop.

Preview documentation locally
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Navigate to the root of the folder where you cloned the project, then run
the command below in your terminal:

.. code-block:: shell-session

   $ make render-docs

This builds a Docker image and starts a Docker container. Preview the
documentation at http://localhost:9081/ as you make changes.

After making changes to Cilium documentation, you should check that you
did not introduce any new warnings or errors, and also check that your
changes look as you intended one last time before opening a pull request.
To do this, you can build the docs:

.. code-block:: shell-session

   $ make test-docs

.. note::

   By default, ``render-docs`` generates a preview with instructions to
   install Cilium from the latest version on GitHub (i.e. from the HEAD
   of the main branch that has not been released), regardless of which
   Cilium branch you are in. You can target a specific branch by
   specifying the ``READTHEDOCS_VERSION`` environment variable:

   .. code-block:: shell-session

      READTHEDOCS_VERSION=v1.7 make render-docs

Submit local changes on GitHub (Pull Request)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

See the :ref:`submit a pull request ` section of the contributing guide.
.. _docs_style_guide:

*******************
Documentation style
*******************

.. |RST| replace:: reStructuredText

Here are some guidelines and best practices for contributing to Cilium's
documentation. They have several objectives:

- Ensure that the documentation is rendered in the best possible way (in
  particular for code blocks).
- Make the documentation easy to maintain and extend.
- Help keep a consistent style throughout the documentation.
- In the end, provide a better experience to users, and help them find
  the information they need.

See also :ref:`the documentation for testing ` for instructions on how to
preview documentation changes.

General considerations
----------------------

Write in US English. For example, use "prioritize" instead of
":spelling:ignore:`prioritise`" and "color" instead of
":spelling:ignore:`colour`".

Maintain a consistent style with the rest of the documentation when
possible, or at least with the rest of the updated page.

Omit hyphens when possible. For example, use "load balancing" instead of
"load-balancing".

Header
------

Use the following header when adding new files to the Documentation.

.. code-block:: rst

   .. only:: not (epub or latex or html)

       WARNING: You are looking at unreleased Cilium documentation.
       Please use the official rendered version released here:
       https://docs.cilium.io

One exception is |RST| fragments that are supposed to be sourced from
other documentation files. Those do not need this header.

Headings
--------

Prefer sentence case (capital letter on the first word) rather than title
case for all headings.

Body
----

Wrap the lines for long sentences or paragraphs. There is no fixed
convention on the length of lines, but targeting a width of about 80
characters should be safe in most circumstances.
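The ~80-character guidance above is easy to check mechanically. As a
minimal sketch (a hypothetical helper, not a tool shipped in the
repository; the threshold is an assumption taken from the text above):

```python
# Hypothetical helper, not part of the Cilium repository: report lines in
# an rST source that exceed a target width (~80 characters, per the
# guideline above).
def long_lines(text: str, width: int = 80) -> list[tuple[int, int]]:
    """Return (line_number, length) pairs for lines longer than `width`."""
    return [
        (number, len(line))
        for number, line in enumerate(text.splitlines(), start=1)
        if len(line) > width
    ]

sample = "short line\n" + "x" * 100 + "\nanother short line"
print(long_lines(sample))  # [(2, 100)]
```
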
Capitalization
--------------

Follow `the section on capitalization for API objects`_ from the Kubernetes
style guide for when to (not) capitalize API objects. In particular:

    When you refer specifically to interacting with an API object, use
    `UpperCamelCase`_, also known as Pascal case.

And:

    When you are generally discussing an API object, use
    `sentence-style capitalization`_.

For example, write "Gateway API", capitalized. Use "Gateway" when writing
about an API object as an entity, and "gateway" for a specific instance.

The following examples are correct::

    - Gateway API is a subproject of Kubernetes SIG Network.
    - Cilium is conformant to the Gateway API spec at version X.Y.Z.
    - In order to expose this service, create a Gateway to hold the listener
      configuration.
    - Traffic from the Internet passes through the gateway to get to the
      backend service.
    - Now that you have created the "foo" gateway, you need to create some
      Routes.

But the following examples are incorrect::

    - The implementation of gateway API
    - To create a gateway object, ...

.. _the section on capitalization for API objects: https://kubernetes.io/docs/contribute/style/style-guide/#use-upper-camel-case-for-api-objects
.. _UpperCamelCase: https://en.wikipedia.org/wiki/Camel_case
.. _sentence-style capitalization: https://docs.microsoft.com/en-us/style-guide/text-formatting/using-type/use-sentence-style-capitalization

.. _docs_style_code_blocks:

Code blocks
-----------

Code snippets and other literal blocks usually fall under one of those three
categories:

- They contain `substitution references`_ (for example: ``|SCM_WEB|``). In
  that case, always use the ``.. parsed-literal`` directive, otherwise the
  token will not be substituted.

  Prefer:

  .. code-block:: rst

      .. parsed-literal::

          $ kubectl create -f \ |SCM_WEB|\/examples/minikube/http-sw-app.yaml

  Avoid:

  .. code-block:: rst

      .. code-block:: shell-session

          $ kubectl create -f \ |SCM_WEB|\/examples/minikube/http-sw-app.yaml
- If the text is not a code snippet, but just some fragment that should be
  printed verbatim (for example, the unstructured output of a shell command),
  use the marker for `literal blocks`_ (``::``).

  Prefer:

  .. code-block:: rst

      See the output in ``dmesg``:

      ::

          [ 3389.935842] flen=6 proglen=70 pass=3 image=ffffffffa0069c8f from=tcpdump pid=20583
          [ 3389.935847] JIT code: 00000000: 55 48 89 e5 48 83 ec 60 48 89 5d f8 44 8b 4f 68

      See more output in ``dmesg``::

          [ 3389.935849] JIT code: 00000010: 44 2b 4f 6c 4c 8b 87 d8 00 00 00 be 0c 00 00 00
          [ 3389.935850] JIT code: 00000020: e8 1d 94 ff e0 3d 00 08 00 00 75 16 be 17 00 00

  Avoid:

  .. code-block:: rst

      See the output in ``dmesg``:

      .. parsed-literal::

          [ 3389.935842] flen=6 proglen=70 pass=3 image=ffffffffa0069c8f from=tcpdump pid=20583
          [ 3389.935847] JIT code: 00000000: 55 48 89 e5 48 83 ec 60 48 89 5d f8 44 8b 4f 68

  The reason is that these snippets contain no code, so there is no need to
  mark them as code or parsed literals. The former would tell Sphinx to
  attempt to apply syntax highlighting, the latter would tell it to look for
  |RST| markup to parse in the block.

- If the text contains code or structured output, use the ``.. code-block``
  directive. Do *not* use the ``.. code`` directive, which is slightly less
  flexible.

  Prefer:

  .. code-block:: rst

      .. code-block:: shell-session

          $ ls cilium
          $ cd cilium/

  Avoid:

  .. code-block:: rst

      .. parsed-literal::

          $ ls cilium
          $ cd cilium/

      .. code-block:: bash

          $ ls cilium
          $ cd cilium/

      .. code-block:: shell-session

          ls cilium
          cd cilium/

The ``.. code-block`` directive should always take a language name as
argument, for example: ``.. code-block:: yaml`` or
``.. code-block:: shell-session``. The use of ``bash`` is possible but should
be limited to Bash scripts. For any listing of shell commands, and in
particular if the snippet mixes commands and their output, use
``shell-session``, which will bring the best coloration and may trigger the
generation of the ``Copy commands`` button.
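Putting this together, a minimal sketch of a ``shell-session`` listing that
mixes a command with its output (the command and its output here are only
illustrative, not real repository content):

.. code-block:: rst

    .. code-block:: shell-session

        $ ls examples/
        minikube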
For snippets containing shell commands, in particular if they also contain
the output for those commands, use prompt symbols to prefix the commands. Use
``$`` for commands to run as a normal user, and ``#`` for commands to run
with administrator privileges. You may use ``sudo`` as an alternative way to
mark commands to run with privileges.

.. _substitution references: https://docutils.sourceforge.io/docs/ref/rst/restructuredtext.html#substitution-references
.. _literal blocks: https://docutils.sourceforge.io/docs/ref/rst/restructuredtext.html#literal-blocks

Creating configuration files
----------------------------

When documenting the creation of a file, avoid using HEREDOC syntax with
``cat`` commands. Instead, prefer to use the ``literalinclude`` directive to
reference actual files in the repository.

Using literalinclude for configuration files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When showing configuration file contents, use the ``literalinclude``
directive to reference files that exist in the repository (typically in the
``examples/`` directory). If the file does not yet exist in the repository,
consider adding it to the ``examples/`` directory first.

This approach ensures the documentation stays synchronized with the actual
file and avoids drift between documentation and code.

Follow this recommended pattern:

#. Describe to the user how a task could be achieved with the following
   configuration.
#. Use ``literalinclude`` to include the actual file contents.
#. Explain what the configuration means, including the meaning of key
   settings.
#. Provide a direct command that users can copy and paste to apply the
   configuration.

When providing commands that reference repository files, use the
``|SCM_WEB|`` substitution reference. This ensures the URL points to the
correct version (branch/tag) of the file, matching the documentation version
the user is reading. See the earlier section on `substitution references`_
for details.

Example:

.. code-block:: rst

    To configure feature X, create a file with
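A sketch of the four-step pattern could look as follows. The file path,
feature name, field name, and command are all hypothetical placeholders, not
actual repository content:

.. code-block:: rst

    To enable feature X, use the following configuration:

    .. literalinclude:: ../../examples/feature-x/config.yaml
       :language: yaml

    The ``enabled`` field turns feature X on. Apply the configuration with:

    .. parsed-literal::

        $ kubectl apply -f \ |SCM_WEB|\/examples/feature-x/config.yaml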