reference repository files, use the ``|SCM_WEB|`` substitution reference.
This ensures the URL points to the correct version (branch/tag) of the
file, matching the documentation version the user is reading. See the
earlier section on `substitution references`_ for details.

Example:

.. code-block:: rst

   To configure feature X, create a file with the following contents:

   .. literalinclude:: ../../examples/kubernetes/feature-x.yaml
      :language: yaml

   This configuration enables feature X by setting:

   - ``enableFeatureX: true``: Activates the feature
   - ``featureXMode: advanced``: Uses advanced mode for better performance

   Apply the configuration with:

   .. parsed-literal::

      $ kubectl apply -f |SCM_WEB|/examples/kubernetes/feature-x.yaml

Using templates with variable substitution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For configuration files that require user-specific values (cluster names,
IDs, regions), use template files with variable substitution instead of
asking users to manually edit files. This approach reduces errors by
controlling exactly what gets written, and maintains the copy-paste
testing workflow that maintainers rely on for debugging.

Store the template file in the ``examples/`` directory with a ``.tmpl``
extension and use ``envsubst`` to substitute variables. Example:

.. code-block:: rst

   Create the cluster configuration file:

   .. parsed-literal::

      export NAME="$(whoami)-$RANDOM"
      curl -L |SCM_WEB|/examples/kubernetes/eks-config.tmpl \
          | envsubst > eks-config.yaml

   The template contains:

   .. literalinclude:: ../../examples/kubernetes/eks-config.tmpl
      :language: yaml

   The ``${NAME}`` variable will be substituted with your cluster name.
   Create the cluster:

   .. code-block:: shell-session

      $ eksctl create cluster -f eks-config.yaml

This pattern ensures that:

- Maintainers can copy-paste commands sequentially to reproduce user issues
- Variables are controlled rather than manually typed, reducing errors
- Template files are version-controlled and stay synchronized with
  documentation
- Failures are systematic (template issue) rather than random (user typos)

Links
-----

- Avoid using `embedded URIs`_ (```... <...>`__``), which make the document
  harder to read when looking at the source code of the documentation.
  Prefer to use `block-level hyperlink targets`_ (where the URI is not
  written directly in the sentence in the |RST| file, but below the
  paragraph).

  Prefer:

  .. code-block:: rst

     See the `documentation for Cilium`_. Here is another link to
     `the same documentation <cilium documentation_>`_.

     .. _documentation for Cilium:
     .. _cilium documentation: https://docs.cilium.io/en/latest/

  Avoid:

  .. code-block:: rst

     See the `documentation for Cilium <https://docs.cilium.io/en/latest/>`__.

- If using embedded URIs, use anonymous hyperlinks (```... <...>`__`` with
  two underscores, see the documentation for `embedded URIs`_) instead of
  named references (```... <...>`_``, note the single underscore).

  Prefer (but see the previous item):

  .. code-block:: rst

     See the `documentation for Cilium <https://docs.cilium.io/en/latest/>`__.

  Avoid:

  .. code-block:: rst

     See the `documentation for Cilium <https://docs.cilium.io/en/latest/>`_.

.. _embedded URIs: https://docutils.sourceforge.io/docs/ref/rst/restructuredtext.html#embedded-uris-and-aliases
.. _block-level hyperlink targets: https://docutils.sourceforge.io/docs/ref/rst/restructuredtext.html#hyperlink-targets

Lists
-----

- Left-align the body of a list item with the text on the first line, after
  the item symbol.

  Prefer:

  .. code-block:: rst

     - The text in this item wraps over several lines, with consistent
       indentation.

  Avoid:

  .. code-block:: rst

     - The text in this item wraps on several lines
         and the indent is not consistent with the first line.

- For enumerated lists, prefer auto-numbering with the ``#.`` marker rather
  than numbering the items manually.

  Prefer:

  .. code-block:: rst

     #. First item
     #. Second item

  Avoid:

  .. code-block:: rst

     1. First item
     2. Second item

- Be consistent with periods at the end of list items. In general, omit
  periods from bulleted list items unless the items are complete sentences.
  But if one list item requires a period, use periods for all items.

  Prefer:

  .. code-block:: rst

     - This is one list item
     - This is another list item

  Avoid:

  .. code-block:: rst

     - This is one list item, period. We use punctuation.
     - This list item should have a period too, but doesn't

Callouts
--------

Use callouts effectively. For example, use the ``.. note::`` directive to
highlight information that
helps users in a specific context. Do not use it to avoid refactoring a
section or paragraph. For example, when adding information about a new
configuration flag that completes a feature, there is no need to append it
as a note, given that it does not require particular attention from the
reader.

Avoid the following:

.. parsed-literal::

   Blinking pods are easier to spot in the dark. Use feature flag
   ``--blinking-pods`` to make new pods blink twice when they launch. If
   you create blinking pods often, sunglasses may help protect your eyes.

   **.. note:: Use the flag ``--blinking-pods-blink-number`` to change
   the number of times pods blink on start-up.**

Instead, merge the new content with the existing paragraph:

.. parsed-literal::

   Blinking pods are easier to spot in the dark. Use feature flag
   ``--blinking-pods`` to make new pods blink when they launch. **By
   default, blinking pods blink twice, but you can use the flag
   ``--blinking-pods-blink-number`` to specify how many times they blink
   on start-up.** If you create blinking pods often, sunglasses may help
   protect your eyes.

Roles
-----

- We have a dedicated role for referencing Cilium GitHub issues, to
  reference them in a consistent fashion. Use it when relevant.

  Prefer:

  .. code-block:: rst

     See :gh-issue:`1234`.

  Avoid:

  .. code-block:: rst

     See `this GitHub issue <https://github.com/cilium/cilium/issues/1234>`__.

Common pitfalls
---------------

There are best practices for writing documentation; follow them. In
general, default to the `Kubernetes style guide`_, especially for `content
best practices`_. The following subsections cover the most common feedback
given for Cilium documentation Pull Requests.
Use active voice
~~~~~~~~~~~~~~~~

Prefer::

   Enable the flag.

Avoid::

   Ensure the flag is enabled.

Use present tense
~~~~~~~~~~~~~~~~~

Prefer::

   The service returns a response code.

Avoid::

   The service will return a response code.

Address the user as "you", not "we"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Prefer::

   You can specify values to filter tags.

Avoid::

   We'll specify this value to filter tags.

Use plain, direct language
~~~~~~~~~~~~~~~~~~~~~~~~~~

Prefer::

   Always configure the bundle explicitly in production environments.

Avoid::

   It is recommended to always configure the bundle explicitly in
   production environments.

Write for good localization
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Assume that what you write will be localized with machine translation.
Figures of speech often localize poorly, as do directional references like
"above" and "below".

Prefer::

   The following example
   To assist this process,

Avoid::

   The example below
   To give this process a boost,

Define abbreviations
~~~~~~~~~~~~~~~~~~~~

Define abbreviations when you first use them on a page.

Prefer::

   Certificate authority (CA)

Avoid::

   CA

Don't use Latin abbreviations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Prefer::

   - For example,
   - In other words,
   - by following the ...
   - and others

Avoid::

   - e.g.
   - i.e.
   - via
   - etc.

Spell words fully
~~~~~~~~~~~~~~~~~

Prefer::

   and

Avoid::

   &

.. _Kubernetes style guide: https://kubernetes.io/docs/contribute/style/style-guide/
.. _content best practices: https://kubernetes.io/docs/contribute/style/style-guide/#content-best-practices

Specific language
-----------------

Use specific language. Avoid words like "this" (as a pronoun) and "it" when
referring to concepts, actions, or process states. Be as specific as
possible, even if specificity seems overly repetitive. This requirement
exists for two reasons:

1. Indirect language assumes too much clarity on the part of the writer and
   too much understanding on the part of the reader.
2. Specific language is easier to review and easier to localize.

Words like "this" and "it" are indirect references. For example:

.. code-block:: rst

   Feature A requires all pods to be painted blue. This means that the Agent
   must apply its "paint" action to all pods. To achieve this, use the
   dedicated CLI invocation.

In the preceding paragraph, the word "this" indirectly references both an
inferred consequence ("this means") and a desired goal state ("to achieve
this"). Instead, be as specific as possible:

.. code-block:: rst

   Feature A requires all pods to be painted blue. Consequently, the Agent
   must apply its "paint" action to all pods. To make the Agent paint all
   pods blue, use the dedicated CLI invocation.

The following subsections contain more examples.

Use specific wording rather than vague wording
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Prefer::

   For each core, the Ingester attempts to spawn a worker pool.

Avoid::

   For each core, it attempts to spawn a worker pool.

Use specific instructions rather than vague instructions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Prefer::

   Set the annotation value to remote.

Avoid::

   Set it to remote.
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation.
   Please use the official rendered version released here:
   https://docs.cilium.io

.. _ci_gha:

CI / GitHub Actions
-------------------

The main CI infrastructure is maintained on GitHub Actions (GHA). This
infrastructure is broadly comprised of smoke tests and platform tests.
Smoke tests are typically initiated automatically by ``pull_request`` or
``pull_request_target`` triggers when opening or updating a pull request.
Platform tests often require an organization member to manually trigger
the test when the pull request is ready to be tested.

Triggering Smoke Tests
~~~~~~~~~~~~~~~~~~~~~~

Several short-running tests are automatically triggered for all
contributor submissions, subject to GitHub's limitations around first-time
contributors. If no GitHub workflows are triggering on your PR, a
committer for the project should trigger these within a few days. Reach
out in the ``#testing`` channel on `Cilium Slack`_ for assistance in
running these tests.

.. _trigger_phrases:

Triggering Platform Tests
~~~~~~~~~~~~~~~~~~~~~~~~~

To ensure that build resources are used judiciously, some tests on GHA are
manually triggered via comments. These builds typically make use of cloud
infrastructure, such as allocating clusters or VMs in AKS, EKS or GKE. In
order to trigger these jobs, a member of the GitHub organization must post
a comment on the Pull Request with a "trigger phrase". If you'd like to
trigger these jobs, ask in `Cilium Slack`_ in the ``#testing`` channel. If
you're regularly contributing to Cilium, you can also `become a member`__
of the Cilium organization.

Depending on the PR target branch, a specific set of jobs is marked as
required, as per the `Cilium CI matrix`_. They are automatically featured
in PR checks directly on the PR page. The ``/test`` trigger phrase may be
used to trigger the full testsuite at once.

Additional trigger phrases (such as ``/ci-e2e-upgrade``) can be used to
run individual or optional jobs where supported. More triggers can be
found in `ariane-config.yaml`__. For a full list of GHA workflows, see the
`GitHub Actions page`__.

Using GitHub Actions for testing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

On GHA, running a specific set of Ginkgo tests
(``conformance-ginkgo.yaml``) can also be accomplished by modifying the
files under ``.github/actions/ginkgo/`` by adding or removing entries.

``main-focus.yaml``:
   This file contains a list of tests to include and exclude. The
   ``cliFocus`` defined for each element in the "include" section is
   expanded to the specific defined ``focus``. This mapping determines
   which regex should be used with ``ginkgo --focus`` for each element in
   the "focus" list. See :ref:`ginkgo-documentation` for more information
   about the ``--focus`` flag. Additionally, there is a list of excluded
   tests along with justifications in the form of comments, explaining why
   each test is excluded based on constraints defined in the Ginkgo tests.
   For more information, refer to `GitHub's documentation on expanding
   matrix configurations`__.

``main-k8s-versions.yaml``:
   This file defines which kernel versions should be run with specific
   Kubernetes (k8s) versions. It contains an "include" section where each
   entry consists of a k8s version, IP family, Kubernetes image, and
   kernel version. These details determine the combinations of k8s
   versions and kernel versions to be tested.

``main-prs.yaml``:
   This file specifies the k8s versions to be executed for each pull
   request (PR). The list of k8s versions under the "k8s-version" section
   determines the matrix of jobs that should be executed for CI when
   triggered by PRs.

``main-scheduled.yaml``:
   This file specifies the k8s versions to be executed on a regular basis.
   The list of k8s versions under the "k8s-version" section determines the
   matrix of jobs that should be executed for CI as part of scheduled
   jobs.

Workflow interactions:

- The ``main-focus.yaml`` file helps define the test focus
  for CI jobs based on specific criteria, expanding the ``cliFocus`` to
  determine the relevant ``focus`` regex for ``ginkgo --focus``.
- The ``main-k8s-versions.yaml`` file defines the mapping between k8s
  versions and the associated kernel versions to be tested.
- Both the ``main-prs.yaml`` and ``main-scheduled.yaml`` files utilize the
  "k8s-version" section to specify the k8s versions that should be
  included in the job matrix for PRs and scheduled jobs respectively.
- These files collectively contribute to the generation of the job matrix
  for GitHub Actions workflows, ensuring appropriate testing and
  validation of the defined k8s versions.

For example, to only run the tests under ``f09-datapath-misc-2`` with
Kubernetes version 1.26, the files can be modified to have the following
content:

``main-focus.yaml``:

.. code-block:: yaml

   ---
   focus:
     - "f09-datapath-misc-2"
   include:
     - focus: "f09-datapath-misc-2"
       cliFocus: "K8sDatapathConfig Check|K8sDatapathConfig IPv4Only|K8sDatapathConfig High-scale|K8sDatapathConfig Iptables|K8sDatapathConfig IPv4Only|K8sDatapathConfig IPv6|K8sDatapathConfig Transparent"

``main-prs.yaml``:

.. code-block:: yaml

   ---
   k8s-version:
     - "1.26"

The ``main-k8s-versions.yaml`` and ``main-scheduled.yaml`` files can be
left unmodified. This results in the execution of the tests under
``f09-datapath-misc-2`` for the ``k8s-version`` "``1.26``".

Bisect process
^^^^^^^^^^^^^^

Bisecting Ginkgo tests (``conformance-ginkgo.yaml``) can be performed by
modifying the workflow file, as well as modifying the files under
``.github/actions/ginkgo/`` as explained in the previous section.
The sections that need to be modified in ``conformance-ginkgo.yaml`` can
be found in the form of comments inside that file: under the ``on``
section, enable the event type ``pull_request``. Additionally, the
following section also needs to be modified:

.. code-block:: text

   jobs:
     check_changes:
       name: Deduce required tests from code changes
       [...]
       outputs:
         tested: ${{ steps.tested-tree.outputs.src }}
         matrix_sha: ${{ steps.sha.outputs.sha }}
         base_branch: ${{ steps.sha.outputs.base_branch }}
         sha: ${{ steps.sha.outputs.sha }}
         #
         # For bisect uncomment the base_branch and 'sha' lines below and
         # comment the two lines above this comment
         #
         #base_branch:
         #sha:

As per the instructions, ``base_branch`` needs to be uncommented and
should point to the base branch name that we are testing. The ``sha`` must
point to the commit SHA that we want to bisect. **The SHA must point to an
existing image tag under the** ``quay.io/cilium/cilium-ci`` **docker image
repository.** It is possible to find out whether or not a SHA exists by
running either ``docker manifest inspect`` or ``docker buildx imagetools
inspect``. This is an example output for the non-existing SHA
``22fa4bbd9a03db162f08c74c6ef260c015ecf25e`` and the existing SHA
``7b368923823e63c9824ea2b5ee4dc026bc4d5cd8``:

.. code-block:: shell

   $ docker manifest inspect quay.io/cilium/cilium-ci:22fa4bbd9a03db162f08c74c6ef260c015ecf25e
   ERROR: quay.io/cilium/cilium-ci:22fa4bbd9a03db162f08c74c6ef260c015ecf25e: not found

   $ docker buildx imagetools inspect quay.io/cilium/cilium-ci:7b368923823e63c9824ea2b5ee4dc026bc4d5cd8
   Name:      quay.io/cilium/cilium-ci:7b368923823e63c9824ea2b5ee4dc026bc4d5cd8
   MediaType: application/vnd.docker.distribution.manifest.list.v2+json
   Digest:    sha256:0b7d1078570e6979c3a3b98896e4a3811bff483834771abc5969660df38463b5

   Manifests:
     Name:      quay.io/cilium/cilium-ci:7b368923823e63c9824ea2b5ee4dc026bc4d5cd8@sha256:63dbffea393df2c4cc96ff340280e92d2191b6961912f70ff3b44a0dd2b73c74
     MediaType: application/vnd.docker.distribution.manifest.v2+json
     Platform:  linux/amd64

     Name:      quay.io/cilium/cilium-ci:7b368923823e63c9824ea2b5ee4dc026bc4d5cd8@sha256:0c310ab0b7a14437abb5df46d62188f4b8b809f0a2091899b8151e5c0c578d09
     MediaType: application/vnd.docker.distribution.manifest.v2+json
     Platform:  linux/arm64

Once the changes are committed and pushed into a draft Pull Request, it is
possible to visualize the test results on the Pull Request's page.

GitHub Test Results
^^^^^^^^^^^^^^^^^^^

Once the test finishes, its result is sent to the respective Pull
Request's page. In case of a failure, it is possible to check which test
failed by going over the summary of the test on the GitHub Workflow Run's
page:

.. image:: /images/gha-summary.png
   :align: center

In this example, the test ``K8sDatapathConfig Transparent encryption
DirectRouting Check connectivity with transparent encryption and direct
routing with bpf_host`` failed. With the ``cilium-sysdumps`` artifact
available for download, we can retrieve it and perform further inspection
to identify the cause of the failure.

To investigate CI failures, see :ref:`ci_failure_triage`.

.. _test_matrix:

Testing matrix
^^^^^^^^^^^^^^

Up to date CI testing
information regarding k8s - kernel version pairs can always be found in
the `Cilium CI matrix`_.

.. _Cilium CI matrix: https://docs.google.com/spreadsheets/d/1TThkqvVZxaqLR-Ela4ZrcJ0lrTJByCqrbdCjnI32_X0

.. _ci_failure_triage:

CI Failure Triage
~~~~~~~~~~~~~~~~~

This section describes the process to triage CI failures. We define 3
categories:

+------------+-------------------------------------------------------------------------+
| Keyword    | Description                                                             |
+============+=========================================================================+
| Flake      | Failure due to a temporary situation such as loss of connectivity to    |
|            | external services or a bug in a system component, e.g. quay.io is down, |
|            | VM race conditions, a kube-dns bug, ...                                 |
+------------+-------------------------------------------------------------------------+
| CI-Bug     | Bug in the test itself that renders the test unreliable, e.g. a timing  |
|            | issue when importing a policy and missing to block until the policy is  |
|            | enforced before connectivity is verified.                               |
+------------+-------------------------------------------------------------------------+
| Regression | Failure is due to a regression; all failures in the CI that are not     |
|            | caused by bugs in the test are considered regressions.                  |
+------------+-------------------------------------------------------------------------+

Triage process
^^^^^^^^^^^^^^
#. Investigate the failure you are interested in and determine if it is a
   CI-Bug, a Flake, or a Regression as defined in the table above.

#. Search `GitHub issues <https://github.com/cilium/cilium/issues>`__ to
   see if the bug is already filed. Make sure to also include closed
   issues in your search, as a CI issue can be considered solved and then
   re-appear. Good search terms are:

   - The test name, e.g. ::

        k8s-1.7.K8sValidatedKafkaPolicyTest Kafka Policy Tests KafkaPolicies (from (k8s-1.7.xml))

   - The line on which the test failed, e.g. ::

        github.com/cilium/cilium/test/k8s/kafka_policies.go:202

   - The error message, e.g. ::

        Failed to produce from empire-hq on topic deathstar-plan

#. If a corresponding GitHub issue exists, update it with:

   #. A link to the failing GHA build (note that the build information is
      eventually deleted).

#. If no existing GitHub issue was found, file a `new GitHub issue
   <https://github.com/cilium/cilium/issues/new>`__:

   #. Attach the failure case and logs from the failing test.

   #. If the failure is a new regression or a real bug:

      #. Title: ``<short description of the problem>``
      #. Labels ``kind/bug`` and ``needs/triage``.

   #. If the failure is a new CI-Bug, a Flake, or if you are unsure:

      #. Title ``CI: <testname>: <failure message>``, e.g.
         ``CI: K8sValidatedPolicyTest Namespaces: cannot curl service``
      #. Labels ``kind/bug/CI`` and ``needs/triage``
      #. Include the test name and the whole Stacktrace section to help
         others find this issue.

.. note::

   Be extra careful when you see a new flake on a PR and want to open an
   issue. It's much more difficult to debug these without context around
   the PR and the changes it introduced. When creating an issue for a PR
   flake, include a description of the code change, the PR, or the diff.
   If it isn't related to the PR, then it should already happen in the
   ``main`` branch, and a new issue isn't needed.

**Examples:**

* ``Flake, quay.io is down``
* ``Flake, DNS not ready, #3333``
* ``CI-Bug, K8sValidatedPolicyTest: Namespaces, pod not ready, #9939``
* ``Regression, k8s host policy, #1111``

Disabling GitHub Actions Workflows
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. warning::

   Do not use the `GitHub web UI`__ to disable GitHub Actions workflows.
   It makes it difficult to find out who disabled the workflows and why.

Alternatives to Disabling GitHub Actions Workflows
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Before proceeding, consider the following alternatives to disabling an
entire GitHub Actions workflow.

- Skip individual tests. If specific tests are causing the workflow to
  fail, disable those tests instead of
  disabling the workflow. When you disable a workflow, all the tests in
  the workflow stop running. This makes it easier to introduce new
  regressions that would have been caught by these tests otherwise.

- Remove the workflow from the list of required status checks. This way
  the workflow still runs on pull requests, but you can merge them without
  the workflow succeeding. To remove the workflow from the required status
  check list, post a message in the `#testing Slack channel`__ and
  @mention people in the `cilium-maintainers team`__.

Step 1: Open a GitHub Issue
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Open a GitHub issue to track activities related to fixing the workflow. If
there are existing test flake GitHub issues, list them in the tracking
issue. Find an assignee for the tracking issue to avoid the situation
where the workflow remains disabled indefinitely because nobody is
assigned to actually fix the workflow.

Step 2: Update the required status check list
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If the workflow is in the required status check list, it needs to be
removed from the list. Notify the `cilium-maintainers team`__ by
mentioning ``@cilium/cilium-maintainers`` in the tracking issue and ask
them to remove the workflow from the required status check list.

Step 3: Update the workflow configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Update the workflow configuration as described in the following sub-steps,
depending on whether the workflow is triggered by the ``/test`` comment or
by the ``pull_request`` or ``pull_request_target`` trigger.
Open a pull request with your changes, have it reviewed, then merged.

.. tabs::

   .. group-tab:: ``/test`` comment trigger

      For those workflows that get triggered by the ``/test`` comment,
      update ``ariane-config.yaml`` and remove the workflow from the
      ``triggers:/test:workflows`` section (`an example`__). Do not remove
      the targeted trigger (``triggers:/ci-e2e`` for example) so that you
      can still use the targeted trigger to run the workflow when needed.

   .. group-tab:: ``pull_request`` or ``pull_request_target`` trigger

      For those workflows that get triggered by the ``pull_request`` or
      ``pull_request_target`` trigger, remove the trigger from the
      workflow file. Do not remove the ``schedule`` trigger if the
      workflow has it. It is useful to be able to see whether the workflow
      has stabilized over time when making the decision to re-enable the
      workflow.
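As a sketch of the ``/test`` comment-trigger case described above: only the ``triggers:/test:workflows`` path comes from the text; the overall file layout and the workflow names in this excerpt are hypothetical, not the actual contents of ``ariane-config.yaml``:

```yaml
# Hypothetical excerpt of ariane-config.yaml (workflow names illustrative).
triggers:
  /test:
    workflows:
      - conformance-ginkgo
      # conformance-e2e removed here while its workflow is disabled
  /ci-e2e:
    workflows:
      # The targeted trigger keeps the workflow, so it can still be run
      # on demand with a /ci-e2e comment.
      - conformance-e2e
```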
.. _testing_guide:

Testing
-------

There are multiple ways to test Cilium functionality, including unit
testing and integration testing. In order to improve developer throughput,
we provide ways to run both the unit and integration tests in your own
workspace, as opposed to being fully reliant on the Cilium CI
infrastructure. We encourage all PRs to add unit tests and, if necessary,
integration tests.

Consult the following pages to see how to run the variety of tests that
have been written for Cilium, and for information about Cilium's CI
infrastructure.

.. _testing_root:

.. toctree::
   :maxdepth: 2
   :glob:

   ci
   e2e
   e2e_legacy
   scalability
   unit
   bpf

The best way to get help if you get stuck is to ask a question on the
`Cilium Slack`_. With Cilium contributors across the globe, there is
almost always someone available to help.
.. _testsuite:

End-To-End Connectivity Testing
===============================

Introduction
~~~~~~~~~~~~

Cilium uses `cilium-cli connectivity tests`__ for implementing and running
end-to-end tests which test Cilium all the way from the API level (for
example, importing policies, CLI) to the datapath (in other words, whether
a policy that is imported is enforced accordingly in the datapath).

Running End-To-End Connectivity Tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The connectivity tests are implemented in such a way that they can be run
against any K8s cluster running Cilium. The built-in feature detection
allows the testing framework to automatically skip tests when a required
test condition cannot be met (for example, skip the Egress Gateway tests
if the Egress Gateway feature is disabled).

Running tests locally
^^^^^^^^^^^^^^^^^^^^^

.. include:: /installation/cli-download.rst

Alternatively, the Cilium CLI can be manually built and installed by
fetching ``https://github.com/cilium/cilium-cli``, and then running
``make install``.

Next, you need a Kubernetes cluster to run Cilium. The easiest way to
create one is to use `kind`__. Cilium provides a wrapper script which
simplifies creating a K8s cluster with ``kind``. For example, to create a
cluster consisting of 1 control-plane node, 3 worker nodes, without
kube-proxy, and with ``DualStack`` enabled:

.. code-block:: shell-session

   $ cd cilium/
   $ ./contrib/scripts/kind.sh "" 3 "" "" "none" "dual"
   ...
   Kind is up! Time to install cilium:
   make kind-image
   make kind-install-cilium

Afterwards, you need to install Cilium.
The preferred way is to use `cilium-cli install`_, as it is able to automate
some steps (e.g., detecting the ``kube-apiserver`` endpoint address which
otherwise needs to be specified when running without ``kube-proxy``, or
setting an annotation on a K8s worker node to prevent Cilium from being
scheduled on it).

Assuming that Cilium was built with:

.. code-block:: shell-session

    $ cd cilium/
    $ make kind-image
    ...
    ^^^ Images pushed, multi-arch manifest should be above. ^^^

you can install Cilium with the following command:

.. code-block:: shell-session

    $ cilium install --wait \
        --chart-directory=$GOPATH/src/github.com/cilium/cilium/install/kubernetes/cilium \
        --set image.override=localhost:5000/cilium/cilium-dev:local \
        --set image.pullPolicy=Never \
        --set operator.image.override=localhost:5000/cilium/operator-generic:local \
        --set operator.image.pullPolicy=Never \
        --set routingMode=tunnel \
        --set tunnelProtocol=vxlan \
        --nodes-without-cilium
    ...
    ⌛ Waiting for Cilium to be installed and ready...
    ✅ Cilium was successfully installed! Run 'cilium status' to view installation health

Finally, to run tests:

.. code-block:: shell-session

    $ cilium connectivity test
    ...
    ✅ All 32 tests (263 actions) successful, 2 tests skipped, 1 scenarios skipped.

Alternatively, you can select which tests to run:

.. code-block:: shell-session

    $ cilium connectivity test --test north-south-loadbalancing
    ...
    [=] Test [north-south-loadbalancing]

Or, you can exclude specific test cases:

.. code-block:: shell-session

    $ cilium connectivity test --test '!pod-to-world'
    ...

Running tests in a VM
^^^^^^^^^^^^^^^^^^^^^

To run Cilium and the connectivity tests in a virtual machine, one can use
`little-vm-helper (LVH)`_. The project provides a runner of qemu-based VMs, a
builder of VM images, and a registry containing pre-built VM images.

First, install the LVH CLI tool:

.. code-block:: shell-session

    $ go install github.com/cilium/little-vm-helper/cmd/lvh@latest
    $ lvh --help
    ...
    Use "lvh [command] --help" for more information about a command.

Second, fetch a VM image:

.. code-block:: shell-session

    $ lvh images pull quay.io/lvh-images/kind:6.1-main --dir .

See the registry of pre-built images for all available images. To build a new
VM image (or to update an existing one), please refer to
`little-vm-helper-images`_.

Next, start a VM:

.. code-block:: shell-session

    $ lvh run --image ./images/kind_6.1.qcow2 --host-mount $GOPATH/src/github.com/cilium/ --daemonize -p 2222:22 --cpu=3 --mem=6G

.. _test_cilium_on_lvh:

Finally, you can SSH into the VM to start a K8s cluster, install Cilium, and
run the connectivity tests:

.. code-block:: shell-session

    $ ssh -p 2222 -o "StrictHostKeyChecking=no" root@localhost
    # cd /host/cilium
    # git config --global --add safe.directory /host/cilium
    # ./contrib/scripts/kind.sh "" 3 "" "" "none" "dual"
    # cd /host/cilium-cli
    # ./cilium install --wait \
        --chart-directory=../cilium/install/kubernetes/cilium \
        --version=v1.13.2 \
        --set routingMode=tunnel \
        --set tunnelProtocol=vxlan \
        --nodes-without-cilium
    # ./cilium connectivity test
    ...
    ✅ All 32 tests (263 actions) successful, 2 tests skipped, 1 scenarios skipped.

To stop the VM, run from the host:

.. code-block:: shell-session

    $ pkill qemu-system-x86

Running tests in a VM with a custom kernel
""""""""""""""""""""""""""""""""""""""""""

It is possible to test Cilium on an LVH VM with a custom-built Linux kernel
(for example, for fast testing iterations when doing kernel development work
for Cilium features).

First, configure and build the kernel:

.. code-block:: shell-session

    $ git clone --depth=1 https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
    $ cd bpf-next/
    # configure kernel, so that it can be run in LVH VM:
    $ git clone https://github.com/cilium/little-vm-helper-images
    $ cat ../little-vm-helper-images/_data/kernels.json | \
        jq -r '.common_opts.[] | (.[0])+" "+(.[1])' | \
        xargs ./scripts/config
    $ make -j$(nproc)

Second, start the LVH VM with the custom kernel:
.. code-block:: shell-session

    $ lvh run --image ./images/kind_bpf-next.qcow2 \
        --host-mount $(pwd) \
        --kernel ./bpf-next/arch/x86_64/boot/bzImage \
        --daemonize -p 2222:22 --cpu=3 --mem=6G

Third, SSH into the VM and install the custom kernel modules (this step is no
longer required once `little-vm-helper#117`_ has been resolved):

.. code-block:: shell-session

    $ ssh -p 2222 -o "StrictHostKeyChecking=no" root@localhost
    # cd /host/bpf-next
    # make modules_install

Finally, you can use the instructions from :ref:`the previous chapter
<test_cilium_on_lvh>` to run and to test Cilium.

Network performance test
^^^^^^^^^^^^^^^^^^^^^^^^

Cilium also provides `cilium-cli connectivity perf`__ to test the network
performance of pod-to-pod communication on the same node and across different
nodes. To run the performance test:

.. code-block:: shell-session

    $ cilium connectivity perf
    ...
    [=] Test [network-perf] [1/1]
    ...

If you want to test the network performance between specific nodes, you can
label the nodes to run the test:

.. code-block:: shell-session

    $ kubectl label nodes worker1 perf-test=server
    node/worker1 labeled
    $ kubectl label nodes worker2 perf-test=client
    node/worker2 labeled
    $ cilium connectivity perf --node-selector-client perf-test=client --node-selector-server perf-test=server
    ...
    [=] Test [network-perf] [1/1]
    ...

Cleaning up tests
^^^^^^^^^^^^^^^^^

If the connectivity tests are interrupted or time out, the test pods are left
deployed. To clean this up, simply delete the connectivity tests namespace:

.. code-block:: shell-session

    $ kubectl delete ns cilium-test

If you specified the test namespace with ``--test-namespace``, make sure to
replace ``cilium-test`` (the default) accordingly.
https://github.com/cilium/cilium/blob/main//Documentation/contributing/testing/e2e.rst
.. _bpf_testing:

********************************
BPF Unit and Integration Testing
********************************

Our BPF datapath has its own test framework, which allows us to write unit and
integration tests that verify that our BPF code works as intended,
independently from the other Cilium components. The framework uses the
``BPF_PROG_RUN`` feature to run eBPF programs in the kernel without attaching
them to actual hooks.

The framework is designed to allow datapath developers to quickly write tests
for the code they are working on. The tests themselves are fully written in C
to minimize context switching. Tests pass results back to the framework, which
outputs the results in Go test output format, for optimal integration with CI
and other tools.

Running tests
=============

To run the tests in your local environment, execute the following command from
the project root:

.. code-block:: shell-session

    $ make run_bpf_tests

.. note::

    Running BPF tests requires Docker and is only expected to work on Linux.

To run a single test, specify its name without extension. For example:

.. code-block:: shell-session

    $ make run_bpf_tests BPF_TEST="xdp_nodeport_lb4_nat_lb"

Writing tests
=============

All BPF tests live in the ``bpf/tests`` directory. All ``.c`` files in this
directory are assumed to contain BPF test programs which can be independently
compiled, loaded, and executed using ``BPF_PROG_RUN``. All files in this
directory are automatically picked up, so all you have to do is create a new
``.c`` file and start writing. All other files, like ``.h`` files, are ignored
and can be used for sharing code, for example.

Each ``.c`` file must have at least one ``CHECK`` program. The ``CHECK`` macro
replaces the ``SEC`` macro which is typically used in BPF programs.
The ``CHECK`` macro takes two arguments: the first is the program type (for
example ``xdp`` or ``tc``; see `the list of recognized types in the Go
library`__), the second is the name of the test which will appear in the
output. All macros are defined in ``bpf/tests/common.h``, so all programs
should start by including this file: ``#include "common.h"``.

Each ``CHECK`` program should start with ``test_init()`` and end with
``test_finish()``. ``CHECK`` programs return implicitly with the result of the
test; a user doesn't need to add ``return`` statements to the code manually. A
test will PASS if it reaches ``test_finish()``, unless it is marked as failed
(``test_fail()``, ``test_fail_now()``, ``test_fatal()``) or skipped
(``test_skip()``, ``test_skip_now()``).

The name of the function has no significance for the tests themselves. The
function names are still used as indicators in the kernel (at least the first
15 characters), are used to populate tail call maps, and should be unique for
the purposes of compilation.

.. warning::

    **Map Persistence Across Tests**

    BPF maps are not cleared between ``CHECK`` programs in the same file. Any
    map updates made in Test A will be visible to Test B. If Test A updates a
    map entry (e.g. adds a tunnel endpoint), Test B will see that entry. This
    allows for multi-stage testing where one test builds upon the state of a
    previous one. However, if test isolation is intended, clean up map state
    or use unique data.

.. note::

    When a single ``.c`` file contains multiple tests, they are executed in
    alphabetical order of the test names (the second argument to the ``CHECK``
    macro). This is important to consider if the tests have dependencies on
    each other or on a shared state.

.. code-block:: c

    #include "common.h"

    CHECK("xdp", "nodeport-lb4")
    int nodeportLB4(struct __ctx_buff *ctx)
    {
        test_init();

        /* ensure preconditions are met */
        /* call the functions you would like to test */
        /* check that everything works as expected */

        test_finish();
    }

Sub-tests
---------

Each ``CHECK`` program may contain sub-tests, each of which has its own test
status. A sub-test is created with the ``TEST`` macro like so:

.. code-block:: c

    #include "common.h"

    #include
    #include
    #include "bpf/section.h"

    CHECK("xdp", "jhash")
    int bpf_test(__maybe_unused struct xdp_md *ctx)
    {
        test_init();

        TEST("Non-zero", {
            unsigned int hash = jhash_3words(123, 234, 345, 456);

            if (hash != 2698615579)
                test_fatal("expected '2698615579' got '%lu'", hash);
        });

        TEST("Zero", {
            unsigned int hash = jhash_3words(0, 0, 0, 0);

            if (hash != 459859287)
                test_fatal("expected '459859287' got '%lu'", hash);
        });

        test_finish();
    }

Since all sub-tests are part of the same BPF program, they are executed
consecutively in one ``BPF_PROG_RUN`` invocation and can share setup code,
which can improve run speed and reduce code duplication. The name passed to
the ``TEST`` macro for each sub-test serves to self-document the steps and
makes it easier to spot which part of a test fails.

Integration tests
-----------------

Writing tests for a single function or a small group of functions should be
fairly straightforward, only requiring a ``CHECK`` program. Testing
functionality across tail calls requires an additional step: given that the
program does not return to the ``CHECK`` function after making a tail call, we
can't check whether it was successful.

The workaround is to use ``PKTGEN`` and ``SETUP`` programs in addition to a
``CHECK`` program. These programs will run before the ``CHECK`` program with
the same name.
The intended usage is that the ``PKTGEN`` program builds a BPF context (for
example, fills a ``struct __sk_buff`` for TC programs) and passes it on to the
``SETUP`` program, which performs further setup steps (for example, fills a
BPF map). The two-stage pattern is needed so that ``BPF_PROG_RUN`` gets
invoked with the actual packet content (and, for example, fills
``skb->protocol``). The BPF context is then passed to the ``CHECK`` program,
which can inspect the result. By executing the test setup and the tail call in
``SETUP``, we can execute complete programs.

The return code of the ``SETUP`` program is prepended as a ``u32`` to the
start of the packet data passed to ``CHECK``, meaning that the ``CHECK``
program will find the actual packet data at ``(void *)data + 4``.

This is an abbreviated example showing the key components:

.. code-block:: c

    #include "common.h"

    #include "bpf/ctx/xdp.h"
    #include "bpf_xdp.c"

    struct {
        __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
        __uint(key_size, sizeof(__u32));
        __uint(max_entries, 2);
        __array(values, int());
    } entry_call_map __section(".maps") = {
        .values = {
            [0] = &cil_xdp_entry,
        },
    };

    PKTGEN("xdp", "l2_example")
    int test1_pktgen(struct __ctx_buff *ctx)
    {
        /* Create room for our packet to be crafted */
        unsigned int data_len = ctx->data_end - ctx->data;
        int offset = sizeof(struct ethhdr) - data_len;

        bpf_xdp_adjust_tail(ctx, offset);

        void *data = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        if (data + sizeof(struct ethhdr) > data_end)
            return TEST_ERROR;

        /* Writing just the L2 header for brevity */
        struct ethhdr l2 = {
            .h_source = {0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF},
            .h_dest = {0x12, 0x23, 0x34, 0x45, 0x56, 0x67},
            .h_proto = bpf_htons(ETH_P_IP)
        };
        memcpy(data, &l2, sizeof(struct ethhdr));

        return 0;
    }

    SETUP("xdp", "l2_example")
    int test1_setup(struct __ctx_buff *ctx)
    {
        /* OMITTED setting up map state */

        /* Jump into the entrypoint */
        tail_call_static(ctx, entry_call_map, 0);
        /* Fail if we didn't jump */
        return TEST_ERROR;
    }

    CHECK("xdp", "l2_example")
    int test1_check(__maybe_unused const struct __ctx_buff *ctx)
    {
        test_init();
        void *data = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;

        if (data + sizeof(__u32) > data_end)
            test_fatal("status code out of bounds");

        __u32 *status_code = data;

        if (*status_code != XDP_TX)
            test_fatal("status code != XDP_TX");

        data += sizeof(__u32);

        if (data + sizeof(struct ethhdr) > data_end)
            test_fatal("ctx doesn't fit ethhdr");

        struct ethhdr *l2 = data;

        data += sizeof(struct ethhdr);

        if (memcmp(l2->h_source, fib_smac, sizeof(fib_smac)))
            test_fatal("l2->h_source != fib_smac");

        if (memcmp(l2->h_dest, fib_dmac, sizeof(fib_dmac)))
            test_fatal("l2->h_dest != fib_dmac");

        if (data + sizeof(struct iphdr) > data_end)
            test_fatal("ctx doesn't fit iphdr");

        test_finish();
    }

Function reference
------------------

* ``test_log(fmt, args...)`` - writes a log message. The conversion specifiers
  supported by *fmt* are the same as for ``bpf_trace_printk()``. They are
  **%d**, **%i**, **%u**, **%x**, **%ld**, **%li**, **%lu**, **%lx**, **%lld**,
  **%lli**, **%llu**, **%llx**. No modifier (size of field, padding with
  zeroes, etc.) is available.

* ``test_fail()`` - marks the current test or sub-test as failed but continues
  execution.

* ``test_fail_now()`` - marks the current test or sub-test as failed and stops
  execution of the test or sub-test (if called in a sub-test, the other
  sub-tests will still run).
* ``test_fatal(fmt, args...)`` - writes a log and then calls
  ``test_fail_now()``.

* ``assert(stmt)`` - asserts that the statement within is true and calls
  ``test_fail_now()`` otherwise. ``assert`` will log the file and line number
  of the assert statement.

* ``test_skip()`` - marks the current test or sub-test as skipped but
  continues execution.

* ``test_skip_now()`` - marks the current test or sub-test as skipped and
  stops execution of the test or sub-test (if called in a sub-test, the other
  sub-tests will still run).

* ``test_init()`` - initializes the internal state for the test and must be
  called before any of the functions above can be called.

* ``test_finish()`` - submits the results and returns from the current
  function.

.. warning::

    Functions that halt the execution (``test_fail_now()``, ``test_fatal()``,
    ``test_skip_now()``) can't be used within both a sub-test (``TEST``) and
    ``for``, ``while``, or ``switch/case`` blocks, since they use the
    ``break`` keyword to stop a sub-test. These functions can still be used
    from within ``for``, ``while``, and ``switch/case`` blocks if no sub-tests
    are used, because in that case the flow interruption happens via
    ``return``.

Function mocking
----------------

Being able to mock out a function is a great tool to have when creating tests,
for a number of reasons. You might, for example, want to test what happens if
a specific function returns an error, to see if it is handled gracefully. You
might want to proxy function calls to record whether the function under test
actually called specific dependencies. Or you might want to test code that
uses helpers which rely on a state we can't set in BPF, like the routing
table.

Mocking is easy with this framework:

1. Create a function with a unique name and the same signature as the function
   it is replacing.

2. Create a macro with the exact same name as the function we want to replace
   and point it to the function created in step 1.
   For example: ``#define original_function our_mocked_function``.

3. Include the file which contains the definition we are replacing.

The following example mocks out the ``fib_lookup`` helper call and replaces it
with our mocked version, since we don't actually have routes for the IPs we
want to test:

.. code-block:: c

    #include "common.h"

    #include "bpf/ctx/xdp.h"

    #define fib_lookup mock_fib_lookup
    static const char fib_smac[6] = {0xDE, 0xAD, 0xBE, 0xEF, 0x01, 0x02};
    static const char fib_dmac[6] = {0x13, 0x37, 0x13, 0x37, 0x13, 0x37};

    long mock_fib_lookup(__maybe_unused void *ctx, struct bpf_fib_lookup *params,
                         __maybe_unused int plen, __maybe_unused __u32 flags)
    {
        memcpy(params->smac, fib_smac, sizeof(fib_smac));
        memcpy(params->dmac, fib_dmac, sizeof(fib_dmac));
        return 0;
    }

    #include "bpf_xdp.c"
    #include "lib/nodeport.h"

Limitations
-----------

For all its benefits, there are some limitations to this way of testing:

* Code must pass the verifier, so our setup and test code has to obey the same
  rules as other BPF programs. A side effect is that this automatically
  guarantees that all code that passes will also load. The biggest concern is
  the complexity limit on older kernels; this can be somewhat mitigated by
  separating heavy setup work into its own ``SETUP`` program and optionally
  tail calling into the code to be tested, to ensure the testing harness
  doesn't push us over the complexity limit.

* Test functions like ``test_log()``, ``test_fail()``, ``test_skip()`` can
  only be executed within the scope of the main program or a ``TEST``. These
  functions rely on local variables set by ``test_init()`` and will produce
  errors when used in other functions.

* Functions that halt the execution (``test_fail_now()``, ``test_fatal()``,
  ``test_skip_now()``) can't be used within both a sub-test (``TEST``) and
  ``for``, ``while``, or ``switch/case`` blocks, since they use the ``break``
  keyword to stop a sub-test.
  These functions can still be used from within ``for``, ``while``, and
  ``switch/case`` blocks if no sub-tests are used, because in that case the
  flow interruption happens via ``return``.

* Sub-test names can't use more than 127 characters.

* Log messages can't use more than 127 characters and can have no more than
  12 arguments.
https://github.com/cilium/cilium/blob/main//Documentation/contributing/testing/bpf.rst
.. _integration_testing:

Integration Testing
===================

Cilium uses the standard `go test`__ framework. All new tests must use `the
standard test framework`_.

.. _the standard test framework: https://github.com/cilium/cilium/issues/16860

.. _integration_testing_prerequisites:

Prerequisites
^^^^^^^^^^^^^

Some tests interact with the kvstore and depend on a local kvstore instance of
etcd. To start the local instance, run:

.. code-block:: shell-session

    $ make start-kvstores

Running all tests
^^^^^^^^^^^^^^^^^

To run integration tests over the entire repository, run the following command
in the project root directory:

.. code-block:: shell-session

    $ make integration-tests

To run just unit tests, run:

.. code-block:: shell-session

    $ go test ./...

Testing individual packages
^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to test individual packages by invoking ``go test`` directly.
``cd`` into the package subject to testing and invoke ``go test``:

.. code-block:: shell-session

    $ cd pkg/kvstore
    $ go test

Integration tests have some prerequisites (see
:ref:`integration_testing_prerequisites`); you can use the following command
to automatically set up the prerequisites, run the unit tests, and tear down
the prerequisites:

.. code-block:: shell-session

    $ make integration-tests TESTPKGS=./pkg/kvstore

Some tests are marked as 'privileged' if they require the test suite to be run
as a privileged user or with a given set of capabilities. They are skipped by
default when running ``go test``. There are a few ways to run privileged
tests:

1. Run the whole test suite with sudo:

   .. code-block:: shell-session

       $ sudo make tests-privileged

2. To narrow down the packages under test, specify ``TESTPKGS``.
   Note that this takes the Go package pattern syntax, including the ``...``
   wildcard specifier:

   .. code-block:: shell-session

       $ sudo make tests-privileged TESTPKGS="./pkg/datapath/linux ./pkg/maps/..."

3. Set the ``PRIVILEGED_TESTS`` environment variable and run ``go test``
   directly. This only escalates privileges when executing the test binaries;
   the ``go build`` process is run unprivileged:

   .. code-block:: shell-session

       $ PRIVILEGED_TESTS=true go test -exec "sudo -E" ./pkg/ipam

Automatically run unit tests on code changes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The script ``contrib/shell/test.sh`` contains some helpful bash functions to
improve the feedback cycle between writing tests and seeing their results. If
you're writing unit tests in a particular package, the ``watchtest`` function
will watch for changes in a directory and run the unit tests for that package
any time the files change. For example, if writing unit tests in
``pkg/policy``, run this in a terminal next to your editor:

.. code-block:: shell-session

    $ . contrib/shell/test.sh
    $ watchtest pkg/policy

This shell script depends on the ``inotify-tools`` package on Linux.
https://github.com/cilium/cilium/blob/main//Documentation/contributing/testing/unit.rst
.. _testsuite-legacy:

End-To-End Testing Framework (Legacy)
=====================================

.. warning::

    The Ginkgo end-to-end testing framework is deprecated. New end-to-end
    tests should be implemented using the `cilium-cli`_ connectivity testing
    framework. For more information, see :ref:`testsuite`.

Introduction
~~~~~~~~~~~~

This section provides an overview of the two modes available for running
Cilium's end-to-end tests locally: Kubeconfig, and similar to GitHub Actions
(GHA). It offers instructions on setting up and running tests in these modes.

Before proceeding, it is recommended to familiarize yourself with Ginkgo by
reading the `Ginkgo Getting-Started Guide`_. You can also run the `example
tests`_ to get a feel for the Ginkgo workflow.

The tests in the ``test`` directory are built on top of Ginkgo and utilize the
Ginkgo ``focus`` concept to determine which Kubernetes nodes are necessary to
run specific tests. All test names must begin with one of the following
prefixes:

- ``Runtime``: Tests Cilium in a runtime environment running on a single
  node.
- ``K8s``: Sets up a small multi-node Kubernetes environment for testing
  features beyond a single host and Kubernetes-specific functionalities.

Running Tests with GitHub Actions (GHA)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

GitHub Actions provide an alternative mode for running Cilium's end-to-end
tests. The configuration is set up to closely match the environment used in
GHA. Refer to the relevant documentation for instructions on running tests
using GHA.
Running End-To-End Tests
~~~~~~~~~~~~~~~~~~~~~~~~

Running Ginkgo Tests Locally Based on Ginkgo's GitHub Workflow
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Although it is not possible to run ``conformance-ginkgo.yaml`` or
``conformance-runtime.yaml`` locally, it is possible to set up an environment
similar to the one used on GitHub.

The following example provides the steps to run one of the tests of the focus
``f09-datapath-misc-2`` on Kubernetes ``1.27`` with the kernel ``net-next``
for the commit SHA ``7b368923823e63c9824ea2b5ee4dc026bc4d5cd8``. You can also
perform these steps automatically using the script
``contrib/scripts/run-gh-ginkgo-workflow.sh``. Run this script with ``-h`` for
usage information.

#. Download dependencies locally (``helm``, ``ginkgo``).

   For ``helm``, the installation instructions can be found `here`_:

   .. code-block:: shell-session

       $ HELM_VERSION=v3.13.1
       $ wget "https://get.helm.sh/helm-${HELM_VERSION}-linux-amd64.tar.gz"
       $ tar -xf "helm-${HELM_VERSION}-linux-amd64.tar.gz"
       $ mv linux-amd64/helm ./helm

   Store these dependencies under a specific directory that will be used to
   run Qemu in the next steps.

   For ``ginkgo``, we will be using the same version used on GitHub Actions:

   .. code-block:: shell-session

       $ cd ~/
       $ go install github.com/onsi/ginkgo/ginkgo@v1.16.5
       $ ${GOPATH}/bin/ginkgo version
       Ginkgo Version 1.16.5

#. Build the Ginkgo tests locally. This will create a binary named
   ``test.test`` which we can use later on to run our tests:

   .. code-block:: shell-session

       $ cd github.com/cilium/cilium/test
       $ ${GOPATH}/bin/ginkgo build

#. Provision VMs using Qemu:

   * Retrieve the image tag for the k8s and kernel versions that will be used
     for testing by checking the file
     ``.github/actions/ginkgo/main-k8s-versions.yaml``.
     For example:

     - kernel: ``bpf-next-20230526.105339@sha256:4133d4e09b1e86ac175df8d899873180281bb4220dc43e2566c47b0241637411``
     - k8s: ``kindest/node:v1.27.1@sha256:b7d12ed662b873bd8510879c1846e87c7e676a79fefc93e17b2a52989d3ff42b``

   * Store the compressed VM image under a directory (``/tmp/_images``):

     .. code-block:: shell-session

         $ mkdir -p /tmp/_images
         $ kernel_tag="bpf-next-20230526.105339@sha256:4133d4e09b1e86ac175df8d899873180281bb4220dc43e2566c47b0241637411"
         $ docker run -v /tmp/_images:/mnt/images \
             "quay.io/lvh-images/kind:${kernel_tag}" \
             cp -r /data/images/. /mnt/images/

   * Uncompress the VM image into a directory:

     .. code-block:: shell-session

         $ zstd -d /tmp/_images/kind_*.qcow2.zst -o /tmp/_images/datapath-conformance.qcow2

   * Provision the VM. **Qemu will use the current terminal to provision the
     VM and will mount the current directory into the VM under** ``/host``:

     .. code-block:: shell-session

         $ qemu-system-x86_64 \
             -nodefaults \
             -no-reboot \
             -smp 4 \
             -m 12G \
             -enable-kvm \
             -cpu host \
             -drive file=/tmp/_images/datapath-conformance.qcow2,if=virtio,index=0,media=disk \
             -netdev user,id=user.0,hostfwd=tcp::2222-:22 \
             -device virtio-net-pci,netdev=user.0 \
             -fsdev local,id=host_id,path=./,security_model=none \
             -device virtio-9p-pci,fsdev=host_id,mount_tag=host_mount \
             -serial mon:stdio

#. Install dependencies in the VM (``helm``):

   .. code-block:: shell-session
       $ ssh -p 2222 -o "StrictHostKeyChecking=no" root@localhost
       # echo "nameserver 8.8.8.8" > /etc/resolv.conf
       # git config --global --add safe.directory /host
       # cp /host/helm /usr/bin

.. _install_kind:

#. The VM is ready to be used for tests. Similarly to the GitHub Action, Kind
   will also be used to run the CI. The provisioning of Kind differs depending
   on the kernel version that is used, i.e., ginkgo tests are meant to run
   differently when running on bpf-next:

   .. code-block:: shell-session

       $ ssh -p 2222 -o "StrictHostKeyChecking=no" root@localhost
       # cd /host/
       # kernel_tag="bpf-next-20230526.105339@sha256:4133d4e09b1e86ac175df8d899873180281bb4220dc43e2566c47b0241637411"
       # kubernetes_image="kindest/node:v1.27.1@sha256:b7d12ed662b873bd8510879c1846e87c7e676a79fefc93e17b2a52989d3ff42b"
       # ip_family="dual" # replace with "ipv4" if k8s 1.19
       #
       # if [[ "${kernel_tag}" == bpf-next-* ]]; then
       #   ./contrib/scripts/kind.sh "" 2 "" "${kubernetes_image}" "none" "${ip_family}"
       #   kubectl label node kind-worker2 cilium.io/ci-node=kind-worker2
       #   # Avoid re-labeling this node by setting "node-role.kubernetes.io/controlplane"
       #   kubectl label node kind-worker2 node-role.kubernetes.io/controlplane=
       # else
       #   ./contrib/scripts/kind.sh "" 1 "" "${kubernetes_image}" "iptables" "${ip_family}"
       # fi
       # git config --global --add safe.directory /cilium

   Verify that kind is running inside the VM:
code-block:: shell-session $ ssh -p 2222 -o "StrictHostKeyChecking=no" root@localhost # kubectl get pods -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-787d4945fb-hqzpb 0/1 Pending 0 42s kube-system coredns-787d4945fb-tkq86 0/1 Pending 0 42s kube-system etcd-kind-control-plane 1/1 Running 0 57s kube-system kube-apiserver-kind-control-plane 1/1 Running 0 57s kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 56s kube-system kube-scheduler-kind-control-plane 1/1 Running 0 56s local-path-storage local-path-provisioner-6bd6454576-648bk 0/1 Pending 0 42s #. Now that Kind is provisioned, the tests can be executed inside the VM. Let us first retrieve the focus regex, under ``cliFocus``, of ``f09-datapath-misc-2`` from ``.github/actions/ginkgo/main-focus.yaml``. \* ``cliFocus="K8sDatapathConfig Check|K8sDatapathConfig IPv4Only|K8sDatapathConfig High-scale|K8sDatapathConfig Iptables|K8sDatapathConfig IPv4Only|K8sDatapathConfig IPv6|K8sDatapathConfig Transparent"`` Run the binary ``test.test`` that was compiled in the previous step. The following code block is exactly the same as used on the GitHub workflow with one exception: the flag ``-cilium.holdEnvironment=true``. This flag will hold the testing environment in case the test fails to allow for further diagnosis of the current cluster. .. 
code-block:: shell-session $ ssh -p 2222 -o "StrictHostKeyChecking=no" root@localhost # cd /host/test # kernel\_tag="bpf-next-20230526.105339@sha256:4133d4e09b1e86ac175df8d899873180281bb4220dc43e2566c47b0241637411" # k8s\_version="1.27" # # export K8S\_NODES=2 # export NETNEXT=0 # export K8S\_VERSION="${k8s\_version}" # export CNI\_INTEGRATION=kind # export INTEGRATION\_TESTS=true # # if [[ "${kernel\_tag}" == bpf-next-\* ]]; then # export KERNEL=net-next # export NETNEXT=1 # export KUBEPROXY=0 # export K8S\_NODES=3 # export NO\_CILIUM\_ON\_NODES=kind-worker2 # elif [[ "${kernel\_tag}" == 5.4-\* ]]; then # export KERNEL=54 # fi # # # GitHub actions do not support IPv6 connectivity to outside # # world. If the infrastructure environment supports it, then # # this line can be removed # export CILIUM\_NO\_IPV6\_OUTSIDE=true # # commit\_sha="7b368923823e63c9824ea2b5ee4dc026bc4d5cd8" # cliFocus="K8sDatapathConfig Check|K8sDatapathConfig IPv4Only|K8sDatapathConfig High-scale|K8sDatapathConfig Iptables|K8sDatapathConfig IPv4Only|K8sDatapathConfig IPv6|K8sDatapathConfig Transparent" # quay\_org="cilium" # # ./test.test \ --ginkgo.focus="${cliFocus}" \ --ginkgo.skip="" \ --ginkgo.seed=1679952881 \ --ginkgo.v -- \ -cilium.image=quay.io/${quay\_org}/cilium-ci \ -cilium.tag=${commit\_sha} \ -cilium.operator-image=quay.io/${quay\_org}/operator \ -cilium.operator-tag=${commit\_sha} \ -cilium.hubble-relay-image=quay.io/${quay\_org}/hubble-relay-ci \ -cilium.hubble-relay-tag=${commit\_sha} \ -cilium.kubeconfig=/root/.kube/config \ -cilium.operator-suffix=-ci \ -cilium.holdEnvironment=true Using CNI\_INTEGRATION="kind" Running Suite: Suite-k8s-1.27 ============================= Random Seed: 1679952881 Will run 7 of 132 specs #. Wait until the test execution completes. .. code-block:: shell-session Ran 7 of 132 Specs in 721.007 seconds SUCCESS! -- 7 Passed | 0 Failed | 0 Pending | 125 Skipped #. Clean up. Once tests are performed, terminate qemu to halt the VM: .. 
code-block:: shell-session $ pkill qemu-system-x86 The VM state is kept in ``/tmp/\_images/datapath-conformance.qcow2`` and the dependencies are installed. Thus steps up to and excluding step :ref:`installing kind ` can
Running Runtime Tests
^^^^^^^^^^^^^^^^^^^^^

To run all of the runtime tests, execute the following command from the
``test`` directory:

.. code-block:: shell-session

   INTEGRATION_TESTS=true ginkgo --focus="Runtime"

Ginkgo searches for all tests in all subdirectories that are "named"
beginning with the string "Runtime" and contain any characters after it. For
instance, here is an example showing which tests will be run using Ginkgo's
dryRun option:

.. code-block:: shell-session

   $ INTEGRATION_TESTS=true ginkgo --focus="Runtime" -dryRun
   Running Suite: runtime
   ======================
   Random Seed: 1516125117
   Will run 42 of 164 specs
   ................
   RuntimePolicyEnforcement Policy Enforcement Always
     Always to Never with policy
     /Users/ianvernon/go/src/github.com/cilium/cilium/test/runtime/Policies.go:258
   •
   ------------------------------
   RuntimePolicyEnforcement Policy Enforcement Always
     Always to Never without policy
     /Users/ianvernon/go/src/github.com/cilium/cilium/test/runtime/Policies.go:293
   •
   ------------------------------
   RuntimePolicyEnforcement Policy Enforcement Never
     Container creation
     /Users/ianvernon/go/src/github.com/cilium/cilium/test/runtime/Policies.go:332
   •
   ------------------------------
   RuntimePolicyEnforcement Policy Enforcement Never
     Never to default with policy
     /Users/ianvernon/go/src/github.com/cilium/cilium/test/runtime/Policies.go:349
   .................
   Ran 42 of 164 Specs in 0.002 seconds
   SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 122 Skipped PASS

   Ginkgo ran 1 suite in 1.830262168s
   Test Suite Passed

The output has been truncated.
For more information about this functionality, consult the aforementioned
Ginkgo documentation.

Available CLI Options
^^^^^^^^^^^^^^^^^^^^^

For more advanced workflows, check the list of available custom options for
the Cilium framework in the ``test/`` directory and interact with ginkgo
directly:

.. code-block:: shell-session

   $ cd test/
   $ ginkgo . -- -cilium.help
     -cilium.SSHConfig string
           Specify a custom command to fetch SSH configuration (eg: 'vagrant ssh-config')
     -cilium.help
           Display this help message.
     -cilium.holdEnvironment
           On failure, hold the environment in its current state
     -cilium.hubble-relay-image string
           Specifies which image of hubble-relay to use during tests
     -cilium.hubble-relay-tag string
           Specifies which tag of hubble-relay to use during tests
     -cilium.image string
           Specifies which image of cilium to use during tests
     -cilium.kubeconfig string
           Kubeconfig to be used for k8s tests
     -cilium.multinode
           Enable tests across multiple nodes. If disabled, such tests may silently pass (default true)
     -cilium.operator-image string
           Specifies which image of cilium-operator to use during tests
     -cilium.operator-tag string
           Specifies which tag of cilium-operator to use during tests
     -cilium.passCLIEnvironment
           Pass the environment invoking ginkgo, including PATH, to subcommands
     -cilium.showCommands
           Output which commands are ran to stdout
     -cilium.skipLogs
           skip gathering logs if a test fails
     -cilium.tag string
           Specifies which tag of cilium to use during tests
     -cilium.testScope string
           Specifies scope of test to be ran (k8s, runtime)
     -cilium.timeout duration
           Specifies timeout for test run (default 24h0m0s)

   Ginkgo ran 1 suite in 4.312100241s
   Test Suite Failed

For more information about other built-in options to Ginkgo, consult the
`ginkgo-documentation`_.
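The ``--focus`` values passed to ginkgo above are interpreted as Go regular
expressions matched against the full spec names. As a quick illustration,
here is a standalone sketch using only the standard library; the spec names
are invented for this example and are not real Cilium specs:

```go
package main

import (
	"fmt"
	"regexp"
)

// focusRe mirrors a focus value such as --focus "Runtime.*L7": it is
// compiled as a Go regular expression.
var focusRe = regexp.MustCompile("Runtime.*L7")

// focused reports whether a spec name matches the focus expression.
func focused(name string) bool {
	return focusRe.MatchString(name)
}

func main() {
	// Invented spec names, for illustration only.
	for _, s := range []string{
		"RuntimePolicyEnforcement L7 proxy policy",
		"K8sDatapathConfig Check connectivity",
	} {
		fmt.Printf("%s -> focused=%v\n", s, focused(s))
	}
}
```

Any spec whose name matches the expression is run; all others are skipped,
which is exactly how the more elaborate ``cliFocus`` alternations used in CI
select a subset of the suite.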
.. _ginkgo-documentation: https://onsi.github.io/ginkgo/

Running Specific Tests Within a Test Suite
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you want to run one specific test, there are a few options:

* By modifying code: add the prefix "FIt" on the test you want to run; this
  marks the test as focused. Ginkgo will skip other tests and will only run
  the "focused" test. For more information, consult the `Focused Specs`_
  documentation from Ginkgo.

  .. code-block:: go

     It("Example test", func(){
         Expect(true).Should(BeTrue())
     })

     FIt("Example focused test", func(){
         Expect(true).Should(BeTrue())
     })

* From the command line: specify a more granular focus if you want to focus
  on, say, Runtime L7 tests:

  .. code-block:: shell-session

     INTEGRATION_TESTS=true ginkgo --focus "Runtime.*L7"
  This will focus on tests that contain "Runtime", followed by any number of
  any characters, followed by "L7". ``--focus`` is a regular expression, and
  quotes are required if it contains spaces and to escape shell expansion of
  ``*``.

.. _Focused Specs: https://onsi.github.io/ginkgo/#focused-specs

Compiling the tests without running them
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To validate that the Go code you've written for testing is correct without
needing to run the full test, you can build the test directory:

.. code-block:: shell-session

   make -C test/ build

Updating Cilium images for Kubernetes tests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When running the CI suite for a feature under development, it is common to
re-run the suite on the CI VMs running on a local development machine after
applying some changes to Cilium. For this, the new Cilium images have to be
built and then used by the CI suite. To do so, run the following commands on
the ``k8s1`` VM:

.. code-block:: shell-session

   cd go/src/github.com/cilium/cilium
   make LOCKDEBUG=1 docker-cilium-image
   docker tag quay.io/cilium/cilium:latest \
       k8s1:5000/cilium/cilium-dev:latest
   docker push k8s1:5000/cilium/cilium-dev:latest
   make -B LOCKDEBUG=1 docker-operator-generic-image
   docker tag quay.io/cilium/operator-generic:latest \
       k8s1:5000/cilium/operator-generic:latest
   docker push k8s1:5000/cilium/operator-generic:latest

The commands were adapted from the ``test/provision/compile.sh`` script.

Test Reports
~~~~~~~~~~~~

The Cilium Ginkgo framework formulates JUnit reports for each test.
The following files are currently generated, depending upon the test suite
that is run:

* runtime.xml
* K8s.xml

Best Practices for Writing Tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* Provide informative output to console during a test using the `By
  construct`_. This helps with debugging and gives those who did not write
  the test a good idea of what is going on. The lower the barrier of entry
  is for understanding tests, the better our tests will be!
* Leave the testing environment in the same state that it was in when the
  test started by deleting resources, resetting configuration, etc.
* Gather logs in the case that a test fails. If a test fails while running
  on Ginkgo, a postmortem needs to be done to analyze why. So, dumping logs
  to a location where Ginkgo can pick them up is of the highest imperative.
  Use the following code in an ``AfterFailed`` method:

  .. code-block:: go

     AfterFailed(func() {
         vm.ReportFailed()
     })

Ginkgo Extensions
~~~~~~~~~~~~~~~~~

In Cilium, some Ginkgo features are extended to cover some use cases that
are useful for testing Cilium.

BeforeAll
^^^^^^^^^

This function will run before all `BeforeEach`_ within a `Describe or
Context`_. This method is an equivalent to ``SetUp`` or initialize functions
in common unit test frameworks.

.. _BeforeEach: https://onsi.github.io/ginkgo/#extracting-common-setup-beforeeach
.. _Describe or Context: https://onsi.github.io/ginkgo/#organizing-specs-with-container-nodes

AfterAll
^^^^^^^^

This method will run after all `AfterEach`_ functions defined in a `Describe
or Context`_. This method is used for tearing down objects created and used
by all ``Its`` within the given ``Context`` or ``Describe``. It is run after
all ``Its`` have run; this method is an equivalent to ``tearDown`` or
``finalize`` methods in common unit test frameworks. A good use case for the
``AfterAll`` method is to remove containers or pods that are needed for
multiple ``Its`` in the given ``Context`` or ``Describe``.
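The relative ordering of these extended hooks can be pictured with a small
standalone simulation. This is plain Go, not real Ginkgo code; the hook and
test names are just labels used to illustrate the ordering described above:

```go
package main

import "fmt"

// runSuite emulates the ordering that the extensions provide: BeforeAll
// once before the first test, AfterEach after every test, and AfterAll
// once after the last test. This is an illustration only, not the Ginkgo
// scheduler.
func runSuite(tests []string) []string {
	var order []string
	order = append(order, "BeforeAll")
	for _, t := range tests {
		order = append(order, t, "AfterEach")
	}
	order = append(order, "AfterAll")
	return order
}

func main() {
	for _, step := range runSuite([]string{"TESTA1", "TESTA2"}) {
		fmt.Println(step)
	}
}
```

For two tests this prints ``BeforeAll``, ``TESTA1``, ``AfterEach``,
``TESTA2``, ``AfterEach``, ``AfterAll``, mirroring the setup-once and
teardown-once semantics described above.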
.. _AfterEach: BeforeEach_

JustAfterEach
^^^^^^^^^^^^^

This method will run just after each test and before ``AfterFailed`` and
``AfterEach``. The main reason for this method is to perform some assertions
for a group of tests. A good example of using a global ``JustAfterEach``
function is for deadlock detection, which checks the Cilium logs for
deadlocks that may have occurred in the duration of the tests.
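As a rough sketch of what such a log check could look like, consider the
following. This is illustrative plain Go, not the actual Cilium helper, and
the log lines are invented:

```go
package main

import (
	"fmt"
	"strings"
)

// hasDeadlock scans captured agent logs for a tell-tale deadlock message,
// the kind of assertion a global JustAfterEach could run after every test.
func hasDeadlock(logs string) bool {
	for _, line := range strings.Split(logs, "\n") {
		if strings.Contains(strings.ToLower(line), "deadlock") {
			return true
		}
	}
	return false
}

func main() {
	// Invented log lines, for illustration only.
	logs := "level=info msg=\"Initializing daemon\"\n" +
		"level=error msg=\"POTENTIAL DEADLOCK: goroutine stuck on mutex\""
	fmt.Println(hasDeadlock(logs))
}
```

Running such a check after every test turns an intermittent hang into an
immediate, attributable failure.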
AfterFailed
^^^^^^^^^^^

This method will run before all ``AfterEach`` and after ``JustAfterEach``.
This function is only called when the test failed. This construct is used to
gather logs, the status of Cilium, etc., which provide data for analysis
when tests fail.

Example Test Layout
^^^^^^^^^^^^^^^^^^^

Here is an example layout of how a test may be written with the
aforementioned constructs:

Test description diagram::

    Describe
        BeforeAll(A)
        AfterAll(A)
        AfterFailed(A)
        AfterEach(A)
        JustAfterEach(A)
        TESTA1
        TESTA2
        TESTA3
        Context
            BeforeAll(B)
            AfterAll(B)
            AfterFailed(B)
            AfterEach(B)
            JustAfterEach(B)
            TESTB1
            TESTB2
            TESTB3

Test execution flow::

    Describe
        BeforeAll
        TESTA1; JustAfterEach(A), AfterFailed(A), AfterEach(A)
        TESTA2; JustAfterEach(A), AfterFailed(A), AfterEach(A)
        TESTA3; JustAfterEach(A), AfterFailed(A), AfterEach(A)
        Context
            BeforeAll(B)
            TESTB1: JustAfterEach(B); JustAfterEach(A)
                    AfterFailed(B); AfterFailed(A);
                    AfterEach(B); AfterEach(A);
            TESTB2: JustAfterEach(B); JustAfterEach(A)
                    AfterFailed(B); AfterFailed(A);
                    AfterEach(B); AfterEach(A);
            TESTB3: JustAfterEach(B); JustAfterEach(A)
                    AfterFailed(B); AfterFailed(A);
                    AfterEach(B); AfterEach(A);
            AfterAll(B)
        AfterAll(A)

Debugging
~~~~~~~~~

You can retrieve all run commands and their output in the report directory
(``./test/test_results``). Each test creates a new folder, which contains a
file called ``log`` where all information is saved; in the case of a failing
test, exhaustive data will be added.
.. code-block:: shell-session

   $ head test/test_results/RuntimeKafkaKafkaPolicyIngress/logs
   level=info msg=Starting testName=RuntimeKafka
   level=info msg="Vagrant: running command \"vagrant ssh-config runtime\""
   cmd: "sudo cilium-dbg status" exitCode: 0
   KVStore:            Ok   Etcd: 172.17.0.3:4001
   ContainerRuntime:   Ok
   Kubernetes:         Disabled
   Kubernetes APIs:    [""]
   Cilium:             Ok   OK
   NodeMonitor:        Disabled
   Allocated IPv4 addresses:

Running with delve
^^^^^^^^^^^^^^^^^^

`Delve`_ is a debugging tool for Go applications. If you want to run your
test with delve, you should add a new breakpoint using
`runtime.BreakPoint()`_ in the code, and run ginkgo using ``dlv``.

Example of how to run ginkgo using ``dlv``:

.. code-block:: shell-session

   dlv test . -- --ginkgo.focus="Runtime" -ginkgo.v=true

Running End-To-End Tests In Other Environments via kubeconfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can run the end-to-end tests with an arbitrary kubeconfig file by
specifying the ``--cilium.kubeconfig`` parameter on the Ginkgo command line.
This will skip provisioning the environment and some setup tasks like
labeling nodes for testing.

This mode expects:

- The current directory is ``cilium/test``.

- A test focus with ``--focus``. ``--focus="K8s"`` selects all Kubernetes
  tests. If not passing ``--focus=K8s``, then you must pass
  ``-cilium.testScope=K8s``.

- Cilium images as full URLs specified with the ``--cilium.image`` and
  ``--cilium.operator-image`` options.

- A working kubeconfig with the ``--cilium.kubeconfig`` option.

- A populated ``K8S_VERSION`` environment variable set to the version of the
  cluster.

- If appropriate, the ``CNI_INTEGRATION`` environment variable set to one of
  ``gke``, ``eks``, ``eks-chaining``, ``microk8s`` or ``minikube``. This
  selects matching configuration overrides for Cilium. Leaving this unset
  for non-matching integrations is also correct.
For k8s environments that invoke an authentication agent, such as EKS and
``aws-iam-authenticator``, set ``--cilium.passCLIEnvironment=true``.

An example invocation is:

.. code-block:: shell-session

   INTEGRATION_TESTS=true CNI_INTEGRATION=eks K8S_VERSION=1.16 ginkgo --focus="K8s" -- -cilium.kubeconfig=`echo ~/.kube/config` -cilium.image="quay.io/cilium/cilium-ci" -cilium.operator-image="quay.io/cilium/operator" -cilium.operator-suffix="-ci" -cilium.passCLIEnvironment=true

To run tests with Kind, try:

.. code-block:: shell-session

   K8S_VERSION=1.25 ginkgo --focus=K8s -- --cilium.image=localhost:5000/cilium/cilium-dev -cilium.tag=local --cilium.operator-image=localhost:5000/cilium/operator -cilium.operator-tag=local -cilium.kubeconfig=`echo ~/.kube/config` -cilium.testScope=K8s -cilium.operator-suffix=

Running in GKE
^^^^^^^^^^^^^^

1. Set up a cluster as in :ref:`k8s_install_quick` or utilize an existing
   cluster.

   .. note:: You do not need to deploy Cilium in this step, as the
      End-To-End Testing Framework handles the deployment of Cilium.

   .. note:: The tests require machines larger than ``n1-standard-4``. This
      can be set with ``--machine-type n1-standard-4`` on cluster creation.

2. Invoke the tests from ``cilium/test`` with options set as explained in
   `Running End-To-End Tests In Other Environments via kubeconfig`_.
   .. note:: The tests require the ``NATIVE_CIDR`` environment variable to
      be set to the value of the cluster IPv4 CIDR returned by the ``gcloud
      container clusters describe`` command.

   .. code-block:: shell-session

      export CLUSTER_NAME=cluster1
      export CLUSTER_ZONE=us-west2-a
      export NATIVE_CIDR="$(gcloud container clusters describe $CLUSTER_NAME --zone $CLUSTER_ZONE --format 'value(clusterIpv4Cidr)')"

      INTEGRATION_TESTS=true CNI_INTEGRATION=gke K8S_VERSION=1.17 ginkgo --focus="K8sDemo" -- -cilium.kubeconfig=`echo ~/.kube/config` -cilium.image="quay.io/cilium/cilium-ci" -cilium.operator-image="quay.io/cilium/operator" -cilium.operator-suffix="-ci" -cilium.hubble-relay-image="quay.io/cilium/hubble-relay-ci" -cilium.passCLIEnvironment=true

.. note:: The Kubernetes version defaults to 1.23 but can be configured with
   versions between 1.16 and 1.23. The version should match the server
   version reported by ``kubectl version``.

AKS (experimental)
^^^^^^^^^^^^^^^^^^

.. note:: The tests require the ``NATIVE_CIDR`` environment variable to be
   set to the value of the cluster IPv4 CIDR.

1. Set up a cluster as in :ref:`k8s_install_quick` or utilize an existing
   cluster. You do not need to deploy Cilium in this step, as the End-To-End
   Testing Framework handles the deployment of Cilium.

2. Invoke the tests from ``cilium/test`` with options set as explained in
   `Running End-To-End Tests In Other Environments via kubeconfig`_:
.. code-block:: shell-session

   export NATIVE_CIDR="10.241.0.0/16"

   INTEGRATION_TESTS=true CNI_INTEGRATION=aks K8S_VERSION=1.17 ginkgo --focus="K8s" -- -cilium.kubeconfig=`echo ~/.kube/config` -cilium.passCLIEnvironment=true -cilium.image="mcr.microsoft.com/oss/cilium/cilium" -cilium.tag="1.12.1" -cilium.operator-image="mcr.microsoft.com/oss/cilium/operator" -cilium.operator-suffix="" -cilium.operator-tag="1.12.1"

AWS EKS (experimental)
^^^^^^^^^^^^^^^^^^^^^^

Not all tests can succeed on EKS. Many do, however, and may be useful.
:gh-issue:`9678#issuecomment-749350425` contains a list of tests that are
still failing.

1. Set up a cluster as in :ref:`k8s_install_quick` or utilize an existing
   cluster.

2. Source the testing integration script from
   ``cilium/contrib/testing/integrations.sh``.

3. Invoke the ``gks`` function by passing which ``cilium`` docker image to
   run and the test focus. The command also accepts additional ginkgo
   arguments:

   .. code-block:: shell-session

      gks quay.io/cilium/cilium:latest K8sDemo

Adding new Managed Kubernetes providers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

All Managed Kubernetes test support relies on using a pre-configured
kubeconfig file. This isn't always adequate, however, and adding defaults
specific to each provider is possible. The `commit adding GKE`_ support is a
good reference.

1. Add a map of helm settings to act as an override for this provider in
   `test/helpers/kubectl.go`_. These should be the helm settings used when
   generating cilium specs for this provider.

2. Add a unique `CI Integration constant`_. This value is passed in when
   invoking ginkgo via the ``CNI_INTEGRATION`` environment variable.

3. Update the `helm overrides`_ mapping with the constant and the helm
   settings.

4. For cases where a test should be skipped, use ``SkipIfIntegration``. To
   skip whole contexts, use ``SkipContextIf``. More complex logic can be
   expressed with functions like ``IsIntegration``.
These functions are all part of the `test/helpers`_ package.

Running End-To-End Tests In Other Environments via SSH
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you want to run tests in an arbitrary environment with SSH access, you
can use ``--cilium.SSHConfig`` to provide the SSH configuration of the
endpoint on which tests will be run. The tests presume the following on the
remote instance:

- Cilium source code is located in the directory
  ``/home/$USER/go/src/github.com/cilium/cilium/``.

- Cilium is installed and running.

The SSH connection needs to be defined as a ``ssh-config`` file and needs to
have the following targets:

- runtime: To run runtime tests

- k8s{1..2}-${K8S_VERSION}: to run Kubernetes tests. These instances must
  have Kubernetes installed and running as a prerequisite for running tests.

An example ``ssh-config`` can be the following::

  Host runtime
    HostName 127.0.0.1
    User vagrant
    Port 2222
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
    PasswordAuthentication no
    IdentityFile /home/eloy/.go/src/github.com/cilium/cilium/test/.vagrant/machines/runtime/virtualbox/private_key
    IdentitiesOnly yes
    LogLevel FATAL

To run this, you can use the following command:

.. code-block:: shell-session

   ginkgo -- --cilium.SSHConfig="cat ssh-config"
Environment variables
~~~~~~~~~~~~~~~~~~~~~

There are a variety of configuration options that can be passed as
environment variables:

+----------------------+---------------+---------+--------------------------------------------------------------+
| ENV variable         | Default Value | Options | Description                                                  |
+======================+===============+=========+==============================================================+
| K8S_NODES            | 2             | 0..100  | Number of Kubernetes nodes in the cluster                    |
+----------------------+---------------+---------+--------------------------------------------------------------+
| NO_CILIUM_ON_NODE[S] | none          | \*      | Comma-separated list of K8s nodes that should not run Cilium |
+----------------------+---------------+---------+--------------------------------------------------------------+
| K8S_VERSION          | 1.18          | 1.\*\*  | Kubernetes version to install                                |
+----------------------+---------------+---------+--------------------------------------------------------------+
| KUBEPROXY            | 1             | 0-1     | If 0, Kubernetes' kube-proxy won't be installed              |
+----------------------+---------------+---------+--------------------------------------------------------------+

Further Assistance
~~~~~~~~~~~~~~~~~~

Have a question about how the tests work, or want to chat more about
improving the testing infrastructure for Cilium? Hop on over to the
``#testing`` channel on `Cilium Slack`_.
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _scalability_testing:

Scalability and Performance Testing
===================================

Introduction
~~~~~~~~~~~~

Cilium scalability and performance tests leverage `ClusterLoader2`_. For an
overview of ClusterLoader2, please refer to the `Readme`_ and `Getting
Started`_ guides. At a high level, ClusterLoader2 allows for specifying
states of the cluster, how to transition between them, and what metrics to
measure during the test run. Additionally, it allows for failing the test if
the metrics are not within the expected thresholds.

Overview of existing tests
~~~~~~~~~~~~~~~~~~~~~~~~~~

Tests based on kOps and GCP VMs:

* 100 nodes scale test - ``/scale-100``

  A workflow that executes two test scenarios:

  * Upstream load test
  * Network policy scale test

* FQDN performance test - ``/fqdn-perf``

  A simple two-node workflow that deploys pods with FQDN policies and
  measures the time it takes to resolve FQDNs from a client point of view.

* ClusterMesh scale test - ``/scale-clustermesh``

  A workflow that leverages a mock ClusterMesh control plane to simulate
  large deployments of ClusterMesh.

Test based on EKS:

* Egress Gateway scale test - ``/scale-egw``

  A workflow that tests Egress Gateway on a small cluster, but with
  synthetically created Endpoints and Nodes to simulate a large cluster.

Whenever developing a new test, consider whether you want to add it to an
already existing workflow, create a new one, or extend an existing test. If
you are unsure, you can always ask in the ``#sig-scalability`` Slack
channel.
For example, if you want to run a test on a large cluster, consider adding
it as a separate test scenario to the already existing 100-nodes scale test
to reduce the cost of CI, because spinning up a new cluster and tearing it
down is quite a long process. For some use cases, it might be better to only
simulate a large cluster but execute the test on a small one, as in the case
of the Egress Gateway scale test or the ClusterMesh scale test.

Running CL2 tests locally
~~~~~~~~~~~~~~~~~~~~~~~~~

Each CL2 test should be designed in a way that scales with the number of
nodes. This allows for running a specific test case scenario in a local
environment to validate the test case.

For example, let's run the network policy scale test in a local Kind
cluster. First, set up a Kind cluster with Cilium, as documented in
:ref:`dev_env`. Build the ClusterLoader2 binary from the `perf-tests
repository`_. Then you can run:

.. code-block:: bash

   export CL2_PROMETHEUS_PVC_ENABLED=false
   export CL2_PROMETHEUS_SCRAPE_CILIUM_OPERATOR=true
   export CL2_PROMETHEUS_SCRAPE_CILIUM_AGENT=true
   export CL2_PROMETHEUS_SCRAPE_CILIUM_AGENT_INTERVAL=5s

   ./clusterloader \
       -v=2 \
       --testconfig=.github/actions/cl2-modules/netpol/config.yaml \
       --provider=kind \
       --enable-prometheus-server \
       --nodes=1 \
       --report-dir=./report \
       --prometheus-scrape-kube-proxy=false \
       --prometheus-apiserver-scrape-port=6443 \
       --kubeconfig=$HOME/.kube/config

Some additional options worth mentioning are:

* ``--tear-down-prometheus-server=false`` - Leaves Prometheus and Grafana
  running after the test finishes. This helps speed up the test run when
  running multiple tests in a row, and also allows exploring the metrics in
  Grafana.
* ``--experimental-prometheus-snapshot-to-report-dir=true`` - Creates a
  snapshot of the Prometheus data and saves it to the report directory.

By setting ``deleteAutomanagedNamespaces: false`` in the test config, you
can also leave the test namespaces in place after the test finishes.
This is especially useful for checking whether your test created the
expected resources.

At the end of the output, the test should end successfully with::

    clusterloader.go:252] --------------------------------------------------------------------------------
    clusterloader.go:253] Test Finished
    clusterloader.go:254]   Test: .github/actions/cl2-modules/netpol/config.yaml
    clusterloader.go:255]   Status: Success
    clusterloader.go:259] --------------------------------------------------------------------------------

All the test results are saved in the report directory, ``./report`` in this
case.
Most importantly, it contains:

* ``generatedConfig_netpol.yaml`` - Rendered test scenario.
* ``'GenericPrometheusQuery NetPol Average CPU Usage_netpol_.*.json'`` -
  ``GenericPrometheusQuery`` contains results of the Prometheus queries
  executed during the test. In this example, it contains the CPU usage of
  the Cilium agents. All of the Prometheus queries will be automatically
  visualized in :ref:`perfdash`.
* ``'PodPeriodicCommand.*Profiles-stdout.*'`` - Contains memory and CPU
  profiles gathered during the test run. To understand how to interpret
  them, refer to the :ref:`profiling` subsection.

Accessing Grafana and Prometheus during the test run
""""""""""""""""""""""""""""""""""""""""""""""""""""

During the test execution, ClusterLoader2 deploys Prometheus and Grafana to
the cluster. You can access Grafana and Prometheus by running:

.. code-block:: bash

   kubectl port-forward -n monitoring svc/grafana 3000
   kubectl port-forward -n monitoring svc/prometheus-k8s 9090

This can be especially useful for exploring the metrics and adding
additional queries to the test.

Metrics-based testing and alerting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sometimes, you might want to scrape additional targets during test execution
on top of the default ones. In this case, you can simply create a Pod or
Service monitor (see the `example monitor`_). Then you need to pass it as an
additional argument to ClusterLoader2:
.. code-block:: bash

    ./clusterloader \
        --prometheus-additional-monitors-path=../../.github/actions/cl2-modules/egw/prom-extra-podmons \
        ...

Now you can use the additional metrics in your test by leveraging the
regular ``GenericPrometheusQuery`` measurement. For example, Egress Gateway
ensures that various percentiles of the masquerade latency observed by
clients are `below specific thresholds`_. This can be achieved with the
following measurement in ClusterLoader2:

.. code-block:: text

    - Identifier: MasqueradeDelay{{ .metricsSuffix }}
      Method: GenericPrometheusQuery
      Params:
        action: {{ .action }}
        metricName: Masquerade Delay {{ .metricsSuffix }}
        metricVersion: v1
        unit: s
        enableViolations: true
        queries:
          - name: P95
            query: quantile(0.95, egw_scale_test_masquerade_delay_seconds_total{k8s_instance="{{ .instance }}"})
            threshold: {{ $MASQ_DELAY_THRESHOLD }}

Running tests in CI
~~~~~~~~~~~~~~~~~~~

Once you are happy with the test and have validated it locally, you can
create a PR with the test. You can base your GitHub workflow on the existing
tests, or add a test scenario to an already existing workflow.

Accessing test results from PR or CI runs
"""""""""""""""""""""""""""""""""""""""""

You can run a specific scalability or performance test in your PR. Some
example commands are::

    /scale-100
    /scale-clustermesh
    /scale-egw
    /fqdn-perf

After the test run, all results are saved in a Google Storage bucket. In the
workflow run, you will see a link to the test results at the bottom. For
example, open `test runs`_ and pick one of the runs. You should see a link
like this:

::

    EXPORT_DIR: gs://cilium-scale-results/logs/scale-100-main/1745287079

To see how to install gsutil, check the `Install gsutil`_ section. To list
the results, you can run:

.. code-block:: bash

    gsutil ls -r gs://cilium-scale-results/logs/scale-100-main/1745287079

You can also copy the results to your local machine by running:

.. code-block:: bash

    gsutil -m cp -r gs://cilium-scale-results/logs/scale-100-main/1745287079 .
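If you want to sanity-check a percentile reported by such a measurement against raw samples from a downloaded result, the quantile-vs-threshold comparison can be sketched in a few lines of Python. All names here (``masq_delay_samples``, ``MASQ_DELAY_THRESHOLD``) are illustrative and not part of ClusterLoader2, and Prometheus' ``quantile()`` may interpolate differently than this nearest-rank version:

```python
# Sketch of the check a threshold-bearing GenericPrometheusQuery performs:
# compute a percentile over samples and flag a violation when it exceeds
# the configured threshold. Not ClusterLoader2 code; names are invented.

def percentile(samples, q):
    """Nearest-rank percentile, q in [0, 1]."""
    ordered = sorted(samples)
    idx = min(int(q * len(ordered)), len(ordered) - 1)
    return ordered[idx]

def check_threshold(samples, q, threshold):
    value = percentile(samples, q)
    return {"value": value, "violation": value > threshold}

masq_delay_samples = [0.8, 1.1, 0.9, 1.4, 1.0, 2.5, 1.2]  # seconds
MASQ_DELAY_THRESHOLD = 2.0  # hypothetical threshold

result = check_threshold(masq_delay_samples, 0.95, MASQ_DELAY_THRESHOLD)
print(result)  # {'value': 2.5, 'violation': True}
```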
.. _perfdashdocs:

Visualizing results in Perfdash
"""""""""""""""""""""""""""""""

Perfdash leverages the results exported from ClusterLoader2 and visualizes
them. Currently, we do not host a publicly available instance of Perfdash.
To visualize the results, please check the `Scaffolding repository`_.

As an example, you can check the CPU usage of the Cilium agent:

.. image:: /images/perfdash.png
    :align: center

Note that clicking on the graph redirects you to the Google Cloud Storage
page containing all of the results for the specific test run.
Accessing Prometheus snapshot
"""""""""""""""""""""""""""""

Each test run creates a snapshot of the Prometheus data and saves it to the
report directory. This is enabled by setting
``--experimental-prometheus-snapshot-to-report-dir=true``. Prometheus
snapshots help with debugging, give a good overview of the cluster state
during the test run, and can be used to further improve alerting in CI based
on existing metrics.

For example, a snapshot can be found in the directory
``gs://cilium-scale-results/logs/scale-100-main/1745287079/artifacts/prometheus_snapshot.tar.gz``.
You need to extract it and run Prometheus locally:

.. code-block:: console

    $ tar xvf ./prometheus_snapshot.tar.gz
    prometheus/snapshots/20250422T013829Z-3ee723086c84c32a/
    prometheus/snapshots/20250422T013829Z-3ee723086c84c32a/01JSDJB32JAM1FQ6SN8ESFNDN0/
    prometheus/snapshots/20250422T013829Z-3ee723086c84c32a/01JSDJB32JAM1FQ6SN8ESFNDN0/meta.json
    prometheus/snapshots/20250422T013829Z-3ee723086c84c32a/01JSDJB32JAM1FQ6SN8ESFNDN0/tombstones
    prometheus/snapshots/20250422T013829Z-3ee723086c84c32a/01JSDJB32JAM1FQ6SN8ESFNDN0/index
    prometheus/snapshots/20250422T013829Z-3ee723086c84c32a/01JSDJB32JAM1FQ6SN8ESFNDN0/chunks/
    prometheus/snapshots/20250422T013829Z-3ee723086c84c32a/01JSDJB32JAM1FQ6SN8ESFNDN0/chunks/000001
    $ prometheus --storage.tsdb.path=./prometheus/snapshots/20250422T013829Z-3ee723086c84c32a/ --web.listen-address="0.0.0.0:9092"

To visualize the data, you can run Grafana locally and connect it to the
Prometheus instance.

.. _profiling:

Accessing CPU and memory profiles
"""""""""""""""""""""""""""""""""

All of the scalability tests collect CPU and memory profiles. They are
collected under file names like ``PodPeriodicCommand.*Profiles-stdout.*``.
Each profile is taken periodically during the test run. The simplest way to
visualize them is to leverage `pprof-merge`_.
Example commands to aggregate the CPU and memory profiles from the whole
test run:

.. code-block:: bash

    gsutil -m cp gs://cilium-scale-results/logs/scale-100-main/1745287079/artifacts/PodPeriodicCommand*Profiles-stdout* ./
    for file in *.txt; do mv "$file" "${file%.txt}.tar.gz"; tar xvf "${file%.txt}.tar.gz"; done
    pprof-merge cilium-bugtool*/cmd/pprof-cpu && mv merged.data cpu.pprof
    pprof-merge cilium-bugtool*/cmd/pprof-heap && mv merged.data heap.pprof
    rm -r cilium-bugtool* PodPeriodicCommand*

Then you can visualize the aggregated CPU and memory profiles by running:

.. code-block:: bash

    go tool pprof -http=localhost:8080 cpu.pprof
    go tool pprof -http=localhost:8080 heap.pprof

If you want to compare profiles, you can compare them against a baseline
extracted from a different test run:

.. code-block:: bash

    go tool pprof -http=localhost:8080 --base=baseline_cpu.pprof cpu.pprof
    go tool pprof -http=localhost:8080 --base=baseline_heap.pprof heap.pprof

.. _CL2: https://github.com/kubernetes/perf-tests/tree/master/clusterloader2
.. _CL2_GETTING_STARTED: https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/docs/GETTING_STARTED.md
.. _CL2_README: https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/README.md
.. _CLUSTERMESH_MOCK: https://github.com/cilium/scaffolding/tree/main/cmapisrv-mock
.. _CLUSTERMESH_WORKFLOW: https://github.com/cilium/cilium/blob/main/.github/workflows/scale-test-clustermesh.yaml
.. _EGW_MASQ_METRICS: https://github.com/cilium/cilium/blob/main/.github/actions/cl2-modules/egw/modules/masq-metrics.yaml
.. _EGW_WORKFLOW: https://github.com/cilium/cilium/blob/main/.github/workflows/scale-test-egw.yaml
.. _EXAMPLE_MONITOR: https://github.com/cilium/cilium/blob/main/.github/actions/cl2-modules/egw/prom-extra-podmons/podmonitor.yaml
.. _FQDN_PERF_WORKFLOW: https://github.com/cilium/cilium/blob/main/.github/workflows/fqdn-perf.yaml
.. _GSUTIL_INSTALL: https://cloud.google.com/storage/docs/gsutil_install
.. _NETPOL_SCALE_TEST: https://github.com/cilium/cilium/tree/main/.github/actions/cl2-modules/netpol
.. _PERFDASH: https://github.com/cilium/scaffolding/tree/main/scale-tests
.. _PPROF_MERGE: https://github.com/rakyll/pprof-merge
.. _SCALE_100_WORKFLOW: https://github.com/cilium/cilium/blob/main/.github/workflows/scale-test-100-gce.yaml
.. _SLACK_CHANNEL: https://slack.cilium.io
.. _TEST_RUN: https://github.com/cilium/cilium/actions/workflows/scale-test-100-gce.yaml
.. _UPSTREAM_LOAD_TEST: https://github.com/kubernetes/perf-tests/tree/master/clusterloader2/testing/load
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

cilium-operator
===============

cilium-operator-alibabacloud
----------------------------

.. only:: html

    .. toctree::
        :maxdepth: 0
        :glob:
        :titlesonly:

        cilium-operator-alibabacloud*

cilium-operator-aws
-------------------

.. only:: html

    .. toctree::
        :maxdepth: 0
        :glob:
        :titlesonly:

        cilium-operator-aws*

cilium-operator-azure
---------------------

.. only:: html

    .. toctree::
        :maxdepth: 0
        :glob:
        :titlesonly:

        cilium-operator-azure*

cilium-operator-generic
-----------------------

.. only:: html

    .. toctree::
        :maxdepth: 0
        :glob:
        :titlesonly:

        cilium-operator-generic*

cilium-operator
---------------

.. only:: html

    .. toctree::
        :maxdepth: 0
        :glob:
        :titlesonly:

        cilium-operator
        cilium-operator_*
.. _install_kvstore:

Key-Value Store
===============

+---------------------+--------------------------------------+----------------------+
| Option              | Description                          | Default              |
+---------------------+--------------------------------------+----------------------+
| --kvstore TYPE      | Key Value Store Type: (etcd)         |                      |
+---------------------+--------------------------------------+----------------------+
| --kvstore-opt OPTS  |                                      |                      |
+---------------------+--------------------------------------+----------------------+

etcd
----

When using etcd, one of the following options needs to be provided to
configure the etcd endpoints:

+---------------------+---------+---------------------------------------------------+
| Option              | Type    | Description                                       |
+---------------------+---------+---------------------------------------------------+
| etcd.address        | Address | Address of etcd endpoint                          |
+---------------------+---------+---------------------------------------------------+
| etcd.config         | Path    | Path to an etcd configuration file.               |
+---------------------+---------+---------------------------------------------------+

Example of the etcd configuration file:

.. code-block:: yaml

    ---
    endpoints:
    - https://192.168.0.1:2379
    - https://192.168.0.2:2379
    trusted-ca-file: '/var/lib/cilium/etcd-ca.pem'
    # In case you want client to server authentication
    key-file: '/var/lib/cilium/etcd-client.key'
    cert-file: '/var/lib/cilium/etcd-client.crt'
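As a rough illustration of the constraints on this file, here is a hypothetical, stdlib-only Python checker. It is not part of Cilium; it merely encodes the rules stated above, namely that at least one endpoint is required and that client-to-server authentication needs both ``key-file`` and ``cert-file``:

```python
# Hypothetical checker for the etcd configuration file shown above.
# Line-based parsing keeps this stdlib-only; a real tool would use a YAML parser.

def validate_etcd_config(text):
    lines = [l.strip() for l in text.splitlines()]
    endpoints = [l[2:] for l in lines if l.startswith("- ")]
    has_key = any(l.startswith("key-file:") for l in lines)
    has_cert = any(l.startswith("cert-file:") for l in lines)
    errors = []
    if not endpoints:
        errors.append("at least one endpoint is required")
    if has_key != has_cert:
        errors.append("client auth needs both key-file and cert-file")
    return errors

config = """\
---
endpoints:
- https://192.168.0.1:2379
- https://192.168.0.2:2379
trusted-ca-file: '/var/lib/cilium/etcd-ca.pem'
key-file: '/var/lib/cilium/etcd-client.key'
cert-file: '/var/lib/cilium/etcd-client.crt'
"""
print(validate_etcd_config(config))  # []
```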
.. _node_ipam:

************
Node IPAM LB
************

Node IPAM LoadBalancer is a feature inspired by the k3s "ServiceLB" that
allows you to "advertise" the node's IPs directly inside a Service
LoadBalancer. This feature is especially useful if you don't control the
network you are running on and can't use either the L2 or BGP capabilities
of Cilium.

It works by getting the Node addresses of the selected Nodes and advertising
them. It respects ``.spec.ipFamilies`` to decide whether IPv4 or IPv6
addresses shall be used, and uses the ``ExternalIP`` addresses if any are
present or the ``InternalIP`` addresses otherwise.

If the Service has ``.spec.externalTrafficPolicy`` set to ``Cluster``, Node
IPAM considers all nodes as candidates for selection. Otherwise, if
``.spec.externalTrafficPolicy`` is set to ``Local``, Node IPAM considers all
the Pods selected by the Service (via their EndpointSlices) as candidates.

.. warning::
    Node IPAM does not work properly if ``.spec.externalTrafficPolicy`` is
    set to ``Local`` but no EndpointSlice (or dummy EndpointSlice) is linked
    to the corresponding Service. As a result, you **cannot** set
    ``.spec.externalTrafficPolicy`` to ``Local`` with the Cilium
    implementations for GatewayAPI or Ingress, because Cilium currently uses
    a dummy Endpoints for the Service LoadBalancer (`see here`__). Only the
    Cilium implementation is known to be affected by this limitation.

    Most other implementations are expected to work with this configuration.
    If they don't, check whether the matching EndpointSlices look correct
    and/or try setting ``.spec.externalTrafficPolicy`` to ``Cluster``.

Node IPAM honors the Node label
``node.kubernetes.io/exclude-from-external-load-balancers`` and the Node
taint ``ToBeDeletedByClusterAutoscaler``.
Node IPAM **doesn't** consider a node as a candidate for load balancing if
the label ``node.kubernetes.io/exclude-from-external-load-balancers`` or the
taint ``ToBeDeletedByClusterAutoscaler`` is present.

To restrict the Nodes that should listen for incoming traffic, add the
annotation ``io.cilium.nodeipam/match-node-labels`` to the Service. The
value of the annotation is a `Label Selector`__.

Enable and use Node IPAM
------------------------

To use this feature, your Service must be of type ``LoadBalancer`` and have
the `loadBalancerClass`__ set to ``io.cilium/node``. You can also set
``defaultLBServiceIPAM`` to ``nodeipam`` to use this feature on a Service
that doesn't specify a loadBalancerClass.

Cilium's node IPAM is disabled by default. To install Cilium with the node
IPAM, run:

.. cilium-helm-install::
    :namespace: kube-system
    :set: nodeIPAM.enabled=true

To enable node IPAM on an existing installation, run:

.. cilium-helm-upgrade::
    :namespace: kube-system
    :extra-args: --reuse-values
    :set: nodeIPAM.enabled=true
    :post-commands: kubectl -n kube-system rollout restart deployment/cilium-operator
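The candidate-selection rules described above can be sketched in Python. This is a simplified model, not Cilium's implementation; the node dictionaries and their field names are invented for the example:

```python
import ipaddress

EXCLUDE_LABEL = "node.kubernetes.io/exclude-from-external-load-balancers"
AUTOSCALER_TAINT = "ToBeDeletedByClusterAutoscaler"

def node_ipam_addresses(nodes, ip_families=("IPv4",)):
    """Sketch of the rules above: skip excluded/tainted nodes, prefer
    ExternalIP over InternalIP, and filter by the requested IP families."""
    result = []
    for node in nodes:
        if EXCLUDE_LABEL in node.get("labels", {}):
            continue
        if AUTOSCALER_TAINT in node.get("taints", []):
            continue
        # ExternalIP addresses win if present, InternalIP otherwise.
        addrs = node.get("external_ips") or node.get("internal_ips") or []
        for addr in addrs:
            family = "IPv6" if ipaddress.ip_address(addr).version == 6 else "IPv4"
            if family in ip_families:
                result.append(addr)
    return result

nodes = [
    {"name": "a", "external_ips": ["203.0.113.10"], "internal_ips": ["10.0.0.1"]},
    {"name": "b", "internal_ips": ["10.0.0.2"]},
    {"name": "c", "internal_ips": ["10.0.0.3"], "labels": {EXCLUDE_LABEL: "true"}},
]
print(node_ipam_addresses(nodes))  # ['203.0.113.10', '10.0.0.2']
```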
.. _lb_ipam:

********************************************
LoadBalancer IP Address Management (LB IPAM)
********************************************

LB IPAM is a feature that allows Cilium to assign IP addresses to Services
of type ``LoadBalancer``. This functionality is usually left up to a cloud
provider; however, when deploying in a private cloud environment, these
facilities are not always available.

LB IPAM works in conjunction with features such as :ref:`bgp_control_plane`
and :ref:`l2_announcements`. LB IPAM is responsible for the allocation and
assignment of IPs to Service objects, while the other features are
responsible for load balancing and/or advertisement of these IPs. Use
:ref:`bgp_control_plane` to advertise the IP addresses assigned by LB IPAM
over BGP and :ref:`l2_announcements` to advertise them locally.

LB IPAM is always enabled but dormant. The controller is awoken when the
first IP Pool is added to the cluster.

.. _lb_ipam_pools:

Pools
#####

LB IPAM has the notion of IP Pools which the administrator can create to
tell Cilium which IP ranges can be used to allocate IPs from.

A basic IP Pool with both an IPv4 and an IPv6 range looks like this:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumLoadBalancerIPPool
    metadata:
      name: "blue-pool"
    spec:
      blocks:
      - cidr: "10.0.10.0/24"
      - cidr: "2004::0/112"
      - start: "20.0.20.100"
        stop: "20.0.20.200"

After adding the pool to the cluster, it appears like so:

.. code-block:: shell-session

    $ kubectl get ippools
    NAME        DISABLED   CONFLICTING   IPS AVAILABLE   AGE
    blue-pool   false      False         65892           2s

.. warning::
    Updating an IP pool can result in IP addresses being reassigned, and
    service IPs could change.
See :gh-issue:`40358`.

CIDRs, Ranges and reserved IPs
------------------------------

An IP pool can have multiple blocks of IPs. A block can be specified in CIDR
notation or in a range notation with a start and stop IP, as pictured in
:ref:`lb_ipam_pools`.

When CIDRs are used to specify routable IP ranges, you might not want to
allocate the first and the last IP of a CIDR. Typically, the first IP is the
"network address" and the last IP is the "broadcast address". In some
networks these IPs are not usable, and they do not always play well with all
network equipment.

By default, LB-IPAM uses all IPs in a given CIDR. If you wish to reserve the
first and last IPs of CIDRs, you can set the ``.spec.allowFirstLastIPs``
field to ``No``. This option is ignored for /32 and /31 IPv4 CIDRs and /128
and /127 IPv6 CIDRs, since these only have 1 or 2 IPs respectively.

This setting only applies to blocks specified with ``.spec.blocks[].cidr``
and not to blocks specified with ``.spec.blocks[].start`` and
``.spec.blocks[].stop``.

Service Selectors
-----------------

IP Pools have an optional ``.spec.serviceSelector`` field which allows
administrators to limit which services can get IPs from which pools, using a
`label selector`__. The pool will allocate to any service if no service
selector is specified.

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumLoadBalancerIPPool
    metadata:
      name: "blue-pool"
    spec:
      blocks:
      - cidr: "20.0.10.0/24"
      serviceSelector:
        matchExpressions:
          - {key: color, operator: In, values: [blue, cyan]}

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumLoadBalancerIPPool
    metadata:
      name: "red-pool"
    spec:
      blocks:
      - cidr: "20.0.10.0/24"
      serviceSelector:
        matchLabels:
          color: red

There are a few special-purpose selector fields which don't match on labels
but instead on other metadata like ``.meta.name`` or ``.meta.namespace``.
=============================== ===================
Selector                        Field
=============================== ===================
io.kubernetes.service.namespace ``.meta.namespace``
io.kubernetes.service.name      ``.meta.name``
=============================== ===================

For example:

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumLoadBalancerIPPool
    metadata:
      name: "blue-pool"
    spec:
      blocks:
      - cidr: "20.0.10.0/24"
      serviceSelector:
        matchLabels:
          "io.kubernetes.service.namespace": "tenant-a"
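Returning to the reservation rule from the "CIDRs, Ranges and reserved IPs" subsection above, the effect of ``allowFirstLastIPs`` on the usable-IP count can be sketched with Python's ``ipaddress`` module. This is an illustrative model, not Cilium's allocator:

```python
import ipaddress

def usable_ips(cidr, allow_first_last=True):
    """Sketch of the reservation rule: when allowFirstLastIPs is No, the
    network and broadcast addresses of a CIDR block are not allocated.
    /31 and /32 (and /127 and /128) blocks only have 1 or 2 addresses,
    so the option is ignored for them."""
    net = ipaddress.ip_network(cidr)
    if allow_first_last or net.num_addresses <= 2:
        return net.num_addresses
    return net.num_addresses - 2

print(usable_ips("20.0.10.0/24"))                         # 256
print(usable_ips("20.0.10.0/24", allow_first_last=False)) # 254
print(usable_ips("20.0.10.0/31", allow_first_last=False)) # 2 (option ignored)
```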
Conflicts
---------

IP Pools are not allowed to have overlapping CIDRs. When an administrator
does create pools which overlap, a soft error is caused. The last added pool
will be marked as ``Conflicting`` and no further allocation will happen from
that pool. Therefore, administrators should always check the status of all
pools after making modifications.

For example, if we add two pools (``blue-pool`` and ``red-pool``), both with
the same CIDR, we will see the following:

.. code-block:: shell-session

    $ kubectl get ippools
    NAME        DISABLED   CONFLICTING   IPS AVAILABLE   AGE
    blue-pool   false      False         254             25m
    red-pool    false      True          254             11s

The reason for the conflict is stated in the status and can be accessed like
so:

.. code-block:: shell-session

    $ kubectl get ippools/red-pool -o jsonpath='{.status.conditions[?(@.type=="cilium.io/PoolConflict")].message}'
    Pool conflicts since CIDR '20.0.10.0/24' overlaps CIDR '20.0.10.0/24' from IP Pool 'blue-pool'

or:

.. code-block:: shell-session

    $ kubectl describe ippools/red-pool
    Name:         red-pool
    #[...]
    Status:
      Conditions:
        #[...]
        Last Transition Time:  2022-10-25T14:09:05Z
        Message:               Pool conflicts since CIDR '20.0.10.0/24' overlaps CIDR '20.0.10.0/24' from IP Pool 'blue-pool'
        Observed Generation:   1
        Reason:                cidr_overlap
        Status:                True
        Type:                  cilium.io/PoolConflict
        #[...]

Disabling a Pool
----------------

IP Pools can be disabled. Disabling a pool stops LB IPAM from allocating new
IPs from the pool, but doesn't remove existing allocations.
This allows an administrator to slowly drain a pool or to reserve a pool for
future use.

.. code-block:: yaml

    apiVersion: "cilium.io/v2"
    kind: CiliumLoadBalancerIPPool
    metadata:
      name: "blue-pool"
    spec:
      blocks:
      - cidr: "20.0.10.0/24"
      disabled: true

.. code-block:: shell-session

    $ kubectl get ippools
    NAME        DISABLED   CONFLICTING   IPS AVAILABLE   AGE
    blue-pool   true       False         254             41m

Status
------

The IP Pool's status contains additional counts which can be used to monitor
the number of used and available IPs.

A machine-parsable output can be obtained like so:

.. code-block:: shell-session

    $ kubectl get ippools -o jsonpath='{.items[*].status.conditions[?(@.type!="cilium.io/PoolConflict")]}' | jq
    {
      "lastTransitionTime": "2022-10-25T14:08:55Z",
      "message": "254",
      "observedGeneration": 1,
      "reason": "noreason",
      "status": "Unknown",
      "type": "cilium.io/IPsTotal"
    }
    {
      "lastTransitionTime": "2022-10-25T14:08:55Z",
      "message": "254",
      "observedGeneration": 1,
      "reason": "noreason",
      "status": "Unknown",
      "type": "cilium.io/IPsAvailable"
    }
    {
      "lastTransitionTime": "2022-10-25T14:08:55Z",
      "message": "0",
      "observedGeneration": 1,
      "reason": "noreason",
      "status": "Unknown",
      "type": "cilium.io/IPsUsed"
    }

Or a human-readable output like so:

.. code-block:: shell-session

    $ kubectl describe ippools/blue-pool
    Name:         blue-pool
    Namespace:
    Labels:
    Annotations:
    API Version:  cilium.io/v2
    Kind:         CiliumLoadBalancerIPPool
    #[...]
    Status:
      Conditions:
        #[...]
        Last Transition Time:  2022-10-25T14:08:55Z
        Message:               254
        Observed Generation:   1
        Reason:                noreason
        Status:                Unknown
        Type:                  cilium.io/IPsTotal
        Last Transition Time:  2022-10-25T14:08:55Z
        Message:               254
        Observed Generation:   1
        Reason:                noreason
        Status:                Unknown
        Type:                  cilium.io/IPsAvailable
        Last Transition Time:  2022-10-25T14:08:55Z
        Message:               0
        Observed Generation:   1
        Reason:                noreason
        Status:                Unknown
        Type:                  cilium.io/IPsUsed

Services
########

Any service with ``.spec.type=LoadBalancer`` can get IPs from any pool as
long as the IP Pool's service selector matches the service.
Let's say we add a simple service:

.. code-block:: yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: service-red
      namespace: example
      labels:
        color: red
    spec:
      type: LoadBalancer
      ports:
      - port: 1234

This service will appear like so:

.. code-block:: shell-session

    $ kubectl -n example get svc
    NAME          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    service-red   LoadBalancer   10.96.192.212                 1234:30628/TCP   24s

The ``EXTERNAL-IP`` field has no value yet, which means no LB IPs have been
assigned.

When LB IPAM is unable to allocate or assign IPs for the service, it will
update the service conditions in the status. The service conditions can be
checked like so:

.. code-block:: shell-session

    $ kubectl -n example get svc/service-red -o jsonpath='{.status.conditions}' | jq
    [
      {
        "lastTransitionTime": "2022-10-06T13:40:48Z",
        "message": "There are no enabled CiliumLoadBalancerIPPools that match this service",
        "reason": "no_pool",
        "status": "False",
        "type": "io.cilium/lb-ipam-request-satisfied"
      }
    ]
After updating the service labels to match our ``blue-pool`` from before, we
see:

.. code-block:: shell-session

    $ kubectl -n example get svc
    NAME          TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    service-red   LoadBalancer   10.96.192.212   20.0.10.163   1234:30628/TCP   12m

    $ kubectl -n example get svc/service-red -o jsonpath='{.status.conditions}' | jq
    [
      {
        "lastTransitionTime": "2022-10-06T13:40:48Z",
        "message": "There are no enabled CiliumLoadBalancerIPPools that match this service",
        "reason": "no_pool",
        "status": "False",
        "type": "io.cilium/lb-ipam-request-satisfied"
      },
      {
        "lastTransitionTime": "2022-10-06T13:52:55Z",
        "message": "",
        "reason": "satisfied",
        "status": "True",
        "type": "io.cilium/lb-ipam-request-satisfied"
      }
    ]

IPv4 / IPv6 families + policy
-----------------------------

LB IPAM supports IPv4 and/or IPv6 in SingleStack or `DualStack`__ mode.
Services can use the ``.spec.ipFamilyPolicy`` and ``.spec.ipFamilies``
fields to change the requested IPs.

If ``.spec.ipFamilyPolicy`` isn't specified, ``SingleStack`` mode is
assumed. If both IPv4 and IPv6 are enabled in ``SingleStack`` mode, an IPv4
address is allocated.

If ``.spec.ipFamilyPolicy`` is set to ``PreferDualStack``, LB IPAM will
attempt to allocate both an IPv4 and an IPv6 address if both are enabled on
the cluster. If only IPv4 or only IPv6 is enabled on the cluster, the
service is still considered "satisfied".

If ``.spec.ipFamilyPolicy`` is set to ``RequireDualStack``, LB IPAM will
attempt to allocate both an IPv4 and an IPv6 address.
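The family-selection rules in this subsection can be sketched as a small Python function. This is a simplified model, not Cilium's implementation, and the function and parameter names are invented for the example:

```python
def lb_ipam_families(cluster_ipv4, cluster_ipv6, policy="SingleStack"):
    """Sketch of the ipFamilyPolicy rules: returns the address families to
    allocate and whether the request counts as satisfied."""
    if policy == "SingleStack":
        # With both families enabled, SingleStack gets an IPv4 address.
        if cluster_ipv4:
            return ["IPv4"], True
        return (["IPv6"], True) if cluster_ipv6 else ([], False)
    enabled = [f for f, on in (("IPv4", cluster_ipv4), ("IPv6", cluster_ipv6)) if on]
    if policy == "PreferDualStack":
        # Satisfied even when only one family is enabled on the cluster.
        return enabled, True
    if policy == "RequireDualStack":
        # Unsatisfied unless both families could be allocated.
        return enabled, len(enabled) == 2
    raise ValueError(policy)

print(lb_ipam_families(True, True))                       # (['IPv4'], True)
print(lb_ipam_families(True, False, "PreferDualStack"))   # (['IPv4'], True)
print(lb_ipam_families(True, False, "RequireDualStack"))  # (['IPv4'], False)
```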
The service is considered "unsatisfied" if IPv4 or IPv6 is disabled on the
cluster.

The order of ``.spec.ipFamilies`` has no effect on LB IPAM, but it is
significant for cluster IP allocation, which isn't handled by LB IPAM.

LoadBalancerClass
-----------------

Kubernetes >= v1.24 supports `multiple load balancers`_ in the same cluster.
Picking between load balancers is done with the ``.spec.loadBalancerClass``
field. When LB IPAM is enabled, it allocates and assigns IPs for services
with no load balancer class set.

LB IPAM only does IP allocation and doesn't provide load balancing services
by itself. Therefore, users should pick one of the following Cilium load
balancer classes, all of which use LB IPAM for allocation (if the feature is
enabled):

=============================== ========================
loadBalancerClass               Feature
=============================== ========================
``io.cilium/bgp-control-plane`` :ref:`bgp_control_plane`
``io.cilium/l2-announcer``      :ref:`l2_announcements`
=============================== ========================

If the ``.spec.loadBalancerClass`` is set to a class which isn't handled by
Cilium's LB IPAM, then Cilium's LB IPAM will ignore the service entirely,
not even setting a condition in the status.

By default, if the ``.spec.loadBalancerClass`` field is not set, Cilium's LB
IPAM will assume it can allocate IPs for the service from its configured
pools. If this isn't the desired behavior, you can configure LB-IPAM to only
allocate IPs for services that carry a recognized load balancer class by
setting the following configuration in the Helm chart or ConfigMap:

.. tabs::

    .. group-tab:: Helm

        .. cilium-helm-upgrade::
            :namespace: kube-system
            :extra-args: --reuse-values
            :set: defaultLBServiceIPAM=none

    .. group-tab:: ConfigMap

        .. code-block:: yaml

            default-lb-service-ipam: none

Requesting IPs
--------------

Services can request specific IPs.
The legacy way of doing so is via ``.spec.loadBalancerIP``, which takes a
single IP address. This method was deprecated in k8s v1.24 but is supported
until its future removal. The new way of requesting specific IPs is to use
annotations, ``lbipam.cilium.io/ips`` in the case of Cilium LB IPAM.
This annotation takes a comma-separated list of IP addresses, allowing
multiple IPs to be requested at once. The service selector of the IP Pool
still applies: requested IPs will not be allocated or assigned if the
services don't match the pool's selector.

Don't configure the annotation to request the first or last IP of an IP
pool. They are reserved for the network and broadcast addresses
respectively.

.. code-block:: yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: service-blue
      namespace: example
      labels:
        color: blue
      annotations:
        "lbipam.cilium.io/ips": "20.0.10.100,20.0.10.200"
    spec:
      type: LoadBalancer
      ports:
      - port: 1234

.. code-block:: shell-session

    $ kubectl -n example get svc
    NAME           TYPE           CLUSTER-IP     EXTERNAL-IP               PORT(S)          AGE
    service-blue   LoadBalancer   10.96.26.105   20.0.10.100,20.0.10.200   1234:30363/TCP   43s

Sharing Keys
------------

Services can share the same IP or set of IPs with other services. This is
done by setting the ``lbipam.cilium.io/sharing-key`` annotation on the
service. Services that have the same sharing-key annotation will share the
same IP or set of IPs. The sharing key can be any string.

.. code-block:: yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: service-blue
      namespace: example
      labels:
        color: blue
      annotations:
        "lbipam.cilium.io/sharing-key": "1234"
    spec:
      type: LoadBalancer
      ports:
      - port: 1234

.. code-block:: yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: service-red
      namespace: example
      labels:
        color: red
      annotations:
        "lbipam.cilium.io/sharing-key": "1234"
    spec:
      type: LoadBalancer
      ports:
      - port: 2345

.. code-block:: shell-session

    $ kubectl -n example get svc
    NAME           TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
    service-blue   LoadBalancer   10.96.26.105   20.0.10.100   1234:30363/TCP   43s
    service-red    LoadBalancer   10.96.26.106   20.0.10.100   2345:30131/TCP   43s

As long as the services do not have conflicting ports, they will be
allocated the same IP.
If the services have conflicting ports, they will be allocated different
IPs, which are added to the set of IPs belonging to the sharing key. If a
service has a sharing key and also requests a specific IP, the service will
be allocated the requested IP, and the requested IP will be added to the set
of IPs belonging to that sharing key.

By default, sharing IPs across namespaces is not allowed. To allow sharing
across namespaces, set the ``lbipam.cilium.io/sharing-cross-namespace``
annotation to the namespaces the service can be shared with. The value must
be a comma-separated list of namespaces. The annotation must be present on
both services. You can allow all namespaces with ``*``.
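The port-conflict rule for sharing keys can be sketched as follows. This is an illustrative model, not Cilium's implementation; the service dictionaries are invented for the example:

```python
def can_share_ip(svc_a, svc_b):
    """Sketch of the sharing rule: two services with the same sharing key
    may hold the same IP only when their ports don't conflict."""
    if svc_a["sharing_key"] != svc_b["sharing_key"]:
        return False
    return not (set(svc_a["ports"]) & set(svc_b["ports"]))

blue = {"sharing_key": "1234", "ports": [1234]}
red = {"sharing_key": "1234", "ports": [2345]}
clash = {"sharing_key": "1234", "ports": [1234]}

print(can_share_ip(blue, red))    # True  -> both can hold 20.0.10.100
print(can_share_ip(blue, clash))  # False -> clash gets a different IP
```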
https://github.com/cilium/cilium/blob/main//Documentation/network/lb-ipam.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_l2\_announcements: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* L2 Announcements / L2 Aware LB (Beta) \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* .. include:: ../beta.rst L2 Announcements is a feature which makes services visible and reachable on the local area network. This feature is primarily intended for on-premises deployments within networks without BGP based routing such as office or campus networks. When used, this feature will respond to ARP/NDP queries for ExternalIPs and/or LoadBalancer IPs. These IPs are Virtual IPs (not installed on network devices) on multiple nodes, so for each service one node at a time will respond to ARP/NDP queries and respond with its MAC address. This node will perform load balancing with the service load balancing feature, thus acting as a north/south load balancer. The advantage of this feature over NodePort services is that each service can use a unique IP so multiple services can use the same port numbers. When using NodePorts, it is up to the client to decide to which host to send traffic, and if a node goes down, the IP+Port combo becomes unusable. With L2 announcements the service VIP simply migrates to another node and will continue to work. .. \_l2\_announcements\_settings: Configuration ############# The L2 Announcements feature and all the requirements can be enabled as follows: .. tabs:: .. group-tab:: Helm .. cilium-helm-upgrade:: :namespace: kube-system :extra-args: --reuse-values :set: l2announcements.enabled=true k8sClientRateLimit.qps={QPS} k8sClientRateLimit.burst={BURST} kubeProxyReplacement=true k8sServiceHost=${API\_SERVER\_IP} k8sServicePort=${API\_SERVER\_PORT} .. group-tab:: ConfigMap .. 
code-block:: yaml enable-l2-announcements: true kube-proxy-replacement: true k8s-client-qps: {QPS} k8s-client-burst: {BURST} .. warning:: Sizing the client rate limit (``k8sClientRateLimit.qps`` and ``k8sClientRateLimit.burst``) is important when using this feature due to increased API usage. See :ref:`sizing\_client\_rate\_limit` for sizing guidelines. Prerequisites ############# \* Kube Proxy replacement mode must be enabled. For more information, see :ref:`kubeproxy-free`. \* All devices on which L2 Aware LB will be announced should be enabled and included in the ``--devices`` flag or ``devices`` Helm option if explicitly set, see :ref:`NodePort Devices`. Limitations ########### \* Due to the way L3->L2 translation protocols work, one node receives all ARP/NDP requests for a specific IP, so no load balancing can happen before traffic hits the cluster. \* The feature currently has no traffic balancing mechanism so nodes within the same policy might be asymmetrically loaded. For details see :ref:`l2\_announcements\_leader\_election`. \* The feature is incompatible with the ``externalTrafficPolicy: Local`` on services as it may cause service IPs to be announced on nodes without pods causing traffic drops. Policies ######## Policies provide fine-grained control over which services should be announced, where, and how. This is an example policy using all optional fields: .. code-block:: yaml apiVersion: "cilium.io/v2alpha1" kind: CiliumL2AnnouncementPolicy metadata: name: policy1 spec: serviceSelector: matchLabels: color: blue nodeSelector: matchExpressions: - key: node-role.kubernetes.io/control-plane operator: DoesNotExist interfaces: - ^eth[0-9]+ externalIPs: true loadBalancerIPs: true Service Selector ---------------- The service selector is a `label selector `\_\_ that determines which services are selected by this policy. If no service selector is provided, all services are selected by the policy. 
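For example, a policy can select a single service by name and namespace via the special ``io.kubernetes.service.name`` and ``io.kubernetes.service.namespace`` selector fields (a sketch; the policy, service, and namespace names are illustrative):

```yaml
apiVersion: "cilium.io/v2alpha1"
kind: CiliumL2AnnouncementPolicy
metadata:
  name: announce-service-blue-only
spec:
  serviceSelector:
    matchLabels:
      # These special fields match on metadata rather than labels.
      io.kubernetes.service.namespace: example
      io.kubernetes.service.name: service-blue
  loadBalancerIPs: true
```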
A service must have `loadBalancerClass `__ unspecified or set to
``io.cilium/l2-announcer`` to be selected by a policy for announcement.

There are a few special purpose selector fields which don't match on labels but
instead on other metadata like ``.meta.name`` or ``.meta.namespace``.

=============================== ===================
Selector                        Field
------------------------------- -------------------
io.kubernetes.service.namespace ``.meta.namespace``
io.kubernetes.service.name      ``.meta.name``
=============================== ===================

Node Selector
-------------

The node selector field is a `label selector `__ which determines which nodes are
candidates to announce the services from.

It might be desirable to pick a subset of nodes in your cluster, since the chosen
node (see :ref:`l2_announcements_leader_election`) will act as the north/south load
https://github.com/cilium/cilium/blob/main//Documentation/network/l2-announcements.rst
balancer for all of the traffic for a particular service.

Interfaces
----------

The interfaces field is a list of regular expressions (`golang syntax `__) that
determine over which network interfaces the selected services will be announced.
This field is optional; if not specified, all interfaces will be used. The
expressions are OR-ed together, so any network device matching any of the
expressions will be matched.

L2 announcements only work if the selected devices are also part of the set of
devices specified in the ``devices`` Helm option, see :ref:`NodePort Devices`.

.. note::
   This selector is NOT a security feature, services will still be available via
   interfaces when not advertised (for example by hard-coding ARP/NDP entries).

IP Types
--------

The ``externalIPs`` and ``loadBalancerIPs`` fields determine what sort of IPs are
announced. They are both set to ``false`` by default, so a functional policy should
always have one or both set to ``true``.

If ``externalIPs`` is ``true``, all IPs in the `.spec.externalIPs `__ field are
announced. These IPs are managed by service authors.

If ``loadBalancerIPs`` is ``true``, all IPs in the service's
``.status.loadbalancer.ingress`` field are announced. These can be assigned by
:ref:`lb_ipam`, which can be configured by cluster admins for better control over
which IPs can be allocated.

.. note::
   If a user intends to use ``externalIPs``, the ``externalIPs.enabled=true`` Helm
   option should be set to enable service load balancing for external IPs.

Status
------

If a policy is invalid for any number of reasons, the status of the policy will
reflect that.
For example if an invalid match expression is provided: .. code-block:: shell-session $ kubectl describe l2announcement Name: policy1 Namespace: Labels: Annotations: API Version: cilium.io/v2alpha1 Kind: CiliumL2AnnouncementPolicy Metadata: #[...] Spec: #[...] Service Selector: Match Expressions: Key: something Operator: NotIn Values: Status: Conditions: Last Transition Time: 2023-05-12T15:39:01Z Message: values: Invalid value: []string(nil): for 'in', 'notin' operators, values set can't be empty Observed Generation: 1 Reason: error Status: True Type: io.cilium/bad-service-selector The status of these error conditions will go to ``False`` as soon as the user updates the policy to resolve the error. .. \_l2\_announcements\_leader\_election: Leader Election ############### Due to the way ARP/NDP works, hosts only store one MAC address per IP, that being the latest reply they see. This means that only one node in the cluster is allowed to reply to requests for a given IP. To implement this behavior, every Cilium agent resolves which services are selected for its node and will start participating in leader election for every service. We use Kubernetes `lease mechanism `\_\_ to achieve this. Each service translates to a lease, the lease holder will start replying to requests on the selected interfaces. The lease mechanism is a first come, first serve picking order. So the first node to claim a lease gets it. This might cause asymmetric traffic distribution. Leases ------ The leases are created in the same namespace where Cilium is deployed, typically ``kube-system``. You can inspect the leases with the following command: .. 
code-block:: shell-session $ kubectl -n kube-system get lease NAME HOLDER AGE cilium-l2announce-default-deathstar worker-node 2d20h cilium-operator-resource-lock worker-node2-tPDVulKoRK 2d20h kube-controller-manager control-plane-node\_9bd97f6c-cd0c-4565-8486-e718deb310e4 2d21h kube-scheduler control-plane-node\_2c490643-dd95-4f73-8862-139afe771ffd 2d21h The leases starting with ``cilium-l2announce-`` are leases used by this feature. The last part of the name is the namespace and service name. The holder indicates the name of the node that currently holds the lease and thus
announced the IPs of that given service.

To inspect a lease:

.. code-block:: shell-session

    $ kubectl -n kube-system get lease/cilium-l2announce-default-deathstar -o yaml
    apiVersion: coordination.k8s.io/v1
    kind: Lease
    metadata:
      creationTimestamp: "2023-05-09T15:13:32Z"
      name: cilium-l2announce-default-deathstar
      namespace: kube-system
      resourceVersion: "449966"
      uid: e3c9c020-6e24-4c5c-9df9-d0c50f6c4cec
    spec:
      acquireTime: "2023-05-09T15:14:20.108431Z"
      holderIdentity: worker-node
      leaseDurationSeconds: 3
      leaseTransitions: 1
      renewTime: "2023-05-12T12:15:26.773020Z"

The ``acquireTime`` is the time at which the current leader acquired the lease. The
``holderIdentity`` is the name of the current holder/leader node. If the leader
does not renew the lease for ``leaseDurationSeconds`` seconds, a new leader is
chosen. ``leaseTransitions`` indicates how often the lease changed hands, and
``renewTime`` is the last time the leader renewed the lease.

There are three Helm options that can be tuned with regards to leases:

* ``l2announcements.leaseDuration`` determines the ``leaseDurationSeconds`` value
  of created leases and by extension how long a leader must be "down" before
  failover occurs. Its default value is 15s; it must always be greater than 1s and
  larger than ``leaseRenewDeadline``.

* ``l2announcements.leaseRenewDeadline`` is the interval at which the leader should
  renew the lease.
  Its default value is 5s; it must be greater than ``leaseRetryPeriod`` by at least
  20% and is not allowed to be below ``1ns``.

* ``l2announcements.leaseRetryPeriod`` determines how long the agent should wait
  before it tries again if renewing the lease fails. Its default value is 2s; it
  must be smaller than ``leaseRenewDeadline`` by at least 20% and above ``1ns``.

.. note::
   The theoretical shortest time between failure and failover is
   ``leaseDuration - leaseRenewDeadline`` and the longest
   ``leaseDuration + leaseRenewDeadline``. So with the default values, failover
   occurs between 10s and 20s. For the example below, these times are between 2s
   and 4s.

.. tabs::
    .. group-tab:: Helm

        .. cilium-helm-upgrade::
            :namespace: kube-system
            :extra-args: --reuse-values
            :set: l2announcements.enabled=true kubeProxyReplacement=true k8sServiceHost=${API_SERVER_IP} k8sServicePort=${API_SERVER_PORT} k8sClientRateLimit.qps={QPS} k8sClientRateLimit.burst={BURST} l2announcements.leaseDuration=3s l2announcements.leaseRenewDeadline=1s l2announcements.leaseRetryPeriod=200ms

    .. group-tab:: ConfigMap

        .. code-block:: yaml

            enable-l2-announcements: true
            kube-proxy-replacement: true
            l2-announcements-lease-duration: 3s
            l2-announcements-renew-deadline: 1s
            l2-announcements-retry-period: 200ms
            k8s-client-qps: {QPS}
            k8s-client-burst: {BURST}

There is a trade-off between fast failure detection and CPU + network usage. Each
service incurs a CPU and network overhead, so clusters with smaller amounts of
services can more easily afford faster failover times. Larger clusters might need
to increase parameters if the overhead is too high.

.. _sizing_client_rate_limit:

Sizing client rate limit
========================

The leader election process continually generates API traffic; the exact amount
depends on the configured lease duration, configured renew deadline, and amount of
services using the feature.

The default client rate limit is 5 QPS with allowed bursts up to 10 QPS.
This default limit is quickly reached when utilizing L2 announcements, and thus
users should size the client rate limit accordingly.

In a worst-case scenario, services are distributed unevenly, so we will assume a
peak load based on the renew deadline. In complex scenarios with multiple policies
over disjoint sets of nodes, the max QPS per node will be lower.

.. code-block:: text

    QPS = #services * (1 / leaseRenewDeadline)

    // example
    #services          = 65
    leaseRenewDeadline = 2s
    QPS = 65 * (1 / 2s) = 32.5 QPS

Setting the base QPS to around the calculated value should be sufficient, given
that in multi-node scenarios leases are spread around nodes, and non-holders
participating in the election have a lower QPS. The burst QPS should be slightly
higher to allow for bursts of traffic caused by other features which also use the
API server.

Failover
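The arithmetic above can be checked with a one-liner; the service count and renew deadline below are the example values from the formula, not recommendations:

```shell
# Peak QPS estimate: each service's lease is renewed once per renew deadline.
awk 'BEGIN {
  services = 65          # number of services using L2 announcements (example)
  renew_deadline = 2     # leaseRenewDeadline in seconds (example)
  printf "%.1f\n", services * (1 / renew_deadline)
}'
```

The result (32.5) suggests a base QPS of roughly that value for this example cluster.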
--------

When nodes participating in leader election detect that the lease holder did not
renew the lease for ``leaseDurationSeconds`` amount of seconds, they will ask the
API server to make them the new holder. The first request to be processed gets
through and the rest are denied.

When a node becomes the leader/holder, it will send out a gratuitous ARP reply over
all of the configured interfaces. Clients who accept these will update their ARP
tables at once, causing them to send traffic to the new leader/holder. Not all
clients accept gratuitous ARP replies, since they can be used for ARP spoofing.
Such clients might experience longer downtime than configured in the leases, since
they will only re-query via ARP when the TTL in their internal tables has been
reached.

Troubleshooting
###############

This section is a step-by-step guide on how to troubleshoot L2 Announcements,
hopefully solving your issue or narrowing it down to a specific area.

The first thing we need to do is to check that the feature is enabled, kube proxy
replacement is active, and optionally that external IPs are enabled.

.. code-block:: shell-session

    $ kubectl -n kube-system exec ds/cilium -- cilium-dbg config --all | grep EnableL2Announcements
    EnableL2Announcements             : true
    $ kubectl -n kube-system exec ds/cilium -- cilium-dbg config --all | grep KubeProxyReplacement
    KubeProxyReplacement              : true
    $ kubectl -n kube-system exec ds/cilium -- cilium-dbg config --all | grep EnableExternalIPs
    EnableExternalIPs                 : true

If ``EnableL2Announcements`` or ``KubeProxyReplacement`` indicates ``false``, make
sure to enable the correct settings and deploy the Helm chart, see
:ref:`l2_announcements_settings`.
``EnableExternalIPs`` should be set to ``true`` if you intend to use external IPs. Next, ensure you have at least one policy configured, L2 announcements will not work without a policy. .. code-block:: shell-session $ kubectl get CiliumL2AnnouncementPolicy NAME AGE policy1 6m16s L2 announcements should now create a lease for every service matched by the policy. We can check the leases like so: .. code-block:: shell-session $ kubectl -n kube-system get lease | grep "cilium-l2announce" cilium-l2announce-default-service-red kind-worker 34s If the output is empty, then the policy is not correctly configured or the agent is not running correctly. Check the logs of the agent for error messages: .. code-block:: shell-session $ kubectl -n kube-system logs ds/cilium | grep "l2" A common error is that the agent is not able to create leases. .. code-block:: shell-session $ kubectl -n kube-system logs ds/cilium | grep "error" time="2024-06-25T12:01:43Z" level=error msg="error retrieving resource lock kube-system/cilium-l2announce-default-service-red: leases.coordination.k8s.io \"cilium-l2announce-default-service-red\" is forbidden: User \"system:serviceaccount:kube-system:cilium\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-system\"" subsys=klog This can happen if the cluster role of the agent is not correct. This tends to happen when L2 announcements is enabled without using the helm chart. Redeploy the helm chart or manually update the cluster role, by running ``kubectl edit clusterrole cilium`` and adding the following block to the rules: .. code-block:: yaml - apiGroups: - coordination.k8s.io resources: - leases verbs: - create - get - update - list - delete Another common error is that the configured client rate limit is too low. This can be seen in the logs as well: .. 
code-block:: shell-session $ kubectl -n kube-system logs ds/cilium | grep "l2" 2023-07-04T14:59:51.959400310Z level=info msg="Waited for 1.395439596s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-l2announce-default-example" subsys=klog 2023-07-04T15:00:12.159409007Z level=info msg="Waited for 1.398748976s due
.. code-block:: shell-session

    $ kubectl -n kube-system logs ds/cilium | grep "l2"
    2023-07-04T14:59:51.959400310Z level=info msg="Waited for 1.395439596s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-l2announce-default-example" subsys=klog
    2023-07-04T15:00:12.159409007Z level=info msg="Waited for 1.398748976s due to client-side throttling, not priority and fairness, request: PUT:https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-l2announce-default-example" subsys=klog

These logs are associated with intermittent failures to renew the lease, connection
issues, and/or frequent leader changes. See :ref:`sizing_client_rate_limit` for
more information on how to size the client rate limit.

If you find a different L2 related error, please open a GitHub issue with the error
message and the steps you took to get there.

Assuming the leases are created, the next step is to check the agent internal
state. Pick a service which isn't working and inspect its lease. Take the holder
name and find the cilium agent pod for the holder node. Finally, take the name of
the cilium agent pod and inspect the l2-announce state:

.. code-block:: shell-session

    $ kubectl -n kube-system get lease cilium-l2announce-default-service-red
    NAME                                    HOLDER   AGE
    cilium-l2announce-default-service-red            20m
    $ kubectl -n kube-system get pod -l 'app.kubernetes.io/name=cilium-agent' -o wide | grep
    1/1     Running   0     35m   172.19.0.3   kind-worker
    $ kubectl -n kube-system exec pod/ -- cilium-dbg shell -- db/show l2-announce
    # IP          NetworkInterface
    10.0.10.0     eth0

The l2-announce state should contain the IP of the service and the network
interface it is announced on. Suppose the lease is present but its IP is not in
the l2-announce state, or an entry for a given network device is missing.
Double check that the device selector in the policy matches the desired network device (values are regular expressions). If the filter seems correct or isn't specified, inspect the known devices: .. code-block:: shell-session $ kubectl -n kube-system exec ds/cilium -- cilium-dbg shell -- db/show devices Name Index Selected Type MTU HWAddr Flags Addresses lxc5d23398605f6 10 false veth 1500 b6:ed:d8:d2:dd:ec up|broadcast|multicast fe80::b4ed:d8ff:fed2:ddec lxc3bf03c00d6e3 12 false veth 1500 8a:d1:0c:91:8a:d3 up|broadcast|multicast fe80::88d1:cff:fe91:8ad3 eth0 50 true veth 1500 02:42:ac:13:00:03 up|broadcast|multicast 172.19.0.3, fc00:c111::3, fe80::42:acff:fe13:3 lo 1 false device 65536 up|loopback 127.0.0.1, ::1 cilium\_net 2 false veth 1500 1a:a9:2f:4d:d3:3d up|broadcast|multicast fe80::18a9:2fff:fe4d:d33d cilium\_vxlan 4 false vxlan 1500 2a:05:26:8d:79:9c up|broadcast|multicast fe80::2805:26ff:fe8d:799c lxc611291f1ecbb 8 false veth 1500 7a:fb:ec:54:e2:5c up|broadcast|multicast fe80::78fb:ecff:fe54:e25c lxc\_health 16 false veth 1500 0a:94:bf:49:d5:50 up|broadcast|multicast fe80::894:bfff:fe49:d550 cilium\_host 3 false veth 1500 22:32:e2:80:21:34 up|broadcast|multicast 10.244.1.239, fd00:10:244:1::f58a Only devices with ``Selected`` set to ``true`` can be used for L2 announcements. Typically all physical devices with IPs assigned to them will be considered selected. The ``--devices`` flag or ``devices`` Helm option can be used to filter out devices. If your desired device is in the list but not selected, check the devices flag/option to see if it filters it out. Please open a Github issue if your desired device doesn't appear in the list or it isn't selected while you believe it should be. If the L2 state contains the IP and device combination but there are still connection issues, it's time to test ARP within the cluster. Pick a cilium agent pod other than the lease holder on the same L2 network. 
Then use the following command to send an ARP request to the service IP: .. code-block:: shell-session $ kubectl -n kube-system exec pod/cilium-z4ef7 -- sh -c 'apt update && apt install -y arping && arping -i ' [omitting apt output...] ARPING 10.0.10.0 58 bytes from 02:42:ac:13:00:03 (10.0.10.0): index=0 time=11.772 usec 58 bytes from 02:42:ac:13:00:03 (10.0.10.0): index=1 time=9.234 usec 58 bytes from 02:42:ac:13:00:03 (10.0.10.0): index=2 time=10.568 usec If the output is as above yet the service is still unreachable, from clients within the same L2 network,
the issue might be client-related.

If you expect the service to be reachable from outside the L2 network, and it is
not, check the ARP and routing tables of the gateway device.

If the ARP request fails (the output shows ``Timeout``), check the BPF map of the
cilium-agent with the lease:

.. code-block:: shell-session

    $ kubectl -n kube-system exec pod/cilium-vxz67 -- bpftool map dump pinned /sys/fs/bpf/tc/globals/cilium_l2_responder_v4
    [{
            "key": {
                "ip4": 655370,
                "ifindex": 50
            },
            "value": {
                "responses_sent": 20
            }
        }
    ]

The ``responses_sent`` field is incremented every time the datapath responds to an
ARP request. If the field is 0, then the ARP request doesn't make it to the node.
If the field is greater than 0, the issue is on the return path. In both cases,
inspect the network and the client.

It is still possible that the service is unreachable even though ARP requests are
answered. This can happen for a number of reasons, usually unrelated to L2
announcements, but rather other Cilium features. One common issue, however, is
caused by the usage of ``.Spec.ExternalTrafficPolicy: Local`` on services. This
setting normally tells a load balancer to only forward traffic to nodes with at
least 1 ready pod to avoid a second hop. Unfortunately, L2 announcements isn't
currently aware of this setting and will announce the service IP on all nodes
matching policies. If a node without a pod receives traffic, it will drop it. To
fix this, set the policy to ``.Spec.ExternalTrafficPolicy: Cluster``.

Please open a GitHub issue if none of the above steps helped you solve your issue.
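To apply the ``externalTrafficPolicy`` fix mentioned above, the field can be set directly in the Service manifest (a sketch; ``service-red`` and the ``example`` namespace are illustrative names):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-red        # illustrative service name
  namespace: example
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster   # avoids drops on announcing nodes without ready pods
  ports:
  - port: 2345
```

The same change can also be made in place with ``kubectl patch`` or ``kubectl edit`` on the live Service.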
.. _l2_pod_announcements:

L2 Pod Announcements
####################

L2 Pod Announcements announce Pod IP addresses on the L2 network using Gratuitous
ARP replies / Neighbor Discovery Advertisements. When enabled, the node transmits
Gratuitous ARP replies / NDP Advertisements for every locally created pod, on the
configured network interface(s). This feature is enabled separately from the above
L2 announcements feature.

To enable L2 Pod Announcements, set the following:

.. tabs::
    .. group-tab:: Helm

        .. cilium-helm-upgrade::
            :namespace: kube-system
            :extra-args: --reuse-values
            :set: l2podAnnouncements.enabled=true l2podAnnouncements.interface=eth0

    .. group-tab:: ConfigMap

        .. code-block:: yaml

            enable-l2-pod-announcements: true
            l2-pod-announcements-interface: eth0

The ``l2podAnnouncements.interface``/``l2-pod-announcements-interface`` option
allows you to specify a single interface to use to send announcements. If you would
like to send announcements on multiple interfaces, use the
``l2podAnnouncements.interfacePattern``/``l2-pod-announcements-interface-pattern``
option instead. This option takes a regex, matching on multiple interfaces.

.. tabs::
    .. group-tab:: Helm

        .. cilium-helm-upgrade::
            :namespace: kube-system
            :extra-args: --reuse-values
            :set: l2podAnnouncements.enabled=true l2podAnnouncements.interfacePattern='^(eth0|ens1)$'

    .. group-tab:: ConfigMap

        .. code-block:: yaml

            enable-l2-pod-announcements: true
            l2-pod-announcements-interface-pattern: "^(eth0|ens1)$"

.. note::
   Since this feature has no IPv6 support yet, only ARP messages are sent; no
   Unsolicited Neighbor Advertisements are sent.
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation. Please use the
    official rendered version released here: https://docs.cilium.io

.. _pod_mac_address:

************************************
Use a Specific MAC Address for a Pod
************************************

Some applications bind software licenses to network interface MAC addresses.
Cilium provides the ability to specify MAC addresses for pods at deploy time
instead of letting the operating system allocate them.

Configuring the address
#######################

Cilium will configure the MAC address for the primary interface inside a Pod if
you specify the MAC address in the ``cni.cilium.io/mac-address`` annotation before
deploying the Pod. This MAC address is isolated to the container, so it will not
collide with any other MAC addresses assigned to other Pods on the same node.

The MAC address must be specified **before** deploying the Pod. Annotate the pod
with ``cni.cilium.io/mac-address`` set to the desired MAC address. For example:

.. code-block:: yaml

    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        cni.cilium.io/mac-address: e2:9c:30:38:52:61
      labels:
        app: busybox
      name: busybox
      namespace: default

Deploy the Pod. Cilium will configure the MAC address on the first interface in
the Pod automatically. Check whether its MAC address is the specified one:
.. code-block:: shell-session

    $ kubectl exec -it busybox -- ip addr
    1: lo: mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    7: eth0@if8: mtu 1500 qdisc noqueue qlen 1000
        link/ether e2:9c:30:38:52:61 brd ff:ff:ff:ff:ff:ff
        inet 10.244.2.195/32 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::e46d:f4ff:fe4d:ebca/64 scope link
           valid_lft forever preferred_lft forever
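If you generate these annotations programmatically, it may help to validate the address first. A minimal sketch (this validator is an illustration, not part of Cilium; it only checks the colon-separated format and that the address is unicast):

```python
import re

def valid_unicast_mac(mac: str) -> bool:
    """Check colon-separated MAC format and that the address is unicast
    (least-significant bit of the first octet is 0)."""
    if not re.fullmatch(r"([0-9a-f]{2}:){5}[0-9a-f]{2}", mac.lower()):
        return False
    first_octet = int(mac.split(":")[0], 16)
    return first_octet & 0x01 == 0  # multicast bit must be clear

# The example address from the annotation above is a valid unicast MAC.
print(valid_unicast_mac("e2:9c:30:38:52:61"))  # → True
```

A check like this can run in an admission webhook or a templating script before the Pod is created.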
https://github.com/cilium/cilium/blob/main//Documentation/network/pod-mac-address.rst
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation. Please
    use the official rendered version released here: https://docs.cilium.io

**********************************
Using BIRD to run BGP (deprecated)
**********************************

BIRD is an open-source implementation for routing Internet Protocol
packets on Unix-like operating systems. If you are not familiar with it,
have a look at the `User's Guide`_ first.

.. _`User's Guide`: https://bird.network.cz/?get_doc&f=bird.html&v=20

BIRD provides a way to advertise routes using traditional networking
protocols to allow Cilium-managed endpoints to be accessible outside the
cluster. This guide assumes that Cilium is already deployed in the
cluster, and that the remaining piece is to ensure that the pod CIDR
ranges are externally routable.

BIRD maintains two release families at present, ``1.x`` and ``2.x``, and
the configuration format varies a lot between them. Unless you have
already deployed ``1.x``, we suggest using ``2.x`` directly, as ``2.x``
will be supported longer. The following examples use ``bird`` to denote
the ``bird2`` software and use the configuration format that ``bird2``
understands.

This guide shows how to install and configure bird on CentOS 7.x to make
it collaborate with Cilium. Installation and configuration on other
platforms should be very similar.

Install bird
############

.. code-block:: shell-session

    $ yum install -y bird2
    $ systemctl enable bird
    $ systemctl restart bird

Test the installation:

.. code-block:: shell-session

    $ birdc show route
    BIRD 2.0.6 ready.

    $ birdc              # interactive shell
    BIRD 2.0.6 ready.
    bird> show bfd sessions
    There is no BFD protocol running
    bird>
    bird> show protocols all
    Name       Proto      Table      State  Since         Info
    device1    Device     ---        up     10:53:40.147
    direct1    Direct     ---        down   10:53:40.147
      Channel ipv4
        State:          DOWN
        Input filter:   ACCEPT
        Output filter:  REJECT
    ...

Basic configuration
###################

It's hard to discuss bird configurations without considering specific
BGP schemes. However, BGP scheme design is beyond the scope of this
guide. If you are interested in this topic, refer to *BGP in the Data
Center* (O'Reilly, 2017) for a quick start.

In the following, we will restrict our BGP scenario as follows:

.. image:: images/bird_sample_topo.png
   :scale: 70%

* physical network: simple 3-tier hierarchical architecture
* nodes connect to the physical network via layer 2 switches
* each node's PodCIDR is announced to the physical network via ``bird``
* on each node, no route announcements are imported from the physical
  network

In this design, the BGP connections look like this:

.. image:: images/bird_sample_bgp.png
   :scale: 70%

This scheme is simple in that:

* core routers learn PodCIDRs from ``bird``, which makes the Pod IP
  addresses routable within the entire network.
* ``bird`` doesn't learn routes from core routers and other nodes, which
  keeps the kernel routing table of each node clean and small, avoiding
  performance issues.

In this scheme, each node just sends pod egress traffic to the node's
default gateway (the core routers), and lets the latter do the routing.

Below is a reference configuration for fulfilling the above purposes:

::

    $ cat /etc/bird.conf
    log syslog all;

    router id {{ NODE_IP }};

    protocol device {
            scan time 10;           # Scan interfaces every 10 seconds
    }

    # Disable automatically generating direct routes to all network interfaces.
    protocol direct {
            disabled;               # Disable by default
    }

    # Forbid synchronizing BIRD routing tables with the OS kernel.
    protocol kernel {
            ipv4 {                  # Connect protocol to IPv4 table by channel
                    import none;    # Import to table, default is import all
                    export none;    # Export to protocol. default is export none
            };
    }

    # Static IPv4 routes.
    protocol static {
            ipv4;
            route {{ POD_CIDR }} via "cilium_host";
    }
    # BGP peers
    protocol bgp uplink0 {
            description "BGP uplink 0";
            local {{ NODE_IP }} as {{ NODE_ASN }};
            neighbor {{ NEIGHBOR_0_IP }} as {{ NEIGHBOR_0_ASN }};
            password {{ NEIGHBOR_PWD }};

            ipv4 {
                    import filter {reject;};
                    export filter {accept;};
            };
    }

    protocol bgp uplink1 {
            description "BGP uplink 1";
            local {{ NODE_IP }} as {{ NODE_ASN }};
            neighbor {{ NEIGHBOR_1_IP }} as {{ NEIGHBOR_1_ASN }};
            password {{ NEIGHBOR_PWD }};

            ipv4 {
                    import filter {reject;};
                    export filter {accept;};
            };
    }

Save the above file as ``/etc/bird.conf``, and replace the placeholders
(the ``<...>`` values below) with your own:

.. code-block:: shell-session

    sed -i 's/{{ NODE_IP }}/<node_ip>/g'             /etc/bird.conf
    sed -i 's/{{ POD_CIDR }}/<pod_cidr>/g'           /etc/bird.conf
    sed -i 's/{{ NODE_ASN }}/<node_asn>/g'           /etc/bird.conf
    sed -i 's/{{ NEIGHBOR_0_IP }}/<neighbor_0_ip>/g' /etc/bird.conf
    sed -i 's/{{ NEIGHBOR_1_IP }}/<neighbor_1_ip>/g' /etc/bird.conf
    sed -i 's/{{ NEIGHBOR_0_ASN }}/<neighbor_0_asn>/g' /etc/bird.conf
    sed -i 's/{{ NEIGHBOR_1_ASN }}/<neighbor_1_asn>/g' /etc/bird.conf
    sed -i 's/{{ NEIGHBOR_PWD }}/<neighbor_pwd>/g'   /etc/bird.conf

Restart ``bird`` and check the logs:

.. code-block:: shell-session

    $ systemctl restart bird

    # check logs
    $ journalctl -u bird
    -- Logs begin at Sat 2020-02-22 16:11:44 CST, end at Mon 2020-02-24 18:58:35 CST. --
    Feb 24 18:58:24 node systemd[1]: Started BIRD Internet Routing Daemon.
    Feb 24 18:58:24 node systemd[1]: Starting BIRD Internet Routing Daemon...
    Feb 24 18:58:24 node bird[137410]: Started

Verify the changes; you should get something like this:

.. code-block:: shell-session

    $ birdc show route
    BIRD 2.0.6 ready.
    Table master4:
    10.5.48.0/24        unicast [static1 20:14:51.478] * (200)
            dev cilium_host

This indicates that the PodCIDR ``10.5.48.0/24`` on this node has been
successfully imported into BIRD.

.. code-block:: shell-session

    $ birdc show protocols all uplink0 | grep -A 3 -e "Description" -e "stats"
      Description:    BGP uplink 0
      BGP state:          Established
        Neighbor address: 10.4.1.7
        Neighbor AS:      65418
    --
      Route change stats:     received   rejected   filtered    ignored   accepted
        Import updates:              0          0          0          0          0
        Import withdraws:           10          0        ---         10          0
        Export updates:              1          0          0        ---          1

Here we see that the uplink0 BGP session is established and our PodCIDR
from above has been exported and accepted by the BGP peer.

Monitoring
##########

``bird_exporter`` can collect bird daemon state and export
Prometheus-style metrics. It also provides a simple Grafana dashboard,
but you can also create your own; for example, Trip.com's looks like
this:

.. image:: images/bird_dashboard.png

Advanced configurations
#######################

You may need some advanced configurations to make your BGP scheme
production-ready. This section lists some of these parameters, but we
will not dive into details; that is the responsibility of the BIRD
`User's Guide`_.

BFD
---

Bidirectional Forwarding Detection (BFD) is a detection protocol
designed to accelerate path failure detection.

**This feature also relies on the peer side's configuration.**

::

    protocol bfd {
            interface "{{ grains['node_mgnt_device'] }}" {
                    min rx interval 100 ms;
                    min tx interval 100 ms;
                    idle tx interval 300 ms;
                    multiplier 10;
                    password {{ NEIGHBOR_PWD }};
            };

            neighbor {{ NEIGHBOR_0_IP }};
            neighbor {{ NEIGHBOR_1_IP }};
    }

    protocol bgp uplink0 {
            ...
            bfd on;
    }

Verify; you should see something like this:

.. code-block:: shell-session

    $ birdc show bfd sessions
    BIRD 2.0.6 ready.
    bfd1:
    IP address                Interface  State      Since         Interval  Timeout
    10.5.40.2                 bond0      Up         20:14:51.479    0.300    0.000
    10.5.40.3                 bond0      Up         20:14:51.479    0.300    0.000
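The ``sed`` substitution above can be scripted so that a half-filled
configuration is caught before ``bird`` is restarted. Below is a minimal
sketch: all variable values are hypothetical examples (not taken from
this guide), it writes a tiny demo ``bird.conf`` in the current
directory rather than touching ``/etc/bird.conf``, and it fails loudly
if any ``{{ ... }}`` placeholder survives the substitution.

```shell
#!/bin/sh
# Sketch: fill bird.conf placeholders from shell variables, then verify
# that no "{{ ... }}" markers remain. All values here are examples.
set -eu

NODE_IP="10.5.48.11"
POD_CIDR="10.5.48.0/24"

conf="bird.conf"    # use /etc/bird.conf on a real node
printf 'router id {{ NODE_IP }};\nroute {{ POD_CIDR }} via "cilium_host";\n' > "$conf"

# One sed invocation per placeholder keeps the commands easy to audit.
sed -i \
    -e "s|{{ NODE_IP }}|$NODE_IP|g" \
    -e "s|{{ POD_CIDR }}|$POD_CIDR|g" \
    "$conf"

# Fail loudly if any placeholder survived.
if grep -q '{{' "$conf"; then
    echo "unsubstituted placeholders remain in $conf" >&2
    exit 1
fi
echo "OK"
```

The same check works unchanged for the full placeholder set; extend the
``sed`` expression list with one ``-e`` per additional ``{{ ... }}``
marker.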
ECMP
----

For some special purposes (e.g. L4LB), you may configure the same CIDR
on multiple nodes. In this case, you need to configure Equal-Cost
Multi-Path (ECMP) routing.

**This feature also relies on the peer side's configuration.**

::

    protocol kernel {
            ipv4 {                  # Connect protocol to IPv4 table by channel
                    import none;    # Import to table, default is import all
                    export none;    # Export to protocol. default is export none
            };

            # Configure ECMP
            merge paths yes limit {{ N }};
    }

See the user manual for more detailed information.

You need to check the ECMP correctness on the physical network (the core
routers in the above scenario):

.. code-block:: shell-session

    CORE01# show ip route 10.5.2.0
    IP Route Table for VRF "default"
    '*' denotes best ucast next-hop
    '**' denotes best mcast next-hop
    '[x/y]' denotes [preference/metric]
    '%' in via output denotes VRF

    10.5.2.0/24, ubest/mbest: 2/0
        *via 10.4.1.7, [200/0], 13w6d, bgp-65418, internal, tag 65418
        *via 10.4.1.8, [200/0], 12w4d, bgp-65418, internal, tag 65418

Graceful restart
----------------

**This feature also relies on the peer side's configuration.**

Add ``graceful restart;`` to each ``bgp`` section:

::

    protocol bgp uplink0 {
            ...
            graceful restart;
    }
https://github.com/cilium/cilium/blob/main//Documentation/network/bird.rst
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation. Please
    use the official rendered version released here: https://docs.cilium.io

.. _enable_vtep:

***********************************************
VXLAN Tunnel Endpoint (VTEP) Integration (beta)
***********************************************

.. include:: ../beta.rst

The VTEP integration allows third-party VTEP devices to send and receive
traffic to and from Cilium-managed pods directly using VXLAN. This
allows, for example, external load balancers like BIG-IP to load balance
traffic to Cilium-managed pods using VXLAN. This document explains how
to enable VTEP support and configure Cilium with VTEP endpoint IPs,
CIDRs, and MAC addresses.

.. note::

    This guide assumes that Cilium has been correctly installed in your
    Kubernetes cluster. Please see :ref:`k8s_quick_install` for more
    information. If unsure, run ``cilium status`` and validate that
    Cilium is up and running.

    This guide also assumes the VTEP devices have been configured with a
    VTEP endpoint IP, VTEP CIDRs, and VTEP MAC addresses (VTEP MAC). The
    VXLAN network identifier (VNI) *must* be configured as VNI ``2``,
    which represents traffic from the VTEP as the world identity. See
    :ref:`reserved_labels` for more details.

Enable VXLAN Tunnel Endpoint (VTEP) integration
===============================================

This feature is disabled by default. When enabling the VTEP integration,
you must also specify the IPs, CIDR ranges, and MACs for each VTEP
device as part of the configuration.

.. tabs::

    .. group-tab:: Helm

        If you installed Cilium via ``helm install``, you may enable
        VTEP support with the following command:

        .. cilium-helm-upgrade::
           :namespace: kube-system
           :extra-args: --reuse-values
           :set: vtep.enabled="true" vtep.endpoint="10.169.72.236 10.169.72.238" vtep.cidr="10.1.1.0/24 10.1.2.0/24" vtep.mask="255.255.255.0" vtep.mac="82:36:4c:98:2e:56 82:36:4c:98:2e:58"

    .. group-tab:: ConfigMap

        VTEP support can be enabled by setting the following options in
        the ``cilium-config`` ConfigMap:

        .. code-block:: yaml

            enable-vtep:   "true"
            vtep-endpoint: "10.169.72.236 10.169.72.238"
            vtep-cidr:     "10.1.1.0/24 10.1.2.0/24"
            vtep-mask:     "255.255.255.0"
            vtep-mac:      "82:36:4c:98:2e:56 82:36:4c:98:2e:58"

        Restart the Cilium daemonset:

        .. code-block:: bash

            kubectl -n $CILIUM_NAMESPACE rollout restart ds/cilium

How to test VXLAN Tunnel Endpoint (VTEP) Integration
====================================================

Start up a Linux VM with node network connectivity to a Cilium node. To
configure the Linux VM, you will need to be the ``root`` user or run the
commands below using ``sudo``.

::

                  Test VTEP Integration

     Node IP: 10.169.72.233
    +--------------------------+      VM IP: 10.169.72.236
    |                          |     +------------------+
    |        CiliumNode        |     |     Linux VM     |
    |                          |     |                  |
    |       +---------+        |     |                  |
    |       | busybox |        |     |                  |
    |       +--eth0---+        |  ens192<------>ens192  |
    |           |              |     |    |             |
    |         lxcxxx           |     +-----vxlan2-------+
    |           |              |
    +------+-----cilium_vxlan--+

.. code-block:: bash

    # Create a vxlan device and set the MAC address.
    ip link add vxlan2 type vxlan id 2 dstport 8472 local 10.169.72.236 dev ens192
    ip link set dev vxlan2 address 82:36:4c:98:2e:56
    ip link set vxlan2 up

    # Configure the VTEP with IP 10.1.1.236 to handle CIDR 10.1.1.0/24.
    ip addr add 10.1.1.236/24 dev vxlan2

    # Assume the Cilium podCIDR network is 10.0.0.0/16; add a route to 10.0.0.0/16.
    ip route add 10.0.0.0/16 dev vxlan2 proto kernel scope link src 10.1.1.236

    # Allow the Linux VM to send ARP broadcast requests to the Cilium node for
    # busybox pod ARP resolution through the vxlan2 device.
    bridge fdb append 00:00:00:00:00:00 dst 10.169.72.233 dev vxlan2

If you are managing multiple VTEPs, follow the above process for each
instance. Once the VTEPs are configured, you can configure Cilium to use
the MAC, IP, and CIDR ranges that you have configured on the VTEPs.
Follow the instructions in :ref:`enable_vtep`.

To test the VTEP network connectivity:

.. code-block:: bash

    # ping a Cilium-managed busybox pod IP, for example 10.0.1.1, from the Linux VM
    ping 10.0.1.1

Limitations
===========

* This feature does not work with IPsec encryption between
  Cilium-managed pods and VTEPs.
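When managing several VTEPs, it can help to generate the VM-side
commands from one parameter list instead of retyping them. The sketch
below prints the commands rather than executing them, so they can be
reviewed first; ``gen_vtep_cmds`` is a hypothetical helper, and the
values used are the example addresses from the diagram above.

```shell
#!/bin/sh
# Sketch: print the ip/bridge commands for one VTEP instead of running them.
# Arguments: vtep_mac vtep_ip/cidr underlay_dev local_ip cilium_node_ip
set -eu

gen_vtep_cmds() {
    mac=$1 addr=$2 dev=$3 local_ip=$4 node_ip=$5
    echo "ip link add vxlan2 type vxlan id 2 dstport 8472 local $local_ip dev $dev"
    echo "ip link set dev vxlan2 address $mac"
    echo "ip link set vxlan2 up"
    echo "ip addr add $addr dev vxlan2"
    echo "bridge fdb append 00:00:00:00:00:00 dst $node_ip dev vxlan2"
}

# Example values from the diagram above.
gen_vtep_cmds 82:36:4c:98:2e:56 10.1.1.236/24 ens192 10.169.72.236 10.169.72.233
```

Piping the output through ``sh`` (as root, once reviewed) applies the
configuration; running it per VTEP with the matching MAC/CIDR pair keeps
the per-instance setup consistent.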
https://github.com/cilium/cilium/blob/main//Documentation/network/vtep.rst
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation. Please
    use the official rendered version released here: https://docs.cilium.io

.. _enable_multicast:

**********************************
Multicast Support in Cilium (Beta)
**********************************

.. include:: ../beta.rst

The multicast capability allows user applications to distribute data
feeds to multiple consumers in the Kubernetes cluster. This container
network multicast technology, based on eBPF, focuses on solving the
problem of efficient multicast transmission in the container network and
provides support for multiple multicast protocols.

This document explains how to enable multicast support and configure
Cilium and CiliumNode with multicast group IP addresses and subscribers.

Prerequisites
=============

This guide assumes that Cilium has been correctly installed in your
Kubernetes cluster. Please see :ref:`k8s_quick_install` for more
information. If unsure, run ``cilium status`` and validate that Cilium
is up and running.

This guide also assumes Cilium is configured in vxlan mode, which is
required when using the multicast capability.

Multicast only works on kernels >= 5.10 for AMD64, and on kernels >= 6.0
for AArch64.

Enable Multicast Feature
========================

Multicast support can be enabled by updating the ``cilium-config``
ConfigMap as follows:

.. code-block:: shell-session

    $ cilium config set multicast-enabled true
    ✨ Patching ConfigMap cilium-config with multicast-enabled=true...
    ♻️  Restarted Cilium pods

Configure Multicast and Subscriber IPs
======================================

To use multicast with Cilium, we need to configure multicast group IP
addresses and the subscriber list based on the application requirements.
This is done by running the ``cilium-dbg`` command in each
``cilium-agent`` pod. Then, multicast subscriber pods can send out IGMP
joins and multicast sender pods can start sending the multicast stream.
As an example, the following guide uses the ``239.255.0.1`` multicast
group address.

Get all CiliumNode IP addresses to be set as multicast subscribers:

.. code-block:: shell-session

    $ kubectl get ciliumnodes.cilium.io
    NAME                 CILIUMINTERNALIP   INTERNALIP   AGE
    kind-control-plane   10.244.0.72        172.19.0.2   16m
    kind-worker          10.244.1.86        172.19.0.3   16m

To set the multicast group IP address, update the multicast BPF maps in
each ``cilium-agent``:

.. code-block:: shell-session

    ### add multicast IP address
    $ cilium-dbg bpf multicast group add 239.255.0.1

    ### check multicast IP address
    $ cilium-dbg bpf multicast group list
    Group Address
    239.255.0.1

Then, set the subscriber IP addresses in each ``cilium-agent``:

.. code-block:: shell-session

    ### cilium-agent on kind-control-plane
    $ cilium-dbg bpf multicast subscriber add 239.255.0.1 10.244.1.86
    $ cilium-dbg bpf multicast subscriber list all
    Group         Subscriber    Type
    239.255.0.1   10.244.1.86   Remote Node

    ### cilium-agent on kind-worker
    $ cilium-dbg bpf multicast subscriber add 239.255.0.1 10.244.0.72

.. note::

    The subscriber IP addresses added on each node are the CiliumNode IP
    addresses of the *other* nodes, not the node's own address.

To make all nodes join a specified multicast group, use the ``cilium
multicast`` command. You can also get information about multicast groups
and subscribers cluster-wide.

.. code-block:: shell-session

    ### make all nodes join the specified multicast group
    $ cilium multicast add --group-ip 239.255.0.1

    ### confirm the multicast groups and subscribers
    $ cilium multicast list subscriber --all
    Node               Group         Subscriber     Type
    cl-worker          239.255.0.1   10.244.0.196   Remote Node
    cl-control-plane   239.255.0.1   10.244.1.122   Remote Node

When you want to remove multicast group IP addresses and the subscriber
list, run the following commands in the ``cilium-agent``:

.. code-block:: shell-session

    $ cilium-dbg bpf multicast group delete 239.255.0.1
    $ cilium-dbg bpf multicast subscriber delete 239.255.0.1 10.244.0.72

Limitations
===========

* The operation needs to be done on each CiliumNode that uses the
  multicast feature.
* This feature does not work with IPsec encryption between
  Cilium-managed pods.
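Because each node must subscribe every *other* node's CiliumInternalIP,
the per-node ``cilium-dbg`` invocations can be generated from one node
list. The following is a minimal sketch (``gen_subscriber_cmds`` is a
hypothetical helper; node names and IPs are the example values from
above) that prints the commands to run on each agent rather than
executing them:

```shell
#!/bin/sh
# Sketch: for each node, print the subscriber-add command for every
# *other* node's CiliumInternalIP (a node never subscribes itself).
set -eu

GROUP="239.255.0.1"
NODES="kind-control-plane:10.244.0.72 kind-worker:10.244.1.86"

gen_subscriber_cmds() {
    for node in $NODES; do
        name=${node%%:*}
        for peer in $NODES; do
            # Skip the node's own entry.
            [ "$peer" = "$node" ] && continue
            echo "# on $name: cilium-dbg bpf multicast subscriber add $GROUP ${peer##*:}"
        done
    done
}

gen_subscriber_cmds
```

Each printed line identifies the agent it belongs to; on a real cluster
you would run the command inside the matching ``cilium-agent`` pod.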
https://github.com/cilium/cilium/blob/main//Documentation/network/multicast.rst
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation. Please
    use the official rendered version released here: https://docs.cilium.io

.. _kube-router:

*****************************************
Using Kube-Router to Run BGP (deprecated)
*****************************************

This guide explains how to configure Cilium and kube-router to
co-operate, using kube-router for BGP peering and route propagation and
Cilium for policy enforcement and load-balancing.

.. include:: ../beta.rst

Deploy kube-router
##################

Download the kube-router DaemonSet template:

.. code-block:: shell-session

    curl -LO https://raw.githubusercontent.com/cloudnativelabs/kube-router/v1.2/daemonset/generic-kuberouter-only-advertise-routes.yaml

Open the file ``generic-kuberouter-only-advertise-routes.yaml`` and edit
the ``args:`` section. The following arguments are **required** to be
set to exactly these values:

.. code-block:: yaml

    - "--run-router=true"
    - "--run-firewall=false"
    - "--run-service-proxy=false"
    - "--enable-cni=false"
    - "--enable-pod-egress=false"

The following arguments are **optional** and may be set according to
your needs. For the purpose of keeping this guide simple, the following
values are used, which require the least preparation in your cluster.
Please see the kube-router user guide for more information.

.. code-block:: yaml

    - "--enable-ibgp=true"
    - "--enable-overlay=true"
    - "--advertise-cluster-ip=true"
    - "--advertise-external-ip=true"
    - "--advertise-loadbalancer-ip=true"

The following arguments are **optional** and should be set if you want
BGP peering with an external router. This is useful if you want
externally routable Kubernetes Pod and Service IPs. Note that the values
used here should be changed to whatever IPs and ASNs are configured on
your external router.

.. code-block:: yaml

    - "--cluster-asn=65001"
    - "--peer-router-ips=10.0.0.1,10.0.2"
    - "--peer-router-asns=65000,65000"

Apply the DaemonSet file to deploy kube-router and verify it has come up
correctly:

.. code-block:: shell-session

    $ kubectl apply -f generic-kuberouter-only-advertise-routes.yaml
    $ kubectl -n kube-system get pods -l k8s-app=kube-router
    NAME                READY     STATUS    RESTARTS   AGE
    kube-router-n6fv8   1/1       Running   0          10m
    kube-router-nj4vs   1/1       Running   0          10m
    kube-router-xqqwc   1/1       Running   0          10m
    kube-router-xsmd4   1/1       Running   0          10m

Deploy Cilium
#############

In order for routing to be delegated to kube-router,
tunneling/encapsulation must be disabled. This is done by setting
``routing-mode: native`` in the ConfigMap ``cilium-config`` or by
adjusting the DaemonSet to run the ``cilium-agent`` with the argument
``--routing-mode=native``.

Moreover, in the same ConfigMap, we must explicitly set ``ipam:
kubernetes``, since kube-router pulls the pod CIDRs directly from
Kubernetes:

.. code-block:: yaml

    # Encapsulation mode for communication between nodes
    # Possible values:
    #   - disabled
    #   - vxlan (default)
    #   - geneve
    routing-mode: "native"
    ipam: "kubernetes"

You can then install Cilium according to the instructions in section
:ref:`ds_deploy`.

Ensure that Cilium is up and running:

.. code-block:: shell-session

    $ kubectl -n kube-system get pods -l k8s-app=cilium
    NAME           READY     STATUS    RESTARTS   AGE
    cilium-fhpk2   1/1       Running   0          45m
    cilium-jh6kc   1/1       Running   0          44m
    cilium-rlx6n   1/1       Running   0          44m
    cilium-x5x9z   1/1       Running   0          45m

Verify Installation
###################

Verify that kube-router has installed routes:

.. code-block:: shell-session

    $ kubectl -n kube-system exec ds/cilium -- ip route list scope global
    default via 172.0.32.1 dev eth0 proto dhcp src 172.0.50.227 metric 1024
    10.2.0.0/24 via 10.2.0.172 dev cilium_host src 10.2.0.172
    10.2.1.0/24 via 172.0.51.175 dev eth0 proto 17
    10.2.2.0/24 dev tun-172011760 proto 17 src 172.0.50.227
    10.2.3.0/24 dev tun-1720186231 proto 17 src 172.0.50.227

In the above example, we see three categories of routes that have been
installed:

* *Local PodCIDR:* This route points to all pods running on the host and
  makes these pods available to

  * ``10.2.0.0/24 via 10.2.0.172 dev cilium_host src 10.2.0.172``

* *BGP route:* This type of route is installed if kube-router determines
  that the remote PodCIDR can be reached via a router known to the local
  host. It will instruct pod to pod traffic to be forwarded directly to
  that router without requiring any encapsulation.

  * ``10.2.1.0/24 via 172.0.51.175 dev eth0 proto 17``

* *IPIP tunnel route:* If no direct routing path exists, kube-router
  will fall back to using
  an overlay and establish an IPIP tunnel between the nodes.

  * ``10.2.2.0/24 dev tun-172011760 proto 17 src 172.0.50.227``
  * ``10.2.3.0/24 dev tun-1720186231 proto 17 src 172.0.50.227``

.. include:: ../installation/k8s-install-validate.rst
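The three route categories can be told apart mechanically from the
``ip route`` output. A minimal sketch (``classify`` is a hypothetical
helper; it reads the example output from above out of a local file
rather than from a pod, and matches ``cilium_host`` and ``tun-`` device
routes before falling back to the generic ``proto 17`` case):

```shell
#!/bin/sh
# Sketch: classify kube-router-installed routes by category.
set -eu

cat > routes.txt <<'EOF'
default via 172.0.32.1 dev eth0 proto dhcp src 172.0.50.227 metric 1024
10.2.0.0/24 via 10.2.0.172 dev cilium_host src 10.2.0.172
10.2.1.0/24 via 172.0.51.175 dev eth0 proto 17
10.2.2.0/24 dev tun-172011760 proto 17 src 172.0.50.227
EOF

classify() {
    # Order matters: tunnel routes also carry "proto 17", so they must
    # be matched before the generic BGP case.
    awk '
    /dev cilium_host/ { print $1, "-> local PodCIDR"; next }
    /dev tun-/        { print $1, "-> IPIP tunnel route"; next }
    /proto 17/        { print $1, "-> BGP route"; next }
    ' "$1"
}

classify routes.txt
```

On a real cluster, the input would come from ``kubectl -n kube-system
exec ds/cilium -- ip route list scope global`` instead of the file.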
https://github.com/cilium/cilium/blob/main//Documentation/network/kube-router.rst
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation. Please
    use the official rendered version released here: https://docs.cilium.io

.. _gs_ingress_tls:

************************************
Ingress Example with TLS Termination
************************************

This example builds on the HTTP and gRPC ingress examples, adding TLS
termination.

.. literalinclude:: ../../../examples/kubernetes/servicemesh/tls-ingress.yaml
   :language: yaml

.. include:: tls-cert.rst

Deploy the Ingress
==================

The Ingress configuration for this demo provides the same routing as
those demos but with the addition of TLS termination.

.. tabs::

    .. group-tab:: Self-signed Certificate

        .. parsed-literal::

            $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/tls-ingress.yaml

    .. group-tab:: cert-manager

        .. parsed-literal::

            $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/tls-ingress.yaml

        To tell cert-manager that this Ingress needs a certificate,
        annotate the Ingress with the name of the CA issuer we
        previously created:

        .. code-block:: shell-session

            $ kubectl annotate ingress tls-ingress cert-manager.io/issuer=ca-issuer

        This creates a Certificate object along with a Secret containing
        the TLS certificate.

        .. code-block:: shell-session

            $ kubectl get certificate,secret demo-cert
            NAME                                    READY   SECRET      AGE
            certificate.cert-manager.io/demo-cert   True    demo-cert   33m
            NAME               TYPE                DATA   AGE
            secret/demo-cert   kubernetes.io/tls   3      33m

The external IP address will show up in the Ingress:

.. code-block:: shell-session

    $ kubectl get ingress
    NAME          CLASS    HOSTS                                            ADDRESS        PORTS     AGE
    tls-ingress   cilium   hipstershop.cilium.rocks,bookinfo.cilium.rocks   35.195.24.75   80, 443   6m5s

In this Ingress configuration, the host names ``hipstershop.cilium.rocks``
and ``bookinfo.cilium.rocks`` are specified in the path routing rules, so
the client needs to specify which host it wants to access. This can be
achieved by editing your local ``/etc/hosts`` file. (You will almost
certainly need to be superuser to edit this file.) Add entries using the
IP address assigned to the ingress service, so that your file looks
something like this:

.. code-block:: shell-session

    $ sudo perl -ni -e 'print if !/\.cilium\.rocks$/d' /etc/hosts; sudo tee -a /etc/hosts \
      <<<"$(kubectl get ing tls-ingress -o=jsonpath='{.status.loadBalancer.ingress[0].ip}') bookinfo.cilium.rocks hipstershop.cilium.rocks"

Make HTTPS Requests
===================

.. tabs::

    .. group-tab:: Self-signed Certificate

        By specifying the CA's certificate on a curl request, you can
        say that you trust certificates signed by that CA.

        .. code-block:: shell-session

            $ curl --cacert minica.pem -v https://bookinfo.cilium.rocks/details/1

        If you prefer, instead of supplying the CA you can specify
        ``-k`` to tell the curl client not to validate the server's
        certificate. Without either, you will get an error that the
        certificate was signed by an unknown authority.

        By specifying ``-v`` on the curl request, you can see that the
        TLS handshake took place successfully.

        Similarly, you can specify the CA on a gRPC request like this:

        .. code-block:: shell-session

            # Download the demo.proto file if you have not done so before
            $ curl -o demo.proto https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/main/protos/demo.proto
            $ grpcurl -proto ./demo.proto -cacert minica.pem hipstershop.cilium.rocks:443 hipstershop.ProductCatalogService/ListProducts

    .. group-tab:: cert-manager

        .. code-block:: shell-session

            $ curl https://bookinfo.cilium.rocks/details/1

        Similarly, you can specify the CA on a gRPC request like this:

        .. code-block:: shell-session

            grpcurl -proto ./demo.proto -cacert minica.pem hipstershop.cilium.rocks:443 hipstershop.ProductCatalogService/ListProducts

        .. note::

            See the gRPC Ingress example if you don't already have the
            ``demo.proto`` file downloaded.

You can also visit https://bookinfo.cilium.rocks in your browser. The
browser might warn you that the certificate authority is unknown, but if
you proceed past this, you should see the bookstore application home
page.

Note that requests will time out if you don't specify ``https://``.
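The ``perl``/``tee`` one-liner above is compact but hard to audit. An
equivalent two-step version using ``sed`` can be easier to reason about:
it drops any stale ``*.cilium.rocks`` entries and then appends the
current ones, so re-running it is safe. This is a sketch only; it
operates on a local ``hosts`` file with a hard-coded example IP instead
of ``/etc/hosts`` and the live ingress address.

```shell
#!/bin/sh
# Sketch: rebuild the demo /etc/hosts entries idempotently.
set -eu

HOSTS_FILE="hosts"            # /etc/hosts on a real system (needs sudo)
INGRESS_IP="35.195.24.75"     # example address from the output above

# Seed the file with one unrelated entry and one stale demo entry.
printf '127.0.0.1 localhost\n1.2.3.4 bookinfo.cilium.rocks\n' > "$HOSTS_FILE"

# Drop any stale *.cilium.rocks entries, then append the current ones.
sed -i '/\.cilium\.rocks$/d' "$HOSTS_FILE"
printf '%s bookinfo.cilium.rocks hipstershop.cilium.rocks\n' "$INGRESS_IP" >> "$HOSTS_FILE"
```

On a real system, replace ``INGRESS_IP`` with the output of ``kubectl
get ing tls-ingress -o=jsonpath='{.status.loadBalancer.ingress[0].ip}'``.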
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/tls-termination.rst
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation. Please
    use the official rendered version released here: https://docs.cilium.io

.. _servicemesh_root:

************
Service Mesh
************

What is Service Mesh?
#####################

With the introduction of distributed applications, additional
visibility, connectivity, and security requirements have surfaced.
Application components communicate over untrusted networks across cloud
and premises boundaries, load-balancing is required to understand
application protocols, resiliency is becoming crucial, and security must
evolve to a model where sender and receiver can authenticate each
other's identity.

In the early days of distributed applications, these requirements were
resolved by directly embedding the required logic into the applications.
A service mesh extracts these features out of the application and offers
them as part of the infrastructure for all applications to use, and thus
no longer requires changing each application.

Looking at the feature set of a service mesh today, it can be summarized
as follows:

- **Resilient Connectivity**: Service-to-service communication must be
  possible across boundaries such as clouds, clusters, and premises.
  Communication must be resilient and fault-tolerant.
- **L7 Traffic Management**: Load balancing, rate limiting, and
  resiliency must be L7-aware (HTTP, REST, gRPC, WebSocket, …).
- **Identity-based Security**: Relying on network identifiers to achieve
  security is no longer sufficient; both the sending and receiving
  services must be able to authenticate each other based on identities
  instead of network identifiers.
- **Observability & Tracing**: Observability in the form of tracing and
  metrics is critical to understanding, monitoring, and troubleshooting
  application stability, performance, and availability.
- **Transparency**: The functionality must be available to applications
  in a transparent manner, i.e. without requiring changes to application
  code.

.. admonition:: Video
   :class: attention

   If you'd like a video explanation of Cilium's Service Mesh
   implementation, check out eCHO episode 27: eBPF-enabled Service Mesh
   and eCHO episode 100: Next-gen mutual authentication in Cilium.

Why Cilium Service Mesh?
########################

Since its early days, Cilium has been well aligned with the service mesh
concept by operating at both the networking and the application protocol
layer to provide connectivity, load-balancing, security, and
observability. For all network processing, including protocols such as
IP, TCP, and UDP, Cilium uses eBPF as a highly efficient in-kernel
datapath. Protocols at the application layer such as HTTP, Kafka, gRPC,
and DNS are parsed using a proxy such as Envoy.

.. toctree::
   :maxdepth: 3
   :glob:

   ingress
   gateway-api/gateway-api
   gateway-api/gamma
   ingress-to-gateway/ingress-to-gateway
   istio
   mutual-authentication/mutual-authentication
   l7-traffic-management
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/index.rst
main
cilium
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_gs\_ingress\_and\_network\_policy: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Ingress and Network Policy Example \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* This example uses the same configuration as the base HTTP Ingress example, using the ``bookinfo`` demo microservices app from the Istio project, and then adds a CiliumNetworkPolicy on top. .. include:: demo-app.rst .. \_gs\_basic\_ingress\_policy: .. include:: basic-ingress.rst Confirm that your Ingress is working: .. code-block:: shell-session $ HTTP\_INGRESS=$(kubectl get ingress basic-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}') $ curl --fail -s http://"$HTTP\_INGRESS"/details/1 | jq { "id": 1, "author": "William Shakespeare", "year": 1595, "type": "paperback", "pages": 200, "publisher": "PublisherA", "language": "English", "ISBN-10": "1234567890", "ISBN-13": "123-1234567890" } .. include:: external-ingress-policy.rst .. include:: default-deny-ingress-policy.rst
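The ``jsonpath`` expression used above simply walks the Ingress status object returned by the API server. As a rough illustration of what it extracts, here is the same lookup in Python against a hypothetical status document (the IP address below is made up, not output from a real cluster):

```python
import json

# Hypothetical Ingress object, shaped like `kubectl get ingress -o json` output.
ingress = json.loads("""
{
  "status": {
    "loadBalancer": {
      "ingress": [
        {"ip": "172.18.255.200"}
      ]
    }
  }
}
""")

# Equivalent of jsonpath='{.status.loadBalancer.ingress[0].ip}'
http_ingress = ingress["status"]["loadBalancer"]["ingress"][0]["ip"]
print(http_ingress)  # the external address to curl against
```

If the load balancer has not assigned an address yet, the ``ingress`` list is empty and the lookup fails, which is why the docs have you confirm the Ingress is working first.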
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/ingress-and-network-policy.rst
Deploy the Demo App =================== .. code-block:: shell-session $ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.11/samples/bookinfo/platform/kube/bookinfo.yaml This deploys only the demo app; it does not add any Istio components. You can confirm that with Cilium Service Mesh there is no Envoy sidecar created alongside each of the demo app microservices. .. code-block:: shell-session $ kubectl get pods NAME READY STATUS RESTARTS AGE details-v1-5498c86cf5-kjzkj 1/1 Running 0 2m39s productpage-v1-65b75f6885-ff59g 1/1 Running 0 2m39s ratings-v1-b477cf6cf-kv7bh 1/1 Running 0 2m39s reviews-v1-79d546878f-r5bjz 1/1 Running 0 2m39s reviews-v2-548c57f459-pld2f 1/1 Running 0 2m39s reviews-v3-6dd79655b9-nhrnh 1/1 Running 0 2m39s .. Note:: With the sidecar implementation the output would show 2/2 READY: one container for the microservice and one for the Envoy sidecar.
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/demo-app.rst
.. Note:: These Envoy resources are not validated by Kubernetes at all, so any errors in the Envoy resources will only be seen by the Cilium Agent observing these CRDs. This means that ``kubectl apply`` will report success, while parsing and/or installing the resources for the node-local Envoy instance may have failed. Currently the only way of verifying this is by observing Cilium Agent logs for errors and warnings. Additionally, Cilium Agent will print warning logs for any conflicting Envoy resources in the cluster. .. Note:: The Cilium Ingress controller configures the required Envoy resources under the hood. If you are creating Envoy resources explicitly, check the Cilium Agent logs to make sure there is no conflict.
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/warning.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_gs\_ingress\_http: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Ingress HTTP Example \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* The example ingress configuration routes traffic to backend services from the ``bookinfo`` demo microservices app from the Istio project. .. include:: demo-app.rst .. \_gs\_basic\_ingress: .. include:: basic-ingress.rst Make HTTP Requests ================== Check (with ``curl`` or in your browser) that you can make HTTP requests to that external address. The ``/`` path takes you to the home page for the bookinfo application. From outside the cluster you can also make requests directly to the ``details`` service using the path ``/details``. But you can't directly access other URL paths that weren't defined in ``basic-ingress.yaml``. For example, a request to ``/details/1`` returns JSON data, but a request to ``/ratings`` returns a 404 error. .. code-block:: shell-session $ HTTP\_INGRESS=$(kubectl get ingress basic-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}') $ curl --fail -s http://"$HTTP\_INGRESS"/details/1 | jq { "id": 1, "author": "William Shakespeare", "year": 1595, "type": "paperback", "pages": 200, "publisher": "PublisherA", "language": "English", "ISBN-10": "1234567890", "ISBN-13": "123-1234567890" }
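The routing behaviour described above boils down to longest-prefix matching on the request path. A toy sketch, assuming the rule set implied by the prose (``/`` routed to ``productpage``, ``/details`` routed to the ``details`` service) rather than the actual contents of ``basic-ingress.yaml``:

```python
# Toy model of Ingress prefix routing: the longest matching prefix wins.
# The rule set is an assumption based on the prose above; the real routing
# is performed by Envoy from the Ingress spec.
RULES = {"/": "productpage", "/details": "details"}

def route(path: str) -> str:
    prefix = max((p for p in RULES if path.startswith(p)), key=len)
    return RULES[prefix]

print(route("/details/1"))  # routed to the details service
print(route("/"))           # routed to productpage
# /ratings matches only the "/" prefix, so it reaches productpage, which
# itself answers 404 for that path -- matching the behaviour described above.
print(route("/ratings"))
```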
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/http.rst
Deploy the Echo App =================== We will use a deployment made of echo servers. The application will reply to the client and, in the body of the reply, will include information about the Pod and Node receiving the original request. We will use this information to illustrate how the traffic is manipulated by the Gateway. .. parsed-literal:: $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/gateway/echo.yaml Verify the Pods are running as expected. .. code-block:: shell-session $ kubectl get pods NAME READY STATUS RESTARTS AGE echo-1-7d88f779b-m6r46 1/1 Running 0 21s echo-2-5bfb6668b4-n7llh 1/1 Running 0 21s
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/echo-app.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_gs\_envoy\_circuit\_breaker: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* L7 Circuit Breaking \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Cilium Service Mesh defines a ``CiliumClusterwideEnvoyConfig`` CRD which allows users to set the configuration of the Envoy component built into Cilium agents. Circuit breaking is an important pattern for creating resilient microservice applications. Circuit breaking allows you to write applications that limit the impact of failures, latency spikes, and other undesirable effects of network peculiarities. You will configure Circuit breaking rules with ``CiliumClusterwideEnvoyConfig`` and then test the configuration by intentionally “tripping” the circuit breaker in this example. Deploy Test Applications ======================== .. parsed-literal:: $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/envoy/test-application-proxy-circuit-breaker.yaml The test workloads consist of: - One client Deployment, ``fortio-deploy`` - One Service, ``echo-service`` View information about these Pods: .. code-block:: shell-session $ kubectl get pods --show-labels -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS echo-service-59557f5857-xh84s 2/2 Running 0 7m37s 10.0.0.125 cilium-control-plane kind=echo,name=echo-service,other=echo,pod-template-hash=59557f5857 fortio-deploy-687945c6dc-6qnh4 1/1 Running 0 7m37s 10.0.0.109 cilium-control-plane app=fortio,pod-template-hash=687945c6dc Configuring Envoy Circuit Breaker ================================= Apply the ``envoy-circuit-breaker.yaml`` file, which defines a ``CiliumClusterwideEnvoyConfig``. .. parsed-literal:: $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/envoy/envoy-circuit-breaker.yaml .. 
include:: warning.rst Verify the ``CiliumClusterwideEnvoyConfig`` was created correctly. .. code-block:: shell-session $ kubectl get ccec envoy-circuit-breaker -oyaml apiVersion: cilium.io/v2 kind: CiliumClusterwideEnvoyConfig ... resources: - "@type": type.googleapis.com/envoy.config.cluster.v3.Cluster name: "default/echo-service" connect\_timeout: 5s lb\_policy: ROUND\_ROBIN type: EDS circuit\_breakers: thresholds: - priority: "DEFAULT" max\_requests: 2 max\_pending\_requests: 1 outlier\_detection: split\_external\_local\_origin\_errors: true consecutive\_local\_origin\_failure: 2 services: - name: echo-service namespace: default In the ``CiliumClusterwideEnvoyConfig`` settings, you specified ``max\_pending\_requests: 1`` and ``max\_requests: 2``. These settings mean that if you exceed one pending request or two concurrent requests, Envoy opens the circuit and fails further requests and connections. Tripping Envoy Circuit Breaker ============================== Set an environment variable with the name of the fortio Pod: .. code-block:: shell-session $ export FORTIO\_POD=$(kubectl get pods -l app=fortio -o 'jsonpath={.items[0].metadata.name}') Use the following command to call the Service with two concurrent connections using the ``-c 2`` flag and send 20 requests using the ``-n 20`` flag: ..
code-block:: shell-session $ kubectl exec "$FORTIO\_POD" -c fortio -- /usr/bin/fortio load -c 2 -qps 0 -n 20 http://echo-service:8080 Output:: $ kubectl exec "$FORTIO\_POD" -c fortio -- /usr/bin/fortio load -c 2 -qps 0 -n 20 http://echo-service:8080 {"ts":1692767216.838976,"level":"info","file":"scli.go","line":107,"msg":"Starting Φορτίο 1.57.3 h1:kdPlBiws3cFsLcssZxCt2opFmHj14C3yPBokFhMWzmg= go1.20.6 amd64 linux"} Fortio 1.57.3 running at 0 queries per second, 4->4 procs, for 20 calls: http://echo-service:8080 {"ts":1692767216.839520,"level":"info","file":"httprunner.go","line":100,"msg":"Starting http test","run":"0","url":"http://echo-service:8080","threads":"2","qps":"-1.0","warmup":"parallel","conn-reuse":""} Starting at max qps with 2 thread(s) [gomax 4] for exactly 20 calls (10 per thread + 0) {"ts":1692767216.842149,"level":"warn","file":"http\_client.go","line":1104,"msg":"Non ok http code","code":"503","status":"HTTP/1.1 503","thread":"1","run":"0"} {"ts":1692767216.854289,"level":"info","file":"periodic.go","line":832,"msg":"T001 ended after 13.462339ms : 10 calls. qps=742.8129688310479"} {"ts":1692767216.854985,"level":"info","file":"periodic.go","line":832,"msg":"T000 ended after 14.158587ms : 10 calls. qps=706.2851681456631"} Ended after 14.197088ms : 20 calls. 
qps=1408.7 {"ts":1692767216.855035,"level":"info","file":"periodic.go","line":564,"msg":"Run ended","run":"0","elapsed":"14.197088ms","calls":"20","qps":"1408.739595049351"} Aggregated Function Time : count 20 avg 0.0013703978 +/- 0.000461 min 0.00092124 max 0.002696039 sum 0.027407957 # range, mid point, percentile, count >= 0.00092124 <= 0.001 , 0.00096062 , 10.00, 2 > 0.001 <= 0.002 , 0.0015 , 90.00, 16 > 0.002 <= 0.00269604 , 0.00234802 , 100.00, 2 # target 50% 0.0015 # target 75% 0.0018125 # target 90% 0.002 # target 99% 0.00262644 # target 99.9% 0.00268908 Error cases : count 1 avg 0.00133143 +/- 0 min 0.00133143 max 0.00133143 sum 0.00133143 # range, mid point, percentile, count >= 0.00133143 <= 0.00133143 , 0.00133143 , 100.00, 1 # target 50% 0.00133143 # target 75% 0.00133143 # target 90% 0.00133143 # target 99% 0.00133143 # target 99.9% 0.00133143 # Socket and IP used for each connection: [0] 1 socket used, resolved to 10.96.182.43:8080, connection timing : count 1 avg 0.000426815 +/- 0 min 0.000426815
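The thresholds configured above can be pictured as a simple admission check: at most ``max_requests`` requests in flight, plus ``max_pending_requests`` waiting. The following is an illustrative model of that counting logic, not Envoy's actual implementation:

```python
# Simplified model of the circuit-breaker thresholds configured above:
# up to MAX_REQUESTS in flight, plus MAX_PENDING queued behind them.
MAX_REQUESTS = 2   # max_requests in the CiliumClusterwideEnvoyConfig
MAX_PENDING = 1    # max_pending_requests

def admit(in_flight: int, pending: int) -> bool:
    """Would one more request be accepted in this state?"""
    if in_flight < MAX_REQUESTS:
        return True               # a free in-flight slot: request proceeds
    return pending < MAX_PENDING  # otherwise it may wait in the pending queue

# With 2 concurrent callers, an arrival usually finds a free slot...
print(admit(in_flight=1, pending=0))  # True
# ...or at worst takes the single pending slot.
print(admit(in_flight=2, pending=0))  # True
# With 4 concurrent callers, an arrival can find both slots busy and the
# pending slot taken -- it is rejected, which surfaces as an HTTP 503.
print(admit(in_flight=2, pending=1))  # False
```

This is why the 2-connection run above sees only occasional 503s, while raising concurrency to 4 trips the breaker for most requests.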
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/envoy-circuit-breaker.rst
0.00133143 , 100.00, 1 # target 50% 0.00133143 # target 75% 0.00133143 # target 90% 0.00133143 # target 99% 0.00133143 # target 99.9% 0.00133143 # Socket and IP used for each connection: [0] 1 socket used, resolved to 10.96.182.43:8080, connection timing : count 1 avg 0.000426815 +/- 0 min 0.000426815 max 0.000426815 sum 0.000426815 [1] 2 socket used, resolved to 10.96.182.43:8080, connection timing : count 2 avg 0.0004071275 +/- 0.0001215 min 0.000285596 max 0.000528659 sum 0.000814255 Connection time histogram (s) : count 3 avg 0.00041369 +/- 9.966e-05 min 0.000285596 max 0.000528659 sum 0.00124107 # range, mid point, percentile, count >= 0.000285596 <= 0.000528659 , 0.000407128 , 100.00, 3 # target 50% 0.000346362 # target 75% 0.00043751 # target 90% 0.0004922 # target 99% 0.000525013 # target 99.9% 0.000528294 Sockets used: 3 (for perfect keepalive, would be 2) Uniform: false, Jitter: false, Catchup allowed: true IP addresses distribution: 10.96.182.43:8080: 3 Code 200 : 19 (95.0 %) Code 503 : 1 (5.0 %) Response Header Sizes : count 20 avg 370.5 +/- 85 min 0 max 390 sum 7410 Response Body/Total Sizes : count 20 avg 2340.15 +/- 465.7 min 310 max 2447 sum 46803 All done 20 calls (plus 0 warmup) 1.370 ms avg, 1408.7 qps From the above output, you can see that the response code of some requests is 503, which triggers a circuit breaker. Bring the number of concurrent connections up to 4. 
Output:: $ kubectl exec "$FORTIO\_POD" -c fortio -- /usr/bin/fortio load -c 4 -qps 0 -n 20 http://echo-service:8080 {"ts":1692767495.818546,"level":"info","file":"scli.go","line":107,"msg":"Starting Φορτίο 1.57.3 h1:kdPlBiws3cFsLcssZxCt2opFmHj14C3yPBokFhMWzmg= go1.20.6 amd64 linux"} Fortio 1.57.3 running at 0 queries per second, 4->4 procs, for 20 calls: http://echo-service:8080 {"ts":1692767495.819105,"level":"info","file":"httprunner.go","line":100,"msg":"Starting http test","run":"0","url":"http://echo-service:8080","threads":"4","qps":"-1.0","warmup":"parallel","conn-reuse":""} Starting at max qps with 4 thread(s) [gomax 4] for exactly 20 calls (5 per thread + 0) {"ts":1692767495.822424,"level":"warn","file":"http\_client.go","line":1104,"msg":"Non ok http code","code":"503","status":"HTTP/1.1 503","thread":"3","run":"0"} {"ts":1692767495.822428,"level":"warn","file":"http\_client.go","line":1104,"msg":"Non ok http code","code":"503","status":"HTTP/1.1 503","thread":"0","run":"0"} {"ts":1692767495.822603,"level":"warn","file":"http\_client.go","line":1104,"msg":"Non ok http code","code":"503","status":"HTTP/1.1 503","thread":"1","run":"0"} {"ts":1692767495.823855,"level":"warn","file":"http\_client.go","line":1104,"msg":"Non ok http code","code":"503","status":"HTTP/1.1 503","thread":"0","run":"0"} {"ts":1692767495.825250,"level":"warn","file":"http\_client.go","line":1104,"msg":"Non ok http code","code":"503","status":"HTTP/1.1 503","thread":"1","run":"0"} {"ts":1692767495.825285,"level":"warn","file":"http\_client.go","line":1104,"msg":"Non ok http code","code":"503","status":"HTTP/1.1 503","thread":"0","run":"0"} {"ts":1692767495.827282,"level":"warn","file":"http\_client.go","line":1104,"msg":"Non ok http code","code":"503","status":"HTTP/1.1 503","thread":"0","run":"0"} {"ts":1692767495.827514,"level":"warn","file":"http\_client.go","line":1104,"msg":"Non ok http code","code":"503","status":"HTTP/1.1 503","thread":"2","run":"0"} 
{"ts":1692767495.829886,"level":"warn","file":"http\_client.go","line":1104,"msg":"Non ok http code","code":"503","status":"HTTP/1.1 503","thread":"0","run":"0"} {"ts":1692767495.830156,"level":"info","file":"periodic.go","line":832,"msg":"T000 ended after 9.136284ms : 5 calls. qps=547.268451812575"} {"ts":1692767495.830326,"level":"warn","file":"http\_client.go","line":1104,"msg":"Non ok http code","code":"503","status":"HTTP/1.1 503","thread":"2","run":"0"} {"ts":1692767495.831175,"level":"warn","file":"http\_client.go","line":1104,"msg":"Non ok http code","code":"503","status":"HTTP/1.1 503","thread":"3","run":"0"} {"ts":1692767495.832826,"level":"warn","file":"http\_client.go","line":1104,"msg":"Non ok http code","code":"503","status":"HTTP/1.1 503","thread":"3","run":"0"} {"ts":1692767495.834028,"level":"warn","file":"http\_client.go","line":1104,"msg":"Non ok http code","code":"503","status":"HTTP/1.1 503","thread":"3","run":"0"} {"ts":1692767495.834116,"level":"info","file":"periodic.go","line":832,"msg":"T003 ended after 13.09904ms : 5 calls. qps=381.7073617608619"} {"ts":1692767495.834865,"level":"info","file":"periodic.go","line":832,"msg":"T001 ended after 13.846811ms : 5 calls. qps=361.09397318992796"} {"ts":1692767495.835370,"level":"info","file":"periodic.go","line":832,"msg":"T002 ended after 14.352324ms : 5 calls. qps=348.3756358900482"} Ended after 14.386516ms : 20 calls. 
qps=1390.2 {"ts":1692767495.835489,"level":"info","file":"periodic.go","line":564,"msg":"Run ended","run":"0","elapsed":"14.386516ms","calls":"20","qps":"1390.1906479650806"} Aggregated Function Time : count 20 avg 0.0024801033 +/- 0.001782 min 0.000721482 max 0.008055527 sum 0.049602066 # range, mid point, percentile, count >= 0.000721482 <= 0.001 , 0.000860741 , 10.00, 2 > 0.001 <= 0.002 , 0.0015 , 45.00, 7 > 0.002 <= 0.003 , 0.0025 , 80.00, 7 > 0.003 <= 0.004 , 0.0035 , 85.00, 1 > 0.005 <= 0.006 , 0.0055 , 95.00, 2 > 0.008 <= 0.00805553 , 0.00802776 , 100.00, 1 # target 50% 0.00214286 # target 75% 0.00285714 # target 90% 0.0055 # target 99% 0.00804442 # target 99.9% 0.00805442 Error cases : count 13 avg 0.0016602806 +/- 0.0006006 min 0.000721482 max 0.00281812 sum 0.021583648 # range, mid point, percentile, count >= 0.000721482 <= 0.001 , 0.000860741 , 15.38, 2 > 0.001 <= 0.002 , 0.0015 , 61.54, 6 > 0.002 <= 0.00281812 , 0.00240906 , 100.00, 5 # target 50% 0.00175 # target 75% 0.00228634 # target 90% 0.00260541 # target 99% 0.00279685 # target 99.9% 0.00281599 # Socket and IP used for each connection: [0] 5 socket used, resolved to 10.96.182.43:8080, connection timing : count 5 avg 0.0003044688 +/- 0.0001472 min 0.000120654 max 0.00053878 sum 0.001522344 [1] 3 socket used, resolved to
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/envoy-circuit-breaker.rst
75% 0.00228634 # target 90% 0.00260541 # target 99% 0.00279685 # target 99.9% 0.00281599 # Socket and IP used for each connection: [0] 5 socket used, resolved to 10.96.182.43:8080, connection timing : count 5 avg 0.0003044688 +/- 0.0001472 min 0.000120654 max 0.00053878 sum 0.001522344 [1] 3 socket used, resolved to 10.96.182.43:8080, connection timing : count 3 avg 0.00041437933 +/- 9.571e-05 min 0.000330279 max 0.000548277 sum 0.001243138 [2] 3 socket used, resolved to 10.96.182.43:8080, connection timing : count 3 avg 0.00041114067 +/- 0.0001352 min 0.000306734 max 0.00060203 sum 0.001233422 [3] 4 socket used, resolved to 10.96.182.43:8080, connection timing : count 4 avg 0.00038631225 +/- 0.0002447 min 0.000175125 max 0.00080311 sum 0.001545249 Connection time histogram (s) : count 15 avg 0.0003696102 +/- 0.0001758 min 0.000120654 max 0.00080311 sum 0.005544153 # range, mid point, percentile, count >= 0.000120654 <= 0.00080311 , 0.000461882 , 100.00, 15 # target 50% 0.000437509 # target 75% 0.000620309 # target 90% 0.00072999 # target 99% 0.000795798 # target 99.9% 0.000802379 Sockets used: 15 (for perfect keepalive, would be 4) Uniform: false, Jitter: false, Catchup allowed: true IP addresses distribution: 10.96.182.43:8080: 15 Code 200 : 7 (35.0 %) Code 503 : 13 (65.0 %) Response Header Sizes : count 20 avg 136.5 +/- 186 min 0 max 390 sum 2730 Response Body/Total Sizes : count 20 avg 1026.9 +/- 1042 min 241 max 2447 sum 20538 All done 20 calls (plus 0 warmup) 2.480 ms avg, 1390.2 qps Now you can start to see the expected Circuit breaking behavior. Only 35% of the requests succeeded and the rest were trapped by Circuit breaking. .. parsed-literal:: Code 200 : 7 (35.0 %) Code 503 : 13 (65.0 %) Cleaning up =========== Remove the rules. .. parsed-literal:: $ kubectl delete -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/envoy/envoy-circuit-breaker.yaml Remove the test application. .. 
parsed-literal:: $ kubectl delete -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/envoy/test-application-proxy-circuit-breaker.yaml
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/envoy-circuit-breaker.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_gs\_l7\_traffic\_management: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* L7-Aware Traffic Management \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Cilium provides a way to control L7 traffic via CRDs (e.g. CiliumEnvoyConfig and CiliumClusterwideEnvoyConfig). Prerequisites ############# \* Cilium must be configured with the kube-proxy replacement, using ``kubeProxyReplacement=true``. For more information, see :ref:`kube-proxy replacement `. Caveats ####### \* ``CiliumEnvoyConfig`` resources have only minimal validation performed, and do not have a defined conflict resolution behavior. This means that if you create multiple CECs that modify the same parts of Envoy's config, the results may be unpredictable. \* In addition to this minimal validation, ``CiliumEnvoyConfig`` has minimal feedback to the user about the correctness of the configuration. So in the event a CEC does produce an undesirable outcome, troubleshooting will require inspecting the Envoy config and logs, rather than being able to look at the ``CiliumEnvoyConfig`` in question. \* ``CiliumEnvoyConfig`` is used by Cilium's Ingress and Gateway API support to direct traffic through the per-node Envoy proxies. If you create CECs that conflict with or modify the autogenerated config, results may be unpredictable. Be very careful using CECs for these use cases. The above risks are managed by ensuring that all config generated by Cilium is semantically valid, as far as possible. \* If you create a ``CiliumEnvoyConfig`` resource directly (i.e., not via the Cilium Ingress or Gateway API controllers) and the CEC is intended to manage E/W traffic, set the annotation ``cec.cilium.io/use-original-source-address: "false"``.
Otherwise, Envoy will bind the sockets for the upstream connection pools to the original source address/port. This may cause 5-tuple collisions when pods send multiple requests over the same pipelined HTTP/1.1 or HTTP/2 connection. (The Cilium agent assumes all CECs with parentRefs pointing to the Cilium Ingress or Gateway API controllers have annotation ``cec.cilium.io/use-original-source-address`` set to ``"false"``, but all other CECs are assumed to have this annotation set to ``"true"``.) .. include:: installation.rst Supported Envoy API Versions ============================ As of now only the Envoy API v3 is supported. Supported Envoy Extension Resource Types ======================================== Envoy extensions are resource types that may or may not be built in to an Envoy build. The standard types referred to in Envoy documentation, such as ``type.googleapis.com/envoy.config.listener.v3.Listener``, and ``type.googleapis.com/envoy.config.route.v3.RouteConfiguration``, are always available. Cilium nodes deploy an Envoy image to support Cilium HTTP policy enforcement and observability. This build of Envoy has been optimized for the needs of the Cilium Agent and does not contain many of the Envoy extensions available in the Envoy code base. To see which Envoy extensions are available, please have a look at the `Envoy extensions configuration file `\_. Only the extensions that have not been commented out with ``#`` are built in to the Cilium Envoy image. We will evolve the list of built-in extensions based on user feedback. Examples ######## Please refer to one of the below examples on how to use and leverage Cilium's Ingress features: .. toctree:: :maxdepth: 1 :glob: envoy-custom-listener envoy-traffic-management envoy-circuit-breaker envoy-load-balancing envoy-traffic-shifting
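For a ``CiliumEnvoyConfig`` created directly to manage E/W traffic, the annotation discussed above lives in the resource metadata. A minimal sketch, with illustrative names and the Envoy ``resources`` body elided:

```yaml
apiVersion: cilium.io/v2
kind: CiliumEnvoyConfig
metadata:
  name: my-ew-config          # illustrative name
  namespace: default
  annotations:
    # Required for CECs managing east/west traffic that were not created
    # by the Cilium Ingress or Gateway API controllers:
    cec.cilium.io/use-original-source-address: "false"
spec:
  services:
    - name: echo-service      # illustrative backend Service
      namespace: default
  resources: []               # Envoy Listener/Cluster definitions go here
```

Remember from the caveats above that such a resource receives only minimal validation, so check the Cilium Agent logs after applying it.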
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/l7-traffic-management.rst
Create TLS Certificate and Private Key ====================================== .. tabs:: .. group-tab:: Self-signed Certificate For demonstration purposes we will use a TLS certificate signed by a made-up, `self-signed `\_ certificate authority (CA). One easy way to do this is with `mkcert `\_. We want a certificate that will validate ``bookinfo.cilium.rocks`` and ``hipstershop.cilium.rocks``, as these are the host names used in this example. .. code-block:: shell-session $ mkcert bookinfo.cilium.rocks hipstershop.cilium.rocks Note: the local CA is not installed in the system trust store. Run "mkcert -install" for certificates to be trusted automatically ⚠️ Created a new certificate valid for the following names 📜 - "bookinfo.cilium.rocks" - "hipstershop.cilium.rocks" The certificate is at "./bookinfo.cilium.rocks+1.pem" and the key at "./bookinfo.cilium.rocks+1-key.pem" ✅ It will expire on 29 November 2026 🗓 Create a Kubernetes secret with this demo key and certificate: .. code-block:: shell-session $ kubectl create secret tls demo-cert --key=bookinfo.cilium.rocks+1-key.pem --cert=bookinfo.cilium.rocks+1.pem .. group-tab:: cert-manager Let us install cert-manager: .. code-block:: shell-session $ helm repo add jetstack https://charts.jetstack.io $ helm install cert-manager jetstack/cert-manager --version v1.16.2 \ --namespace cert-manager \ --set crds.enabled=true \ --create-namespace \ --set config.apiVersion="controller.config.cert-manager.io/v1alpha1" \ --set config.kind="ControllerConfiguration" \ --set config.enableGatewayAPI=true Now, create a CA Issuer: .. parsed-literal:: $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/ca-issuer.yaml
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/tls-cert.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* External Lock-down Policy \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* By default, all the external traffic is allowed. Let's apply a `CiliumNetworkPolicy` to lock down external traffic. .. literalinclude:: ../../../examples/kubernetes/servicemesh/policy/external-lockdown.yaml :language: yaml .. parsed-literal:: $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/policy/external-lockdown.yaml With this policy applied, any request originating from outside the cluster will be rejected with a ``403 Forbidden`` .. code-block:: shell-session $ curl --fail -v http://"$HTTP\_INGRESS"/details/1 \* Trying 172.18.255.194:80... \* Connected to 172.18.255.194 (172.18.255.194) port 80 > GET /details/1 HTTP/1.1 > Host: 172.18.255.194 > User-Agent: curl/8.6.0 > Accept: \*/\* > < HTTP/1.1 403 Forbidden < content-length: 15 < content-type: text/plain < date: Thu, 29 Feb 2024 12:59:54 GMT < server: envoy \* The requested URL returned error: 403 \* Closing connection curl: (22) The requested URL returned error: 403 # Capture hubble flows in another terminal $ kubectl --namespace=kube-system exec -i -t cilium-xjl4x -- hubble observe -f --identity ingress Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), wait-for-node-init (init), clean-cilium-state (init), install-cni-binaries (init) Feb 29 13:00:29.389: 172.18.0.1:53866 (ingress) -> kube-system/cilium-ingress:80 (world) http-request DROPPED (HTTP/1.1 GET http://172.18.255.194/details/1) Feb 29 13:00:29.389: 172.18.0.1:53866 (ingress) <- kube-system/cilium-ingress:80 (world) http-response FORWARDED (HTTP/1.1 403 0ms (GET http://172.18.255.194/details/1)) Let's check if in-cluster traffic to the 
Ingress endpoint is still allowed: .. parsed-literal:: # The test-application.yaml contains a client pod with curl available $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/envoy/test-application.yaml $ kubectl exec -it deployment/client -- curl -s http://$HTTP\_INGRESS/details/1 {"id":1,"author":"William Shakespeare","year":1595,"type":"paperback","pages":200,"publisher":"PublisherA","language":"English","ISBN-10":"1234567890","ISBN-13":"123-1234567890"}% Another common use case is to allow only a specific set of IP addresses to access the Ingress. This can be achieved with the policy below: .. literalinclude:: ../../../examples/kubernetes/servicemesh/policy/allow-ingress-cidr.yaml :language: yaml .. parsed-literal:: $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/policy/allow-ingress-cidr.yaml .. code-block:: shell-session $ curl -s --fail http://"$HTTP\_INGRESS"/details/1 {"id":1,"author":"William Shakespeare","year":1595,"type":"paperback","pages":200,"publisher":"PublisherA","language":"English","ISBN-10":"1234567890","ISBN-13":"123-1234567890"}
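The CIDR allow-list check applied by such a policy is plain prefix matching on the client's source address. A small illustration in Python (the CIDR value below is made up, not taken from ``allow-ingress-cidr.yaml``):

```python
import ipaddress

# Hypothetical allow-list; the real values live in allow-ingress-cidr.yaml.
ALLOWED_CIDRS = [ipaddress.ip_network("172.18.0.0/24")]

def is_allowed(client_ip: str) -> bool:
    """Check whether a client source address falls in an allowed CIDR block."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_CIDRS)

print(is_allowed("172.18.0.1"))   # True  -- inside the allowed range
print(is_allowed("203.0.113.7"))  # False -- such a client gets the 403 above
```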
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/external-ingress-policy.rst
.. _gsg_istio:

***********************
Integration with Istio
***********************

This page helps you get started using Istio with a Cilium-enabled Kubernetes
cluster.

This document covers the following common aspects of Cilium's integration with
Istio:

* Cilium configuration
* Istio configuration
* Demo application

.. note::

    You can run Cilium with Istio in two ways:

    1. **With kube-proxy present (recommended):**

       - Set ``kubeProxyReplacement: false`` (the default).
       - Cilium does not fully replace kube-proxy; kube-proxy continues to
         handle ClusterIP routing.
       - This is the recommended setup for using Istio with minimal
         disruption, particularly in sidecar or ambient mode.

    2. **With kube-proxy removed (full replacement):**

       - Set ``kubeProxyReplacement: true``, ``socketLB.hostNamespaceOnly:
         true``, and ``cni.exclusive: false``.
       - These settings prevent Cilium's socket-based load balancing from
         interfering with Istio's proxying.
       - kube-proxy can be removed in this mode, but these configurations are
         required to ensure compatibility.

    In summary, you can run Istio with Cilium and kube-proxy by setting
    ``kubeProxyReplacement: false`` (the default, and recommended for most
    Istio installs); or you can run without kube-proxy by setting
    ``kubeProxyReplacement: true``, but you must carefully configure Cilium to
    avoid conflicts with Istio.

Cilium Configuration
====================

The main goal of Cilium configuration is to ensure that traffic redirected to
Istio's `sidecar proxies (sidecar mode)`_ or `node proxy (ambient mode)`_ is
not disrupted. Disruptions can happen when you enable Cilium's
``kubeProxyReplacement`` feature (see :ref:`kubeproxy-free` docs), which
enables socket-based load balancing inside a Pod.

To ensure that Cilium does not interfere with Istio, it is important to set
the ``bpf-lb-sock-hostns-only`` parameter in the Cilium ConfigMap to ``true``.
This can be achieved by using the ``--set`` flag with the
``socketLB.hostNamespaceOnly`` Helm value set to ``true``. You can confirm the
result with the following command:

.. code-block:: shell-session

    $ kubectl get configmaps -n kube-system cilium-config -oyaml | grep bpf-lb-sock-hostns
      bpf-lb-sock-hostns-only: "true"

Istio uses a CNI plugin to implement functionality for both sidecar and
ambient modes. To ensure that Cilium does not interfere with other CNI plugins
on the node, it is important to set the ``cni-exclusive`` parameter in the
Cilium ConfigMap to ``false``. This can be achieved by using the ``--set``
flag with the ``cni.exclusive`` Helm value set to ``false``. You can confirm
the result with the following command:

.. code-block:: shell-session

    $ kubectl get configmaps -n kube-system cilium-config -oyaml | grep cni-exclusive
      cni-exclusive: "false"

.. _gsg_istio_cnp:

Istio configuration
===================

When you deploy Cilium and Istio together, be aware of:

* Either Cilium or Istio L7 HTTP policy controls can be used, but it is not
  recommended to use **both** Cilium and Istio L7 HTTP policy controls at the
  same time, to avoid split-brain problems. In order to use Cilium L7 HTTP
  policy controls (for example, :ref:`l7_policy`) with Istio (sidecar or
  ambient modes), you must:

  - Sidecar: Disable Istio mTLS for the workloads you wish to manage with
    Cilium L7 policy by configuring ``mtls.mode=DISABLE`` under Istio's
    `PeerAuthentication`_.
  - Ambient: Remove the workloads you wish to manage with Cilium L7 policy
    from Istio ambient by removing either the ``istio.io/dataplane-mode``
    label from the namespace, or annotating the pods you wish to manage with
    Cilium L7 with ``ambient.istio.io/redirection: disabled``.

  Otherwise, the traffic between Istio-managed workloads will be encrypted by
  Istio with mTLS, and not accessible to Cilium for the purposes of L7 policy
  enforcement. If using Istio L7 HTTP policy controls, policy will be managed
  in Istio and disabling mTLS between workloads is not required.
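Both Istio-compatibility settings described in the Cilium configuration section
above can be checked in one pass. The following is a sketch with the ConfigMap
contents stubbed in so the check logic is visible without a cluster; against a
live cluster you would replace the stub with the ``kubectl get configmaps``
pipeline shown above:

```shell
# Sketch: verify both Istio-compatibility keys at once. The ConfigMap data is
# stubbed below; on a live cluster, replace it with the output of:
#   kubectl get configmaps -n kube-system cilium-config -o yaml
configmap='bpf-lb-sock-hostns-only: "true"
cni-exclusive: "false"'
for key in bpf-lb-sock-hostns-only cni-exclusive; do
  # Print the matching line, or flag the key as missing.
  printf '%s\n' "$configmap" | grep "^$key:" || echo "$key: NOT SET"
done
```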
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/istio.rst
* If using Istio mTLS in ambient mode with Istio L7 HTTP policy controls,
  traffic between ambient workloads will be `encrypted and tunneled in and out
  of the pods by Istio over port 15008`_. In this scenario, Cilium
  NetworkPolicy will still apply to the encrypted and tunneled L4 traffic
  entering and leaving the Istio-managed pods, but Cilium will have no
  visibility into the actual source and destination of that tunneled and
  encrypted L4 traffic, or any L7 information. This means that Istio should be
  used to enforce policy for traffic between Istio-managed, mTLS-secured
  workloads at L4 or above. Traffic ingressing to Istio-managed workloads from
  non-Istio-managed workloads will continue to be fully subjected to
  Cilium-enforced Kubernetes NetworkPolicy, as it would not be tunneled or
  encrypted.

* When using Istio in sidecar mode with `automatic sidecar injection`_,
  together with Cilium overlay mode (VXLAN or GENEVE), ``istiod`` pods must be
  running with ``hostNetwork: true`` in order to be reachable by the API
  server.

Demo Application (Using Cilium with Istio ambient mode)
=======================================================

The following guide demonstrates the interaction between Istio's ambient
``mTLS`` mode and Cilium network policies when using Cilium L7 HTTP policy
controls instead of Istio L7 HTTP policy controls, including the caveat
described in the :ref:`gsg_istio_cnp` section.

Prerequisites
^^^^^^^^^^^^^

* Istio is already installed on the local Kubernetes cluster.
* Cilium is already installed with the ``socketLB.hostNamespaceOnly`` and
  ``cni.exclusive=false`` Helm values.
* Istio's ``istioctl`` is installed on the local host.

Start by deploying a set of web servers and client applications across three
different namespaces:

.. parsed-literal::

    kubectl create ns red
    kubectl label namespace red istio.io/dataplane-mode=ambient
    kubectl -n red apply -f <(curl -s \ |SCM_WEB|\/examples/kubernetes-istio/httpbin.yaml)
    kubectl -n red apply -f <(curl -s \ |SCM_WEB|\/examples/kubernetes-istio/netshoot.yaml)
    kubectl create ns blue
    kubectl label namespace blue istio.io/dataplane-mode=ambient
    kubectl -n blue apply -f <(curl -s \ |SCM_WEB|\/examples/kubernetes-istio/httpbin.yaml)
    kubectl -n blue apply -f <(curl -s \ |SCM_WEB|\/examples/kubernetes-istio/netshoot.yaml)
    kubectl create ns green
    kubectl -n green apply -f \ |SCM_WEB|\/examples/kubernetes-istio/netshoot.yaml

By default, Istio works in ``PERMISSIVE`` mode, allowing both
Istio-ambient-managed and Istio-unmanaged pods to send and receive unsecured
traffic between each other. You can test the connectivity between client and
server applications deployed in the preceding example by entering the
following commands:

.. code-block:: shell-session

    kubectl exec -n red deploy/netshoot -- curl http://httpbin.red/ip -s -o /dev/null -m 1 -w "client 'red' to server 'red': %{http_code}\n"
    kubectl exec -n blue deploy/netshoot -- curl http://httpbin.red/ip -s -o /dev/null -m 1 -w "client 'blue' to server 'red': %{http_code}\n"
    kubectl exec -n green deploy/netshoot -- curl http://httpbin.red/ip -s -o /dev/null -m 1 -w "client 'green' to server 'red': %{http_code}\n"
    kubectl exec -n red deploy/netshoot -- curl http://httpbin.blue/ip -s -o /dev/null -m 1 -w "client 'red' to server 'blue': %{http_code}\n"
    kubectl exec -n blue deploy/netshoot -- curl http://httpbin.blue/ip -s -o /dev/null -m 1 -w "client 'blue' to server 'blue': %{http_code}\n"
    kubectl exec -n green deploy/netshoot -- curl http://httpbin.blue/ip -s -o /dev/null -m 1 -w "client 'green' to server 'blue': %{http_code}\n"

All commands should complete successfully:

.. code-block:: shell-session

    client 'red' to server 'red': 200
    client 'blue' to server 'red': 200
    client 'green' to server 'red': 200
    client 'red' to server 'blue': 200
    client 'blue' to server 'blue': 200
    client 'green' to server 'blue': 200
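The six connectivity checks above all follow one pattern, so they can be
generated rather than typed. A sketch (the loop echoes the commands so you can
review them before piping the output to ``sh``):

```shell
# Sketch: generate the six client/server connectivity checks with a loop.
# Echoed for review; pipe to sh to execute against a cluster.
for server in red blue; do
  for client in red blue green; do
    echo "kubectl exec -n $client deploy/netshoot -- curl http://httpbin.$server/ip -s -o /dev/null -m 1 -w \"client '$client' to server '$server': %{http_code}\n\""
  done
done
```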
You can apply Cilium-enforced L4 NetworkPolicy to restrict communication
between namespaces. The following command applies an L4 network policy that
restricts communication in the ``blue`` namespace to clients located only in
the ``blue`` and ``red`` namespaces.

.. parsed-literal::

    kubectl -n blue apply -f \ |SCM_WEB|\/examples/kubernetes-istio/l4-policy.yaml

Re-run the same connectivity checks to confirm the expected result:

.. code-block:: shell-session

    client 'red' to server 'red': 200
    client 'blue' to server 'red': 200
    client 'green' to server 'red': 200
    client 'red' to server 'blue': 200
    client 'blue' to server 'blue': 200
    client 'green' to server 'blue': 000
    command terminated with exit code 28

You can then decide to enhance the same network policy to perform additional
HTTP-based checks. The following command applies a Cilium L7 network policy
allowing communication only with the ``/ip`` URL path:

.. parsed-literal::

    kubectl -n blue apply -f \ |SCM_WEB|\/examples/kubernetes-istio/l7-policy.yaml

At this point, all communication with the ``blue`` namespace is broken, since
the Cilium proxy (HTTP) interferes with Istio's mTLS-based HTTPS connections:

.. code-block:: shell-session

    client 'red' to server 'red': 200
    client 'blue' to server 'red': 200
    client 'green' to server 'red': 200
    client 'red' to server 'blue': 000
    command terminated with exit code 28
    client 'blue' to server 'blue': 000
    command terminated with exit code 28
    client 'green' to server 'blue': 000
    command terminated with exit code 28

To solve the problem and allow Cilium to manage L7 policy, you must remove the
workloads or namespaces you want Cilium to manage L7 policy for from the Istio
ambient mesh:

.. parsed-literal::

    kubectl label namespace red istio.io/dataplane-mode-
    kubectl label namespace blue istio.io/dataplane-mode-

Re-run a connectivity check to confirm that communication with the ``blue``
namespace has been restored. You can verify that Cilium is enforcing the L7
network policy by accessing a different URL path, for example ``/deny``:

.. code-block:: shell-session

    $ kubectl exec -n red deploy/netshoot -- curl http://httpbin.blue/deny -s -o /dev/null -m 1 -w "client 'red' to server 'blue': %{http_code}\n"
    client 'red' to server 'blue': 403

Demo Application (Istio sidecar mode)
=====================================

The following guide demonstrates the interaction between Istio's sidecar-based
``mTLS`` mode and Cilium network policies when using Cilium L7 HTTP policy
controls instead of Istio L7 HTTP policy controls, including the caveat
described in the :ref:`gsg_istio_cnp` section around disabling ``mTLS``.

Prerequisites
^^^^^^^^^^^^^

* Istio is already installed on the local Kubernetes cluster.
* Cilium is already installed with the ``socketLB.hostNamespaceOnly`` and
  ``cni.exclusive=false`` Helm values.
* Istio's ``istioctl`` is installed on the local host.

Start by deploying a set of web servers and client applications across three
different namespaces:

.. parsed-literal::

    kubectl create ns red
    kubectl -n red apply -f <(curl -s \ |SCM_WEB|\/examples/kubernetes-istio/httpbin.yaml | istioctl kube-inject -f -)
    kubectl -n red apply -f <(curl -s \ |SCM_WEB|\/examples/kubernetes-istio/netshoot.yaml | istioctl kube-inject -f -)
    kubectl create ns blue
    kubectl -n blue apply -f <(curl -s \ |SCM_WEB|\/examples/kubernetes-istio/httpbin.yaml | istioctl kube-inject -f -)
    kubectl -n blue apply -f <(curl -s \ |SCM_WEB|\/examples/kubernetes-istio/netshoot.yaml | istioctl kube-inject -f -)
    kubectl create ns green
    kubectl -n green apply -f \ |SCM_WEB|\/examples/kubernetes-istio/netshoot.yaml

By default, Istio works in ``PERMISSIVE`` mode, allowing both Istio-managed
pods and pods without sidecars to send and receive traffic between each other.
You can test the connectivity between client and server applications deployed
in the preceding example by entering the following commands:
.. code-block:: shell-session

    kubectl exec -n red deploy/netshoot -- curl http://httpbin.red/ip -s -o /dev/null -m 1 -w "client 'red' to server 'red': %{http_code}\n"
    kubectl exec -n blue deploy/netshoot -- curl http://httpbin.red/ip -s -o /dev/null -m 1 -w "client 'blue' to server 'red': %{http_code}\n"
    kubectl exec -n green deploy/netshoot -- curl http://httpbin.red/ip -s -o /dev/null -m 1 -w "client 'green' to server 'red': %{http_code}\n"
    kubectl exec -n red deploy/netshoot -- curl http://httpbin.blue/ip -s -o /dev/null -m 1 -w "client 'red' to server 'blue': %{http_code}\n"
    kubectl exec -n blue deploy/netshoot -- curl http://httpbin.blue/ip -s -o /dev/null -m 1 -w "client 'blue' to server 'blue': %{http_code}\n"
    kubectl exec -n green deploy/netshoot -- curl http://httpbin.blue/ip -s -o /dev/null -m 1 -w "client 'green' to server 'blue': %{http_code}\n"

All commands should complete successfully:

.. code-block:: shell-session

    client 'red' to server 'red': 200
    client 'blue' to server 'red': 200
    client 'green' to server 'red': 200
    client 'red' to server 'blue': 200
    client 'blue' to server 'blue': 200
    client 'green' to server 'blue': 200

You can apply network policies to restrict communication between namespaces.
The following command applies a Cilium-managed L4 network policy that
restricts communication in the ``blue`` namespace to clients located only in
the ``blue`` and ``red`` namespaces.

.. parsed-literal::

    kubectl -n blue apply -f \ |SCM_WEB|\/examples/kubernetes-istio/l4-policy.yaml

Re-run the same connectivity checks to confirm the expected result:

.. code-block:: shell-session

    client 'red' to server 'red': 200
    client 'blue' to server 'red': 200
    client 'green' to server 'red': 200
    client 'red' to server 'blue': 200
    client 'blue' to server 'blue': 200
    client 'green' to server 'blue': 000
    command terminated with exit code 28

You can then decide to enhance the L4 network policy to perform additional
Cilium-managed HTTP-based checks. The following command applies a Cilium L7
network policy allowing communication only with the ``/ip`` URL path:

.. parsed-literal::

    kubectl -n blue apply -f \ |SCM_WEB|\/examples/kubernetes-istio/l7-policy.yaml

At this point, all communication with the ``blue`` namespace is broken, since
the Cilium proxy (HTTP) interferes with Istio's mTLS-based HTTPS connections:

.. code-block:: shell-session

    client 'red' to server 'red': 200
    client 'blue' to server 'red': 200
    client 'green' to server 'red': 200
    client 'red' to server 'blue': 503
    client 'blue' to server 'blue': 503
    client 'green' to server 'blue': 000
    command terminated with exit code 28

To solve the problem and allow Cilium to manage L7 policy, you must disable
Istio's mTLS authentication by configuring a new policy:

.. literalinclude:: ../../../examples/kubernetes-istio/authn.yaml
    :language: yaml

You must apply this policy to the same namespace where you implement the
HTTP-based network policy:

.. parsed-literal::

    kubectl -n blue apply -f \ |SCM_WEB|\/examples/kubernetes-istio/authn.yaml

Re-run a connectivity check to confirm that communication with the ``blue``
namespace has been restored. You can verify that Cilium is enforcing the L7
network policy by accessing a different URL path, for example ``/deny``:

.. code-block:: shell-session

    $ kubectl exec -n red deploy/netshoot -- curl http://httpbin.blue/deny -s -o /dev/null -m 1 -w "client 'red' to server 'blue': %{http_code}\n"
    client 'red' to server 'blue': 403
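As an illustration only (not captured cluster output), the expected per-path
behavior under the ``/ip``-only L7 policy used in this guide can be sketched as
a small loop; ``/headers`` is a hypothetical extra httpbin path added for the
example:

```shell
# Illustration only: expected statuses per URL path once the /ip-only Cilium
# L7 policy is enforced (/headers is a hypothetical extra path).
for path in /ip /deny /headers; do
  case "$path" in
    /ip) echo "GET $path -> 200 (matches the allowed path)" ;;
    *)   echo "GET $path -> 403 (denied by the L7 policy)" ;;
  esac
done
```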
Deploy the First Ingress
========================

You'll find the example Ingress definition in ``basic-ingress.yaml``.

.. literalinclude:: ../../../examples/kubernetes/servicemesh/basic-ingress.yaml
    :language: yaml

.. parsed-literal::

    $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/basic-ingress.yaml

This example routes requests for the path ``/details`` to the ``details``
service, and ``/`` to the ``productpage`` service.

Getting the list of services, you'll see that a LoadBalancer service is
automatically created for this Ingress. Your cloud provider will automatically
provision an external IP address, but it may take around 30 seconds.

.. code-block:: shell-session

    # For dedicated load balancer mode
    $ kubectl get svc
    NAME                           TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
    cilium-ingress-basic-ingress   LoadBalancer   10.98.169.125    10.98.169.125   80:32478/TCP   2m11s
    details                        ClusterIP      10.102.131.226   <none>          9080/TCP       2m15s
    kubernetes                     ClusterIP      10.96.0.1        <none>          443/TCP        10m
    productpage                    ClusterIP      10.97.231.139    <none>          9080/TCP       2m15s
    ratings                        ClusterIP      10.108.152.42    <none>          9080/TCP       2m15s
    reviews                        ClusterIP      10.111.145.160   <none>          9080/TCP       2m15s

    # For shared load balancer mode
    $ kubectl get services -n kube-system cilium-ingress
    NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
    cilium-ingress   LoadBalancer   10.98.169.125   10.98.169.125   80:32690/TCP,443:31566/TCP   18m

The external IP address should also be populated into the Ingress:

.. code-block:: shell-session

    $ kubectl get ingress
    NAME            CLASS    HOSTS   ADDRESS         PORTS   AGE
    basic-ingress   cilium   *       10.98.169.125   80      97s

.. note::

    Some providers, e.g. EKS, use a fully-qualified domain name rather than an
    IP address.
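Since some providers publish a hostname instead of an IP, scripts that need the
Ingress address usually try both fields. The helper below is a sketch, with the
two jsonpath results stubbed in; on a live cluster they would come from
``kubectl get ingress basic-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'``
and the corresponding ``...ingress[0].hostname`` path (the FQDN below is made
up for the example):

```shell
# Sketch: prefer the Ingress IP, fall back to the hostname field for
# providers (e.g. EKS) that publish an FQDN instead of an IP.
ingress_address() {
  ip="$1"; host="$2"
  if [ -n "$ip" ]; then echo "$ip"; else echo "$host"; fi
}
ingress_address "10.98.169.125" ""              # IP-based provider
ingress_address "" "abc123.elb.amazonaws.com"   # FQDN-based provider (made-up name)
```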
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/basic-ingress.rst
.. _gs_ingress_grpc:

********************
Ingress gRPC Example
********************

The example ingress configuration in ``grpc-ingress.yaml`` shows how to route
gRPC traffic to backend services.

Deploy the Demo App
*******************

For this demo we will use `GCP's microservices demo app`_.

.. code-block:: shell-session

    $ kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/main/release/kubernetes-manifests.yaml

Since gRPC is binary-encoded, you also need the proto definitions for the gRPC
services in order to make gRPC requests. Download this for the demo app:

.. code-block:: shell-session

    $ curl -o demo.proto https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/main/protos/demo.proto

Deploy GRPC Ingress
*******************

You'll find the example Ingress definition in
``examples/kubernetes/servicemesh/grpc-ingress.yaml``.

.. literalinclude:: ../../../examples/kubernetes/servicemesh/grpc-ingress.yaml
    :language: yaml

.. parsed-literal::

    $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/grpc-ingress.yaml

This defines paths for requests to be routed to the ``productcatalogservice``
and ``currencyservice`` microservices.

Just as in the previous HTTP Ingress example, this creates a LoadBalancer
service, and it may take a little while for your cloud provider to provision
an external IP address.

.. code-block:: shell-session

    $ kubectl get ingress
    NAME           CLASS    HOSTS   ADDRESS         PORTS   AGE
    grpc-ingress   cilium   *       10.111.109.99   80      3s

Make gRPC Requests to Backend Services
**************************************

To issue client gRPC requests you can use `grpcurl`_.

.. code-block:: shell-session

    $ GRPC_INGRESS=$(kubectl get ingress grpc-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    # To access the currency service:
    $ grpcurl -plaintext -proto ./demo.proto $GRPC_INGRESS:80 hipstershop.CurrencyService/GetSupportedCurrencies
    # To access the product catalog service:
    $ grpcurl -plaintext -proto ./demo.proto $GRPC_INGRESS:80 hipstershop.ProductCatalogService/ListProducts
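The two grpcurl invocations differ only in the fully-qualified method name, so
they can be looped. A sketch (echoed for review; drop the ``echo`` to execute,
with ``GRPC_INGRESS`` set as above):

```shell
# Sketch: loop over both gRPC methods instead of repeating the grpcurl line.
# Echoed for review; remove the echo to actually issue the requests.
for method in \
  hipstershop.CurrencyService/GetSupportedCurrencies \
  hipstershop.ProductCatalogService/ListProducts
do
  echo grpcurl -plaintext -proto ./demo.proto "\$GRPC_INGRESS:80" "$method"
done
```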
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/grpc.rst
.. _gs_envoy_traffic_shifting:

*******************
L7 Traffic Shifting
*******************

Cilium Service Mesh defines a ``CiliumEnvoyConfig`` CRD which allows users to
set the configuration of the Envoy component built into Cilium agents. This
example sets up an Envoy listener which load balances requests to the
helloworld Service by sending 90% of incoming requests to the backend
``helloworld-v1`` and 10% of incoming requests to the backend
``helloworld-v2``.

Deploy Test Applications
========================

.. parsed-literal::

    $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/envoy/client-helloworld.yaml

The test workloads consist of:

- One client Deployment, ``client``
- Two server Deployments, ``helloworld-v1`` and ``helloworld-v2``

View information about these Pods and the helloworld Service:

.. code-block:: shell-session

    $ kubectl get pods --show-labels -o wide
    NAME                             READY   STATUS    RESTARTS   AGE     IP           NODE                   NOMINATED NODE   READINESS GATES   LABELS
    client-64848f85dd-sjfmb          1/1     Running   0          2m23s   10.0.0.206   cilium-control-plane   <none>           <none>            kind=client,name=client,pod-template-hash=64848f85dd
    helloworld-v1-5845f97d6b-gkdtk   1/1     Running   0          2m23s   10.0.0.241   cilium-control-plane   <none>           <none>            app=helloworld,pod-template-hash=5845f97d6b,version=v1
    helloworld-v2-7d55d87964-ns9kh   1/1     Running   0          2m23s   10.0.0.251   cilium-control-plane   <none>           <none>            app=helloworld,pod-template-hash=7d55d87964,version=v2

    $ kubectl get svc --show-labels
    NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE     LABELS
    helloworld   ClusterIP   10.96.194.77   <none>        5000/TCP   8m27s   app=helloworld,service=helloworld

Apply weight-based routing
==========================

Make an environment variable with the Pod name for client:

.. code-block:: shell-session

    $ export CLIENT=$(kubectl get pods -l name=client -o jsonpath='{.items[0].metadata.name}')

Try making several requests to the helloworld Service.

.. code-block:: shell-session

    $ for i in {1..10}; do kubectl exec -it $CLIENT -- curl helloworld:5000/hello; done

The test results are as follows::

    Hello version: v2, instance: helloworld-v2-7d55d87964-ns9kh
    Hello version: v2, instance: helloworld-v2-7d55d87964-ns9kh
    Hello version: v2, instance: helloworld-v2-7d55d87964-ns9kh
    Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk
    Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk
    Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk
    Hello version: v2, instance: helloworld-v2-7d55d87964-ns9kh
    Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk
    Hello version: v2, instance: helloworld-v2-7d55d87964-ns9kh
    Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk

The test results were as expected. Of the requests sent to the helloworld
Service, 50% of them were sent to the backend ``helloworld-v1`` and 50% of
them were sent to the backend ``helloworld-v2``.

``CiliumEnvoyConfig`` can be used to load balance traffic destined to one
Service to a group of backend Services. To load balance traffic to the
helloworld Service, first create individual Services for each backend
Deployment.

.. parsed-literal::

    $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/envoy/helloworld-service-v1-v2.yaml

Apply the ``envoy-helloworld-v1-90-v2-10.yaml`` file, which defines a
``CiliumEnvoyConfig`` to send 90% of traffic to the helloworld-v1 Service
backend and 10% of traffic to the helloworld-v2 Service backend:

.. parsed-literal::

    $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/envoy/envoy-helloworld-v1-90-v2-10.yaml

View information about these Pods and Services:

.. code-block:: shell-session

    $ kubectl get pods --show-labels -o wide
    NAME                             READY   STATUS    RESTARTS   AGE     IP           NODE                   NOMINATED NODE   READINESS GATES   LABELS
    client-64848f85dd-sjfmb          1/1     Running   0          2m23s   10.0.0.206   cilium-control-plane   <none>           <none>            kind=client,name=client,pod-template-hash=64848f85dd
    helloworld-v1-5845f97d6b-gkdtk   1/1     Running   0          2m23s   10.0.0.241   cilium-control-plane   <none>           <none>            app=helloworld,pod-template-hash=5845f97d6b,version=v1
    helloworld-v2-7d55d87964-ns9kh   1/1     Running   0          2m23s   10.0.0.251   cilium-control-plane   <none>           <none>            app=helloworld,pod-template-hash=7d55d87964,version=v2

    $ kubectl get svc --show-labels
    NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE   LABELS
    helloworld      ClusterIP   10.96.194.77   <none>        5000/TCP   16m   app=helloworld,service=helloworld
    helloworld-v1   ClusterIP   10.96.0.240    <none>        5000/TCP   4s    app=helloworld,service=helloworld,version=v1
    helloworld-v2   ClusterIP   10.96.41.142   <none>        5000/TCP   4s    app=helloworld,service=helloworld,version=v2

.. include:: warning.rst

Try making several requests to the helloworld Service again.

.. code-block:: shell-session

    $ for i in {1..10}; do kubectl exec -it $CLIENT -- curl helloworld:5000/hello; done

The test results are as follows::

    Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk
    Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk
    Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk
    Hello version: v2, instance: helloworld-v2-7d55d87964-ns9kh
    Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk
    Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk
    Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk
    Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk
    Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk
    Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk
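Rather than eyeballing the ten responses, you can tally the v1/v2 split with
standard tools. A sketch, stubbed with three sample lines; the real loop's
output can be piped through the same ``grep | sort | uniq -c`` pipeline:

```shell
# Sketch: count how many responses came from each backend version.
# Stubbed with three sample lines; pipe the real output the same way.
printf '%s\n' \
  "Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk" \
  "Hello version: v1, instance: helloworld-v1-5845f97d6b-gkdtk" \
  "Hello version: v2, instance: helloworld-v2-7d55d87964-ns9kh" \
  | grep -o 'version: v[12]' | sort | uniq -c
```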
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/envoy-traffic-shifting.rst
The test results were as expected. Of the requests sent to the helloworld
Service, 90% of them were sent to the backend ``helloworld-v1`` and 10% of
them were sent to the backend ``helloworld-v2``.

Cleaning up
===========

Remove the rules.

.. parsed-literal::

    $ kubectl delete -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/envoy/envoy-helloworld-v1-90-v2-10.yaml

Remove the test application.

.. parsed-literal::

    $ kubectl delete -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/envoy/client-helloworld.yaml
    $ kubectl delete -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/envoy/helloworld-service-v1-v2.yaml
Installation ############ .. tabs:: .. group-tab:: Helm Cilium Ingress Controller can be enabled with helm flag ``ingressController.enabled`` set as true. Please refer to :ref:`k8s\_install\_helm` for a fresh installation. .. cilium-helm-upgrade:: :namespace: kube-system :extra-args: --reuse-values :set: ingressController.enabled=true ingressController.loadbalancerMode=dedicated :post-commands: kubectl -n kube-system rollout restart deployment/cilium-operator kubectl -n kube-system rollout restart ds/cilium Cilium can become the default ingress controller by setting the ``--set ingressController.default=true`` flag. This will create ingress entries even when the ``ingressClass`` is not set. If you only want to use envoy traffic management feature without Ingress support, you should only enable ``--enable-envoy-config`` flag. .. cilium-helm-upgrade:: :namespace: kube-system :extra-args: --reuse-values :set: envoyConfig.enabled=true :post-commands: kubectl -n kube-system rollout restart deployment/cilium-operator kubectl -n kube-system rollout restart ds/cilium Additionally, the proxy load-balancing feature can be configured with the ``loadBalancer.l7.backend=envoy`` flag. .. cilium-helm-upgrade:: :namespace: kube-system :extra-args: --reuse-values :set: loadBalancer.l7.backend=envoy :post-commands: kubectl -n kube-system rollout restart deployment/cilium-operator kubectl -n kube-system rollout restart ds/cilium Next you can check the status of the Cilium agent and operator: .. code-block:: shell-session $ cilium status .. include:: ../../installation/cli-download.rst .. group-tab:: Cilium CLI .. include:: ../../installation/cli-download.rst Cilium Ingress Controller can be enabled with the below command .. 
parsed-literal:: $ cilium install |CHART\_VERSION| \ --set kubeProxyReplacement=true \ --set ingressController.enabled=true \ --set ingressController.loadbalancerMode=dedicated Cilium can become the default ingress controller by setting the ``--set ingressController.default=true`` flag. This will create ingress entries even when the ``ingressClass`` is not set. If you only want to use the Envoy traffic management feature without Ingress support, enable only the ``--enable-envoy-config`` flag. .. parsed-literal:: $ cilium install |CHART\_VERSION| \ --set kubeProxyReplacement=true \ --set envoyConfig.enabled=true Additionally, the proxy load-balancing feature can be configured with the ``loadBalancer.l7.backend=envoy`` flag. .. parsed-literal:: $ cilium install |CHART\_VERSION| \ --set kubeProxyReplacement=true \ --set envoyConfig.enabled=true \ --set loadBalancer.l7.backend=envoy Next, you can check the status of the Cilium agent and operator: .. code-block:: shell-session $ cilium status It is also recommended that you :ref:`install Hubble CLI`, which will be used to observe the traffic in later steps.
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/installation.rst
Reference ######### How Cilium Ingress and Gateway API differ from other Ingress controllers \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* One of the biggest differences between Cilium's Ingress and Gateway API support and other Ingress controllers is how closely tied the implementation is to the CNI. For Cilium, Ingress and Gateway API are part of the networking stack, and so behave in a different way to other Ingress or Gateway API controllers (even other Ingress or Gateway API controllers running in a Cilium cluster). Other Ingress or Gateway API controllers are generally installed as a Deployment or Daemonset in the cluster, and exposed via a Loadbalancer Service or similar (which Cilium can, of course, enable). Cilium's Ingress and Gateway API config is exposed with a Loadbalancer or NodePort service, or optionally can be exposed on the Host network also. But in all of these cases, when traffic arrives at the Service's port, eBPF code intercepts the traffic and transparently forwards it to Envoy (using the TPROXY kernel facility). This affects things like client IP visibility, which works differently for Cilium's Ingress and Gateway API support to other Ingress controllers. It also allows Cilium's Network Policy engine to apply CiliumNetworkPolicy to traffic bound for and traffic coming from an Ingress. Cilium's ingress config and CiliumNetworkPolicy \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Ingress and Gateway API traffic bound to backend services via Cilium passes through a per-node Envoy proxy. The per-node Envoy proxy has special code that allows it to interact with the eBPF policy engine, and do policy lookups on traffic. This allows Envoy to be a Network Policy enforcement point, both for Ingress (and Gateway API) traffic, and also for east-west traffic via GAMMA or L7 Traffic Management. 
However, for ingress config, there's also an additional step. Traffic that arrives at Envoy \*for Ingress or Gateway API\* is assigned the special ``ingress`` identity in Cilium's Policy engine. Traffic coming from outside the cluster is usually assigned the ``world`` identity (unless there are IP CIDR policies in the cluster). This means that there are actually \*two\* logical Policy enforcement points in Cilium Ingress - before traffic arrives at the ``ingress`` identity, and after, when it is about to exit the per-node Envoy. .. image:: /images/ingress-policy.png :align: center This means that, when applying Network Policy to a cluster, it's important to ensure that both steps are allowed, and that traffic is allowed from ``world`` to ``ingress``, and from ``ingress`` to identities in the cluster (like the ``productpage`` identity in the image above). Please see the :ref:`gs\_ingress\_and\_network\_policy` for more details for Ingress, although the same principles also apply for Gateway API. Source IP Visibility \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* .. Note:: By default, source IP visibility for Cilium ingress config, both Ingress and Gateway API, should \*just work\* on most installations. Read this section for more information on requirements and relevant settings. Having a backend be able to deduce what IP address the actual request came from is important for most applications. By default, Cilium's Envoy instances are configured to append the visible source address of incoming HTTP connections to the ``X-Forwarded-For`` header, using the usual rules. That is, by default Cilium sets the number of trusted hops to ``0``, indicating that Envoy should use the address the connection is opened from, rather than a value inside the ``X-Forwarded-For`` list. Increasing this count will have Envoy use the ``n`` th value from the list, counting from the right. 
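The trusted-hops lookup described above can be sketched in a few lines of Python (an illustrative model of Envoy's ``xff_num_trusted_hops`` behaviour, not Cilium code; all names are hypothetical):

```python
def trusted_client_address(peer_addr, xff_values, num_trusted_hops):
    # With 0 trusted hops, Envoy uses the address the connection
    # was opened from rather than anything in X-Forwarded-For.
    if num_trusted_hops == 0 or not xff_values:
        return peer_addr
    # Otherwise, take the n-th entry counting from the right,
    # clamping to the oldest entry if the list is too short.
    index = max(len(xff_values) - num_trusted_hops, 0)
    return xff_values[index]

xff = ["203.0.113.7", "10.0.0.5", "192.168.1.1"]
print(trusted_client_address("192.0.2.9", xff, 0))  # 192.0.2.9
print(trusted_client_address("192.0.2.9", xff, 1))  # 192.168.1.1
```

In this model, the ``X-Envoy-External-Address`` header would then carry whatever address the lookup returns.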
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/ingress-reference.rst
Envoy will also set the ``X-Envoy-External-Address`` header to the trusted client address, whatever that turns out to be, based on ``X-Forwarded-For``. .. Note:: Backends using Cilium ingress (whether via Ingress or Gateway API) should just see the ``X-Forwarded-For`` and ``X-Envoy-External-Address`` headers (which are handled transparently by many HTTP libraries). ``externalTrafficPolicy`` for Loadbalancer or NodePort Services =============================================================== Cilium's ingress support (both for Ingress and Gateway API) often uses a Loadbalancer or NodePort Service to expose the Envoy Daemonset. In these cases, the Service object has one field that is particularly relevant to Client IP visibility - the ``externalTrafficPolicy`` field. It has two relevant settings: - ``Local``: Nodes will only route traffic to Pods running on the local node, \*without masquerading the source IP\*. Because of this, in clusters that use ``kube-proxy``, this is the only way to ensure source IP visibility. Part of the contract for ``externalTrafficPolicy: Local`` is also that the node will open a port (the ``healthCheckNodePort``, automatically set by Kubernetes when ``externalTrafficPolicy: Local`` is set), and requests to ``http://:/healthz`` will return 200 on nodes that have local pods running, and non-200 on nodes that don't. Cilium implements this for general Loadbalancer Services, but it's a bit different for Cilium ingress config (both Ingress and Gateway API). - ``Cluster``: Nodes will route to all endpoints across the cluster evenly. This has a couple of other effects: Firstly, upstream loadbalancers will expect to be able to send traffic to any node and have it end up at a backend Pod, and the node \*may\* masquerade the source IP. This means that in many cases, ``externalTrafficPolicy: Cluster`` may mean that the backend pod does \*not\* see the source IP.
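The ``externalTrafficPolicy: Local`` health-check contract can be modelled as follows (a simplified sketch, not Cilium or Kubernetes code; names are illustrative): the ``healthCheckNodePort`` endpoint reports healthy only on nodes that host a local backend, which is how upstream load balancers learn which nodes to target.

```python
def health_check_status(node, endpoints_by_node):
    # 200 when the node has at least one local backend pod,
    # 503 otherwise, mirroring the healthCheckNodePort contract.
    return 200 if endpoints_by_node.get(node) else 503

endpoints = {"node-a": ["pod-1"], "node-b": []}
print(health_check_status("node-a", endpoints))  # 200
print(health_check_status("node-b", endpoints))  # 503
```

As described above, Cilium relaxes this for its own ingress config: every node answers the check, because Envoy on any node can accept the traffic.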
In Cilium's case, all ingress traffic bound for a Service that exposes Envoy is \*always\* going to the local node, and is \*always\* forwarded to Envoy using the Linux Kernel TPROXY function, which transparently forwards packets to the backend. This means that for Cilium ingress config, for both Ingress and Gateway API, things work a little differently in both ``externalTrafficPolicy`` cases. .. Note:: In \*both\* ``externalTrafficPolicy`` cases, traffic will arrive at any node in the cluster, and be forwarded to \*Envoy\* \*\*while keeping the source IP intact\*\*. Also, for any Services that exposes Cilium's Envoy, Cilium will ensure that when ``externalTrafficPolicy: Local`` is set, every node in the cluster will pass the ``healthCheckNodePort`` check, so that external load balancers will forward correctly. However, for Cilium's ingress config, both Ingress and Gateway API, \*\*it is not necessary\*\* to configure ``externalTrafficPolicy: Local`` to keep the source IP visible to the backend pod (via the ``X-Forwarded-For`` and ``X-Envoy-External-Address`` fields). TLS Passthrough and source IP visibility ======================================== Both Ingress and Gateway API support TLS Passthrough configuration (via annotation for Ingress, and the TLSRoute resource for Gateway API). This configuration allows multiple TLS Passthrough backends to share the same TLS port on a loadbalancer, with Envoy inspecting the Server Name Indicator (SNI) field of the TLS handshake, and using that to forward the TLS stream to a backend. However, this poses problems for source IP visibility, because Envoy is doing a TCP Proxy of the TLS stream. What happens is that the TLS traffic arrives at Envoy, terminating a TCP stream, Envoy inspects the client hello to find the SNI, picks a backend to forward to, then starts a new TCP stream and forwards the TLS traffic inside the downstream (outside) packets to the upstream (the backend). 
Because it's a new TCP stream, as far as the backends are concerned, the source IP is Envoy (which is often the Node IP, depending on your Cilium config). .. Note:: When doing TLS Passthrough, backends will see Cilium Envoy's IP address as the source of the forwarded TLS streams.
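The SNI-based passthrough decision can be sketched like this (an illustrative model; the route table and addresses are made up): the proxy inspects only the ``server_name`` from the ClientHello and relays the still-encrypted stream over a new TCP connection to the chosen backend.

```python
def select_backend(sni, routes, default=None):
    # The TLS payload is never decrypted; only the SNI from the
    # ClientHello is used to pick where to relay the byte stream.
    return routes.get(sni, default)

routes = {
    "app1.example.com": "10.0.0.10:443",
    "app2.example.com": "10.0.0.20:443",
}
print(select_backend("app1.example.com", routes))  # 10.0.0.10:443
print(select_backend("unknown.example.com", routes))  # None
```

Because the relay opens a fresh connection, the backend's peer address is the proxy's, matching the behaviour described above.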
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Default Deny Ingress Policy \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Let's apply a `CiliumClusterwideNetworkPolicy` to deny all traffic by default: .. literalinclude:: ../../../examples/kubernetes/servicemesh/policy/default-deny.yaml :language: yaml .. parsed-literal:: $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/policy/default-deny.yaml With this policy applied, the request to the ``/details`` endpoint will be denied for external and in-cluster traffic. .. code-block:: shell-session $ curl --fail -v http://"$HTTP\_INGRESS"/details/1 \* Trying 172.19.255.194:80... \* Connected to 172.19.255.194 (172.19.255.194) port 80 > GET /details/1 HTTP/1.1 > Host: 172.19.255.194 > User-Agent: curl/8.6.0 > Accept: \*/\* > < HTTP/1.1 403 Forbidden < content-length: 15 < content-type: text/plain < date: Sun, 17 Mar 2024 13:52:38 GMT < server: envoy \* The requested URL returned error: 403 \* Closing connection curl: (22) The requested URL returned error: 403 # Capture hubble flows in another terminal $ kubectl --namespace=kube-system exec -i -t cilium-xjl4x -- hubble observe -f --identity ingress Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), wait-for-node-init (init), clean-cilium-state (init), install-cni-binaries (init) Mar 17 13:56:00.709: 172.19.0.1:34104 (ingress) -> default/cilium-ingress-basic-ingress:80 (world) http-request DROPPED (HTTP/1.1 GET http://172.19.255.194/details/1) Mar 17 13:56:00.709: 172.19.0.1:34104 (ingress) <- default/cilium-ingress-basic-ingress:80 (world) http-response FORWARDED (HTTP/1.1 403 0ms (GET http://172.19.255.194/details/1)) Now let's check if in-cluster traffic to the same endpoint 
is denied: .. parsed-literal:: # The test-application.yaml contains a client pod with curl available $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/envoy/test-application.yaml $ kubectl exec -it deployment/client -- curl -s http://$HTTP\_INGRESS/details/1 Access denied The next step is to allow ingress traffic to the ``/details`` endpoint: .. literalinclude:: ../../../examples/kubernetes/servicemesh/policy/allow-ingress-cluster.yaml :language: yaml .. parsed-literal:: $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/policy/allow-ingress-cluster.yaml .. code-block:: shell-session $ curl -s --fail http://"$HTTP\_INGRESS"/details/1 {"id":1,"author":"William Shakespeare","year":1595,"type":"paperback","pages":200,"publisher":"PublisherA","language":"English","ISBN-10":"1234567890","ISBN-13":"123-1234567890"} $ kubectl exec -it deployment/client -- curl -s http://$HTTP\_INGRESS/details/1 {"id":1,"author":"William Shakespeare","year":1595,"type":"paperback","pages":200,"publisher":"PublisherA","language":"English","ISBN-10":"1234567890","ISBN-13":"123-1234567890"} A NetworkPolicy that selects ``reserved:ingress`` and allows egress to specific identities could also be used. In general, though, it is more reliable to allow all traffic from the ``reserved:ingress`` identity to all ``cluster`` identities, given that Cilium Ingress is part of the networking infrastructure.
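The two enforcement points described in this guide can be modelled as a pair of checks (a conceptual sketch, not how Cilium's policy engine is implemented; identity names follow the example above): traffic must be allowed from ``world`` to ``ingress``, and again from ``ingress`` to the backend identity.

```python
def request_allowed(allowed_pairs, backend):
    # Hop 1: world -> ingress; hop 2: ingress -> backend.
    # Both hops must be permitted for the request to get through.
    return (("world", "ingress") in allowed_pairs
            and ("ingress", backend) in allowed_pairs)

policy = {("world", "ingress"), ("ingress", "details")}
print(request_allowed(policy, "details"))      # True
print(request_allowed(policy, "productpage"))  # False
```

This is why a default-deny policy produces drops at the ``ingress`` identity until both hops are explicitly allowed.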
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/default-deny-ingress-policy.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_gs\_envoy\_load\_balancing: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Proxy Load Balancing for Kubernetes Services (beta) \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* This guide explains how to configure Proxy Load Balancing for Kubernetes services using Cilium, which is useful for use cases such as gRPC load-balancing. Once enabled, the traffic to a Kubernetes service will be redirected to a Cilium-managed Envoy proxy for load balancing. This feature is independent of the :ref:`gs\_ingress` feature. .. include:: ../../beta.rst Deploy Test Applications ======================== .. parsed-literal:: $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/envoy/test-application-proxy-loadbalancing.yaml The test workloads consist of: - one client deployment ``client`` - one service ``echo-service`` with two backend pods. View information about these pods: .. code-block:: shell-session $ kubectl get pods --show-labels -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS client-7dccb64ff6-t5gc7 1/1 Running 0 39s 10.244.0.125 minikube kind=client,name=client,pod-template-hash=7dccb64ff6 echo-service-744b6dd45b-487tn 2/2 Running 0 39s 10.244.0.71 minikube kind=echo,name=echo-service,other=echo,pod-template-hash=744b6dd45b echo-service-744b6dd45b-mdjc2 2/2 Running 0 39s 10.244.0.213 minikube kind=echo,name=echo-service,other=echo,pod-template-hash=744b6dd45b .. code-block:: shell-session $ CLIENT=$(kubectl get pods -l name=client -o jsonpath='{.items[0].metadata.name}') Start Observing Traffic with Hubble =================================== Enable Hubble in your cluster with the step mentioned in :ref:`hubble\_setup`. 
Start a second terminal, then enable hubble port forwarding and observe traffic for the service ``echo-service``: .. code-block:: shell-session $ kubectl -n kube-system port-forward deployment/hubble-relay 4245:4245 & $ hubble observe --service echo-service -f You should be able to get a response from both of the backend services individually from ``client``: .. code-block:: shell-session $ kubectl exec -it $CLIENT -- curl -v echo-service:8080/ Notice that Hubble shows all the flows between the client pod and the backend pods via ``echo-service`` service. :: Jan 16 04:28:10.690: default/client-7dccb64ff6-t5gc7 (ID:5152) <> default/echo-service:8080 (world) pre-xlate-fwd TRACED (TCP) Jan 16 04:28:10.690: default/echo-service:8080 (world) <> default/client-7dccb64ff6-t5gc7 (ID:5152) post-xlate-rev TRANSLATED (TCP) Add Proxy Load Balancing Annotations to the Services ==================================================== Adding a Layer 7 policy introduces the Envoy proxy into the path for this traffic. .. code-block:: shell-session $ kubectl annotate service echo-service service.cilium.io/lb-l7=enabled service/echo-service annotated Make a request to a backend service and observe the traffic with Hubble again: .. code-block:: shell-session $ kubectl exec -it $CLIENT -- curl -v echo-service:8080/ The request is now proxied through the Envoy proxy and then flows to the backend. 
:: Jan 16 04:32:27.737: default/client-7dccb64ff6-t5gc7:56462 (ID:5152) -> default/echo-service:8080 (world) to-proxy FORWARDED (TCP Flags: SYN) Jan 16 04:32:27.737: default/client-7dccb64ff6-t5gc7:56462 (ID:5152) <- default/echo-service:8080 (world) to-endpoint FORWARDED (TCP Flags: SYN, ACK) Jan 16 04:32:27.737: default/client-7dccb64ff6-t5gc7:56462 (ID:5152) -> default/echo-service:8080 (world) to-proxy FORWARDED (TCP Flags: ACK) Jan 16 04:32:27.737: default/client-7dccb64ff6-t5gc7:56462 (ID:5152) -> default/echo-service:8080 (world) to-proxy FORWARDED (TCP Flags: ACK, PSH) Jan 16 04:32:27.739: default/client-7dccb64ff6-t5gc7:56462 (ID:5152) <- default/echo-service:8080 (world) to-endpoint FORWARDED (TCP Flags: ACK, PSH) Jan 16 04:32:27.740: default/client-7dccb64ff6-t5gc7:56462 (ID:5152) -> default/echo-service:8080 (world) to-proxy FORWARDED (TCP Flags: ACK, FIN) Jan 16 04:32:27.740: default/client-7dccb64ff6-t5gc7:56462 (ID:5152) <- default/echo-service:8080 (world) to-endpoint FORWARDED (TCP Flags: ACK, FIN) Jan 16 04:32:27.740: default/client-7dccb64ff6-t5gc7:56462 (ID:5152) -> default/echo-service:8080 (world) to-proxy FORWARDED (TCP Flags: ACK) Supported Annotations ===================== .. list-table:: :widths: 40 25 25 25 :header-rows: 1 \* - Name - Description - Applicable Values - Default Value \* - ``service.cilium.io/lb-l7`` - Enable L7 Load balancing for kubernetes service. - ``enabled``, ``disabled`` - Defaults to ``disabled`` \* - ``service.cilium.io/lb-l7-algorithm`` - The LB algorithm to be used for services. - ``round\_robin``, ``least\_request``, ``random`` - Defaults to Helm option ``loadBalancer.l7.algorithm`` value.
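A round-robin picker like the ``round\_robin`` algorithm above can be sketched as follows (illustrative only; Envoy's actual load balancers also account for endpoint health and weights):

```python
import itertools

def round_robin(backends):
    # Yield backends in a repeating cycle, one per request.
    return itertools.cycle(backends)

picker = round_robin(["echo-service-pod-1", "echo-service-pod-2"])
print([next(picker) for _ in range(4)])
```

Each incoming request takes the next backend in the cycle, spreading load evenly across the Service's endpoints.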
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/envoy-load-balancing.rst
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_gs\_ingress\_path\_types: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Ingress Path Types Example \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* This example walks through how various path types interact and allows you to test that Cilium is working as it should. This example requires that Cilium Ingress is enabled, and ``kubectl`` and ``jq`` must be installed. Deploy the example app ====================== This deploys five copies of the ingress-conformance-echo tool, that will allow us to see what paths are forwarded to what backends. .. code-block:: shell-session $ # Apply the base definitions $ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/main/examples/kubernetes/servicemesh/ingress-path-types.yaml $ # Apply the Ingress $ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/main/examples/kubernetes/servicemesh/ingress-path-types-ingress.yaml Review the Ingress ================== Here is the Ingress used: .. literalinclude:: ../../../examples/kubernetes/servicemesh/ingress-path-types-ingress.yaml :language: yaml You can see here that there are five matches, one for each of our deployments. The Ingress deliberately has the rules in a different order to what they will be configured in Envoy. \* For Exact matches, we only match ``/exact`` and send that to the ``exactpath`` Service. \* For Prefix matches, we match ``/``, send that to the ``prefixpath`` Service, and match ``/prefix`` and send that to the ``prefixpath2`` Service. \* For ImplementationSpecific matches, we match ``/impl.+`` (a full regex), and send that to the ``implpath2`` Service. We also match ``/impl`` (without regex characters) and send that to the ``implpath`` Service. 
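The precedence just described can be sketched with a tiny matcher (a simplification of how Envoy orders these routes; the patterns mirror the example Ingress, and treating the non-regex ImplementationSpecific match as exact is an assumption of this sketch): exact matches are tried first, then regexes, then prefixes, with longer patterns ahead of shorter ones.

```python
import re

# Routes ordered by specificity, mirroring the example Ingress.
ROUTES = [
    ("/exact", "exact", "exactpath"),
    ("/impl.+", "regex", "implpath2"),
    ("/impl", "exact", "implpath"),
    ("/prefix", "prefix", "prefixpath2"),
    ("/", "prefix", "prefixpath"),
]

def match(path):
    for pattern, kind, backend in ROUTES:
        if kind == "exact" and path == pattern:
            return backend
        if kind == "regex" and re.fullmatch(pattern, path):
            return backend
        if kind == "prefix" and path.startswith(pattern):
            return backend
    return None

for p in ["/", "/exact", "/prefix", "/impl", "/implementation"]:
    print(p, "->", match(p))
```

Note that ``/impl`` does not match ``/impl.+`` (``.+`` requires at least one further character), which is why it falls through to the ``implpath`` Service.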
The intent here is to allow us to tell which rule we have matched by consulting the echoed response from the ingress-conformance-echo containers. Check that the Ingress has provisioned correctly ================================================ First, we need to check that the Ingress has been provisioned correctly. .. code-block:: shell-session $ export PATHTYPE\_IP=`kubectl get ingress multiple-path-types -o json | jq -r '.status.loadBalancer.ingress[0].ip'` $ curl -s -H "Host: pathtypes.example.com" http://$PATHTYPE\_IP/ | jq { "path": "/", "host": "pathtypes.example.com", "method": "GET", "proto": "HTTP/1.1", "headers": { "Accept": [ "\*/\*" ], "User-Agent": [ "curl/7.81.0" ], "X-Envoy-External-Address": [ "your-ip-here" ], "X-Forwarded-For": [ "your-ip-here" ], "X-Forwarded-Proto": [ "http" ], "X-Request-Id": [ "6bb145e8-addb-4fd5-a76f-b53d07bd1867" ] }, "namespace": "default", "ingress": "", "service": "", "pod": "prefixpath-7cb697f5cd-wvv7b" } Here you can see that the Ingress has been provisioned correctly and is responding to requests. Also, you can see that the ``/`` path has been served by the ``prefixpath`` deployment, which is as expected from the Ingress. Check that paths perform as expected ==================================== The following example uses ``jq`` to extract the first element out of the ``pod`` field, which is the name of the associated deployment. So, ``prefixpath-7cb697f5cd-wvv7b`` will return ``prefixpath``. ..
code-block:: shell-session $ echo Should show "prefixpath" Should show prefixpath $ curl -s -H "Host: pathtypes.example.com" http://$PATHTYPE\_IP/ | jq '.pod | split("-")[0]' "prefixpath" $ echo Should show "exactpath" Should show exactpath $ curl -s -H "Host: pathtypes.example.com" http://$PATHTYPE\_IP/exact | jq '.pod | split("-")[0]' "exactpath" $ echo Should show "prefixpath2" Should show prefixpath2 $ curl -s -H "Host: pathtypes.example.com" http://$PATHTYPE\_IP/prefix | jq '.pod | split("-")[0]' "prefixpath2" $ echo Should show "implpath" Should show implpath $ curl -s -H "Host: pathtypes.example.com" http://$PATHTYPE\_IP/impl | jq '.pod | split("-")[0]' "implpath" $ echo Should show "implpath2" Should show implpath2 $ curl -s -H "Host: pathtypes.example.com" http://$PATHTYPE\_IP/implementation | jq '.pod | split("-")[0]' "implpath2" (You can use the "Copy Commands" button above to do less copy-and-paste.)
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/path-types.rst
The most interesting example here is the last one, where we send ``/implementation`` to the ``implpath2`` Service, while ``/impl`` goes to ``implpath``. This is because ``/implementation`` matches the ``/impl.+`` regex, and ``/impl`` matches the ``/impl`` regex. If we now patch the Ingress object to use the regex ``/impl.\*`` instead (note the ``\*``, which matches \*\*zero or more\*\* characters of the type instead of the previous ``+``, which matches \*\*one or more\*\* characters), then we will get a different result for the last two checks: .. code-block:: shell-session $ echo Should show "implpath2" Should show implpath2 $ curl -s -H "Host: pathtypes.example.com" http://$PATHTYPE\_IP/impl | jq '.pod | split("-")[0]' "implpath2" $ echo Should show "implpath2" Should show implpath2 $ curl -s -H "Host: pathtypes.example.com" http://$PATHTYPE\_IP/implementation | jq '.pod | split("-")[0]' "implpath2" The request to ``/impl`` now matches the \*\*longer\*\* pattern ``/impl.\*``. The moral here is to be careful with your regular expressions! Clean up the example ==================== Finally, we clean up our example: .. code-block:: shell-session $ # Delete the base definitions $ kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/main/examples/kubernetes/servicemesh/ingress-path-types.yaml $ # Delete the Ingress $ kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/main/examples/kubernetes/servicemesh/ingress-path-types-ingress.yaml
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_gs\_envoy\_traffic\_management: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* L7 Load Balancing and URL re-writing \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Cilium Service Mesh defines a ``CiliumEnvoyConfig`` CRD which allows users to set the configuration of the Envoy component built into Cilium agents. This example sets up an Envoy listener which load balances requests between two backend services. Deploy Test Applications ======================== .. parsed-literal:: $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/envoy/test-application.yaml The test workloads consist of: - two client deployments, ``client`` and ``client2`` - two services, ``echo-service-1`` and ``echo-service-2`` View information about these pods: .. code-block:: shell-session $ kubectl get pods --show-labels -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS client-7568bc7f86-dlfqr 1/1 Running 0 100s 10.0.1.8 minikube-m02 kind=client,name=client,pod-template-hash=7568bc7f86 client2-8b4c4fd75-xn25d 1/1 Running 0 100s 10.0.1.24 minikube-m02 kind=client,name=client2,other=client,pod-template-hash=8b4c4fd75 echo-service-1-97748874-4sztx 2/2 Running 0 100s 10.0.1.86 minikube-m02 kind=echo,name=echo-service-1,other=echo,pod-template-hash=97748874 echo-service-2-76c584c4bf-p4z4w 2/2 Running 0 100s 10.0.1.16 minikube-m02 kind=echo,name=echo-service-2,pod-template-hash=76c584c4bf You can see that - Only ``client2`` is labeled with ``other=client`` - we will use this in a ``CiliumNetworkPolicy`` definition later in this example. Make an environment variable with the pod ID for ``client2``: .. 
code-block:: shell-session $ export CLIENT2=$(kubectl get pods -l name=client2 -o jsonpath='{.items[0].metadata.name}') We are going to use Envoy configuration to load-balance requests between these two services ``echo-service-1`` and ``echo-service-2``. Start Observing Traffic with Hubble =================================== Enable Hubble in your cluster with the step mentioned in :ref:`hubble\_setup`. Start a second terminal, then enable hubble port forwarding and observe traffic from the ``client2`` pod: .. code-block:: shell-session $ kubectl -n kube-system port-forward deployment/hubble-relay 4245:4245 & $ hubble observe --from-pod $CLIENT2 -f You should be able to get a response from both of the backend services individually from ``client2``: .. code-block:: shell-session $ kubectl exec -it $CLIENT2 -- curl -v echo-service-1:8080/ $ kubectl exec -it $CLIENT2 -- curl -v echo-service-2:8080/ Notice that Hubble shows all the flows between these pods as being either ``to/from-stack``, ``to/from-overlay`` or ``to/from-endpoint`` - there is no traffic marked as flowing to or from the proxy at this stage. (This assumes you don't already have any Layer 7 policies in place affecting this traffic.) Verify that you get a 404 error response if you curl to the non-existent URL ``/foo`` on these services: .. code-block:: shell-session $ kubectl exec -it $CLIENT2 -- curl -v echo-service-1:8080/foo $ kubectl exec -it $CLIENT2 -- curl -v echo-service-2:8080/foo Add Layer 7 Policy ================== Adding a Layer 7 policy introduces the Envoy proxy into the path for this traffic. .. parsed-literal:: $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/envoy/client-egress-l7-http.yaml $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/envoy/client-egress-only-dns.yaml Make a request to a backend service (either will do): .. 
code-block:: shell-session $ kubectl exec -it $CLIENT2 -- curl -v echo-service-1:8080/ $ kubectl exec -it $CLIENT2 -- curl -v echo-service-2:8080/foo Adding a Layer 7 policy enables Layer 7 visibility. Notice that the Hubble output now includes flows ``to-proxy``, and also shows the HTTP protocol information at level 7 (for example ``HTTP/1.1 GET http://echo-service-1:8080/``) .. Note:: Note that Envoy may `sanitize some headers `\_. Instead, you can make Envoy trust previous hops and prevent Envoy from rewriting some of these HTTP headers. Trust previous hops by setting Helm values ``envoy.xffNumTrustedHopsL7PolicyIngress`` and ``envoy.xffNumTrustedHopsL7PolicyEgress`` to the number of hops to trust. For an egress policy the previous hop is the source pod, whereas for an ingress policy it can be either the source pod, the "egress policy transparent proxy", Cilium Ingress Controller, Cilium Gateway API, or any other Ingress proxy or infrastructure. Depending on your environment, you should consider the security implications of trusting previous hops.
Test Layer 7 Policy Enforcement
===============================

The policy only permits GET requests to the ``/`` path, so you will see requests to any other URL being dropped. For example, try:

.. code-block:: shell-session

   $ kubectl exec -it $CLIENT2 -- curl -v echo-service-1:8080/foo

The Hubble output will show the HTTP request being dropped, like this:

::

   Jul  7 08:40:15.076: default/client2-8b4c4fd75-6pgvl:58586 -> default/echo-service-1-97748874-n7758:8080 http-request DROPPED (HTTP/1.1 GET http://echo-service-1:8080/foo)

And the curl should show a ``403 Forbidden`` response.

Add Envoy load-balancing and URL re-writing
===========================================

Apply the ``envoy-traffic-management-test.yaml`` file, which defines a ``CiliumClusterwideEnvoyConfig``.

.. parsed-literal::

   $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/envoy/envoy-traffic-management-test.yaml

.. include:: warning.rst

This configuration listens for traffic intended for either of the two ``echo-`` services and:

- load-balances 50/50 between the two backend ``echo-`` services
- rewrites the path ``/foo`` to ``/``

A request to ``/foo`` should now succeed, because of the path re-writing:

.. code-block:: shell-session

   $ kubectl exec -it $CLIENT2 -- curl -v echo-service-1:8080/foo

But the network policy still prevents requests to any path that is not rewritten to ``/``. For example, this request will result in a packet being dropped and a 403 Forbidden response code:

.. code-block:: shell-session

   $ kubectl exec -it $CLIENT2 -- curl -v echo-service-1:8080/bar

Output from ``hubble observe``:

::

   Jul  7 08:43:47.165: default/client2-8b4c4fd75-6pgvl:33376 -> default/echo-service-2-76c584c4bf-874dm:8080 http-request DROPPED (HTTP/1.1 GET http://echo-service-1:8080/bar)

Try making several requests to one backend service. You should see in the Hubble output that, approximately half the time, they are handled by the other backend. Example:

::

   Jul  7 08:45:25.807: default/client2-8b4c4fd75-6pgvl:37388 -> kube-system/coredns-64897985d-8jhhn:53 L3-L4 REDIRECTED (UDP)
   Jul  7 08:45:25.807: default/client2-8b4c4fd75-6pgvl:37388 -> kube-system/coredns-64897985d-8jhhn:53 to-proxy FORWARDED (UDP)
   Jul  7 08:45:25.807: default/client2-8b4c4fd75-6pgvl:37388 -> kube-system/coredns-64897985d-8jhhn:53 dns-request FORWARDED (DNS Query echo-service-1.default.svc.cluster.local. AAAA)
   Jul  7 08:45:25.807: default/client2-8b4c4fd75-6pgvl:37388 -> kube-system/coredns-64897985d-8jhhn:53 dns-request FORWARDED (DNS Query echo-service-1.default.svc.cluster.local. A)
   Jul  7 08:45:25.808: default/client2-8b4c4fd75-6pgvl:57942 -> default/echo-service-1:8080 none REDIRECTED (TCP Flags: SYN)
   Jul  7 08:45:25.808: default/client2-8b4c4fd75-6pgvl:57942 -> default/echo-service-1:8080 to-proxy FORWARDED (TCP Flags: SYN)
   Jul  7 08:45:25.808: default/client2-8b4c4fd75-6pgvl:57942 -> default/echo-service-1:8080 to-proxy FORWARDED (TCP Flags: ACK)
   Jul  7 08:45:25.808: default/client2-8b4c4fd75-6pgvl:57942 -> default/echo-service-1:8080 to-proxy FORWARDED (TCP Flags: ACK, PSH)
   Jul  7 08:45:25.809: default/client2-8b4c4fd75-6pgvl:57942 -> default/echo-service-2-76c584c4bf-874dm:8080 L3-L4 REDIRECTED (TCP Flags: SYN)
   Jul  7 08:45:25.809: default/client2-8b4c4fd75-6pgvl:57942 -> default/echo-service-2-76c584c4bf-874dm:8080 to-endpoint FORWARDED (TCP Flags: SYN)
   Jul  7 08:45:25.809: default/client2-8b4c4fd75-6pgvl:57942 -> default/echo-service-2-76c584c4bf-874dm:8080 to-endpoint FORWARDED (TCP Flags: ACK)
   Jul  7 08:45:25.809: default/client2-8b4c4fd75-6pgvl:57942 -> default/echo-service-2-76c584c4bf-874dm:8080 to-endpoint FORWARDED (TCP Flags: ACK, PSH)
   Jul  7 08:45:25.809: default/client2-8b4c4fd75-6pgvl:57942 -> default/echo-service-2-76c584c4bf-874dm:8080 http-request FORWARDED (HTTP/1.1 GET http://echo-service-1:8080/)
   Jul  7 08:45:25.811: default/client2-8b4c4fd75-6pgvl:57942 -> default/echo-service-1:8080 to-proxy FORWARDED (TCP Flags: ACK, FIN)
   Jul  7 08:45:25.811: default/client2-8b4c4fd75-6pgvl:57942 -> default/echo-service-1:8080 to-proxy FORWARDED (TCP Flags: ACK)
   Jul  7 08:45:30.811: default/client2-8b4c4fd75-6pgvl:57942 -> default/echo-service-2-76c584c4bf-874dm:8080 to-endpoint FORWARDED (TCP Flags: ACK, FIN)
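To sanity-check the 50/50 split from a capture like the one above, you can tally the destination pods of the ``http-request FORWARDED`` flows. The following is an illustrative sketch, not part of Cilium or Hubble; the sample lines are abbreviated from the output above:

```python
import re
from collections import Counter

# Abbreviated hubble observe lines; only the http-request flows matter here.
flows = [
    "default/client2:57942 -> default/echo-service-2-76c584c4bf-874dm:8080 http-request FORWARDED (HTTP/1.1 GET http://echo-service-1:8080/)",
    "default/client2:57944 -> default/echo-service-1-97748874-n7758:8080 http-request FORWARDED (HTTP/1.1 GET http://echo-service-1:8080/)",
]

def backend_counts(lines):
    """Count which backend service's pod handled each forwarded HTTP request."""
    counts = Counter()
    for line in lines:
        m = re.search(r"-> default/(echo-service-\d)\S* http-request FORWARDED", line)
        if m:
            counts[m.group(1)] += 1
    return counts

print(backend_counts(flows))
```

Over many requests, the two counts should converge toward the configured 50/50 weighting.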
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io

.. \_gs\_ingress:

\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*
Kubernetes Ingress Support
\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*

Cilium uses the standard `Kubernetes Ingress`\_ resource definition, with an ``ingressClassName`` of ``cilium``. This can be used for path-based routing and for TLS termination. For backwards compatibility, the ``kubernetes.io/ingress.class`` annotation with a value of ``cilium`` is also supported.

.. Note::

   The ingress controller creates a Service of LoadBalancer type, so your environment will need to support this.

Cilium allows you to specify the load balancer mode for the Ingress resource:

- ``dedicated``: The Ingress controller will create a dedicated loadbalancer for the Ingress.
- ``shared``: The Ingress controller will use a shared loadbalancer for all Ingress resources.

Each load balancer mode has its own benefits and drawbacks. The shared mode saves resources by sharing a single LoadBalancer config across all Ingress resources in the cluster, while the dedicated mode can help to avoid potential conflicts (e.g. path prefix) between resources.

.. Note::

   It is possible to change the load balancer mode for an Ingress resource. When the mode is changed, active connections to backends of the Ingress may be terminated during the reconfiguration due to a new load balancer IP address being assigned to the Ingress resource.

This is a step-by-step guide on how to enable the Ingress Controller in an existing K8s cluster with Cilium installed.

.. \_Kubernetes Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/

Prerequisites
#############

\* Cilium must be configured with the kube-proxy replacement, using ``kubeProxyReplacement=true``. For more information, see :ref:`kube-proxy replacement `.
\* Cilium must be configured with the L7 proxy enabled using ``l7Proxy=true`` (enabled by default).
\* By default, the Ingress controller creates a Service of LoadBalancer type, so your environment will need to support this. Alternatively, you can change this to NodePort or, since Cilium 1.16+, directly expose the Cilium L7 proxy on the :ref:`host network`.

.. include:: installation.rst

.. include:: ingress-reference.rst

Ingress Path Types and Precedence
\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*

The Ingress specification supports three types of paths:

\* \*\*Exact\*\* - match the given path exactly.
\* \*\*Prefix\*\* - match the URL path prefix split by ``/``. The last path segment must match the whole segment: if you configure a Prefix path of ``/foo/bar``, ``/foo/bar/baz`` will match, but ``/foo/barbaz`` will not.
\* \*\*ImplementationSpecific\*\* - interpretation of the Path is up to the IngressClass. \*\*In Cilium's case, we define ImplementationSpecific to be "Regex"\*\*, so Cilium will interpret any given path as a regular expression and program Envoy accordingly. Notably, some other implementations have ImplementationSpecific mean "Prefix", and in those cases, Cilium will treat the paths differently. (Since a path like ``/foo/bar`` contains no regex characters, when it is configured in Envoy as a regex, it will function as an ``Exact`` match instead.)

When multiple path types are configured on an Ingress object, Cilium will configure Envoy with the matches in the following order:

#. Exact
#. ImplementationSpecific (that is, regular expression)
#. Prefix
#. The ``/`` Prefix match has special handling and always goes last.

Within each of these path types, the paths are sorted in decreasing order of string length. If you do use ImplementationSpecific regex support, be careful with using the ``\*`` operator, since it will increase the length of the regex, but may match another, shorter option.
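The ordering rules above can be sketched as a sort key. This is a hypothetical helper mirroring the described behavior, not Cilium's actual implementation:

```python
# Path-type precedence as described: Exact, then ImplementationSpecific
# (regex), then Prefix, with the "/" Prefix match special-cased to go last.
TYPE_ORDER = {"Exact": 0, "ImplementationSpecific": 1, "Prefix": 2}

def sort_paths(paths):
    """Sort (path, pathType) pairs into the order Cilium programs Envoy
    matches: by path type, then by decreasing path length within a type."""
    def key(entry):
        path, ptype = entry
        if ptype == "Prefix" and path == "/":
            return (3, 0)  # the "/" Prefix match always goes last
        return (TYPE_ORDER[ptype], -len(path))
    return sorted(paths, key=key)

paths = [("/", "Prefix"), ("/impl", "ImplementationSpecific"),
         ("/impl.*", "ImplementationSpecific"), ("/exact", "Exact"),
         ("/prefix/long", "Prefix")]
print(sort_paths(paths))
```

Note how ``/impl.*`` sorts ahead of ``/impl`` because it is longer, which is exactly the ``\*``-operator pitfall described in the following paragraph.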
For example, if you have two ImplementationSpecific paths, ``/impl``, and ``/impl.\*``, the second will be sorted ahead of the first in the generated config. But because ``\*`` is in use, the ``/impl`` match will never be hit, as any request to that path will match the ``/impl.\*`` path first. See the
:ref:`Ingress Path Types ` for more information.

Supported Ingress Annotations
\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*

.. list-table::
   :header-rows: 1

   \* - Name
     - Description
     - Default Value
   \* - ``ingress.cilium.io/loadbalancer-mode``
     - | The loadbalancer mode for the ingress.
       | Allows a per ingress override
       | of the default set in the Helm value
       | ``ingressController.loadbalancerMode``.
       | Applicable values are ``dedicated`` and
       | ``shared``.
     - | ``dedicated``
       | (from Helm chart)
   \* - ``ingress.cilium.io/loadbalancer-class``
     - | The loadbalancer class for the ingress.
       | Only applicable when ``loadbalancer-mode`` is set to ``dedicated``.
     - unspecified
   \* - ``ingress.cilium.io/service-type``
     - | The Service type for dedicated Ingress.
       | Applicable values are ``LoadBalancer``
       | and ``NodePort``.
     - ``LoadBalancer``
   \* - ``ingress.cilium.io/service-external-traffic-policy``
     - | The Service externalTrafficPolicy for dedicated
       | Ingress. Applicable values are ``Cluster``
       | and ``Local``.
     - ``Cluster``
   \* - ``ingress.cilium.io/insecure-node-port``
     - | The NodePort to use for the HTTP Ingress.
       | Applicable only if ``ingress.cilium.io/service-type``
       | is ``NodePort``. If unspecified, a random
       | NodePort will be allocated by kubernetes.
     - unspecified
   \* - ``ingress.cilium.io/secure-node-port``
     - | The NodePort to use for the HTTPS Ingress.
       | Applicable only if ``ingress.cilium.io/service-type``
       | is ``NodePort``. If unspecified, a random
       | NodePort will be allocated by kubernetes.
     - unspecified
   \* - ``ingress.cilium.io/host-listener-port``
     - | The port to use for the Envoy listener on the host
       | network. Applicable and mandatory only for
       | dedicated Ingress and if :ref:`host network mode` is
       | enabled.
     - ``8080``
   \* - ``ingress.cilium.io/tls-passthrough``
     - | Enable TLS Passthrough mode for this Ingress.
       | Applicable values are ``enabled`` and ``disabled``,
       | although boolean-style values will also be
       | accepted.
       |
       | Note that some conditions apply to TLS
       | Passthrough Ingresses, due to how
       | TLS Passthrough works:
       | \* A ``host`` field must be set in the Ingress
       | \* Default backends are ignored
       | \* Rules with paths other than ``/`` are ignored
       | If all the rules in an Ingress are ignored for
       | these reasons, no Envoy config will be generated
       | and the Ingress will have no effect.
       |
       | Note that this annotation is analogous to
       | the ``ssl-passthrough`` on other Ingress
       | controllers.
     - ``disabled``
   \* - ``ingress.cilium.io/force-https``
     - | Enable enforced HTTPS redirects for this Ingress.
       | Applicable values are ``enabled`` and ``disabled``,
       | although boolean-style values will also be
       | accepted.
       |
       | Note that if the annotation is not present, this
       | behavior will be controlled by the
       | ``enforce-ingress-https`` configuration
       | file setting (or ``ingressController.enforceHttps``
       | in Helm).
       |
       | Any host with TLS config will have redirects to
       | HTTPS configured for each match specified in the
       | Ingress.
     - unspecified
   \* - ``ingress.cilium.io/request-timeout``
     - | Request timeout in seconds for Ingress backend HTTP requests.
       |
       | Note that if the annotation is present, it will override
       | any value set by the ``ingress-default-request-timeout`` operator flag.
       | If neither is set, defaults to ``0`` (no limit)
     - ``0``

Additionally, cloud-provider specific annotations for the LoadBalancer Service are supported. By default, annotations with values beginning with:

\* ``lbipam.cilium.io``
\* ``nodeipam.cilium.io``
\* ``service.beta.kubernetes.io``
\* ``service.kubernetes.io``
\* ``cloud.google.com``

will be copied from an Ingress object to the generated LoadBalancer Service objects. This setting is controlled by the Cilium Operator's ``ingress-lb-annotation-prefixes`` config flag, and can be configured in Cilium's Helm ``values.yaml`` using the ``ingressController.ingressLBAnnotationPrefixes`` setting. Please refer to the
`Kubernetes documentation `\_ for more details.

.. \_gs\_ingress\_host\_network\_mode:

Host network mode
#################

.. note::

   Supported since Cilium 1.16+

Host network mode allows you to expose the Cilium ingress controller (Envoy listener) directly on the host network. This is useful in cases where a LoadBalancer Service is unavailable, such as in development environments or environments with cluster-external loadbalancers.

.. note::

   \* Enabling the Cilium ingress controller host network mode automatically disables the LoadBalancer/NodePort type Service mode. They are mutually exclusive.
   \* The listener is exposed on all interfaces (``0.0.0.0`` for IPv4 and/or ``::`` for IPv6).

Host network mode can be enabled via Helm:

.. code-block:: yaml

   ingressController:
     enabled: true
     hostNetwork:
       enabled: true

Once enabled, host network ports can be specified with the following methods:

\* Shared Ingress: Globally via Helm flags

  \* ``ingressController.hostNetwork.sharedListenerPort``: Host network port to expose the Cilium ingress controller Envoy listener. The default port is ``8080``. If you change it, you should choose a port number higher than ``1023`` (see `Bind to privileged port`\_).

\* Dedicated Ingress: Per ``Ingress`` resource via annotations

  \* ``ingress.cilium.io/host-listener-port``: Host network port to expose the Cilium ingress controller Envoy listener. The default port is ``8080`` but it can only be used for a single ``Ingress`` resource as it needs to be unique per ``Ingress`` resource. You should choose a port higher than ``1023`` (see `Bind to privileged port`\_). This annotation is mandatory if the global Cilium ingress controller mode is configured to ``dedicated`` (``ingressController.loadbalancerMode``) or the ingress resource sets the ``ingress.cilium.io/loadbalancer-mode`` annotation to ``dedicated`` and multiple ``Ingress`` resources are deployed.

The default behavior regarding shared or dedicated ingress can be configured via ``ingressController.loadbalancerMode``.

.. warning::

   Be aware that misconfiguration might result in port clashes. Configure unique ports that are still available on all Cilium Nodes where Cilium ingress controller Envoy listeners are exposed.

Bind to privileged port
\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*

By default, the Cilium L7 Envoy process does not have any Linux capabilities out-of-the-box and is therefore not allowed to listen on privileged ports. If you choose a port equal to or lower than ``1023``, ensure that the Helm value ``envoy.securityContext.capabilities.keepCapNetBindService=true`` is configured and add the capability ``NET\_BIND\_SERVICE`` to the respective :ref:`Cilium Envoy container via Helm values`:

\* Standalone DaemonSet mode: ``envoy.securityContext.capabilities.envoy``
\* Embedded mode: ``securityContext.capabilities.ciliumAgent``

Configure the following Helm values to allow privileged port bindings in host network mode:

.. tabs::

   .. group-tab:: Standalone DaemonSet mode

      .. code-block:: yaml

         ingressController:
           enabled: true
           hostNetwork:
             enabled: true
         envoy:
           enabled: true
           securityContext:
             capabilities:
               keepCapNetBindService: true
               envoy:
                 # Add NET\_BIND\_SERVICE to the list (keep the others!)
                 - NET\_BIND\_SERVICE

   .. group-tab:: Embedded mode

      .. code-block:: yaml

         ingressController:
           enabled: true
           hostNetwork:
             enabled: true
         envoy:
           securityContext:
             capabilities:
               keepCapNetBindService: true
         securityContext:
           capabilities:
             ciliumAgent:
               # Add NET\_BIND\_SERVICE to the list (keep the others!)
               - NET\_BIND\_SERVICE

Deploy Cilium Ingress listeners on subset of nodes
\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*

The Cilium ingress controller Envoy listener can be exposed on a specific subset of nodes. This only works in combination with the host network mode and can be configured via a node label selector in the Helm values:

.. code-block:: yaml

   ingressController:
     enabled: true
     hostNetwork:
       enabled: true
       nodes:
         matchLabels:
           role: infra
           component: ingress

This will deploy the Ingress Controller Envoy listener only on the Cilium Nodes matching the configured labels. An empty selector selects all nodes and continues to expose the functionality on all Cilium nodes.

Examples
########
Please refer to one of the below examples on how to use and leverage Cilium's Ingress features:

.. toctree::
   :maxdepth: 1
   :glob:

   http
   ingress-and-network-policy
   path-types
   grpc
   tls-termination
   tls-default-certificate
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io

\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*
Default certificate for Ingresses
\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*

Cilium can use a default certificate for Ingresses without ``.spec.tls[].secretName`` set. It is still necessary to have ``.spec.tls[].hosts`` defined.

Prerequisites
#############

\* Cilium must be configured with Kubernetes Ingress Support. Please refer to :ref:`Kubernetes Ingress Support ` for more details.

Installation
############

.. tabs::

   .. group-tab:: Helm

      The default certificate for Ingresses can be enabled by setting the Helm values ``ingressController.defaultSecretNamespace`` and ``ingressController.defaultSecretName`` to the namespace and name of the Secret containing the default certificate. Please refer to :ref:`k8s\_install\_helm` for a fresh installation.

      .. cilium-helm-upgrade::
         :namespace: kube-system
         :extra-args: --reuse-values
         :set: ingressController.defaultSecretNamespace=kube-system ingressController.defaultSecretName=default-cert
         :post-commands: kubectl -n kube-system rollout restart deployment/cilium-operator kubectl -n kube-system rollout restart ds/cilium

   .. group-tab:: Cilium CLI

      .. include:: ../../installation/cli-download.rst

      Cilium Ingress Controller can be enabled with the following command:

      .. parsed-literal::

         $ cilium install |CHART\_VERSION| \
             --set kubeProxyReplacement=true \
             --set ingressController.enabled=true \
             --set ingressController.defaultSecretNamespace=kube-system \
             --set ingressController.defaultSecretName=default-cert
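For illustration, an Ingress relying on the default certificate omits ``secretName`` but still lists ``hosts`` under ``tls``. The resource, host, and backend names below are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress          # hypothetical name
spec:
  ingressClassName: cilium
  tls:
  - hosts:                       # required: hosts must be defined
    - example.com
    # no secretName: the default certificate configured above is used
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service   # hypothetical backend Service
            port:
              number: 80
```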
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io

.. \_gs\_envoy\_custom\_listener:

\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*
L7 Path Translation
\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*

This example replicates the Prometheus metrics listener which is already available via the command line option ``--proxy-prometheus-port``. The point of this example is not to add new functionality, but to show how a feature that previously required Cilium Agent code changes can be implemented with the new Cilium Envoy Config CRD.

Apply Example CRD
=================

This example adds a new Envoy listener ``envoy-prometheus-metrics-listener`` on the standard Prometheus port (e.g. ``9090``) to each Cilium node, translating the default Prometheus metrics path ``/metrics`` to Envoy's Prometheus metrics path ``/stats/prometheus``.

Apply this Cilium Envoy Config CRD:

.. parsed-literal::

   $ kubectl apply -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/envoy/envoy-prometheus-metrics-listener.yaml

This version of the ``CiliumClusterwideEnvoyConfig`` CRD is cluster-scoped (i.e., not namespaced), so the name needs to be unique in the cluster, unless you want to replace a CRD with a new one.

.. include:: warning.rst

Check the Cilium agent logs for any errors or warnings:

.. code-block:: shell-session

   $ kubectl logs -n kube-system ds/cilium | grep -E "level=(error|warning)"

Test the Listener Port
======================

Test that the new port is responding to the metrics requests:

.. code-block:: shell-session

   $ curl http://:9090/metrics

Where ```` is the IP address of one of your k8s cluster nodes.

Clean-up
========

Remove the prometheus listener with:

.. parsed-literal::

   $ kubectl delete -f \ |SCM\_WEB|\/examples/kubernetes/servicemesh/envoy/envoy-prometheus-metrics-listener.yaml
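The path translation this listener performs is conceptually a simple rewrite. A minimal sketch of the mapping, for illustration only (not Envoy's implementation):

```python
def translate_path(path):
    """Map the public Prometheus metrics path to Envoy's internal
    Prometheus stats path, as the example listener does."""
    if path == "/metrics":
        return "/stats/prometheus"
    return path  # all other paths pass through unchanged

print(translate_path("/metrics"))  # /stats/prometheus
```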
.. only:: not (epub or latex or html)

   WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io

.. \_gs\_gateway\_http\_migration:

\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*
HTTP Migration Example
\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*

This example shows you how to migrate an existing Ingress configuration to the equivalent Gateway API resource. The Cilium :ref:`gs\_ingress\_http` serves as the starting Ingress configuration. The same approach applies to other controllers, though each Ingress controller configuration varies.

The example Ingress configuration routes traffic to backend services from the ``bookinfo`` demo microservices app from the Istio project.

Review Ingress Configuration
============================

You can find the example Ingress definition in ``basic-ingress.yaml``.

.. literalinclude:: ../../../../examples/kubernetes/servicemesh/basic-ingress.yaml
   :language: yaml

This example listens for traffic on port 80, routes requests for the path ``/details`` to the ``details`` service, and ``/`` to the ``productpage`` service.

Create Equivalent Gateway Configuration
=======================================

To create the equivalent Gateway configuration, consider the following:

- Entry Point

  The entry point is a combination of an IP address and port through which external clients access the data plane.

  .. tabs::

     .. group-tab:: Ingress

        Every Ingress resource has two implicit entry points -- one for HTTP and the other for HTTPS traffic. An Ingress controller provides the entry points. Typically, entry points are either shared by all Ingress resources, or every Ingress resource has dedicated entry points.

        .. code-block:: yaml

           apiVersion: networking.k8s.io/v1
           kind: Ingress
           spec:
             ingressClassName: cilium

     .. group-tab:: Gateway API

        In the Gateway API, entry points must be explicitly defined in a Gateway resource. For example, for the data plane to handle HTTP traffic on port 80, you must define a listener for that traffic. Typically, a Gateway implementation provides a dedicated data plane for each Gateway resource.

        .. code-block:: yaml

           apiVersion: gateway.networking.k8s.io/v1beta1
           kind: Gateway
           metadata:
             name: cilium-gateway
           spec:
             gatewayClassName: cilium
             listeners:
             - name: http
               port: 80
               protocol: HTTP

- Routing Rules

  When using Ingress or Gateway API, routing rules must be defined to attach applications to those entry points.

  .. tabs::

     .. group-tab:: Ingress

        The path-based routing rules are configured in the Ingress resource. In the Ingress resource, each hostname has separate routing rules:

        .. code-block:: yaml

           apiVersion: networking.k8s.io/v1
           kind: Ingress
           [...]
           rules:
           - http:
               paths:
               - backend:
                   service:
                     name: details
                     port:
                       number: 9080
                 path: /details
                 pathType: Prefix
               - backend:
                   service:
                     name: productpage
                     port:
                       number: 9080
                 path: /
                 pathType: Prefix

     .. group-tab:: Gateway API

        The routing rules are configured in the HTTPRoute.

        .. code-block:: yaml

           ---
           apiVersion: gateway.networking.k8s.io/v1beta1
           kind: HTTPRoute
           spec:
             parentRefs:
             - name: cilium-gateway
             rules:
             - matches:
               - path:
                   type: PathPrefix
                   value: /
               backendRefs:
               - name: productpage
                 port: 9080
             - matches:
               - path:
                   type: PathPrefix
                   value: /details
               backendRefs:
               - name: details
                 port: 9080

- Selecting Data Plane to Attach to

  Both Ingress and Gateway API resources must be explicitly attached to a Dataplane.

  .. tabs::

     .. group-tab:: Ingress

        An Ingress resource must specify a class that selects which Ingress controller to use.

        .. code-block:: yaml

           apiVersion: networking.k8s.io/v1
           kind: Ingress
           spec:
             ingressClassName: cilium

     .. group-tab:: Gateway API

        A Gateway resource must also specify a class: in this example, it is always the ``cilium`` class. An HTTPRoute must specify which Gateway (or Gateways) to attach to via a ``parentRef``.

        .. code-block:: yaml

           apiVersion: gateway.networking.k8s.io/v1beta1
           kind: Gateway
           metadata:
             name: cilium-gateway
             namespace: default
           spec:
             gatewayClassName: cilium
           [...]
           ---
           apiVersion: gateway.networking.k8s.io/v1beta1
           kind: HTTPRoute
           spec:
             parentRefs:
             - name: cilium-gateway

Review Equivalent Gateway Configuration
=======================================

You can find the equivalent final Gateway and HTTPRoute definition in ``http-migration.yaml``.

.. literalinclude:: ../../../../examples/kubernetes/gateway/http-migration.yaml
   :language: yaml

The preceding example creates a Gateway named ``cilium-gateway`` that listens on port 80 for HTTP traffic. Two routes are defined, one for ``/details`` to the ``details`` service, and one for ``/`` to
the ``productpage`` service. Deploy the resources and verify that the HTTP requests are routed successfully to the services.

For more information, consult the Gateway API :ref:`gs\_gateway\_http`.
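The two HTTPRoute rules rely on ``PathPrefix`` matching, where a prefix matches on ``/``-separated segment boundaries and the most specific matching rule wins. A rough sketch of that selection logic, for illustration only (not the Cilium or Gateway API implementation):

```python
def prefix_matches(prefix, path):
    """PathPrefix matches whole '/'-separated path segments."""
    if prefix == "/":
        return True
    return path == prefix or path.startswith(prefix + "/")

def pick_backend(path, rules):
    """rules: list of (prefix, backend); the longest matching prefix wins."""
    matching = [(p, b) for p, b in rules if prefix_matches(p, path)]
    if not matching:
        return None
    return max(matching, key=lambda r: len(r[0]))[1]

# The two rules from the migrated HTTPRoute above.
rules = [("/", "productpage"), ("/details", "details")]
print(pick_backend("/details/123", rules))  # details
print(pick_backend("/reviews", rules))      # productpage
```

Note that ``/detailsfoo`` would fall through to the ``/`` rule, since prefixes only match whole segments.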
.. only:: not (epub or latex or html) WARNING: You are looking at unreleased Cilium documentation. Please use the official rendered version released here: https://docs.cilium.io .. \_gs\_ingress-to-gateway: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* Migrating from Ingress to Gateway \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\* The Gateway API is not only the long-term successor to the Ingress API, it also supports use cases beyond HTTP/HTTPS-based applications. This section highlights some of the limitations with Ingress, explains some of the benefits of the Gateway API, and describes some of the options available with migrating from Ingress API to Gateway API. Ingress API Limitations ####################### Development of the Gateway API stemmed from the realization that the Kubernetes Ingress API has some limitations. - Limited support for advanced routing The Ingress API supports basic routing based on path and host rules, but it lacks native support for more advanced routing features such as traffic splitting, header modification, and URL rewriting. - Limited protocol support The Ingress API only supports HTTP and HTTPS traffic, and does not natively support other protocols like TCP or UDP. The Ingress API specification was too limited and not extensible enough. To address these technical limitations, software vendors and developers created vendor-specific annotations. However, using annotations created inconsistencies from one Ingress Controller to another. For example, issues often arise when switching from one Ingress Controller to another because annotations are often vendor-specific. - Operational constraints Finally, the Ingress API suffers from operational constraints: it is not well suited for multi-team clusters with shared load-balancing infrastructure. Benefits of the Gateway API ########################### The Gateway API was designed to address the limitations of Ingress API. 
The `Kubernetes SIG-Network`_ team designs and maintains the Gateway API.
For more information about the Gateway API, see
`the Gateway API project page`_.

The Gateway API provides a centralized mechanism for managing and enforcing
policies for external traffic, including HTTP routing, TLS termination,
traffic splitting/weighting, and header modification. Native support of
policies for external traffic means that annotations are no longer required
to support ingress traffic patterns. As a result, Gateway API resources are
more portable from one Gateway API implementation to another.

When customization is required, the Gateway API provides several flexible
models, including specific extension points to enable diverse traffic
patterns. As extensions are added, the Gateway API team looks for common
denominators and promotes shared features into the core API conformance to
keep resources portable.

Finally, the Gateway API is designed with role-based personas in mind. The
Ingress model is based on a persona where developers manage and create
ingress and service resources themselves. In more complex deployments, more
personas are involved:

- Infrastructure Providers administer the managed services of a cloud
  provider, or act as the infrastructure/network team when running
  Kubernetes on-premises.
- Cluster Operators are responsible for the administration of a cluster.
- Application Developers are responsible for defining application
  configuration and service composition.

By deconstructing the Ingress API into several Gateway API objects, personas
gain the specific access and privileges that their responsibilities require.
For example, application developers in a specific team could be assigned
permissions to create Route objects in a specified namespace without also
gaining permissions to modify the Gateway configuration or edit Route
objects in namespaces other than theirs.
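The persona separation above is enforced through the Gateway's
``allowedRoutes`` field. The following is a hedged sketch (the ``infra``
namespace, ``shared-gateway`` name, and ``team`` label are hypothetical): a
cluster operator owns the Gateway, while only namespaces carrying a matching
label may attach routes to it.

.. code-block:: yaml

    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: Gateway
    metadata:
      name: shared-gateway      # hypothetical, owned by the cluster operator
      namespace: infra          # hypothetical infrastructure namespace
    spec:
      gatewayClassName: cilium
      listeners:
      - name: http
        protocol: HTTP
        port: 80
        allowedRoutes:
          namespaces:
            from: Selector
            selector:
              matchLabels:
                team: app-team-a   # only this team's namespaces may attach routes

Application developers then create HTTPRoute objects in their own labeled
namespaces, referencing this Gateway via ``parentRefs``, without any
permission to modify the Gateway itself.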
Migration Methods
#################

There are two primary methods for migrating Ingress API resources to the
Gateway API:

- *manual*: manually creating Gateway API resources based on existing
  Ingress API resources.
- *automated*: creating rules using the `ingress2gateway tool`_. The
  ingress2gateway project reads Ingress resources from a Kubernetes cluster
  based on your current kubeconfig and outputs YAML for the equivalent
  Gateway API resources to stdout.

.. note::

    The ``ingress2gateway`` tool remains experimental and is not recommended
    for production use.

Ingress Annotations Migration
#############################

Most Ingress controllers use annotations to provide support for specific
features, such as HTTP request manipulation and routing. As noted in
`Benefits of the Gateway API`_, the Gateway API avoids
implementation-specific annotations in order to provide a portable
configuration. As a consequence, implementation-specific Ingress annotations
rarely port directly to a Gateway API resource. Instead, the Gateway API
provides native support for some of these features, including:

- Request/response manipulation
- Traffic splitting
- Header, query parameter, or method-based routing

Examples
########

For examples of migrating to Cilium's Gateway API features, see:

.. toctree::
    :maxdepth: 1
    :glob:

    http-migration
    tls-migration
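As a concrete illustration of the manual migration method described above,
consider a basic Ingress rule and an equivalent hand-written HTTPRoute.
This is a hedged sketch with hypothetical names (``web``,
``web.example.com``, ``my-gateway``), not a resource from the Cilium
examples.

.. code-block:: yaml

    # Existing Ingress (hypothetical example)
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
    spec:
      ingressClassName: cilium
      rules:
      - host: web.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
    ---
    # Equivalent HTTPRoute, attached to a pre-existing Gateway
    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: HTTPRoute
    metadata:
      name: web
    spec:
      parentRefs:
      - name: my-gateway        # hypothetical existing Gateway
      hostnames:
      - web.example.com
      rules:
      - matches:
        - path:
            type: PathPrefix
            value: /
        backendRefs:
        - name: web
          port: 80

Note that the Ingress's ``host`` becomes an HTTPRoute ``hostnames`` entry,
and the listener itself (port, protocol) moves to the shared Gateway object.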
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/ingress-to-gateway/ingress-to-gateway.rst
.. _gs_gateway_tls_migration:

*************
TLS Migration
*************

This migration example builds on the previous
:ref:`gs_gateway_http_migration` and adds TLS termination for two HTTP
routes. For simplicity, this example omits the second route to
``productpage``.

Review Ingress Configuration
============================

You can find the example Ingress definition in ``tls-ingress.yaml``.

.. literalinclude:: ../../../../examples/kubernetes/servicemesh/tls-ingress.yaml
    :language: yaml

This example:

- listens for HTTPS traffic on port 443.
- terminates TLS for the ``hipstershop.cilium.rocks`` and
  ``bookinfo.cilium.rocks`` hostnames, using the TLS certificate and key
  from the Secret *demo-cert*.
- routes HTTPS requests for the ``hipstershop.cilium.rocks`` hostname with
  the URI prefix ``/hipstershop.ProductCatalogService`` to the
  *productcatalogservice* Service.
- routes HTTPS requests for the ``hipstershop.cilium.rocks`` hostname with
  the URI prefix ``/hipstershop.CurrencyService`` to the *currencyservice*
  Service.
- routes HTTPS requests for the ``bookinfo.cilium.rocks`` hostname with the
  URI prefix ``/details`` to the *details* Service.
- routes HTTPS requests for the ``bookinfo.cilium.rocks`` hostname with any
  other prefix to the *productpage* Service.

Create Equivalent Gateway Configuration
=======================================

To create the equivalent TLS termination configuration, consider the
following:

- TLS Termination

  .. tabs::

      .. group-tab:: Ingress

          The Ingress resource supports TLS termination via its TLS section,
          where the TLS certificate and key are stored in a Kubernetes
          Secret.

          .. code-block:: yaml

              apiVersion: networking.k8s.io/v1
              kind: Ingress
              metadata:
                name: tls-ingress
                namespace: default
              [...]
              spec:
                tls:
                - hosts:
                  - bookinfo.cilium.rocks
                  - hipstershop.cilium.rocks
                  secretName: demo-cert

      .. group-tab:: Gateway API

          In the Gateway API, TLS termination is a property of the Gateway
          listener and, similarly to Ingress, the TLS certificate and key
          are stored in a Secret.

          .. code-block:: yaml

              apiVersion: gateway.networking.k8s.io/v1beta1
              kind: Gateway
              metadata:
                name: tls-gateway
              spec:
                gatewayClassName: cilium
                listeners:
                - name: bookinfo.cilium.rocks
                  protocol: HTTPS
                  port: 443
                  hostname: "bookinfo.cilium.rocks"
                  tls:
                    certificateRefs:
                    - kind: Secret
                      name: demo-cert
                - name: hipstershop.cilium.rocks
                  protocol: HTTPS
                  port: 443
                  hostname: "hipstershop.cilium.rocks"
                  tls:
                    certificateRefs:
                    - kind: Secret
                      name: demo-cert

- Host-header-based Routing Rules

  .. tabs::

      .. group-tab:: Ingress

          The Ingress API uses the term *host*. With Ingress, each host has
          separate routing rules.

          .. code-block:: yaml

              apiVersion: networking.k8s.io/v1
              kind: Ingress
              metadata:
                name: tls-ingress
                namespace: default
              spec:
                ingressClassName: cilium
                rules:
                - host: hipstershop.cilium.rocks
                  http:
                    paths:
                    - backend:
                        service:
                          name: productcatalogservice
                          port:
                            number: 3550
                      path: /hipstershop.ProductCatalogService
                      pathType: Prefix

      .. group-tab:: Gateway API

          The Gateway API uses the term *hostname*. Host-header-based
          routing rules map to the hostnames of the HTTPRoute, and the
          routing rules in an HTTPRoute apply to all of its hostnames. The
          hostnames of an HTTPRoute must match the hostname of the Gateway
          listener; otherwise, the listener ignores the routing rules for
          the unmatched hostnames.

          .. code-block:: yaml

              ---
              apiVersion: gateway.networking.k8s.io/v1beta1
              kind: HTTPRoute
              metadata:
                name: hipstershop-cilium-rocks
                namespace: default
              spec:
                hostnames:
                - hipstershop.cilium.rocks
                parentRefs:
                - name: cilium-gateway
                rules:
                - matches:
                  - path:
                      type: PathPrefix
                      value: /hipstershop.ProductCatalogService
                  backendRefs:
                  - name: productcatalogservice
                    port: 3550

Review Equivalent Gateway Configuration
=======================================

You can find the equivalent final Gateway and HTTPRoute definition in
``tls-migration.yaml``.

.. literalinclude:: ../../../../examples/kubernetes/gateway/tls-migration.yaml
    :language: yaml

Deploy the resources and verify that HTTPS requests are routed successfully
to the services. For more information, consult the Gateway API
:ref:`gs_gateway_https`.
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/ingress-to-gateway/tls-migration.rst
.. _gs_mutual_authentication:

****************************
Mutual Authentication (Beta)
****************************

.. note::

    This is a beta feature. Please provide feedback and file a GitHub issue
    if you experience any problems. This feature is still incomplete; see
    :ref:`mutual_auth_roadmap` below for more details.

Mutual Authentication and mTLS Background
#########################################

Mutual Transport Layer Security (mTLS) is a mechanism that ensures the
authenticity, integrity, and confidentiality of data exchanged between two
entities over a network.

Unlike traditional TLS, which involves a one-way authentication process in
which the client verifies the server's identity, mutual TLS adds an
additional layer of security by requiring both the client and the server to
authenticate each other. Mutual TLS aims at providing authentication,
confidentiality, and integrity to service-to-service communications.

Mutual Authentication in Cilium
###############################

Cilium's mTLS-based Mutual Authentication support brings the mutual
authentication handshake out-of-band for regular connections.

For Cilium to meet most of the common requirements for service-to-service
authentication and encryption, users must enable encryption.

.. Note::

    Cilium's encryption features, :ref:`encryption_wg` and
    :ref:`encryption_ipsec`, can be enabled to automatically create and
    maintain encrypted connections between Pods.

To address the challenge of identity verification in dynamic and
heterogeneous environments, mutual authentication requires a framework for
secure identity verification in distributed systems.

.. Note::

    To learn more about the Mutual Authentication architecture for the
    Cilium Service Mesh, read the `CFP`_.

.. _identity_management:

Identity Management
###################

In Cilium's current mutual authentication support, identity management is
provided through the use of SPIFFE (Secure Production Identity Framework for
Everyone).

SPIFFE benefits
---------------

Here are some of the benefits provided by `SPIFFE`_:

- Trustworthy identity issuance: SPIFFE provides a standardized mechanism
  for issuing and managing identities. It ensures that each service in a
  distributed system receives a unique and verifiable identity, even in
  dynamic environments where services may scale up or down frequently.
- Identity attestation: SPIFFE allows services to prove their identities
  through attestation. It ensures that services can demonstrate their
  authenticity and integrity by providing verifiable evidence about their
  identity, such as digital signatures or cryptographic proofs.
- Dynamic and scalable environments: SPIFFE addresses the challenges of
  identity management in dynamic environments. It supports automatic
  identity issuance, rotation, and revocation, which are critical in
  cloud-native architectures where services may be constantly deployed,
  updated, or retired.

Cilium and SPIFFE
-----------------

SPIFFE provides an API model that allows workloads to request an identity
from a central server. In our case, a workload means the same thing that a
Cilium Security Identity does: a set of Pods described by a label set. A
SPIFFE identity is a subclass of URI and looks something like this:
``spiffe://trust.domain/path/with/encoded/info``.

There are two main parts of a SPIFFE setup:

- A central SPIRE server, which forms the root of trust for the trust
  domain.
- A per-node SPIRE agent, which first gets its own identity from the SPIRE
  server, then validates the identity requests of workloads running on its
  node.

When a workload wants to get its identity, usually at startup, it connects
to the local SPIRE agent using the SPIFFE workload API and describes itself
to the agent.
The SPIRE agent then checks that the workload is really who it says it is,
then connects to the SPIRE server and attests that the workload is
requesting an identity and that the request is valid. The SPIRE agent checks
a number of things about the workload: that the Pod is actually running on
the node the request is coming from, that the labels match, and so on.

Once the SPIRE agent has requested an identity from the SPIRE server, it
passes it back to the workload in the SVID (SPIFFE Verifiable Identity
Document) format. This document includes a TLS keypair in the X.509 version.

In the usual flow for SPIRE, the workload requests its own information from
the SPIRE server. In Cilium's support for SPIFFE, the Cilium agents get a
common SPIFFE identity and can themselves ask for identities on behalf of
other workloads. This is demonstrated in the following example.

.. include:: installation.rst

Examples
########

Please refer to the following example on how to use and leverage the mutual
authentication feature:

.. toctree::
    :maxdepth: 1
    :glob:

    mutual-authentication-example

.. admonition:: Video
    :class: attention

    If you'd like a video explanation and demo of Mutual Authentication in
    Cilium, check out `eCHO episode 100: Next-gen mutual authentication in
    Cilium`__.

Limitations
###########

* Cilium Mutual Authentication is still in development and considered beta.
  Several planned security features have not been implemented yet; see below
  for details.
* Cilium's Mutual Authentication has only been validated with SPIRE, the
  production-ready implementation of SPIFFE. As Cilium uses SPIFFE APIs, it
  is possible that other SPIFFE implementations may work. However, Cilium is
  currently only tested with the supplied SPIRE install, and using any other
  SPIFFE implementation is currently not supported.
* There is no current option to build a single trust domain across multiple
  clusters for combining Cluster Mesh and Service Mesh. Therefore, clusters
  connected in a Cluster Mesh are not currently compatible with Mutual
  Authentication.
* The current support of mutual authentication only works within a
  Cilium-managed cluster and is not compatible with an external mTLS
  solution.

.. _mutual_auth_roadmap:

Detailed Roadmap Status
#######################

The following table shows the roadmap status of the mutual authentication
feature. There are several work items outstanding before the feature is
complete from a security model perspective. For details, see the
`roadmap issue <https://github.com/cilium/cilium/issues/28986>`__.

+---------------------------------------------+------+
| SPIFFE/SPIRE Integration                    | Beta |
+---------------------------------------------+------+
| Authentication API for agent                | Beta |
+---------------------------------------------+------+
| mTLS handshake between agents               | Beta |
+---------------------------------------------+------+
| Auth cache to enable per-identity handshake | Beta |
+---------------------------------------------+------+
| CiliumNetworkPolicy support                 | Beta |
+---------------------------------------------+------+
| Integrate with WireGuard                    | TODO |
+---------------------------------------------+------+
| Per-connection handshake                    | TODO |
+---------------------------------------------+------+
| Sync ipcache with auth data                 | TODO |
+---------------------------------------------+------+
| Detailed documentation of security model    | TODO |
+---------------------------------------------+------+
| Conduct penetration test of model           | TODO |
+---------------------------------------------+------+
| Minimize packet drops                       | TODO |
+---------------------------------------------+------+
| Use auth secret for network encryption      | TODO |
+---------------------------------------------+------+
| Review maturity and consider for stable     | TODO |
+---------------------------------------------+------+
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/mutual-authentication/mutual-authentication.rst
.. _gs_mutual_authentication_example:

*****************************
Mutual Authentication Example
*****************************

This example shows you how to enforce mutual authentication between two
Pods.

Deploy a client (pod-worker) and a server (echo) using the following
manifests:

.. parsed-literal::

    $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/mutual-auth-example.yaml
    $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/cnp-without-mutual-auth.yaml
    service/echo created
    deployment.apps/echo created
    pod/pod-worker created
    ciliumnetworkpolicy.cilium.io/no-mutual-auth-echo created

Verify that the Pods have been successfully deployed:

.. code-block:: shell-session

    $ kubectl get svc echo
    NAME   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
    echo   ClusterIP   10.96.16.90   <none>        3000/TCP   42m

    $ kubectl get pod pod-worker
    NAME         READY   STATUS    RESTARTS   AGE
    pod-worker   1/1     Running   0          40m

Verify that the network policy has been deployed successfully and filters
the traffic as expected. Run the following commands:

.. code-block:: shell-session

    $ kubectl exec -it pod-worker -- curl -s -o /dev/null -w "%{http_code}" http://echo:3000/headers
    200

    $ kubectl exec -it pod-worker -- curl http://echo:3000/headers-1
    Access denied

The first request should be successful: the *pod-worker* Pod is able to
connect to the *echo* Service over a specific HTTP path, and the HTTP status
code is ``200``. The second one should be denied: the *pod-worker* Pod is
unable to connect to the *echo* Service over any HTTP path other than
``/headers``.

Before we enable mutual authentication between ``pod-worker`` and ``echo``,
let's verify that the SPIRE server is healthy.
Assuming you have followed the installation instructions and have a SPIRE
server serving Cilium, adding mutual authentication simply requires adding
``authentication.mode: "required"`` to the ingress/egress block in your
network policies.

Verify SPIRE Health
===================

.. note::

    This example assumes a default SPIRE installation.

Let's first verify that the automatically deployed SPIRE server and agents
are working as expected. The SPIRE server is deployed as a StatefulSet and
the SPIRE agents are deployed as a DaemonSet (you should therefore see one
SPIRE agent per node).

.. code-block:: shell-session

    $ kubectl get all -n cilium-spire
    NAME                    READY   STATUS    RESTARTS   AGE
    pod/spire-agent-27jd7   1/1     Running   0          144m
    pod/spire-agent-qkc8l   1/1     Running   0          144m
    pod/spire-server-0      2/2     Running   0          144m

    NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    service/spire-server   ClusterIP   10.96.124.177   <none>        8081/TCP   144m

    NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/spire-agent   2         2         2       2            2           <none>          144m

    NAME                            READY   AGE
    statefulset.apps/spire-server   1/1     144m

Run a healthcheck on the SPIRE server:

.. code-block:: shell-session

    $ kubectl exec -n cilium-spire spire-server-0 -c spire-server -- /opt/spire/bin/spire-server healthcheck
    Server is healthy.

Verify the list of attested agents:

.. code-block:: shell-session

    $ kubectl exec -n cilium-spire spire-server-0 -c spire-server -- /opt/spire/bin/spire-server agent list
    Found 2 attested agents:

    SPIFFE ID         : spiffe://spiffe.cilium/spire/agent/k8s_psat/default/64745bf2-bd9d-4e42-bb2b-e095a6b65121
    Attestation type  : k8s_psat
    Expiration time   : 2023-07-04 18:39:50 +0000 UTC
    Serial number     : 110848236251310359782141595494072495768

    SPIFFE ID         : spiffe://spiffe.cilium/spire/agent/k8s_psat/default/d4a8a6da-d808-4993-b67a-bed250bbc53e
    Attestation type  : k8s_psat
    Expiration time   : 2023-07-04 18:39:55 +0000 UTC
    Serial number     : 7806033782886940845084156064765627978

Notice that the SPIRE server uses Kubernetes Projected Service Account
Tokens (PSATs) to verify the identity of a SPIRE agent running on a
Kubernetes cluster. Projected Service Account Tokens provide additional
security guarantees over traditional Kubernetes Service Account Tokens, and
when supported by a Kubernetes cluster, PSAT is the recommended attestation
strategy.

Verify SPIFFE Identities
========================

Now that we know the SPIRE service is healthy, let's verify that the Cilium
and SPIRE integration has been successful:

- The Cilium agent and operator should have a registered delegate identity
  with the SPIRE server.
- The Cilium operator should have registered identities with the SPIRE
  server on behalf of the workloads (Kubernetes Pods).

Verify that the Cilium agent and operator have identities on the SPIRE
server:

.. code-block:: shell-session

    $ kubectl exec -n cilium-spire spire-server-0 -c spire-server -- /opt/spire/bin/spire-server entry show -parentID spiffe://spiffe.cilium/ns/cilium-spire/sa/spire-agent
    Found 2 entries
    Entry ID         : b6424c87-4323-4d64-98dd-cd5b51a1fcbb
    SPIFFE ID        : spiffe://spiffe.cilium/cilium-agent
    Parent ID        : spiffe://spiffe.cilium/ns/cilium-spire/sa/spire-agent
    Revision         : 0
    X509-SVID TTL    : default
    JWT-SVID TTL     : default
    Selector         : k8s:ns:kube-system
    Selector         : k8s:sa:cilium

    Entry ID         : 8aa91d65-16c4-48a0-bc1f-c9bf26e6a25f
    SPIFFE ID        : spiffe://spiffe.cilium/cilium-operator
    Parent ID        : spiffe://spiffe.cilium/ns/cilium-spire/sa/spire-agent
    Revision         : 0
    X509-SVID TTL    : default
    JWT-SVID TTL     : default
    Selector         : k8s:ns:kube-system
    Selector         : k8s:sa:cilium-operator

Next, verify that the *echo* Pod has an identity registered with the SPIRE
server. To do this, you must first construct the Pod's SPIFFE ID. The SPIFFE
ID for a workload is based on the
``spiffe://spiffe.cilium/identity/$IDENTITY_ID`` format, where
``$IDENTITY_ID`` is the workload's Cilium identity.

Grab the Cilium identity for the *echo* Pod:

.. code-block:: shell-session

    $ IDENTITY_ID=$(kubectl get cep -l app=echo -o=jsonpath='{.items[0].status.identity.id}')
    $ echo $IDENTITY_ID
    17947

Use the Cilium identity for the *echo* Pod to construct its SPIFFE ID and
check that it is registered on the SPIRE server:
.. code-block:: shell-session

    $ kubectl exec -n cilium-spire spire-server-0 -c spire-server -- /opt/spire/bin/spire-server entry show -spiffeID spiffe://spiffe.cilium/identity/$IDENTITY_ID
    Found 1 entry
    Entry ID         : 9fc13971-fb19-4814-b9f0-737b30e336c6
    SPIFFE ID        : spiffe://spiffe.cilium/identity/17947
    Parent ID        : spiffe://spiffe.cilium/cilium-operator
    Revision         : 0
    X509-SVID TTL    : default
    JWT-SVID TTL     : default
    Selector         : cilium:mutual-auth

You can see that *cilium-operator* is listed in the ``Parent ID``. That is
because the Cilium operator creates SPIRE entries for Cilium Identities as
they are created.

To get all registered entries, execute the following command:

.. code-block:: shell-session

    $ kubectl exec -n cilium-spire spire-server-0 -c spire-server -- /opt/spire/bin/spire-server entry show -selector cilium:mutual-auth

There are as many entries as there are identities. Verify that these match
by running the following command:

.. code-block:: shell-session

    $ kubectl get ciliumidentities

The identity ID listed under ``NAME`` should match the digits at the end of
the SPIFFE ID returned by the previous command.

Enforce Mutual Authentication
=============================

Rolling out mutual authentication with Cilium is as simple as adding the
following block to the egress or ingress rules of an existing or new
CiliumNetworkPolicy:

.. code-block:: yaml

    authentication:
      mode: "required"

Update the existing rule to only allow mutually authenticated workloads
ingress access to *echo*:

.. parsed-literal::

    $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/cnp-with-mutual-auth.yaml

Verify Mutual Authentication
============================

Start by enabling debug-level logging:

.. code-block:: shell-session

    $ cilium config set debug true

Re-try your connectivity tests. They should give similar results as before:

.. code-block:: shell-session

    $ kubectl exec -it pod-worker -- curl -s -o /dev/null -w "%{http_code}" http://echo:3000/headers
    200

    $ kubectl exec -it pod-worker -- curl http://echo:3000/headers-1
    Access denied

Verify that mutual authentication has happened by accessing the logs on the
agent. Examine the logs on the Cilium agent located on the same node as the
*echo* Pod. For brevity, you can search for some specific log messages by
label:

.. code-block:: shell-session

    $ kubectl -n kube-system -c cilium-agent logs -l k8s-app=cilium --timestamps=true | grep "Policy is requiring authentication\|Validating Server SNI\|Validated certificate\|Successfully authenticated"
    2023-07-04T17:58:28.795760597Z level=debug msg="Policy is requiring authentication" key="localIdentity=17947, remoteIdentity=39239, remoteNodeID=54264, authType=spire" subsys=auth
    2023-07-04T17:58:28.800509503Z level=debug msg="Validating Server SNI" SNI ID=39239 subsys=auth
    2023-07-04T17:58:28.800525190Z level=debug msg="Validated certificate" subsys=auth uri-san="[spiffe://spiffe.cilium/identity/39239]"
    2023-07-04T17:58:28.801441968Z level=debug msg="Successfully authenticated" key="localIdentity=17947, remoteIdentity=39239, remoteNodeID=54264, authType=spire" remote_node_ip=10.0.1.175 subsys=auth

When you apply a mutual authentication policy, the agent retrieves the
identity of the source Pod, connects to the node where the destination Pod
is running, and performs a mutual TLS handshake (the log above shows one
side of that handshake). As the handshake succeeded, the connection was
authenticated and the traffic protected by the policy could proceed.

Packets between the two Pods can flow until the network policy is removed or
the entry expires.
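For reference, the ``authentication`` block used in this example fits into a
CiliumNetworkPolicy as shown below. This is a hedged sketch with a
hypothetical policy name; the shipped ``cnp-with-mutual-auth.yaml`` is the
authoritative version and also carries the HTTP path rules used above.

.. code-block:: yaml

    apiVersion: cilium.io/v2
    kind: CiliumNetworkPolicy
    metadata:
      name: mutual-auth-echo      # hypothetical name
    spec:
      endpointSelector:
        matchLabels:
          app: echo
      ingress:
      - fromEndpoints:
        - matchLabels:
            app: pod-worker
        # Require an out-of-band mutual TLS handshake before traffic
        # matching this rule is allowed.
        authentication:
          mode: "required"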
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/mutual-authentication/mutual-authentication-example.rst
Prerequisites
#############

* Mutual authentication is currently only supported with SPIFFE APIs for
  certificate management.
* The Cilium Helm chart includes an option to deploy a SPIRE server for
  mutual authentication. You may also deploy your own SPIRE server and
  configure Cilium to use it.

Installation
############

.. Note::

    The default installation requires `PersistentVolumeClaim`_ support in
    the cluster, so please check with your cluster provider whether it is
    supported or how to enable it. For a lab or local cluster, you can
    switch to in-memory storage by passing
    ``authentication.mutual.spire.install.server.dataStorage.enabled=false``
    to the installation command, at the cost of re-creating all data
    whenever the SPIRE server Pod is restarted.

.. tabs::

    .. group-tab:: Cilium CLI

        .. include:: ../../../installation/cli-download.rst

        You can enable mutual authentication and its associated SPIRE server
        with the following command. This command requires Cilium CLI Helm
        mode version 0.15 or later.

        .. code-block:: shell-session

            $ cilium install \
                --set authentication.mutual.spire.enabled=true \
                --set authentication.mutual.spire.install.enabled=true

        Next, you can check the status of the Cilium agent and operator:

        .. code-block:: shell-session

            $ cilium status

    .. group-tab:: Helm

        The Cilium Helm chart includes an option to deploy a SPIRE server
        for mutual authentication. You may also deploy your own SPIRE server
        and configure Cilium to use it.

        Please refer to :ref:`k8s_install_helm` for a fresh installation.

        .. cilium-helm-install::
            :namespace: kube-system
            :set: authentication.mutual.spire.enabled=true authentication.mutual.spire.install.enabled=true
            :post-commands: kubectl -n kube-system rollout restart deployment/cilium-operator
                kubectl -n kube-system rollout restart ds/cilium

        Next, you can check the status of the Cilium agent and operator:

        .. code-block:: shell-session

            $ cilium status
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/mutual-authentication/installation.rst
main
cilium
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _gs_gateway_splitting:

*************************
Traffic Splitting Example
*************************

HTTP traffic splitting is the process of sending incoming traffic to multiple backend services, based on predefined weights or other criteria. Cilium Gateway API includes built-in support for traffic splitting, allowing users to easily distribute incoming traffic across multiple backend services. This is very useful for canary testing or A/B scenarios. This particular example uses the Gateway API to load balance incoming traffic to different backends, starting with the same weights before testing with a 99/1 weight distribution.

.. include:: ../echo-app.rst

Deploy the Cilium Gateway
=========================

You can find an example Gateway and HTTPRoute definition in ``splitting.yaml``:

.. literalinclude:: ../../../../examples/kubernetes/gateway/splitting.yaml
    :language: yaml

Notice the even 50/50 split between the two Services.

Deploy the Gateway and the HTTPRoute:

.. parsed-literal::

    $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/gateway/splitting.yaml

The preceding example creates a Gateway named ``cilium-gw`` that listens on port 80. A single route is defined and includes two different ``backendRefs`` (``echo-1`` and ``echo-2``) and the weights associated with them.

.. code-block:: shell-session

    $ kubectl get gateway cilium-gw
    NAME        CLASS    ADDRESS          PROGRAMMED   AGE
    cilium-gw   cilium   172.18.255.200                8s

.. note::

    Some providers, like EKS, use a fully-qualified domain name rather than an IP address.

Even traffic split
==================

Now that the Gateway is ready, you can make HTTP requests to the services.

.. code-block:: shell-session

    $ GATEWAY=$(kubectl get gateway cilium-gw -o jsonpath='{.status.addresses[0].value}')
    $ curl --fail -s http://$GATEWAY/echo
    Hostname: echo-1-7d88f779b-m6r46

    Pod Information:
            node name:      kind-worker2
            pod name:       echo-1-7d88f779b-m6r46
            pod namespace:  default
            pod IP: 10.0.2.15

    Server values:
            server_version=nginx: 1.12.2 - lua: 10010

    Request Information:
            client_address=10.0.2.252
            method=GET
            real path=/echo
            query=
            request_version=1.1
            request_scheme=http
            request_uri=http://172.18.255.200:8080/echo

    Request Headers:
            accept=*/*
            host=172.18.255.200
            user-agent=curl/7.81.0
            x-forwarded-proto=http
            x-request-id=ee152a07-2be2-4539-b74d-ebcebf912907

    Request Body:
            -no body in request-

Notice that the reply includes the name of the Pod that received the query. For example:

.. code-block:: shell-session

    Hostname: echo-2-5bfb6668b4-2rl4t

Repeat the command several times. You should see the replies balanced evenly across both Pods and Nodes.

Verify that traffic is evenly split across multiple Pods by running a loop and counting the requests:

.. code-block:: shell-session

    while true; do curl -s -k "http://$GATEWAY/echo" >> curlresponses.txt; done

Stop the loop with ``Ctrl+C``. Verify that the responses are more or less evenly distributed:

.. code-block:: shell-session

    $ cat curlresponses.txt | grep -c "Hostname: echo-1"
    1221
    $ cat curlresponses.txt | grep -c "Hostname: echo-2"
    1162

Uneven (99/1) traffic split
===========================

Update the HTTPRoute weights, either by using ``kubectl edit httproute`` or by updating the value in the original manifest before reapplying it. For example, set ``99`` for ``echo-1`` and ``1`` for ``echo-2``:

.. code-block:: yaml

    backendRefs:
    - kind: Service
      name: echo-1
      port: 8080
      weight: 99
    - kind: Service
      name: echo-2
      port: 8090
      weight: 1

Verify that traffic is unevenly split across multiple Pods by running a loop and counting the requests:

.. code-block:: shell-session

    while true; do curl -s -k "http://$GATEWAY/echo" >> curlresponses991.txt; done

Stop the loop with ``Ctrl+C``. Verify that the responses follow the 99/1 weight distribution:

.. code-block:: shell-session

    $ cat curlresponses991.txt | grep -c "Hostname: echo-1"
    24739
    $ cat curlresponses991.txt | grep -c "Hostname: echo-2"
    239
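The two ``grep -c`` counts can be turned into percentages to compare against the configured weights. A sketch with a hypothetical helper function; the file name and the ``echo-1``/``echo-2`` hostnames match this example and would need adjusting for other setups:

```shell
# Hypothetical helper: compute the observed traffic split from a file of
# echo-server responses, as produced by the curl loops above.
count_split() {
  file="$1"
  c1=$(grep -c "Hostname: echo-1" "$file")
  c2=$(grep -c "Hostname: echo-2" "$file")
  total=$((c1 + c2))
  echo "echo-1: $((100 * c1 / total))%  echo-2: $((100 * c2 / total))%"
}

# Usage (after stopping the curl loop):
#   count_split curlresponses.txt
```

For the 50/50 configuration the two percentages should be close to even; for the 99/1 configuration, heavily skewed toward ``echo-1``.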
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/gateway-api/splitting.rst
main
cilium
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _gs_gateway_http:

************
HTTP Example
************

In this example, we will deploy a simple HTTP service and expose it through the Cilium Gateway API. The demo application is the ``bookinfo`` demo microservices app from the Istio project.

.. include:: ../demo-app.rst

Deploy the Cilium Gateway
=========================

You'll find the example Gateway and HTTPRoute definition in ``basic-http.yaml``:

.. literalinclude:: ../../../../examples/kubernetes/gateway/basic-http.yaml
    :language: yaml

.. parsed-literal::

    $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/gateway/basic-http.yaml

The above example creates a Gateway named ``my-gateway`` that listens on port 80. Two routes are defined: one for ``/details`` to the ``details`` service, and one for ``/`` to the ``productpage`` service.

Your cloud provider will automatically provision an external IP address for the gateway, but it may take up to 20 minutes.

.. code-block:: shell-session

    $ kubectl get gateway my-gateway
    NAME         CLASS    ADDRESS        PROGRAMMED   AGE
    my-gateway   cilium   10.100.26.37   True         2d7h

.. note::

    Some providers, e.g. EKS, use a fully-qualified domain name rather than an IP address.

Make HTTP Requests
==================

Now that the Gateway is ready, you can make HTTP requests to the services.

.. code-block:: shell-session

    $ GATEWAY=$(kubectl get gateway my-gateway -o jsonpath='{.status.addresses[0].value}')
    $ curl --fail -s http://"$GATEWAY"/details/1 | jq
    {
      "id": 1,
      "author": "William Shakespeare",
      "year": 1595,
      "type": "paperback",
      "pages": 200,
      "publisher": "PublisherA",
      "language": "English",
      "ISBN-10": "1234567890",
      "ISBN-13": "123-1234567890"
    }
    $ curl -v -H 'magic: foo' http://"$GATEWAY"\?great\=example
    ...
    (the response body is the HTML of the Bookinfo "Simple Bookstore App" product page)
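Since provisioning the external address can take a while, waiting for the Gateway to answer can be scripted rather than retried by hand. A minimal sketch with a hypothetical helper; it assumes the ``GATEWAY`` variable set in this section:

```shell
# Hypothetical helper: poll a URL until it answers, or give up after
# a number of tries (default 30, with a 2-second pause between tries).
wait_for_gateway() {
  url="$1"
  tries="${2:-30}"
  i=1
  while [ "$i" -le "$tries" ]; do
    # --fail makes curl return non-zero on HTTP errors, -s -o /dev/null
    # discards the body; we only care whether the request succeeds.
    if curl --fail -s -o /dev/null "$url"; then
      echo "ready after $i attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "gateway not ready after $tries attempts" >&2
  return 1
}

# Usage (once GATEWAY is set as shown above):
#   wait_for_gateway "http://$GATEWAY/details/1"
```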
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/gateway-api/http.rst
main
cilium
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _gs_gateway_https:

*************
HTTPS Example
*************

This example builds on the previous :ref:`gs_gateway_http` example and adds TLS termination for two HTTP routes. For simplicity, the second route to ``productpage`` is omitted.

.. literalinclude:: ../../../../examples/kubernetes/gateway/basic-https.yaml
    :language: yaml

.. include:: ../tls-cert.rst

Deploy the Gateway and HTTPRoute
================================

The Gateway configuration for this demo provides similar routing to the ``details`` and ``productpage`` services.

.. tabs::

    .. group-tab:: Self-signed Certificate

        .. parsed-literal::

            $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/gateway/basic-https.yaml

    .. group-tab:: cert-manager

        .. parsed-literal::

            $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/gateway/basic-https.yaml

        To tell cert-manager that this Gateway needs a certificate, annotate the Gateway with the name of the CA issuer we previously created:

        .. code-block:: shell-session

            $ kubectl annotate gateway tls-gateway cert-manager.io/issuer=ca-issuer

        This creates a Certificate object along with a Secret containing the TLS certificate.

        .. code-block:: shell-session

            $ kubectl get certificate,secret demo-cert
            NAME                                    READY   SECRET      AGE
            certificate.cert-manager.io/demo-cert   True    demo-cert   29s

            NAME               TYPE                DATA   AGE
            secret/demo-cert   kubernetes.io/tls   3      29s

The external IP address will show up in the Gateway. The host names should also show up in the related HTTPRoutes.

.. code-block:: shell-session

    $ kubectl get gateway tls-gateway
    NAME          CLASS    ADDRESS         PROGRAMMED   AGE
    tls-gateway   cilium   10.104.247.23   True         29s

    $ kubectl get httproutes https-app-route-1 https-app-route-2
    NAME                HOSTNAMES                      AGE
    https-app-route-1   ["bookinfo.cilium.rocks"]      29s
    https-app-route-2   ["hipstershop.cilium.rocks"]   29s

Update ``/etc/hosts`` with the host names and IP address of the Gateway:

.. code-block:: shell-session

    $ sudo perl -ni -e 'print if !/\.cilium\.rocks$/d' /etc/hosts; sudo tee -a /etc/hosts \
      <<<"$(kubectl get gateway tls-gateway -o jsonpath='{.status.addresses[0].value}') bookinfo.cilium.rocks hipstershop.cilium.rocks"

Make HTTPS Requests
===================

.. tabs::

    .. group-tab:: Self-signed Certificate

        By specifying the CA's certificate on a curl request, you tell curl to trust certificates signed by that CA.

        .. code-block:: shell-session

            $ curl --cacert minica.pem -v https://bookinfo.cilium.rocks/details/1
            $ curl --cacert minica.pem -v https://hipstershop.cilium.rocks/

        If you prefer, instead of supplying the CA you can specify ``-k`` to tell curl not to validate the server's certificate. Without either, you will get an error that the certificate was signed by an unknown authority. With ``-v`` on the curl request, you can see that the TLS handshake took place successfully.

    .. group-tab:: cert-manager

        .. code-block:: shell-session

            $ curl https://bookinfo.cilium.rocks/details/1
            $ curl https://hipstershop.cilium.rocks/
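The ``/etc/hosts`` one-liner above is dense: the ``perl`` part strips any stale ``*.cilium.rocks`` lines, then ``tee -a`` appends a fresh entry. The entry itself can be built separately; a sketch with a hypothetical helper (the ``kubectl`` invocation in the comment mirrors the one used above):

```shell
# Hypothetical helper: build an /etc/hosts line mapping one address to
# one or more host names, without touching the file.
hosts_entry() {
  addr="$1"
  shift
  # "$*" joins the remaining host names with single spaces.
  printf '%s %s\n' "$addr" "$*"
}

# Usage, mirroring the command above:
#   hosts_entry "$(kubectl get gateway tls-gateway -o jsonpath='{.status.addresses[0].value}')" \
#     bookinfo.cilium.rocks hipstershop.cilium.rocks
```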
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/gateway-api/https.rst
main
cilium
.. only:: not (epub or latex or html)

    WARNING: You are looking at unreleased Cilium documentation.
    Please use the official rendered version released here:
    https://docs.cilium.io

.. _gs_gateway_grpc:

************
gRPC Example
************

This example demonstrates how to set up a Gateway that terminates TLS traffic and routes requests to a gRPC service (that is, using HTTP/2). For this example to work, ALPN support needs to be enabled with the Helm flag ``gatewayAPI.enableAlpn`` set to ``true``. This enables clients to request HTTP/2 through the TLS negotiation.

.. literalinclude:: ../../../../examples/kubernetes/gateway/grpc-tls-termination.yaml
    :language: yaml

.. tabs::

    .. group-tab:: Self-signed Certificate

        This example uses a TLS certificate signed by a made-up, `self-signed`_ certificate authority (CA). One easy way to do this is with `mkcert`_. The certificate will validate the hostname ``grpc-echo.cilium.rocks`` used in this example.

        .. code-block:: shell-session

            $ mkcert grpc-echo.cilium.rocks
            Created a new local CA 💥
            Note: the local CA is not installed in the system trust store.
            Run "mkcert -install" for certificates to be trusted automatically ⚠

            Created a new certificate valid for the following names 📜
             - "grpc-echo.cilium.rocks"

            The certificate is at "./grpc-echo.cilium.rocks.pem" and the key at "./grpc-echo.cilium.rocks-key.pem" ✅

            It will expire on 28 September 2027 🗓

        Create a Kubernetes secret with this demo key and certificate:

        .. code-block:: shell-session

            $ kubectl create secret tls grpc-certificate --key=grpc-echo.cilium.rocks-key.pem --cert=grpc-echo.cilium.rocks.pem

    .. group-tab:: cert-manager

        Install cert-manager:

        .. code-block:: shell-session

            $ helm repo add jetstack https://charts.jetstack.io
            $ helm install cert-manager jetstack/cert-manager --version v1.16.2 \
                --namespace cert-manager \
                --set crds.enabled=true \
                --create-namespace \
                --set config.apiVersion="controller.config.cert-manager.io/v1alpha1" \
                --set config.kind="ControllerConfiguration" \
                --set config.enableGatewayAPI=true

        Now, create a CA Issuer:

        .. parsed-literal::

            $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/servicemesh/ca-issuer.yaml

Deploy the Gateway and GRPCRoute
================================

This sets up a simple gRPC echo server and a Gateway to expose it.

.. tabs::

    .. group-tab:: Self-signed Certificate

        .. parsed-literal::

            $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/gateway/grpc-tls-termination.yaml

        The self-signed certificate Secret from the previous step will be used by this Gateway.

    .. group-tab:: cert-manager

        .. parsed-literal::

            $ kubectl apply -f \ |SCM_WEB|\/examples/kubernetes/gateway/grpc-tls-termination.yaml

        To tell cert-manager that this Gateway needs a certificate, annotate the Gateway with the name of the CA issuer you created previously:

        .. code-block:: shell-session

            $ kubectl annotate gateway tls-gateway cert-manager.io/issuer=ca-issuer

        This creates a Certificate object along with a Secret containing the TLS certificate.

        .. code-block:: shell-session

            $ kubectl get certificate,secret grpc-certificate
            NAME                                           READY   SECRET             AGE
            certificate.cert-manager.io/grpc-certificate   True    grpc-certificate   83s

            NAME                      TYPE                DATA   AGE
            secret/grpc-certificate   kubernetes.io/tls   3      78s

The external IP address will show up in the Gateway. The host names should also show up in the related GRPCRoutes.

.. code-block:: shell-session

    $ kubectl get gateway tls-gateway
    NAME          CLASS    ADDRESS         PROGRAMMED   AGE
    tls-gateway   cilium   10.104.247.23   True         29s

    $ kubectl get grpcroutes
    NAME         HOSTNAMES   AGE
    grpc-route               116s

Update ``/etc/hosts`` with the host name and IP address of the Gateway:

.. code-block:: shell-session

    $ sudo perl -ni -e 'print if !/\.cilium\.rocks$/d' /etc/hosts; sudo tee -a /etc/hosts \
      <<<"$(kubectl get gateway tls-gateway -o jsonpath='{.status.addresses[0].value}') grpc-echo.cilium.rocks"

Make gRPC Requests
==================

You can use the `grpcurl`_ CLI tool to verify that the service works correctly. The echo server used in this example responds with information about the HTTP/2 request the client made.

.. tabs::

    .. group-tab:: Self-signed Certificate

        By specifying the CA's certificate on a grpcurl request, you tell grpcurl to trust certificates signed by that CA.

        .. code-block:: shell-session

            $ grpcurl -cacert ~/.local/share/mkcert/rootCA.pem grpc-echo.cilium.rocks:443 proto.EchoTestService/Echo

        If you prefer, instead of supplying the CA you can specify ``-insecure`` to tell grpcurl not to validate the server's certificate. Without either, you will get an error that the certificate was signed by an unknown authority.

    .. group-tab:: cert-manager

        .. code-block:: shell-session

            $ grpcurl grpc-echo.cilium.rocks:443 proto.EchoTestService/Echo
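The ``gatewayAPI.enableAlpn`` requirement mentioned at the top of this example can be expressed in a Helm values file. A minimal sketch, assuming the standard Cilium chart layout for these keys:

```yaml
# values-grpc-gateway.yaml -- sketch of the Gateway API flags this
# example relies on.
gatewayAPI:
  enabled: true       # Gateway API support itself
  enableAlpn: true    # advertise HTTP/2 via ALPN during TLS negotiation
```

Pass it with ``-f values-grpc-gateway.yaml`` on ``helm install`` or ``helm upgrade``, then restart the Cilium agent and operator for the change to take effect.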
https://github.com/cilium/cilium/blob/main//Documentation/network/servicemesh/gateway-api/grpc.rst
main
cilium