Follow this guide to install, configure, and use an Istio mesh with the Pod Security admission controller ([PSA](https://kubernetes.io/docs/concepts/security/pod-security-admission/)) enforcing the `baseline` [policy](https://kubernetes.io/docs/concepts/security/pod-security-standards/) on namespaces in the mesh.

By default, Istio injects an init container, `istio-init`, into pods deployed in the mesh. The `istio-init` container requires the user or service account deploying pods to the mesh to have sufficient Kubernetes RBAC permissions to deploy [containers with the `NET_ADMIN` and `NET_RAW` capabilities](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container). However, the `baseline` policy does not include `NET_ADMIN` or `NET_RAW` in its [allowed capabilities](https://kubernetes.io/docs/concepts/security/pod-security-standards/#baseline). To avoid enforcing the `privileged` policy in all meshed namespaces, it is necessary to use the Istio mesh with the [Istio Container Network Interface plugin](/docs/setup/additional-setup/cni/).

The `istio-cni-node` DaemonSet in the `istio-system` namespace requires `hostPath` volumes to access local CNI directories. Because this is not allowed by the `baseline` policy, the namespace where the CNI DaemonSet is deployed must enforce the `privileged` [policy](https://kubernetes.io/docs/concepts/security/pod-security-standards/#privileged). By default, this namespace is `istio-system`.

{{< warning >}}
Namespaces in the mesh may also use the `restricted` [policy](https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted). You will need to configure the `seccompProfile` for your applications according to the policy specifications.
{{< /warning >}}

## Install Istio with PSA

1. Create the `istio-system` namespace and label it to enforce the `privileged` policy.
    {{< text bash >}}
    $ kubectl create namespace istio-system
    $ kubectl label --overwrite ns istio-system \
        pod-security.kubernetes.io/enforce=privileged \
        pod-security.kubernetes.io/enforce-version=latest
    namespace/istio-system labeled
    {{< /text >}}

1. [Install Istio with CNI](/docs/setup/additional-setup/cni/#install-cni) on a Kubernetes cluster version 1.25 or later.

    {{< text bash >}}
    $ istioctl install --set components.cni.enabled=true -y
    ✔ Istio core installed
    ✔ Istiod installed
    ✔ Ingress gateways installed
    ✔ CNI installed
    ✔ Installation complete
    {{< /text >}}

## Deploy the sample application

1. Add a namespace label to enforce the `baseline` policy for the default namespace where the demo application will run:

    {{< text bash >}}
    $ kubectl label --overwrite ns default \
        pod-security.kubernetes.io/enforce=baseline \
        pod-security.kubernetes.io/enforce-version=latest
    namespace/default labeled
    {{< /text >}}

1. Deploy the sample application using the PSA-enabled configuration resources:

    {{< text bash >}}
    $ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-psa.yaml@
    service/details created
    serviceaccount/bookinfo-details created
    deployment.apps/details-v1 created
    service/ratings created
    serviceaccount/bookinfo-ratings created
    deployment.apps/ratings-v1 created
    service/reviews created
    serviceaccount/bookinfo-reviews created
    deployment.apps/reviews-v1 created
    deployment.apps/reviews-v2 created
    deployment.apps/reviews-v3 created
    service/productpage created
    serviceaccount/bookinfo-productpage created
    deployment.apps/productpage-v1 created
    {{< /text >}}

1. Verify that the app is running inside the cluster and serving HTML pages by checking for the page title in the response:

    {{< text bash >}}
    $ kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
    <title>Simple Bookstore App</title>
    {{< /text >}}

## Uninstall
1. Delete the sample application:

    {{< text bash >}}
    $ kubectl delete -f samples/bookinfo/platform/kube/bookinfo-psa.yaml
    {{< /text >}}

1. Delete the labels on the default namespace:

    {{< text bash >}}
    $ kubectl label namespace default pod-security.kubernetes.io/enforce- pod-security.kubernetes.io/enforce-version-
    {{< /text >}}

1. Uninstall Istio:

    {{< text bash >}}
    $ istioctl uninstall -y --purge
    {{< /text >}}

1. Delete the `istio-system` namespace:

    {{< text bash >}}
    $ kubectl delete namespace istio-system
    {{< /text >}}
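As noted in the warning above, namespaces in the mesh may also enforce the `restricted` policy. A minimal sketch of a pod intended to satisfy that policy is shown below; the exact requirements are defined by the Pod Security Standards, the image name is a hypothetical placeholder, and your application may need further settings:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: example-app:latest   # hypothetical image
    securityContext:
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      capabilities:
        drop: ["ALL"]          # restricted allows only NET_BIND_SERVICE to be re-added
      seccompProfile:
        type: RuntimeDefault   # restricted requires RuntimeDefault or Localhost
```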
Source: https://github.com/istio/istio.io/blob/master//content/en/docs/setup/additional-setup/pod-security-admission/index.md
The Istio {{< gloss "cni" >}}CNI{{< /gloss >}} node agent is used to configure traffic redirection for pods in the mesh. It runs as a DaemonSet, on every node, with elevated privileges. The CNI node agent is used by both Istio {{< gloss >}}data plane{{< /gloss >}} modes.

For the {{< gloss >}}sidecar{{< /gloss >}} data plane mode, the Istio CNI node agent is optional. It removes the requirement of running privileged init containers in every pod in the mesh, replacing that model with a single privileged node agent pod on each Kubernetes node.

The Istio CNI node agent is **required** in the {{< gloss >}}ambient{{< /gloss >}} data plane mode. This guide focuses on using the Istio CNI node agent as an optional part of the sidecar data plane mode. Consult [the ambient mode documentation](/docs/ambient/) for information on using the ambient data plane mode.

{{< tip >}}
Note: The Istio CNI node agent _does not_ replace your cluster's existing {{< gloss "cni" >}}CNI{{< /gloss >}}. Among other things, it installs a _chained_ CNI plugin, which is designed to be layered on top of another, previously-installed primary interface CNI, such as [Calico](https://docs.projectcalico.org), or the cluster CNI used by your cloud provider. See [compatibility with CNIs](/docs/setup/additional-setup/cni/#compatibility-with-other-cnis) for details.
{{< /tip >}}

Follow this guide to install, configure, and use the Istio CNI node agent with the sidecar data plane mode.

## How sidecar traffic redirection works

### Using the init container (without the Istio CNI node agent)

By default, Istio injects an init container, `istio-init`, into pods deployed in the mesh. The `istio-init` container sets up pod network traffic redirection to/from the Istio sidecar proxy.
This requires the user or service account deploying pods to the mesh to have sufficient Kubernetes RBAC permissions to deploy [containers with the `NET_ADMIN` and `NET_RAW` capabilities](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container).

### Using the Istio CNI node agent

Requiring Istio users to have elevated Kubernetes RBAC permissions is problematic for some organizations' security compliance, as is the requirement to deploy privileged init containers with every workload. The `istio-cni` node agent is effectively a replacement for the `istio-init` container that enables the same networking functionality, but without requiring the use or deployment of privileged init containers in every workload. Instead, `istio-cni` itself runs as a single privileged pod on each node. It uses this privilege to install a [chained CNI plugin](https://www.cni.dev/docs/spec/#section-2-execution-protocol) on the node, which is invoked after your "primary" interface CNI plugin.

CNI plugins are invoked dynamically by Kubernetes as a privileged process on the host node whenever a new pod is created, and are able to configure pod networking. The Istio chained CNI plugin always runs after the primary interface plugins, identifies user application pods with sidecars requiring traffic redirection, and sets up redirection in the network setup phase of the Kubernetes pod lifecycle. This removes the need for privileged init containers, as well as the [requirement for `NET_ADMIN` and `NET_RAW` capabilities](/docs/ops/deployment/application-requirements/) for users and pod deployments.

{{< image width="60%" link="./cni.svg" caption="Istio CNI" >}}

## Prerequisites for use

1. Install Kubernetes with a correctly-configured primary interface CNI plugin.
    As [supporting CNI plugins is required to implement the Kubernetes network model](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/), you probably already have this if you have a reasonably recent Kubernetes cluster with functional pod networking.

    * AWS EKS, Azure AKS, and IBM Cloud IKS clusters have this capability.
    * Google Cloud GKE clusters have CNI enabled when any of the following features are enabled: [network policy](https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy), [intranode visibility](https://cloud.google.com/kubernetes-engine/docs/how-to/intranode-visibility), [workload identity](https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity), [pod security policy](https://cloud.google.com/kubernetes-engine/docs/how-to/pod-security-policies#overview), or [dataplane v2](https://cloud.google.com/kubernetes-engine/docs/concepts/dataplane-v2).
    * Kind has CNI enabled by default.
    * OpenShift has CNI enabled by default.

1. Install Kubernetes with the [ServiceAccount admission controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#serviceaccount) enabled.
Source: https://github.com/istio/istio.io/blob/master//content/en/docs/setup/additional-setup/cni/index.md
    * The Kubernetes documentation highly recommends this for all Kubernetes installations where `ServiceAccounts` are utilized.

## Installing the CNI node agent

### Install Istio with the `istio-cni` component

In most environments, a basic Istio cluster with the `istio-cni` component enabled can be installed using the following commands:

{{< tabset category-name="gateway-install-type" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

{{< text syntax=bash snip_id=cni_agent_operator_install >}}
$ cat <<EOF > istio-cni.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    cni:
      namespace: istio-system
      enabled: true
EOF
$ istioctl install -f istio-cni.yaml -y
{{< /text >}}

{{< /tab >}}

{{< tab name="Helm" category-value="helm" >}}

{{< text syntax=bash snip_id=cni_agent_helm_install >}}
$ helm install istio-cni istio/cni -n istio-system --wait
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

This will deploy an `istio-cni` DaemonSet into the cluster, which will create one Pod on every active node, deploy the Istio CNI plugin binary on each, and set up the necessary node-level configuration for the plugin.
The CNI DaemonSet runs with the [`system-node-critical`](https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) `PriorityClass`. This is because it is the only means of actually reconfiguring pod networking to add pods to the Istio mesh.

{{< tip >}}
You can install `istio-cni` into any Kubernetes namespace, but the namespace must allow pods with the `system-node-critical` PriorityClass to be scheduled in it. Some cloud providers (notably GKE) by default disallow the scheduling of `system-node-critical` pods in any namespace but specific ones, such as `kube-system`. You may either install `istio-cni` into `kube-system`, or (recommended) define a ResourceQuota for your GKE cluster that allows the use of `system-node-critical` pods inside `istio-system`. See [here](/docs/ambient/install/platform-prerequisites#google-kubernetes-engine-gke) for more details.
{{< /tip >}}

Note that if you install `istiod` with the Helm chart according to the [Install with Helm](/docs/setup/install/helm/#installation-steps) guide, you must install `istiod` with the following extra override value, in order to disable the privileged init container injection:

{{< text syntax=bash snip_id=cni_agent_helm_istiod_install >}}
$ helm install istiod istio/istiod -n istio-system --set pilot.cni.enabled=true --wait
{{< /text >}}

### Additional configuration

In addition to the basic configuration above, there are additional configuration flags that can be set:

* `values.cni.cniBinDir` and `values.cni.cniConfDir` configure the directory paths to install the plugin binary and create plugin configuration.
* `values.cni.cniConfFileName` configures the name of the plugin configuration file.
* `values.cni.chained` controls whether to configure the plugin as a chained CNI plugin.

Normally, these do not need to be changed, but some platforms may use nonstandard paths.
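As an illustration, the path-related flags above could be set in an `IstioOperator` overlay along the following lines. The paths and file name shown are hypothetical examples, not recommendations for any particular platform:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    cni:
      enabled: true
  values:
    cni:
      # Hypothetical nonstandard host paths; consult your platform's documentation
      cniBinDir: /var/lib/cni/bin
      cniConfDir: /etc/cni/multus/net.d
      cniConfFileName: 02-istio-cni.conf
      chained: true
```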
Please check the guidelines for your specific platform, if any, [here](/docs/ambient/install/platform-prerequisites).

{{< tip >}}
There is a time gap between when a node becomes schedulable and when the Istio CNI plugin becomes ready on that node. If an application pod starts up during this time, it is possible that traffic redirection is not properly set up and traffic is able to bypass the Istio sidecar. This race condition is mitigated for the sidecar data plane mode by a "detect and repair" method. Please take a look at the [race condition & mitigation](/docs/setup/additional-setup/cni/#race-condition--mitigation) section to understand the implications of this mitigation, and for configuration instructions.
{{< /tip >}}

### Handling init container injection for revisions

When installing revisioned control planes with the CNI component enabled, `values.pilot.cni.enabled=true` needs to be set for each installed revision, so that the sidecar injector does not attempt to inject the `istio-init` init container for that revision.

{{< text yaml >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  revision: REVISION_NAME
  ...
  values:
    pilot:
      cni:
        enabled: true
  ...
{{< /text >}}

The CNI plugin at version `1.x` is compatible with control planes at versions `1.x-1`, `1.x`, and `1.x+1`, which means the CNI and the control plane can be upgraded in any order, as long as their version difference is within one minor version.

## Operating clusters with the CNI node agent installed

### Upgrading

When upgrading Istio with [in-place upgrade](/docs/setup/upgrade/in-place/), the CNI component can be upgraded together with the control plane using one `IstioOperator` resource.

When upgrading Istio with [canary upgrade](/docs/setup/upgrade/canary/), because the CNI component runs as a cluster singleton, it is recommended to operate and upgrade the CNI component separately from the revisioned control plane. The following `IstioOperator` can be used to upgrade the CNI component independently:

{{< text yaml >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: empty # Do not include other components
  components:
    cni:
      enabled: true
  values:
    cni:
      excludeNamespaces:
      - istio-system
{{< /text >}}

This is not a problem for Helm, as `istio-cni` is installed separately and can be upgraded via Helm:

{{< text syntax=bash snip_id=cni_agent_helm_upgrade >}}
$ helm upgrade istio-cni istio/cni -n istio-system --wait
{{< /text >}}

### Race condition & mitigation

The Istio CNI DaemonSet installs the CNI network plugin on every node. However, a time gap exists between when the DaemonSet pod gets scheduled onto a node and when the CNI plugin is installed and ready to be used.

There is a chance that an application pod starts up during that time gap, and the `kubelet` has no knowledge of the Istio CNI plugin. The result is that the application pod comes up without Istio traffic redirection and bypasses the Istio sidecar.

To mitigate the race between an application pod and the Istio CNI DaemonSet, an `istio-validation` init container is added as part of the sidecar injection, which detects if traffic redirection is set up correctly and blocks the pod from starting up if not. The CNI DaemonSet will detect and handle any pod stuck in such a state; how the pod is handled depends on the configuration described below. This mitigation is enabled by default and can be turned off by setting `values.cni.repair.enabled` to false.

This repair capability can be further configured with different RBAC permissions to help mitigate the theoretical attack vector detailed in [`ISTIO-SECURITY-2023-005`](/news/security/istio-security-2023-005/). By setting the fields below to true/false as required, you can select the Kubernetes RBAC permissions granted to the Istio CNI.

| Configuration | Roles | Behavior on Error | Notes |
| ------------- | ----- | ----------------- | ----- |
| `values.cni.repair.deletePods` | DELETE pods | Pods are deleted; when rescheduled, they will have the correct configuration. | Default in 1.20 and older |
| `values.cni.repair.labelPods` | UPDATE pods | Pods are only labeled. The user will need to take manual action to resolve. | |
| `values.cni.repair.repairPods` | None | Pods are dynamically reconfigured to have the appropriate configuration. When the container restarts, the pod will continue normal execution. | Default in 1.21 and newer |

### Traffic redirection parameters

To redirect traffic in the application pod's network namespace to/from the Istio proxy sidecar, the Istio CNI plugin configures the namespace's iptables. You can adjust traffic redirection parameters using the same pod annotations as normal, such as ports and IP ranges to be included or excluded from redirection. See [resource annotations](/docs/reference/config/annotations) for available parameters.

### Compatibility with application init containers

The Istio CNI plugin may cause networking connectivity problems for any application init containers in
sidecar data plane mode. When using Istio CNI, `kubelet` starts a pod with the following steps:

1. The default interface CNI plugin sets up pod network interfaces and assigns pod IPs.
1. The Istio CNI plugin sets up traffic redirection to the Istio sidecar proxy within the pod.
1. All init containers execute and complete successfully.
1. The Istio sidecar proxy starts in the pod along with the pod's other containers.

Init containers execute before the sidecar proxy starts, which can result in traffic loss during their execution. Avoid this traffic loss with one of the following settings:

1. Set the `uid` of the init container to `1337` using `runAsUser`. `1337` is the [`uid` used by the sidecar proxy](/docs/ops/deployment/application-requirements/#pod-requirements). Traffic sent by this `uid` is not captured by Istio's `iptables` rules. Application container traffic will still be captured as usual.
1. Set the `traffic.sidecar.istio.io/excludeOutboundIPRanges` annotation to disable redirecting traffic to any CIDRs the init containers communicate with.
1. Set the `traffic.sidecar.istio.io/excludeOutboundPorts` annotation to disable redirecting traffic to the specific outbound ports the init containers use.

{{< tip >}}
You must use the `runAsUser 1337` workaround if [DNS proxying](/docs/ops/configuration/traffic-management/dns-proxy/) is enabled and an init container sends traffic to a hostname which requires DNS resolution.
{{< /tip >}}

{{< tip >}}
Some platforms (e.g.
OpenShift) do not use `1337` as the sidecar `uid` and instead use a pseudo-random number that is only known at runtime. In such cases, you can instruct the proxy to run as a predefined `uid` by leveraging the [custom injection feature](/docs/setup/additional-setup/sidecar-injection/#customizing-injection), and use that same `uid` for the init container.
{{< /tip >}}

{{< warning >}}
Please use traffic capture exclusions with caution, since the IP/port exclusion annotations apply not only to init container traffic, but also to application container traffic, i.e. application traffic sent to the configured IP/port will bypass the Istio sidecar.
{{< /warning >}}

### Compatibility with other CNIs

The Istio CNI plugin follows the [CNI spec](https://www.cni.dev/docs/spec/#container-network-interface-cni-specification), and should be compatible with any CNI, container runtime, or other plugin that also follows the spec.

The Istio CNI plugin operates as a chained CNI plugin. This means its configuration is appended to the list of existing CNI plugin configurations. See the [CNI specification reference](https://www.cni.dev/docs/spec/#section-1-network-configuration-format) for further details. When a pod is created or deleted, the container runtime invokes each plugin in the list in order. The Istio CNI plugin performs actions to set up the application pod's traffic redirection: in the sidecar data plane mode, this means applying `iptables` rules in the pod's network namespace to redirect in-pod traffic to the injected Istio proxy sidecar.
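To illustrate the chained model, a node's CNI network configuration list might look roughly like the following after the Istio plugin is appended. This is an illustrative sketch only: the plugin entries, field names, and values shown are examples, and the real file is generated by the `istio-cni` node agent for your specific primary CNI:

```json
{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "plugins": [
    {
      "type": "calico",
      "comment": "primary interface plugin configuration (illustrative)"
    },
    {
      "type": "istio-cni",
      "comment": "Istio chained plugin, appended last and invoked after the primary plugin"
    }
  ]
}
```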
## Injection

In order to take advantage of all of Istio's features, pods in the mesh must be running an Istio sidecar proxy. The following sections describe two ways of injecting the Istio sidecar into a pod: enabling automatic Istio sidecar injection in the pod's namespace, or manually using the [`istioctl`](/docs/reference/commands/istioctl) command.

When enabled in a pod's namespace, automatic injection injects the proxy configuration at pod creation time using an admission controller. Manual injection directly modifies configuration, like deployments, by adding the proxy configuration into it. If you are not sure which one to use, automatic injection is recommended.

### Automatic sidecar injection

Sidecars can be automatically added to applicable Kubernetes pods using a [mutating webhook admission controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) provided by Istio.

{{< tip >}}
While admission controllers are enabled by default, some Kubernetes distributions may disable them. If this is the case, follow the instructions to [turn on admission controllers](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#how-do-i-turn-on-an-admission-controller).
{{< /tip >}}

When you set the `istio-injection=enabled` label on a namespace and the injection webhook is enabled, any new pods that are created in that namespace will automatically have a sidecar added to them.

Note that unlike manual injection, automatic injection occurs at the pod level. You won't see any change to the deployment itself. Instead, you'll want to check individual pods (via `kubectl describe`) to see the injected proxy.

#### Deploying an app

Deploy the curl app. Verify that both the deployment and the pod have a single container.
{{< text bash >}}
$ kubectl apply -f @samples/curl/curl.yaml@
$ kubectl get deployment -o wide
NAME   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES            SELECTOR
curl   1/1     1            1           12s   curl         curlimages/curl   app=curl
{{< /text >}}

{{< text bash >}}
$ kubectl get pod
NAME                   READY   STATUS    RESTARTS   AGE
curl-8f795f47d-hdcgs   1/1     Running   0          42s
{{< /text >}}

Label the `default` namespace with `istio-injection=enabled`.

{{< text bash >}}
$ kubectl label namespace default istio-injection=enabled --overwrite
$ kubectl get namespace -L istio-injection
NAME      STATUS   AGE    ISTIO-INJECTION
default   Active   5m9s   enabled
...
{{< /text >}}

Injection occurs at pod creation time. Kill the running pod and verify that a new pod is created with the injected sidecar. The original pod has `1/1 READY` containers, and the pod with the injected sidecar has `2/2 READY` containers.

{{< text bash >}}
$ kubectl delete pod -l app=curl
$ kubectl get pod -l app=curl
pod "curl-776b7bcdcd-7hpnk" deleted
NAME                    READY   STATUS        RESTARTS   AGE
curl-776b7bcdcd-7hpnk   1/1     Terminating   0          1m
curl-776b7bcdcd-bhn9m   2/2     Running       0          7s
{{< /text >}}

View the detailed state of the injected pod. You should see the injected `istio-proxy` container and corresponding volumes.

{{< text bash >}}
$ kubectl describe pod -l app=curl
...
Events:
  Type    Reason   Age   From      Message
  ----    ------   ----  ----      -------
  ...
  Normal  Created  11s   kubelet   Created container istio-init
  Normal  Started  11s   kubelet   Started container istio-init
  ...
  Normal  Created  10s   kubelet   Created container curl
  Normal  Started  10s   kubelet   Started container curl
  ...
  Normal  Created  9s    kubelet   Created container istio-proxy
  Normal  Started  8s    kubelet   Started container istio-proxy
{{< /text >}}

Disable injection for the `default` namespace and verify new pods are created without the sidecar.
{{< text bash >}}
$ kubectl label namespace default istio-injection-
$ kubectl delete pod -l app=curl
$ kubectl get pod
namespace/default labeled
pod "curl-776b7bcdcd-bhn9m" deleted
NAME                    READY   STATUS        RESTARTS   AGE
curl-776b7bcdcd-bhn9m   2/2     Terminating   0          2m
curl-776b7bcdcd-gmvnr   1/1     Running       0          2s
{{< /text >}}

#### Controlling the injection policy

In the above examples, you enabled and disabled injection at the namespace level. Injection can also be controlled on a per-pod basis, by configuring the `sidecar.istio.io/inject` label on a pod:
Source: https://github.com/istio/istio.io/blob/master//content/en/docs/setup/additional-setup/sidecar-injection/index.md
| Resource | Label | Enabled value | Disabled value |
| -------- | ----- | ------------- | -------------- |
| Namespace | `istio-injection` | `enabled` | `disabled` |
| Pod | `sidecar.istio.io/inject` | `"true"` | `"false"` |

If you are using [control plane revisions](/docs/setup/upgrade/canary/), revision-specific labels are instead used, via a matching `istio.io/rev` label. For example, for a revision named `canary`:

| Resource | Enabled label | Disabled label |
| -------- | ------------- | -------------- |
| Namespace | `istio.io/rev=canary` | `istio-injection=disabled` |
| Pod | `istio.io/rev=canary` | `sidecar.istio.io/inject="false"` |

If the `istio-injection` label and the `istio.io/rev` label are both present on the same namespace, the `istio-injection` label takes precedence.

The injector is configured with the following logic:

1. If either label (`istio-injection` or `sidecar.istio.io/inject`) is disabled, the pod is not injected.
1. If either label (`istio-injection` or `sidecar.istio.io/inject` or `istio.io/rev`) is enabled, the pod is injected.
1. If neither label is set, the pod is injected if `.values.sidecarInjectorWebhook.enableNamespacesByDefault` is enabled. This is not enabled by default, so generally this means the pod is not injected.

### Manual sidecar injection

To manually inject a deployment, use [`istioctl kube-inject`](/docs/reference/commands/istioctl/#istioctl-kube-inject):

{{< text bash >}}
$ istioctl kube-inject -f @samples/curl/curl.yaml@ | kubectl apply -f -
serviceaccount/curl created
service/curl created
deployment.apps/curl created
{{< /text >}}

By default, this will use the in-cluster configuration. Alternatively, injection can be done using local copies of the configuration.
{{< text bash >}}
$ kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath='{.data.config}' > inject-config.yaml
$ kubectl -n istio-system get configmap istio-sidecar-injector -o=jsonpath='{.data.values}' > inject-values.yaml
$ kubectl -n istio-system get configmap istio -o=jsonpath='{.data.mesh}' > mesh-config.yaml
{{< /text >}}

Run `kube-inject` over the input file and deploy.

{{< text bash >}}
$ istioctl kube-inject \
    --injectConfigFile inject-config.yaml \
    --meshConfigFile mesh-config.yaml \
    --valuesFile inject-values.yaml \
    --filename @samples/curl/curl.yaml@ \
    | kubectl apply -f -
serviceaccount/curl created
service/curl created
deployment.apps/curl created
{{< /text >}}

Verify that the sidecar has been injected into the curl pod, with `2/2` under the READY column.

{{< text bash >}}
$ kubectl get pod -l app=curl
NAME                    READY   STATUS    RESTARTS   AGE
curl-64c6f57bc8-f5n4x   2/2     Running   0          24s
{{< /text >}}

## Customizing injection

Generally, pods are injected based on the sidecar injection template, configured in the `istio-sidecar-injector` configmap. Per-pod configuration is available to override these options on individual pods. This is done by adding an `istio-proxy` container to your pod. The sidecar injection will treat any configuration defined here as an override to the default injection template. Care should be taken when customizing these settings, as this allows complete customization of the resulting `Pod`, including making changes that cause the sidecar container to not function properly.
For example, the following configuration customizes a variety of settings, including lowering the CPU requests, adding a volume mount, and adding a `preStop` hook:

{{< text yaml >}}
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: hello
    image: alpine
  - name: istio-proxy
    image: auto
    resources:
      requests:
        cpu: "100m"
    volumeMounts:
    - mountPath: /etc/certs
      name: certs
    lifecycle:
      preStop:
        exec:
          command: ["curl", "10"]
  volumes:
  - name: certs
    secret:
      secretName: istio-certs
{{< /text >}}

In general, any field in a pod can be set. However, care must be taken for certain fields:

* Kubernetes requires the `image` field to be set before the injection has run. While you can set a specific image to override the default one, it is recommended to set the `image` to `auto` which will cause
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/additional-setup/sidecar-injection/index.md
the sidecar injector to automatically select the image to use.
* Some fields in `Pod` are dependent on related settings. For example, CPU request must be less than CPU limit. If both fields are not configured together, the pod may fail to start.
* Fields `securityContext.RunAsUser` and `securityContext.RunAsGroup` might not be honored in some cases, for instance, when `TPROXY` mode is used, as it requires the sidecar to run as user `0`. Overriding these fields incorrectly can cause traffic loss, and should be done with extreme caution.

{{< warning >}}
Other admission controllers may execute against the Pod spec prior to Istio injection, which can mutate or reject the configuration. For instance, `LimitRange` may automatically insert resource requests prior to Istio adding its configured resources, giving unexpected results.
{{< /warning >}}

Additionally, certain fields are configurable by [annotations](/docs/reference/config/annotations/) on the pod, although it is recommended to use the above approach to customizing settings. Additional care must be taken for certain annotations:

* If `sidecar.istio.io/proxyCPU` is set, make sure to explicitly set `sidecar.istio.io/proxyCPULimit`. Otherwise the sidecar's `cpu` limit will be set as unlimited.
* If `sidecar.istio.io/proxyMemory` is set, make sure to explicitly set `sidecar.istio.io/proxyMemoryLimit`. Otherwise the sidecar's `memory` limit will be set as unlimited.
For example, see the below incomplete resources annotation configuration and the corresponding injected resource settings:

{{< text yaml >}}
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/proxyCPU: "200m"
        sidecar.istio.io/proxyMemoryLimit: "5Gi"
{{< /text >}}

{{< text yaml >}}
spec:
  containers:
  - name: istio-proxy
    resources:
      limits:
        memory: 5Gi
      requests:
        cpu: 200m
        memory: 5Gi
    securityContext:
      allowPrivilegeEscalation: false
{{< /text >}}

### Custom templates (experimental)

{{< warning >}}
This feature is experimental and subject to change, or removal, at any time.
{{< /warning >}}

Completely custom templates can also be defined at installation time. For example, to define a custom template that injects the `GREETING` environment variable into the `istio-proxy` container:

{{< text yaml >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio
spec:
  values:
    sidecarInjectorWebhook:
      templates:
        custom: |
          spec:
            containers:
            - name: istio-proxy
              env:
              - name: GREETING
                value: hello-world
{{< /text >}}

Pods will, by default, use the `sidecar` injection template, which is automatically created. This can be overridden by the `inject.istio.io/templates` annotation. For example, to apply the default template and our customization, you can set `inject.istio.io/templates=sidecar,custom`. In addition to the `sidecar`, a `gateway` template is provided by default to support proxy injection into Gateway deployments.
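To make the annotation concrete, a workload can opt into both templates through its pod template. This is a minimal sketch, assuming a custom template named `custom` has been defined as above; the Deployment name, labels, and container are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeter   # illustrative name
spec:
  selector:
    matchLabels:
      app: greeter
  template:
    metadata:
      annotations:
        # Apply the default sidecar template, then the custom overlay.
        inject.istio.io/templates: sidecar,custom
      labels:
        app: greeter
    spec:
      containers:
      - name: app
        image: alpine
```

With this annotation in place, the injected `istio-proxy` container will carry the `GREETING` environment variable from the `custom` template in addition to the default sidecar configuration.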
## Prerequisites

* Istio 1.17 or later.
* Kubernetes 1.23 or later [configured for dual-stack operations](https://kubernetes.io/docs/concepts/services-networking/dual-stack/).

## Installation steps

If you want to use `kind` for your test, you can set up a dual-stack cluster with the following command:

{{< text syntax=bash snip_id=none >}}
$ kind create cluster --name istio-ds --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: dual
EOF
{{< /text >}}

To enable dual-stack for Istio, you will need to modify your `IstioOperator` or Helm values with the following configuration.

{{< tabset category-name="dualstack" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

{{< text syntax=yaml snip_id=none >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        ISTIO_DUAL_STACK: "true"
  values:
    pilot:
      env:
        ISTIO_DUAL_STACK: "true"
    ipFamilyPolicy: RequireDualStack
    # The below values are optional and can be used based on your requirements
    gateways:
      istio-ingressgateway:
        ipFamilyPolicy: RequireDualStack
      istio-egressgateway:
        ipFamilyPolicy: RequireDualStack
{{< /text >}}

{{< /tab >}}

{{< tab name="Helm" category-value="helm" >}}

{{< text syntax=yaml snip_id=none >}}
meshConfig:
  defaultConfig:
    proxyMetadata:
      ISTIO_DUAL_STACK: "true"
pilot:
  env:
    ISTIO_DUAL_STACK: "true"
ipFamilyPolicy: RequireDualStack
# The below values are optional and can be used based on your requirements
gateways:
  istio-ingressgateway:
    ipFamilyPolicy: RequireDualStack
  istio-egressgateway:
    ipFamilyPolicy: RequireDualStack
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

## Verification

1. Create three namespaces:

    * `dual-stack`: `tcp-echo` will listen on both an IPv4 and IPv6 address.
    * `ipv4`: `tcp-echo` will listen on only an IPv4 address.
    * `ipv6`: `tcp-echo` will listen on only an IPv6 address.

    {{< text bash >}}
    $ kubectl create namespace dual-stack
    $ kubectl create namespace ipv4
    $ kubectl create namespace ipv6
    {{< /text >}}

1.
Enable sidecar injection on all of those namespaces as well as the `default` namespace:

    {{< text bash >}}
    $ kubectl label --overwrite namespace default istio-injection=enabled
    $ kubectl label --overwrite namespace dual-stack istio-injection=enabled
    $ kubectl label --overwrite namespace ipv4 istio-injection=enabled
    $ kubectl label --overwrite namespace ipv6 istio-injection=enabled
    {{< /text >}}

1. Create [tcp-echo]({{< github_tree >}}/samples/tcp-echo) deployments in the namespaces:

    {{< text bash >}}
    $ kubectl apply --namespace dual-stack -f @samples/tcp-echo/tcp-echo-dual-stack.yaml@
    $ kubectl apply --namespace ipv4 -f @samples/tcp-echo/tcp-echo-ipv4.yaml@
    $ kubectl apply --namespace ipv6 -f @samples/tcp-echo/tcp-echo-ipv6.yaml@
    {{< /text >}}

1. Deploy the [curl]({{< github_tree >}}/samples/curl) sample app to use as a test source for sending requests.

    {{< text bash >}}
    $ kubectl apply -f @samples/curl/curl.yaml@
    {{< /text >}}

1. Verify the traffic reaches the dual-stack pods:

    {{< text bash >}}
    $ kubectl exec "$(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo dualstack | nc tcp-echo.dual-stack 9000"
    hello dualstack
    {{< /text >}}

1. Verify the traffic reaches the IPv4 pods:

    {{< text bash >}}
    $ kubectl exec "$(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo ipv4 | nc tcp-echo.ipv4 9000"
    hello ipv4
    {{< /text >}}

1. Verify the traffic reaches the IPv6 pods:

    {{< text bash >}}
    $ kubectl exec "$(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo ipv6 | nc tcp-echo.ipv6 9000"
    hello ipv6
    {{< /text >}}

1.
Verify the envoy listeners:

    {{< text syntax=bash snip_id=none >}}
    $ istioctl proxy-config listeners "$(kubectl get pod -n dual-stack -l app=tcp-echo -o jsonpath='{.items[0].metadata.name}')" -n dual-stack --port 9000 -ojson | jq '.[] | {name: .name, address: .address, additionalAddresses: .additionalAddresses}'
    {{< /text >}}

    You will see listeners are now bound to multiple addresses, but only for dual stack services. Other services will only be listening on a single IP address.

    {{< text syntax=json snip_id=none >}}
    "name": "fd00:10:96::f9fc_9000",
    "address": {
        "socketAddress": {
            "address": "fd00:10:96::f9fc",
            "portValue": 9000
        }
    },
    "additionalAddresses": [
        {
            "address": {
                "socketAddress": {
                    "address": "10.96.106.11",
                    "portValue": 9000
                }
            }
        }
    ],
    {{< /text >}}

1. Verify virtual inbound addresses are configured to listen on both `0.0.0.0` and `[::]`.

    {{< text syntax=bash snip_id=none >}}
    $ istioctl proxy-config listeners "$(kubectl get pod -n dual-stack -l app=tcp-echo -o
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/additional-setup/dual-stack/index.md
}, "additionalAddresses": [ { "address": { "socketAddress": { "address": "10.96.106.11", "portValue": 9000 } } } ], {{< /text >}} 1. Verify virtual inbound addresses are configured to listen on both `0.0.0.0` and `[::]`. {{< text syntax=bash snip\_id=none >}} $ istioctl proxy-config listeners "$(kubectl get pod -n dual-stack -l app=tcp-echo -o jsonpath='{.items[0].metadata.name}')" -n dual-stack -o json | jq '.[] | select(.name=="virtualInbound") | {name: .name, address: .address, additionalAddresses: .additionalAddresses}' {{< /text >}} {{< text syntax=json snip\_id=none >}} "name": "virtualInbound", "address": { "socketAddress": { "address": "0.0.0.0", "portValue": 15006 } }, "additionalAddresses": [ { "address": { "socketAddress": { "address": "::", "portValue": 15006 } } } ], {{< /text >}} 1. Verify envoy endpoints are configured to route to both IPv4 and IPv6: {{< text syntax=bash snip\_id=none >}} $ istioctl proxy-config endpoints "$(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}')" --port 9000 ENDPOINT STATUS OUTLIER CHECK CLUSTER 10.244.0.19:9000 HEALTHY OK outbound|9000||tcp-echo.ipv4.svc.cluster.local 10.244.0.26:9000 HEALTHY OK outbound|9000||tcp-echo.dual-stack.svc.cluster.local fd00:10:244::1a:9000 HEALTHY OK outbound|9000||tcp-echo.dual-stack.svc.cluster.local fd00:10:244::18:9000 HEALTHY OK outbound|9000||tcp-echo.ipv6.svc.cluster.local {{< /text >}} Now you can experiment with dual-stack services in your environment! ## Cleanup 1. Cleanup application namespaces and deployments {{< text bash >}} $ kubectl delete -f @samples/curl/curl.yaml@ $ kubectl delete ns dual-stack ipv4 ipv6 {{< /text >}}
## Prerequisites

Before you begin, check the following prerequisites:

1. [Download the Istio release](/docs/setup/additional-setup/download-istio-release/).
1. Perform any necessary [platform-specific setup](/docs/setup/platform-setup/).
1. Check the [Requirements for Pods and Services](/docs/ops/deployment/application-requirements/).
1. [Usage of Helm for Istio installation](/docs/setup/install/helm).
1. A Helm version that supports post-rendering (>= 3.1).
1. kubectl or kustomize.

## Advanced Helm Chart Customization

Istio's Helm charts try to incorporate most of the attributes users need for their specific requirements. However, they do not expose every possible Kubernetes value you may want to tweak. Since it is not practical to add a knob for every field, this document demonstrates a method for advanced Helm chart customization that does not require directly modifying Istio's Helm charts.

### Using Helm with kustomize to post-render Istio charts

Using the Helm `post-renderer` capability, you can easily tweak the installation manifests to meet your requirements. `Post-rendering` gives the flexibility to manipulate, configure, and/or validate rendered manifests before they are installed by Helm. This enables users with advanced configuration needs to use tools like Kustomize to apply configuration changes without the need for any additional support from the original chart maintainers.

### Adding a value to an already existing chart

In this example, we will add a `sysctl` value to Istio's `ingress-gateway` deployment. We are going to:

1. Create a `sysctl` deployment customization patch template.
1. Apply the patch using Helm `post-rendering`.
1. Verify that the `sysctl` patch was correctly applied to the pods.
## Create the Kustomization

First, we create a `sysctl` patch file, adding a `securityContext` to the `ingress-gateway` pod with the additional attribute:

{{< text bash >}}
$ cat > sysctl-ingress-gw-customization.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingress
spec:
  template:
    spec:
      securityContext:
        sysctls:
        - name: net.netfilter.nf_conntrack_tcp_timeout_close_wait
          value: "10"
EOF
{{< /text >}}

The below shell script helps to bridge the gap between Helm `post-renderer` and Kustomize, as the former works with `stdin/stdout` and the latter works with files.

{{< text bash >}}
$ cat > kustomize.sh <<EOF
#!/bin/sh
cat > base.yaml
exec kubectl kustomize # you can also use "kustomize build ." if you have it installed.
EOF
$ chmod +x ./kustomize.sh
{{< /text >}}

Finally, let us create the `kustomization` yaml file, which is the input for `kustomize` with the set of resources and associated customization details.

{{< text bash >}}
$ cat > kustomization.yaml <<EOF
resources:
- base.yaml
patches:
- path: sysctl-ingress-gw-customization.yaml
EOF
{{< /text >}}

## Apply the Kustomization

Now that the Kustomization file is ready, let us use Helm to make sure this gets applied properly.

### Add the Helm repository for Istio

{{< text bash >}}
$ helm repo add istio https://istio-release.storage.googleapis.com/charts
$ helm repo update
{{< /text >}}

### Render and Verify using Helm Template

We can use Helm `post-renderer` to validate rendered manifests before they are installed by Helm:

{{< text bash >}}
$ helm template istio-ingress istio/gateway --namespace istio-ingress --post-renderer ./kustomize.sh | grep -B 2 -A 1 netfilter.nf_conntrack_tcp_timeout_close_wait
{{< /text >}}

In the output, check for the newly added `sysctl` attribute for the `ingress-gateway` pod:

{{< text yaml >}}
securityContext:
  sysctls:
  - name: net.netfilter.nf_conntrack_tcp_timeout_close_wait
    value: "10"
{{< /text >}}

### Apply the patch using Helm `Post-Renderer`

Use the below command to install an Istio ingress-gateway, applying our customization using Helm `post-renderer`:

{{< text bash >}}
$ kubectl create ns istio-ingress
$ helm upgrade -i istio-ingress istio/gateway --namespace istio-ingress --wait --post-renderer ./kustomize.sh
{{< /text >}}

## Verify the Kustomization

Examine the
ingress-gateway deployment, you will see the newly manipulated `sysctl` value: {{< text bash >}} $ kubectl -n istio-ingress get deployment istio-ingress -o yaml {{< /text >}} {{< text yaml >}} apiVersion: apps/v1 kind: Deployment metadata: … name: istio-ingress namespace: istio-ingress spec: template: metadata: … spec: securityContext: sysctls: - name: net.netfilter.nf\_conntrack\_tcp\_timeout\_close\_wait value: "10" {{< /text >}} ## Additional Information For further detailed
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/additional-setup/customize-installation-helm/index.md
information about the concepts and techniques described in this document, please refer to:

1. [IstioOperator - Customize Installation](/docs/setup/additional-setup/customize-installation)
1. [Advanced Helm Techniques](https://helm.sh/docs/topics/advanced/)
1. [Kustomize](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/)
Each Istio release includes a _release archive_ which contains:

- the [`istioctl`](/docs/ops/diagnostic-tools/istioctl/) binary
- [installation profiles](/docs/setup/additional-setup/config-profiles/) and [Helm charts](/docs/setup/install/helm)
- samples, including the [Bookinfo](/docs/examples/bookinfo/) application

A release archive is built for each supported processor architecture and operating system.

## Download Istio {#download}

1. Go to the [Istio release]({{< istio_release_url >}}) page to download the installation file for your OS, or download and extract the latest release automatically (Linux or macOS):

    {{< text bash >}}
    $ curl -L https://istio.io/downloadIstio | sh -
    {{< /text >}}

    {{< tip >}}
    The command above downloads the latest release (numerically) of Istio. You can pass variables on the command line to download a specific version or to override the processor architecture. For example, to download Istio {{< istio_full_version >}} for the x86_64 architecture, run:

    {{< text bash >}}
    $ curl -L https://istio.io/downloadIstio | ISTIO_VERSION={{< istio_full_version >}} TARGET_ARCH=x86_64 sh -
    {{< /text >}}
    {{< /tip >}}

1. Move to the Istio package directory. For example, if the package is `istio-{{< istio_full_version >}}`:

    {{< text syntax=bash snip_id=none >}}
    $ cd istio-{{< istio_full_version >}}
    {{< /text >}}

    The installation directory contains:

    - Sample applications in `samples/`
    - The [`istioctl`](/docs/reference/commands/istioctl) client binary in the `bin/` directory.

1. Add the `istioctl` client to your path (Linux or macOS):

    {{< text bash >}}
    $ export PATH=$PWD/bin:$PATH
    {{< /text >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/additional-setup/download-istio-release/index.md
## Prerequisites Before you begin, check the following prerequisites: 1. [Download the Istio release](/docs/setup/additional-setup/download-istio-release/). 1. Perform any necessary [platform-specific setup](/docs/setup/platform-setup/). 1. Check the [Requirements for Pods and Services](/docs/ops/deployment/application-requirements/). In addition to installing any of Istio's built-in [configuration profiles](/docs/setup/additional-setup/config-profiles/), `istioctl install` provides a complete API for customizing the configuration. - [The `IstioOperator` API](/docs/reference/config/istio.operator.v1alpha1/) The configuration parameters in this API can be set individually using `--set` options on the command line. For example, to enable debug logging in a default configuration profile, use this command: {{< text bash >}} $ istioctl install --set values.global.logging.level=debug {{< /text >}} Alternatively, the `IstioOperator` configuration can be specified in a YAML file and passed to `istioctl` using the `-f` option: {{< text bash >}} $ istioctl install -f samples/operator/pilot-k8s.yaml {{< /text >}} {{< tip >}} For backwards compatibility, the previous [Helm installation options](https://archive.istio.io/v1.4/docs/reference/config/installation-options/), with the exception of Kubernetes resource settings, are also fully supported. To set them on the command line, prepend the option name with "`values.`". For example, the following command overrides the `pilot.traceSampling` Helm configuration option: {{< text bash >}} $ istioctl install --set values.pilot.traceSampling=0.1 {{< /text >}} Helm values can also be set in an `IstioOperator` CR (YAML file) as described in [Customize Istio settings using the Helm API](/docs/setup/additional-setup/customize-installation/#customize-istio-settings-using-the-helm-api), below. 
If you want to set Kubernetes resource settings, use the `IstioOperator` API as described in [Customize Kubernetes settings](/docs/setup/additional-setup/customize-installation/#customize-kubernetes-settings). {{< /tip >}}

### Identify an Istio component

The `IstioOperator` API defines components as shown in the table below:

| Components |
| ---------- |
| `base` |
| `pilot` |
| `ingressGateways` |
| `egressGateways` |
| `cni` |
| `istiodRemote` |

The configurable settings for each of these components are available in the API under `components.<component name>`. For example, to use the API to change (to `false`) the `enabled` setting for the `pilot` component, use `--set components.pilot.enabled=false` or set it in an `IstioOperator` resource like this:

{{< text yaml >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      enabled: false
{{< /text >}}

All of the components also share a common API for changing Kubernetes-specific settings, under `components.<component name>.k8s`, as described in the following section.

### Customize Kubernetes settings

The `IstioOperator` API allows each component's Kubernetes settings to be customized in a consistent way. Each component has a [`KubernetesResourceSpec`](/docs/reference/config/istio.operator.v1alpha1/#KubernetesResourcesSpec), which allows the following settings to be changed. Use this list to identify the setting to customize:

1. [Resources](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container)
1. [Readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/)
1. [Replica count](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
1. [`HorizontalPodAutoscaler`](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
1. [`PodDisruptionBudget`](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#how-disruption-budgets-work)
1.
[Pod annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) 1. [Service annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) 1. [`ImagePullPolicy`](https://kubernetes.io/docs/concepts/containers/images/) 1. [Priority class name](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass) 1. [Node selector](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) 1. [Affinity and anti-affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) 1. [Service](https://kubernetes.io/docs/concepts/services-networking/service/) 1. [Toleration](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) 1. [Strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) 1. [Env](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) 1. [Pod security context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod) 1. [Volumes and volume mounts](https://kubernetes.io/docs/concepts/storage/volumes/) All of these Kubernetes settings use the Kubernetes API definitions, so [Kubernetes documentation](https://kubernetes.io/docs/concepts/) can be used for reference. The following example overlay file adjusts the resources and horizontal pod autoscaling settings for Pilot: {{< text yaml >}} apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: components: pilot: k8s: resources: requests: cpu: 1000m # override from default 500m memory: 4096Mi # ... default 2048Mi hpaSpec: maxReplicas: 10 # ... default 5 minReplicas: 2 # ... 
default 1 {{< /text >}} Use `istioctl install` to apply the modified settings to the cluster: {{< text syntax="bash" repo="operator" >}} $ istioctl install -f samples/operator/pilot-k8s.yaml {{< /text >}} ### Customize Istio settings using the Helm API The `IstioOperator` API includes a pass-through interface to the [Helm API](https://archive.istio.io/v1.4/docs/reference/config/installation-options/) using the `values` field. The following YAML file configures global and Pilot settings through the Helm API: {{< text yaml >}} apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: values: pilot: traceSampling: 0.1 # override from 1.0 global: monitoringPort: 15014 {{< /text >}} Some parameters will temporarily exist in both the Helm and
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/additional-setup/customize-installation/index.md
`IstioOperator` APIs, including Kubernetes resources, namespaces and enablement settings. The Istio community recommends using the `IstioOperator` API as it is more consistent, is validated, and follows the [community graduation process](https://github.com/istio/community/blob/master/FEATURE-LIFECYCLE-CHECKLIST.md#feature-lifecycle-checklist).

### Configure gateways

Gateways are a special type of component, since multiple ingress and egress gateways can be defined. In the [`IstioOperator` API](/docs/reference/config/istio.operator.v1alpha1/), gateways are defined as a list type. The `default` profile installs one ingress gateway, called `istio-ingressgateway`. You can [inspect the default values for this gateway]({{< github_tree >}}/manifests/charts/gateways/istio-ingress/values.yaml). The built-in gateways can be customized just like any other component.

{{< warning >}}
From 1.7 onward, the gateway name must always be specified when overlaying. Not specifying any name no longer defaults to `istio-ingressgateway` or `istio-egressgateway`.
{{< /warning >}}

A new user gateway can be created by adding a new list entry:

{{< text yaml >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
    - namespace: user-ingressgateway-ns
      name: ilb-gateway
      enabled: true
      k8s:
        resources:
          requests:
            cpu: 200m
        serviceAnnotations:
          cloud.google.com/load-balancer-type: "internal"
        service:
          ports:
          - port: 8060
            targetPort: 8060
            name: tcp-citadel-grpc-tls
          - port: 5353
            name: tcp-dns
{{< /text >}}

Note that Helm values (`spec.values.gateways.istio-ingressgateway/egressgateway`) are shared by all ingress/egress gateways. If these must be customized per gateway, it is recommended to use a separate IstioOperator CR to generate a manifest for the user gateways, separate from the main Istio installation:

{{< text yaml >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: empty
  components:
    ingressGateways:
    - name: ilb-gateway
      namespace: user-ingressgateway-ns
      enabled: true
      # Copy settings from istio-ingressgateway as needed.
  values:
    gateways:
      istio-ingressgateway:
        debug: error
{{< /text >}}

## Advanced install customization

### Customizing external charts and profiles

The `istioctl` `install`, `manifest generate` and `profile` commands can use any of the following sources for charts and profiles:

- compiled in charts. This is the default if no `--manifests` option is set. The compiled in charts are the same as those in the `manifests/` directory of the Istio release `.tgz`.
- charts in the local file system, e.g., `istioctl install --manifests istio-{{< istio_full_version >}}/manifests`.

Local file system charts and profiles can be customized by editing the files in `manifests/`. For extensive changes, we recommend making a copy of the `manifests` directory and make changes there. Note, however, that the content layout in the `manifests` directory must be preserved.
Profiles, found under `manifests/profiles/`, can be edited and new ones added by creating new files with the desired profile name and a `.yaml` extension. `istioctl` scans the `profiles` subdirectory and all profiles found there can be referenced by name in the `IstioOperatorSpec` profile field. Built-in profiles are overlaid on the default profile YAML before user overlays are applied. For example, you can create a new profile file called `custom1.yaml` which customizes some settings from the `default` profile, and then apply a user overlay file on top of that: {{< text bash >}} $ istioctl manifest generate --manifests mycharts/ --set profile=custom1 -f path-to-user-overlay.yaml {{< /text >}} In this case, the `custom1.yaml` and `user-overlay.yaml` files will be overlaid on the `default.yaml` file to obtain the final values used as the input for manifest generation. In general, creating new profiles is not necessary since a similar result can be achieved by passing multiple overlay files. For example, the command above is equivalent to passing two user overlay files: {{< text bash >}} $ istioctl manifest generate --manifests
mycharts/ -f manifests/profiles/custom1.yaml -f path-to-user-overlay.yaml {{< /text >}}

Creating a custom profile is only required if you need to refer to the profile by name through the `IstioOperatorSpec`.

### Patching the output manifest

The `IstioOperator` CR, input to `istioctl`, is used to generate the output manifest containing the Kubernetes resources to be applied to the cluster. The output manifest can be further customized to add, modify or delete resources through the `IstioOperator` [overlays](/docs/reference/config/istio.operator.v1alpha1/#K8sObjectOverlay) API, after it is generated but before it is applied to the cluster. The following example overlay file (`patch.yaml`) demonstrates the type of output manifest patching that can be done: {{< text yaml >}} apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: profile: empty hub: docker.io/istio tag: 1.1.6 components: pilot: enabled: true namespace: istio-control k8s: overlays: - kind: Deployment name: istiod patches: # Select list item by value - path: spec.template.spec.containers.[name:discovery].args.[30m] value: "60m" # overridden from 30m # Select list item by key:value - path: spec.template.spec.containers.[name:discovery].ports.[containerPort:8080].containerPort value: 1234 # Override with object (note | on value: first line) - path: spec.template.spec.containers.[name:discovery].env.[name:POD_NAMESPACE].valueFrom value: | fieldRef: apiVersion: v2 fieldPath: metadata.myPath # Deletion of list item - path: spec.template.spec.containers.[name:discovery].env.[name:REVISION] # Deletion of map item - path: spec.template.spec.containers.[name:discovery].securityContext - kind:
Service name: istiod patches: - path: spec.ports.[name:https-dns].port value: 11111 # OVERRIDDEN {{< /text >}} Passing the file to `istioctl manifest generate -f patch.yaml` applies the above patches to the default profile output manifest. The two patched resources will be modified as shown below (some parts of the resources are omitted for brevity): {{< text yaml >}} apiVersion: apps/v1 kind: Deployment metadata: name: istiod spec: template: spec: containers: - args: - 60m env: - name: POD\_NAMESPACE valueFrom: fieldRef: apiVersion: v2 fieldPath: metadata.myPath name: discovery ports: - containerPort: 1234 --- apiVersion: v1 kind: Service metadata: name: istiod spec: ports: - name: https-dns port: 11111 --- {{< /text >}} Note that the patches are applied in the given order. Each patch is applied over the output from the previous patch. Paths in patches that don't exist in the output manifest will be created. ### List item path selection Both the `istioctl --set` flag and the `k8s.overlays` field in `IstioOperator` CR support list item selection by `[index]`, `[value]` or by `[key:value]`. The `--set` flag also creates any intermediate nodes in the path that are missing in the resource.
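The path-selection rules described above can be sketched in a few lines of Python. This is a hypothetical `resolve` helper written for illustration, not `istioctl`'s actual implementation: segments are split on `.`, a `[key:value]` segment selects the list item whose `key` field equals `value`, and a `[value]` segment selects a list item by its literal value.

```python
# Illustrative sketch of overlay path resolution (not Istio's real code).
def resolve(node, path):
    """Walk `node` following a dot-separated overlay path and return the target."""
    for seg in path.split("."):
        if seg.startswith("[") and seg.endswith("]"):
            inner = seg[1:-1]
            if ":" in inner:                      # [key:value] selector
                key, _, val = inner.partition(":")
                node = next(i for i in node if str(i.get(key)) == val)
            else:                                 # [value] selector
                node = next(i for i in node if str(i) == inner)
        else:                                     # plain map key
            node = node[seg]
    return node

manifest = {"spec": {"containers": [
    {"name": "discovery", "args": ["30m"], "ports": [{"containerPort": 8080}]},
]}}

# Mirrors the patches above: select a list item by key:value, override a scalar...
port = resolve(manifest, "spec.containers.[name:discovery].ports.[containerPort:8080]")
port["containerPort"] = 1234

# ...and replace a list entry selected by value ("30m" -> "60m").
args = resolve(manifest, "spec.containers.[name:discovery].args")
args[args.index("30m")] = "60m"
```

Applying patches in order over the previous output, as the text describes, amounts to calling this kind of resolution once per patch against the mutated manifest.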
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/additional-setup/customize-installation/index.md
{{< tip >}} {{< boilerplate gateway-api-future >}} If you use the Gateway API, you will not need to install and manage a gateway `Deployment` as described in this document. By default, a gateway `Deployment` and `Service` will be automatically provisioned based on the `Gateway` configuration. Refer to the [Gateway API task](/docs/tasks/traffic-management/ingress/gateway-api/#automated-deployment) for details. {{< /tip >}} Along with creating a service mesh, Istio allows you to manage [gateways](/docs/concepts/traffic-management/#gateways), which are Envoy proxies running at the edge of the mesh, providing fine-grained control over traffic entering and leaving the mesh. Some of Istio's built in [configuration profiles](/docs/setup/additional-setup/config-profiles/) deploy gateways during installation. For example, a call to `istioctl install` with [default settings](/docs/setup/install/istioctl/#install-istio-using-the-default-profile) will deploy an ingress gateway along with the control plane. Although fine for evaluation and simple use cases, this couples the gateway to the control plane, making management and upgrade more complicated. For production Istio deployments, it is highly recommended to decouple these to allow independent operation. Follow this guide to separately deploy and manage one or more gateways in a production installation of Istio. ## Prerequisites This guide requires the Istio control plane [to be installed](/docs/setup/install/) before proceeding. {{< tip >}} You can use the `minimal` profile, for example `istioctl install --set profile=minimal`, to prevent any gateways from being deployed during installation. {{< /tip >}} ## Deploying a gateway Using the same mechanisms as [Istio sidecar injection](/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection), the Envoy proxy configuration for gateways can similarly be auto-injected. 
Using auto-injection for gateway deployments is recommended as it gives developers full control over the gateway deployment, while also simplifying operations. When a new upgrade is available, or a configuration has changed, gateway pods can be updated by simply restarting them. This makes the experience of operating a gateway deployment the same as operating sidecars. To support users with existing deployment tools, Istio provides a few different ways to deploy a gateway. Each method will produce the same result. Choose the method you are most familiar with. {{< tip >}} As a security best practice, it is recommended to deploy the gateway in a different namespace from the control plane. {{< /tip >}} All methods listed below rely on [Injection](/docs/setup/additional-setup/sidecar-injection/) to populate additional pod settings at runtime. In order to support this, the namespace the gateway is deployed in must not have the `istio-injection=disabled` label. If it does, you will see pods failing to startup attempting to pull the `auto` image, which is a placeholder that is intended to be replaced when a pod is created. {{< tabset category-name="gateway-install-type" >}} {{< tab name="IstioOperator" category-value="iop" >}} First, setup an `IstioOperator` configuration file, called `ingress.yaml` here: {{< text yaml >}} apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: name: ingress spec: profile: empty # Do not install CRDs or the control plane components: ingressGateways: - name: istio-ingressgateway namespace: istio-ingress enabled: true label: # Set a unique label for the gateway. 
This is required to ensure Gateways # can select this workload istio: ingressgateway values: gateways: istio-ingressgateway: # Enable gateway injection injectionTemplate: gateway {{< /text >}} Then install using standard `istioctl` commands: {{< text bash >}} $ kubectl create namespace istio-ingress $ istioctl install -f ingress.yaml {{< /text >}} {{< /tab >}} {{< tab name="Helm" category-value="helm" >}} Install using standard `helm` commands: {{< text bash >}} $ kubectl create namespace istio-ingress $ helm install istio-ingressgateway istio/gateway -n istio-ingress
{{< /text >}} To see possible supported configuration values, run `helm show values istio/gateway`. The Helm repository [README](https://artifacthub.io/packages/helm/istio-official/gateway) contains additional information on usage. {{< tip >}} When deploying the gateway in an OpenShift cluster, use the `openshift` profile to override the default values, for example: {{< text bash >}} $ helm install istio-ingressgateway istio/gateway -n istio-ingress --set global.platform=openshift {{< /text >}} {{< /tip >}} {{< /tab >}} {{< tab name="Kubernetes YAML" category-value="yaml" >}} First, setup the Kubernetes configuration, called `ingress.yaml` here: {{< text yaml >}} apiVersion: v1 kind: Service metadata: name: istio-ingressgateway namespace: istio-ingress spec: type: LoadBalancer selector: istio: ingressgateway ports: - port: 80 name: http - port: 443 name: https --- apiVersion: apps/v1 kind: Deployment metadata: name: istio-ingressgateway namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway template: metadata: annotations: # Select the gateway injection template (rather than the default sidecar template) inject.istio.io/templates: gateway labels: # Set a unique label for the gateway. This is required to ensure Gateways can select this workload istio: ingressgateway # Enable gateway injection. If connecting to a revisioned control plane, replace with "istio.io/rev: revision-name" sidecar.istio.io/inject: "true" spec: # Allow binding to all ports (such as 80 and 443) securityContext: sysctls: - name: net.ipv4.ip\_unprivileged\_port\_start value: "0" containers: - name: istio-proxy image: auto # The image will automatically update each time the pod starts. 
# Drop all privileges, allowing to run as non-root securityContext: capabilities: drop: - ALL runAsUser: 1337 runAsGroup: 1337 --- # Set up roles to allow reading credentials for TLS apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: istio-ingressgateway-sds namespace: istio-ingress rules: - apiGroups: [""] resources: ["secrets"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: istio-ingressgateway-sds namespace: istio-ingress roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: istio-ingressgateway-sds subjects: - kind: ServiceAccount name: default {{< /text >}} {{< warning >}} This example shows the bare minimum needed to get a gateway running. For production usage, additional configuration such as `HorizontalPodAutoscaler`, `PodDisruptionBudget`, and resource requests/limits are recommended. These are automatically included when using the other gateway installation methods. {{< /warning >}} {{< tip >}} The `sidecar.istio.io/inject` label on the pod is used in this example to enable injection. Just like application sidecar injection, this can instead be controlled at the namespace level. See [Controlling the injection policy](/docs/setup/additional-setup/sidecar-injection/#controlling-the-injection-policy) for more information. {{< /tip >}} Next, apply it to the cluster: {{< text bash >}} $ kubectl create namespace istio-ingress $ kubectl apply -f ingress.yaml {{< /text >}} {{< /tab >}} {{< /tabset >}} ## Managing gateways The following describes how to manage gateways after installation. For more information on their usage, follow the [Ingress](/docs/tasks/traffic-management/ingress/) and [Egress](/docs/tasks/traffic-management/egress/) tasks. ### Gateway selectors The labels on a gateway deployment's pods are used by `Gateway` configuration resources, so it's important that your `Gateway` selector matches these labels. 
For example, in the above deployments, the `istio=ingressgateway` label is set on the gateway pods. To apply a `Gateway` to these deployments, you need to select the same label: {{< text yaml >}} apiVersion: networking.istio.io/v1 kind: Gateway metadata: name: gateway spec: selector: istio: ingressgateway ... {{< /text >}} ### Gateway deployment topologies Depending on your mesh configuration and use cases, you may wish to deploy gateways in different ways. A few different gateway deployment patterns are shown below. Note that more than one of these patterns can be used within the same cluster. #### Shared gateway In this model, a single centralized gateway is used by many applications, possibly across many namespaces. Gateway(s) in the `ingress` namespace delegate ownership of routes to application namespaces, but retain control over TLS configuration. {{< image width="50%" link="shared-gateway.svg" caption="Shared gateway" >}} This model works well when you have many
applications you want to expose externally, as they are able to use shared infrastructure. It also works well in use cases that have the same domain or TLS certificates shared by many applications. #### Dedicated application gateway In this model, an application namespace has its own dedicated gateway installation. This allows giving full control and ownership to a single namespace. This level of isolation can be helpful for critical applications that have strict performance or security requirements. {{< image width="50%" link="user-gateway.svg" caption="Dedicated application gateway" >}} Unless there is another load balancer in front of Istio, this typically means that each application will have its own IP address, which may complicate DNS configurations. ## Upgrading gateways ### In place upgrade Because gateways utilize pod injection, new gateway pods that are created will automatically be injected with the latest configuration, which includes the version. To pick up changes to the gateway configuration, the pods can simply be restarted, using commands such as `kubectl rollout restart deployment`. If you would like to change the [control plane revision](/docs/setup/upgrade/canary/) in use by the gateway, you can set the `istio.io/rev` label on the gateway Deployment, which will also trigger a rolling restart.
{{< image width="50%" link="inplace-upgrade.svg" caption="In place upgrade in progress" >}} ### Canary upgrade (advanced) {{< warning >}} This upgrade method depends on control plane revisions, and therefore can only be used in conjunction with [control plane canary upgrade](/docs/setup/upgrade/canary/). {{< /warning >}} If you would like to more slowly control the rollout of a new control plane revision, you can run multiple versions of a gateway deployment. For example, if you want to roll out a new revision, `canary`, create a copy of your gateway deployment with the `istio.io/rev=canary` label set: {{< text yaml >}} apiVersion: apps/v1 kind: Deployment metadata: name: istio-ingressgateway-canary namespace: istio-ingress spec: selector: matchLabels: istio: ingressgateway template: metadata: annotations: inject.istio.io/templates: gateway labels: istio: ingressgateway istio.io/rev: canary # Set to the control plane revision you want to deploy spec: containers: - name: istio-proxy image: auto {{< /text >}} When this deployment is created, you will then have two versions of the gateway, both selected by the same Service: {{< text bash >}} $ kubectl get endpoints -n istio-ingress -o "custom-columns=NAME:.metadata.name,PODS:.subsets[\*].addresses[\*].targetRef.name" NAME PODS istio-ingressgateway istio-ingressgateway-...,istio-ingressgateway-canary-... {{< /text >}} {{< image width="50%" link="canary-upgrade.svg" caption="Canary upgrade in progress" >}} Unlike application services deployed inside the mesh, you cannot use [Istio traffic shifting](/docs/tasks/traffic-management/traffic-shifting/) to distribute the traffic between the gateway versions because their traffic is coming directly from external clients that Istio does not control. Instead, you can control the distribution of traffic by the number of replicas of each deployment. If you use another load balancer in front of Istio, you may also use that to control the traffic distribution. 
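As a rough sanity check on the replica-based approach above: if the load balancer in front of the shared Service spreads connections evenly across all gateway pods, each version receives a traffic share proportional to its replica count. A back-of-the-envelope sketch:

```python
# Sketch: expected traffic share for the canary gateway when both
# Deployments sit behind one Service and the load balancer spreads
# connections evenly across pods.
def canary_share(stable_replicas: int, canary_replicas: int) -> float:
    total = stable_replicas + canary_replicas
    if total == 0:
        raise ValueError("no gateway replicas")
    return canary_replicas / total

# e.g. 9 stable pods + 1 canary pod gives the canary roughly 10% of traffic
print(f"{canary_share(9, 1):.0%}")  # → 10%
```

This is only an approximation; long-lived connections and uneven load balancing will skew the actual split.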
{{< warning >}} Because other installation methods bundle the gateway `Service`, which controls its external IP address, with the gateway `Deployment`, only the [Kubernetes YAML](/docs/setup/additional-setup/gateway/#tabset-docs-setup-additional-setup-gateway-1-2-tab) method is supported for this upgrade method. {{< /warning >}} ### Canary upgrade with external traffic shifting (advanced) A variant of the [canary upgrade](#canary-upgrade) approach is to shift the traffic between the versions using a high level construct outside Istio, such as an external load balancer or DNS. {{< image width="50%" link="high-level-canary.svg" caption="Canary upgrade in progress with external traffic shifting" >}} This offers fine-grained control, but may be unsuitable or overly
complicated to set up in some environments. ## Cleanup - Cleanup Istio ingress gateway {{< text bash >}} $ istioctl uninstall --istioNamespace istio-ingress -y --purge $ kubectl delete ns istio-ingress {{< /text >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/additional-setup/gateway/index.md
Upgrading Istio can be done by first running a canary deployment of the new control plane, allowing you to monitor the effect of the upgrade with a small percentage of the workloads before migrating all of the traffic to the new version. This is much safer than doing an [in-place upgrade](/docs/setup/upgrade/in-place/) and is the recommended upgrade method. When installing Istio, the `revision` installation setting can be used to deploy multiple independent control planes at the same time. A canary version of an upgrade can be started by installing the new Istio version's control plane next to the old one, using a different `revision` setting. Each revision is a full Istio control plane implementation with its own `Deployment`, `Service`, etc. ## Before you upgrade Before upgrading Istio, it is recommended to run the `istioctl x precheck` command to make sure the upgrade is compatible with your environment. {{< text bash >}} $ istioctl x precheck ✔ No issues found when checking the cluster. Istio is safe to install or upgrade! To get started, check out https://istio.io/latest/docs/setup/getting-started/ {{< /text >}} {{< idea >}} When using revision-based upgrades jumping across two minor versions is supported (e.g. upgrading directly from version `1.15` to `1.17`). This is in contrast to in-place upgrades where it is required to upgrade to each intermediate minor release. {{< /idea >}} ## Control plane To install a new revision called `canary`, you would set the `revision` field as follows: {{< tip >}} In a production environment, a better revision name would correspond to the Istio version. However, you must replace `.` characters in the revision name, for example, `revision={{< istio\_full\_version\_revision >}}` for Istio `{{< istio\_full\_version >}}`, because `.` is not a valid revision name character. 
{{< /tip >}} {{< text bash >}} $ istioctl install --set revision=canary {{< /text >}} After running the command, you will have two control plane deployments and services running side-by-side: {{< text bash >}} $ kubectl get pods -n istio-system -l app=istiod NAME READY STATUS RESTARTS AGE istiod-{{< istio\_previous\_version\_revision >}}-1-bdf5948d5-htddg 1/1 Running 0 47s istiod-canary-84c8d4dcfb-skcfv 1/1 Running 0 25s {{< /text >}} {{< text bash >}} $ kubectl get svc -n istio-system -l app=istiod NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE istiod-{{< istio\_previous\_version\_revision >}}-1 ClusterIP 10.96.93.151 15010/TCP,15012/TCP,443/TCP,15014/TCP 109s istiod-canary ClusterIP 10.104.186.250 15010/TCP,15012/TCP,443/TCP,15014/TCP 87s {{< /text >}} You will also see that there are two sidecar injector configurations including the new revision. {{< text bash >}} $ kubectl get mutatingwebhookconfigurations NAME WEBHOOKS AGE istio-sidecar-injector-{{< istio\_previous\_version\_revision >}}-1 2 2m16s istio-sidecar-injector-canary 2 114s {{< /text >}} ## Data plane Refer to [Gateway Canary Upgrade](/docs/setup/additional-setup/gateway/#canary-upgrade-advanced) to understand how to run revision specific instances of Istio gateway. In this example, since we use the `default` Istio profile, Istio gateways do not run revision-specific instances, but are instead in-place upgraded to use the new control plane revision. You can verify that the `istio-ingress` gateway is using the `canary` revision by running the following command: {{< text bash >}} $ istioctl proxy-status | grep "$(kubectl -n istio-system get pod -l app=istio-ingressgateway -o jsonpath='{.items..metadata.name}')" | awk -F '[[:space:]][[:space:]]+' '{print $8}' istiod-canary-6956db645c-vwhsk {{< /text >}} However, simply installing the new revision has no impact on the existing sidecar proxies. To upgrade these, you must configure them to point to the new `istiod-canary` control plane. 
This is controlled during sidecar injection based on the namespace label
`istio.io/rev`. Create a namespace `test-ns` with `istio-injection` enabled. In the `test-ns` namespace, deploy a sample curl pod: 1. Create a namespace `test-ns`. {{< text bash >}} $ kubectl create ns test-ns {{< /text >}} 1. Label the namespace using `istio-injection` label. {{< text bash >}} $ kubectl label namespace test-ns istio-injection=enabled {{< /text >}} 1. Bring up a sample curl pod in `test-ns` namespace. {{< text bash >}} $ kubectl apply -n test-ns -f samples/curl/curl.yaml {{< /text >}} To upgrade the namespace `test-ns`, remove the `istio-injection` label, and add the `istio.io/rev` label to point to the `canary` revision. The `istio-injection` label must be removed because it takes precedence over the `istio.io/rev` label for backward compatibility. {{< text bash >}} $ kubectl label namespace test-ns istio-injection- istio.io/rev=canary {{< /text >}} After the namespace updates, you need to restart the pods to trigger re-injection. One way to restart all pods in namespace `test-ns` is using: {{< text bash >}} $ kubectl rollout restart deployment -n test-ns {{< /text >}} When the pods are re-injected, they will be configured to point to the `istiod-canary` control plane. You can verify this by using `istioctl proxy-status`. {{< text bash >}} $ istioctl proxy-status | grep "\.test-ns " {{< /text >}} The output will show all pods under the namespace that are using the canary revision. ## Stable revision labels {{< tip >}} If you're using Helm, refer to the [Helm upgrade documentation](/docs/setup/upgrade/helm). {{< /tip >}} {{< boilerplate revision-tags-preamble >}} ### Usage {{< boilerplate revision-tags-usage >}} 1. Install two revisions of control plane: {{< text bash >}} $ istioctl install --revision={{< istio\_previous\_version\_revision >}}-1 --set profile=minimal --skip-confirmation $ istioctl install --revision={{< istio\_full\_version\_revision >}} --set profile=minimal --skip-confirmation {{< /text >}} 1.
Create `stable` and `canary` revision tags and associate them to the respective revisions: {{< text bash >}} $ istioctl tag set prod-stable --revision {{< istio\_previous\_version\_revision >}}-1 $ istioctl tag set prod-canary --revision {{< istio\_full\_version\_revision >}} {{< /text >}} 1. Label application namespaces to map to the respective revision tags: {{< text bash >}} $ kubectl create ns app-ns-1 $ kubectl label ns app-ns-1 istio.io/rev=prod-stable $ kubectl create ns app-ns-2 $ kubectl label ns app-ns-2 istio.io/rev=prod-stable $ kubectl create ns app-ns-3 $ kubectl label ns app-ns-3 istio.io/rev=prod-canary {{< /text >}} 1. Bring up a sample curl pod in each namespace: {{< text bash >}} $ kubectl apply -n app-ns-1 -f samples/curl/curl.yaml $ kubectl apply -n app-ns-2 -f samples/curl/curl.yaml $ kubectl apply -n app-ns-3 -f samples/curl/curl.yaml {{< /text >}} 1. Verify application to control plane mapping using `istioctl proxy-status` command: {{< text bash >}} $ istioctl ps NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION curl-78ff5975c6-62pzf.app-ns-3 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-{{< istio\_full\_version\_revision >}}-7f6fc6cfd6-s8zfg {{< istio\_full\_version >}} curl-78ff5975c6-8kxpl.app-ns-1 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-{{< istio\_previous\_version\_revision >}}-1-bdf5948d5-n72r2 {{< istio\_previous\_version >}}.1 curl-78ff5975c6-8q7m6.app-ns-2 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-{{< istio\_previous\_version\_revision >}}-1-bdf5948d5-n72r2 {{< istio\_previous\_version >}}.1 {{< /text >}} {{< boilerplate revision-tags-middle >}} {{< text bash >}} $ istioctl tag set prod-stable --revision {{< istio\_full\_version\_revision >}} --overwrite {{< /text >}} {{< boilerplate revision-tags-prologue >}} {{< text bash >}} $ kubectl rollout restart deployment -n app-ns-1 $ kubectl rollout restart deployment -n app-ns-2 {{< /text >}} Verify the application to control plane mapping
using `istioctl proxy-status` command: {{< text bash >}} $ istioctl ps NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION curl-5984f48bc7-kmj6x.app-ns-1 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-{{< istio\_full\_version\_revision >}}-7f6fc6cfd6-jsktb {{< istio\_full\_version >}} curl-78ff5975c6-jldk4.app-ns-3 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-{{< istio\_full\_version\_revision >}}-7f6fc6cfd6-jsktb {{< istio\_full\_version >}} curl-7cdd8dccb9-5bq5n.app-ns-2 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-{{< istio\_full\_version\_revision >}}-7f6fc6cfd6-jsktb {{< istio\_full\_version >}} {{< /text >}}
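The label-to-revision resolution used in the steps above can be modeled in a few lines. This is a simplified sketch, not istiod's actual webhook logic, and the tag and revision names in it are made up for illustration: `istio-injection=enabled` selects the default revision and takes precedence over `istio.io/rev`, while an `istio.io/rev` label may name either a revision directly or a tag that points at one.

```python
# Simplified model (not istiod's real code) of how namespace labels and
# revision tags select the control plane that injects a pod.
def injecting_revision(labels, tags):
    if labels.get("istio-injection") == "enabled":
        # istio-injection takes precedence over istio.io/rev
        return "default"
    rev = labels.get("istio.io/rev")
    if rev is None:
        return None  # no injection for this namespace
    # the label may name a tag, which is an indirection to a real revision
    return tags.get(rev, rev)

# Hypothetical tag table, in the spirit of the prod-stable/prod-canary example
tags = {"prod-stable": "1-20-1", "prod-canary": "1-21-0"}

print(injecting_revision({"istio.io/rev": "prod-stable"}, tags))  # 1-20-1
print(injecting_revision({"istio-injection": "enabled"}, tags))   # default
print(injecting_revision({"istio.io/rev": "canary"}, tags))       # canary
```

Moving a tag with `istioctl tag set ... --overwrite` then amounts to changing one entry in the tag table; the namespace labels stay untouched, which is why only a pod restart is needed to pick up the new revision.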
### Default tag {{< boilerplate revision-tags-default-intro >}} {{< text bash >}} $ istioctl tag set default --revision {{< istio\_full\_version\_revision >}} {{< /text >}} {{< boilerplate revision-tags-default-outro >}} ## Uninstall old control plane After upgrading both the control plane and data plane, you can uninstall the old control plane. For example, the following command uninstalls a control plane of revision `{{< istio\_previous\_version\_revision >}}-1`: {{< text bash >}} $ istioctl uninstall --revision {{< istio\_previous\_version\_revision >}}-1 -y {{< /text >}} If the old control plane does not have a revision label, uninstall it using its original installation options, for example: {{< text bash >}} $ istioctl uninstall -f manifests/profiles/default.yaml -y {{< /text >}} Confirm that the old control plane has been removed and only the new one still exists in the cluster: {{< text bash >}} $ kubectl get pods -n istio-system -l app=istiod NAME READY STATUS RESTARTS AGE istiod-canary-55887f699c-t8bh8 1/1 Running 0 27m {{< /text >}} Note that the above instructions only removed the resources for the specified control plane revision, but not cluster-scoped resources shared with other control planes. To uninstall Istio completely, refer to the [uninstall guide](/docs/setup/install/istioctl/#uninstall-istio).
## Uninstall canary control plane If you decide to roll back to the old control plane, instead of completing the canary upgrade, you can uninstall the canary revision using: {{< text bash >}} $ istioctl uninstall --revision=canary -y {{< /text >}} However, in this case you must first reinstall the gateway(s) for the previous revision manually, because the uninstall command will not automatically revert the previously in-place upgraded ones. {{< tip >}} Make sure to use the `istioctl` version corresponding to the old control plane to reinstall the old gateways and, to avoid downtime, make sure the old gateways are up and running before proceeding with the canary uninstall. {{< /tip >}} ## Cleanup 1. Clean up the revision tags you created: {{< text bash >}} $ istioctl tag remove prod-stable $ istioctl tag remove prod-canary {{< /text >}} 1. Clean up the namespaces used for the canary upgrade with revision labels example: {{< text bash >}} $ kubectl delete ns istio-system test-ns {{< /text >}} 1. Clean up the namespaces used for the canary upgrade with revision tags example: {{< text bash >}} $ kubectl delete ns istio-system app-ns-1 app-ns-2 app-ns-3 {{< /text >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/upgrade/canary/index.md
The `istioctl upgrade` command performs an upgrade of Istio. {{< tip >}} [Canary Upgrade](/docs/setup/upgrade/canary/) is safer than doing an in-place upgrade and is the recommended upgrade method. {{< /tip >}} The upgrade command can also perform a downgrade of Istio. See the [`istioctl` upgrade reference](/docs/reference/commands/istioctl/#istioctl-upgrade) for all the options provided by the `istioctl upgrade` command. {{< warning >}} `istioctl upgrade` is for in-place upgrade and not compatible with installations done with the `--revision` flag. Upgrades of such installations will fail with an error. {{< /warning >}} ## Upgrade prerequisites Before you begin the upgrade process, check the following prerequisites: \* The installed Istio version is no more than one minor version less than the upgrade version. For example, 1.6.0 or higher is required before you start the upgrade process to 1.7.x. \* Your Istio installation was [installed using {{< istioctl >}}](/docs/setup/install/istioctl/). ## Upgrade steps {{< warning >}} Traffic disruption may occur during the upgrade process. To minimize the disruption, ensure that at least two replicas of `istiod` are running. Also, ensure that [`PodDisruptionBudgets`](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) are configured with a minimum availability of 1. {{< /warning >}} The commands in this section should be run using the new version of `istioctl` which can be found in the `bin/` subdirectory of the downloaded package. 1. [Download the new Istio release](/docs/setup/additional-setup/download-istio-release/) and change directory to the new release directory. 1. Ensure that your Kubernetes configuration points to the cluster to upgrade: {{< text bash >}} $ kubectl config view {{< /text >}} 1. Ensure that the upgrade is compatible with your environment. {{< text bash >}} $ istioctl x precheck ✔ No issues found when checking the cluster. Istio is safe to install or upgrade! 
To get started, check out https://istio.io/latest/docs/setup/getting-started/ {{< /text >}} 1. Begin the upgrade by running this command: {{< text bash >}} $ istioctl upgrade {{< /text >}} {{< warning >}} If you installed Istio using the `-f` flag, for example `istioctl install -f `, then you must provide the same `-f` flag value to the `istioctl upgrade` command. {{< /warning >}} If you installed Istio using `--set` flags, ensure that you pass the same `--set` flags to upgrade, otherwise the customizations done with `--set` will be reverted. For production use, the use of a configuration file instead of `--set` is recommended. If you omit the `-f` flag, Istio upgrades using the default profile. After performing several checks, `istioctl` will ask you to confirm whether to proceed. 1. `istioctl` will in-place upgrade the Istio control plane and gateways to the new version and indicate the completion status. 1. After `istioctl` completes the upgrade, you must manually update the Istio data plane by restarting any pods with Istio sidecars: {{< text bash >}} $ kubectl rollout restart deployment {{< /text >}} ## Downgrade prerequisites Before you begin the downgrade process, check the following prerequisites: \* Your Istio installation was [installed using {{< istioctl >}}](/docs/setup/install/istioctl/). \* The Istio version you intend to downgrade to is no more than one minor version less than the installed Istio version. For example, you can downgrade to no lower than 1.6.0 from Istio 1.7.x. \* Downgrade must be done using the `istioctl` binary version that corresponds to the Istio version that you intend to downgrade to. For example, if you are downgrading from Istio 1.7 to 1.6.5, use `istioctl` version 1.6.5. ## Steps to downgrade to a lower Istio version You can use `istioctl upgrade` to downgrade to a lower version of Istio. 
The steps are identical to the upgrade process described in the previous section, only using the `istioctl` binary corresponding to the lower version (e.g.,
1.6.5). When completed, Istio will be restored to the previously installed version. Alternatively, `istioctl install` can be used to install an older version of the Istio control plane.
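Both prerequisite lists above boil down to the same version-skew rule: the installed and target versions may differ by at most one minor version (within the same major version). A minimal shell sketch of that check, assuming plain `X.Y.Z` version strings — `minor_skew_ok` is an illustrative helper, not an `istioctl` command:

```shell
# Check that two Istio versions are within one minor version of each other,
# as istioctl upgrade requires for both upgrades and downgrades.
# Assumes both versions share the same major version (e.g. 1.x.y).
minor_skew_ok() {
  # $1 = installed version, $2 = target version
  from_minor=$(echo "$1" | cut -d. -f2)
  to_minor=$(echo "$2" | cut -d. -f2)
  diff=$(( to_minor - from_minor ))
  [ "$diff" -ge -1 ] && [ "$diff" -le 1 ]
}

minor_skew_ok "1.6.8" "1.7.2" && echo "ok: 1.6.8 -> 1.7.2"       # prints: ok: 1.6.8 -> 1.7.2
minor_skew_ok "1.5.0" "1.7.0" || echo "blocked: 1.5.0 -> 1.7.0"  # prints: blocked: 1.5.0 -> 1.7.0
```

The same check passes in the downgrade direction (e.g. 1.7.x to 1.6.x), matching the downgrade prerequisite above.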
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/upgrade/in-place/index.md
Follow this guide to upgrade and configure an Istio mesh using [Helm](https://helm.sh/docs/). This guide assumes you have already performed an [installation with Helm](/docs/setup/install/helm) for a previous minor or patch version of Istio. {{< boilerplate helm-preamble >}} {{< boilerplate helm-prereqs >}} ## Upgrade steps Before upgrading Istio, it is recommended to run the `istioctl x precheck` command to make sure the upgrade is compatible with your environment. {{< text bash >}} $ istioctl x precheck ✔ No issues found when checking the cluster. Istio is safe to install or upgrade! To get started, check out {{< /text >}} ### Canary upgrade (recommended) You can install a canary version of Istio control plane to validate that the new version is compatible with your existing configuration and data plane using the steps below: {{< warning >}} Note that when you install a canary version of the `istiod` service, the underlying cluster-wide resources from the base chart are shared across your primary and canary installations. {{< /warning >}} {{< boilerplate crd-upgrade-123 >}} 1. Upgrade the Istio base chart to ensure all cluster-wide resources are up-to-date {{< text bash >}} $ helm upgrade istio-base istio/base -n istio-system {{< /text >}} 1. Install a canary version of the Istio discovery chart by setting the revision value: {{< text bash >}} $ helm install istiod-canary istio/istiod \ --set revision=canary \ -n istio-system {{< /text >}} 1. Verify that you have two versions of `istiod` installed in your cluster: {{< text bash >}} $ kubectl get pods -l app=istiod -L istio.io/rev -n istio-system NAME READY STATUS RESTARTS AGE REV istiod-5649c48ddc-dlkh8 1/1 Running 0 71m default istiod-canary-9cc9fd96f-jpc7n 1/1 Running 0 34m canary {{< /text >}} 1. 
If you are using [Istio gateways](/docs/setup/additional-setup/gateway/#deploying-a-gateway), install a canary revision of the Gateway chart by setting the revision value: {{< text bash >}} $ helm install istio-ingress-canary istio/gateway \ --set revision=canary \ -n istio-ingress {{< /text >}} 1. Verify that you have two versions of `istio-ingress gateway` installed in your cluster: {{< text bash >}} $ kubectl get pods -L istio.io/rev -n istio-ingress NAME READY STATUS RESTARTS AGE REV istio-ingress-754f55f7f6-6zg8n 1/1 Running 0 5m22s default istio-ingress-canary-5d649bd644-4m8lp 1/1 Running 0 3m24s canary {{< /text >}} See [Upgrading Gateways](/docs/setup/additional-setup/gateway/#canary-upgrade-advanced) for in-depth documentation on gateway canary upgrade. 1. Follow the steps [here](/docs/setup/upgrade/canary/#data-plane) to test or migrate existing workloads to use the canary control plane. 1. Once you have verified and migrated your workloads to use the canary control plane, you can uninstall your old control plane: {{< text bash >}} $ helm delete istiod -n istio-system {{< /text >}} 1. Upgrade the Istio base chart again, this time making the new `canary` revision the cluster-wide default. 
{{< text bash >}} $ helm upgrade istio-base istio/base --set defaultRevision=canary -n istio-system {{< /text >}} ### Stable revision labels {{< boilerplate revision-tags-preamble >}} #### Usage {{< boilerplate revision-tags-usage >}} {{< text bash >}} $ helm template istiod istio/istiod -s templates/revision-tags-mwc.yaml --set revisionTags="{prod-stable}" --set revision={{< istio\_previous\_version\_revision >}}-1 -n istio-system | kubectl apply -f - $ helm template istiod istio/istiod -s templates/revision-tags-mwc.yaml --set revisionTags="{prod-canary}" --set revision={{< istio\_full\_version\_revision >}} -n istio-system | kubectl apply -f - {{< /text >}} {{< warning >}} These commands create new `MutatingWebhookConfiguration` resources in your cluster, however, they are not owned by any Helm chart due to `kubectl` manually applying the templates. See the instructions below to uninstall revision tags. {{< /warning >}} {{< boilerplate revision-tags-middle >}} {{< text bash >}} $ helm template istiod istio/istiod -s templates/revision-tags-mwc.yaml --set revisionTags="{prod-stable}" --set revision={{< istio\_full\_version\_revision >}} -n istio-system | kubectl apply -f - {{< /text >}} {{< boilerplate revision-tags-prologue >}} #### Default tag {{< boilerplate revision-tags-default-intro >}} {{< text
bash >}} $ helm template istiod istio/istiod -s templates/revision-tags-mwc.yaml --set revisionTags="{default}" --set revision={{< istio\_full\_version\_revision >}} -n istio-system | kubectl apply -f - {{< /text >}} {{< boilerplate revision-tags-default-outro >}} ### In place upgrade You can perform an in place upgrade of Istio in your cluster using the Helm upgrade workflow. {{< warning >}} Add your override values file or custom options to the commands below to preserve your custom configuration during Helm upgrades. {{< /warning >}} {{< boilerplate crd-upgrade-123 >}} 1. Upgrade the Istio base chart: {{< text bash >}} $ helm upgrade istio-base istio/base -n istio-system {{< /text >}} 1. Upgrade the Istio discovery chart: {{< text bash >}} $ helm upgrade istiod istio/istiod -n istio-system {{< /text >}} 1. (Optional) Upgrade any gateway charts installed in your cluster: {{< text bash >}} $ helm upgrade istio-ingress istio/gateway -n istio-ingress {{< /text >}} ## Uninstall Please refer to the uninstall section in our [Helm install guide](/docs/setup/install/helm/#uninstall).
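The in-place flow above is strictly ordered: the base chart (cluster-wide resources) first, then `istiod`, then any gateway charts. A dry-run sketch of that ordering — the `run` wrapper only prints each command, so you can review the plan before pointing it at a real cluster:

```shell
# Print the in-place Helm upgrade plan in the required order:
# base chart -> istiod discovery chart -> gateway charts.
run() { echo "+ $*"; }   # print-only; replace the body with "$@" to execute

run helm upgrade istio-base istio/base -n istio-system
run helm upgrade istiod istio/istiod -n istio-system
run helm upgrade istio-ingress istio/gateway -n istio-ingress
```

Swap the body of `run` for `"$@"` (and add your override values files, per the warning above) to execute the same sequence for real.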
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/upgrade/helm/index.md
Follow this guide to deploy Istio and connect a virtual machine to it. ## Prerequisites 1. [Download the Istio release](/docs/setup/additional-setup/download-istio-release/) 1. Perform any necessary [platform-specific setup](/docs/setup/platform-setup/) 1. Check the requirements [for Pods and Services](/docs/ops/deployment/application-requirements/) 1. Virtual machines must have IP connectivity to the ingress gateway in the connecting mesh, and optionally every pod in the mesh via L3 networking if enhanced performance is desired. 1. Learn about [Virtual Machine Architecture](/docs/ops/deployment/vm-architecture/) to gain an understanding of the high level architecture of Istio's virtual machine integration. ## Prepare the guide environment 1. Create a virtual machine 1. Set the environment variables `VM\_APP`, `WORK\_DIR` , `VM\_NAMESPACE`, and `SERVICE\_ACCOUNT` on your machine that you're using to set up the cluster. (e.g., `WORK\_DIR="${HOME}/vmintegration"`): {{< tabset category-name="network-mode" >}} {{< tab name="Single-Network" category-value="single" >}} {{< text bash >}} $ VM\_APP="" $ VM\_NAMESPACE="" $ WORK\_DIR="" $ SERVICE\_ACCOUNT="" $ CLUSTER\_NETWORK="" $ VM\_NETWORK="" $ CLUSTER="Kubernetes" {{< /text >}} {{< /tab >}} {{< tab name="Multi-Network" category-value="multiple" >}} {{< text bash >}} $ VM\_APP="" $ VM\_NAMESPACE="" $ WORK\_DIR="" $ SERVICE\_ACCOUNT="" $ # Customize values for multi-cluster/multi-network as needed $ CLUSTER\_NETWORK="kube-network" $ VM\_NETWORK="vm-network" $ CLUSTER="cluster1" {{< /text >}} {{< /tab >}} {{< /tabset >}} 1. Create the working directory on your machine that you're using to set up the cluster: {{< text syntax=bash snip\_id=setup\_wd >}} $ mkdir -p "${WORK\_DIR}" {{< /text >}} ## Install the Istio control plane If your cluster already has an Istio control plane, you can skip the installation steps, but will still need to expose the control plane for virtual machine access. 
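Every later command in this guide interpolates the environment variables set above, and an empty one tends to fail in confusing ways (empty namespace names, broken YAML). A small guard you can run before continuing — `require_vars` is a convenience sketch, not part of the guide:

```shell
# Fail fast if any variable this guide depends on is unset or empty.
require_vars() {
  missing=0
  for name in "$@"; do
    eval "value=\${$name}"
    if [ -z "$value" ]; then
      echo "missing: $name" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example values; use your own per the instructions above.
VM_APP="hello-vm"; VM_NAMESPACE="vmns"; SERVICE_ACCOUNT="vm-sa"
WORK_DIR="${HOME}/vmintegration"
require_vars VM_APP VM_NAMESPACE WORK_DIR SERVICE_ACCOUNT && echo "all set"
```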
Install Istio and expose the control plane on the cluster so that your virtual machine can access it. 1. Create the `IstioOperator` spec for installation. {{< text syntax="bash yaml" snip\_id=setup\_iop >}} $ cat <<EOF > ./vm-cluster.yaml apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: name: istio spec: values: global: meshID: mesh1 multiCluster: clusterName: "${CLUSTER}" network: "${CLUSTER\_NETWORK}" EOF {{< /text >}} 1. Install Istio. {{< tabset category-name="registration-mode" >}} {{< tab name="Default" category-value="default" >}} {{< text bash >}} $ istioctl install -f vm-cluster.yaml {{< /text >}} {{< /tab >}} {{< tab name="Automated WorkloadEntry Creation" category-value="autoreg" >}} {{< boilerplate alpha >}} {{< text syntax=bash snip\_id=install\_istio >}} $ istioctl install -f vm-cluster.yaml --set values.pilot.env.PILOT\_ENABLE\_WORKLOAD\_ENTRY\_AUTOREGISTRATION=true --set values.pilot.env.PILOT\_ENABLE\_WORKLOAD\_ENTRY\_HEALTHCHECKS=true {{< /text >}} {{< /tab >}} {{< /tabset >}} 1. Deploy the east-west gateway: {{< warning >}} If the control-plane was installed with a revision, add the `--revision rev` flag to the `gen-eastwest-gateway.sh` command. {{< /warning >}} {{< tabset category-name="network-mode" >}} {{< tab name="Single-Network" category-value="single" >}} {{< text syntax=bash snip\_id=install\_eastwest >}} $ @samples/multicluster/gen-eastwest-gateway.sh@ --single-cluster | istioctl install -y -f - {{< /text >}} {{< /tab >}} {{< tab name="Multi-Network" category-value="multiple" >}} {{< text bash >}} $ @samples/multicluster/gen-eastwest-gateway.sh@ \ --network "${CLUSTER\_NETWORK}" | \ istioctl install -y -f - {{< /text >}} {{< /tab >}} {{< /tabset >}} 1.
Expose services inside the cluster via the east-west gateway: {{< tabset category-name="network-mode" >}} {{< tab name="Single-Network" category-value="single" >}} Expose the control plane: {{< text syntax=bash snip\_id=expose\_istio >}} $ kubectl apply -n istio-system -f @samples/multicluster/expose-istiod.yaml@ {{< /text >}} {{< /tab >}} {{< tab name="Multi-Network" category-value="multiple" >}} Expose the control plane: {{< text bash >}} $ kubectl apply -n istio-system -f @samples/multicluster/expose-istiod.yaml@ {{< /text >}} Expose cluster services: {{< text bash >}} $ kubectl apply -n istio-system -f @samples/multicluster/expose-services.yaml@ {{< /text >}} Ensure to label the istio-system namespace with the defined cluster network: {{< text bash >}} $ kubectl label namespace istio-system topology.istio.io/network="${CLUSTER\_NETWORK}" {{< /text >}} {{< /tab >}} {{< /tabset >}} ## Configure the VM namespace 1. Create the namespace that will host the virtual machine: {{< text syntax=bash snip\_id=install\_namespace >}} $ kubectl create namespace "${VM\_NAMESPACE}" {{< /text >}} 1. Create a serviceaccount for
the virtual machine: {{< text syntax=bash snip\_id=install\_sa >}} $ kubectl create serviceaccount "${SERVICE\_ACCOUNT}" -n "${VM\_NAMESPACE}" {{< /text >}} ## Create files to transfer to the virtual machine {{< tabset category-name="registration-mode" >}} {{< tab name="Default" category-value="default" >}} First, create a template `WorkloadGroup` for the VM(s): {{< text bash >}} $ cat <<EOF > workloadgroup.yaml apiVersion: networking.istio.io/v1 kind: WorkloadGroup metadata: name: "${VM\_APP}" namespace: "${VM\_NAMESPACE}" spec: metadata: labels: app: "${VM\_APP}" template: serviceAccount: "${SERVICE\_ACCOUNT}" network: "${VM\_NETWORK}" EOF {{< /text >}} {{< /tab >}} {{< tab name="Automated WorkloadEntry Creation" category-value="autoreg" >}} First, create a template `WorkloadGroup` for the VM(s): {{< boilerplate alpha >}} {{< text syntax=bash snip\_id=create\_wg >}} $ cat <<EOF > workloadgroup.yaml apiVersion: networking.istio.io/v1 kind: WorkloadGroup metadata: name: "${VM\_APP}" namespace: "${VM\_NAMESPACE}" spec: metadata: labels: app: "${VM\_APP}" template: serviceAccount: "${SERVICE\_ACCOUNT}" network: "${VM\_NETWORK}" EOF {{< /text >}} Then, to allow automated `WorkloadEntry` creation, push the `WorkloadGroup` to the cluster: {{< text syntax=bash snip\_id=apply\_wg >}} $ kubectl --namespace "${VM\_NAMESPACE}" apply -f workloadgroup.yaml {{< /text >}} Using the Automated `WorkloadEntry` Creation feature, application health checks are also available.
These share the same API and behavior as [Kubernetes Readiness Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). For example, to configure a probe on the `/ready` endpoint of your application: {{< text bash >}} $ cat <<EOF > workloadgroup.yaml apiVersion: networking.istio.io/v1 kind: WorkloadGroup metadata: name: "${VM\_APP}" namespace: "${VM\_NAMESPACE}" spec: metadata: labels: app: "${VM\_APP}" template: serviceAccount: "${SERVICE\_ACCOUNT}" network: "${VM\_NETWORK}" probe: periodSeconds: 5 initialDelaySeconds: 1 httpGet: port: 8080 path: /ready EOF {{< /text >}} With this configuration, the automatically generated `WorkloadEntry` will not be marked "Ready" until the probe succeeds. {{< /tab >}} {{< /tabset >}} {{< warning >}} Before proceeding to generate the `istio-token`, as part of `istioctl x workload entry`, you should verify third party tokens are enabled in your cluster by following the steps described [here](/docs/ops/best-practices/security/#configure-third-party-service-account-tokens). If third party tokens are not enabled, you should add the option `--set values.global.jwtPolicy=first-party-jwt` to the Istio install commands. {{< /warning >}} Next, use the `istioctl x workload entry` command to generate: \* `cluster.env`: Contains metadata that identifies what namespace, service account, network CIDR and (optionally) what inbound ports to capture. \* `istio-token`: A Kubernetes token used to get certs from the CA. \* `mesh.yaml`: Provides `ProxyConfig` to configure `discoveryAddress`, health-checking probes, and some authentication options. \* `root-cert.pem`: The root certificate used to authenticate. \* `hosts`: An addendum to `/etc/hosts` that the proxy will use to reach istiod for xDS. {{< idea >}} A sophisticated option involves configuring DNS within the virtual machine to reference an external DNS server. This option is beyond the scope of this guide.
{{< /idea >}} {{< tabset category-name="registration-mode" >}} {{< tab name="Default" category-value="default" >}} {{< text bash >}} $ istioctl x workload entry configure -f workloadgroup.yaml -o "${WORK\_DIR}" --clusterID "${CLUSTER}" {{< /text >}} {{< /tab >}} {{< tab name="Automated WorkloadEntry Creation" category-value="autoreg" >}} {{< boilerplate alpha >}} {{< text syntax=bash snip\_id=configure\_wg >}} $ istioctl x workload entry configure -f workloadgroup.yaml -o "${WORK\_DIR}" --clusterID "${CLUSTER}" --autoregister {{< /text >}} {{< /tab >}} {{< /tabset >}} ## Configure the virtual machine Run the following commands on the virtual machine you want to add to the Istio mesh: 1. Securely transfer the files from `"${WORK\_DIR}"` to the virtual machine. How you choose to securely transfer those files should be done with consideration for your information security policies. For convenience in this guide, transfer all of the required files to
`"${HOME}"` in the virtual machine. 1. Install the root certificate at `/etc/certs`: {{< text bash >}} $ sudo mkdir -p /etc/certs $ sudo cp "${HOME}"/root-cert.pem /etc/certs/root-cert.pem {{< /text >}} 1. Install the token at `/var/run/secrets/tokens`: {{< text bash >}} $ sudo mkdir -p /var/run/secrets/tokens $ sudo cp "${HOME}"/istio-token /var/run/secrets/tokens/istio-token {{< /text >}} 1. Install the package containing the Istio virtual machine integration runtime: {{< tabset category-name="vm-os" >}} {{< tab name="Debian" category-value="debian" >}} {{< text syntax=bash snip\_id=none >}} $ curl -LO https://storage.googleapis.com/istio-release/releases/{{< istio\_full\_version >}}/deb/istio-sidecar.deb $ sudo dpkg -i istio-sidecar.deb {{< /text >}} {{< /tab >}} {{< tab name="CentOS" category-value="centos" >}} Note: only CentOS 8 is currently supported. {{< text syntax=bash snip\_id=none >}} $ curl -LO https://storage.googleapis.com/istio-release/releases/{{< istio\_full\_version >}}/rpm/istio-sidecar.rpm $ sudo rpm -i istio-sidecar.rpm {{< /text >}} {{< /tab >}} {{< /tabset >}} 1. Install `cluster.env` within the directory `/var/lib/istio/envoy/`: {{< text bash >}} $ sudo cp "${HOME}"/cluster.env /var/lib/istio/envoy/cluster.env {{< /text >}} 1. Install the [Mesh Config](/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig) to `/etc/istio/config/mesh`: {{< text bash >}} $ sudo cp "${HOME}"/mesh.yaml /etc/istio/config/mesh {{< /text >}} 1. Add the istiod host to `/etc/hosts`: {{< text bash >}} $ sudo sh -c 'cat $(eval echo ~$SUDO\_USER)/hosts >> /etc/hosts' {{< /text >}} 1.
Transfer ownership of the files in `/etc/certs/` and `/var/lib/istio/envoy/` to the Istio proxy: {{< text bash >}} $ sudo mkdir -p /etc/istio/proxy $ sudo chown -R istio-proxy /var/lib/istio /etc/certs /etc/istio/proxy /etc/istio/config /var/run/secrets /etc/certs/root-cert.pem {{< /text >}} ## Start Istio within the virtual machine 1. Start the Istio agent: {{< text bash >}} $ sudo systemctl start istio {{< /text >}} ## Verify Istio Works Successfully 1. Check the log in `/var/log/istio/istio.log`. You should see entries similar to the following: {{< text bash >}} $ 2020-08-21T01:32:17.748413Z info sds resource:default pushed key/cert pair to proxy $ 2020-08-21T01:32:20.270073Z info sds resource:ROOTCA new connection $ 2020-08-21T01:32:20.270142Z info sds Skipping waiting for gateway secret $ 2020-08-21T01:32:20.270279Z info cache adding watcher for file ./etc/certs/root-cert.pem $ 2020-08-21T01:32:20.270347Z info cache GenerateSecret from file ROOTCA $ 2020-08-21T01:32:20.270494Z info sds resource:ROOTCA pushed root cert to proxy $ 2020-08-21T01:32:20.270734Z info sds resource:default new connection $ 2020-08-21T01:32:20.270763Z info sds Skipping waiting for gateway secret $ 2020-08-21T01:32:20.695478Z info cache GenerateSecret default $ 2020-08-21T01:32:20.695595Z info sds resource:default pushed key/cert pair to proxy {{< /text >}} 1. Create a Namespace to deploy a Pod-based Service: {{< text bash >}} $ kubectl create namespace sample $ kubectl label namespace sample istio-injection=enabled {{< /text >}} 1. Deploy the `HelloWorld` Service: {{< text bash >}} $ kubectl apply -n sample -f @samples/helloworld/helloworld.yaml@ {{< /text >}} 1. 
Send requests from your Virtual Machine to the Service: {{< text bash >}} $ curl helloworld.sample.svc:5000/hello Hello version: v1, instance: helloworld-v1-578dd69f69-fxwwk {{< /text >}} ## Next Steps For more information about virtual machines: \* [Debugging Virtual Machines](/docs/ops/diagnostic-tools/virtual-machines/) to troubleshoot issues with virtual machines. \* [Bookinfo with a Virtual Machine](/docs/examples/virtual-machines/) to set up an example deployment of virtual machines. ## Uninstall Stop Istio on the virtual machine: {{< text bash >}} $ sudo systemctl stop istio {{< /text >}} Then, remove the Istio-sidecar package: {{< tabset category-name="vm-os" >}} {{< tab name="Debian" category-value="debian" >}} {{< text bash >}} $ sudo dpkg -r istio-sidecar $ dpkg -s istio-sidecar {{< /text >}} {{< /tab >}} {{< tab name="CentOS" category-value="centos" >}} {{< text bash >}} $ sudo rpm -e istio-sidecar {{< /text >}} {{< /tab >}} {{< /tabset >}} To uninstall Istio, run
the following command: {{< text bash >}} $ kubectl delete -n istio-system -f @samples/multicluster/expose-istiod.yaml@ $ istioctl uninstall -y --purge {{< /text >}} The control plane namespace (e.g., `istio-system`) is not removed by default. If no longer needed, use the following command to remove it: {{< text bash >}} $ kubectl delete namespace istio-system {{< /text >}}
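Each installation step earlier in this guide copies one generated file to a fixed path on the VM. A sketch that rehearses the same layout against a scratch root instead of `/` — the stub files and the `ROOT` indirection are illustrative; on a real VM you would copy the files produced by `istioctl x workload entry configure`, with `sudo`:

```shell
# Rehearse the VM file layout from this guide under a scratch root.
ROOT=$(mktemp -d)   # stands in for / on the VM
WORK=$(mktemp -d)   # stands in for the transferred "${WORK_DIR}" contents
for f in root-cert.pem istio-token cluster.env mesh.yaml; do
  echo "stub-$f" > "$WORK/$f"   # placeholder contents for the sketch
done

mkdir -p "$ROOT/etc/certs" "$ROOT/var/run/secrets/tokens" \
         "$ROOT/var/lib/istio/envoy" "$ROOT/etc/istio/config"
cp "$WORK/root-cert.pem" "$ROOT/etc/certs/root-cert.pem"
cp "$WORK/istio-token"   "$ROOT/var/run/secrets/tokens/istio-token"
cp "$WORK/cluster.env"   "$ROOT/var/lib/istio/envoy/cluster.env"
cp "$WORK/mesh.yaml"     "$ROOT/etc/istio/config/mesh"
```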
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/install/virtual-machine/index.md
This guide walks you through the process of installing an {{< gloss >}}external control plane{{< /gloss >}} and then connecting one or more {{< gloss "remote cluster" >}}remote clusters{{< /gloss >}} to it. The external control plane [deployment model](/docs/ops/deployment/deployment-models/#control-plane-models) allows a mesh operator to install and manage a control plane on an external cluster, separate from the data plane cluster (or multiple clusters) comprising the mesh. This deployment model allows a clear separation between mesh operators and mesh administrators. Mesh operators install and manage Istio control planes while mesh admins only need to configure the mesh. {{< image width="75%" link="external-controlplane.svg" caption="External control plane cluster and remote cluster" >}} Envoy proxies (sidecars and gateways) running in the remote cluster access the external istiod via an ingress gateway which exposes the endpoints needed for discovery, CA, injection, and validation. While configuration and management of the external control plane is done by the mesh operator in the external cluster, the first remote cluster connected to an external control plane serves as the config cluster for the mesh itself. The mesh administrator will use the config cluster to configure the mesh resources (gateways, virtual services, etc.) in addition to the mesh services themselves. The external control plane will remotely access this configuration from the Kubernetes API server, as shown in the above diagram. ## Before you begin ### Clusters This guide requires that you have two Kubernetes clusters with any of the [supported Kubernetes versions:](/docs/releases/supported-releases#support-status-of-istio-releases) {{< supported\_kubernetes\_versions >}}. The first cluster will host the {{< gloss >}}external control plane{{< /gloss >}} installed in the `external-istiod` namespace. 
An ingress gateway is also installed in the `istio-system` namespace to provide cross-cluster access to the external control plane. The second cluster is a {{< gloss >}}remote cluster{{< /gloss >}} that will run the mesh application workloads. Its Kubernetes API server also provides the mesh configuration used by the external control plane (istiod) to configure the workload proxies. ### API server access The Kubernetes API server in the remote cluster must be accessible to the external control plane cluster. Many cloud providers make API servers publicly accessible via network load balancers (NLBs). If the API server is not directly accessible, you will need to modify the installation procedure to enable access. For example, the [east-west](https://en.wikipedia.org/wiki/East-west\_traffic) gateway used in a [multicluster configuration](#adding-clusters) could also be used to enable access to the API server. ### Environment Variables The following environment variables will be used throughout to simplify the instructions: Variable | Description -------- | ----------- `CTX\_EXTERNAL\_CLUSTER` | The context name in the default [Kubernetes configuration file](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) used for accessing the external control plane cluster. `CTX\_REMOTE\_CLUSTER` | The context name in the default [Kubernetes configuration file](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) used for accessing the remote cluster. `REMOTE\_CLUSTER\_NAME` | The name of the remote cluster. `EXTERNAL\_ISTIOD\_ADDR` | The hostname for the ingress gateway on the external control plane cluster. This is used by the remote cluster to access the external control plane. `SSL\_SECRET\_NAME` | The name of the secret that holds the TLS certs for the ingress gateway on the external control plane cluster. 
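Later in this guide, `EXTERNAL_ISTIOD_ADDR` and `REMOTE_CLUSTER_NAME` are combined into the webhook URLs that the remote cluster points at, using port 15017 (the `tls-webhook` port opened on the ingress gateway). A sketch of that derivation with example values:

```shell
# Example values; substitute your own hostname and cluster name.
EXTERNAL_ISTIOD_ADDR="myhost.example.com"
REMOTE_CLUSTER_NAME="cluster1"

# 15017 is the tls-webhook port on the gateway; "network1" is the
# network name used in this guide's example configuration.
INJECTION_URL="https://${EXTERNAL_ISTIOD_ADDR}:15017/inject/cluster/${REMOTE_CLUSTER_NAME}/net/network1"
VALIDATION_URL="https://${EXTERNAL_ISTIOD_ADDR}:15017/validate"
echo "$INJECTION_URL"
echo "$VALIDATION_URL"
```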
Set the `CTX\_EXTERNAL\_CLUSTER`, `CTX\_REMOTE\_CLUSTER`, and `REMOTE\_CLUSTER\_NAME` now. You will set the others later. {{< text syntax=bash snip\_id=none >}} $ export CTX\_EXTERNAL\_CLUSTER= $ export CTX\_REMOTE\_CLUSTER= $ export REMOTE\_CLUSTER\_NAME= {{< /text >}} ## Cluster configuration ### Mesh operator steps A mesh operator is responsible for installing and managing the external Istio control plane on the external cluster. This includes configuring an ingress gateway on the external cluster, which allows the remote cluster to access the control plane, and installing the sidecar injector webhook configuration on the remote cluster so that it will use the
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/install/external-controlplane/index.md
external control plane. #### Set up a gateway in the external cluster 1. Create the Istio install configuration for the ingress gateway that will expose the external control plane ports to other clusters: {{< text bash >}} $ cat <<EOF > controlplane-gateway.yaml apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: namespace: istio-system spec: components: ingressGateways: - name: istio-ingressgateway enabled: true k8s: service: ports: - port: 15021 targetPort: 15021 name: status-port - port: 15012 targetPort: 15012 name: tls-xds - port: 15017 targetPort: 15017 name: tls-webhook EOF {{< /text >}} Then, install the gateway in the `istio-system` namespace of the external cluster: {{< text bash >}} $ istioctl install -f controlplane-gateway.yaml --context="${CTX\_EXTERNAL\_CLUSTER}" {{< /text >}} 1. Run the following command to confirm that the ingress gateway is up and running: {{< text bash >}} $ kubectl get po -n istio-system --context="${CTX\_EXTERNAL\_CLUSTER}" NAME READY STATUS RESTARTS AGE istio-ingressgateway-9d4c7f5c7-7qpzz 1/1 Running 0 29s istiod-68488cd797-mq8dn 1/1 Running 0 38s {{< /text >}} You will notice an istiod deployment is also created in the `istio-system` namespace. This is used to configure the ingress gateway and is NOT the control plane used by remote clusters. {{< tip >}} This ingress gateway could be configured to host multiple external control planes, in different namespaces on the external cluster, although in this example you will only deploy a single external istiod in the `external-istiod` namespace. {{< /tip >}} 1.
Configure your environment to expose the Istio ingress gateway service using a public hostname with TLS. Set the `EXTERNAL_ISTIOD_ADDR` environment variable to the hostname and the `SSL_SECRET_NAME` environment variable to the secret that holds the TLS certs:

    {{< text syntax=bash snip_id=none >}}
    $ export EXTERNAL_ISTIOD_ADDR=<your external istiod host>
    $ export SSL_SECRET_NAME=<your SSL secret name>
    {{< /text >}}

    These instructions assume that you are exposing the external cluster's gateway using a hostname with properly signed DNS certs, as this is the recommended approach in a production environment. Refer to the [secure ingress task](/docs/tasks/traffic-management/ingress/secure-ingress/#configure-a-tls-ingress-gateway-for-a-single-host) for more information on exposing a secure gateway.

    Your environment variables should look something like this:

    {{< text bash >}}
    $ echo "$EXTERNAL_ISTIOD_ADDR" "$SSL_SECRET_NAME"
    myhost.example.com myhost-example-credential
    {{< /text >}}

    {{< tip >}}
    If you don't have a DNS hostname but want to experiment with an external control plane in a test environment, you can access the gateway using its external load balancer IP address:

    {{< text bash >}}
    $ export EXTERNAL_ISTIOD_ADDR=$(kubectl -n istio-system --context="${CTX_EXTERNAL_CLUSTER}" get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    $ export SSL_SECRET_NAME=NONE
    {{< /text >}}

    Doing this will also require a few other changes in the configuration. Make sure to follow all of the related steps in the instructions below.
    {{< /tip >}}

#### Set up the remote config cluster

1. Use the `remote` profile to configure the remote cluster's Istio installation. This installs an injection webhook that uses the external control plane's injector, instead of a locally deployed one.
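To make the wiring concrete, here is a quick sketch — using hypothetical example values for the variables set above — of how the injection webhook URL used by the `remote` profile is composed from the gateway address and the remote cluster's name:

```shell
# Sketch only: hypothetical example values; substitute your own.
EXTERNAL_ISTIOD_ADDR="myhost.example.com"
REMOTE_CLUSTER_NAME="cluster1"

# The remote cluster's injection webhook calls the external control plane
# through the ingress gateway on port 15017 (network "network1"):
INJECTION_URL="https://${EXTERNAL_ISTIOD_ADDR}:15017/inject/cluster/${REMOTE_CLUSTER_NAME}/net/network1"
echo "$INJECTION_URL"
```

This is the same URL shape you will see in the `injectionURL` field of the install configuration below.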
Because this cluster will also serve as the config cluster, the Istio CRDs and other resources that will be needed on the remote cluster are also installed by setting `global.configCluster` and `pilot.configMap` to `true`:

    {{< text syntax=bash snip_id=get_remote_config_cluster_iop >}}
    $ cat <<EOF > remote-config-cluster.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      namespace: external-istiod
    spec:
      profile: remote
      values:
        global:
          istioNamespace: external-istiod
          configCluster: true
        pilot:
          configMap: true
        istiodRemote:
          injectionURL: https://${EXTERNAL_ISTIOD_ADDR}:15017/inject/cluster/${REMOTE_CLUSTER_NAME}/net/network1
        base:
          validationURL: https://${EXTERNAL_ISTIOD_ADDR}:15017/validate
    EOF
    {{< /text >}}

    {{< tip >}}
    If your cluster name contains `/` (slash) characters, replace them with `--slash--` in the
`injectionURL`, e.g., `injectionURL: https://1.2.3.4:15017/inject/cluster/cluster--slash--1/net/network1`.
    {{< /tip >}}

1. If you are using an IP address for the `EXTERNAL_ISTIOD_ADDR`, instead of a proper DNS hostname, modify the configuration to specify the discovery address and paths, instead of URLs:

    {{< warning >}}
    This is not recommended in a production environment.
    {{< /warning >}}

    {{< text bash >}}
    $ sed -i'.bk' \
      -e "s|injectionURL: https://${EXTERNAL_ISTIOD_ADDR}:15017|injectionPath: |" \
      -e "/istioNamespace:/a\\
          remotePilotAddress: ${EXTERNAL_ISTIOD_ADDR}" \
      -e '/base:/,+1d' \
      remote-config-cluster.yaml; rm remote-config-cluster.yaml.bk
    {{< /text >}}

1. Install the configuration on the remote cluster:

    {{< text bash >}}
    $ kubectl create namespace external-istiod --context="${CTX_REMOTE_CLUSTER}"
    $ istioctl install -f remote-config-cluster.yaml --set values.defaultRevision=default --context="${CTX_REMOTE_CLUSTER}"
    {{< /text >}}

1. Confirm that the remote cluster's injection webhook configuration has been installed:

    {{< text bash >}}
    $ kubectl get mutatingwebhookconfiguration --context="${CTX_REMOTE_CLUSTER}"
    NAME                                         WEBHOOKS   AGE
    istio-revision-tag-default-external-istiod   4          2m2s
    istio-sidecar-injector-external-istiod       4          2m5s
    {{< /text >}}

1.
Confirm that the remote cluster's validation webhook configurations have been installed:

    {{< text bash >}}
    $ kubectl get validatingwebhookconfiguration --context="${CTX_REMOTE_CLUSTER}"
    NAME                              WEBHOOKS   AGE
    istio-validator-external-istiod   1          6m53s
    istiod-default-validator          1          6m53s
    {{< /text >}}

#### Set up the control plane in the external cluster

1. Create the `external-istiod` namespace, which will be used to host the external control plane:

    {{< text bash >}}
    $ kubectl create namespace external-istiod --context="${CTX_EXTERNAL_CLUSTER}"
    {{< /text >}}

1. The control plane in the external cluster needs access to the remote cluster to discover services, endpoints, and pod attributes. Create a secret with credentials to access the remote cluster's `kube-apiserver` and install it in the external cluster:

    {{< text bash >}}
    $ istioctl create-remote-secret \
      --context="${CTX_REMOTE_CLUSTER}" \
      --type=config \
      --namespace=external-istiod \
      --service-account=istiod \
      --create-service-account=false | \
      kubectl apply -f - --context="${CTX_EXTERNAL_CLUSTER}"
    {{< /text >}}

    {{< tip >}}
    If you are running in `kind`, then you will need to pass `--server https://<node-ip>:6443` to the `istioctl create-remote-secret` command, where `<node-ip>` is the IP address of the node running the API server.
    {{< /tip >}}

1. Create the Istio configuration to install the control plane in the `external-istiod` namespace of the external cluster. Notice that istiod is configured to use the locally mounted `istio` configmap and the `SHARED_MESH_CONFIG` environment variable is set to `istio`.
This instructs istiod to merge the values set by the mesh admin in the config cluster's configmap with the values in the local configmap set by the mesh operator, here, which will take precedence if there are any conflicts:

    {{< text syntax=bash snip_id=get_external_istiod_iop >}}
    $ cat <<EOF > external-istiod.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      namespace: external-istiod
    spec:
      profile: empty
      meshConfig:
        rootNamespace: external-istiod
        defaultConfig:
          discoveryAddress: $EXTERNAL_ISTIOD_ADDR:15012
          proxyMetadata:
            XDS_ROOT_CA: /etc/ssl/certs/ca-certificates.crt
            CA_ROOT_CA: /etc/ssl/certs/ca-certificates.crt
      components:
        pilot:
          enabled: true
          k8s:
            overlays:
            - kind: Deployment
              name: istiod
              patches:
              - path: spec.template.spec.volumes[100]
                value: |-
                  name: config-volume
                  configMap:
                    name: istio
              - path: spec.template.spec.volumes[100]
                value: |-
                  name: inject-volume
                  configMap:
                    name: istio-sidecar-injector
              - path: spec.template.spec.containers[0].volumeMounts[100]
                value: |-
                  name: config-volume
                  mountPath: /etc/istio/config
              - path: spec.template.spec.containers[0].volumeMounts[100]
                value: |-
                  name: inject-volume
                  mountPath: /var/lib/istio/inject
            env:
            - name: INJECTION_WEBHOOK_CONFIG_NAME
              value: ""
            - name: VALIDATION_WEBHOOK_CONFIG_NAME
              value: ""
            - name: EXTERNAL_ISTIOD
              value: "true"
            - name: LOCAL_CLUSTER_SECRET_WATCHER
              value: "true"
            - name: CLUSTER_ID
              value: ${REMOTE_CLUSTER_NAME}
            - name: SHARED_MESH_CONFIG
              value: istio
      values:
        global:
          externalIstiod: true
          caAddress: $EXTERNAL_ISTIOD_ADDR:15012
          istioNamespace: external-istiod
          operatorManageWebhooks: true
          configValidation: false
          meshID: mesh1
          multiCluster:
            clusterName: ${REMOTE_CLUSTER_NAME}
          network: network1
    EOF
    {{< /text >}}

1. If you are using an IP address for the `EXTERNAL_ISTIOD_ADDR`, instead of a proper DNS hostname, delete the proxy metadata and update
the webhook config environment variables in the configuration:

    {{< warning >}}
    This is not recommended in a production environment.
    {{< /warning >}}

    {{< text bash >}}
    $ sed -i'.bk' \
      -e '/proxyMetadata:/,+2d' \
      -e '/INJECTION_WEBHOOK_CONFIG_NAME/{n;s/value: ""/value: istio-sidecar-injector-external-istiod/;}' \
      -e '/VALIDATION_WEBHOOK_CONFIG_NAME/{n;s/value: ""/value: istio-validator-external-istiod/;}' \
      external-istiod.yaml ; rm external-istiod.yaml.bk
    {{< /text >}}

1. Apply the Istio configuration on the external cluster:

    {{< text bash >}}
    $ istioctl install -f external-istiod.yaml --context="${CTX_EXTERNAL_CLUSTER}"
    {{< /text >}}

1. Confirm that the external istiod has been successfully deployed:

    {{< text bash >}}
    $ kubectl get po -n external-istiod --context="${CTX_EXTERNAL_CLUSTER}"
    NAME                      READY   STATUS    RESTARTS   AGE
    istiod-779bd6fdcf-bd6rg   1/1     Running   0          70s
    {{< /text >}}

1.
Create the Istio `Gateway`, `VirtualService`, and `DestinationRule` configuration to route traffic from the ingress gateway to the external control plane:

    {{< text syntax=bash snip_id=get_external_istiod_gateway_config >}}
    $ cat <<EOF > external-istiod-gw.yaml
    apiVersion: networking.istio.io/v1
    kind: Gateway
    metadata:
      name: external-istiod-gw
      namespace: external-istiod
    spec:
      selector:
        istio: ingressgateway
      servers:
      - port:
          number: 15012
          protocol: https
          name: https-XDS
        tls:
          mode: SIMPLE
          credentialName: $SSL_SECRET_NAME
        hosts:
        - $EXTERNAL_ISTIOD_ADDR
      - port:
          number: 15017
          protocol: https
          name: https-WEBHOOK
        tls:
          mode: SIMPLE
          credentialName: $SSL_SECRET_NAME
        hosts:
        - $EXTERNAL_ISTIOD_ADDR
    ---
    apiVersion: networking.istio.io/v1
    kind: VirtualService
    metadata:
      name: external-istiod-vs
      namespace: external-istiod
    spec:
      hosts:
      - $EXTERNAL_ISTIOD_ADDR
      gateways:
      - external-istiod-gw
      http:
      - match:
        - port: 15012
        route:
        - destination:
            host: istiod.external-istiod.svc.cluster.local
            port:
              number: 15012
      - match:
        - port: 15017
        route:
        - destination:
            host: istiod.external-istiod.svc.cluster.local
            port:
              number: 443
    ---
    apiVersion: networking.istio.io/v1
    kind: DestinationRule
    metadata:
      name: external-istiod-dr
      namespace: external-istiod
    spec:
      host: istiod.external-istiod.svc.cluster.local
      trafficPolicy:
        portLevelSettings:
        - port:
            number: 15012
          tls:
            mode: SIMPLE
          connectionPool:
            http:
              h2UpgradePolicy: UPGRADE
        - port:
            number: 443
          tls:
            mode: SIMPLE
    EOF
    {{< /text >}}

1. If you are using an IP address for the `EXTERNAL_ISTIOD_ADDR`, instead of a proper DNS hostname, modify the configuration. Delete the `DestinationRule`, don't terminate TLS in the `Gateway`, and use TLS routing in the `VirtualService`:

    {{< warning >}}
    This is not recommended in a production environment.
    {{< /warning >}}

    {{< text bash >}}
    $ sed -i'.bk' \
      -e '55,$d' \
      -e 's/mode: SIMPLE/mode: PASSTHROUGH/' -e '/credentialName:/d' -e "s/${EXTERNAL_ISTIOD_ADDR}/\"*\"/" \
      -e 's/http:/tls:/' -e 's/https/tls/' -e '/route:/i\
        sniHosts:\
        - "*"' \
      external-istiod-gw.yaml; rm external-istiod-gw.yaml.bk
    {{< /text >}}

1. Apply the configuration on the external cluster:

    {{< text bash >}}
    $ kubectl apply -f external-istiod-gw.yaml --context="${CTX_EXTERNAL_CLUSTER}"
    {{< /text >}}

### Mesh admin steps

Now that Istio is up and running, a mesh administrator only needs to deploy and configure services in the mesh, including gateways, if needed.

{{< tip >}}
Some of the `istioctl` CLI commands won't work by default on a remote cluster, although you can easily configure `istioctl` to make it fully functional. See the [Istioctl-proxy Ecosystem project](https://github.com/istio-ecosystem/istioctl-proxy-sample) for details.
{{< /tip >}}

#### Deploy a sample application

1. Create, and label for injection, the `sample` namespace on the remote cluster:

    {{< text bash >}}
    $ kubectl create --context="${CTX_REMOTE_CLUSTER}" namespace sample
    $ kubectl label --context="${CTX_REMOTE_CLUSTER}" namespace sample istio-injection=enabled
    {{< /text >}}

1. Deploy the `helloworld` (`v1`) and `curl` samples:

    {{< text bash >}}
    $ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l service=helloworld -n sample --context="${CTX_REMOTE_CLUSTER}"
    $ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l version=v1 -n sample --context="${CTX_REMOTE_CLUSTER}"
    $ kubectl apply -f @samples/curl/curl.yaml@ -n sample --context="${CTX_REMOTE_CLUSTER}"
    {{< /text >}}

1. Wait a few seconds for the `helloworld` and `curl` pods to be running with sidecars injected:

    {{< text bash >}}
    $ kubectl get pod -n sample --context="${CTX_REMOTE_CLUSTER}"
    NAME                             READY   STATUS    RESTARTS   AGE
    curl-64d7d56698-wqjnm            2/2     Running   0          9s
    helloworld-v1-776f57d5f6-s7zfc   2/2     Running   0          10s
    {{< /text >}}

1. Send a request from the `curl` pod to the `helloworld` service:

    {{< text bash >}}
    $ kubectl exec --context="${CTX_REMOTE_CLUSTER}" -n sample -c curl \
        "$(kubectl get pod --context="${CTX_REMOTE_CLUSTER}" -n sample -l app=curl -o jsonpath='{.items[0].metadata.name}')" \
        -- curl -sS helloworld.sample:5000/hello
    Hello version: v1, instance: helloworld-v1-776f57d5f6-s7zfc
    {{< /text >}}

#### Enable gateways

{{< tip >}}
{{< boilerplate gateway-api-future >}}
If you use the Gateway API, you will not need to install any gateway components. You can skip the following instructions and proceed directly to [configure and test an ingress gateway](#configure-and-test-an-ingress-gateway).
{{< /tip >}}

Enable an ingress gateway on the remote cluster:

{{< tabset category-name="ingress-gateway-install-type" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

{{< text bash >}}
$ cat <<EOF > istio-ingressgateway.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: ingress-install
spec:
  profile: empty
  components:
    ingressGateways:
    - namespace: external-istiod
      name: istio-ingressgateway
      enabled: true
  values:
    gateways:
      istio-ingressgateway:
        injectionTemplate: gateway
EOF
$ istioctl install -f istio-ingressgateway.yaml --set values.global.istioNamespace=external-istiod --context="${CTX_REMOTE_CLUSTER}"
{{< /text >}}

{{< /tab >}}

{{< tab name="Helm" category-value="helm" >}}

{{< text bash >}}
$ helm install istio-ingressgateway istio/gateway -n external-istiod --kube-context="${CTX_REMOTE_CLUSTER}"
{{< /text >}}

See [Installing Gateways](/docs/setup/additional-setup/gateway/) for in-depth documentation on gateway installation.

{{< /tab >}}

{{< /tabset >}}

You can optionally enable other gateways as well.
For example, an egress gateway:

{{< tabset category-name="egress-gateway-install-type" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

{{< text bash >}}
$ cat <<EOF > istio-egressgateway.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: egress-install
spec:
  profile: empty
  components:
    egressGateways:
    - namespace: external-istiod
      name: istio-egressgateway
      enabled: true
  values:
    gateways:
      istio-egressgateway:
        injectionTemplate: gateway
EOF
$ istioctl install -f istio-egressgateway.yaml --set values.global.istioNamespace=external-istiod --context="${CTX_REMOTE_CLUSTER}"
{{< /text >}}

{{< /tab >}}

{{< tab name="Helm" category-value="helm" >}}

{{< text bash >}}
$ helm install istio-egressgateway istio/gateway -n external-istiod --kube-context="${CTX_REMOTE_CLUSTER}" --set service.type=ClusterIP
{{< /text >}}

See [Installing Gateways](/docs/setup/additional-setup/gateway/) for in-depth documentation on gateway installation.

{{< /tab >}}

{{< /tabset >}}

#### Configure and test an ingress gateway

{{< tip >}}
{{< boilerplate gateway-api-choose >}}
{{< /tip >}}

1.
Make sure that the cluster is ready to configure the gateway:

    {{< tabset category-name="config-api" >}}

    {{< tab name="Istio APIs" category-value="istio-apis" >}}

    Confirm that the Istio ingress gateway is running:

    {{< text bash >}}
    $ kubectl get pod -l app=istio-ingressgateway -n external-istiod --context="${CTX_REMOTE_CLUSTER}"
    NAME                                    READY   STATUS    RESTARTS   AGE
    istio-ingressgateway-7bcd5c6bbd-kmtl4   1/1     Running   0          8m4s
    {{< /text >}}

    {{< /tab >}}

    {{< tab name="Gateway API" category-value="gateway-api" >}}

    The Kubernetes Gateway API CRDs do not come installed by default on most Kubernetes clusters, so make sure they are installed before using the Gateway API:

    {{< text syntax=bash snip_id=install_crds >}}
    $ kubectl get crd gateways.gateway.networking.k8s.io --context="${CTX_REMOTE_CLUSTER}" &> /dev/null || \
      { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref={{< k8s_gateway_api_version >}}" | kubectl apply -f - --context="${CTX_REMOTE_CLUSTER}"; }
    {{< /text >}}

    {{< /tab >}}

    {{< /tabset >}}

2) Expose the `helloworld` application on an ingress gateway:

    {{< tabset category-name="config-api" >}}

    {{< tab name="Istio APIs" category-value="istio-apis" >}}

    {{< text bash >}}
    $ kubectl apply -f @samples/helloworld/helloworld-gateway.yaml@ -n sample --context="${CTX_REMOTE_CLUSTER}"
    {{< /text >}}

    {{< /tab >}}

    {{< tab name="Gateway API" category-value="gateway-api" >}}

    {{< text bash >}}
    $ kubectl apply -f @samples/helloworld/gateway-api/helloworld-gateway.yaml@ -n sample --context="${CTX_REMOTE_CLUSTER}"
    {{< /text >}}

    {{< /tab >}}

    {{< /tabset >}}

3) Set the `GATEWAY_URL` environment variable (see [determining the ingress IP and ports](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports) for details):

    {{< tabset category-name="config-api" >}}

    {{< tab name="Istio APIs" category-value="istio-apis" >}}

    {{< text bash >}}
    $ export
INGRESS_HOST=$(kubectl -n external-istiod --context="${CTX_REMOTE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    $ export INGRESS_PORT=$(kubectl -n external-istiod --context="${CTX_REMOTE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
    $ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
    {{< /text >}}

    {{< /tab >}}

    {{< tab name="Gateway API" category-value="gateway-api" >}}

    {{< text bash >}}
    $ kubectl -n sample --context="${CTX_REMOTE_CLUSTER}" wait --for=condition=programmed gtw helloworld-gateway
    $ export INGRESS_HOST=$(kubectl -n sample --context="${CTX_REMOTE_CLUSTER}" get gtw helloworld-gateway -o jsonpath='{.status.addresses[0].value}')
    $ export GATEWAY_URL=$INGRESS_HOST:80
    {{< /text >}}

    {{< /tab >}}

    {{< /tabset >}}

4) Confirm you can access the `helloworld` application through the ingress gateway:

    {{< text bash >}}
    $ curl -s "http://${GATEWAY_URL}/hello"
    Hello version: v1, instance: helloworld-v1-776f57d5f6-s7zfc
    {{< /text >}}

## Adding clusters to the mesh (optional) {#adding-clusters}

This section shows you how to expand an existing external control plane mesh to multicluster by adding another remote cluster. This allows you to easily distribute services and use [location-aware routing and fail over](/docs/tasks/traffic-management/locality-load-balancing/) to support high availability of your application.
{{< image width="75%" link="external-multicluster.svg" caption="External control plane with multiple remote clusters" >}}

Unlike the first remote cluster, the second and subsequent clusters added to the same external control plane do not provide mesh config, but instead are only sources of endpoint configuration, just like remote clusters in a [primary-remote](/docs/setup/install/multicluster/primary-remote_multi-network/) Istio multicluster configuration.

To proceed, you'll need another Kubernetes cluster for the second remote cluster of the mesh. Set the following environment variables to the context name and cluster name of the cluster:

{{< text syntax=bash snip_id=none >}}
$ export CTX_SECOND_CLUSTER=<your second remote cluster context>
$ export SECOND_CLUSTER_NAME=<your second remote cluster name>
{{< /text >}}

### Register the new cluster

1. Create the remote Istio install configuration, which installs the injection webhook that uses the external control plane's injector, instead of a locally deployed one:

    {{< text syntax=bash snip_id=get_second_remote_cluster_iop >}}
    $ cat <<EOF > second-remote-cluster.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      namespace: external-istiod
    spec:
      profile: remote
      values:
        global:
          istioNamespace: external-istiod
        istiodRemote:
          injectionURL: https://${EXTERNAL_ISTIOD_ADDR}:15017/inject/cluster/${SECOND_CLUSTER_NAME}/net/network2
    EOF
    {{< /text >}}

1. If you are using an IP address for the `EXTERNAL_ISTIOD_ADDR`, instead of a proper DNS hostname, modify the configuration to specify the discovery address and path, instead of an injection URL:

    {{< warning >}}
    This is not recommended in a production environment.
    {{< /warning >}}

    {{< text bash >}}
    $ sed -i'.bk' \
      -e "s|injectionURL: https://${EXTERNAL_ISTIOD_ADDR}:15017|injectionPath: |" \
      -e "/istioNamespace:/a\\
          remotePilotAddress: ${EXTERNAL_ISTIOD_ADDR}" \
      second-remote-cluster.yaml; rm second-remote-cluster.yaml.bk
    {{< /text >}}

1.
Create and annotate the system namespace on the remote cluster:

    {{< text bash >}}
    $ kubectl create namespace external-istiod --context="${CTX_SECOND_CLUSTER}"
    $ kubectl annotate namespace external-istiod "topology.istio.io/controlPlaneClusters=${REMOTE_CLUSTER_NAME}" --context="${CTX_SECOND_CLUSTER}"
    {{< /text >}}

    The `topology.istio.io/controlPlaneClusters` annotation specifies the cluster ID of the external control plane that should manage this remote cluster. Notice that this is the name of the first remote (config) cluster, which was used to set the cluster ID of the external control plane when it was installed in the external cluster earlier.

1. Install the configuration on the remote cluster:

    {{< text bash >}}
    $ istioctl install -f second-remote-cluster.yaml --context="${CTX_SECOND_CLUSTER}"
    {{< /text >}}

1. Confirm that the remote cluster's injection webhook configuration has been installed:

    {{< text bash >}}
    $ kubectl get mutatingwebhookconfiguration --context="${CTX_SECOND_CLUSTER}"
    NAME                                     WEBHOOKS   AGE
    istio-sidecar-injector-external-istiod   4          4m13s
    {{< /text >}}

1. Create a secret with credentials to allow the control plane to access the endpoints on the second remote cluster and install it:

    {{< text bash >}}
    $ istioctl create-remote-secret \
      --context="${CTX_SECOND_CLUSTER}" \
      --name="${SECOND_CLUSTER_NAME}" \
      --type=remote \
      --namespace=external-istiod \
      --create-service-account=false | \
      kubectl apply -f - --context="${CTX_EXTERNAL_CLUSTER}"
    {{< /text >}}

    Note that unlike the first remote cluster of the mesh, which also serves as the config cluster, the `--type` argument is set to `remote` this time, instead of `config`.

### Setup east-west gateways

1. Deploy east-west gateways on both remote clusters:

    {{< text bash >}}
    $ @samples/multicluster/gen-eastwest-gateway.sh@ \
        --network network1 > eastwest-gateway-1.yaml
    $ istioctl manifest generate -f eastwest-gateway-1.yaml \
        --set values.global.istioNamespace=external-istiod | \
        kubectl apply --context="${CTX_REMOTE_CLUSTER}" -f -
    {{< /text >}}

    {{< text bash >}}
    $ @samples/multicluster/gen-eastwest-gateway.sh@ \
        --network network2 > eastwest-gateway-2.yaml
    $ istioctl manifest generate -f eastwest-gateway-2.yaml \
        --set values.global.istioNamespace=external-istiod | \
        kubectl apply --context="${CTX_SECOND_CLUSTER}" -f -
    {{< /text >}}

1. Wait for the east-west gateways to be assigned external IP addresses:

    {{< text bash >}}
    $ kubectl --context="${CTX_REMOTE_CLUSTER}" get svc istio-eastwestgateway -n external-istiod
    NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)   AGE
    istio-eastwestgateway   LoadBalancer   10.0.12.121   34.122.91.98   ...       51s
    {{< /text >}}

    {{< text bash >}}
    $ kubectl --context="${CTX_SECOND_CLUSTER}" get svc istio-eastwestgateway -n external-istiod
    NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)   AGE
    istio-eastwestgateway   LoadBalancer   10.0.12.121   34.122.91.99   ...       51s
    {{< /text >}}

1.
Expose services via the east-west gateways:

    {{< text bash >}}
    $ kubectl --context="${CTX_REMOTE_CLUSTER}" apply -n external-istiod -f \
        @samples/multicluster/expose-services.yaml@
    {{< /text >}}

### Validate the installation

1. Create, and label for injection, the `sample` namespace on the remote cluster:

    {{< text bash >}}
    $ kubectl create --context="${CTX_SECOND_CLUSTER}" namespace sample
    $ kubectl label --context="${CTX_SECOND_CLUSTER}" namespace sample istio-injection=enabled
    {{< /text >}}

1. Deploy the `helloworld` (`v2`) and `curl` samples:

    {{< text bash >}}
    $ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l service=helloworld -n sample --context="${CTX_SECOND_CLUSTER}"
    $ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l version=v2 -n sample --context="${CTX_SECOND_CLUSTER}"
    $ kubectl apply -f @samples/curl/curl.yaml@ -n sample --context="${CTX_SECOND_CLUSTER}"
    {{< /text >}}

1. Wait a few seconds for the `helloworld` and `curl` pods to be running with sidecars injected:

    {{< text bash >}}
    $ kubectl get pod -n sample --context="${CTX_SECOND_CLUSTER}"
    NAME                            READY   STATUS    RESTARTS   AGE
    curl-557747455f-wtdbr           2/2     Running   0          9s
    helloworld-v2-54df5f84b-9hxgw   2/2     Running   0          10s
    {{< /text >}}

1. Send a request from the `curl` pod to the `helloworld` service:

    {{< text bash >}}
    $ kubectl exec --context="${CTX_SECOND_CLUSTER}" -n sample -c curl \
        "$(kubectl get pod --context="${CTX_SECOND_CLUSTER}" -n sample -l app=curl -o jsonpath='{.items[0].metadata.name}')" \
        -- curl -sS helloworld.sample:5000/hello
    Hello version: v2, instance: helloworld-v2-54df5f84b-9hxgw
    {{< /text >}}

1.
Confirm that when accessing the `helloworld` application several times through the ingress gateway, both version `v1` and `v2` are now being called:

    {{< text bash >}}
    $ for i in {1..10}; do curl -s "http://${GATEWAY_URL}/hello"; done
    Hello version: v1, instance: helloworld-v1-776f57d5f6-s7zfc
    Hello version: v2, instance: helloworld-v2-54df5f84b-9hxgw
    Hello version: v1, instance: helloworld-v1-776f57d5f6-s7zfc
    Hello version: v2, instance: helloworld-v2-54df5f84b-9hxgw
    ...
    {{< /text >}}

## Cleanup

Clean up the external control plane cluster:

{{< text bash >}}
$ kubectl delete -f external-istiod-gw.yaml --context="${CTX_EXTERNAL_CLUSTER}"
$ istioctl uninstall -y --purge -f external-istiod.yaml --context="${CTX_EXTERNAL_CLUSTER}"
$ kubectl delete ns istio-system external-istiod --context="${CTX_EXTERNAL_CLUSTER}"
$ rm controlplane-gateway.yaml external-istiod.yaml external-istiod-gw.yaml
{{< /text >}}

Clean up the remote config cluster:

{{< text bash >}}
$ kubectl delete ns sample --context="${CTX_REMOTE_CLUSTER}"
$ istioctl uninstall -y --purge -f remote-config-cluster.yaml --set values.defaultRevision=default --context="${CTX_REMOTE_CLUSTER}"
$ kubectl delete ns external-istiod --context="${CTX_REMOTE_CLUSTER}"
$ rm remote-config-cluster.yaml istio-ingressgateway.yaml
$ rm istio-egressgateway.yaml eastwest-gateway-1.yaml || true
{{< /text >}}

Clean up the optional second remote cluster if you installed it:

{{< text bash >}}
$ kubectl delete ns sample --context="${CTX_SECOND_CLUSTER}"
$ istioctl uninstall -y --purge -f second-remote-cluster.yaml --context="${CTX_SECOND_CLUSTER}"
$
kubectl delete ns external-istiod --context="${CTX_SECOND_CLUSTER}"
$ rm second-remote-cluster.yaml eastwest-gateway-2.yaml
{{< /text >}}
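The per-cluster cleanup steps above repeat the same three-step pattern: delete the sample namespace, uninstall the Istio configuration, and delete the system namespace. As a sketch only — the context and file names here are hypothetical placeholders, and the commands are echoed rather than executed — the repetition could be captured in a small helper:

```shell
# Sketch: not part of the official instructions. Context names and IstioOperator
# file names are placeholders; echo the commands instead of running them.
cleanup_cluster() {
  ctx="$1"; iop_file="$2"
  echo "kubectl delete ns sample --context=${ctx}"
  echo "istioctl uninstall -y --purge -f ${iop_file} --context=${ctx}"
  echo "kubectl delete ns external-istiod --context=${ctx}"
}

cleanup_cluster "remote-ctx" "remote-config-cluster.yaml"
cleanup_cluster "second-ctx" "second-remote-cluster.yaml"
```

Remove the `echo` wrappers to actually run the commands against your clusters.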
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/install/external-controlplane/index.md
Follow this guide to install and configure an Istio mesh for in-depth evaluation or production use. If you are new to Istio, and just want to try it out, follow the [quick start instructions](/docs/setup/getting-started) instead. This installation guide uses the [istioctl](/docs/reference/commands/istioctl/) command line tool to provide rich customization of the Istio control plane and of the sidecars for the Istio data plane. It has user input validation to help prevent installation errors and customization options to override any aspect of the configuration. Using these instructions, you can select any one of Istio's built-in [configuration profiles](/docs/setup/additional-setup/config-profiles/) and then further customize the configuration for your specific needs. The `istioctl` command supports the full [`IstioOperator` API](/docs/reference/config/istio.operator.v1alpha1/) via command-line options for individual settings or for passing a YAML file containing an `IstioOperator` custom resource (CR). ## Prerequisites Before you begin, check the following prerequisites: 1. [Download the Istio release](/docs/setup/additional-setup/download-istio-release/). 1. Perform any necessary [platform-specific setup](/docs/setup/platform-setup/). 1. Check the [Requirements for Pods and Services](/docs/ops/deployment/application-requirements/). ## Install Istio using the default profile The simplest option is to install the `default` Istio [configuration profile](/docs/setup/additional-setup/config-profiles/) using the following command: {{< text bash >}} $ istioctl install {{< /text >}} This command installs the `default` profile on the cluster defined by your Kubernetes configuration. The `default` profile is a good starting point for establishing a production environment, unlike the larger `demo` profile that is intended for evaluating a broad set of Istio features. Various settings can be configured to modify the installation.
For example, to enable access logs: {{< text bash >}} $ istioctl install --set meshConfig.accessLogFile=/dev/stdout {{< /text >}} {{< tip >}} Many of the examples on this page and elsewhere in the documentation are written using `--set` to modify installation parameters, rather than passing a configuration file with `-f`. This is done to make the examples more compact. The two methods are equivalent, but `-f` is strongly recommended for production. The above command would be written as follows using `-f`: {{< text bash >}} $ cat <<EOF > ./my-config.yaml apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: meshConfig: accessLogFile: /dev/stdout EOF $ istioctl install -f my-config.yaml {{< /text >}} {{< /tip >}} {{< tip >}} The full API is documented in the [`IstioOperator` API reference](/docs/reference/config/istio.operator.v1alpha1/). In general, you can use the `--set` flag in `istioctl` as you would with Helm, and the Helm `values.yaml` API is currently supported for backwards compatibility. The only difference is you must prefix the legacy `values.yaml` paths with `values.` because this is the prefix for the Helm pass-through API. {{< /tip >}} ## Install from external charts By default, `istioctl` uses compiled-in charts to generate the install manifest. These charts are released together with `istioctl` for auditing and customization purposes and can be found in the release tar in the `manifests` directory. `istioctl` can also use external charts rather than the compiled-in ones. To select external charts, set the `manifests` flag to a local file system path: {{< text bash >}} $ istioctl install --manifests=manifests/ {{< /text >}} If using the `istioctl` {{< istio_full_version >}} binary, this command will result in the same installation as `istioctl install` alone, because it points to the same charts as the compiled-in ones.
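Returning to the `--set`/`-f` equivalence above, the overlay file is easier to read with its YAML indentation written out. The sketch below just writes the same `IstioOperator` snippet to a file and checks its contents; the file name is only an example, and the final `istioctl install -f` step is left as a comment since it needs a live cluster.

```shell
# Write the IstioOperator overlay to a file; note the two-space
# indentation that YAML requires (lost in flattened one-line renderings).
cat <<'EOF' > ./my-config.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    accessLogFile: /dev/stdout
EOF

# The file is now ready for: istioctl install -f ./my-config.yaml
grep -c 'accessLogFile: /dev/stdout' ./my-config.yaml
```

The quoted `'EOF'` delimiter prevents any shell expansion inside the document, which keeps the YAML byte-for-byte what you wrote.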
Other than for experimenting with or testing new features, we recommend using the compiled-in charts rather than external ones to ensure compatibility of the `istioctl` binary with the charts. ## Install a different profile Other Istio configuration profiles can be installed in a cluster by passing the profile name on the command line. For example, the following command can be used to install the `demo` profile: {{< text bash >}} $ istioctl install --set profile=demo {{< /text >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/install/istioctl/index.md
## Generate a manifest before installation You can generate the manifest before installing Istio using the `manifest generate` sub-command. For example, use the following command to generate a manifest for the `default` profile that can be installed with `kubectl`: {{< text bash >}} $ istioctl manifest generate > $HOME/generated-manifest.yaml {{< /text >}} The generated manifest can be used to inspect what exactly is installed as well as to track changes to the manifest over time. While the `IstioOperator` CR represents the full user configuration and is sufficient for tracking it, the output from `manifest generate` also captures possible changes in the underlying charts and therefore can be used to track the actual installed resources. {{< tip >}} Any additional flags or custom values overrides you would normally use for installation should also be supplied to the `istioctl manifest generate` command. {{< /tip >}} {{< warning >}} If attempting to install and manage Istio using `istioctl manifest generate`, please note the following caveats: 1. The Istio namespace (`istio-system` by default) must be created manually. 1. Istio validation will not be enabled by default. Unlike `istioctl install`, the `manifest generate` command will not create the `istiod-default-validator` validating webhook configuration unless `values.defaultRevision` is set: {{< text bash >}} $ istioctl manifest generate --set values.defaultRevision=default {{< /text >}} 1. Resources may not be installed with the same sequencing of dependencies as `istioctl install`. 1. This method is not tested as part of Istio releases. 1.
While `istioctl install` will automatically detect environment-specific settings from your Kubernetes context, `manifest generate` cannot, as it runs offline, which may lead to unexpected results. In particular, you must ensure that you follow [these steps](/docs/ops/best-practices/security/#configure-third-party-service-account-tokens) if your Kubernetes environment does not support third party service account tokens. It is recommended to append `--cluster-specific` to your `istioctl manifest generate` command to detect the target cluster's environment, which will embed those cluster-specific environment settings into the generated manifests. This requires network access to your running cluster. 1. `kubectl apply` of the generated manifest may show transient errors due to resources not being available in the cluster in the correct order. 1. `istioctl install` automatically prunes any resources that should be removed when the configuration changes (e.g. if you remove a gateway). This does not happen when you use `istioctl manifest generate` with `kubectl` and these resources must be removed manually. {{< /warning >}} See [Customizing the installation configuration](/docs/setup/additional-setup/customize-installation/) for additional information on customizing the install. ## Uninstall Istio To completely uninstall Istio from a cluster, run the following command: {{< text bash >}} $ istioctl uninstall --purge {{< /text >}} {{< warning >}} The optional `--purge` flag will remove all Istio resources, including cluster-scoped resources that may be shared with other Istio control planes. {{< /warning >}} Alternatively, to remove only a specific Istio control plane, run the following command: {{< text bash >}} $ istioctl uninstall {{< /text >}} or {{< text bash >}} $ istioctl manifest generate | kubectl delete --ignore-not-found=true -f - {{< /text >}} The control plane namespace (e.g., `istio-system`) is not removed by default.
If no longer needed, use the following command to remove it: {{< text bash >}} $ kubectl delete namespace istio-system {{< /text >}}
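If you want to be cautious before removing `istio-system`, a small guard like the following refuses to delete the namespace while pods are still running in it. This is a sketch, not part of the guide's required steps: `kubectl` is shadowed by a stub function with invented output so the logic is runnable anywhere; drop the stub to run it for real.

```shell
# Stub kubectl for illustration: report no pods left in istio-system,
# and acknowledge the delete. Remove this function on a real cluster.
kubectl() {
  case "$1" in
    get) printf 'No resources found in istio-system namespace.\n' >&2 ;;
    delete) echo 'namespace "istio-system" deleted' ;;
  esac
}

# "No resources found" goes to stderr, so stdout is empty when the
# namespace has no pods left.
pods=$(kubectl get pods -n istio-system 2>/dev/null)
if [ -z "$pods" ]; then
  kubectl delete namespace istio-system
else
  echo "istio-system still has pods; skipping delete"
fi
```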
{{< boilerplate experimental-feature-warning >}} This guide walks you through the process of installing multiple Istio control planes within a single cluster and then a way to scope workloads to specific control planes. This deployment model has a single Kubernetes control plane with multiple Istio control planes and meshes. The separation between the meshes is provided by Kubernetes namespaces and RBAC. {{< image width="90%" link="single-cluster-multiple-istiods.svg" caption="Multiple meshes in a single cluster" >}} Using `discoverySelectors`, you can scope Kubernetes resources in a cluster to specific namespaces managed by an Istio control plane. This includes the Istio custom resources (e.g., Gateway, VirtualService, DestinationRule, etc.) used to configure the mesh. Furthermore, `discoverySelectors` can be used to configure which namespaces should include the `istio-ca-root-cert` config map for a particular Istio control plane. Together, these functions allow mesh operators to specify the namespaces for a given control plane, enabling soft multi-tenancy for multiple meshes based on the boundary of one or more namespaces. This guide uses `discoverySelectors`, along with the revisions capability of Istio, to demonstrate how two meshes can be deployed on a single cluster, each working with a properly scoped subset of the cluster's resources. ## Before you begin This guide requires that you have a Kubernetes cluster with any of the [supported Kubernetes versions:](/docs/releases/supported-releases#support-status-of-istio-releases) {{< supported\_kubernetes\_versions >}}. This cluster will host two control planes installed in two different system namespaces. The mesh application workloads will run in multiple application-specific namespaces, each namespace associated with one or the other control plane based on revision and discovery selector configurations. 
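The namespace-to-control-plane association described above is driven by `discoverySelectors` in the istiod configuration. As a sketch of what such an overlay can look like, the snippet below generates a per-usergroup `IstioOperator` file; the field layout (`revision`, `namespace`, `meshConfig.discoverySelectors` matching a `usergroup` label) follows the pattern this guide describes, but the `minimal` profile and the file name are assumptions for illustration.

```shell
# Generate an istiod install overlay for one usergroup. The exact field
# values (profile, file name) are illustrative assumptions.
usergroup="usergroup-1"
cat <<EOF > "istiod-${usergroup}.yaml"
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: ${usergroup}
spec:
  profile: minimal
  revision: ${usergroup}
  namespace: ${usergroup}
  meshConfig:
    discoverySelectors:
    - matchLabels:
        usergroup: ${usergroup}
EOF

# The overlay could then be applied with: istioctl install -y -f istiod-${usergroup}.yaml
grep -c "usergroup: ${usergroup}" "istiod-${usergroup}.yaml"
```

Because the heredoc is unquoted, `${usergroup}` expands when the file is written, so the same snippet can stamp out one overlay per usergroup.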
## Cluster configuration ### Deploying multiple control planes Deploying multiple Istio control planes on a single cluster can be achieved by using different system namespaces for each control plane. Istio revisions and `discoverySelectors` are then used to scope the resources and workloads that are managed by each control plane. 1. Create the first system namespace, `usergroup-1`, and deploy istiod in it: {{< text bash >}} $ kubectl create ns usergroup-1 $ kubectl label ns usergroup-1 usergroup=usergroup-1 $ istioctl install -y -f - <}} 1. Create the second system namespace, `usergroup-2`, and deploy istiod in it: {{< text bash >}} $ kubectl create ns usergroup-2 $ kubectl label ns usergroup-2 usergroup=usergroup-2 $ istioctl install -y -f - <}} 1. Deploy a policy for workloads in the `usergroup-1` namespace to only accept mutual TLS traffic: {{< text bash >}} $ kubectl apply -f - <}} 1. Deploy a policy for workloads in the `usergroup-2` namespace to only accept mutual TLS traffic: {{< text bash >}} $ kubectl apply -f - <}} ### Verify the multiple control plane creation 1. Check the labels on the system namespaces for each control plane: {{< text bash >}} $ kubectl get ns usergroup-1 usergroup-2 --show-labels NAME STATUS AGE LABELS usergroup-1 Active 13m kubernetes.io/metadata.name=usergroup-1,usergroup=usergroup-1 usergroup-2 Active 12m kubernetes.io/metadata.name=usergroup-2,usergroup=usergroup-2 {{< /text >}} 1. Verify the control planes are deployed and running: {{< text bash >}} $ kubectl get pods -n usergroup-1 NAMESPACE NAME READY STATUS RESTARTS AGE usergroup-1 istiod-usergroup-1-5ccc849b5f-wnqd6 1/1 Running 0 12m {{< /text >}} {{< text bash >}} $ kubectl get pods -n usergroup-2 NAMESPACE NAME READY STATUS RESTARTS AGE usergroup-2 istiod-usergroup-2-658d6458f7-slpd9 1/1 Running 0 12m {{< /text >}} You will notice that one istiod deployment per usergroup is created in the specified namespaces. 1. 
Run the following commands to list the installed webhooks: {{< text bash >}} $ kubectl get validatingwebhookconfiguration NAME WEBHOOKS AGE istio-validator-usergroup-1-usergroup-1 1 18m istio-validator-usergroup-2-usergroup-2 1 18m istiod-default-validator 1 18m {{< /text >}} {{< text bash >}} $ kubectl get mutatingwebhookconfiguration NAME WEBHOOKS AGE istio-revision-tag-default-usergroup-1 4 18m istio-sidecar-injector-usergroup-1-usergroup-1 2 19m istio-sidecar-injector-usergroup-2-usergroup-2 2 18m {{< /text >}} Note that
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/install/multiple-controlplanes/index.md
the output includes `istiod-default-validator` and `istio-revision-tag-default-usergroup-1`, which are the default webhook configurations used for handling requests coming from resources which are not associated with any revision. In a fully scoped environment where every control plane is associated with its resources through proper namespace labeling, there is no need for these default webhook configurations. They should never be invoked. ### Deploy application workloads per usergroup 1. Create three application namespaces: {{< text bash >}} $ kubectl create ns app-ns-1 $ kubectl create ns app-ns-2 $ kubectl create ns app-ns-3 {{< /text >}} 1. Label each namespace to associate them with their respective control planes: {{< text bash >}} $ kubectl label ns app-ns-1 usergroup=usergroup-1 istio.io/rev=usergroup-1 $ kubectl label ns app-ns-2 usergroup=usergroup-2 istio.io/rev=usergroup-2 $ kubectl label ns app-ns-3 usergroup=usergroup-2 istio.io/rev=usergroup-2 {{< /text >}} 1. Deploy one `curl` and `httpbin` application per namespace: {{< text bash >}} $ kubectl -n app-ns-1 apply -f samples/curl/curl.yaml $ kubectl -n app-ns-1 apply -f samples/httpbin/httpbin.yaml $ kubectl -n app-ns-2 apply -f samples/curl/curl.yaml $ kubectl -n app-ns-2 apply -f samples/httpbin/httpbin.yaml $ kubectl -n app-ns-3 apply -f samples/curl/curl.yaml $ kubectl -n app-ns-3 apply -f samples/httpbin/httpbin.yaml {{< /text >}} 1.
Wait a few seconds for the `httpbin` and `curl` pods to be running with sidecars injected: {{< text bash >}} $ kubectl get pods -n app-ns-1 NAME READY STATUS RESTARTS AGE httpbin-9dbd644c7-zc2v4 2/2 Running 0 115m curl-78ff5975c6-fml7c 2/2 Running 0 115m {{< /text >}} {{< text bash >}} $ kubectl get pods -n app-ns-2 NAME READY STATUS RESTARTS AGE httpbin-9dbd644c7-sd9ln 2/2 Running 0 115m curl-78ff5975c6-sz728 2/2 Running 0 115m {{< /text >}} {{< text bash >}} $ kubectl get pods -n app-ns-3 NAME READY STATUS RESTARTS AGE httpbin-9dbd644c7-8ll27 2/2 Running 0 115m curl-78ff5975c6-sg4tq 2/2 Running 0 115m {{< /text >}} ### Verify the application to control plane mapping Now that the applications are deployed, you can use the `istioctl ps` command to confirm that the application workloads are managed by their respective control plane, i.e., `app-ns-1` is managed by `usergroup-1`, `app-ns-2` and `app-ns-3` are managed by `usergroup-2`: {{< text bash >}} $ istioctl ps -i usergroup-1 NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION httpbin-9dbd644c7-hccpf.app-ns-1 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-usergroup-1-5ccc849b5f-wnqd6 1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117 curl-78ff5975c6-9zb77.app-ns-1 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-usergroup-1-5ccc849b5f-wnqd6 1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117 {{< /text >}} {{< text bash >}} $ istioctl ps -i usergroup-2 NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION httpbin-9dbd644c7-vvcqj.app-ns-3 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-usergroup-2-658d6458f7-slpd9 1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117 httpbin-9dbd644c7-xzgfm.app-ns-2 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-usergroup-2-658d6458f7-slpd9 1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117 curl-78ff5975c6-fthmt.app-ns-2 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-usergroup-2-658d6458f7-slpd9 1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117 
curl-78ff5975c6-nxtth.app-ns-3 Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-usergroup-2-658d6458f7-slpd9 1.17-alpha.f5212a6f7df61fd8156f3585154bed2f003c4117 {{< /text >}} ### Verify the application connectivity is ONLY within the respective usergroup 1. Send a request from the `curl` pod in `app-ns-1` in `usergroup-1` to the `httpbin` service in `app-ns-2` in `usergroup-2`. The communication should fail: {{< text bash >}} $ kubectl -n app-ns-1 exec "$(kubectl -n app-ns-1 get pod -l app=curl -o jsonpath={.items..metadata.name})" -c curl -- curl -sIL http://httpbin.app-ns-2.svc.cluster.local:8000 HTTP/1.1 503 Service Unavailable content-length: 95 content-type: text/plain date: Sat, 24 Dec 2022 06:54:54 GMT server: envoy {{< /text >}} 1. Send a request from the `curl` pod in `app-ns-2` in `usergroup-2` to the `httpbin` service in `app-ns-3` in `usergroup-2`. The communication should work: {{< text bash >}} $ kubectl -n app-ns-2
exec "$(kubectl -n app-ns-2 get pod -l app=curl -o jsonpath={.items..metadata.name})" -c curl -- curl -sIL http://httpbin.app-ns-3.svc.cluster.local:8000 HTTP/1.1 200 OK server: envoy date: Thu, 22 Dec 2022 15:01:36 GMT content-type: text/html; charset=utf-8 content-length: 9593 access-control-allow-origin: * access-control-allow-credentials: true x-envoy-upstream-service-time: 3 {{< /text >}} ## Cleanup 1. Clean up the first usergroup: {{< text bash >}} $ istioctl uninstall --revision usergroup-1 --set values.global.istioNamespace=usergroup-1 $ kubectl delete ns app-ns-1 usergroup-1 {{< /text >}} 1. Clean up the second usergroup: {{< text bash >}} $ istioctl uninstall --revision usergroup-2 --set values.global.istioNamespace=usergroup-2 $ kubectl delete ns app-ns-2 app-ns-3 usergroup-2 {{< /text >}} {{< warning >}} A Cluster Administrator must make sure that Mesh Administrators DO NOT have permission to invoke the global `istioctl uninstall --purge` command, because that would uninstall all control planes in the cluster. {{< /warning >}}
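The two connectivity checks above can be wrapped in a small loop that compares HTTP status codes against expectations: cross-usergroup traffic should fail (503), same-usergroup traffic should succeed (200). In this sketch the `curl` call is replaced by a hypothetical `status_for` stub returning canned codes, so the comparison logic is runnable anywhere; on a real cluster you would replace the stub with the `kubectl exec ... curl` commands shown above.

```shell
# Stubbed status lookup: pretend cross-usergroup calls get 503, in-group 200.
status_for() {
  case "$1" in
    app-ns-1:app-ns-2) echo 503 ;;  # usergroup-1 -> usergroup-2: blocked
    app-ns-2:app-ns-3) echo 200 ;;  # usergroup-2 -> usergroup-2: allowed
  esac
}

failures=0
check() {
  got=$(status_for "$1:$2")
  if [ "$got" = "$3" ]; then
    echo "OK   $1 -> $2 ($got)"
  else
    echo "FAIL $1 -> $2 (got $got, want $3)"
    failures=$((failures + 1))
  fi
}

check app-ns-1 app-ns-2 503
check app-ns-2 app-ns-3 200
echo "failures: $failures"
```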
Follow this guide to install an Istio {{< gloss >}}service mesh{{< /gloss >}} that spans multiple {{< gloss "cluster" >}}clusters{{< /gloss >}}. This guide covers some of the most common concerns when creating a {{< gloss >}}multicluster{{< /gloss >}} mesh: - [Network topologies](/docs/ops/deployment/deployment-models#network-models): one or two networks - [Control plane topologies](/docs/ops/deployment/deployment-models#control-plane-models): multiple {{< gloss "primary cluster" >}}primary clusters{{< /gloss >}}, a primary and {{< gloss >}}remote cluster{{< /gloss >}} {{< tip >}} For meshes that span more than two clusters, you can extend the steps in this guide to configure more complex topologies. See [deployment models](/docs/ops/deployment/deployment-models) for more information. {{< /tip >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/install/multicluster/_index.md
Follow this guide to verify that your multicluster Istio installation is working properly. Before proceeding, be sure to complete the steps under [before you begin](/docs/setup/install/multicluster/before-you-begin) as well as choosing and following one of the multicluster installation guides. In this guide, we will verify that multicluster is functional, deploy the `HelloWorld` application `V1` to `cluster1` and `V2` to `cluster2`. Upon receiving a request, `HelloWorld` will include its version in its response. We will also deploy the `curl` container to both clusters. We will use these pods as the source of requests to the `HelloWorld` service, simulating in-mesh traffic. Finally, after generating traffic, we will observe which cluster received the requests. ## Verify Multicluster Confirm that Istiod is now able to communicate with the Kubernetes control plane of the remote cluster: {{< text bash >}} $ istioctl remote-clusters --context="${CTX_CLUSTER1}" NAME SECRET STATUS ISTIOD cluster1 synced istiod-7b74b769db-kb4kj cluster2 istio-system/istio-remote-secret-cluster2 synced istiod-7b74b769db-kb4kj {{< /text >}} All clusters should indicate their status as `synced`. If a cluster is listed with a `STATUS` of `timeout`, that means that Istiod in the primary cluster is unable to communicate with the remote cluster. See the Istiod logs for detailed error messages. Note: if you do see `timeout` issues and there is an intermediary host (such as the [Rancher auth proxy](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint#two-authentication-methods-for-rke-clusters)) sitting between Istiod in the primary cluster and the Kubernetes control plane in the remote cluster, you may need to update the `certificate-authority-data` field of the kubeconfig that `istioctl create-remote-secret` generates in order to match the certificate being used by the intermediate host.
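If you run this check from a script, the `istioctl remote-clusters` output can be scanned for any cluster whose `STATUS` column is not `synced`. The sample text below is the output shown above, pasted inline so the parsing sketch runs without a cluster; note that the `SECRET` column may be empty, so the status is read as the second-to-last field of each row.

```shell
# Sample `istioctl remote-clusters` output, as shown earlier in this guide.
output='NAME        SECRET                                    STATUS     ISTIOD
cluster1                                              synced     istiod-7b74b769db-kb4kj
cluster2    istio-system/istio-remote-secret-cluster2 synced     istiod-7b74b769db-kb4kj'

# Skip the header, then flag any row whose STATUS field is not "synced".
# $(NF-1) is used because the SECRET column can be blank.
unsynced=$(printf '%s\n' "$output" | tail -n +2 | awk '$(NF-1) != "synced" {print $1}')

if [ -z "$unsynced" ]; then
  echo "all clusters synced"
else
  echo "not synced: $unsynced"
fi
```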
## Deploy the `HelloWorld` Service In order to make the `HelloWorld` service callable from any cluster, the DNS lookup must succeed in each cluster (see [deployment models](/docs/ops/deployment/deployment-models#dns-with-multiple-clusters) for details). We will address this by deploying the `HelloWorld` Service to each cluster in the mesh. To begin, create the `sample` namespace in each cluster: {{< text bash >}} $ kubectl create --context="${CTX\_CLUSTER1}" namespace sample $ kubectl create --context="${CTX\_CLUSTER2}" namespace sample {{< /text >}} Enable automatic sidecar injection for the `sample` namespace: {{< text bash >}} $ kubectl label --context="${CTX\_CLUSTER1}" namespace sample \ istio-injection=enabled $ kubectl label --context="${CTX\_CLUSTER2}" namespace sample \ istio-injection=enabled {{< /text >}} Create the `HelloWorld` service in both clusters: {{< text bash >}} $ kubectl apply --context="${CTX\_CLUSTER1}" \ -f @samples/helloworld/helloworld.yaml@ \ -l service=helloworld -n sample $ kubectl apply --context="${CTX\_CLUSTER2}" \ -f @samples/helloworld/helloworld.yaml@ \ -l service=helloworld -n sample {{< /text >}} ## Deploy `HelloWorld` `V1` Deploy the `helloworld-v1` application to `cluster1`: {{< text bash >}} $ kubectl apply --context="${CTX\_CLUSTER1}" \ -f @samples/helloworld/helloworld.yaml@ \ -l version=v1 -n sample {{< /text >}} Confirm the `helloworld-v1` pod status: {{< text bash >}} $ kubectl get pod --context="${CTX\_CLUSTER1}" -n sample -l app=helloworld NAME READY STATUS RESTARTS AGE helloworld-v1-86f77cd7bd-cpxhv 2/2 Running 0 40s {{< /text >}} Wait until the status of `helloworld-v1` is `Running`. 
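Rather than eyeballing the pod status, you can poll until the pod reports `Running`. In this sketch the `kubectl` call is replaced by a hypothetical `pod_phase` stub that returns `Running` immediately, so it runs anywhere; with a real cluster you would use the jsonpath query in the comment and likely a longer timeout.

```shell
# Stub for illustration. A real invocation would be something like:
#   kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=helloworld \
#     -o jsonpath='{.items[0].status.phase}'
pod_phase() { echo "Running"; }

attempts=0
until [ "$(pod_phase)" = "Running" ] || [ "$attempts" -ge 30 ]; do
  attempts=$((attempts + 1))
  sleep 1
done
echo "phase=$(pod_phase) after $attempts retries"
```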
## Deploy `HelloWorld` `V2` Deploy the `helloworld-v2` application to `cluster2`: {{< text bash >}} $ kubectl apply --context="${CTX_CLUSTER2}" \ -f @samples/helloworld/helloworld.yaml@ \ -l version=v2 -n sample {{< /text >}} Confirm the `helloworld-v2` pod status: {{< text bash >}} $ kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=helloworld NAME READY STATUS RESTARTS AGE helloworld-v2-758dd55874-6x4t8 2/2 Running 0 40s {{< /text >}} Wait until the status of `helloworld-v2` is `Running`. ## Deploy `curl` Deploy the `curl` application to both clusters: {{< text bash >}} $ kubectl apply --context="${CTX_CLUSTER1}" \ -f @samples/curl/curl.yaml@ -n sample $ kubectl apply --context="${CTX_CLUSTER2}" \ -f @samples/curl/curl.yaml@ -n sample {{< /text >}} Confirm the status of the `curl` pod on `cluster1`: {{< text bash >}} $ kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=curl NAME READY STATUS RESTARTS AGE curl-754684654f-n6bzf 2/2 Running 0 5s {{<
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/install/multicluster/verify/index.md
/text >}} Wait until the status of the `curl` pod is `Running`. Confirm the status of the `curl` pod on `cluster2`: {{< text bash >}} $ kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=curl NAME READY STATUS RESTARTS AGE curl-754684654f-dzl9j 2/2 Running 0 5s {{< /text >}} Wait until the status of the `curl` pod is `Running`. ## Verifying Cross-Cluster Traffic To verify that cross-cluster load balancing works as expected, call the `HelloWorld` service several times using the `curl` pod. To ensure load balancing is working properly, call the `HelloWorld` service from all clusters in your deployment. Send one request from the `curl` pod on `cluster1` to the `HelloWorld` service: {{< text bash >}} $ kubectl exec --context="${CTX_CLUSTER1}" -n sample -c curl \ "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \ app=curl -o jsonpath='{.items[0].metadata.name}')" \ -- curl -sS helloworld.sample:5000/hello {{< /text >}} Repeat this request several times and verify that the `HelloWorld` version toggles between `v1` and `v2`: {{< text plain >}} Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8 Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv ...
{{< /text >}} Now repeat this process from the `curl` pod on `cluster2`: {{< text bash >}} $ kubectl exec --context="${CTX_CLUSTER2}" -n sample -c curl \ "$(kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l \ app=curl -o jsonpath='{.items[0].metadata.name}')" \ -- curl -sS helloworld.sample:5000/hello {{< /text >}} Repeat this request several times and verify that the `HelloWorld` version toggles between `v1` and `v2`: {{< text plain >}} Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8 Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv ... {{< /text >}} **Congratulations!** You successfully installed and verified Istio on multiple clusters! ## Next Steps Check out the [locality load balancing tasks](/docs/tasks/traffic-management/locality-load-balancing) to learn how to control the traffic across a multicluster mesh.
Follow this guide to install the Istio control plane on both `cluster1` and `cluster2`, making each a {{< gloss >}}primary cluster{{< /gloss >}}. Cluster `cluster1` is on the `network1` network, while `cluster2` is on the `network2` network. This means there is no direct connectivity between pods across cluster boundaries. Before proceeding, be sure to complete the steps under [before you begin](/docs/setup/install/multicluster/before-you-begin). {{< boilerplate multi-cluster-with-metallb >}} In this configuration, both `cluster1` and `cluster2` observe the API Servers in each cluster for endpoints. Service workloads across cluster boundaries communicate indirectly, via dedicated gateways for [east-west](https://en.wikipedia.org/wiki/East-west\_traffic) traffic. The gateway in each cluster must be reachable from the other cluster. {{< image width="75%" link="arch.svg" caption="Multiple primary clusters on separate networks" >}} ## Set the default network for `cluster1` If the istio-system namespace is already created, we need to set the cluster's network there: {{< text bash >}} $ kubectl --context="${CTX\_CLUSTER1}" get namespace istio-system && \ kubectl --context="${CTX\_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1 {{< /text >}} ## Configure `cluster1` as a primary Create the `istioctl` configuration for `cluster1`: {{< tabset category-name="multicluster-install-type-cluster-1" >}} {{< tab name="IstioOperator" category-value="iop" >}} Install Istio as primary in `cluster1` using istioctl and the `IstioOperator` API. 
{{< text bash >}} $ cat <<EOF > cluster1.yaml apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: values: global: meshID: mesh1 multiCluster: clusterName: cluster1 network: network1 EOF {{< /text >}} Apply the configuration to `cluster1`: {{< text bash >}} $ istioctl install --context="${CTX\_CLUSTER1}" -f cluster1.yaml {{< /text >}} {{< /tab >}} {{< tab name="Helm" category-value="helm" >}} Install Istio as primary in `cluster1` using the following Helm commands: Install the `base` chart in `cluster1`: {{< text bash >}} $ helm install istio-base istio/base -n istio-system --kube-context "${CTX\_CLUSTER1}" {{< /text >}} Then, install the `istiod` chart in `cluster1` with the following multi-cluster settings: {{< text bash >}} $ helm install istiod istio/istiod -n istio-system --kube-context "${CTX\_CLUSTER1}" --set global.meshID=mesh1 --set global.multiCluster.clusterName=cluster1 --set global.network=network1 {{< /text >}} {{< /tab >}} {{< /tabset >}} ## Install the east-west gateway in `cluster1` Install a gateway in `cluster1` that is dedicated to [east-west](https://en.wikipedia.org/wiki/East-west\_traffic) traffic. By default, this gateway will be public on the Internet. Production systems may require additional access restrictions (e.g. via firewall rules) to prevent external attacks. Check with your cloud vendor to see what options are available. {{< warning >}} Layer 7 load balancers terminate TLS and are incompatible with `AUTO\_PASSTHROUGH`, which can result in mTLS handshake failures and 503 errors. Do not expose an east-west gateway with a Layer 7 load balancer.
{{< /warning >}} {{< tabset category-name="east-west-gateway-install-type-cluster-1" >}} {{< tab name="IstioOperator" category-value="iop" >}} {{< text bash >}} $ @samples/multicluster/gen-eastwest-gateway.sh@ \ --network network1 | \ istioctl --context="${CTX\_CLUSTER1}" install -y -f - {{< /text >}} {{< warning >}} If the control-plane was installed with a revision, add the `--revision rev` flag to the `gen-eastwest-gateway.sh` command. {{< /warning >}} {{< /tab >}} {{< tab name="Helm" category-value="helm" >}} Install the east-west gateway in `cluster1` using the following Helm command: {{< text bash >}} $ helm install istio-eastwestgateway istio/gateway -n istio-system --kube-context "${CTX\_CLUSTER1}" --set name=istio-eastwestgateway --set networkGateway=network1 {{< /text >}} {{< warning >}} If the control-plane was installed with a revision, you must add a `--set revision=` flag to the Helm install command. {{< /warning >}} {{< /tab >}} {{< /tabset >}} Wait for the east-west gateway to be assigned an external IP address: {{< text bash >}} $ kubectl --context="${CTX\_CLUSTER1}" get svc istio-eastwestgateway -n istio-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE istio-eastwestgateway LoadBalancer 10.80.6.124 34.75.71.237 ... 51s {{< /text >}} ## Expose services in `cluster1` Since the clusters are on separate networks, we need to expose all services (\*.local) on the east-west gateway in both clusters. While
this gateway is public on the Internet, services behind it can only be accessed by services with a trusted mTLS certificate and workload ID, just as if they were on the same network. {{< text bash >}} $ kubectl --context="${CTX\_CLUSTER1}" apply -n istio-system -f \ @samples/multicluster/expose-services.yaml@ {{< /text >}} ## Set the default network for `cluster2` If the istio-system namespace is already created, we need to set the cluster's network there: {{< text bash >}} $ kubectl --context="${CTX\_CLUSTER2}" get namespace istio-system && \ kubectl --context="${CTX\_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2 {{< /text >}} ## Configure `cluster2` as a primary Create the `istioctl` configuration for `cluster2`: {{< tabset category-name="multicluster-install-type-cluster-2" >}} {{< tab name="IstioOperator" category-value="iop" >}} Install Istio as primary in `cluster2` using istioctl and the `IstioOperator` API.
{{< text bash >}} $ cat <<EOF > cluster2.yaml apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: values: global: meshID: mesh1 multiCluster: clusterName: cluster2 network: network2 EOF {{< /text >}} Apply the configuration to `cluster2`: {{< text bash >}} $ istioctl install --context="${CTX\_CLUSTER2}" -f cluster2.yaml {{< /text >}} {{< /tab >}} {{< tab name="Helm" category-value="helm" >}} Install Istio as primary in `cluster2` using the following Helm commands: Install the `base` chart in `cluster2`: {{< text bash >}} $ helm install istio-base istio/base -n istio-system --kube-context "${CTX\_CLUSTER2}" {{< /text >}} Then, install the `istiod` chart in `cluster2` with the following multi-cluster settings: {{< text bash >}} $ helm install istiod istio/istiod -n istio-system --kube-context "${CTX\_CLUSTER2}" --set global.meshID=mesh1 --set global.multiCluster.clusterName=cluster2 --set global.network=network2 {{< /text >}} {{< /tab >}} {{< /tabset >}} ## Install the east-west gateway in `cluster2` As we did with `cluster1` above, install a gateway in `cluster2` that is dedicated to east-west traffic. {{< tabset category-name="east-west-gateway-install-type-cluster-2" >}} {{< tab name="IstioOperator" category-value="iop" >}} {{< text bash >}} $ @samples/multicluster/gen-eastwest-gateway.sh@ \ --network network2 | \ istioctl --context="${CTX\_CLUSTER2}" install -y -f - {{< /text >}} {{< /tab >}} {{< tab name="Helm" category-value="helm" >}} Install the east-west gateway in `cluster2` using the following Helm command: {{< text bash >}} $ helm install istio-eastwestgateway istio/gateway -n istio-system --kube-context "${CTX\_CLUSTER2}" --set name=istio-eastwestgateway --set networkGateway=network2 {{< /text >}} {{< warning >}} If the control-plane was installed with a revision, you must add a `--set revision=` flag to the Helm install command.
{{< /warning >}} {{< /tab >}} {{< /tabset >}} Wait for the east-west gateway to be assigned an external IP address: {{< text bash >}} $ kubectl --context="${CTX\_CLUSTER2}" get svc istio-eastwestgateway -n istio-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE istio-eastwestgateway LoadBalancer 10.0.12.121 34.122.91.98 ... 51s {{< /text >}} ## Expose services in `cluster2` As we did with `cluster1` above, expose services via the east-west gateway. {{< text bash >}} $ kubectl --context="${CTX\_CLUSTER2}" apply -n istio-system -f \ @samples/multicluster/expose-services.yaml@ {{< /text >}} ## Enable Endpoint Discovery Install a remote secret in `cluster2` that provides access to `cluster1`’s API server. {{< text bash >}} $ istioctl create-remote-secret \ --context="${CTX\_CLUSTER1}" \ --name=cluster1 | \ kubectl apply -f - --context="${CTX\_CLUSTER2}" {{< /text >}} Install a remote secret in `cluster1` that provides access to `cluster2`’s API server. {{< text bash >}} $ istioctl create-remote-secret \ --context="${CTX\_CLUSTER2}" \ --name=cluster2 | \ kubectl apply -f - --context="${CTX\_CLUSTER1}" {{< /text >}} \*\*Congratulations!\*\* You successfully installed an Istio mesh across multiple primary clusters on different networks! ## Next Steps You can now [verify the installation](/docs/setup/install/multicluster/verify). ## Cleanup Uninstall Istio from both `cluster1` and `cluster2`
using the same mechanism you installed Istio with (istioctl or Helm). {{< tabset category-name="multicluster-uninstall-type-cluster-1" >}} {{< tab name="IstioOperator" category-value="iop" >}} Uninstall Istio in `cluster1`: {{< text syntax=bash snip\_id=none >}} $ istioctl uninstall --context="${CTX\_CLUSTER1}" -y --purge $ kubectl delete ns istio-system --context="${CTX\_CLUSTER1}" {{< /text >}} Uninstall Istio in `cluster2`: {{< text syntax=bash snip\_id=none >}} $ istioctl uninstall --context="${CTX\_CLUSTER2}" -y --purge $ kubectl delete ns istio-system --context="${CTX\_CLUSTER2}" {{< /text >}} {{< /tab >}} {{< tab name="Helm" category-value="helm" >}} Delete Istio Helm installation from `cluster1`: {{< text syntax=bash >}} $ helm delete istiod -n istio-system --kube-context "${CTX\_CLUSTER1}" $ helm delete istio-eastwestgateway -n istio-system --kube-context "${CTX\_CLUSTER1}" $ helm delete istio-base -n istio-system --kube-context "${CTX\_CLUSTER1}" {{< /text >}} Delete the `istio-system` namespace from `cluster1`: {{< text syntax=bash >}} $ kubectl delete ns istio-system --context="${CTX\_CLUSTER1}" {{< /text >}} Delete Istio Helm installation from `cluster2`: {{< text syntax=bash >}} $ helm delete istiod -n istio-system --kube-context "${CTX\_CLUSTER2}" $ helm delete istio-eastwestgateway -n istio-system --kube-context "${CTX\_CLUSTER2}" $ helm delete istio-base -n istio-system --kube-context "${CTX\_CLUSTER2}" {{< /text >}} Delete the `istio-system` namespace from `cluster2`: {{< text syntax=bash >}} $ kubectl delete ns istio-system
--context="${CTX\_CLUSTER2}" {{< /text >}} (Optional) Delete CRDs installed by Istio: Deleting CRDs permanently removes any Istio resources you have created in your clusters. To delete Istio CRDs installed in your clusters: {{< text syntax=bash snip\_id=delete\_crds >}} $ kubectl get crd -oname --context "${CTX\_CLUSTER1}" | grep --color=never 'istio.io' | xargs kubectl delete --context "${CTX\_CLUSTER1}" $ kubectl get crd -oname --context "${CTX\_CLUSTER2}" | grep --color=never 'istio.io' | xargs kubectl delete --context "${CTX\_CLUSTER2}" {{< /text >}} {{< /tab >}} {{< /tabset >}}
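For reference, the remote secret applied in the endpoint discovery step above is essentially a kubeconfig wrapped in a Kubernetes Secret that `istiod` recognizes by its `istio/multiCluster` label. The sketch below is approximate; exact names and fields may vary between Istio releases:

{{< text yaml >}}
apiVersion: v1
kind: Secret
metadata:
  name: istio-remote-secret-cluster2
  namespace: istio-system
  labels:
    istio/multiCluster: "true"
  annotations:
    networking.istio.io/cluster: cluster2
stringData:
  cluster2: |
    # kubeconfig granting istiod read access to cluster2's API server
{{< /text >}}

Deleting this secret detaches the corresponding cluster from endpoint discovery.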
Follow this guide to install the Istio control plane on `cluster1` (the {{< gloss >}}primary cluster{{< /gloss >}}) and configure `cluster2` (the {{< gloss >}}remote cluster{{< /gloss >}}) to use the control plane in `cluster1`. Both clusters reside on the `network1` network, meaning there is direct connectivity between the pods in both clusters. Before proceeding, be sure to complete the steps under [before you begin](/docs/setup/install/multicluster/before-you-begin). {{< boilerplate multi-cluster-with-metallb >}} {{< warning >}} These instructions are not suitable for AWS EKS primary cluster deployment. The reason behind this incompatibility is that AWS Load Balancers (LB) are presented as Fully Qualified Domain Names (FQDN), while the remote cluster utilizes the Kubernetes service type `ExternalName`. However, the `ExternalName` type exclusively supports IP addresses and does not accommodate FQDNs. {{< /warning >}} In this configuration, cluster `cluster1` will observe the API Servers in both clusters for endpoints. In this way, the control plane will be able to provide service discovery for workloads in both clusters. Service workloads communicate directly (pod-to-pod) across cluster boundaries. Services in `cluster2` will reach the control plane in `cluster1` via a dedicated gateway for [east-west](https://en.wikipedia.org/wiki/East-west\_traffic) traffic. {{< image width="75%" link="arch.svg" caption="Primary and remote clusters on the same network" >}} ## Configure `cluster1` as a primary Create the `istioctl` configuration for `cluster1`: {{< tabset category-name="multicluster-primary-remote-install-type-primary-cluster" >}} {{< tab name="IstioOperator" category-value="iop" >}} Install Istio as primary in `cluster1` using istioctl and the `IstioOperator` API. 
{{< text bash >}} $ cat <<EOF > cluster1.yaml apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: values: global: meshID: mesh1 multiCluster: clusterName: cluster1 network: network1 externalIstiod: true EOF {{< /text >}} Apply the configuration to `cluster1`: {{< text bash >}} $ istioctl install --context="${CTX\_CLUSTER1}" -f cluster1.yaml {{< /text >}} Notice that `values.global.externalIstiod` is set to `true`. This enables the control plane installed on `cluster1` to also serve as an external control plane for other remote clusters. When this feature is enabled, `istiod` will attempt to acquire the leadership lock, and consequently manage, [appropriately annotated](#set-the-control-plane-cluster-for-cluster2) remote clusters that are attached to it (`cluster2` in this case). {{< /tab >}} {{< tab name="Helm" category-value="helm" >}} Install Istio as primary in `cluster1` using the following Helm commands: Install the `base` chart in `cluster1`: {{< text bash >}} $ helm install istio-base istio/base -n istio-system --kube-context "${CTX\_CLUSTER1}" {{< /text >}} Then, install the `istiod` chart in `cluster1` with the following multi-cluster settings: {{< text bash >}} $ helm install istiod istio/istiod -n istio-system --kube-context "${CTX\_CLUSTER1}" --set global.meshID=mesh1 --set global.externalIstiod=true --set global.multiCluster.clusterName=cluster1 --set global.network=network1 {{< /text >}} Notice that `values.global.externalIstiod` is set to `true`. This enables the control plane installed on `cluster1` to also serve as an external control plane for other remote clusters. When this feature is enabled, `istiod` will attempt to acquire the leadership lock, and consequently manage, [appropriately annotated](#set-the-control-plane-cluster-for-cluster2) remote clusters that are attached to it (`cluster2` in this case).
{{< /tab >}} {{< /tabset >}} ## Install the east-west gateway in `cluster1` Install a gateway in `cluster1` that is dedicated to [east-west](https://en.wikipedia.org/wiki/East-west\_traffic) traffic. By default, this gateway will be public on the Internet. Production systems may require additional access restrictions (e.g. via firewall rules) to prevent external attacks. Check with your cloud vendor to see what options are available. {{< tabset category-name="east-west-gateway-install-type-cluster-1" >}} {{< tab name="IstioOperator" category-value="iop" >}} {{< text bash >}} $ @samples/multicluster/gen-eastwest-gateway.sh@ \ --network network1 | \ istioctl --context="${CTX\_CLUSTER1}" install -y -f - {{< /text >}} {{< warning >}} If the control-plane was installed with a revision, add the `--revision rev` flag to the `gen-eastwest-gateway.sh` command. {{< /warning >}} {{< /tab >}} {{< tab name="Helm" category-value="helm" >}} Install the east-west gateway in `cluster1` using the following Helm command: {{< text
bash >}} $ helm install istio-eastwestgateway istio/gateway -n istio-system --kube-context "${CTX\_CLUSTER1}" --set name=istio-eastwestgateway --set networkGateway=network1 {{< /text >}} {{< warning >}} If the control-plane was installed with a revision, you must add a `--set revision=` flag to the Helm install command. {{< /warning >}} {{< /tab >}} {{< /tabset >}} Wait for the east-west gateway to be assigned an external IP address: {{< text bash >}} $ kubectl --context="${CTX\_CLUSTER1}" get svc istio-eastwestgateway -n istio-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE istio-eastwestgateway LoadBalancer 10.80.6.124 34.75.71.237 ... 51s {{< /text >}} ## Expose the control plane in `cluster1` Before we can install on `cluster2`, we first need to expose the control plane in `cluster1` so that services in `cluster2` will be able to access service discovery: {{< text bash >}} $ kubectl apply --context="${CTX\_CLUSTER1}" -n istio-system -f \ @samples/multicluster/expose-istiod.yaml@ {{< /text >}} {{< warning >}} If the control-plane was installed with a revision `rev`, use the following command instead: {{< text bash >}} $ sed 's/{{.Revision}}/rev/g' @samples/multicluster/expose-istiod-rev.yaml.tmpl@ | kubectl apply --context="${CTX\_CLUSTER1}" -n istio-system -f - {{< /text >}} {{< /warning >}} ## Set the control plane cluster for `cluster2` We need to identify the external control plane cluster that should manage `cluster2` by annotating the istio-system namespace: {{< text bash >}} $ kubectl --context="${CTX\_CLUSTER2}" create namespace istio-system $ kubectl --context="${CTX\_CLUSTER2}" annotate namespace istio-system
topology.istio.io/controlPlaneClusters=cluster1 {{< /text >}} Setting the `topology.istio.io/controlPlaneClusters` namespace annotation to `cluster1` instructs the `istiod` running in the same namespace (istio-system in this case) on `cluster1` to manage `cluster2` when it is [attached as a remote cluster](#attach-cluster2-as-a-remote-cluster-of-cluster1). ## Configure `cluster2` as a remote Save the address of `cluster1`’s east-west gateway. {{< text bash >}} $ export DISCOVERY\_ADDRESS=$(kubectl \ --context="${CTX\_CLUSTER1}" \ -n istio-system get svc istio-eastwestgateway \ -o jsonpath='{.status.loadBalancer.ingress[0].ip}') {{< /text >}} Now create a remote configuration for `cluster2`. {{< tabset category-name="multicluster-primary-remote-install-type-remote-cluster" >}} {{< tab name="IstioOperator" category-value="iop" >}} {{< text bash >}} $ cat <<EOF > cluster2.yaml apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: profile: remote values: istiodRemote: injectionPath: /inject/cluster/cluster2/net/network1 global: remotePilotAddress: ${DISCOVERY\_ADDRESS} EOF {{< /text >}} Apply the configuration to `cluster2`: {{< text bash >}} $ istioctl install --context="${CTX\_CLUSTER2}" -f cluster2.yaml {{< /text >}} {{< /tab >}} {{< tab name="Helm" category-value="helm" >}} Install Istio as remote in `cluster2` using the following Helm commands: Install the `base` chart in `cluster2`: {{< text bash >}} $ helm install istio-base istio/base -n istio-system --set profile=remote --kube-context "${CTX\_CLUSTER2}" {{< /text >}} Then, install the `istiod` chart in `cluster2` with the following multi-cluster settings: {{< text bash >}} $ helm install istiod istio/istiod -n istio-system --set profile=remote --set global.multiCluster.clusterName=cluster2 --set istiodRemote.injectionPath=/inject/cluster/cluster2/net/network1 --set global.configCluster=true --set global.remotePilotAddress="${DISCOVERY\_ADDRESS}" --kube-context "${CTX\_CLUSTER2}" {{< /text >}} {{< tip >}}
The `remote` profile for the `base` and `istiod` Helm charts is only available from Istio release 1.24 onwards. {{< /tip >}} {{< /tab >}} {{< /tabset >}} {{< tip >}} Here we're configuring the location of the control plane using the `injectionPath` and `remotePilotAddress` parameters. Although convenient for demonstration, in a production environment it is recommended to instead configure the `injectionURL` parameter using properly signed DNS certs similar to the configuration shown in the [external control plane instructions](/docs/setup/install/external-controlplane/#register-the-new-cluster). {{< /tip >}} ## Attach `cluster2` as a remote cluster of `cluster1` To attach the remote cluster to its control plane, we give the control plane in `cluster1` access to the API Server in `cluster2`. This will do
the following: - Enable the control plane to authenticate connection requests from workloads running in `cluster2`. Without API Server access, the control plane will reject the requests. - Enable discovery of service endpoints running in `cluster2`. Because it has been included in the `topology.istio.io/controlPlaneClusters` namespace annotation, the control plane on `cluster1` will also: - Patch certs in the webhooks in `cluster2`. - Start the namespace controller, which writes configmaps in namespaces in `cluster2`. To provide API Server access to `cluster2`, we generate a remote secret and apply it to `cluster1`: {{< text bash >}} $ istioctl create-remote-secret \ --context="${CTX\_CLUSTER2}" \ --name=cluster2 | \ kubectl apply -f - --context="${CTX\_CLUSTER1}" {{< /text >}} \*\*Congratulations!\*\* You successfully installed an Istio mesh across primary and remote clusters! ## Next Steps You can now [verify the installation](/docs/setup/install/multicluster/verify). ## Cleanup Uninstall Istio from both `cluster1` and `cluster2` using the same mechanism you installed Istio with (istioctl or Helm).
{{< tabset category-name="multicluster-uninstall-type-cluster-1" >}} {{< tab name="IstioOperator" category-value="iop" >}} Uninstall Istio in `cluster1`: {{< text syntax=bash snip\_id=none >}} $ istioctl uninstall --context="${CTX\_CLUSTER1}" -y --purge $ kubectl delete ns istio-system --context="${CTX\_CLUSTER1}" {{< /text >}} Uninstall Istio in `cluster2`: {{< text syntax=bash snip\_id=none >}} $ istioctl uninstall --context="${CTX\_CLUSTER2}" -y --purge $ kubectl delete ns istio-system --context="${CTX\_CLUSTER2}" {{< /text >}} {{< /tab >}} {{< tab name="Helm" category-value="helm" >}} Delete Istio Helm installation from `cluster1`: {{< text syntax=bash >}} $ helm delete istiod -n istio-system --kube-context "${CTX\_CLUSTER1}" $ helm delete istio-eastwestgateway -n istio-system --kube-context "${CTX\_CLUSTER1}" $ helm delete istio-base -n istio-system --kube-context "${CTX\_CLUSTER1}" {{< /text >}} Delete the `istio-system` namespace from `cluster1`: {{< text syntax=bash >}} $ kubectl delete ns istio-system --context="${CTX\_CLUSTER1}" {{< /text >}} Delete Istio Helm installation from `cluster2`: {{< text syntax=bash >}} $ helm delete istiod -n istio-system --kube-context "${CTX\_CLUSTER2}" $ helm delete istio-base -n istio-system --kube-context "${CTX\_CLUSTER2}" {{< /text >}} Delete the `istio-system` namespace from `cluster2`: {{< text syntax=bash >}} $ kubectl delete ns istio-system --context="${CTX\_CLUSTER2}" {{< /text >}} (Optional) Delete CRDs installed by Istio: Deleting CRDs permanently removes any Istio resources you have created in your clusters. 
To delete Istio CRDs installed in your clusters: {{< text syntax=bash snip\_id=delete\_crds >}} $ kubectl get crd -oname --context "${CTX\_CLUSTER1}" | grep --color=never 'istio.io' | xargs kubectl delete --context "${CTX\_CLUSTER1}" $ kubectl get crd -oname --context "${CTX\_CLUSTER2}" | grep --color=never 'istio.io' | xargs kubectl delete --context "${CTX\_CLUSTER2}" {{< /text >}} {{< /tab >}} {{< /tabset >}}
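As a final sanity check that the cleanup completed, you can confirm the `istio-system` namespace is gone from both clusters. Each command below should return a `NotFound` error once uninstallation has finished:

{{< text bash >}}
$ kubectl get ns istio-system --context="${CTX\_CLUSTER1}"
$ kubectl get ns istio-system --context="${CTX\_CLUSTER2}"
{{< /text >}}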
Before you begin a multicluster installation, review the [deployment models guide](/docs/ops/deployment/deployment-models) which describes the foundational concepts used throughout this guide. In addition, review the requirements and perform the initial steps below. ## Requirements ### Cluster This guide requires that you have two Kubernetes clusters with any of the [supported Kubernetes versions:](/docs/releases/supported-releases#support-status-of-istio-releases) {{< supported\_kubernetes\_versions >}}. {{< tip >}} If you are testing multicluster setup on `kind`, you can use the script `samples/kind-lb/setupkind.sh` to quickly set up clusters with load balancer support: {{< text bash >}} $ @samples/kind-lb/setupkind.sh@ --cluster-name cluster-1 --ip-space 254 $ @samples/kind-lb/setupkind.sh@ --cluster-name cluster-2 --ip-space 255 {{< /text >}} {{< /tip >}} ### API Server Access The API Server in each cluster must be accessible to the other clusters in the mesh. Many cloud providers make API Servers publicly accessible via network load balancers (NLB). If the API Server is not directly accessible, you will have to modify the installation procedure to enable access. For example, the [east-west](https://en.wikipedia.org/wiki/East-west\_traffic) gateway used in the multi-network and primary-remote configurations could also be used to enable access to the API Server. ## Environment Variables This guide will refer to two clusters: `cluster1` and `cluster2`. The following environment variables will be used throughout to simplify the instructions: Variable | Description -------- | ----------- `CTX\_CLUSTER1` | The context name in the default [Kubernetes configuration file](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) used for accessing the `cluster1` cluster. 
`CTX\_CLUSTER2` | The context name in the default [Kubernetes configuration file](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) used for accessing the `cluster2` cluster. Set the two variables before proceeding: {{< text syntax=bash snip\_id=none >}} $ export CTX\_CLUSTER1= $ export CTX\_CLUSTER2= {{< /text >}} {{< tip >}} If you're using `kind`, set the following contexts: {{< text bash >}} $ export CTX\_CLUSTER1=$(kubectl config get-contexts -o name | grep kind-cluster-1) $ export CTX\_CLUSTER2=$(kubectl config get-contexts -o name | grep kind-cluster-2) {{< /text >}} {{< /tip >}} ## Configure Trust A multicluster service mesh deployment requires that you establish trust between all clusters in the mesh. Depending on the requirements for your system, there may be multiple options available for establishing trust. See [certificate management](/docs/tasks/security/cert-management/) for detailed descriptions and instructions for all available options. Depending on which option you choose, the installation instructions for Istio may change slightly. {{< tip >}} If you are planning to deploy only one primary cluster (i.e., one of the Primary-Remote installations, below), you will only have a single CA (i.e., `istiod` on `cluster1`) issuing certificates for both clusters. In that case, you can skip the following CA certificate generation step and simply use the default self-signed CA for the installation. {{< /tip >}} This guide will assume that you use a common root to generate intermediate certificates for each primary cluster. Follow the [instructions](/docs/tasks/security/cert-management/plugin-ca-cert/) to generate and push a CA certificate secret to both the `cluster1` and `cluster2` clusters. 
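Before pushing the secrets, you can sanity-check that each cluster's intermediate CA chains to the common root. This assumes the `cluster1/` and `cluster2/` directory layout produced by the certificate generation instructions:

{{< text bash >}}
$ openssl verify -CAfile cluster1/root-cert.pem cluster1/ca-cert.pem
$ openssl verify -CAfile cluster2/root-cert.pem cluster2/ca-cert.pem
{{< /text >}}

Both commands should report `OK` if the intermediates were signed by the shared root.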
{{< tip >}} If you currently have a single cluster with a self-signed CA (as described in [Getting Started](/docs/setup/getting-started/)), you need to change the CA using one of the methods described in [certificate management](/docs/tasks/security/cert-management/). Changing the CA typically requires reinstalling Istio. The installation instructions below may have to be altered based on your choice of CA. {{< /tip >}} {{< tip >}} If you're using `kind`, you can quickly generate self-signed CA certificates for your clusters using the provided Makefile: {{< text bash >}} $ make -f @tools/certs/Makefile.selfsigned.mk@ \ ROOTCA\_CN="Root CA" \ ROOTCA\_ORG=istio.io \ root-ca $ make -f @tools/certs/Makefile.selfsigned.mk@ \ INTERMEDIATE\_CN="Cluster 1 Intermediate CA" \ INTERMEDIATE\_ORG=istio.io \ cluster1-cacerts $ make -f @tools/certs/Makefile.selfsigned.mk@ \ INTERMEDIATE\_CN="Cluster 2 Intermediate CA" \ INTERMEDIATE\_ORG=istio.io \ cluster2-cacerts {{< /text >}} This will create a root CA and intermediate CA certificates for each cluster, which you
can then use to set up trust between your clusters. To create the `cacerts` secret in each cluster, use the following command after generating the certificates: {{< text bash >}} $ kubectl --context="${CTX\_CLUSTER1}" create namespace istio-system $ kubectl --context="${CTX\_CLUSTER1}" create secret generic cacerts -n istio-system \ --from-file=ca-cert.pem=cluster1/ca-cert.pem \ --from-file=ca-key.pem=cluster1/ca-key.pem \ --from-file=root-cert.pem=cluster1/root-cert.pem \ --from-file=cert-chain.pem=cluster1/cert-chain.pem $ kubectl --context="${CTX\_CLUSTER2}" create namespace istio-system $ kubectl --context="${CTX\_CLUSTER2}" create secret generic cacerts -n istio-system \ --from-file=ca-cert.pem=cluster2/ca-cert.pem \ --from-file=ca-key.pem=cluster2/ca-key.pem \ --from-file=root-cert.pem=cluster2/root-cert.pem \ --from-file=cert-chain.pem=cluster2/cert-chain.pem {{< /text >}} This will create the `cacerts` secret in the `istio-system` namespace of each cluster, allowing Istio to use your custom CA certificates. {{< /tip >}} ## Next steps You're now ready to install an Istio mesh across multiple clusters. The particular steps will depend on your requirements for network and control plane topology.
Choose the installation that best fits your needs:

- [Install Multi-Primary](/docs/setup/install/multicluster/multi-primary)
- [Install Primary-Remote](/docs/setup/install/multicluster/primary-remote)
- [Install Multi-Primary on Different Networks](/docs/setup/install/multicluster/multi-primary_multi-network)
- [Install Primary-Remote on Different Networks](/docs/setup/install/multicluster/primary-remote_multi-network)

{{< tip >}}
If you plan on installing Istio multi-cluster using Helm, first follow the [Helm prerequisites](/docs/setup/install/helm/#prerequisites) in the Helm install guide.
{{< /tip >}}

{{< tip >}}
For meshes that span more than two clusters, you may need to use more than one of these options. For example, you may have a primary cluster per region (i.e. multi-primary) where each zone has a remote cluster that uses the control plane in the regional primary (i.e. primary-remote). See [deployment models](/docs/ops/deployment/deployment-models) for more information.
{{< /tip >}}
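As an aside, the per-cluster `cacerts` commands shown earlier differ only in kube context and certificate directory. A hypothetical helper (not part of the Istio tooling, shown only as a sketch) can print them consistently for any number of clusters:

```shell
# Hypothetical helper: print the kubectl command that creates the cacerts
# secret for one cluster, given its kube context and certificate directory.
# Nothing here talks to a cluster; the command is only printed.
make_cacerts_cmd() {
  ctx="$1"
  dir="$2"
  echo "kubectl --context=${ctx} create secret generic cacerts -n istio-system" \
    "--from-file=ca-cert.pem=${dir}/ca-cert.pem" \
    "--from-file=ca-key.pem=${dir}/ca-key.pem" \
    "--from-file=root-cert.pem=${dir}/root-cert.pem" \
    "--from-file=cert-chain.pem=${dir}/cert-chain.pem"
}

make_cacerts_cmd "ctx-cluster1" cluster1
make_cacerts_cmd "ctx-cluster2" cluster2
```

Piping the printed command to `sh` (or dropping the `echo`) would execute it; keeping it as a printer makes the loop easy to review first.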
Follow this guide to install the Istio control plane on `cluster1` (the {{< gloss >}}primary cluster{{< /gloss >}}) and configure `cluster2` (the {{< gloss >}}remote cluster{{< /gloss >}}) to use the control plane in `cluster1`. Cluster `cluster1` is on the `network1` network, while `cluster2` is on the `network2` network. This means there is no direct connectivity between pods across cluster boundaries.

Before proceeding, be sure to complete the steps under [before you begin](/docs/setup/install/multicluster/before-you-begin).

{{< boilerplate multi-cluster-with-metallb >}}

In this configuration, cluster `cluster1` will observe the API Servers in both clusters for endpoints. In this way, the control plane will be able to provide service discovery for workloads in both clusters.

Service workloads across cluster boundaries communicate indirectly, via dedicated gateways for [east-west](https://en.wikipedia.org/wiki/East-west_traffic) traffic. The gateway in each cluster must be reachable from the other cluster. Services in `cluster2` will reach the control plane in `cluster1` via the same east-west gateway.

{{< image width="75%" link="arch.svg" caption="Primary and remote clusters on separate networks" >}}

## Set the default network for `cluster1`

If the `istio-system` namespace has already been created, set the cluster's network there:

{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER1}" get namespace istio-system && \
  kubectl --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1
{{< /text >}}

## Configure `cluster1` as a primary

Create the `istioctl` configuration for `cluster1`:

{{< tabset category-name="multicluster-primary-remote-install-type-primary-cluster" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

Install Istio as primary in `cluster1` using istioctl and the `IstioOperator` API.
{{< text bash >}}
$ cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
      externalIstiod: true
EOF
{{< /text >}}

Apply the configuration to `cluster1`:

{{< text bash >}}
$ istioctl install --context="${CTX_CLUSTER1}" -f cluster1.yaml
{{< /text >}}

Notice that `values.global.externalIstiod` is set to `true`. This enables the control plane installed on `cluster1` to also serve as an external control plane for other remote clusters. When this feature is enabled, `istiod` will attempt to acquire the leadership lock, and consequently manage, [appropriately annotated](#set-the-control-plane-cluster-for-cluster2) remote clusters that are attached to it (`cluster2` in this case).

{{< /tab >}}

{{< tab name="Helm" category-value="helm" >}}

Install Istio as primary in `cluster1` using the following Helm commands:

Install the `base` chart in `cluster1`:

{{< text bash >}}
$ helm install istio-base istio/base -n istio-system --kube-context "${CTX_CLUSTER1}"
{{< /text >}}

Then, install the `istiod` chart in `cluster1` with the following multi-cluster settings:

{{< text bash >}}
$ helm install istiod istio/istiod -n istio-system --kube-context "${CTX_CLUSTER1}" --set global.meshID=mesh1 --set global.externalIstiod=true --set global.multiCluster.clusterName=cluster1 --set global.network=network1
{{< /text >}}

Notice that `values.global.externalIstiod` is set to `true`. This enables the control plane installed on `cluster1` to also serve as an external control plane for other remote clusters. When this feature is enabled, `istiod` will attempt to acquire the leadership lock, and consequently manage, [appropriately annotated](#set-the-control-plane-cluster-for-cluster2) remote clusters that are attached to it (`cluster2` in this case).
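For scripted installs, the same settings can be captured once and reused. A sketch (hypothetical helper, mirroring the `IstioOperator` manifest used above; the function name and parameterization are not part of the Istio docs):

```shell
# Hypothetical sketch: emit the primary-cluster IstioOperator manifest for a
# given cluster name ($1) and network ($2), so both values live in one place.
primary_iop() {
  cat <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: $1
      network: $2
      externalIstiod: true
EOF
}

primary_iop cluster1 network1
```

In a real install the output would be piped to `istioctl install -f -`; here it is only printed.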
{{< /tab >}}

{{< /tabset >}}

## Install the east-west gateway in `cluster1`

Install a gateway in `cluster1` that is dedicated to east-west traffic. By default, this gateway will be public on the Internet. Production systems may require additional access restrictions (e.g. via firewall rules) to prevent external attacks. Check with your cloud vendor to see what options are available.

{{< tabset category-name="east-west-gateway-install-type-cluster-1" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

{{< text bash >}}
$ @samples/multicluster/gen-eastwest-gateway.sh@ \
    --network network1 | \
    istioctl --context="${CTX_CLUSTER1}" install -y -f -
{{< /text >}}

{{< warning >}}
If the control plane was installed with a revision, add the `--revision rev` flag to the `gen-eastwest-gateway.sh` command.
{{< /warning >}}

{{< /tab >}}
{{< tab name="Helm" category-value="helm" >}}

Install the east-west gateway in `cluster1` using the following Helm command:

{{< text bash >}}
$ helm install istio-eastwestgateway istio/gateway -n istio-system --kube-context "${CTX_CLUSTER1}" --set name=istio-eastwestgateway --set networkGateway=network1
{{< /text >}}

{{< warning >}}
If the control plane was installed with a revision, you must add a `--set revision=` flag to the Helm install command.
{{< /warning >}}

{{< /tab >}}

{{< /tabset >}}

Wait for the east-west gateway to be assigned an external IP address:

{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER1}" get svc istio-eastwestgateway -n istio-system
NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)   AGE
istio-eastwestgateway   LoadBalancer   10.80.6.124   34.75.71.237   ...
51s
{{< /text >}}

## Expose the control plane in `cluster1`

Before we can install on `cluster2`, we need to first expose the control plane in `cluster1` so that services in `cluster2` will be able to access service discovery:

{{< text bash >}}
$ kubectl apply --context="${CTX_CLUSTER1}" -n istio-system -f \
    @samples/multicluster/expose-istiod.yaml@
{{< /text >}}

{{< warning >}}
If the control plane was installed with a revision `rev`, use the following command instead:

{{< text bash >}}
$ sed 's/{{.Revision}}/rev/g' @samples/multicluster/expose-istiod-rev.yaml.tmpl@ | kubectl apply --context="${CTX_CLUSTER1}" -n istio-system -f -
{{< /text >}}

{{< /warning >}}

## Set the control plane cluster for `cluster2`

We need to identify the external control plane cluster that should manage `cluster2` by annotating the `istio-system` namespace:

{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER2}" create namespace istio-system
$ kubectl --context="${CTX_CLUSTER2}" annotate namespace istio-system topology.istio.io/controlPlaneClusters=cluster1
{{< /text >}}

Setting the `topology.istio.io/controlPlaneClusters` namespace annotation to `cluster1` instructs the `istiod` running in the same namespace (`istio-system` in this case) on `cluster1` to manage `cluster2` when it is [attached as a remote cluster](#attach-cluster2-as-a-remote-cluster-of-cluster1).

## Set the default network for `cluster2`

Set the network for `cluster2` by adding a label to the `istio-system` namespace:

{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2
{{< /text >}}

## Configure `cluster2` as a remote

Save the address of `cluster1`'s east-west gateway.

{{< text bash >}}
$ export DISCOVERY_ADDRESS=$(kubectl \
    --context="${CTX_CLUSTER1}" \
    -n istio-system get svc istio-eastwestgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
{{< /text >}}

Now create a remote configuration on `cluster2`.
{{< tabset category-name="multicluster-primary-remote-install-type-remote-cluster" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

{{< text bash >}}
$ cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: remote
  values:
    istiodRemote:
      injectionPath: /inject/cluster/cluster2/net/network2
    global:
      remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF
{{< /text >}}

Apply the configuration to `cluster2`:

{{< text bash >}}
$ istioctl install --context="${CTX_CLUSTER2}" -f cluster2.yaml
{{< /text >}}

{{< /tab >}}

{{< tab name="Helm" category-value="helm" >}}

Install Istio as remote in `cluster2` using the following Helm commands:

Install the `base` chart in `cluster2`:

{{< text bash >}}
$ helm install istio-base istio/base -n istio-system --set profile=remote --kube-context "${CTX_CLUSTER2}"
{{< /text >}}

Then, install the `istiod` chart in `cluster2` with the following multi-cluster settings:

{{< text bash >}}
$ helm install istiod istio/istiod -n istio-system --set profile=remote --set global.multiCluster.clusterName=cluster2 --set global.network=network2 --set istiodRemote.injectionPath=/inject/cluster/cluster2/net/network2 --set global.configCluster=true --set global.remotePilotAddress="${DISCOVERY_ADDRESS}" --kube-context "${CTX_CLUSTER2}"
{{< /text >}}

{{< tip >}}
The `remote` profile for the `base` and `istiod` Helm charts is only available from Istio release 1.24 onwards.
{{< /tip >}}

{{< /tab >}}

{{< /tabset >}}
{{< tip >}}
Here we're configuring the location of the control plane using the `injectionPath` and `remotePilotAddress` parameters. Although convenient for demonstration, in a production environment it is recommended to instead configure the `injectionURL` parameter using properly signed DNS certs, similar to the configuration shown in the [external control plane instructions](/docs/setup/install/external-controlplane/#register-the-new-cluster).
{{< /tip >}}

## Attach `cluster2` as a remote cluster of `cluster1`

To attach the remote cluster to its control plane, we give the control plane in `cluster1` access to the API Server in `cluster2`. This does the following:

- Enables the control plane to authenticate connection requests from workloads running in `cluster2`. Without API Server access, the control plane will reject the requests.
- Enables discovery of service endpoints running in `cluster2`.

Because it has been included in the `topology.istio.io/controlPlaneClusters` namespace annotation, the control plane on `cluster1` will also:

- Patch certs in the webhooks in `cluster2`.
- Start the namespace controller, which writes configmaps in namespaces in `cluster2`.

To provide API Server access to `cluster2`, we generate a remote secret and apply it to `cluster1`:

{{< text bash >}}
$ istioctl create-remote-secret \
    --context="${CTX_CLUSTER2}" \
    --name=cluster2 | \
    kubectl apply -f - --context="${CTX_CLUSTER1}"
{{< /text >}}

## Install the east-west gateway in `cluster2`

As we did with `cluster1` above, install a gateway in `cluster2` that is dedicated to east-west traffic, and expose user services.
{{< tabset category-name="east-west-gateway-install-type-cluster-2" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

{{< text bash >}}
$ @samples/multicluster/gen-eastwest-gateway.sh@ \
    --network network2 | \
    istioctl --context="${CTX_CLUSTER2}" install -y -f -
{{< /text >}}

{{< /tab >}}

{{< tab name="Helm" category-value="helm" >}}

Install the east-west gateway in `cluster2` using the following Helm command:

{{< text bash >}}
$ helm install istio-eastwestgateway istio/gateway -n istio-system --kube-context "${CTX_CLUSTER2}" --set name=istio-eastwestgateway --set networkGateway=network2
{{< /text >}}

{{< warning >}}
If the control plane was installed with a revision, you must add a `--set revision=` flag to the Helm install command.
{{< /warning >}}

{{< /tab >}}

{{< /tabset >}}

Wait for the east-west gateway to be assigned an external IP address:

{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER2}" get svc istio-eastwestgateway -n istio-system
NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)   AGE
istio-eastwestgateway   LoadBalancer   10.0.12.121   34.122.91.98   ...       51s
{{< /text >}}

## Expose services in `cluster1` and `cluster2`

Since the clusters are on separate networks, we also need to expose all user services (`*.local`) on the east-west gateway in both clusters. While these gateways are public on the Internet, services behind them can only be accessed by services with a trusted mTLS certificate and workload ID, just as if they were on the same network.

{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER1}" apply -n istio-system -f \
    @samples/multicluster/expose-services.yaml@
{{< /text >}}

{{< tip >}}
Since `cluster2` is installed with a remote profile, exposing services on the primary cluster will expose them on the east-west gateways of both clusters.
{{< /tip >}}

**Congratulations!** You successfully installed an Istio mesh across primary and remote clusters on different networks!
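As an aside, the two "wait for an external IP" steps above can be scripted. A sketch with a hypothetical `wait_for_addr` helper; the address query is stubbed here, and against a real cluster it would be the `kubectl get svc ... -o jsonpath` command shown earlier:

```shell
# Hypothetical polling helper: run the given address-query command up to
# $1 times, one second apart, until it prints a non-empty address.
wait_for_addr() {
  tries="$1"; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    addr=$("$@")
    if [ -n "$addr" ]; then
      echo "$addr"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Stand-in for the kubectl jsonpath query (replace when running for real).
get_addr() { echo "203.0.113.10"; }

wait_for_addr 5 get_addr
```

The helper exits non-zero if no address appears within the allotted attempts, which makes it usable as a gate in CI-style install scripts.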
## Next Steps

You can now [verify the installation](/docs/setup/install/multicluster/verify).

## Cleanup

Uninstall Istio from both `cluster1` and `cluster2` using the same mechanism you installed Istio with (istioctl or Helm).

{{< tabset category-name="multicluster-uninstall-type-cluster-1" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

Uninstall Istio in `cluster1`:

{{< text syntax=bash snip_id=none >}}
$ istioctl uninstall --context="${CTX_CLUSTER1}" -y --purge
$ kubectl delete ns istio-system --context="${CTX_CLUSTER1}"
{{< /text >}}

Uninstall Istio in `cluster2`:

{{< text syntax=bash snip_id=none >}}
$ istioctl uninstall --context="${CTX_CLUSTER2}" -y --purge
$ kubectl delete ns istio-system --context="${CTX_CLUSTER2}"
{{< /text >}}

{{< /tab >}}
{{< tab name="Helm" category-value="helm" >}}

Delete the Istio Helm installation from `cluster1`:

{{< text syntax=bash >}}
$ helm delete istiod -n istio-system --kube-context "${CTX_CLUSTER1}"
$ helm delete istio-eastwestgateway -n istio-system --kube-context "${CTX_CLUSTER1}"
$ helm delete istio-base -n istio-system --kube-context "${CTX_CLUSTER1}"
{{< /text >}}

Delete the `istio-system` namespace from `cluster1`:

{{< text syntax=bash >}}
$ kubectl delete ns istio-system --context="${CTX_CLUSTER1}"
{{< /text >}}

Delete the Istio Helm installation from `cluster2`:

{{< text syntax=bash >}}
$ helm delete istiod -n istio-system --kube-context "${CTX_CLUSTER2}"
$ helm delete istio-eastwestgateway -n istio-system --kube-context "${CTX_CLUSTER2}"
$ helm delete istio-base -n istio-system --kube-context "${CTX_CLUSTER2}"
{{< /text >}}

Delete the `istio-system` namespace from `cluster2`:

{{< text syntax=bash >}}
$ kubectl delete ns istio-system --context="${CTX_CLUSTER2}"
{{< /text >}}

(Optional) Delete CRDs installed by Istio:

Deleting CRDs permanently removes any Istio resources you have created in your clusters. To delete Istio CRDs installed in your clusters:

{{< text syntax=bash snip_id=delete_crds >}}
$ kubectl get crd -oname --context "${CTX_CLUSTER1}" | grep --color=never 'istio.io' | xargs kubectl delete --context "${CTX_CLUSTER1}"
$ kubectl get crd -oname --context "${CTX_CLUSTER2}" | grep --color=never 'istio.io' | xargs kubectl delete --context "${CTX_CLUSTER2}"
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}
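The filtering half of the CRD cleanup pipeline can be exercised without a cluster. A sketch (the sample CRD names are made up for illustration):

```shell
# Only names under istio.io pass the grep filter used by the cleanup
# pipeline; unrelated CRDs (e.g. Gateway API or Calico) are left alone.
crds='bgpconfigurations.crd.projectcalico.org
virtualservices.networking.istio.io
wasmplugins.extensions.istio.io
gateways.gateway.networking.k8s.io'

istio_crds=$(printf '%s\n' "$crds" | grep 'istio.io')
echo "$istio_crds"
```

In the real pipeline this filtered list is what `xargs kubectl delete` receives, which is why reviewing the grep output first is a cheap safety check.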
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/install/multicluster/primary-remote_multi-network/index.md
Follow this guide to install the Istio control plane on both `cluster1` and `cluster2`, making each a {{< gloss >}}primary cluster{{< /gloss >}}. Both clusters reside on the `network1` network, meaning there is direct connectivity between the pods in both clusters.

Before proceeding, be sure to complete the steps under [before you begin](/docs/setup/install/multicluster/before-you-begin).

In this configuration, each control plane observes the API Servers in both clusters for endpoints. Service workloads communicate directly (pod-to-pod) across cluster boundaries.

{{< image width="75%" link="arch.svg" caption="Multiple primary clusters on the same network" >}}

## Configure `cluster1` as a primary

Create the `istioctl` configuration for `cluster1`:

{{< tabset category-name="multicluster-install-type-cluster-1" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

Install Istio as primary in `cluster1` using istioctl and the `IstioOperator` API.

{{< text bash >}}
$ cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
EOF
{{< /text >}}

Apply the configuration to `cluster1`:

{{< text bash >}}
$ istioctl install --context="${CTX_CLUSTER1}" -f cluster1.yaml
{{< /text >}}

{{< /tab >}}

{{< tab name="Helm" category-value="helm" >}}

Install Istio as primary in `cluster1` using the following Helm commands:

Install the `base` chart in `cluster1`:

{{< text bash >}}
$ helm install istio-base istio/base -n istio-system --kube-context "${CTX_CLUSTER1}"
{{< /text >}}

Then, install the `istiod` chart in `cluster1` with the following multi-cluster settings:

{{< text bash >}}
$ helm install istiod istio/istiod -n istio-system --kube-context "${CTX_CLUSTER1}" --set global.meshID=mesh1 --set global.multiCluster.clusterName=cluster1 --set global.network=network1
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

## Configure `cluster2` as a primary

Create the `istioctl`
configuration for `cluster2`:

{{< tabset category-name="multicluster-install-type-cluster-2" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

Install Istio as primary in `cluster2` using istioctl and the `IstioOperator` API.

{{< text bash >}}
$ cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network1
EOF
{{< /text >}}

Apply the configuration to `cluster2`:

{{< text bash >}}
$ istioctl install --context="${CTX_CLUSTER2}" -f cluster2.yaml
{{< /text >}}

{{< /tab >}}

{{< tab name="Helm" category-value="helm" >}}

Install Istio as primary in `cluster2` using the following Helm commands:

Install the `base` chart in `cluster2`:

{{< text bash >}}
$ helm install istio-base istio/base -n istio-system --kube-context "${CTX_CLUSTER2}"
{{< /text >}}

Then, install the `istiod` chart in `cluster2` with the following multi-cluster settings:

{{< text bash >}}
$ helm install istiod istio/istiod -n istio-system --kube-context "${CTX_CLUSTER2}" --set global.meshID=mesh1 --set global.multiCluster.clusterName=cluster2 --set global.network=network1
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

## Enable Endpoint Discovery

Install a remote secret in `cluster2` that provides access to `cluster1`'s API server.

{{< text bash >}}
$ istioctl create-remote-secret \
    --context="${CTX_CLUSTER1}" \
    --name=cluster1 | \
    kubectl apply -f - --context="${CTX_CLUSTER2}"
{{< /text >}}

Install a remote secret in `cluster1` that provides access to `cluster2`'s API server.

{{< text bash >}}
$ istioctl create-remote-secret \
    --context="${CTX_CLUSTER2}" \
    --name=cluster2 | \
    kubectl apply -f - --context="${CTX_CLUSTER1}"
{{< /text >}}

**Congratulations!** You successfully installed an Istio mesh across multiple primary clusters!

## Next Steps

You can now [verify the installation](/docs/setup/install/multicluster/verify).
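The two endpoint-discovery commands are mirror images of each other, which makes them easy to generate. A sketch (hypothetical helper, not part of istioctl; it only prints the command to run):

```shell
# Hypothetical helper: print the command that shares one cluster's
# API-server credentials (from_ctx/from_name) with the other (to_ctx).
remote_secret_cmd() {
  from_ctx="$1"; from_name="$2"; to_ctx="$3"
  printf 'istioctl create-remote-secret --context="%s" --name=%s | kubectl apply -f - --context="%s"\n' \
    "$from_ctx" "$from_name" "$to_ctx"
}

remote_secret_cmd "ctx-cluster1" cluster1 "ctx-cluster2"
remote_secret_cmd "ctx-cluster2" cluster2 "ctx-cluster1"
```

Calling the helper once per direction keeps the symmetric pair in sync, which is the part that is easy to get backwards when typing the commands by hand.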
## Cleanup

Uninstall Istio from both `cluster1` and `cluster2` using the same mechanism you installed Istio with (istioctl or Helm).

{{< tabset category-name="multicluster-uninstall-type-cluster-1" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

Uninstall Istio in `cluster1`:

{{< text syntax=bash snip_id=none >}}
$ istioctl uninstall --context="${CTX_CLUSTER1}" -y --purge
$ kubectl delete ns istio-system --context="${CTX_CLUSTER1}"
{{< /text >}}

Uninstall Istio in `cluster2`:

{{< text syntax=bash snip_id=none >}}
$ istioctl uninstall --context="${CTX_CLUSTER2}" -y --purge
$ kubectl delete ns istio-system --context="${CTX_CLUSTER2}"
{{< /text >}}

{{< /tab >}}
{{< tab name="Helm" category-value="helm" >}}

Delete the Istio Helm installation from `cluster1`:

{{< text syntax=bash >}}
$ helm delete istiod -n istio-system --kube-context "${CTX_CLUSTER1}"
$ helm delete istio-base -n istio-system --kube-context "${CTX_CLUSTER1}"
{{< /text >}}

Delete the `istio-system` namespace from `cluster1`:

{{< text syntax=bash >}}
$ kubectl delete ns istio-system --context="${CTX_CLUSTER1}"
{{< /text >}}

Delete the Istio Helm installation from `cluster2`:

{{< text syntax=bash >}}
$ helm delete istiod -n istio-system --kube-context "${CTX_CLUSTER2}"
$ helm delete istio-base -n istio-system --kube-context "${CTX_CLUSTER2}"
{{< /text >}}

Delete the `istio-system` namespace from `cluster2`:

{{< text syntax=bash >}}
$ kubectl delete ns istio-system --context="${CTX_CLUSTER2}"
{{< /text >}}

(Optional) Delete CRDs installed by Istio:

Deleting CRDs permanently removes any Istio resources you have created in your clusters. Delete Istio CRDs installed in your clusters by running:

{{< text syntax=bash snip_id=delete_crds >}}
$ kubectl get crd -oname --context "${CTX_CLUSTER1}" | grep --color=never 'istio.io' | xargs kubectl delete --context "${CTX_CLUSTER1}"
$ kubectl get crd -oname --context "${CTX_CLUSTER2}" | grep --color=never 'istio.io' | xargs kubectl delete --context "${CTX_CLUSTER2}"
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}
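Because the per-cluster cleanup steps are symmetric, they can also be driven from a loop over kube contexts. A sketch (commands are echoed rather than executed, so running it changes nothing):

```shell
# Hypothetical sketch: print the istioctl-based cleanup commands for each
# kube context passed in, one cluster after another.
cleanup_cmds() {
  for ctx in "$@"; do
    echo "istioctl uninstall --context=${ctx} -y --purge"
    echo "kubectl delete ns istio-system --context=${ctx}"
  done
}

cleanup_cmds ctx-cluster1 ctx-cluster2
```

Reviewing the printed commands before piping them to `sh` is a reasonable safeguard, since `--purge` removes all Istio resources in a cluster.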
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/install/multicluster/multi-primary/index.md
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/install/helm/index.md

Follow this guide to install and configure an Istio mesh using [Helm](https://helm.sh/docs/).

{{< boilerplate helm-preamble >}}

{{< boilerplate helm-prereqs >}}

## Installation steps

This section describes the procedure to install Istio using Helm. The general syntax for a Helm installation is:

{{< text syntax=bash snip_id=none >}}
$ helm install <release> <chart> --namespace <namespace> --create-namespace [--set <other_parameters>]
{{< /text >}}

The variables specified in the command are as follows:

* `<chart>` A path to a packaged chart, a path to an unpacked chart directory or a URL.
* `<release>` A name to identify and manage the Helm chart once installed.
* `<namespace>` The namespace in which the chart is to be installed.

Default configuration values can be changed using one or more `--set <parameter>=<value>` arguments. Alternatively, you can specify several parameters in a custom values file using the `--values <file>` argument.

{{< tip >}}
You can display the default values of configuration parameters using the `helm show values <chart>` command or refer to `artifacthub` chart documentation at [Custom Resource Definition parameters](https://artifacthub.io/packages/helm/istio-official/base?modal=values), [Istiod chart configuration parameters](https://artifacthub.io/packages/helm/istio-official/istiod?modal=values) and [Gateway chart configuration parameters](https://artifacthub.io/packages/helm/istio-official/gateway?modal=values).
{{< /tip >}}

1. Install the Istio base chart, which contains cluster-wide Custom Resource Definitions (CRDs) that must be installed prior to the deployment of the Istio control plane:

    {{< warning >}}
    When performing a revisioned installation, the base chart requires the `--set defaultRevision=<revision>` value to be set for resource validation to function. Below we install the `default` revision, so `--set defaultRevision=default` is configured.
    {{< /warning >}}

    {{< text syntax=bash snip_id=install_base >}}
    $ helm install istio-base istio/base -n istio-system --set defaultRevision=default --create-namespace
    {{< /text >}}
1. Validate the CRD installation with the `helm ls` command:

    {{< text syntax=bash >}}
    $ helm ls -n istio-system
    NAME       NAMESPACE    REVISION UPDATED                                 STATUS   CHART                              APP VERSION
    istio-base istio-system 1        2024-04-17 22:14:45.964722028 +0000 UTC deployed base-{{< istio_full_version >}}   {{< istio_full_version >}}
    {{< /text >}}

    In the output locate the entry for `istio-base` and make sure the status is set to `deployed`.

1. If you intend to use the Istio CNI chart you must do so now. See [Install Istio with the CNI plugin](/docs/setup/additional-setup/cni/#installing-with-helm) for more info.

1. Install the Istio discovery chart which deploys the `istiod` service:

    {{< text syntax=bash snip_id=install_discovery >}}
    $ helm install istiod istio/istiod -n istio-system --wait
    {{< /text >}}

1. Verify the Istio discovery chart installation:

    {{< text syntax=bash >}}
    $ helm ls -n istio-system
    NAME       NAMESPACE    REVISION UPDATED                                 STATUS   CHART                                APP VERSION
    istio-base istio-system 1        2024-04-17 22:14:45.964722028 +0000 UTC deployed base-{{< istio_full_version >}}     {{< istio_full_version >}}
    istiod     istio-system 1        2024-04-17 22:14:45.964722028 +0000 UTC deployed istiod-{{< istio_full_version >}}   {{< istio_full_version >}}
    {{< /text >}}

1. Get the status of the installed Helm chart to ensure it is deployed:

    {{< text syntax=bash >}}
    $ helm status istiod -n istio-system
    NAME: istiod
    LAST DEPLOYED: Fri Jan 20 22:00:44 2023
    NAMESPACE: istio-system
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    "istiod" successfully installed!
    To learn more about the release, try:
      $ helm status istiod
      $ helm get all istiod

    Next steps:
      * Deploy a Gateway: https://istio.io/latest/docs/setup/additional-setup/gateway/
      * Try out our tasks to get started on common configurations:
        * https://istio.io/latest/docs/tasks/traffic-management
        * https://istio.io/latest/docs/tasks/security/
        * https://istio.io/latest/docs/tasks/policy-enforcement/
      * Review the list of actively supported releases, CVE publications and our hardening guide:
        * https://istio.io/latest/docs/releases/supported-releases/
        * https://istio.io/latest/news/security/
        * https://istio.io/latest/docs/ops/best-practices/security/

    For further documentation see https://istio.io website

    Tell us how your install/upgrade experience went at https://forms.gle/99uiMML96AmsXY5d6
    {{< /text >}}

1. Check that the `istiod` service is successfully installed and its pods are running:

    {{< text syntax=bash >}}
    $ kubectl get deployments -n istio-system --output wide
    NAME     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                             SELECTOR
    istiod   1/1     1            1           10m   discovery    docker.io/istio/pilot:{{< istio_full_version >}}   istio=pilot
    {{< /text >}}

1. (Optional) Install an ingress gateway:

    {{< text syntax=bash snip_id=install_ingressgateway >}}
    $ kubectl create namespace istio-ingress
    $ helm install istio-ingress istio/gateway -n istio-ingress --wait
    {{< /text >}}

    See [Installing Gateways](/docs/setup/additional-setup/gateway/) for in-depth documentation on gateway installation.

    {{< warning >}}
    The namespace the gateway is deployed in must not have an `istio-injection=disabled` label. See [Controlling the injection policy](/docs/setup/additional-setup/sidecar-injection/#controlling-the-injection-policy) for more info.
    {{< /warning >}}

{{< tip >}}
See [Advanced Helm Chart Customization](/docs/setup/additional-setup/customize-installation-helm/) for in-depth documentation on how to use Helm post-renderer to customize the Helm charts.
{{< /tip >}}

## Updating your Istio configuration

You can provide override settings specific to any Istio Helm chart used above and follow the Helm upgrade workflow to customize your Istio mesh installation. The available configurable options can be found by using `helm show values istio/<chart>`; for example `helm show values istio/gateway`.

### Migrating from non-Helm installations

If you're migrating from a version of Istio installed using `istioctl` (Istio 1.5 or earlier) to Helm, you need to delete your current Istio control plane resources and re-install Istio using Helm as described above. When deleting your current Istio installation, you must not remove the Istio Custom Resource Definitions (CRDs), as that can lead to loss of your custom Istio resources.
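One hedged way to take such a backup is sketched below. It assumes `istio-io` is the API resource category that Istio CRDs register (verify with `kubectl api-resources` in your cluster first); the helper only prints the command rather than running it:

```shell
# Hypothetical helper: print a command that dumps all Istio custom
# resources, in all namespaces, to the given file before any deletion.
backup_cmd() {
  echo "kubectl get istio-io --all-namespaces -oyaml > $1"
}

backup_cmd ./istio-resource-backup.yaml
```

Re-applying the saved YAML with `kubectl apply -f` would restore the resources if something goes wrong during the migration.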
{{< warning >}} It is highly recommended to take a backup of your Istio resources using the steps described above before deleting the current Istio installation in your cluster. {{< /warning >}} You can follow the steps in the [Istioctl uninstall guide](/docs/setup/install/istioctl#uninstall-istio). ## Uninstall You can uninstall Istio and its components by uninstalling the charts installed above. 1. List all the Istio charts installed in the `istio-system` namespace: {{< text syntax=bash snip\_id=helm\_ls >}} $ helm ls -n istio-system NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION istio-base istio-system 1 2024-04-17 22:14:45.964722028 +0000 UTC deployed base-{{< istio\_full\_version >}} {{< istio\_full\_version >}} istiod istio-system 1 2024-04-17 22:14:45.964722028 +0000 UTC deployed istiod-{{< istio\_full\_version >}} {{< istio\_full\_version >}} {{< /text >}} 1. (Optional) Delete any Istio gateway chart installations: {{< text syntax=bash snip\_id=delete\_delete\_gateway\_charts >}} $ helm delete istio-ingress -n istio-ingress $ kubectl delete namespace istio-ingress {{< /text >}} 1. Delete the Istio discovery chart: {{< text syntax=bash snip\_id=helm\_delete\_discovery\_chart >}} $ helm delete istiod -n istio-system {{< /text >}} 1. Delete the Istio base chart: {{< tip >}} By design, deleting a chart via Helm doesn't delete the Custom Resource Definitions (CRDs) installed via the chart. {{< /tip >}} {{< text syntax=bash snip\_id=helm\_delete\_base\_chart >}} $ helm delete istio-base -n istio-system {{< /text >}} 1.
Delete the `istio-system` namespace: {{< text syntax=bash snip\_id=delete\_istio\_system\_namespace >}} $ kubectl delete namespace istio-system {{< /text >}} ## Uninstall stable revision label resources If you decide to continue using the old control plane, instead of completing the update, you can uninstall the newer revision and its tag by first issuing `helm template istiod istio/istiod -s templates/revision-tags-mwc.yaml --set revisionTags={prod-canary} --set revision=canary -n istio-system | kubectl delete -f -`. You must then uninstall the revision of Istio that it pointed to by following the uninstall procedure above. If you installed the gateway(s) for this revision using in-place upgrades, you must also reinstall the gateway(s) for the previous revision manually. Removing the previous revision and its tags will not automatically revert the previously upgraded gateway(s). ### (Optional) Deleting CRDs installed by Istio Deleting CRDs permanently removes any Istio resources you have created in your cluster. To
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/install/helm/index.md
delete Istio CRDs installed in your cluster: {{< text syntax=bash snip\_id=delete\_crds >}} $ kubectl get crd -oname | grep --color=never 'istio.io' | xargs kubectl delete {{< /text >}} ## Generate a manifest before installation You can generate the manifests for each component before installing Istio using the `helm template` sub-command. For example, to generate a manifest that can be installed with `kubectl` for the `istiod` component: {{< text syntax=bash snip\_id=none >}} $ helm template istiod istio/istiod -n istio-system --kube-version {Kubernetes version of target cluster} > istiod.yaml {{< /text >}} The generated manifest can be used to inspect what exactly is installed as well as to track changes to the manifest over time. {{< tip >}} Any additional flags or custom values overrides you would normally use for installation should also be supplied to the `helm template` command. {{< /tip >}} To install the manifest generated above, which will create the `istiod` component in the target cluster: {{< text syntax=bash snip\_id=none >}} $ kubectl apply -f istiod.yaml {{< /text >}} {{< warning >}} If attempting to install and manage Istio using `helm template`, please note the following caveats: 1. The Istio namespace (`istio-system` by default) must be created manually. 1. Resources may not be installed with the same sequencing of dependencies as `helm install`. 1. This method is not tested as part of Istio releases. 1. While `helm install` will automatically detect environment specific settings from your Kubernetes context, `helm template` cannot as it runs offline, which may lead to unexpected results.
In particular, you must ensure that you follow [these steps](/docs/ops/best-practices/security/#configure-third-party-service-account-tokens) if your Kubernetes environment does not support third party service account tokens. 1. `kubectl apply` of the generated manifest may show transient errors due to resources not being available in the cluster in the correct order. 1. `helm install` automatically prunes any resources that should be removed when the configuration changes (e.g. if you remove a gateway). This does not happen when you use `helm template` with `kubectl`, and these resources must be removed manually. {{< /warning >}}
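Any custom values overrides mentioned in the tip above are typically kept in a values file and passed with `-f`. A sketch of a hypothetical `my-values.yaml` for the `istiod` chart (the keys shown are assumptions based on the chart's defaults; check `helm show values istio/istiod` for the authoritative list):

{{< text syntax=yaml snip\_id=none >}}
# my-values.yaml -- illustrative overrides for the istiod chart
meshConfig:
  accessLogFile: /dev/stdout   # emit Envoy access logs to stdout
pilot:
  autoscaleEnabled: false      # disable the istiod HorizontalPodAutoscaler
{{< /text >}}

The same file would then be supplied to both commands, for example `helm template istiod istio/istiod -n istio-system -f my-values.yaml > istiod.yaml` followed by `kubectl apply -f istiod.yaml`.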
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/install/helm/index.md
{{< tip >}} Want to explore Istio's {{< gloss "ambient" >}}ambient mode{{< /gloss >}}? Visit the [Getting Started with Ambient Mode](/docs/ambient/getting-started) guide! {{< /tip >}} This guide lets you quickly evaluate Istio. If you are already familiar with Istio or interested in installing other configuration profiles or advanced [deployment models](/docs/ops/deployment/deployment-models/), refer to our [which Istio installation method should I use?](/about/faq/#install-method-selection) FAQ page. You will need a Kubernetes cluster to proceed. If you don't have a cluster, you can use [kind](/docs/setup/platform-setup/kind) or any other [supported Kubernetes platform](/docs/setup/platform-setup). Follow these steps to get started with Istio: 1. [Download and install Istio](#download) 1. [Install the Kubernetes Gateway API CRDs](#gateway-api) 1. [Deploy the sample application](#bookinfo) 1. [Open the application to outside traffic](#ip) 1. [View the dashboard](#dashboard) ## Download Istio {#download} 1. Go to the [Istio release]({{< istio\_release\_url >}}) page to download the installation file for your OS, or [download and extract the latest release automatically](/docs/setup/additional-setup/download-istio-release) (Linux or macOS): {{< text bash >}} $ curl -L https://istio.io/downloadIstio | sh - {{< /text >}} 1. Move to the Istio package directory. For example, if the package is `istio-{{< istio\_full\_version >}}`: {{< text syntax=bash snip\_id=none >}} $ cd istio-{{< istio\_full\_version >}} {{< /text >}} The installation directory contains: - Sample applications in `samples/` - The [`istioctl`](/docs/reference/commands/istioctl) client binary in the `bin/` directory. 1. Add the `istioctl` client to your path (Linux or macOS): {{< text bash >}} $ export PATH=$PWD/bin:$PATH {{< /text >}} ## Install Istio {#install} For this guide, we use the `demo` [configuration profile](/docs/setup/additional-setup/config-profiles/). 
It is selected to have a good set of defaults for testing, but there are other profiles for production, performance testing or [OpenShift](/docs/setup/platform-setup/openshift/). Unlike [Istio Gateways](/docs/concepts/traffic-management/#gateways), creating [Kubernetes Gateways](https://gateway-api.sigs.k8s.io/api-types/gateway/) will, by default, also [deploy gateway proxy servers](/docs/tasks/traffic-management/ingress/gateway-api/#automated-deployment). Because they won't be used, we disable the deployment of the default Istio gateway services that are normally installed as part of the `demo` profile. 1. Install Istio using the `demo` profile, without any gateways: {{< text bash >}} $ istioctl install -f @samples/bookinfo/demo-profile-no-gateways.yaml@ -y ✔ Istio core installed ✔ Istiod installed ✔ Installation complete Made this installation the default for injection and validation. {{< /text >}} 1. Add a namespace label to instruct Istio to automatically inject Envoy sidecar proxies when you deploy your application later: {{< text bash >}} $ kubectl label namespace default istio-injection=enabled namespace/default labeled {{< /text >}} ## Install the Kubernetes Gateway API CRDs {#gateway-api} The Kubernetes Gateway API CRDs do not come installed by default on most Kubernetes clusters, so make sure they are installed before using the Gateway API. 1. Install the Gateway API CRDs, if they are not already present: {{< text bash >}} $ kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \ { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref={{< k8s\_gateway\_api\_version >}}" | kubectl apply -f -; } {{< /text >}} ## Deploy the sample application {#bookinfo} You have configured Istio to inject sidecar containers into any application you deploy in your `default` namespace. 1. 
Deploy the [`Bookinfo` sample application](/docs/examples/bookinfo/): {{< text bash >}} $ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ service/details created serviceaccount/bookinfo-details created deployment.apps/details-v1 created service/ratings created serviceaccount/bookinfo-ratings created deployment.apps/ratings-v1 created service/reviews created serviceaccount/bookinfo-reviews created deployment.apps/reviews-v1 created deployment.apps/reviews-v2 created deployment.apps/reviews-v3 created service/productpage created serviceaccount/bookinfo-productpage created deployment.apps/productpage-v1 created {{< /text >}} The application will start. As each pod becomes ready, the Istio sidecar will be deployed along with it. {{< text bash >}} $ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE details ClusterIP 10.0.0.212 9080/TCP 29s kubernetes ClusterIP 10.0.0.1 443/TCP 25m productpage ClusterIP 10.0.0.57 9080/TCP 28s ratings ClusterIP 10.0.0.33 9080/TCP 29s reviews ClusterIP 10.0.0.28 9080/TCP 29s {{< /text >}} and {{< text bash >}} $ kubectl get pods
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/getting-started/index.md
NAME READY STATUS RESTARTS AGE details-v1-558b8b4b76-2llld 2/2 Running 0 2m41s productpage-v1-6987489c74-lpkgl 2/2 Running 0 2m40s ratings-v1-7dc98c7588-vzftc 2/2 Running 0 2m41s reviews-v1-7f99cc4496-gdxfn 2/2 Running 0 2m41s reviews-v2-7d79d5bd5d-8zzqd 2/2 Running 0 2m41s reviews-v3-7dbcdcbc56-m8dph 2/2 Running 0 2m41s {{< /text >}} Note that the pods show `READY 2/2`, confirming they have their application container and the Istio sidecar container. 1. Validate that the app is running inside the cluster by checking for the page title in the response: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.\*</title>" <title>Simple Bookstore App</title> {{< /text >}} ## Open the application to outside traffic {#ip} The Bookinfo application is deployed, but not accessible from the outside. To make it accessible, you need to create an ingress gateway, which maps a path to a route at the edge of your mesh. 1. Create a [Kubernetes Gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/) for the Bookinfo application: {{< text syntax=bash snip\_id=deploy\_bookinfo\_gateway >}} $ kubectl apply -f @samples/bookinfo/gateway-api/bookinfo-gateway.yaml@ gateway.gateway.networking.k8s.io/bookinfo-gateway created httproute.gateway.networking.k8s.io/bookinfo created {{< /text >}} By default, Istio creates a `LoadBalancer` service for a gateway. As we will access this gateway by a tunnel, we don't need a load balancer.
If you want to learn about how load balancers are configured for external IP addresses, read the [ingress gateways](/docs/tasks/traffic-management/ingress/ingress-control/) documentation. 1. Change the service type to `ClusterIP` by annotating the gateway: {{< text syntax=bash snip\_id=annotate\_bookinfo\_gateway >}} $ kubectl annotate gateway bookinfo-gateway networking.istio.io/service-type=ClusterIP --namespace=default {{< /text >}} 1. To check the status of the gateway, run: {{< text bash >}} $ kubectl get gateway NAME CLASS ADDRESS PROGRAMMED AGE bookinfo-gateway istio bookinfo-gateway-istio.default.svc.cluster.local True 42s {{< /text >}} ## Access the application You will connect to the Bookinfo `productpage` service through the gateway you just provisioned. To access the gateway, you need to use the `kubectl port-forward` command: {{< text syntax=bash snip\_id=none >}} $ kubectl port-forward svc/bookinfo-gateway-istio 8080:80 {{< /text >}} Open your browser and navigate to `http://localhost:8080/productpage` to view the Bookinfo application. {{< image width="80%" link="./bookinfo-browser.png" caption="Bookinfo Application" >}} If you refresh the page, you should see the book reviews and ratings changing as the requests are distributed across the different versions of the `reviews` service. ## View the dashboard {#dashboard} Istio integrates with [several different telemetry applications](/docs/ops/integrations). These can help you gain an understanding of the structure of your service mesh, display the topology of the mesh, and analyze the health of your mesh. Use the following instructions to deploy the [Kiali](/docs/ops/integrations/kiali/) dashboard, along with [Prometheus](/docs/ops/integrations/prometheus/), [Grafana](/docs/ops/integrations/grafana), and [Jaeger](/docs/ops/integrations/jaeger/). 1. Install [Kiali and the other addons]({{< github\_tree >}}/samples/addons) and wait for them to be deployed. 
{{< text bash >}} $ kubectl apply -f @samples/addons/kiali.yaml@ $ kubectl rollout status deployment/kiali -n istio-system Waiting for deployment "kiali" rollout to finish: 0 of 1 updated replicas are available... deployment "kiali" successfully rolled out {{< /text >}} 1. Access the Kiali dashboard. {{< text bash >}} $ istioctl dashboard kiali {{< /text >}} 1. In the left navigation menu, select \_Graph\_ and in the \_Namespace\_ drop down, select \_default\_. {{< tip >}} {{< boilerplate trace-generation >}} {{< /tip >}} The Kiali dashboard shows an overview of your mesh with the relationships between the services in the `Bookinfo` sample application. It also provides filters to visualize the traffic flow. {{< image link="./kiali-example2.png" caption="Kiali Dashboard" >}} ##
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/getting-started/index.md
Next steps Congratulations on completing the evaluation installation! These tasks are a great place for beginners to further evaluate Istio's features using this `demo` installation: - [Request routing](/docs/tasks/traffic-management/request-routing/) - [Fault injection](/docs/tasks/traffic-management/fault-injection/) - [Traffic shifting](/docs/tasks/traffic-management/traffic-shifting/) - [Querying metrics](/docs/tasks/observability/metrics/querying-metrics/) - [Visualizing metrics](/docs/tasks/observability/metrics/using-istio-dashboard/) - [Accessing external services](/docs/tasks/traffic-management/egress/egress-control/) - [Visualizing your mesh](/docs/tasks/observability/kiali/) Before you customize Istio for production use, see these resources: - [Deployment models](/docs/ops/deployment/deployment-models/) - [Deployment best practices](/docs/ops/best-practices/deployment/) - [Pod requirements](/docs/ops/deployment/application-requirements/) - [General installation instructions](/docs/setup/) ## Join the Istio community We welcome you to ask questions and give us feedback by joining the [Istio community](/get-involved/). ## Uninstall To delete the `Bookinfo` sample application and its configuration, see [`Bookinfo` cleanup](/docs/examples/bookinfo/#cleanup). The Istio uninstall deletes the RBAC permissions and all resources hierarchically under the `istio-system` namespace. It is safe to ignore errors for non-existent resources because they may have been deleted hierarchically.
{{< text bash >}} $ kubectl delete -f @samples/addons@ $ istioctl uninstall -y --purge {{< /text >}} The `istio-system` namespace is not removed by default. If no longer needed, use the following command to remove it: {{< text bash >}} $ kubectl delete namespace istio-system {{< /text >}} The label to instruct Istio to automatically inject Envoy sidecar proxies is not removed by default. If no longer needed, use the following command to remove it: {{< text bash >}} $ kubectl label namespace default istio-injection- {{< /text >}} If you installed the Kubernetes Gateway API CRDs and would now like to remove them, run one of the following commands: - If you ran any tasks that required the \*\*experimental version\*\* of the CRDs: {{< text bash >}} $ kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd/experimental?ref={{< k8s\_gateway\_api\_version >}}" | kubectl delete -f - {{< /text >}} - Otherwise: {{< text bash >}} $ kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref={{< k8s\_gateway\_api\_version >}}" | kubectl delete -f - {{< /text >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/getting-started/index.md
{{< boilerplate untested-document >}} Follow these instructions to prepare an [Alibaba Cloud Kubernetes Container Service](https://www.alibabacloud.com/product/kubernetes) cluster for Istio. You can deploy a Kubernetes cluster to Alibaba Cloud quickly and easily in the `Container Service console`, which fully supports Istio. {{< tip >}} Alibaba Cloud offers a fully managed service mesh platform named Alibaba Cloud Service Mesh (ASM), which is fully compatible with Istio. Refer to [Alibaba Cloud Service Mesh](https://www.alibabacloud.com/help/doc-detail/147513.htm) for details and instructions. {{< /tip >}} ## Prerequisites 1. [Follow the Alibaba Cloud instructions](https://www.alibabacloud.com/help/doc-detail/95108.htm) to activate the following services: Container Service, Resource Orchestration Service (ROS), and RAM. ## Procedure 1. Log on to the `Container Service console`, and click \*\*Clusters\*\* under \*\*Kubernetes\*\* in the left-side navigation pane to enter the \*\*Cluster List\*\* page. 1. Click the \*\*Create Kubernetes Cluster\*\* button in the upper-right corner. 1. Enter the cluster name. The cluster name can be 1–63 characters long and can contain numbers, Chinese characters, English letters, and hyphens (-). 1. Select the \*\*region\*\* and \*\*zone\*\* in which the cluster resides. 1. Set the cluster network type. Kubernetes clusters currently support only the VPC network type. 1. Configure the node type; Pay-As-You-Go and Subscription types are supported. 1. Configure the master nodes. Select the generation, family, and type for the master nodes. 1. Configure the worker nodes. Select whether to create a worker node or add an existing ECS instance as the worker node. 1. Configure the logon mode, and configure the Pod Network CIDR and Service CIDR.
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/alicloud/index.md
This page was last updated August 28, 2019. {{< boilerplate untested-document >}} Follow these instructions to prepare MicroK8s for using Istio. {{< warning >}} Administrative privileges are required to run MicroK8s. {{< /warning >}} 1. Install the latest version of [MicroK8s](https://microk8s.io) using the command {{< text bash >}} $ sudo snap install microk8s --classic {{< /text >}} 1. Enable Istio with the following command: {{< text bash >}} $ microk8s.enable istio {{< /text >}} 1. When prompted, choose whether to enforce mutual TLS authentication among sidecars. If you have a mixed deployment with non-Istio and Istio enabled services or you're unsure, choose No. Please run the following command to check deployment progress: {{< text bash >}} $ watch microk8s.kubectl get all --all-namespaces {{< /text >}}
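After enabling the add-on, you can spot-check the control plane directly. A minimal sketch, assuming the add-on deploys Istio into the default `istio-system` namespace:

{{< text bash >}}
$ microk8s.kubectl get pods -n istio-system
{{< /text >}}

All pods should eventually report a `Running` status.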
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/MicroK8s/index.md
1. To run Istio with Docker Desktop, install a version which contains a [supported Kubernetes version](/docs/releases/supported-releases#support-status-of-istio-releases) ({{< supported\_kubernetes\_versions >}}). 1. If you want to run Istio under Docker Desktop's built-in Kubernetes, you need to increase Docker's memory limit under the \*Resources->Advanced\* pane of Docker Desktop's \*Settings...\*. Set the resources to at least 8.0 `GB` of memory and 4 `CPUs`. {{< image width="60%" link="./dockerprefs.png" caption="Docker Preferences" >}} {{< warning >}} Minimum memory requirements vary. 8 `GB` is sufficient to run Istio and Bookinfo. If you don't have enough memory allocated in Docker Desktop, the following errors could occur: - image pull failures - healthcheck timeout failures - kubectl failures on the host - general network instability of the hypervisor Additional Docker Desktop resources may be freed up using: {{< text bash >}} $ docker system prune {{< /text >}} {{< /warning >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/docker/index.md
k3d is a lightweight wrapper to run [k3s](https://github.com/rancher/k3s) (Rancher Lab’s minimal Kubernetes distribution) in docker. k3d makes it very easy to create single- and multi-node k3s clusters in docker, e.g. for local development on Kubernetes. ## Prerequisites - To use k3d, you will also need to [install docker](https://docs.docker.com/install/). - Install the latest version of [k3d](https://k3d.io/v5.4.7/#installation). - To interact with the Kubernetes cluster, install [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl). - (Optional) [Helm](https://helm.sh/docs/intro/install/) is the package manager for Kubernetes. ## Installation 1. Create a cluster and disable `Traefik` with the following command: {{< text bash >}} $ k3d cluster create --api-port 6550 -p '9080:80@loadbalancer' -p '9443:443@loadbalancer' --agents 2 --k3s-arg '--disable=traefik@server:\*' {{< /text >}} 1. To see the list of k3d clusters, use the following command: {{< text bash >}} $ k3d cluster list k3s-default {{< /text >}} 1. To list the local Kubernetes contexts, use the following command: {{< text bash >}} $ kubectl config get-contexts CURRENT NAME CLUSTER AUTHINFO NAMESPACE \* k3d-k3s-default k3d-k3s-default k3d-k3s-default {{< /text >}} {{< tip >}} `k3d-` is prefixed to the context and cluster names, for example: `k3d-k3s-default` {{< /tip >}} 1. If you run multiple clusters, you need to choose which cluster `kubectl` talks to. You can set a default cluster for `kubectl` by setting the current context in the [Kubernetes kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file. Additionally, you can run the following command to set the current context for `kubectl`: {{< text bash >}} $ kubectl config use-context k3d-k3s-default Switched to context "k3d-k3s-default". {{< /text >}} ## Set up Istio for k3d 1. Once you are done setting up a k3d cluster, you can proceed to [install Istio with Helm 3](/docs/setup/install/helm/) on it.
{{< text bash >}} $ kubectl create namespace istio-system $ helm install istio-base istio/base -n istio-system --wait $ helm install istiod istio/istiod -n istio-system --wait {{< /text >}} 1. (Optional) Install an ingress gateway: {{< text bash >}} $ helm install istio-ingressgateway istio/gateway -n istio-system --wait {{< /text >}} ## Set up Dashboard UI for k3d k3d does not have a built-in Dashboard UI like minikube. But you can still set up Dashboard, a web based Kubernetes UI, to view your cluster. Follow these instructions to set up Dashboard for k3d. 1. To deploy Dashboard, run the following command: {{< text bash >}} $ helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/ $ helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard {{< /text >}} 1. Verify that Dashboard is deployed and running. {{< text bash >}} $ kubectl get pod -n kubernetes-dashboard NAME READY STATUS RESTARTS AGE dashboard-metrics-scraper-8c47d4b5d-dd2ks 1/1 Running 0 25s kubernetes-dashboard-67bd8fc546-4xfmm 1/1 Running 0 25s {{< /text >}} 1. Create a `ServiceAccount` and `ClusterRoleBinding` to provide admin access to the newly created cluster. {{< text bash >}} $ kubectl create serviceaccount -n kubernetes-dashboard admin-user $ kubectl create clusterrolebinding -n kubernetes-dashboard admin-user --clusterrole cluster-admin --serviceaccount=kubernetes-dashboard:admin-user {{< /text >}} 1. To log in to your Dashboard, you need a Bearer Token. Use the following command to store the token in a variable. {{< text bash >}} $ token=$(kubectl -n kubernetes-dashboard create token admin-user) {{< /text >}} Display the token using the `echo` command and copy it to use for logging in to your Dashboard. {{< text bash >}} $ echo $token {{< /text >}} 1. 
You can access your Dashboard using the kubectl command-line tool by running the following command: {{< text bash >}} $ kubectl proxy Starting to serve on 127.0.0.1:8001 {{< /text >}} Click [Kubernetes Dashboard](http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard-web:web/proxy/) to view your deployments and services. {{< warning >}} Save your token somewhere; otherwise you will have to run step 4 every time you need a token to log in to your Dashboard. {{< /warning >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/k3d/index.md
## Uninstall 1. When you are done experimenting and you want to delete the existing cluster, use the following command: {{< text bash >}} $ k3d cluster delete k3s-default Deleting cluster "k3s-default" ... {{< /text >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/k3d/index.md
Follow these instructions to prepare minikube for Istio installation with sufficient resources to run Istio and some basic applications. ## Prerequisites - Administrative privileges are required to run minikube. - To enable the [Secret Discovery Service](https://www.envoyproxy.io/docs/envoy/latest/configuration/security/secret#sds-configuration) (SDS) for your mesh, you must add [extra configurations](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection) to your Kubernetes deployment. Refer to the [`api-server` reference docs](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/) for the most up-to-date flags. ## Installation steps 1. Install the latest version of [minikube](https://kubernetes.io/docs/tasks/tools/#minikube) and a [minikube hypervisor driver](https://minikube.sigs.k8s.io/docs/start/#install-a-hypervisor). 1. If you're not using the default driver, set your minikube hypervisor driver. For example, if you installed the KVM hypervisor, set the `driver` within the minikube configuration using the following command: {{< text bash >}} $ minikube config set driver kvm2 {{< /text >}} 1. Start minikube with 16384 `MB` of memory and 4 `CPUs`. This example uses Kubernetes version \*\*1.26.1\*\*. You can change the version to any Kubernetes version supported by Istio by altering the `--kubernetes-version` value: {{< text bash >}} $ minikube start --memory=16384 --cpus=4 --kubernetes-version=v1.26.1 {{< /text >}} Depending on the hypervisor you use and the platform on which the hypervisor is run, minimum memory requirements vary. 16384 `MB` is sufficient to run Istio and bookinfo. 
{{< tip >}} If you don't have enough RAM allocated to the minikube virtual machine, the following errors could occur: - image pull failures - healthcheck timeout failures - kubectl failures on the host - general network instability of the virtual machine and the host - complete lock-up of the virtual machine - host NMI watchdog reboots One effective way to monitor memory usage in minikube is to `ssh` into the minikube virtual machine and run the `top` command from that prompt: {{< text bash >}} $ minikube ssh {{< /text >}} {{< text bash >}} $ top GiB Mem : 12.4/15.7 {{< /text >}} This shows 12.4 GiB used of an available 15.7 GiB RAM within the virtual machine. This data was generated with the VMware Fusion hypervisor on a MacBook Pro 13" with 16 GiB RAM running Istio 1.2 with bookinfo installed. {{< /tip >}} 1. (Optional, recommended) If you want minikube to provide a load balancer for use by Istio, you can use the [minikube tunnel](https://minikube.sigs.k8s.io/docs/tasks/loadbalancer/#using-minikube-tunnel) feature. Run this command in a different terminal, because minikube tunnel blocks the terminal while it outputs diagnostic information about the network: {{< text bash >}} $ minikube tunnel {{< /text >}} {{< warning >}} Sometimes minikube does not clean up the tunnel network properly. To force a proper cleanup: {{< text bash >}} $ minikube tunnel --cleanup {{< /text >}} {{< /warning >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/minikube/index.md
## Bootstrapping Gardener To set up your own [Gardener](https://gardener.cloud) for your organization's Kubernetes-as-a-Service needs, follow the [documentation](https://github.com/gardener/gardener/blob/master/docs/README.md). For testing purposes, you can set up [Gardener on your laptop](https://github.com/gardener/gardener/blob/master/docs/development/getting_started_locally.md) by checking out the source code repository and simply running `make kind-up gardener-up` (the easiest developer way of checking out Gardener!). Alternatively, [`23 Technologies GmbH`](https://23technologies.cloud/) offers a fully-managed Gardener service that conveniently works with all supported cloud providers and comes with a free trial: [`Okeanos`](https://okeanos.dev/). Similarly, cloud providers such as [`STACKIT`](https://stackit.de/), [`B'Nerd`](https://bnerd.com/), [`MetalStack`](https://metalstack.cloud/), and many others run Gardener as their Kubernetes Engine. To learn more about the inception of this open source project, read [Gardener Project Update](https://kubernetes.io/blog/2019/12/02/gardener-project-update/) and [Gardener - The Kubernetes Botanist](https://kubernetes.io/blog/2018/05/17/gardener/) on [`kubernetes.io`](https://kubernetes.io/blog). [Gardener yourself a Shoot with Istio, custom Domains, and Certificates](https://gardener.cloud/docs/extensions/others/gardener-extension-shoot-cert-service/tutorials/tutorial-custom-domain-with-istio/) is a detailed tutorial for the end user of Gardener. ### Install and configure `kubectl` 1. If you already have the `kubectl` CLI, run `kubectl version --short` to check the version. You need a current version that at least matches the version of the Kubernetes cluster you want to order. If your `kubectl` is older, follow the next step to install a newer version. 1. [Install the `kubectl` CLI](https://kubernetes.io/docs/tasks/tools/). ### Access Gardener 1. Create a project in the Gardener dashboard. 
This will essentially create a Kubernetes namespace with the name `garden-`. 1. [Configure access to your Gardener project](https://gardener.cloud/docs/dashboard/usage/gardener-api/) using a kubeconfig. {{< tip >}} You can skip this step if you intend to create and interact with your cluster using the Gardener dashboard and the embedded webterminal; this step is only needed for programmatic access. {{< /tip >}} If you are not the Gardener Administrator already, you can create a technical user in the Gardener dashboard: go to the "Members" section and add a service account. You can then download the kubeconfig for your project. Make sure you `export KUBECONFIG=garden-my-project.yaml` in your shell. ![Download kubeconfig for Gardener](https://raw.githubusercontent.com/gardener/dashboard/master/docs/images/01-add-service-account.png "downloading the kubeconfig using a service account") ### Creating a Kubernetes cluster You can create your cluster using the `kubectl` cli by providing a cluster specification yaml file. You can find an example for GCP [here](https://github.com/gardener/gardener/blob/master/example/90-shoot.yaml). Make sure the namespace matches that of your project. 
Then apply the prepared so-called "shoot" cluster manifest with `kubectl`: {{< text bash >}} $ kubectl apply --filename my-cluster.yaml {{< /text >}} An easier alternative is to create the cluster following the cluster creation wizard in the Gardener dashboard: ![shoot creation](https://raw.githubusercontent.com/gardener/dashboard/master/docs/images/dashboard-demo.gif "shoot creation via the dashboard") ### Configure `kubectl` for your cluster You can now download the kubeconfig for your freshly created cluster in the Gardener dashboard or via the CLI as follows: {{< text bash >}} $ kubectl --namespace shoot--my-project--my-cluster get secret kubecfg --output jsonpath={.data.kubeconfig} | base64 --decode > my-cluster.yaml {{< /text >}} This kubeconfig file has full administrator access to your cluster. For any activity against the payload cluster, be sure to have `export KUBECONFIG=my-cluster.yaml` set. ## Cleaning up Use the Gardener dashboard to delete your cluster, or execute the following with `kubectl` pointing to your `garden-my-project.yaml` kubeconfig: {{< text bash >}} $ kubectl --kubeconfig garden-my-project.yaml --namespace garden--my-project annotate shoot my-cluster confirmation.garden.sapcloud.io/deletion=true $ kubectl --kubeconfig garden-my-project.yaml --namespace garden--my-project delete shoot my-cluster {{< /text >}}
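As a sketch of what such a cluster specification looks like (the project namespace, region, and machine type below are placeholder assumptions — the linked GCP example manifest is authoritative), a minimal shoot manifest might resemble:

{{< text yaml >}}
# Minimal sketch of a "shoot" cluster manifest; all values are placeholders.
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: my-cluster
  namespace: garden-my-project   # must match your Gardener project namespace
spec:
  cloudProfileName: gcp
  region: europe-west1
  provider:
    type: gcp
    workers:
      - name: worker-pool
        machine:
          type: n1-standard-4
        minimum: 2
        maximum: 4
{{< /text >}}

Saved as `my-cluster.yaml`, this is the file passed to `kubectl apply --filename my-cluster.yaml` above.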
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/gardener/index.md
Follow these instructions to prepare an Azure cluster for Istio. {{< tip >}} Azure offers a {{< gloss >}}managed control plane{{< /gloss >}} add-on for the Azure Kubernetes Service (AKS), which you can use instead of installing Istio manually. Please refer to [Deploy Istio-based service mesh add-on for Azure Kubernetes Service](https://learn.microsoft.com/azure/aks/istio-deploy-addon) for details and instructions. {{< /tip >}} You can deploy a Kubernetes cluster to Azure via [AKS](https://azure.microsoft.com/en-us/services/kubernetes-service/) or the [Cluster API provider for Azure (CAPZ) for self-managed Kubernetes or AKS](https://capz.sigs.k8s.io/), both of which fully support Istio. ## AKS You can create an AKS cluster via numerous means, such as [the az cli](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough), [the Azure portal](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal), [az cli with Bicep](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-bicep?tabs=azure-cli), or [Terraform](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-terraform?tabs=bash). For the `az` cli option, complete `az login` authentication or use the cloud shell, then run the following commands. 1. Determine the desired region name that supports AKS {{< text bash >}} $ az provider list --query "[?namespace=='Microsoft.ContainerService'].resourceTypes[] | [?resourceType=='managedClusters'].locations[]" -o tsv {{< /text >}} 1. Verify the supported Kubernetes versions for the desired region Replace `my location` with the desired region value from the above step, and then execute: {{< text bash >}} $ az aks get-versions --location "my location" --query "orchestrators[].orchestratorVersion" {{< /text >}} 1. 
Create the resource group and deploy the AKS cluster Replace `myResourceGroup` and `myAKSCluster` with your desired names, replace `my location` with the value from step 1, replace `1.28.3` with a supported version if that one is not available in the region, and then execute: {{< text bash >}} $ az group create --name myResourceGroup --location "my location" $ az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 3 --kubernetes-version 1.28.3 --generate-ssh-keys {{< /text >}} 1. Get the AKS `kubeconfig` credentials Replace `myResourceGroup` and `myAKSCluster` with the names from the previous step and execute: {{< text bash >}} $ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster {{< /text >}} ### Using Gateway API with Azure If you are using the Gateway API with AKS, you might also need to add the following configuration to the `Gateway` resource: {{< text yaml >}} infrastructure: annotations: service.beta.kubernetes.io/port_<port>_health-probe_protocol: tcp {{< /text >}} where `<port>` is the port number of your HTTP(S) listener. If you have multiple HTTP(S) listeners, you need to add an annotation for each listener. This annotation is required for Azure Load Balancer health checks to work when the `/` path does not respond with a 200. For example, if you are following the [Ingress Gateways](/docs/setup/getting-started) example using the Gateway API, you will need to deploy the following `Gateway` instead: {{< text bash >}} $ kubectl apply -f - {{< /text >}}
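As a concrete sketch of where the annotation sits (assuming a single HTTP listener on port 80; the gateway name and namespace are placeholders to adapt to your setup):

{{< text yaml >}}
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gateway            # placeholder name
  namespace: istio-ingress # placeholder namespace
spec:
  gatewayClassName: istio
  infrastructure:
    annotations:
      # health-probe annotation for the HTTP listener on port 80
      service.beta.kubernetes.io/port_80_health-probe_protocol: tcp
  listeners:
  - name: http
    port: 80
    protocol: HTTP
{{< /text >}}

With multiple listeners, repeat the annotation once per listener port under the same `infrastructure.annotations` map.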
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/azure/index.md
This page was last updated September 20, 2021. {{< boilerplate untested-document >}} Follow these instructions to prepare an Oracle Container Engine for Kubernetes (OKE) cluster for Istio. ## Create an OKE cluster To create an OKE cluster, you must either belong to the tenancy's Administrators group or to a group to which a policy grants the `CLUSTER_MANAGE` permission. The simplest way to [create an OKE cluster][CREATE] is to use the [Quick Create Workflow][QUICK] available in the [Oracle Cloud Infrastructure (OCI) console][CONSOLE]. Other methods include the [Custom Create Workflow][CUSTOM] and the [Oracle Cloud Infrastructure (OCI) API][API]. You can also create a cluster using the [OCI CLI][OCICLI] as in the following example: {{< text bash >}} $ oci ce cluster create \ --name <oke-cluster-name> \ --kubernetes-version <kubernetes-version> \ --compartment-id <compartment-ocid> \ --vcn-id <vcn-ocid> {{< /text >}}

| Parameter            | Expected value                                                              |
|----------------------|-----------------------------------------------------------------------------|
| `oke-cluster-name`   | A name to assign to your new OKE cluster                                     |
| `kubernetes-version` | A [supported version of Kubernetes][K8S] to deploy                           |
| `compartment-ocid`   | The [OCID][CONCEPTS] of an existing [compartment][CONCEPTS]                  |
| `vcn-ocid`           | The [OCID][CONCEPTS] of an existing [virtual cloud network][CONCEPTS] (VCN)  |

## Setting up local access to an OKE cluster [Install `kubectl`][KUBECTL] and the [OCI CLI][OCICLI] (`oci`) to access an OKE cluster from your local machine. Use the following OCI CLI command to create or update your `kubeconfig` file to include an `oci` command that dynamically generates and inserts a short-lived authentication token, which allows `kubectl` to access the cluster: {{< text bash >}} $ oci ce cluster create-kubeconfig \ --cluster-id <cluster-ocid> \ --file $HOME/.kube/config \ --token-version 2.0.0 \ --kube-endpoint [PRIVATE_ENDPOINT|PUBLIC_ENDPOINT] {{< /text >}} {{< tip >}} While an OKE cluster may have multiple endpoints exposed, only one can be targeted in the `kubeconfig` file. 
{{< /tip >}} The supported values for `kube-endpoint` are either `PUBLIC_ENDPOINT` or `PRIVATE_ENDPOINT`. You may also need to configure an SSH tunnel via a [bastion host][BASTION] to access clusters that only have a private endpoint. Replace `cluster-ocid` with the [OCID][CONCEPTS] of the target OKE cluster. ## Verify access to the cluster Use the `kubectl get nodes` command to verify that `kubectl` is able to connect to the cluster: {{< text bash >}} $ kubectl get nodes {{< /text >}} You can now install Istio using [`istioctl`](../../install/istioctl/), [Helm](../../install/helm/), or manually.

[CREATE]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengcreatingclusterusingoke.htm
[API]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengcreatingclusterusingoke_topic-Using_the_API.htm
[QUICK]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengcreatingclusterusingoke_topic-Using_the_Console_to_create_a_Quick_Cluster_with_Default_Settings.htm
[CUSTOM]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengcreatingclusterusingoke_topic-Using_the_Console_to_create_a_Custom_Cluster_with_Explicitly_Defined_Settings.htm
[OCICLI]: https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm
[K8S]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Concepts/contengaboutk8sversions.htm
[KUBECTL]: https://kubernetes.io/docs/tasks/tools/
[CONCEPTS]: https://docs.oracle.com/en-us/iaas/Content/GSG/Concepts/concepts.htm
[BASTION]: https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengdownloadkubeconfigfile.htm#localdownload
[CONSOLE]: https://docs.oracle.com/en-us/iaas/Content/GSG/Concepts/console.htm
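The `kubeconfig` entry produced by `create-kubeconfig` uses the Kubernetes exec credential plugin mechanism; as a sketch (the exact generated arguments may differ between OCI CLI versions, and the user entry name is a placeholder), the relevant `users` section looks roughly like:

{{< text yaml >}}
users:
- name: user-example            # placeholder entry name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: oci
      args:
      - ce
      - cluster
      - generate-token
      - --cluster-id
      - <cluster-ocid>
{{< /text >}}

Each time `kubectl` needs credentials, it runs this `oci` command to mint a fresh short-lived token, so the file itself contains no long-lived secret.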
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/oci/index.md
[kind](https://kind.sigs.k8s.io/) is a tool for running local Kubernetes clusters using Docker container `nodes`. kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI. Follow these instructions to prepare a kind cluster for Istio installation. ## Prerequisites - Please use the latest Go version. - To use kind, you will also need to [install docker](https://docs.docker.com/install/). - Install the latest version of [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). - Increase Docker's [memory limit](/docs/setup/platform-setup/docker/). ## Installation steps 1. Create a cluster with the following command: {{< text bash >}} $ kind create cluster --name istio-testing {{< /text >}} `--name` is used to assign a specific name to the cluster. By default, the cluster will be given the name "kind". 1. To see the list of kind clusters, use the following command: {{< text bash >}} $ kind get clusters istio-testing {{< /text >}} 1. To list the local Kubernetes contexts, use the following command. {{< text bash >}} $ kubectl config get-contexts CURRENT NAME CLUSTER AUTHINFO NAMESPACE * kind-istio-testing kind-istio-testing kind-istio-testing minikube minikube minikube {{< /text >}} {{< tip >}} `kind` is prefixed to the context and cluster names, for example: `kind-istio-testing` {{< /tip >}} 1. If you run multiple clusters, you need to choose which cluster `kubectl` talks to. You can set a default cluster for `kubectl` by setting the current context in the [Kubernetes kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) file. Alternatively, you can run the following command to set the current context for `kubectl`: {{< text bash >}} $ kubectl config use-context kind-istio-testing Switched to context "kind-istio-testing". {{< /text >}} Once you are done setting up a kind cluster, you can proceed to [install Istio](/docs/setup/additional-setup/download-istio-release/) on it. 1. 
When you are done experimenting and you want to delete the existing cluster, use the following command: {{< text bash >}} $ kind delete cluster --name istio-testing Deleting cluster "istio-testing" ... {{< /text >}} ## Setup LoadBalancer for kind kind has no built-in way to provide IP addresses to your `LoadBalancer` service types. To ensure IP addresses are assigned to `Gateway` services, please consult [this guide](https://kind.sigs.k8s.io/docs/user/loadbalancer/) for more information. ## Setup Dashboard UI for kind kind does not have a built-in Dashboard UI like minikube. But you can still set up Dashboard, a web-based Kubernetes UI, to view your cluster. Follow these instructions to set up Dashboard for kind. 1. To deploy Dashboard, run the following command: {{< text bash >}} $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml {{< /text >}} 1. Verify that Dashboard is deployed and running. {{< text bash >}} $ kubectl get pod -n kubernetes-dashboard NAME READY STATUS RESTARTS AGE dashboard-metrics-scraper-76585494d8-zdb66 1/1 Running 0 39s kubernetes-dashboard-b7ffbc8cb-zl8zg 1/1 Running 0 39s {{< /text >}} 1. Create a `ServiceAccount` and `ClusterRoleBinding` to provide admin access to the newly created cluster. {{< text bash >}} $ kubectl create serviceaccount -n kubernetes-dashboard admin-user $ kubectl create clusterrolebinding -n kubernetes-dashboard admin-user --clusterrole cluster-admin --serviceaccount=kubernetes-dashboard:admin-user {{< /text >}} 1. To log in to your Dashboard, you need a Bearer Token. Use the following command to store the token in a variable. {{< text bash >}} $ token=$(kubectl -n kubernetes-dashboard create token admin-user) {{< /text >}} Display the token using the `echo` command and copy it to use for logging in to your Dashboard. {{< text bash >}} $ echo $token {{< /text >}} 1. 
You can access your Dashboard using the kubectl command-line tool by running the following command: {{< text bash >}} $ kubectl proxy Starting to serve on 127.0.0.1:8001 {{< /text >}} Click [Kubernetes Dashboard](http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/) to view your deployments and services. {{< warning >}} You have to save your token somewhere, otherwise you have to run step number 4 every time you need a token to log in to your Dashboard. {{< /warning >}}
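The load balancer guide linked in the LoadBalancer section above is commonly implemented with MetalLB; as a sketch (the address range below is an assumption — pick a free range from your kind Docker network, and the pool/advertisement names are placeholders), the pool configuration looks like:

{{< text yaml >}}
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.200-172.18.255.250  # placeholder: a free range in the kind Docker network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kind-l2
  namespace: metallb-system
{{< /text >}}

Once MetalLB is installed and this pool is applied, `LoadBalancer` Services (including Istio gateways) receive an external IP from the configured range.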
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/kind/index.md
Follow these instructions to prepare a cluster for Istio using the [IBM Cloud Kubernetes Service](https://cloud.ibm.com/docs/containers?topic=containers-getting-started). {{< tip >}} IBM offers a {{< gloss >}}managed control plane{{< /gloss >}} add-on for the IBM Cloud Kubernetes Service, which you can use instead of installing Istio manually. Refer to [Istio on IBM Cloud Kubernetes Service](https://cloud.ibm.com/docs/containers?topic=containers-istio) for details and instructions. {{< /tip >}} To prepare a cluster before manually installing Istio, proceed as follows: 1. [Install the IBM Cloud CLI, the IBM Cloud Kubernetes Service plug-in, and the Kubernetes CLI](https://cloud.ibm.com/docs/containers?topic=containers-cs_cli_install). 1. Create a standard Kubernetes cluster using the following command. Replace `<cluster_name>` with the name you want to use for your cluster and `<zone>` with the name of an available zone. {{< tip >}} You can display your available zones by running `ibmcloud ks zones --provider classic`. The IBM Cloud Kubernetes Service [Locations Reference Guide](https://cloud.ibm.com/docs/containers?topic=containers-regions-and-zones) describes the available zones and how to specify them. {{< /tip >}} {{< text bash >}} $ ibmcloud ks cluster create classic --zone <zone> --machine-type b3c.4x16 \ --workers 3 --name <cluster_name> {{< /text >}} {{< tip >}} If you already have a private or a public VLAN, you must specify them in the above command using the `--private-vlan` and `--public-vlan` options. Otherwise, they will be created for you automatically. You can view your available VLANs by running `ibmcloud ks vlans --zone <zone>`. {{< /tip >}} 1. Run the following command to download your cluster configuration. {{< text bash >}} $ ibmcloud ks cluster config --cluster <cluster_name> {{< /text >}} {{< warning >}} Make sure to use the `kubectl` CLI version that matches the Kubernetes version of your cluster. {{< /warning >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/ibm/index.md
Follow these instructions to prepare a GKE cluster for Istio. 1. Create a new cluster. {{< text bash >}} $ export PROJECT_ID=`gcloud config get-value project` && \ export M_TYPE=n1-standard-2 && \ export ZONE=us-west2-a && \ export CLUSTER_NAME=${PROJECT_ID}-${RANDOM} && \ gcloud services enable container.googleapis.com && \ gcloud container clusters create $CLUSTER_NAME \ --cluster-version latest \ --machine-type=$M_TYPE \ --num-nodes 4 \ --zone $ZONE \ --project $PROJECT_ID {{< /text >}} {{< tip >}} The default installation of Istio requires nodes with >1 vCPU. If you are installing with the [demo configuration profile](/docs/setup/additional-setup/config-profiles/), you can remove the `--machine-type` argument to use the smaller `n1-standard-1` machine size instead. {{< /tip >}} {{< warning >}} To use the Istio CNI feature on GKE Standard, please check the [CNI installation guide](/docs/setup/additional-setup/cni/#prerequisites) for prerequisite cluster configuration steps. Since the CNI node agent requires the SYS_ADMIN capability, it is not available on GKE Autopilot. Instead, use the istio-init container. {{< /warning >}} {{< warning >}} **For private GKE clusters** An automatically created firewall rule does not open port 15017. This port is needed by the istiod discovery validation webhook. To review this firewall rule for master access: {{< text bash >}} $ gcloud compute firewall-rules list --filter="name~gke-${CLUSTER_NAME}-[0-9a-z]*-master" {{< /text >}} To replace the existing rule and allow master access: {{< text bash >}} $ gcloud compute firewall-rules update <firewall-rule-name> --allow tcp:10250,tcp:443,tcp:15017 {{< /text >}} {{< /warning >}} 1. Retrieve your credentials for `kubectl`. {{< text bash >}} $ gcloud container clusters get-credentials $CLUSTER_NAME \ --zone $ZONE \ --project $PROJECT_ID {{< /text >}} 1. Grant cluster administrator (admin) permissions to the current user. 
To create the necessary RBAC rules for Istio, the current user requires admin permissions. {{< text bash >}} $ kubectl create clusterrolebinding cluster-admin-binding \ --clusterrole=cluster-admin \ --user=$(gcloud config get-value core/account) {{< /text >}} ## Multi-cluster communication In some cases, a firewall rule must be explicitly created to allow cross-cluster traffic. {{< warning >}} The following instructions will allow communication between *all* clusters in your project. Adjust the commands as needed. {{< /warning >}} 1. Gather information about your clusters' network. {{< text bash >}} $ function join_by { local IFS="$1"; shift; echo "$*"; } $ ALL_CLUSTER_CIDRS=$(gcloud --project $PROJECT_ID container clusters list --format='value(clusterIpv4Cidr)' | sort | uniq) $ ALL_CLUSTER_CIDRS=$(join_by , $(echo "${ALL_CLUSTER_CIDRS}")) $ ALL_CLUSTER_NETTAGS=$(gcloud --project $PROJECT_ID compute instances list --format='value(tags.items.[0])' | sort | uniq) $ ALL_CLUSTER_NETTAGS=$(join_by , $(echo "${ALL_CLUSTER_NETTAGS}")) {{< /text >}} 1. Create the firewall rule. {{< text bash >}} $ gcloud compute firewall-rules create istio-multicluster-pods \ --allow=tcp,udp,icmp,esp,ah,sctp \ --direction=INGRESS \ --priority=900 \ --source-ranges="${ALL_CLUSTER_CIDRS}" \ --target-tags="${ALL_CLUSTER_NETTAGS}" --quiet {{< /text >}}
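The `join_by` helper used above is plain shell: it takes the delimiter as its first argument and joins the remaining arguments via `IFS` expansion of `"$*"`, which is how the comma-separated CIDR and network-tag lists are built. For example:

```shell
# join_by joins all arguments after the first, using the first argument
# (here a comma) as the delimiter via IFS expansion of "$*".
function join_by { local IFS="$1"; shift; echo "$*"; }

join_by , 10.0.0.0/14 10.4.0.0/19   # prints 10.0.0.0/14,10.4.0.0/19
```

This is why the gcloud output is first normalized to one value per line (`sort | uniq`) before being word-split and joined.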
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/gke/index.md
Follow these instructions to prepare a cluster for Istio using the [Huawei Cloud Container Engine](https://www.huaweicloud.com/intl/product/cce.html). You can deploy a Kubernetes cluster to Huawei Cloud quickly and easily in the `Cloud Container Engine Console`, which fully supports Istio. {{< tip >}} Huawei offers a {{< gloss >}}managed control plane{{< /gloss >}} add-on for the Huawei Cloud Container Engine, which you can use instead of installing Istio manually. Refer to [Huawei Application Service Mesh](https://support.huaweicloud.com/asm/index.html) for details and instructions. {{< /tip >}} Following the [Huawei Cloud Instructions](https://support.huaweicloud.com/en-us/qs-cce/cce_qs_0008.html) to prepare a cluster before manually installing Istio, proceed as follows: 1. Log in to the CCE console. Choose **Dashboard** > **Buy Cluster** to open the **Buy Hybrid Cluster** page. An alternative way to open that page is to choose **Resource Management** > **Clusters** in the navigation pane and click **Buy** next to **Hybrid Cluster**. 1. On the **Configure Cluster** page, configure the cluster parameters. In this example, most parameters retain their default values. After the cluster configuration is complete, click Next: **Create Node** to go to the node creation page. {{< tip >}} Istio releases have requirements on the Kubernetes version; select the version according to Istio's [support policy](/docs/releases/supported-releases#support-status-of-istio-releases). {{< /tip >}} The image below shows the GUI where you create and configure the cluster: {{< image link="./create-cluster.png" caption="Configure Cluster" >}} 1. On the node creation page, configure the following parameters. {{< tip >}} Istio adds some additional resource consumption; from our experience, reserve at least 4 vCPUs and 8 GB of memory to begin playing. 
{{< /tip >}} The image below shows the GUI where you create and configure the node: {{< image link="./create-node.png" caption="Configure Node" >}} 1. [Configure kubectl](https://support.huaweicloud.com/intl/en-us/cce_faq/cce_faq_00041.html) 1. Now you can install Istio on the CCE cluster according to the [install guide](/docs/setup/install). 1. Configure [ELB](https://support.huaweicloud.com/intl/productdesc-elb/en-us_topic_0015479966.html) to expose the Istio ingress gateway if needed. - [Create an Elastic Load Balancer](https://console.huaweicloud.com/vpc/?region=ap-southeast-1#/elbs/createEnhanceElb) - Bind the ELB instance to the `istio-ingressgateway` service Set the ELB instance ID and `loadBalancerIP` on `istio-ingressgateway`. {{< text bash >}} $ kubectl apply -f - {{< /text >}} Start playing with Istio by trying out the various [tasks](/docs/tasks).
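The manifest applied in the binding step above attaches the gateway Service to the ELB instance; a sketch of its shape (the annotation key, ports, and all placeholder values are assumptions based on CCE's ELB integration — confirm them against the Huawei Cloud documentation for your CCE version):

{{< text yaml >}}
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    kubernetes.io/elb.id: <elb-instance-id>   # placeholder: your ELB instance ID
  labels:
    app: istio-ingressgateway
spec:
  type: LoadBalancer
  loadBalancerIP: <elb-public-ip>             # placeholder: the ELB's public IP
  selector:
    app: istio-ingressgateway
  ports:
  - name: http2
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
{{< /text >}}

After applying it, the gateway should be reachable at the ELB's public IP on the listed ports.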
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/huaweicloud/index.md
This page was last updated March 9, 2021. {{< boilerplate untested-document >}} Follow these instructions to prepare the [KubeSphere Container Platform](https://github.com/kubesphere/kubesphere) for Istio. You can download KubeSphere to easily install a Kubernetes cluster on your Linux machines. {{< tip >}} KubeSphere provides [All-in-One](https://kubesphere.io/docs/installation/all-in-one/) and [Multi-Node](https://kubesphere.io/docs/installation/multi-node/) installations. This enables quick setup and manages Kubernetes and Istio in a unified web console. This tutorial will walk you through the All-in-One installation. See [Multi-node Installation](https://kubesphere.io/docs/installation/multi-node/) for further information. {{< /tip >}} ## Prerequisites A Linux machine that is either a virtual machine or bare metal. This machine requires at a minimum: - Hardware: - CPU: at least 2 cores - Memory: at least 4 `GB` - Operating Systems: - CentOS 7.4 ~ 7.7 (`64-bit`) - Ubuntu 16.04/18.04 LTS (`64-bit`) - RHEL 7.4 (`64-bit`) - Debian Stretch 9.5 (`64-bit`) {{< tip >}} Ensure your firewall meets the [port requirements](https://kubesphere.io/docs/installation/port-firewall/). If this is not immediately feasible, you may evaluate Istio and KubeSphere by disabling the firewall as documented in your distribution. {{< /tip >}} ## Provisioning a Kubernetes cluster 1. Download KubeSphere to your Linux machine and move to the KubeSphere directory. For example, if the created directory is `kubesphere-all-v2.1.1`: {{< text bash >}} $ curl -L https://kubesphere.io/download/stable/latest > installer.tar.gz $ tar -xzf installer.tar.gz $ cd kubesphere-all-v2.1.1/scripts {{< /text >}} 1. Execute the installation script; it will create a standard Kubernetes cluster. Select the **"1) All-in-one"** option when prompted: {{< text bash >}} $ ./install.sh {{< /text >}} 1. Installation may take 15 ~ 20 minutes. Wait until all pods are running. 
Access the console using the account information obtained from the installation logs: {{< text plain >}} ##################################################### ### Welcome to KubeSphere! ### ##################################################### Console: http://192.168.0.8:30880 Account: admin Password: It will be generated by KubeSphere Installer {{< /text >}} {{< tip >}} At the same time, Kubernetes has been installed into your environment. {{< /tip >}} ![KubeSphere Console](images/kubesphere-console.png) ## Enable installing Istio on Kubernetes KubeSphere will install Istio within Kubernetes. Now reference [Enable Service Mesh](https://kubesphere.io/docs/pluggable-components/service-mesh/) to enable Istio.
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/kubesphere/index.md
{{< tip >}}
No special configuration is required to run Istio on Kubernetes clusters version 1.22 or newer. For prior Kubernetes versions, you will need to perform the following steps.
{{< /tip >}}

If you wish to run the Istio [Secret Discovery Service](https://www.envoyproxy.io/docs/envoy/latest/configuration/security/secret#sds-configuration) (SDS) for your mesh on kops-managed clusters, you must add [extra configuration](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection) to enable service account token projection volumes in the API server.

1. Open the configuration file:

    {{< text bash >}}
    $ kops edit cluster $YOURCLUSTER
    {{< /text >}}

1. Add the following to the configuration file:

    {{< text yaml >}}
    kubeAPIServer:
      apiAudiences:
      - api
      - istio-ca
      serviceAccountIssuer: kubernetes.default.svc
    {{< /text >}}

1. Perform the update:

    {{< text bash >}}
    $ kops update cluster
    $ kops update cluster --yes
    {{< /text >}}

1. Launch the rolling update:

    {{< text bash >}}
    $ kops rolling-update cluster
    $ kops rolling-update cluster --yes
    {{< /text >}}
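Before running `kops edit`, you can sanity-check the `kubeAPIServer` fragment locally. This is a hedged sketch, not a kops tool: it writes the same YAML shown above to a temporary file (`/tmp/kops-apiserver.yaml` is an arbitrary path) and greps for the required fields.

```shell
# Write the kubeAPIServer fragment to a scratch file and confirm the
# istio-ca audience and issuer are present before pasting it into kops.
cat > /tmp/kops-apiserver.yaml <<'EOF'
kubeAPIServer:
  apiAudiences:
  - api
  - istio-ca
  serviceAccountIssuer: kubernetes.default.svc
EOF

grep -q 'istio-ca' /tmp/kops-apiserver.yaml && echo "istio-ca audience present"
grep -q 'serviceAccountIssuer' /tmp/kops-apiserver.yaml && echo "issuer present"
```

A missing `istio-ca` audience is a common cause of SDS token validation failures, so checking the fragment before the rolling update is cheap insurance.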
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/kops/index.md
## Prerequisites

Follow these instructions to prepare a [Tencent Kubernetes Engine](https://intl.cloud.tencent.com/products/tke) or [Elastic Kubernetes Service](https://intl.cloud.tencent.com/product/eks) cluster for Istio. You can deploy a Kubernetes cluster to Tencent Cloud via [Tencent Kubernetes Engine](https://intl.cloud.tencent.com/document/product/457/40029) or [Elastic Kubernetes Service](https://intl.cloud.tencent.com/document/product/457/34048), both of which fully support Istio.

{{< image link="./tke.png" caption="Create Cluster" >}}

## Procedure

After creating a Tencent Kubernetes Engine or Elastic Kubernetes Service cluster, you can quickly start to deploy and use Istio with [Tencent Cloud Mesh](https://cloud.tencent.com/product/tcm):

{{< image link="./tcm.png" caption="Create Tencent Cloud Mesh" >}}

1. Log on to the Container Service console, and click **Service Mesh** in the left-side navigation pane to enter the **Service Mesh** page.
1. Click the **Create** button in the upper-left corner.
1. Enter the mesh name.

    {{< tip >}}
    The mesh name can be 1–60 characters long and can contain numbers, Chinese characters, English letters, and hyphens (-).
    {{< /tip >}}

1. Select the **Region** and **Zone** in which the cluster resides.
1. Choose the Istio version.
1. Choose the service mesh mode: `Managed Mesh` or `Stand-Alone Mesh`.

    {{< tip >}}
    Tencent Cloud Mesh supports **Stand-Alone Mesh** (Istiod runs in the user cluster and is managed by users) and **Managed Mesh** (Istiod is managed by the Tencent Cloud Mesh team).
    {{< /tip >}}

1. Configure the egress traffic policy: `Register Only` or `Allow Any`.
1. Choose the related **Tencent Kubernetes Engine** or **Elastic Kubernetes Service** cluster.
1. Choose to open sidecar injection in the selected namespaces.
1.
Configure the external IP address blocks that bypass the sidecar; requests to these addresses are sent directly and cannot use Istio traffic management, observability, or other features.
1. Choose whether to enable **Sidecar Readiness Guarantee**. If enabled, app containers are created only after the sidecar is running.
1. Configure the ingress gateway and egress gateway.

    {{< image link="./tps.png" caption="Configure Observability" >}}

1. Configure the observability of metrics, tracing, and logging.

    {{< tip >}}
    Besides the default Cloud Monitor services, you can choose to enable advanced external services like [Managed Service for Prometheus](https://intl.cloud.tencent.com/document/product/457/38824?has_map=1) and the [Cloud Log Service](https://intl.cloud.tencent.com/product/cls).
    {{< /tip >}}

After finishing these steps, you can confirm to create the mesh and start using Istio in Tencent Cloud Mesh.
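If you script mesh creation, you can pre-validate a candidate mesh name against the documented rule (1–60 characters; numbers, letters, hyphens; Chinese characters are also allowed but omitted from this simplified check). This is an illustrative sketch, not a Tencent Cloud tool, and `bookinfo-mesh` is a made-up name.

```shell
# Simplified validation of the Tencent Cloud Mesh name rule.
# Chinese characters are allowed by the platform but not matched here.
name="bookinfo-mesh"
if printf '%s' "$name" | grep -Eq '^[A-Za-z0-9-]{1,60}$'; then
  echo "mesh name '$name' looks valid"
else
  echo "mesh name '$name' violates the naming rule"
fi
```

Validating locally avoids a round trip to the console just to discover an invalid name.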
https://github.com/istio/istio.io/blob/master//content/en/docs/setup/platform-setup/tencent-cloud-mesh/index.md
Follow this guide to upgrade and configure an ambient mode installation using [Helm](https://helm.sh/docs/). This guide assumes you have already performed an [ambient mode installation with Helm](/docs/ambient/install/helm/) with a previous version of Istio.

{{< warning >}}
In contrast to sidecar mode, ambient mode supports moving application pods to an upgraded ztunnel proxy without a mandatory restart or reschedule of running application pods. However, upgrading ztunnel **will** cause all long-lived TCP connections on the upgraded node to reset, and Istio does not currently support canary upgrades of ztunnel, **even with the use of revisions**. Node cordoning and blue/green node pools are recommended to limit the blast radius of resets on application traffic during production upgrades. See your Kubernetes provider documentation for details.
{{< /warning >}}

## Understanding ambient mode upgrades

All Istio upgrades involve upgrading the control plane, data plane, and Istio CRDs. Because the ambient data plane is split across [two components](/docs/ambient/architecture/data-plane), the ztunnel and gateways (which includes waypoints), upgrades involve separate steps for these components. Upgrading the control plane and CRDs is covered here in brief, but is essentially identical to [the process for upgrading these components in sidecar mode](/docs/setup/upgrade/canary/).

Like sidecar mode, gateways can make use of [revision tags](/docs/setup/upgrade/canary/#stable-revision-labels) to allow fine-grained control over {{< gloss >}}gateway{{< /gloss >}} upgrades, including waypoints, with simple controls for rolling back to a previous version of the Istio control plane at any point.

However, unlike sidecar mode, the ztunnel runs as a DaemonSet (a per-node proxy), meaning that ztunnel upgrades affect, at minimum, an entire node at a time. While this may be acceptable in many cases, applications with long-lived TCP connections may be disrupted.
In such cases, we recommend using node cordoning and draining before upgrading the ztunnel for a given node. For the sake of simplicity, this document will demonstrate in-place upgrades of the ztunnel, which may involve a short downtime.

## Prerequisites

### Prepare for the upgrade

Before upgrading Istio, we recommend downloading the new version of istioctl, and running `istioctl x precheck` to make sure the upgrade is compatible with your environment. The output should look something like this:

{{< text syntax=bash snip_id=istioctl_precheck >}}
$ istioctl x precheck
✔ No issues found when checking the cluster. Istio is safe to install or upgrade!
  To get started, check out
{{< /text >}}

Now, update the Helm repository:

{{< text syntax=bash snip_id=update_helm >}}
$ helm repo update istio
{{< /text >}}

{{< tabset category-name="upgrade-prerequisites" >}}

{{< tab name="In-place upgrade" category-value="in-place" >}}

No additional preparations are needed for in-place upgrades; proceed to the next step.

{{< /tab >}}

{{< tab name="Revisioned upgrade" category-value="revisions" >}}

### Organize your tags and revisions

In order to upgrade a mesh in ambient mode in a controlled manner, we recommend that your gateways and namespaces use the `istio.io/rev` label to specify a revision tag to control which gateway and control plane versions will be used to manage traffic for your workloads. We recommend dividing your production cluster into multiple tags to organize your upgrade. All members of a given tag will be upgraded simultaneously, so it is wise to begin your upgrade with your lowest risk applications. We do not recommend referencing revisions directly via labels for upgrades, as this process can easily result in the accidental upgrade of a large number of proxies, and is difficult to segment. To see what tags and revisions you are using in your cluster, see the section on upgrading tags.
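For example, a namespace that should be managed through a tag rather than a raw revision carries the `istio.io/rev` label with the tag name as its value. A minimal sketch (the namespace name `prod-tier-1` and tag `stable` are illustrative, not defaults):

```yaml
# Namespace pointed at a revision tag; moving the "stable" tag later
# upgrades every namespace labeled like this without relabeling.
apiVersion: v1
kind: Namespace
metadata:
  name: prod-tier-1
  labels:
    istio.io/rev: stable
```

The same label can be applied in place with `kubectl label namespace prod-tier-1 istio.io/rev=stable --overwrite`.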
### Choose a revision name

Revisions identify unique instances of the Istio control plane, allowing you to run multiple distinct versions of the control plane simultaneously in a single mesh. It
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/upgrade/helm/index.md
is recommended that revisions stay immutable; that is, once a control plane is installed with a particular revision name, the installation should not be modified, and the revision name should not be reused. Tags, on the other hand, are mutable pointers to revisions. This enables a cluster operator to effect data plane upgrades without the need to adjust any workload labels, simply by moving a tag from one revision to the next. All data planes will connect only to one control plane, specified by the `istio.io/rev` label (pointing to either a revision or tag), or by the default revision if no `istio.io/rev` label is present. Upgrading a data plane consists of simply changing the control plane it points to, by modifying labels or editing tags.

Because revisions are intended to be immutable, we recommend choosing a revision name that corresponds with the version of Istio you are installing, such as `1-22-1`. In addition to choosing a new revision name, you should note your current revision name.
You can find this by running:

{{< text syntax=bash snip_id=list_revisions >}}
$ kubectl get mutatingwebhookconfigurations -l 'istio.io/rev,!istio.io/tag' -L istio\.io/rev
$ # Store your revision and new revision in variables:
$ export REVISION=istio-1-22-1
$ export OLD_REVISION=istio-1-21-2
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

## Upgrade the control plane

### Base components

{{< boilerplate crd-upgrade-123 >}}

The cluster-wide Custom Resource Definitions (CRDs) must be upgraded prior to the deployment of a new version of the control plane:

{{< text syntax=bash snip_id=upgrade_crds >}}
$ helm upgrade istio-base istio/base -n istio-system
{{< /text >}}

### istiod control plane

The [Istiod](/docs/ops/deployment/architecture/#istiod) control plane manages and configures the proxies that route traffic within the mesh. The following command will install a new instance of the control plane alongside the current one, but will not introduce any new gateway proxies or waypoints, or take over control of existing ones.

If you have customized your istiod installation, you can reuse the `values.yaml` file from previous upgrades or installs to keep your control planes consistent.

{{< tabset category-name="upgrade-control-plane" >}}

{{< tab name="In-place upgrade" category-value="in-place" >}}

{{< text syntax=bash snip_id=upgrade_istiod_inplace >}}
$ helm upgrade istiod istio/istiod -n istio-system --wait
{{< /text >}}

{{< /tab >}}

{{< tab name="Revisioned upgrade" category-value="revisions" >}}

{{< text syntax=bash snip_id=upgrade_istiod_revisioned >}}
$ helm install istiod-"$REVISION" istio/istiod -n istio-system --set revision="$REVISION" --set profile=ambient --wait
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

### CNI node agent

The Istio CNI node agent is responsible for detecting pods added to the ambient mesh, informing ztunnel that proxy ports should be established within added pods, and configuring traffic redirection within the pod network namespace.
It is not part of the data plane or control plane.

The CNI at version 1.x is compatible with the control plane at version 1.x+1 and 1.x. This means the control plane must be upgraded before Istio CNI, as long as their version difference is within one minor version.

{{< warning >}}
Istio does not currently support canary upgrades of istio-cni, **even with the use of revisions**. If this is a significant disruption concern for your environment, or stricter blast radius controls are desired for CNI upgrades, it is recommended to defer `istio-cni` upgrades until the nodes themselves are drained and upgraded, or leverage node taints and manually orchestrate the upgrade for this component. The
Istio CNI node agent is a [system-node-critical](https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) DaemonSet. It **must** be running on each node for Istio's ambient traffic security and operational guarantees to be upheld on that node. By default, the Istio CNI node agent DaemonSet supports safe in-place upgrades, and while being upgraded or restarted will prevent new pods from being started on that node until an instance of the agent is available on the node to handle them, in order to prevent unsecured traffic leakage. Existing pods that were already successfully added to the ambient mesh prior to the upgrade will continue to operate under Istio's traffic security requirements during the upgrade.
{{< /warning >}}

{{< text syntax=bash snip_id=upgrade_cni >}}
$ helm upgrade istio-cni istio/cni -n istio-system
{{< /text >}}

## Upgrade the data plane

### ztunnel DaemonSet

The {{< gloss >}}ztunnel{{< /gloss >}} DaemonSet is the node proxy component. The ztunnel at version 1.x is compatible with the control plane at version 1.x+1 and 1.x. This means the control plane must be upgraded before ztunnel, as long as their version difference is within one minor version.

If you have previously customized your ztunnel installation, you can reuse the `values.yaml` file from previous upgrades or installs to keep your {{< gloss >}}data plane{{< /gloss >}} consistent.

{{< warning >}}
Upgrading ztunnel in-place will briefly disrupt all ambient mesh traffic on the node, **even with the use of revisions**.
In practice the disruption period is a very small window, primarily affecting long-running connections. Node cordoning and blue/green node pools are recommended to mitigate blast radius risk during production upgrades. See your Kubernetes provider documentation for details.
{{< /warning >}}

{{< tabset category-name="upgrade-ztunnel" >}}

{{< tab name="In-place upgrade" category-value="in-place" >}}

{{< text syntax=bash snip_id=upgrade_ztunnel_inplace >}}
$ helm upgrade ztunnel istio/ztunnel -n istio-system --wait
{{< /text >}}

{{< /tab >}}

{{< tab name="Revisioned upgrade" category-value="revisions" >}}

{{< text syntax=bash snip_id=upgrade_ztunnel_revisioned >}}
$ helm upgrade ztunnel istio/ztunnel -n istio-system --set revision="$REVISION" --wait
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

{{< tabset category-name="change-gateway-revision" >}}

{{< tab name="In-place upgrade" category-value="in-place" >}}

### Upgrade manually deployed gateway chart (optional)

`Gateway`s that were [deployed manually](/docs/tasks/traffic-management/ingress/gateway-api/#manual-deployment) must be upgraded individually using Helm:

{{< text syntax=bash snip_id=none >}}
$ helm upgrade istio-ingress istio/gateway -n istio-ingress
{{< /text >}}

{{< /tab >}}

{{< tab name="Revisioned upgrade" category-value="revisions" >}}

### Upgrade waypoints and gateways using tags

If you have followed best practices, all of your gateways, workloads, and namespaces use either the default revision (effectively, a tag named `default`), or the `istio.io/rev` label with the value set to a tag name. You can now upgrade all of these to the new version of the Istio data plane by moving their tags to point to the new version, one at a time.
To list all tags in your cluster, run:

{{< text syntax=bash snip_id=list_tags >}}
$ kubectl get mutatingwebhookconfigurations -l 'istio.io/tag' -L istio\.io/tag,istio\.io/rev
{{< /text >}}

For each tag, you can upgrade the tag by running the following command, replacing `$MYTAG` with your tag name, and `$REVISION` with your revision name:

{{< text syntax=bash snip_id=upgrade_tag >}}
$ helm template istiod istio/istiod -s templates/revision-tags-mwc.yaml --set revisionTags="{$MYTAG}" --set revision="$REVISION" -n istio-system | kubectl apply -f -
{{< /text >}}

This will upgrade all objects referencing that tag, except for those using [manual gateway deployment mode](/docs/tasks/traffic-management/ingress/gateway-api/#manual-deployment), which are dealt with
below, and sidecars, which are not used in ambient mode.

It is recommended that you closely monitor the health of applications using the upgraded data plane before upgrading the next tag. If you detect a problem, you can roll back a tag, resetting it to point to the name of your old revision:

{{< text syntax=bash snip_id=rollback_tag >}}
$ helm template istiod istio/istiod -s templates/revision-tags-mwc.yaml --set revisionTags="{$MYTAG}" --set revision="$OLD_REVISION" -n istio-system | kubectl apply -f -
{{< /text >}}

### Upgrade manually deployed gateways (optional)

`Gateway`s that were [deployed manually](/docs/tasks/traffic-management/ingress/gateway-api/#manual-deployment) must be upgraded individually using Helm:

{{< text syntax=bash snip_id=upgrade_gateway >}}
$ helm upgrade istio-ingress istio/gateway -n istio-ingress
{{< /text >}}

## Uninstall the previous control plane

If you have upgraded all data plane components to use the new revision of the Istio control plane, and are satisfied that you do not need to roll back, you can remove the previous revision of the control plane by running:

{{< text syntax=bash snip_id=delete_old_revision >}}
$ helm delete istiod-"$OLD_REVISION" -n istio-system
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}
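Before deleting the old control plane, it is worth confirming that nothing still references the old revision. The sketch below checks sample namespace-label output; in a live cluster you would substitute the output of `kubectl get namespaces -L istio.io/rev --no-headers` (the namespace names and revision values here are illustrative).

```shell
# Guard against deleting a control plane revision that is still in use.
# Sample output stands in for `kubectl get namespaces -L istio.io/rev --no-headers`.
OLD_REVISION=istio-1-21-2
ns_labels='default    Active   3d   istio-1-22-1
payments   Active   3d   istio-1-22-1'

if printf '%s\n' "$ns_labels" | grep -q "$OLD_REVISION"; then
  echo "old revision still referenced; do not delete it yet"
else
  echo "no namespace references $OLD_REVISION; safe to remove"
fi
```

The same grep can be run against gateway and webhook listings for a more thorough check.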
Follow this guide to upgrade and configure an ambient mode installation using [Helm](https://helm.sh/docs/). This guide assumes you have already performed an [ambient mode installation with Helm and the ambient wrapper chart](/docs/ambient/install/helm/all-in-one) with a previous version of Istio.

{{< warning >}}
Note that these upgrade instructions only apply if you are upgrading a Helm installation created using the ambient wrapper chart. If you installed via individual Helm component charts, see [the relevant upgrade docs](/docs/ambient/upgrade/helm) instead.
{{< /warning >}}

## Understanding ambient mode upgrades

{{< warning >}}
Note that if you installed everything as part of this wrapper chart, you can only upgrade or uninstall ambient via this wrapper chart; you cannot upgrade or uninstall sub-components individually.
{{< /warning >}}

## Prerequisites

### Prepare for the upgrade

Before upgrading Istio, we recommend downloading the new version of istioctl, and running `istioctl x precheck` to make sure the upgrade is compatible with your environment. The output should look something like this:

{{< text syntax=bash snip_id=istioctl_precheck >}}
$ istioctl x precheck
✔ No issues found when checking the cluster. Istio is safe to install or upgrade!
  To get started, check out
{{< /text >}}

Now, update the Helm repository:

{{< text syntax=bash snip_id=update_helm >}}
$ helm repo update istio
{{< /text >}}

### Upgrade the Istio ambient control plane and data plane

{{< warning >}}
Upgrading using the wrapper chart in-place will briefly disrupt all ambient mesh traffic on the node, **even with the use of revisions**. In practice the disruption period is a very small window, primarily affecting long-running connections. Node cordoning and blue/green node pools are recommended to mitigate blast radius risk during production upgrades. See your Kubernetes provider documentation for details.
{{< /warning >}}

The `ambient` chart upgrades all the Istio data plane and control plane components required for ambient, using a Helm wrapper chart that composes the individual component charts. If you have customized your istiod installation, you can reuse the `values.yaml` file from previous upgrades or installs to keep settings consistent.

{{< text syntax=bash snip_id=upgrade_ambient_aio >}}
$ helm upgrade istio-ambient istio/ambient -n istio-system --wait
{{< /text >}}

### Upgrade manually deployed gateway chart (optional)

`Gateway`s that were [deployed manually](/docs/tasks/traffic-management/ingress/gateway-api/#manual-deployment) must be upgraded individually using Helm:

{{< text syntax=bash snip_id=none >}}
$ helm upgrade istio-ingress istio/gateway -n istio-ingress
{{< /text >}}
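After updating the Helm repository, you may want to confirm an upgrade is actually available before disrupting node traffic. A sketch using a version-aware sort; the version strings are illustrative, and in a live cluster you would read them from `helm list -n istio-system` and `helm search repo istio/ambient`.

```shell
# Compare installed vs. available chart versions with a version-aware sort
# (requires GNU sort -V; versions below are illustrative).
installed="1.22.0"
available="1.23.1"

newest=$(printf '%s\n%s\n' "$installed" "$available" | sort -V | tail -n1)
if [ "$newest" = "$available" ] && [ "$installed" != "$available" ]; then
  echo "upgrade available: $installed -> $available"
else
  echo "already up to date"
fi
```

Skipping the upgrade when versions already match avoids the brief traffic disruption described in the warning above.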
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/upgrade/helm/all-in-one/index.md
{{< tip >}}
Follow this guide to install and configure an Istio mesh with support for ambient mode. If you are new to Istio and just want to try it out, follow the [quick start instructions](/docs/ambient/getting-started) instead.
{{< /tip >}}

This installation guide uses the [istioctl](/docs/reference/commands/istioctl/) command-line tool. `istioctl`, like other installation methods, exposes many customization options. Additionally, it offers user input validation to help prevent installation errors, and includes many post-installation analysis and configuration tools.

Using these instructions, you can select any one of Istio's built-in [configuration profiles](/docs/setup/additional-setup/config-profiles/) and then further customize the configuration for your specific needs.

The `istioctl` command supports the full [`IstioOperator` API](/docs/reference/config/istio.operator.v1alpha1/) via command-line options for individual settings, or by passing a YAML file containing an `IstioOperator` {{< gloss >}}custom resource{{< /gloss >}}.

## Prerequisites

Before you begin, check the following prerequisites:

1. [Download the Istio release](/docs/setup/additional-setup/download-istio-release/).
1. Perform any necessary [platform-specific setup](/docs/ambient/install/platform-prerequisites/).

## Install or upgrade the Kubernetes Gateway API CRDs

{{< boilerplate gateway-api-install-crds >}}

## Install Istio using the ambient profile

`istioctl` supports a number of [configuration profiles](/docs/setup/additional-setup/config-profiles/) that include different default options, and can be customized for your production needs. Support for ambient mode is included in the `ambient` profile. Install Istio with the following command:

{{< text syntax=bash snip_id=install_ambient >}}
$ istioctl install --set profile=ambient --skip-confirmation
{{< /text >}}

This command installs the `ambient` profile on the cluster defined by your Kubernetes configuration.
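The same install can be expressed declaratively as an `IstioOperator` resource and passed to `istioctl install -f`. A minimal sketch (the filename below is arbitrary):

```yaml
# Minimal IstioOperator equivalent of `istioctl install --set profile=ambient`.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: ambient
```

Saved as, for example, `ambient-operator.yaml`, it is applied with `istioctl install -f ambient-operator.yaml`; keeping the file in version control makes later customizations reviewable.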
## Configure and modify profiles

Istio's installation API is documented in the [`IstioOperator` API reference](/docs/reference/config/istio.operator.v1alpha1/). You can use the `--set` option to `istioctl install` to modify individual installation parameters, or specify your own configuration file with `-f`. Full details on how to use and customize `istioctl` installations are available in [the sidecar installation documentation](/docs/setup/install/istioctl/).

## Uninstall Istio

To completely uninstall Istio from a cluster, run the following command:

{{< text syntax=bash snip_id=uninstall >}}
$ istioctl uninstall --purge -y
{{< /text >}}

{{< warning >}}
The optional `--purge` flag will remove all Istio resources, including cluster-scoped resources that may be shared with other Istio control planes.
{{< /warning >}}

Alternatively, to remove only a specific Istio control plane, run the following command:

{{< text syntax=bash snip_id=none >}}
$ istioctl uninstall
{{< /text >}}

The control plane namespace (e.g., `istio-system`) is not removed by default. If no longer needed, use the following command to remove it:

{{< text syntax=bash snip_id=remove_namespace >}}
$ kubectl delete namespace istio-system
{{< /text >}}

## Generate a manifest before installation

You can generate the manifest before installing Istio using the `manifest generate` sub-command. For example, use the following command to generate a manifest for the `default` profile that can be installed with `kubectl`:

{{< text syntax=bash snip_id=none >}}
$ istioctl manifest generate > $HOME/generated-manifest.yaml
{{< /text >}}

The generated manifest can be used to inspect what exactly is installed as well as to track changes to the manifest over time.
While the `IstioOperator` CR represents the full user configuration and is sufficient for tracking it, the output from `manifest generate` also captures possible changes in the underlying charts and therefore can be used to track the actual installed resources.

{{< tip >}}
Any additional flags or custom values overrides you would normally use for installation should also be supplied to the `istioctl manifest generate` command.
{{< /tip >}}

{{< warning >}}
If attempting to install and manage Istio using `istioctl manifest generate`, please note the following caveats:

1. Manually create the Istio namespace (`istio-system` by default).
1. Istio validation will not be enabled by default. Unlike `istioctl install`, the `manifest generate` command will not create the `istiod-default-validator` validating webhook configuration unless `values.defaultRevision` is set:

    {{< text syntax=bash snip_id=none >}}
    $
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/install/istioctl/index.md
istioctl manifest generate --set values.defaultRevision=default
    {{< /text >}}

1. Resources may not be installed with the same sequencing of dependencies as `istioctl install`.
1. This method is not tested as part of Istio releases.
1. While `istioctl install` will automatically detect environment-specific settings from your Kubernetes context, `manifest generate` cannot do so as it runs offline, which may lead to unexpected results. In particular, you must ensure that you follow [these steps](/docs/ops/best-practices/security/#configure-third-party-service-account-tokens) if your Kubernetes environment does not support third party service account tokens. It is recommended to append `--cluster-specific` to your `istioctl manifest generate` command to detect the target cluster's environment, which will embed those cluster-specific environment settings into the generated manifests. This requires network access to your running cluster.
1. `kubectl apply` of the generated manifest may show transient errors due to resources not being available in the cluster in the correct order.
1. `istioctl install` automatically prunes any resources that should be removed when the configuration changes (e.g. if you remove a gateway). This does not happen when you use `istioctl manifest generate` with `kubectl`, and these resources must be removed manually.
{{< /warning >}}
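One way to track changes to the manifest over time, as suggested above, is to diff successive `istioctl manifest generate` outputs before applying. A sketch with stand-in file contents (the `/tmp` paths and `replicas` lines are placeholders for real generated manifests):

```shell
# Detect drift between two generated manifests before applying the new one.
# File contents here are stand-ins for real `istioctl manifest generate` output.
printf 'replicas: 1\n' > /tmp/manifest-old.yaml
printf 'replicas: 2\n' > /tmp/manifest-new.yaml

if diff -q /tmp/manifest-old.yaml /tmp/manifest-new.yaml >/dev/null; then
  echo "no changes since last run"
else
  echo "manifests differ; review with: diff -u /tmp/manifest-old.yaml /tmp/manifest-new.yaml"
fi
```

Committing each generated manifest to version control gives the same history with richer tooling.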
This document covers any platform- or environment-specific prerequisites for installing Istio in ambient mode.

## Platform

Certain Kubernetes environments require you to set various Istio configuration options to support them.

### Google Kubernetes Engine (GKE)

#### Namespace restrictions

On GKE, any pods with the [system-node-critical](https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/) `priorityClassName` can only be installed in namespaces that have a [ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) defined. By default in GKE, only `kube-system` has a defined ResourceQuota for the `node-critical` class. The Istio CNI node agent and `ztunnel` both require the `node-critical` class, and so in GKE, both components must either:

- Be installed into `kube-system` (_not_ `istio-system`)
- Be installed into another namespace (such as `istio-system`) in which a ResourceQuota has been manually created, for example:

{{< text syntax=yaml >}}
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gcp-critical-pods
  namespace: istio-system
spec:
  hard:
    pods: 1000
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values:
      - system-node-critical
{{< /text >}}

#### Platform profile

When using GKE you must append the correct `platform` value to your installation commands, as GKE uses nonstandard locations for CNI binaries, which requires Helm overrides.
{{< tabset category-name="install-method" >}}
{{< tab name="Helm" category-value="helm" >}}

{{< text syntax=bash >}}
$ helm install istio-cni istio/cni -n istio-system --set profile=ambient --set global.platform=gke --wait
{{< /text >}}

{{< /tab >}}
{{< tab name="istioctl" category-value="istioctl" >}}

{{< text syntax=bash >}}
$ istioctl install --set profile=ambient --set values.global.platform=gke
{{< /text >}}

{{< /tab >}}
{{< /tabset >}}

### Amazon Elastic Kubernetes Service (EKS)

If you are using EKS:

- with Amazon's VPC CNI
- with Pod ENI trunking enabled
- **and** you are using EKS pod-attached SecurityGroups via [SecurityGroupPolicy](https://aws.github.io/aws-eks-best-practices/networking/sgpp/#enforcing-mode-use-strict-mode-for-isolating-pod-and-node-traffic)

[`POD_SECURITY_GROUP_ENFORCING_MODE` must be explicitly set to `standard`](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/README.md#pod_security_group_enforcing_mode-v1110), or pod health probes will fail. This is because Istio uses a link-local SNAT address to identify kubelet health probes, and VPC CNI currently misroutes link-local packets in Pod Security Group `strict` mode.

Explicitly adding a CIDR exclusion for the link-local address to your SecurityGroup will not work, because VPC CNI's Pod Security Group mode works by silently routing traffic across links, looping it through the trunked `pod ENI` for SecurityGroup policy enforcement. Since [link-local traffic is not routable across links](https://datatracker.ietf.org/doc/html/rfc3927#section-2.6.2), the Pod Security Group feature cannot enforce policy against it as a design constraint, and drops the packets in `strict` mode. There is an [open issue on the VPC CNI component](https://github.com/aws/amazon-vpc-cni-k8s/issues/2797) for this limitation.
The current recommendation from the VPC CNI team is to disable `strict` mode to work around it if you are using Pod Security Groups, or to use `exec`-based Kubernetes probes for your pods instead of kubelet-based ones.

You can check if you have Pod ENI trunking enabled by running the following command:

{{< text syntax=bash >}}
$ kubectl set env daemonset aws-node -n kube-system --list | grep ENABLE_POD_ENI
{{< /text >}}

You can check if you have any pod-attached security groups in your cluster by running the following command:

{{< text syntax=bash >}}
$ kubectl get securitygrouppolicies.vpcresources.k8s.aws
{{< /text >}}

You can set `POD_SECURITY_GROUP_ENFORCING_MODE=standard` by running the following command, and recycling affected pods:

{{< text syntax=bash >}}
$ kubectl set env daemonset aws-node -n kube-system POD_SECURITY_GROUP_ENFORCING_MODE=standard
{{< /text >}}

### k3d

When using [k3d](https://k3d.io/) with the default Flannel CNI, you must append the correct `platform` value to your installation commands, as k3d uses nonstandard locations for CNI configuration and binaries which requires some Helm overrides.

1. Create a cluster with Traefik disabled so it doesn't conflict with Istio's ingress gateways:

    {{< text bash >}}
    $ k3d cluster create --api-port 6550 -p '9080:80@loadbalancer' -p '9443:443@loadbalancer' --agents 2 --k3s-arg '--disable=traefik@server:*'
    {{< /text >}}

1. Set `global.platform=k3d` when installing Istio charts. For example:

{{< tabset category-name="install-method" >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/install/platform-prerequisites/index.md
overrides. 1. Create a cluster with Traefik disabled so it doesn't conflict with Istio's ingress gateways: {{< text bash >}} $ k3d cluster create --api-port 6550 -p '9080:80@loadbalancer' -p '9443:443@loadbalancer' --agents 2 --k3s-arg '--disable=traefik@server:\*' {{< /text >}} 1. Set `global.platform=k3d` when installing Istio charts. For example: {{< tabset category-name="install-method" >}} {{< tab name="Helm" category-value="helm" >}} {{< text syntax=bash >}} $ helm install istio-cni istio/cni -n istio-system --set profile=ambient --set global.platform=k3d --wait {{< /text >}} {{< /tab >}} {{< tab name="istioctl" category-value="istioctl" >}} {{< text syntax=bash >}} $ istioctl install --set profile=ambient --set values.global.platform=k3d {{< /text >}} {{< /tab >}} {{< /tabset >}} ### K3s When using [K3s](https://k3s.io/) and one of its bundled CNIs, you must append the correct `platform` value to your installation commands, as K3s uses nonstandard locations for CNI configuration and binaries which requires some Helm overrides. For the default K3s paths, Istio provides built-in overrides based on the `global.platform` value. {{< tabset category-name="install-method" >}} {{< tab name="Helm" category-value="helm" >}} {{< text syntax=bash >}} $ helm install istio-cni istio/cni -n istio-system --set profile=ambient --set global.platform=k3s --wait {{< /text >}} {{< /tab >}} {{< tab name="istioctl" category-value="istioctl" >}} {{< text syntax=bash >}} $ istioctl install --set profile=ambient --set values.global.platform=k3s {{< /text >}} {{< /tab >}} {{< /tabset >}} However, these locations may be overridden in K3s, [according to K3s documentation](https://docs.k3s.io/cli/server#k3s-server-cli-help). If you are using K3s with a custom, non-bundled CNI, you must manually specify the correct paths for those CNIs, e.g. `/etc/cni/net.d` - [see the K3s docs for details](https://docs.k3s.io/networking/basic-network-options#custom-cni). 
For example: {{< tabset category-name="install-method" >}} {{< tab name="Helm" category-value="helm" >}} {{< text syntax=bash >}} $ helm install istio-cni istio/cni -n istio-system --set profile=ambient --wait --set cniConfDir=/var/lib/rancher/k3s/agent/etc/cni/net.d --set cniBinDir=/var/lib/rancher/k3s/data/current/bin/ {{< /text >}} {{< /tab >}} {{< tab name="istioctl" category-value="istioctl" >}} {{< text syntax=bash >}} $ istioctl install --set profile=ambient --set values.cni.cniConfDir=/var/lib/rancher/k3s/agent/etc/cni/net.d --set values.cni.cniBinDir=/var/lib/rancher/k3s/data/current/bin/ {{< /text >}} {{< /tab >}} {{< /tabset >}} ### MicroK8s If you are installing Istio on [MicroK8s](https://microk8s.io/), you must append the correct `platform` value to your installation commands, as MicroK8s [uses non-standard locations for CNI configuration and binaries](https://microk8s.io/docs/change-cidr). For example: {{< tabset category-name="install-method" >}} {{< tab name="Helm" category-value="helm" >}} {{< text syntax=bash >}} $ helm install istio-cni istio/cni -n istio-system --set profile=ambient --set global.platform=microk8s --wait {{< /text >}} {{< /tab >}} {{< tab name="istioctl" category-value="istioctl" >}} {{< text syntax=bash >}} $ istioctl install --set profile=ambient --set values.global.platform=microk8s {{< /text >}} {{< /tab >}} {{< /tabset >}} ### minikube If you are using [minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/) with the [Docker driver](https://minikube.sigs.k8s.io/docs/drivers/docker/), you must append the correct `platform` value to your installation commands, as minikube with Docker uses a nonstandard bind mount path for containers. 
For example:

{{< tabset category-name="install-method" >}}
{{< tab name="Helm" category-value="helm" >}}

{{< text syntax=bash >}}
$ helm install istio-cni istio/cni -n istio-system --set profile=ambient --set global.platform=minikube --wait
{{< /text >}}

{{< /tab >}}
{{< tab name="istioctl" category-value="istioctl" >}}

{{< text syntax=bash >}}
$ istioctl install --set profile=ambient --set values.global.platform=minikube
{{< /text >}}

{{< /tab >}}
{{< /tabset >}}

### Red Hat OpenShift

OpenShift requires that `ztunnel` and `istio-cni` components are installed in the `kube-system` namespace, and that you set `global.platform=openshift` for all charts.

While deploying ambient data plane mode on OpenShift, set `routingViaHost: true` in the `gatewayConfig` spec to enable OVN-Kubernetes `local` gateway mode. This one-time configuration is required if your pod manifests include liveness or readiness probes, as it ensures that probe traffic is routed through the host and applied to the host's routing table, which is necessary for the probes to function correctly. To configure the gateway mode at runtime, follow the steps described [here](https://docs.redhat.com/en/documentation/openshift_container_platform/4.19/html/ovn-kubernetes_network_plugin/configuring-gateway).

{{< tabset category-name="install-method" >}}
{{< tab name="Helm" category-value="helm" >}}

You
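The `routingViaHost` setting mentioned above lives in the OVN-Kubernetes section of the cluster `Network` operator resource. A hedged sketch of the relevant fragment (field path per Red Hat's OVN-Kubernetes gateway documentation; verify against your OpenShift version):

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      gatewayConfig:
        # Enables "local" gateway mode so probe traffic is routed
        # through the host and its routing table
        routingViaHost: true
```

This is the same configuration the linked Red Hat procedure applies at runtime.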
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/install/platform-prerequisites/index.md
probes, as it ensures that probe traffic is routed through the host and applied to the host’s routing table, which is necessary for the probes to function correctly. To configure the gateway mode at runtime, follow the steps described [here](https://docs.redhat.com/en/documentation/openshift\_container\_platform/4.19/html/ovn-kubernetes\_network\_plugin/configuring-gateway). {{< tabset category-name="install-method" >}} {{< tab name="Helm" category-value="helm" >}} You must `--set global.platform=openshift` for \*\*every\*\* chart you install, for example with the `istiod` chart: {{< text syntax=bash >}} $ helm install istiod istio/istiod -n istio-system --set profile=ambient --set global.platform=openshift --wait {{< /text >}} In addition, you must install `istio-cni` and `ztunnel` in the `kube-system` namespace, for example: {{< text syntax=bash >}} $ helm install istio-cni istio/cni -n kube-system --set profile=ambient --set global.platform=openshift --wait $ helm install ztunnel istio/ztunnel -n kube-system --set profile=ambient --set global.platform=openshift --wait {{< /text >}} {{< /tab >}} {{< tab name="istioctl" category-value="istioctl" >}} {{< text syntax=bash >}} $ istioctl install --set profile=openshift-ambient --skip-confirmation {{< /text >}} {{< /tab >}} {{< /tabset >}} ## CNI plugins The following configurations apply to all platforms, when certain {{< gloss "CNI" >}}CNI plugins{{< /gloss >}} are used: ### Cilium 1. Cilium currently defaults to proactively deleting other CNI plugins and their config, and must be configured with `cni.exclusive = false` to properly support chaining. See [the Cilium documentation](https://docs.cilium.io/en/stable/helm-reference/) for more details. 1. Cilium's BPF masquerading is currently disabled by default, and has issues with Istio's use of link-local IPs for Kubernetes health checking. 
Enabling BPF masquerading via `bpf.masquerade=true` is not currently supported, and results in non-functional pod health checks in Istio ambient. Cilium's default iptables masquerading implementation should continue to function correctly.

1. Due to how Cilium manages node identity and internally allow-lists node-level health probes to pods, applying any default-DENY `NetworkPolicy` in a Cilium CNI install underlying Istio in ambient mode will cause `kubelet` health probes (which are by-default silently exempted from all policy enforcement by Cilium) to be blocked. This is because Istio uses a link-local SNAT address for kubelet health probes, which Cilium is not aware of, and Cilium does not have an option to exempt link-local addresses from policy enforcement.

    This can be resolved by applying the following `CiliumClusterwideNetworkPolicy`:

    {{< text syntax=yaml >}}
    apiVersion: "cilium.io/v2"
    kind: CiliumClusterwideNetworkPolicy
    metadata:
      name: "allow-ambient-hostprobes"
    spec:
      description: "Allows SNAT-ed kubelet health check probes into ambient pods"
      enableDefaultDeny:
        egress: false
        ingress: false
      endpointSelector: {}
      ingress:
      - fromCIDR:
        - "169.254.7.127/32"
    {{< /text >}}

    This policy override is *not* required unless you already have other default-deny `NetworkPolicies` or `CiliumNetworkPolicies` applied in your cluster. Please see [issue #49277](https://github.com/istio/istio/issues/49277) and [CiliumClusterWideNetworkPolicy](https://docs.cilium.io/en/stable/network/kubernetes/policy/#ciliumclusterwidenetworkpolicy) for more details.

When Cilium is used to replace kube-proxy, take note of the additional configuration options required to ensure proper operation with Istio in ambient mode described in the [Cilium documentation](https://docs.cilium.io/en/stable/network/servicemesh/istio/).
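The `cni.exclusive = false` requirement for chaining translates into a small Helm values override for Cilium. A minimal sketch (the values file name is illustrative; see the Cilium Helm reference for the authoritative key):

```yaml
# cilium-values.yaml (illustrative file name)
cni:
  # Stop Cilium from proactively deleting other CNI plugins and their
  # config, so the chained istio-cni plugin survives
  exclusive: false
```

Applied with something like `helm upgrade cilium cilium/cilium -n kube-system -f cilium-values.yaml`.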
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/install/platform-prerequisites/index.md
Follow this guide to install an Istio {{< gloss "ambient" >}}ambient service mesh{{< /gloss >}} that spans multiple {{< gloss "cluster" >}}clusters{{< /gloss >}}. ## Current Status and Limitations {{< warning >}} \*\*Ambient multicluster is currently in alpha status\*\* and has significant limitations. This feature is under active development and should not be used in production environments. {{< /warning >}} Before proceeding with ambient multicluster installation, it's critical to understand the current state and limitations of this feature. ### Critical Limitations #### Network Topology Restrictions Multicluster single-network configurations are untested, and may be broken: - Use caution when deploying ambient across clusters that share the same network - Only multi-network configurations are supported #### Control Plane Limitations Primary remote configuration is not currently supported: - You can only have multiple primary clusters - Configurations with one or more remote clusters will not work correctly #### Waypoint Requirements Universal waypoint deployments are assumed across clusters: - All clusters must have identically named waypoint deployments - Waypoint configurations must be synchronized manually across clusters (e.g. 
using Flux, ArgoCD, or similar tools) - Traffic routing relies on consistent waypoint naming conventions #### Service Visibility and Scoping Service scope configurations are not read from across clusters: - Only the local cluster's service scope configuration is used as the source of truth - Remote cluster service scopes are not respected, which can lead to unexpected traffic behavior - Cross-cluster service discovery may not respect intended service boundaries If a service's waypoint is marked as global, that service will also be global: - This can lead to unintended cross-cluster traffic if not managed carefully - The solution to this issue is tracked [here](https://github.com/istio/istio/issues/57710) #### Load Distribution on Remote Network Traffic going to a remote network is not equally distributed between endpoints: - When failing over to a remote network, a single endpoint on a remote network may get a disproportionate number of requests due to multiplexing of HTTP requests and connection pooling - The solution to this issue is tracked [here](https://github.com/istio/istio/issues/58039) #### Gateway Limitations Ambient east-west gateways currently only support meshed mTLS traffic: - Cannot currently expose `istiod` across networks using ambient east-west gateways. You can still use a classic e/w gateway for this. {{< tip >}} As ambient multicluster matures, many of these limitations will be addressed. Check the [Istio release notes](https://istio.io/latest/news/) for updates on ambient multicluster capabilities. {{< /tip >}}
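Since traffic routing relies on consistent waypoint naming, one way to satisfy the "identically named waypoint deployments" requirement is to apply the same waypoint `Gateway` manifest to every cluster. A sketch only; the name and namespace here are illustrative:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-waypoint   # must be identical in every cluster
  namespace: sample   # illustrative namespace
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
```

Keeping this manifest in a single Git repository synced to all clusters (e.g. with Flux or ArgoCD, as noted above) avoids drift.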
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/install/multicluster/_index.md
Follow this guide to verify that your ambient multicluster Istio installation is working properly. Before proceeding, be sure to complete the steps under [before you begin](/docs/ambient/install/multicluster/before-you-begin) as well as choosing and following one of the [multicluster installation guides](/docs/ambient/install/multicluster).

In this guide, we will verify that multicluster is functional, deploying the `HelloWorld` application `v1` to `cluster1` and `v2` to `cluster2`. Upon receiving a request, `HelloWorld` will include its version in its response when we call the `/hello` path. We will also deploy the `curl` container to both clusters. We will use these pods as the source of requests to the `HelloWorld` service, simulating in-mesh traffic. Finally, after generating traffic, we will observe which cluster received the requests.

## Verify Multicluster

Confirm that Istiod is now able to communicate with the Kubernetes control plane of the remote cluster:

{{< text bash >}}
$ istioctl remote-clusters --context="${CTX_CLUSTER1}"
NAME       SECRET                                     STATUS   ISTIOD
cluster1                                              synced   istiod-7b74b769db-kb4kj
cluster2   istio-system/istio-remote-secret-cluster2  synced   istiod-7b74b769db-kb4kj
{{< /text >}}

All clusters should indicate their status as `synced`. If a cluster is listed with a `STATUS` of `timeout`, that means that Istiod in the primary cluster is unable to communicate with the remote cluster. See the Istiod logs for detailed error messages.
Note: if you do see `timeout` issues and there is an intermediary host (such as the [Rancher auth proxy](https://ranchermanager.docs.rancher.com/how-to-guides/new-user-guides/manage-clusters/access-clusters/authorized-cluster-endpoint#two-authentication-methods-for-rke-clusters)) sitting between Istiod in the primary cluster and the Kubernetes control plane in the remote cluster, you may need to update the `certificate-authority-data` field of the kubeconfig that `istioctl create-remote-secret` generates in order to match the certificate being used by the intermediate host.

## Deploy the `HelloWorld` Service

In order to make the `HelloWorld` service callable from any cluster, the DNS lookup must succeed in each cluster (see [deployment models](/docs/ops/deployment/deployment-models#dns-with-multiple-clusters) for details). We will address this by deploying the `HelloWorld` Service to each cluster in the mesh.

{{< tip >}}
Before proceeding, ensure that the `istio-system` namespaces in both clusters have the `topology.istio.io/network` label set to the appropriate value (e.g., `network1` for `cluster1` and `network2` for `cluster2`).
{{< /tip >}} To begin, create the `sample` namespace in each cluster: {{< text bash >}} $ kubectl create --context="${CTX\_CLUSTER1}" namespace sample $ kubectl create --context="${CTX\_CLUSTER2}" namespace sample {{< /text >}} Enroll the `sample` namespace in the mesh: {{< text bash >}} $ kubectl label --context="${CTX\_CLUSTER1}" namespace sample \ istio.io/dataplane-mode=ambient $ kubectl label --context="${CTX\_CLUSTER2}" namespace sample \ istio.io/dataplane-mode=ambient {{< /text >}} Create the `HelloWorld` service in both clusters: {{< text bash >}} $ kubectl apply --context="${CTX\_CLUSTER1}" \ -f @samples/helloworld/helloworld.yaml@ \ -l service=helloworld -n sample $ kubectl apply --context="${CTX\_CLUSTER2}" \ -f @samples/helloworld/helloworld.yaml@ \ -l service=helloworld -n sample {{< /text >}} ## Deploy `HelloWorld` `V1` Deploy the `helloworld-v1` application to `cluster1`: {{< text bash >}} $ kubectl apply --context="${CTX\_CLUSTER1}" \ -f @samples/helloworld/helloworld.yaml@ \ -l version=v1 -n sample {{< /text >}} Confirm the `helloworld-v1` pod status: {{< text bash >}} $ kubectl get pod --context="${CTX\_CLUSTER1}" -n sample -l app=helloworld NAME READY STATUS RESTARTS AGE helloworld-v1-86f77cd7bd-cpxhv 1/1 Running 0 40s {{< /text >}} Wait until the status of `helloworld-v1` is `Running`. 
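The network labels mentioned in the tip above can be checked with a quick query. A sketch, using the `topology.istio.io/network` label key as set during installation (note the jsonpath bracket notation needed for label keys containing dots):

```shell
kubectl get ns istio-system --context="${CTX_CLUSTER1}" \
  -o jsonpath="{.metadata.labels['topology\.istio\.io/network']}"
kubectl get ns istio-system --context="${CTX_CLUSTER2}" \
  -o jsonpath="{.metadata.labels['topology\.istio\.io/network']}"
```

The first command should print `network1` and the second `network2` if the labels were applied correctly.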
Now, mark the helloworld service in `cluster1` as global so that it can be accessed from other clusters in the mesh:

{{< text bash >}}
$ kubectl label --context="${CTX_CLUSTER1}" svc helloworld -n sample \
    istio.io/global="true"
{{< /text >}}

## Deploy `HelloWorld` `V2`

Deploy the `helloworld-v2` application to `cluster2`:

{{< text bash >}}
$ kubectl apply --context="${CTX_CLUSTER2}" \
    -f @samples/helloworld/helloworld.yaml@ \
    -l version=v2 -n sample
{{< /text >}}

Confirm the status of the `helloworld-v2` pod:

{{< text bash >}}
$ kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=helloworld
NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v2-758dd55874-6x4t8   1/1     Running   0          40s
{{< /text >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/install/multicluster/verify/index.md
{{< text bash >}}
$ kubectl apply --context="${CTX_CLUSTER2}" \
    -f @samples/helloworld/helloworld.yaml@ \
    -l version=v2 -n sample
{{< /text >}}

Confirm the status of the `helloworld-v2` pod:

{{< text bash >}}
$ kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=helloworld
NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v2-758dd55874-6x4t8   1/1     Running   0          40s
{{< /text >}}

Wait until the status of `helloworld-v2` is `Running`.

Now, mark the helloworld service in `cluster2` as global so that it can be accessed from other clusters in the mesh:

{{< text bash >}}
$ kubectl label --context="${CTX_CLUSTER2}" svc helloworld -n sample \
    istio.io/global="true"
{{< /text >}}

## Deploy `curl`

Deploy the `curl` application to both clusters:

{{< text bash >}}
$ kubectl apply --context="${CTX_CLUSTER1}" \
    -f @samples/curl/curl.yaml@ -n sample
$ kubectl apply --context="${CTX_CLUSTER2}" \
    -f @samples/curl/curl.yaml@ -n sample
{{< /text >}}

Confirm the status of the `curl` pod on `cluster1`:

{{< text bash >}}
$ kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=curl
NAME                    READY   STATUS    RESTARTS   AGE
curl-754684654f-n6bzf   1/1     Running   0          5s
{{< /text >}}

Wait until the status of the `curl` pod is `Running`.

Confirm the status of the `curl` pod on `cluster2`:

{{< text bash >}}
$ kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=curl
NAME                    READY   STATUS    RESTARTS   AGE
curl-754684654f-dzl9j   1/1     Running   0          5s
{{< /text >}}

Wait until the status of the `curl` pod is `Running`.

## Verifying Cross-Cluster Traffic

To verify that cross-cluster load balancing works as expected, call the `HelloWorld` service several times using the `curl` pod. To ensure load balancing is working properly, call the `HelloWorld` service from all clusters in your deployment.
Send one request from the `curl` pod on `cluster1` to the `HelloWorld` service:

{{< text bash >}}
$ kubectl exec --context="${CTX_CLUSTER1}" -n sample -c curl \
    "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
    app=curl -o jsonpath='{.items[0].metadata.name}')" \
    -- curl -sS helloworld.sample:5000/hello
{{< /text >}}

Repeat this request several times and verify that the `HelloWorld` version changes between `v1` and `v2`, signifying that endpoints in both clusters are being used:

{{< text plain >}}
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
{{< /text >}}

Now repeat this process from the `curl` pod on `cluster2`:

{{< text bash >}}
$ kubectl exec --context="${CTX_CLUSTER2}" -n sample -c curl \
    "$(kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l \
    app=curl -o jsonpath='{.items[0].metadata.name}')" \
    -- curl -sS helloworld.sample:5000/hello
{{< /text >}}

Repeat this request several times and verify that the `HelloWorld` version toggles between `v1` and `v2`:

{{< text plain >}}
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
...
{{< /text >}}

**Congratulations!** You successfully installed and verified Istio on multiple clusters!

## Next steps

Configure [locality failover](/docs/ambient/install/multicluster/failover) for your multicluster deployment.
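"Repeat this request several times" can be scripted with a small loop, reusing the same pod lookup as the single request above (sketch; assumes the `curl` pod from this guide is running):

```shell
# Send ten requests from the curl pod in cluster1 and watch which
# HelloWorld version (and therefore which cluster) answers each one
for i in $(seq 1 10); do
  kubectl exec --context="${CTX_CLUSTER1}" -n sample -c curl \
    "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l \
    app=curl -o jsonpath='{.items[0].metadata.name}')" \
    -- curl -sS helloworld.sample:5000/hello
done
```

Seeing both `v1` and `v2` in the output confirms that endpoints in both clusters are being used.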
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/install/multicluster/verify/index.md
{{< boilerplate alpha >}} {{< tip >}} This guide requires installation of the Gateway API CRDs. {{< boilerplate gateway-api-install-crds >}} {{< /tip >}} Follow this guide to install the Istio control plane on both `cluster1` and `cluster2`, making each a {{< gloss >}}primary cluster{{< /gloss >}} (this is currently the only supported configuration in ambient mode). Cluster `cluster1` is on the `network1` network, while `cluster2` is on the `network2` network. This means there is no direct connectivity between pods across cluster boundaries. Before proceeding, be sure to complete the steps under [before you begin](/docs/ambient/install/multicluster/before-you-begin). {{< boilerplate multi-cluster-with-metallb >}} In this configuration, both `cluster1` and `cluster2` observe the API Servers in each cluster for endpoints. Service workloads across cluster boundaries communicate indirectly, via dedicated gateways for [east-west](https://en.wikipedia.org/wiki/East-west\_traffic) traffic. The gateway in each cluster must be reachable from the other cluster. {{< image width="75%" link="arch.svg" caption="Multiple primary clusters on separate networks" >}} ## Set the default network for `cluster1` If the istio-system namespace is already created, we need to set the cluster's network there: {{< text bash >}} $ kubectl --context="${CTX\_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1 {{< /text >}} ## Configure `cluster1` as a primary Create the `istioctl` configuration for `cluster1`: {{< tabset category-name="multicluster-install-type-cluster-1" >}} {{< tab name="IstioOperator" category-value="iop" >}} Install Istio as primary in `cluster1` using istioctl and the `IstioOperator` API. 
{{< text bash >}}
$ cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: ambient
  components:
    pilot:
      k8s:
        env:
          - name: AMBIENT_ENABLE_MULTI_NETWORK
            value: "true"
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
EOF
{{< /text >}}

Apply the configuration to `cluster1`:

{{< text bash >}}
$ istioctl install --context="${CTX_CLUSTER1}" -f cluster1.yaml
{{< /text >}}

{{< /tab >}}
{{< tab name="Helm" category-value="helm" >}}

Install Istio as primary in `cluster1` using the following Helm commands:

Install the `base` chart in `cluster1`:

{{< text bash >}}
$ helm install istio-base istio/base -n istio-system --kube-context "${CTX_CLUSTER1}"
{{< /text >}}

Then, install the `istiod` chart in `cluster1` with the following multi-cluster settings:

{{< text bash >}}
$ helm install istiod istio/istiod -n istio-system --kube-context "${CTX_CLUSTER1}" --set global.meshID=mesh1 --set global.multiCluster.clusterName=cluster1 --set global.network=network1 --set profile=ambient --set env.AMBIENT_ENABLE_MULTI_NETWORK="true"
{{< /text >}}

Next, install the CNI node agent in ambient mode:

{{< text syntax=bash snip_id=install_cni_cluster1 >}}
$ helm install istio-cni istio/cni -n istio-system --kube-context "${CTX_CLUSTER1}" --set profile=ambient
{{< /text >}}

Finally, install the ztunnel data plane:

{{< text syntax=bash snip_id=install_ztunnel_cluster1 >}}
$ helm install ztunnel istio/ztunnel -n istio-system --kube-context "${CTX_CLUSTER1}" --set multiCluster.clusterName=cluster1 --set global.network=network1
{{< /text >}}

{{< /tab >}}
{{< /tabset >}}

## Install an ambient east-west gateway in `cluster1`

Install a gateway in `cluster1` that is dedicated to ambient [east-west](https://en.wikipedia.org/wiki/East-west_traffic) traffic. Be aware that, depending on your Kubernetes environment, this gateway may be deployed on the public Internet by default.
Production systems may require additional access restrictions (e.g. via firewall rules) to prevent external attacks. Check with your cloud vendor to see what options are available.

{{< tabset category-name="east-west-gateway-install-type-cluster-1" >}}
{{< tab name="IstioOperator" category-value="iop" >}}

{{< text bash >}}
$ @samples/multicluster/gen-eastwest-gateway.sh@ \
    --network network1 \
    --ambient | \
    kubectl --context="${CTX_CLUSTER1}" apply -f -
{{< /text >}}

{{< warning >}}
If the control-plane was installed with a revision, add the `--revision rev` flag to the `gen-eastwest-gateway.sh` command.
{{< /warning >}}

{{< /tab >}}
{{< tab name="Kubectl apply" category-value="helm" >}}

Install the east-west gateway in `cluster1` using the following Gateway definition:

{{< text bash >}}
$ cat <<EOF > cluster1-ewgateway.yaml
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: istio-eastwestgateway
  namespace: istio-system
  labels:
    topology.istio.io/network: "network1"
spec:
  gatewayClassName: istio-east-west
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
    tls:
      mode: Terminate # represents double-HBONE
      options:
        gateway.istio.io/tls-terminate-mode: ISTIO_MUTUAL
EOF
{{< /text >}}

{{< warning >}}
If
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/install/multicluster/multi-primary_multi-network/index.md
Gateway definition:

{{< text bash >}}
$ cat <<EOF > cluster1-ewgateway.yaml
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: istio-eastwestgateway
  namespace: istio-system
  labels:
    topology.istio.io/network: "network1"
spec:
  gatewayClassName: istio-east-west
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
    tls:
      mode: Terminate # represents double-HBONE
      options:
        gateway.istio.io/tls-terminate-mode: ISTIO_MUTUAL
EOF
{{< /text >}}

{{< warning >}}
If you are running a revisioned instance of istiod and you don't have a default revision or tag set, you may need to add the `istio.io/rev` label to this `Gateway` manifest.
{{< /warning >}}

Apply the configuration to `cluster1`:

{{< text bash >}}
$ kubectl apply --context="${CTX_CLUSTER1}" -f cluster1-ewgateway.yaml
{{< /text >}}

{{< /tab >}}
{{< /tabset >}}

Wait for the east-west gateway to be assigned an external IP address:

{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER1}" get svc istio-eastwestgateway -n istio-system
NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)   AGE
istio-eastwestgateway   LoadBalancer   10.80.6.124   34.75.71.237   ...       51s
{{< /text >}}

## Set the default network for `cluster2`

If the istio-system namespace is already created, we need to set the cluster's network there:

{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER2}" get namespace istio-system && \
  kubectl --context="${CTX_CLUSTER2}" label namespace istio-system topology.istio.io/network=network2
{{< /text >}}

## Configure cluster2 as a primary

Create the `istioctl` configuration for `cluster2`:

{{< tabset category-name="multicluster-install-type-cluster-2" >}}
{{< tab name="IstioOperator" category-value="iop" >}}

Install Istio as primary in `cluster2` using istioctl and the `IstioOperator` API.
{{< text bash >}}
$ cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: ambient
  components:
    pilot:
      k8s:
        env:
        - name: AMBIENT_ENABLE_MULTI_NETWORK
          value: "true"
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster2
      network: network2
EOF
{{< /text >}}

Apply the configuration to `cluster2`:

{{< text bash >}}
$ istioctl install --context="${CTX_CLUSTER2}" -f cluster2.yaml
{{< /text >}}

{{< /tab >}}

{{< tab name="Helm" category-value="helm" >}}

Install Istio as primary in `cluster2` using the following Helm commands.

Install the `base` chart in `cluster2`:

{{< text bash >}}
$ helm install istio-base istio/base -n istio-system --kube-context "${CTX_CLUSTER2}"
{{< /text >}}

Then, install the `istiod` chart in `cluster2` with the following multi-cluster settings:

{{< text bash >}}
$ helm install istiod istio/istiod -n istio-system --kube-context "${CTX_CLUSTER2}" --set global.meshID=mesh1 --set global.multiCluster.clusterName=cluster2 --set global.network=network2 --set profile=ambient --set env.AMBIENT_ENABLE_MULTI_NETWORK="true"
{{< /text >}}

Next, install the CNI node agent in ambient mode:

{{< text syntax=bash snip_id=install_cni_cluster2 >}}
$ helm install istio-cni istio/cni -n istio-system --kube-context "${CTX_CLUSTER2}" --set profile=ambient
{{< /text >}}

Finally, install the ztunnel data plane:

{{< text syntax=bash snip_id=install_ztunnel_cluster2 >}}
$ helm install ztunnel istio/ztunnel -n istio-system --kube-context "${CTX_CLUSTER2}" --set multiCluster.clusterName=cluster2 --set global.network=network2
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

## Install an ambient east-west gateway in `cluster2`

As we did with `cluster1` above, install a gateway in `cluster2` that is dedicated to east-west traffic.
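The Helm path installs four separate charts (`istio-base`, `istiod`, `istio-cni`, `ztunnel`). As a quick sanity check, you can compare the release names reported by `helm ls -n istio-system --kube-context "${CTX_CLUSTER2}" -q` against the four expected ones. The `check_releases` helper below is an illustrative sketch, not part of the Istio docs:

```shell
# Illustrative check (not from the Istio docs): given the release names
# reported by `helm ls -q`, report any of the four expected Istio charts
# that are missing.
check_releases() {
  installed="$1"
  missing=""
  for r in istio-base istiod istio-cni ztunnel; do
    case " $installed " in
      *" $r "*) ;;                  # release present
      *) missing="$missing $r" ;;   # release absent
    esac
  done
  if [ -z "$missing" ]; then
    echo "all releases present"
  else
    echo "missing:$missing"
  fi
}

# Usage against a live cluster would look like:
# check_releases "$(helm ls -n istio-system --kube-context "${CTX_CLUSTER2}" -q | tr '\n' ' ')"
```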
{{< tabset category-name="east-west-gateway-install-type-cluster-2" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

{{< text bash >}}
$ @samples/multicluster/gen-eastwest-gateway.sh@ \
    --network network2 \
    --ambient | \
    kubectl apply --context="${CTX_CLUSTER2}" -f -
{{< /text >}}

{{< /tab >}}

{{< tab name="Kubectl apply" category-value="helm" >}}

Install the east-west gateway in `cluster2` using the following Gateway definition:

{{< text bash >}}
$ cat <<EOF > cluster2-ewgateway.yaml
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: istio-eastwestgateway
  namespace: istio-system
  labels:
    topology.istio.io/network: "network2"
spec:
  gatewayClassName: istio-east-west
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
    tls:
      mode: Terminate # represents double-HBONE
      options:
        gateway.istio.io/tls-terminate-mode: ISTIO_MUTUAL
EOF
{{< /text >}}

{{< warning >}}
If you are running a revisioned instance of istiod and you don't have a default revision or tag set, you may need to add the `istio.io/rev` label to this `Gateway` manifest.
{{< /warning >}}

Apply the configuration to `cluster2`:

{{< text bash >}}
$ kubectl apply --context="${CTX_CLUSTER2}" -f cluster2-ewgateway.yaml
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

Wait for the east-west gateway to be assigned an external IP address:
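As with `cluster1`, the gateway receives its address asynchronously, so `kubectl get svc` may initially show `<pending>`. Instead of re-running the command by hand, the wait can be scripted; the `wait_for_eastwest_ip` helper below is an illustrative sketch (not part of the Istio docs), with the kubectl command parameterized so that in practice you would pass `"kubectl --context=${CTX_CLUSTER2}"`:

```shell
# Illustrative helper (not from the Istio docs): poll until the
# istio-eastwestgateway Service reports a LoadBalancer ingress IP.
# $1 is the kubectl command to run (e.g. 'kubectl --context=...'),
# $2 is an optional number of attempts (default 30, 5s apart).
wait_for_eastwest_ip() {
  kubectl_cmd="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    ip=$($kubectl_cmd get svc istio-eastwestgateway -n istio-system \
      -o "jsonpath={.status.loadBalancer.ingress[0].ip}" 2>/dev/null)
    if [ -n "$ip" ]; then
      echo "$ip"
      return 0
    fi
    i=$((i + 1))
    sleep 5
  done
  echo "timed out waiting for external IP" >&2
  return 1
}

# Usage against a live cluster would look like:
# wait_for_eastwest_ip "kubectl --context=${CTX_CLUSTER2}"
```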
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/install/multicluster/multi-primary_multi-network/index.md