don't have a default revision or tag set, you may need to add the `istio.io/rev` label to this `Gateway` manifest.
{{< /warning >}}

Apply the configuration to `cluster2`:

{{< text bash >}}
$ kubectl apply --context="${CTX_CLUSTER2}" -f cluster2-ewgateway.yaml
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

Wait for the east-west gateway to be assigned an external IP address:

{{< text bash >}}
$ kubectl --context="${CTX_CLUSTER2}" get svc istio-eastwestgateway -n istio-system
NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)   AGE
istio-eastwestgateway   LoadBalancer   10.0.12.121   34.122.91.98   ...       51s
{{< /text >}}

## Enable Endpoint Discovery

Install a remote secret in `cluster2` that provides access to `cluster1`’s API server:

{{< text bash >}}
$ istioctl create-remote-secret \
    --context="${CTX_CLUSTER1}" \
    --name=cluster1 | \
    kubectl apply -f - --context="${CTX_CLUSTER2}"
{{< /text >}}

Install a remote secret in `cluster1` that provides access to `cluster2`’s API server:

{{< text bash >}}
$ istioctl create-remote-secret \
    --context="${CTX_CLUSTER2}" \
    --name=cluster2 | \
    kubectl apply -f - --context="${CTX_CLUSTER1}"
{{< /text >}}

**Congratulations!** You successfully installed an Istio mesh across multiple primary clusters on different networks!

## Next Steps

You can now [verify the installation](/docs/ambient/install/multicluster/verify).

## Cleanup

Uninstall Istio from both `cluster1` and `cluster2` using the same mechanism you installed Istio with (istioctl or Helm).
{{< tabset category-name="multicluster-uninstall-type-cluster-1" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

Uninstall Istio in `cluster1`:

{{< text syntax=bash snip_id=none >}}
$ istioctl uninstall --context="${CTX_CLUSTER1}" -y --purge
$ kubectl delete ns istio-system --context="${CTX_CLUSTER1}"
{{< /text >}}

Uninstall Istio in `cluster2`:

{{< text syntax=bash snip_id=none >}}
$ istioctl uninstall --context="${CTX_CLUSTER2}" -y --purge
$ kubectl delete ns istio-system --context="${CTX_CLUSTER2}"
{{< /text >}}

{{< /tab >}}

{{< tab name="Helm" category-value="helm" >}}

Delete the Istio Helm installation from `cluster1`:

{{< text syntax=bash >}}
$ helm delete ztunnel -n istio-system --kube-context "${CTX_CLUSTER1}"
$ helm delete istio-cni -n istio-system --kube-context "${CTX_CLUSTER1}"
$ helm delete istiod -n istio-system --kube-context "${CTX_CLUSTER1}"
$ helm delete istio-base -n istio-system --kube-context "${CTX_CLUSTER1}"
{{< /text >}}

Delete the `istio-system` namespace from `cluster1`:

{{< text syntax=bash >}}
$ kubectl delete ns istio-system --context="${CTX_CLUSTER1}"
{{< /text >}}

Delete the Istio Helm installation from `cluster2`:

{{< text syntax=bash >}}
$ helm delete ztunnel -n istio-system --kube-context "${CTX_CLUSTER2}"
$ helm delete istio-cni -n istio-system --kube-context "${CTX_CLUSTER2}"
$ helm delete istiod -n istio-system --kube-context "${CTX_CLUSTER2}"
$ helm delete istio-base -n istio-system --kube-context "${CTX_CLUSTER2}"
{{< /text >}}

Delete the `istio-system` namespace from `cluster2`:

{{< text syntax=bash >}}
$ kubectl delete ns istio-system --context="${CTX_CLUSTER2}"
{{< /text >}}

(Optional) Delete CRDs installed by Istio:

Deleting CRDs permanently removes any Istio resources you have created in your clusters.
To delete Istio CRDs installed in your clusters:

{{< text syntax=bash snip_id=delete_crds >}}
$ kubectl get crd -oname --context "${CTX_CLUSTER1}" | grep --color=never 'istio.io' | xargs kubectl delete --context "${CTX_CLUSTER1}"
$ kubectl get crd -oname --context "${CTX_CLUSTER2}" | grep --color=never 'istio.io' | xargs kubectl delete --context "${CTX_CLUSTER2}"
{{< /text >}}

And finally, clean up the Gateway API CRDs:

{{< text syntax=bash snip_id=delete_gateway_crds >}}
$ kubectl get crd -oname --context "${CTX_CLUSTER1}" | grep --color=never 'gateway.networking.k8s.io' | xargs kubectl delete --context "${CTX_CLUSTER1}"
$ kubectl get crd -oname --context "${CTX_CLUSTER2}" | grep --color=never 'gateway.networking.k8s.io' | xargs kubectl delete --context "${CTX_CLUSTER2}"
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/install/multicluster/multi-primary_multi-network/index.md
{{< boilerplate alpha >}}

Before you begin a multicluster installation, review the [deployment models guide](/docs/ops/deployment/deployment-models), which describes the foundational concepts used throughout this guide. In addition, review the requirements and perform the initial steps below.

## Requirements

### Cluster

This guide requires that you have two Kubernetes clusters with support for LoadBalancer `Services` on any of the [supported Kubernetes versions](/docs/releases/supported-releases#support-status-of-istio-releases): {{< supported_kubernetes_versions >}}.

### API Server Access

The API server in each cluster must be accessible to the other clusters in the mesh. Many cloud providers make API servers publicly accessible via network load balancers (NLBs). The ambient east-west gateway cannot be used to expose the API server, as it only supports double HBONE traffic. A non-ambient [east-west](https://en.wikipedia.org/wiki/East-west_traffic) gateway could be used to enable access to the API server.

## Environment Variables

This guide will refer to two clusters: `cluster1` and `cluster2`. The following environment variables will be used throughout to simplify the instructions:

Variable | Description
-------- | -----------
`CTX_CLUSTER1` | The context name in the default [Kubernetes configuration file](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) used for accessing the `cluster1` cluster.
`CTX_CLUSTER2` | The context name in the default [Kubernetes configuration file](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) used for accessing the `cluster2` cluster.

Set the two variables before proceeding:

{{< text syntax=bash snip_id=none >}}
$ export CTX_CLUSTER1=<your cluster1 context>
$ export CTX_CLUSTER2=<your cluster2 context>
{{< /text >}}

## Configure Trust

A multicluster service mesh deployment requires that you establish trust between all clusters in the mesh.
Depending on the requirements for your system, there may be multiple options available for establishing trust. See [certificate management](/docs/tasks/security/cert-management/) for detailed descriptions and instructions for all available options. Depending on which option you choose, the installation instructions for Istio may change slightly.

This guide will assume that you use a common root to generate intermediate certificates for each primary cluster. Follow the [instructions](/docs/tasks/security/cert-management/plugin-ca-cert/) to generate and push a CA certificate secret to both the `cluster1` and `cluster2` clusters.

{{< tip >}}
If you currently have a single cluster with a self-signed CA (as described in [Getting Started](/docs/setup/getting-started/)), you need to change the CA using one of the methods described in [certificate management](/docs/tasks/security/cert-management/). Changing the CA typically requires reinstalling Istio. The installation instructions below may have to be altered based on your choice of CA.
{{< /tip >}}

## Next steps

You're now ready to install an Istio ambient mesh across multiple clusters.

- [Install Multi-Primary on Different Networks](/docs/ambient/install/multicluster/multi-primary_multi-network)

{{< tip >}}
If you plan on installing Istio multi-cluster using Helm, follow the [Helm prerequisites](/docs/setup/install/helm/#prerequisites) in the Helm install guide first.
{{< /tip >}}
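For orientation, the plug-in CA instructions linked above result in a Kubernetes secret named `cacerts` in the `istio-system` namespace of each cluster. A minimal sketch of its layout is shown below; the file names match the plug-in CA certificate guide, while the data values are placeholders rather than real base64-encoded certificates.

```yaml
# Sketch of the cacerts secret expected in each cluster (istio-system).
# File names follow the plug-in CA certificate instructions; the values
# below are placeholders, not real certificate material.
apiVersion: v1
kind: Secret
metadata:
  name: cacerts
  namespace: istio-system
data:
  ca-cert.pem: "<base64 intermediate CA certificate>"
  ca-key.pem: "<base64 intermediate CA private key>"
  root-cert.pem: "<base64 shared root certificate>"
  cert-chain.pem: "<base64 certificate chain up to the root>"
```

Both clusters must share the same `root-cert.pem` so that workloads in one cluster can validate certificates issued by the other cluster's intermediate CA.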
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/install/multicluster/before-you-begin/index.md
Follow this guide to customize failover behavior in your ambient multicluster Istio installation using waypoint proxies. Before proceeding, be sure to complete the ambient multicluster Istio installation following one of the [multicluster installation guides](/docs/ambient/install/multicluster) and verify that the installation is working properly.

In this guide, we will build on top of the `HelloWorld` application used to verify the multicluster installation. We will configure locality failover for the `HelloWorld` service to prefer endpoints in the cluster local to the client using a `DestinationRule`, and will deploy a waypoint proxy to enforce the configuration.

## Deploy waypoint proxy

In order to configure outlier detection and customize failover behavior for the service, we need a waypoint proxy. To begin, deploy a waypoint proxy to each cluster in the mesh:

{{< text bash >}}
$ istioctl --context "${CTX_CLUSTER1}" waypoint apply --name waypoint --for service -n sample --wait
$ istioctl --context "${CTX_CLUSTER2}" waypoint apply --name waypoint --for service -n sample --wait
{{< /text >}}

Confirm the status of the waypoint proxy deployment on `cluster1`:

{{< text bash >}}
$ kubectl --context "${CTX_CLUSTER1}" get deployment waypoint --namespace sample
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
waypoint   1/1     1            1           137m
{{< /text >}}

Confirm the status of the waypoint proxy deployment on `cluster2`:

{{< text bash >}}
$ kubectl --context "${CTX_CLUSTER2}" get deployment waypoint --namespace sample
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
waypoint   1/1     1            1           138m
{{< /text >}}

Wait until all waypoint proxies are ready.
Configure the `HelloWorld` service in each cluster to use the waypoint proxy:

{{< text bash >}}
$ kubectl --context "${CTX_CLUSTER1}" label svc helloworld -n sample istio.io/use-waypoint=waypoint
$ kubectl --context "${CTX_CLUSTER2}" label svc helloworld -n sample istio.io/use-waypoint=waypoint
{{< /text >}}

Finally, and this step is specific to multicluster deployments of waypoint proxies, mark the waypoint proxy service in each cluster as global, just like you did earlier with the `HelloWorld` service:

{{< text bash >}}
$ kubectl --context "${CTX_CLUSTER1}" label svc waypoint -n sample istio.io/global=true
$ kubectl --context "${CTX_CLUSTER2}" label svc waypoint -n sample istio.io/global=true
{{< /text >}}

The `HelloWorld` service in both clusters is now configured to use waypoint proxies, but the waypoint proxies don't do anything useful yet.

## Configure locality failover

To configure locality failover, create and apply a `DestinationRule` in `cluster1`:

{{< text bash >}}
$ kubectl --context "${CTX_CLUSTER1}" apply -n sample -f - <<EOF
...
EOF
{{< /text >}}

Apply the same `DestinationRule` in `cluster2` as well:

{{< text bash >}}
$ kubectl --context "${CTX_CLUSTER2}" apply -n sample -f - <<EOF
...
EOF
{{< /text >}}

This `DestinationRule` configures the following:

- [Outlier detection](/docs/reference/config/networking/destination-rule/#OutlierDetection) for the `HelloWorld` service. This instructs waypoint proxies how to identify when endpoints for a service are unhealthy. It's required for failover to function properly.
- [Failover priority](/docs/reference/config/networking/destination-rule/#LocalityLoadBalancerSetting), which instructs the waypoint proxy how to prioritize endpoints when routing requests. In this example, the waypoint proxy will prefer endpoints in the same cluster over endpoints in other clusters.

With these policies in place, waypoint proxies will prefer endpoints in the same cluster as the waypoint proxy when they are available and considered healthy based on the outlier detection configuration.
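The inline `DestinationRule` YAML was lost from this copy of the guide. Based on the description above (outlier detection plus a failover priority that prefers the local cluster), a plausible sketch follows; the specific thresholds and the `topology.istio.io/network` priority key are illustrative assumptions, not necessarily the guide's exact manifest.

```yaml
# Hypothetical DestinationRule matching the description in the text:
# outlier detection plus a failover priority that prefers endpoints
# on the client's own network. All field values are illustrative.
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld.sample.svc.cluster.local
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 1   # eject an endpoint after a single 5xx
      interval: 1s
      baseEjectionTime: 1m
    loadBalancer:
      localityLbSetting:
        enabled: true
        failoverPriority:
        - "topology.istio.io/network"   # prefer the local network first
```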
## Verify traffic stays in local cluster

Send a request from the `curl` pod on `cluster1` to the `HelloWorld` service:

{{< text bash >}}
$ kubectl exec --context "${CTX_CLUSTER1}" -n sample -c curl \
    "$(kubectl get pod --context "${CTX_CLUSTER1}" -n sample -l \
    app=curl -o jsonpath='{.items[0].metadata.name}')" \
    -- curl -sS helloworld.sample:5000/hello
{{< /text >}}

If you repeat this request several times, the `HelloWorld` version should always be `v1`, because the traffic stays in `cluster1`:

{{< text plain >}}
Hello version: v1, instance: helloworld-v1-954745fd-z6qcn
Hello version: v1, instance: helloworld-v1-954745fd-z6qcn
...
{{< /text >}}

Similarly, send a request from the `curl` pod
on `cluster2` several times:

{{< text bash >}}
$ kubectl exec --context "${CTX_CLUSTER2}" -n sample -c curl \
    "$(kubectl get pod --context "${CTX_CLUSTER2}" -n sample -l \
    app=curl -o jsonpath='{.items[0].metadata.name}')" \
    -- curl -sS helloworld.sample:5000/hello
{{< /text >}}

You should see that all requests are processed in `cluster2` by looking at the version in the response:

{{< text plain >}}
Hello version: v2, instance: helloworld-v2-7b768b9bbd-7zftm
Hello version: v2, instance: helloworld-v2-7b768b9bbd-7zftm
...
{{< /text >}}

## Verify failover to another cluster

To verify that failover to the remote cluster works, simulate a `HelloWorld` service outage in `cluster1` by scaling down the deployment:

{{< text bash >}}
$ kubectl --context "${CTX_CLUSTER1}" scale --replicas=0 deployment/helloworld-v1 -n sample
{{< /text >}}

Send a request from the `curl` pod on `cluster1` to the `HelloWorld` service again:

{{< text bash >}}
$ kubectl exec --context "${CTX_CLUSTER1}" -n sample -c curl \
    "$(kubectl get pod --context "${CTX_CLUSTER1}" -n sample -l \
    app=curl -o jsonpath='{.items[0].metadata.name}')" \
    -- curl -sS helloworld.sample:5000/hello
{{< /text >}}

This time you should see that the request is processed by the `HelloWorld` service in `cluster2`, because there are no available endpoints in `cluster1`:

{{< text plain >}}
Hello version: v2, instance: helloworld-v2-7b768b9bbd-7zftm
Hello version: v2, instance: helloworld-v2-7b768b9bbd-7zftm
...
{{< /text >}}

**Congratulations!** You successfully configured locality failover in an Istio ambient multicluster deployment!
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/install/multicluster/failover/index.md
{{< tip >}}
Follow this guide to install and configure an Istio mesh with support for ambient mode. If you are new to Istio, and just want to try it out, follow the [quick start instructions](/docs/ambient/getting-started) instead.
{{< /tip >}}

We encourage the use of Helm to install Istio for production use in ambient mode. To allow controlled upgrades, the control plane and data plane components are packaged and installed separately. (Because the ambient data plane is split across [two components](/docs/ambient/architecture/data-plane), the ztunnel and waypoints, upgrades involve separate steps for these components.)

## Prerequisites

1. Check the [Platform-Specific Prerequisites](/docs/ambient/install/platform-prerequisites).

1. [Install the Helm client](https://helm.sh/docs/intro/install/), version 3.6 or above.

1. Configure the Helm repository:

    {{< text syntax=bash snip_id=configure_helm >}}
    $ helm repo add istio https://istio-release.storage.googleapis.com/charts
    $ helm repo update
    {{< /text >}}

## Install the control plane

Default configuration values can be changed using one or more `--set <parameter>=<value>` arguments. Alternatively, you can specify several parameters in a custom values file using the `--values <file>` argument.

{{< tip >}}
You can display the default values of configuration parameters using the `helm show values <chart>` command, or refer to the Artifact Hub chart documentation for the [base](https://artifacthub.io/packages/helm/istio-official/base?modal=values), [istiod](https://artifacthub.io/packages/helm/istio-official/istiod?modal=values), [CNI](https://artifacthub.io/packages/helm/istio-official/cni?modal=values), [ztunnel](https://artifacthub.io/packages/helm/istio-official/ztunnel?modal=values) and [Gateway](https://artifacthub.io/packages/helm/istio-official/gateway?modal=values) chart configuration parameters.
{{< /tip >}}

Full details on how to use and customize Helm installations are available in [the sidecar installation documentation](/docs/setup/install/helm/).

Unlike [istioctl](/docs/ambient/install/istioctl/) profiles, which group together components to be installed or removed, Helm profiles simply set groups of configuration values.

### Base components

The `base` chart contains the basic CRDs and cluster roles required to set up Istio. This should be installed prior to any other Istio component.

{{< text syntax=bash snip_id=install_base >}}
$ helm install istio-base istio/base -n istio-system --create-namespace --wait
{{< /text >}}

### Install or upgrade the Kubernetes Gateway API CRDs

{{< boilerplate gateway-api-install-crds >}}

### istiod control plane

The `istiod` chart installs a revision of Istiod. Istiod is the control plane component that manages and configures the proxies to route traffic within the mesh.

{{< text syntax=bash snip_id=install_istiod >}}
$ helm install istiod istio/istiod --namespace istio-system --set profile=ambient --wait
{{< /text >}}

### CNI node agent

The `cni` chart installs the Istio CNI node agent. It is responsible for detecting the pods that belong to the ambient mesh, and configuring the traffic redirection between pods and the ztunnel node proxy (which will be installed later).

{{< text syntax=bash snip_id=install_cni >}}
$ helm install istio-cni istio/cni -n istio-system --set profile=ambient --wait
{{< /text >}}

## Install the data plane

### ztunnel DaemonSet

The `ztunnel` chart installs the ztunnel DaemonSet, which is the node proxy component of Istio's ambient mode.

{{< text syntax=bash snip_id=install_ztunnel >}}
$ helm install ztunnel istio/ztunnel -n istio-system --wait
{{< /text >}}

### Ingress gateway (optional)

{{< tip >}}
{{< boilerplate gateway-api-future >}}

If you use the Gateway API, you do not need to install and manage an ingress gateway Helm chart as described below.
Refer to the [Gateway API task](/docs/tasks/traffic-management/ingress/gateway-api/#automated-deployment) for details.
{{< /tip >}}

To install an ingress gateway, run the command below:

{{< text syntax=bash snip_id=install_ingress >}}
$ helm install istio-ingress istio/gateway -n istio-ingress --create-namespace --wait
{{< /text >}}

If your Kubernetes cluster doesn't support the `LoadBalancer` service type (`type: LoadBalancer`) with a proper external IP assigned, run the above command without the `--wait` parameter to avoid the infinite wait.

See [Installing Gateways](/docs/setup/additional-setup/gateway/) for in-depth documentation on gateway installation.

## Configuration

To view supported configuration options and documentation, run:

{{< text syntax=bash >}}
$ helm show values istio/istiod
{{< /text >}}

## Verify the installation

### Verify the workload status

After installing all the components, you can check the
Helm deployment status with:

{{< text syntax=bash snip_id=show_components >}}
$ helm ls -n istio-system
NAME         NAMESPACE      REVISION   UPDATED                                   STATUS     CHART                                APP VERSION
istio-base   istio-system   1          2024-04-17 22:14:45.964722028 +0000 UTC   deployed   base-{{< istio_full_version >}}      {{< istio_full_version >}}
istio-cni    istio-system   1          2024-04-17 22:14:45.964722028 +0000 UTC   deployed   cni-{{< istio_full_version >}}       {{< istio_full_version >}}
istiod       istio-system   1          2024-04-17 22:14:45.964722028 +0000 UTC   deployed   istiod-{{< istio_full_version >}}    {{< istio_full_version >}}
ztunnel      istio-system   1          2024-04-17 22:14:45.964722028 +0000 UTC   deployed   ztunnel-{{< istio_full_version >}}   {{< istio_full_version >}}
{{< /text >}}

You can check the status of the deployed pods with:

{{< text syntax=bash snip_id=check_pods >}}
$ kubectl get pods -n istio-system
NAME                      READY   STATUS    RESTARTS   AGE
istio-cni-node-g97z5      1/1     Running   0          10m
istiod-5f4c75464f-gskxf   1/1     Running   0          10m
ztunnel-c2z4s             1/1     Running   0          10m
{{< /text >}}

### Verify with the sample application

After installing ambient mode with Helm, you can follow the [Deploy the sample application](/docs/ambient/getting-started/deploy-sample-app/) guide to deploy the sample application and ingress gateways, and then you can [add your application to the ambient mesh](/docs/ambient/getting-started/secure-and-visualize/#add-bookinfo-to-the-mesh).

## Uninstall

You can uninstall Istio and its components by uninstalling the charts installed above.
1. List all the Istio charts installed in the `istio-system` namespace:

    {{< text syntax=bash >}}
    $ helm ls -n istio-system
    NAME         NAMESPACE      REVISION   UPDATED                                   STATUS     CHART                                APP VERSION
    istio-base   istio-system   1          2024-04-17 22:14:45.964722028 +0000 UTC   deployed   base-{{< istio_full_version >}}      {{< istio_full_version >}}
    istio-cni    istio-system   1          2024-04-17 22:14:45.964722028 +0000 UTC   deployed   cni-{{< istio_full_version >}}       {{< istio_full_version >}}
    istiod       istio-system   1          2024-04-17 22:14:45.964722028 +0000 UTC   deployed   istiod-{{< istio_full_version >}}    {{< istio_full_version >}}
    ztunnel      istio-system   1          2024-04-17 22:14:45.964722028 +0000 UTC   deployed   ztunnel-{{< istio_full_version >}}   {{< istio_full_version >}}
    {{< /text >}}

1. (Optional) Delete any Istio gateway chart installations:

    {{< text syntax=bash snip_id=delete_ingress >}}
    $ helm delete istio-ingress -n istio-ingress
    $ kubectl delete namespace istio-ingress
    {{< /text >}}

1. Delete the ztunnel chart:

    {{< text syntax=bash snip_id=delete_ztunnel >}}
    $ helm delete ztunnel -n istio-system
    {{< /text >}}

1. Delete the Istio CNI chart:

    {{< text syntax=bash snip_id=delete_cni >}}
    $ helm delete istio-cni -n istio-system
    {{< /text >}}

1. Delete the istiod control plane chart:

    {{< text syntax=bash snip_id=delete_istiod >}}
    $ helm delete istiod -n istio-system
    {{< /text >}}

1. Delete the Istio base chart:

    {{< tip >}}
    By design, deleting a chart via Helm doesn't delete the Custom Resource Definitions (CRDs) installed via the chart.
    {{< /tip >}}

    {{< text syntax=bash snip_id=delete_base >}}
    $ helm delete istio-base -n istio-system
    {{< /text >}}

1. (Optional) Delete CRDs installed by Istio:

    {{< warning >}}
    This will delete all created Istio resources.
    {{< /warning >}}

    {{< text syntax=bash snip_id=delete_crds >}}
    $ kubectl get crd -oname | grep --color=never 'istio.io' | xargs kubectl delete
    {{< /text >}}
1. Delete the `istio-system` namespace:

    {{< text syntax=bash snip_id=delete_system_namespace >}}
    $ kubectl delete namespace istio-system
    {{< /text >}}

## Generate a manifest before installation

You can generate the manifests for each component before installing Istio using the `helm template` sub-command. For example, to generate a manifest that can be installed with `kubectl` for the `istiod` component:

{{< text syntax=bash snip_id=none >}}
$ helm template istiod istio/istiod -n istio-system --kube-version {Kubernetes version of target cluster} > istiod.yaml
{{< /text >}}

The generated manifest can be used to inspect what exactly is installed as well as to track changes to the manifest over time.

{{< tip >}}
Any additional flags or custom values
overrides you would normally use for installation should also be supplied to the `helm template` command.
{{< /tip >}}

To install the manifest generated above, which will create the `istiod` component in the target cluster:

{{< text syntax=bash snip_id=none >}}
$ kubectl apply -f istiod.yaml
{{< /text >}}

{{< warning >}}
If attempting to install and manage Istio using `helm template`, please note the following caveats:

1. The Istio namespace (`istio-system` by default) must be created manually.

1. Resources may not be installed with the same sequencing of dependencies as with `helm install`.

1. This method is not tested as part of Istio releases.

1. While `helm install` will automatically detect environment-specific settings from your Kubernetes context, `helm template` cannot, as it runs offline, which may lead to unexpected results. In particular, you must ensure that you follow [these steps](/docs/ops/best-practices/security/#configure-third-party-service-account-tokens) if your Kubernetes environment does not support third party service account tokens.

1. `kubectl apply` of the generated manifest may show transient errors due to resources not being available in the cluster in the correct order.

1. `helm install` automatically prunes any resources that should be removed when the configuration changes (e.g. if you remove a gateway). This does not happen when you use `helm template` with `kubectl`, and these resources must be removed manually.
{{< /warning >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/install/helm/index.md
### Install or upgrade the Kubernetes Gateway API CRDs

{{< boilerplate gateway-api-install-crds >}}

### Install the Istio ambient control plane and data plane

The `ambient` chart installs all the Istio data plane and control plane components required for ambient, using a Helm wrapper chart that composes the individual component charts.

{{< warning >}}
Note that if you install everything as part of this wrapper chart, you can only upgrade or uninstall ambient via this wrapper chart - you cannot upgrade or uninstall sub-components individually.
{{< /warning >}}

{{< text syntax=bash snip_id=install_ambient_aio >}}
$ helm install istio-ambient istio/ambient --namespace istio-system --create-namespace --wait
{{< /text >}}

### Ingress gateway (optional)

{{< tip >}}
{{< boilerplate gateway-api-future >}}

If you use the Gateway API, you do not need to install and manage an ingress gateway Helm chart as described below.
Refer to the [Gateway API task](/docs/tasks/traffic-management/ingress/gateway-api/#automated-deployment) for details.
{{< /tip >}}

To install an ingress gateway, run the command below:

{{< text syntax=bash snip_id=install_ingress >}}
$ helm install istio-ingress istio/gateway -n istio-ingress --create-namespace --wait
{{< /text >}}

If your Kubernetes cluster doesn't support the `LoadBalancer` service type (`type: LoadBalancer`) with a proper external IP assigned, run the above command without the `--wait` parameter to avoid the infinite wait.

See [Installing Gateways](/docs/setup/additional-setup/gateway/) for in-depth documentation on gateway installation.

## Configuration

The ambient wrapper chart composes the following component Helm charts:

- base
- istiod
- istio-cni
- ztunnel

Default configuration values can be changed using one or more `--set <parameter>=<value>` arguments. Alternatively, you can specify several parameters in a custom values file using the `--values <file>` argument.

You can override component-level settings via the wrapper chart just like you can when installing the components individually, by prefixing the value path with the name of the component. For example:

{{< text syntax=bash snip_id=none >}}
$ helm install istiod istio/istiod --set hub=gcr.io/istio-testing
{{< /text >}}

becomes:

{{< text syntax=bash snip_id=none >}}
$ helm install istio-ambient istio/ambient --set istiod.hub=gcr.io/istio-testing
{{< /text >}}

when set via the wrapper chart.

To view supported configuration options and documentation for each sub-component, run:

{{< text syntax=bash >}}
$ helm show values istio/istiod
{{< /text >}}

for each component you are interested in.

Full details on how to use and customize Helm installations are available in [the sidecar installation documentation](/docs/setup/install/helm/).
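The same component prefixing applies when using a values file. As a sketch (the file name and the specific overrides are hypothetical; the assumption is that each top-level key names a composed sub-chart):

```yaml
# ambient-values.yaml (hypothetical): top-level keys name the composed
# sub-charts; nested keys are passed through as that chart's values.
istiod:
  hub: gcr.io/istio-testing   # equivalent to --set istiod.hub=gcr.io/istio-testing
ztunnel:
  resources:
    requests:
      cpu: 200m               # illustrative resource override
```

You would then install with `helm install istio-ambient istio/ambient -n istio-system --values ambient-values.yaml`.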
## Verify the installation

### Verify the workload status

After installing all the components, you can check the Helm deployment status with:

{{< text syntax=bash snip_id=show_components >}}
$ helm ls -n istio-system
NAME            NAMESPACE      REVISION   UPDATED                                   STATUS     CHART                                APP VERSION
istio-ambient   istio-system   1          2024-04-17 22:14:45.964722028 +0000 UTC   deployed   ambient-{{< istio_full_version >}}   {{< istio_full_version >}}
{{< /text >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/install/helm/all-in-one/index.md
master
istio
[ -0.020414818078279495, 0.04820117726922035, 0.040704481303691864, 0.0537075512111187, 0.05840558558702469, -0.08991720527410507, 0.025560714304447174, 0.0859416052699089, -0.051304709166288376, 0.0256042517721653, 0.02452911250293255, -0.10214509814977646, -0.011702402494847775, 0.01186158...
0.380227
1 2024-04-17 22:14:45.964722028 +0000 UTC deployed ambient-{{< istio\_full\_version >}} {{< istio\_full\_version >}} {{< /text >}} You can check the status of the deployed pods with: {{< text syntax=bash snip\_id=check\_pods >}} $ kubectl get pods -n istio-system NAME READY STATUS RESTARTS AGE istio-cni-node-g97z5 1/1 Running 0 10m istiod-5f4c75464f-gskxf 1/1 Running 0 10m ztunnel-c2z4s 1/1 Running 0 10m {{< /text >}} ### Verify with the sample application After installing ambient mode with Helm, you can follow the [Deploy the sample application](/docs/ambient/getting-started/deploy-sample-app/) guide to deploy the sample application and ingress gateways, and then you can [add your application to the ambient mesh](/docs/ambient/getting-started/secure-and-visualize/#add-bookinfo-to-the-mesh). ## Uninstall You can uninstall Istio and its components by uninstalling the chart installed above. 1. Uninstall all Istio components {{< text syntax=bash snip\_id=delete\_ambient\_aio >}} $ helm delete istio-ambient -n istio-system {{< /text >}} 1. (Optional) Delete any Istio gateway chart installations: {{< text syntax=bash snip\_id=delete\_ingress >}} $ helm delete istio-ingress -n istio-ingress $ kubectl delete namespace istio-ingress {{< /text >}} 1. Delete CRDs installed by Istio (optional) {{< warning >}} This will delete all created Istio resources. {{< /warning >}} {{< text syntax=bash snip\_id=delete\_crds >}} $ kubectl get crd -oname | grep --color=never 'istio.io' | xargs kubectl delete {{< /text >}} 1. Delete the `istio-system` namespace: {{< text syntax=bash snip\_id=delete\_system\_namespace >}} $ kubectl delete namespace istio-system {{< /text >}}
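If you script the cleanup, a quick check that no Istio CRDs survived can be useful. This is only a sketch: the heredoc stands in for live `kubectl get crd -oname` output (the cert-manager entry is an arbitrary non-Istio sample), and in a real cluster you would pipe the actual command in instead.

```shell
# Sketch: count Istio CRDs left after cleanup.
# The heredoc stands in for live `kubectl get crd -oname` output.
crds=$(cat <<'EOF'
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io
customresourcedefinition.apiextensions.k8s.io/wasmplugins.extensions.istio.io
EOF
)
remaining=$(echo "$crds" | grep -c 'istio\.io')
echo "istio CRDs remaining: $remaining"
```

A non-zero count after running the CRD deletion step indicates the `kubectl get crd ... | xargs kubectl delete` pipeline did not complete.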
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/install/helm/all-in-one/index.md
This guide describes some options for monitoring the ztunnel proxy configuration and datapath. This information can also help with some high level troubleshooting and in identifying information that would be useful to collect and provide in a bug report if there are any problems.

## Viewing ztunnel proxy state

The ztunnel proxy gets configuration and discovery information from the istiod {{< gloss >}}control plane{{< /gloss >}} via xDS APIs. The `istioctl ztunnel-config` command allows you to view discovered workloads as seen by a ztunnel proxy.

In the first example, you see all the workloads and control plane components that ztunnel is currently tracking, including information about the IP address and protocol to use when connecting to that component and whether there is a waypoint proxy associated with that workload.

{{< text bash >}}
$ istioctl ztunnel-config workloads
NAMESPACE          POD NAME                                  IP          NODE                  WAYPOINT PROTOCOL
default            bookinfo-gateway-istio-59dd7c96db-q9k6v   10.244.1.11 ambient-worker        None     TCP
default            details-v1-cf74bb974-5sqkp                10.244.1.5  ambient-worker        None     HBONE
default            productpage-v1-87d54dd59-fn6vw            10.244.1.10 ambient-worker        None     HBONE
default            ratings-v1-7c4bbf97db-zvkdw               10.244.1.6  ambient-worker        None     HBONE
default            reviews-v1-5fd6d4f8f8-knbht               10.244.1.16 ambient-worker        None     HBONE
default            reviews-v2-6f9b55c5db-c94m2               10.244.1.17 ambient-worker        None     HBONE
default            reviews-v3-7d99fd7978-7rgtd               10.244.1.18 ambient-worker        None     HBONE
default            curl-7656cf8794-r7zb9                     10.244.1.12 ambient-worker        None     HBONE
istio-system       istiod-7ff4959459-qcpvp                   10.244.2.5  ambient-worker2       None     TCP
istio-system       ztunnel-6hvcw                             10.244.1.4  ambient-worker        None     TCP
istio-system       ztunnel-mf476                             10.244.2.6  ambient-worker2       None     TCP
istio-system       ztunnel-vqzf9                             10.244.0.6  ambient-control-plane None     TCP
kube-system        coredns-76f75df574-2sms2                  10.244.0.3  ambient-control-plane None     TCP
kube-system        coredns-76f75df574-5bf9c                  10.244.0.2  ambient-control-plane None     TCP
local-path-storage local-path-provisioner-7577fdbbfb-pslg6   10.244.0.4  ambient-control-plane None     TCP
{{< /text >}}

The `ztunnel-config` command can be used to view the secrets holding the TLS certificates that the ztunnel proxy has received from the istiod control plane to use for mTLS.

{{< text bash >}}
$ istioctl ztunnel-config certificates "$ZTUNNEL".istio-system
CERTIFICATE NAME                                            TYPE STATUS    VALID CERT SERIAL NUMBER               NOT AFTER            NOT BEFORE
spiffe://cluster.local/ns/default/sa/bookinfo-details       Leaf Available true  c198d859ee51556d0eae13b331b0c259 2024-05-05T09:17:47Z 2024-05-04T09:15:47Z
spiffe://cluster.local/ns/default/sa/bookinfo-details       Root Available true  bad086c516cce777645363cb8d731277 2034-04-24T03:31:05Z 2024-04-26T03:31:05Z
spiffe://cluster.local/ns/default/sa/bookinfo-productpage   Leaf Available true  64c3828993c7df6f85a601a1615532cc 2024-05-05T09:17:47Z 2024-05-04T09:15:47Z
spiffe://cluster.local/ns/default/sa/bookinfo-productpage   Root Available true  bad086c516cce777645363cb8d731277 2034-04-24T03:31:05Z 2024-04-26T03:31:05Z
spiffe://cluster.local/ns/default/sa/bookinfo-ratings       Leaf Available true  720479815bf6d81a05df8a64f384ebb0 2024-05-05T09:17:47Z 2024-05-04T09:15:47Z
spiffe://cluster.local/ns/default/sa/bookinfo-ratings       Root Available true  bad086c516cce777645363cb8d731277 2034-04-24T03:31:05Z 2024-04-26T03:31:05Z
spiffe://cluster.local/ns/default/sa/bookinfo-reviews       Leaf Available true  285697fb2cf806852d3293298e300c86 2024-05-05T09:17:47Z 2024-05-04T09:15:47Z
spiffe://cluster.local/ns/default/sa/bookinfo-reviews       Root Available true  bad086c516cce777645363cb8d731277 2034-04-24T03:31:05Z 2024-04-26T03:31:05Z
spiffe://cluster.local/ns/default/sa/curl                   Leaf Available true  fa33bbb783553a1704866842586e4c0b 2024-05-05T09:25:49Z 2024-05-04T09:23:49Z
spiffe://cluster.local/ns/default/sa/curl                   Root Available true  bad086c516cce777645363cb8d731277 2034-04-24T03:31:05Z 2024-04-26T03:31:05Z
{{< /text >}}

Using these commands, you can check that ztunnel proxies are configured with all the expected workloads and TLS certificates. Additionally, any missing information can be useful for troubleshooting networking errors.

You may use the `all` option to view all parts of the ztunnel-config with a single CLI command:

{{< text bash >}}
$ istioctl ztunnel-config all -o json
{{< /text >}}

You can also view the raw configuration dump of a ztunnel proxy via a `curl` to an endpoint inside its pod:

{{< text bash >}}
$ kubectl debug -it $ZTUNNEL -n istio-system --image=curlimages/curl -- curl localhost:15000/config_dump
{{< /text >}}

## Viewing Istiod state for ztunnel xDS resources

Sometimes you may wish to view the state of ztunnel proxy config resources as maintained in the istiod control plane, in the format of the xDS API resources defined specially for ztunnel proxies. This can be done by exec-ing into the istiod pod and obtaining this information from port 15014 for a given ztunnel proxy, as shown in the example below. This output can then also be saved and viewed with a JSON pretty print formatter utility for easier browsing (not shown in the example).

{{< text bash >}}
$ export ISTIOD=$(kubectl get pods -n istio-system -l app=istiod -o=jsonpath='{.items[0].metadata.name}')
$ kubectl debug -it $ISTIOD -n istio-system --image=curlimages/curl -- curl localhost:15014/debug/config_dump?proxyID="$ZTUNNEL".istio-system
{{< /text >}}

## Verifying ztunnel traffic through logs

ztunnel's traffic logs can be queried using the standard Kubernetes log facilities.

{{< text bash >}}
$ kubectl -n default exec deploy/curl -- sh -c 'for i in $(seq 1 10); do curl -s -I http://productpage:9080/; done'
HTTP/1.1 200 OK
Server: Werkzeug/3.0.1 Python/3.12.1
--snip--
{{< /text >}}

The response displayed confirms the client pod receives responses from the service. You can now check the logs of the ztunnel pods to confirm the traffic was sent over the HBONE tunnel.

{{< text bash >}}
$ kubectl -n istio-system logs -l app=ztunnel | grep -E "inbound|outbound"
2024-05-04T09:59:05.028709Z info access connection complete src.addr=10.244.1.12:60059 src.workload="curl-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/curl" dst.addr=10.244.1.10:9080 dst.hbone_addr="10.244.1.10:9080" dst.service="productpage.default.svc.cluster.local" dst.workload="productpage-v1-87d54dd59-fn6vw" dst.namespace="productpage" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="inbound" bytes_sent=175 bytes_recv=80 duration="1ms"
2024-05-04T09:59:05.028771Z info access connection complete src.addr=10.244.1.12:58508 src.workload="curl-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/curl" dst.addr=10.244.1.10:15008 dst.hbone_addr="10.244.1.10:9080" dst.service="productpage.default.svc.cluster.local" dst.workload="productpage-v1-87d54dd59-fn6vw" dst.namespace="productpage" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="outbound" bytes_sent=80 bytes_recv=175 duration="1ms"
--snip--
{{< /text >}}
These log messages confirm the traffic was sent via the ztunnel proxy. Additional fine-grained monitoring can be done by checking logs on the specific ztunnel proxy instances that are on the same nodes as the source and destination pods of the traffic. If these logs are not seen, then a possibility is that [traffic redirection](/docs/ambient/architecture/traffic-redirection) may not be working correctly.

{{< tip >}}
Traffic always traverses the ztunnel pod, even when the source and destination of the traffic are on the same compute node.
{{< /tip >}}

### Verifying ztunnel load balancing

The ztunnel proxy automatically performs client-side load balancing if the destination is a service with multiple endpoints. No additional configuration is needed. The load balancing algorithm is an internally fixed L4 round robin algorithm that distributes traffic based on L4 connection state, and is not user configurable.

{{< tip >}}
If the destination is a service with multiple instances or pods and there is no waypoint associated with the destination service, then the source ztunnel performs L4 load balancing directly across these instances or service backends and then sends traffic via the remote ztunnel proxies associated with those backends. If the destination service is configured to use one or more waypoint proxies, then the source ztunnel proxy performs load balancing by distributing traffic across these waypoint proxies and sends traffic via the remote ztunnel proxies on the node hosting the waypoint proxy instances.
{{< /tip >}}

By calling a service with multiple backends, we can validate that client traffic is balanced across the service replicas.

{{< text bash >}}
$ kubectl -n default exec deploy/curl -- sh -c 'for i in $(seq 1 10); do curl -s -I http://reviews:9080/; done'
{{< /text >}}

{{< text bash >}}
$ kubectl -n istio-system logs -l app=ztunnel | grep -E "outbound"
--snip--
2024-05-04T10:11:04.964851Z info access connection complete src.addr=10.244.1.12:35520 src.workload="curl-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/curl" dst.addr=10.244.1.9:15008 dst.hbone_addr="10.244.1.9:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v3-7d99fd7978-zznnq" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
2024-05-04T10:11:04.969578Z info access connection complete src.addr=10.244.1.12:35526 src.workload="curl-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/curl" dst.addr=10.244.1.9:15008 dst.hbone_addr="10.244.1.9:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v3-7d99fd7978-zznnq" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
2024-05-04T10:11:04.974720Z info access connection complete src.addr=10.244.1.12:35536 src.workload="curl-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/curl" dst.addr=10.244.1.7:15008 dst.hbone_addr="10.244.1.7:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v1-5fd6d4f8f8-26j92" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
2024-05-04T10:11:04.979462Z info access connection complete src.addr=10.244.1.12:35552 src.workload="curl-7656cf8794-r7zb9" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/curl" dst.addr=10.244.1.8:15008 dst.hbone_addr="10.244.1.8:9080" dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-6f9b55c5db-c2dtw" dst.namespace="reviews" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=84 bytes_recv=169 duration="2ms"
{{< /text >}}

This is a round robin load balancing algorithm and is separate from and independent of any load balancing algorithm that may be configured within a `VirtualService`'s `TrafficPolicy` field, since, as discussed previously, all aspects of `VirtualService` API objects are instantiated on the waypoint proxies and not the ztunnel proxies.

### Observability of ambient mode traffic

In addition to checking ztunnel logs and other monitoring options noted above, you can also use normal Istio monitoring and telemetry functions to monitor application traffic using the ambient data plane mode.

* [Prometheus installation](/docs/ops/integrations/prometheus/#installation)
* [Kiali installation](/docs/ops/integrations/kiali/#installation)
* [Istio metrics](/docs/reference/config/metrics/)
* [Querying Metrics from Prometheus](/docs/tasks/observability/metrics/querying-metrics/)

If a service is only using the secure overlay provided by ztunnel, the Istio metrics reported will only be the L4 TCP metrics (namely `istio_tcp_sent_bytes_total`, `istio_tcp_received_bytes_total`, `istio_tcp_connections_opened_total`, `istio_tcp_connections_closed_total`). The full set of Istio and Envoy metrics will be reported if a waypoint proxy is used.
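Returning to the load balancing check above: the spread across replicas can be tallied mechanically rather than read by eye. This sketch runs the extraction over sample access log lines modeled on the output above; against a live cluster you would pipe `kubectl -n istio-system logs -l app=ztunnel` in instead.

```shell
# Sketch: tally outbound connections per dst.workload from ztunnel access logs.
# The heredoc stands in for live ztunnel log output (fields abbreviated).
logs=$(cat <<'EOF'
2024-05-04T10:11:04.964851Z info access connection complete src.workload="curl-7656cf8794-r7zb9" dst.workload="reviews-v3-7d99fd7978-zznnq" direction="outbound"
2024-05-04T10:11:04.969578Z info access connection complete src.workload="curl-7656cf8794-r7zb9" dst.workload="reviews-v3-7d99fd7978-zznnq" direction="outbound"
2024-05-04T10:11:04.974720Z info access connection complete src.workload="curl-7656cf8794-r7zb9" dst.workload="reviews-v1-5fd6d4f8f8-26j92" direction="outbound"
2024-05-04T10:11:04.979462Z info access connection complete src.workload="curl-7656cf8794-r7zb9" dst.workload="reviews-v2-6f9b55c5db-c2dtw" direction="outbound"
EOF
)
# Keep outbound entries, extract the destination workload name, count per name.
echo "$logs" | grep 'direction="outbound"' \
  | sed 's/.*dst\.workload="\([^"]*\)".*/\1/' \
  | sort | uniq -c | sort -rn
```

A roughly even count per `reviews` replica indicates the round robin distribution is working.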
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/usage/troubleshoot-ztunnel/index.md
{{< boilerplate alpha >}}

Istio provides the ability to [extend its functionality using WebAssembly (Wasm)](/docs/concepts/wasm/). One of the key advantages of Wasm extensibility is that extensions can be loaded dynamically at runtime. This document outlines how to extend ambient mode within Istio with Wasm features. In ambient mode, Wasm configuration must be applied to the waypoint proxy deployed in each namespace.

## Before you begin

1. Set up Istio by following the instructions in the [ambient mode Getting Started guide](/docs/ambient/getting-started).
1. Deploy the [Bookinfo sample application](/docs/ambient/getting-started/deploy-sample-app).
1. [Add the default namespace to the ambient mesh](/docs/ambient/getting-started/secure-and-visualize).
1. Deploy the [curl]({{< github_tree >}}/samples/curl) sample app to use as a test source for sending requests.

    {{< text syntax=bash >}}
    $ kubectl apply -f @samples/curl/curl.yaml@
    {{< /text >}}

## At a gateway

With the Kubernetes Gateway API, Istio provides a centralized entry point for managing traffic into the service mesh. We will configure a WasmPlugin at the gateway level, ensuring that all traffic passing through the gateway is subject to the extended authentication rules.

### Configure a WebAssembly plugin for a gateway

In this example, you will add an HTTP [Basic auth module](https://github.com/istio-ecosystem/wasm-extensions/tree/master/extensions/basic_auth) to your mesh. You will configure Istio to pull the Basic auth module from a remote image registry and load it. It will be configured to run on calls to `/productpage`. These steps are similar to those in [Distributing WebAssembly Modules](/docs/tasks/extensibility/wasm-module-distribution/), with the difference being the use of the `targetRefs` field instead of label selectors.

To configure a WebAssembly filter with a remote Wasm module, create a `WasmPlugin` resource targeting the `bookinfo-gateway`:

{{< text syntax=bash snip_id=get_gateway >}}
$ kubectl get gateway
NAME               CLASS   ADDRESS                                            PROGRAMMED   AGE
bookinfo-gateway   istio   bookinfo-gateway-istio.default.svc.cluster.local   True         42m
{{< /text >}}

{{< text syntax=bash snip_id=apply_wasmplugin_gateway >}} $ kubectl apply -f - <}}

An HTTP filter will be injected at the gateway as an authentication filter. The Istio agent will interpret the WasmPlugin configuration, download remote Wasm modules from the OCI image registry to a local file, and inject the HTTP filter at the gateway by referencing that file.

### Verify the traffic via the Gateway

1. Test `/productpage` without credentials:

    {{< text syntax=bash snip_id=test_gateway_productpage_without_credentials >}}
    $ kubectl exec deploy/curl -- curl -s -w "%{http_code}" -o /dev/null "http://bookinfo-gateway-istio.default.svc.cluster.local/productpage"
    401
    {{< /text >}}

1. Test `/productpage` with the credentials configured in the WasmPlugin resource:

    {{< text syntax=bash snip_id=test_gateway_productpage_with_credentials >}}
    $ kubectl exec deploy/curl -- curl -s -o /dev/null -H "Authorization: Basic YWRtaW4zOmFkbWluMw==" -w "%{http_code}" "http://bookinfo-gateway-istio.default.svc.cluster.local/productpage"
    200
    {{< /text >}}

## At a waypoint, for all services in a namespace

Waypoint proxies play a crucial role in Istio's ambient mode, facilitating secure and efficient communication within the service mesh. Below, we will explore how to apply Wasm configuration to the waypoint, enhancing the proxy functionality dynamically.

### Deploy a waypoint proxy

Follow the [waypoint deployment instructions](/docs/ambient/usage/waypoint/#deploy-a-waypoint-proxy) to deploy a waypoint proxy in the bookinfo namespace.
{{< text syntax=bash snip_id=create_waypoint >}}
$ istioctl waypoint apply --enroll-namespace --wait
{{< /text >}}

Verify traffic reaches the service:

{{< text syntax=bash snip_id=verify_traffic >}}
$ kubectl exec deploy/curl -- curl -s -w "%{http_code}" -o /dev/null http://productpage:9080/productpage
200
{{< /text >}}

### Configure a WebAssembly plugin for a waypoint

To configure a WebAssembly filter with a remote Wasm module, create a `WasmPlugin` resource targeting the `waypoint` gateway:

{{< text syntax=bash snip_id=get_gateway_waypoint >}}
$ kubectl get gateway
NAME               CLASS            ADDRESS                                            PROGRAMMED   AGE
bookinfo-gateway   istio            bookinfo-gateway-istio.default.svc.cluster.local   True         23h
waypoint           istio-waypoint   10.96.202.82                                       True         21h
{{< /text >}}

{{< text syntax=bash snip_id=apply_wasmplugin_waypoint_all >}} $ kubectl apply -f - <}}

### View the configured plugin

{{< text syntax=bash snip_id=get_wasmplugin >}}
$ kubectl get wasmplugin
NAME                     AGE
basic-auth-at-gateway    28m
basic-auth-at-waypoint   14m
{{< /text >}}

### Verify the traffic via the waypoint proxy

1. Test internal `/productpage` without credentials:

    {{< text syntax=bash snip_id=test_waypoint_productpage_without_credentials >}}
    $ kubectl exec deploy/curl -- curl -s -w "%{http_code}" -o /dev/null http://productpage:9080/productpage
    401
    {{< /text >}}

1. Test internal `/productpage` with credentials:

    {{< text syntax=bash snip_id=test_waypoint_productpage_with_credentials >}}
    $ kubectl exec deploy/curl -- curl -s -w "%{http_code}" -o /dev/null -H "Authorization: Basic YWRtaW4zOmFkbWluMw==" http://productpage:9080/productpage
    200
    {{< /text >}}

## At a waypoint, for a specific service

To configure a WebAssembly filter with a remote Wasm module for a specific service, create a `WasmPlugin` resource targeting that service directly. Create a `WasmPlugin` targeting the `reviews` service so that the extension applies only to the `reviews` service. In this configuration, the authentication token and the prefix are tailored specifically for the reviews service, ensuring that only requests directed towards it are subjected to this authentication mechanism.

{{< text syntax=bash snip_id=apply_wasmplugin_waypoint_service >}} $ kubectl apply -f - <}}

### Verify the traffic targeting the Service
1. Test the internal `/productpage` with the credentials configured at the generic `waypoint` proxy:

    {{< text syntax=bash snip_id=test_waypoint_service_productpage_with_credentials >}}
    $ kubectl exec deploy/curl -- curl -s -w "%{http_code}" -o /dev/null -H "Authorization: Basic YWRtaW4zOmFkbWluMw==" http://productpage:9080/productpage
    200
    {{< /text >}}

1. Test the internal `/reviews` with the credentials configured at the specific `reviews-svc-waypoint` proxy:

    {{< text syntax=bash snip_id=test_waypoint_service_reviews_with_credentials >}}
    $ kubectl exec deploy/curl -- curl -s -w "%{http_code}" -o /dev/null -H "Authorization: Basic MXQtaW4zOmFkbWluMw==" http://reviews:9080/reviews/1
    200
    {{< /text >}}

1. Test internal `/reviews` without credentials:

    {{< text syntax=bash snip_id=test_waypoint_service_reviews_without_credentials >}}
    $ kubectl exec deploy/curl -- curl -s -w "%{http_code}" -o /dev/null http://reviews:9080/reviews/1
    401
    {{< /text >}}

When executing the provided command without credentials, it verifies that accessing the internal `/reviews` endpoint results in a 401 unauthorized response, demonstrating the expected behavior of failing to access the resource without proper authentication credentials.

## Cleanup

1. Remove the WasmPlugin configuration:

    {{< text syntax=bash snip_id=remove_wasmplugin >}}
    $ kubectl delete wasmplugin basic-auth-at-gateway basic-auth-at-waypoint basic-auth-for-service
    {{< /text >}}

1. Follow [the ambient mode uninstall guide](/docs/ambient/getting-started/#uninstall) to remove Istio and sample test applications.
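For reference, the `Authorization` header values used in the tests above are ordinary HTTP Basic auth strings, i.e. `base64("user:password")`. You can reconstruct the namespace-wide credential like this (the `admin3:admin3` pair is the one assumed by the example plugin configuration):

```shell
# HTTP Basic auth encodes "user:password" as base64.
token=$(printf 'admin3:admin3' | base64)
echo "Authorization: Basic $token"
# prints: Authorization: Basic YWRtaW4zOmFkbWluMw==
```

Decoding works the same way in reverse (`echo YWRtaW4zOmFkbWluMw== | base64 -d`), which is handy when checking which credential a test request is actually sending.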
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/usage/extend-waypoint-wasm/index.md
Once you have added applications to an ambient mesh, you can easily validate that mTLS is enabled among your workloads using one or more of the methods below.

## Validate mTLS using workload's ztunnel configurations

Using the convenient `istioctl ztunnel-config workloads` command, you can view if your workload is configured to send and accept HBONE traffic via the value of the `PROTOCOL` column. For example:

{{< text syntax=bash >}}
$ istioctl ztunnel-config workloads
NAMESPACE   POD NAME                         IP           NODE                      WAYPOINT   PROTOCOL
default     details-v1-857849f66-ft8wx       10.42.0.5    k3d-k3s-default-agent-0   None       HBONE
default     kubernetes                       172.20.0.3                             None       TCP
default     productpage-v1-c5b7f7dbc-hlhpd   10.42.0.8    k3d-k3s-default-agent-0   None       HBONE
default     ratings-v1-68d5f5486b-b5sbj      10.42.0.6    k3d-k3s-default-agent-0   None       HBONE
default     reviews-v1-7dc5fc4b46-ndrq9      10.42.1.5    k3d-k3s-default-agent-1   None       HBONE
default     reviews-v2-6cf45d556b-4k4md      10.42.0.7    k3d-k3s-default-agent-0   None       HBONE
default     reviews-v3-86cb7d97f8-zxzl4      10.42.1.6    k3d-k3s-default-agent-1   None       HBONE
{{< /text >}}

Having HBONE configured on your workload doesn't mean your workload will reject any plaintext traffic. If you want your workload to reject plaintext traffic, create a `PeerAuthentication` policy with mTLS mode set to `STRICT` for your workload.

## Validate mTLS from metrics

If you have [installed Prometheus](/docs/ops/integrations/prometheus/#installation), you can set up port-forwarding and open the Prometheus UI by using the following command:

{{< text syntax=bash >}}
$ istioctl dashboard prometheus
{{< /text >}}

In Prometheus, you can view the values for the TCP metrics. First, select Graph and enter a metric such as `istio_tcp_connections_opened_total`, `istio_tcp_connections_closed_total`, `istio_tcp_received_bytes_total`, or `istio_tcp_sent_bytes_total`. Lastly, click Execute. The data will contain entries such as:

{{< text syntax=plain >}}
istio_tcp_connections_opened_total{
  app="ztunnel",
  connection_security_policy="mutual_tls",
  destination_principal="spiffe://cluster.local/ns/default/sa/bookinfo-details",
  destination_service="details.default.svc.cluster.local",
  reporter="source",
  request_protocol="tcp",
  response_flags="-",
  source_app="curl",
  source_principal="spiffe://cluster.local/ns/default/sa/curl",
  source_workload_namespace="default",
  ...}
{{< /text >}}

Validate that the `connection_security_policy` value is set to `mutual_tls` along with the expected source and destination identity information.

## Validate mTLS from logs

You can also view either the source or destination ztunnel log to confirm mTLS is enabled, along with the peer identities. Below is an example of the source ztunnel's log for a request from the `curl` service to the `details` service:

{{< text syntax=plain >}}
2024-08-21T15:32:05.754291Z info access connection complete src.addr=10.42.0.9:33772 src.workload="curl-7656cf8794-6lsm4" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/curl" dst.addr=10.42.0.5:15008 dst.hbone_addr=10.42.0.5:9080 dst.service="details.default.svc.cluster.local" dst.workload="details-v1-857849f66-ft8wx" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-details" direction="outbound" bytes_sent=84 bytes_recv=358 duration="15ms"
{{< /text >}}

Validate that the `src.identity` and `dst.identity` values are correct. They are the identities used for the mTLS communication between the source and destination workloads. Refer to the [verifying ztunnel traffic through logs section](/docs/ambient/usage/troubleshoot-ztunnel/#verifying-ztunnel-traffic-through-logs) for more details.

## Validate with Kiali dashboard

If you have Kiali and Prometheus installed, you can visualize your workload communication in the ambient mesh using Kiali's dashboard. You can see if the connection between any workloads has the padlock icon to validate that mTLS is enabled, along with the peer identity information:

{{< image link="./kiali-mtls.png" caption="Kiali dashboard" >}}

Refer to the [Visualize the application and metrics](/docs/ambient/getting-started/secure-and-visualize/#visualize-the-application-and-metrics) document for more details.

## Validate with `tcpdump`

If you have access to your Kubernetes worker nodes, you can run the `tcpdump` command to capture all traffic on the network interface, optionally focusing on the application ports and the HBONE port. In this example, port `9080` is the `details` service port and `15008` is the HBONE port:

{{< text syntax=bash >}}
$ tcpdump -nAi eth0 port 9080 or port 15008
{{< /text >}}

You should see encrypted traffic in the output of the `tcpdump` command. If you don't have access to the worker nodes, you may be able to use the [netshoot container image](https://hub.docker.com/r/nicolaka/netshoot) to easily run the command:

{{< text syntax=bash >}}
$ POD=$(kubectl get pods -l app=details -o jsonpath="{.items[0].metadata.name}")
$ kubectl debug $POD -i --image=nicolaka/netshoot -- tcpdump -nAi eth0 port 9080 or port 15008
{{< /text >}}
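Whichever method you choose, the metrics check ultimately reduces to inspecting one label. As a scripted variant of the Prometheus validation above, this sketch asserts `connection_security_policy` on a sample metric entry; in practice you would fetch the real series from the Prometheus HTTP API rather than hard-coding it.

```shell
# Sketch: verify a sample TCP metric entry reports mutual_tls.
# The metric line below is a hard-coded stand-in for a real Prometheus query result.
metric='istio_tcp_connections_opened_total{app="ztunnel",connection_security_policy="mutual_tls",source_app="curl",reporter="source"} 14'
if echo "$metric" | grep -q 'connection_security_policy="mutual_tls"'; then
  echo "mTLS confirmed"
else
  echo "NOT mTLS: check ztunnel status and PeerAuthentication configuration"
fi
# prints: mTLS confirmed
```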
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/usage/verify-mtls-enabled/index.md
In most cases, a cluster administrator will deploy the Istio mesh infrastructure. Once Istio is successfully deployed with support for the ambient {{< gloss >}}data plane{{< /gloss >}} mode, it will be transparently available to applications deployed by all users in namespaces that have been configured to use it. ## Enabling ambient mode for applications in the mesh To add applications or namespaces to the mesh in ambient mode, add the label `istio.io/dataplane-mode=ambient` to the corresponding resource. You can apply this label to a namespace or to an individual pod. Ambient mode can be seamlessly enabled (or disabled) completely transparently as far as the application pods are concerned. Unlike the {{< gloss >}}sidecar{{< /gloss >}} data plane mode, there is no need to restart applications to add them to the mesh, and they will not show as having an extra container deployed in their pod. ### Layer 4 and Layer 7 functionality The secure L4 overlay supports authentication and authorization policies. [Learn about L4 policy support in ambient mode](/docs/ambient/usage/l4-policy/). To opt-in to use Istio's L7 functionality, such as traffic routing, you will need to [deploy a waypoint proxy and enroll your workloads to use it](/docs/ambient/usage/waypoint/). ### Ambient and Kubernetes NetworkPolicy See [ambient and Kubernetes NetworkPolicy](/docs/ambient/usage/networkpolicy/). ## Communicating between pods in different data plane modes There are multiple options for interoperability between application pods using the ambient data plane mode, and non-ambient endpoints (including Kubernetes application pods, Istio gateways or Kubernetes Gateway API instances). This interoperability provides multiple options for seamlessly integrating ambient and non-ambient workloads within the same Istio mesh, allowing for phased introduction of ambient capability as best suits the needs of your mesh deployment and operation. 
### Pods outside the mesh

You may have namespaces which are not part of the mesh at all, in either sidecar or ambient mode. In this case, the non-mesh pods initiate traffic directly to the destination pods without going through the source node's ztunnel, while the destination pod's ztunnel enforces any L4 policy to control whether traffic should be allowed or denied. For example, setting a `PeerAuthentication` policy with mTLS mode set to `STRICT`, in a namespace with ambient mode enabled, will cause traffic from outside the mesh to be denied.

### Pods inside the mesh using sidecar mode

Istio supports East-West interoperability between a pod with a sidecar and a pod using ambient mode, within the same mesh. The sidecar proxy knows to use the HBONE protocol since the destination has been discovered to be an HBONE destination.

{{< tip >}}
For sidecar proxies to use the HBONE/mTLS signaling option when communicating with ambient destinations, they need to be configured with `ISTIO_META_ENABLE_HBONE` set to `true` in the proxy metadata. This is the default in `MeshConfig` when using the `ambient` profile, so you do not have to do anything else when using this profile.
{{< /tip >}}

A `PeerAuthentication` policy with mTLS mode set to `STRICT` will allow traffic from a pod with an Istio sidecar proxy.

### Ingress and egress gateways and ambient mode pods

An ingress gateway may run in a non-ambient namespace, and expose services provided by ambient mode, sidecar mode, or non-mesh pods. Interoperability is also supported between pods in ambient mode and Istio egress gateways.

## Pod selection logic for ambient and sidecar modes

Istio's two data plane modes, sidecar and ambient, can co-exist in the same cluster. It is important to ensure that the same pod or namespace does not get configured to use both modes at the same time. However, if this does occur, the sidecar mode currently takes precedence for such a pod or
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/usage/add-workloads/index.md
master
istio
namespace. Note that two pods within the same namespace could in theory be set to use different modes by labeling individual pods separately from the namespace label; however, this is not recommended. For most common use cases a single mode should be used for all pods within a single namespace.

The exact logic to determine whether a pod is set up to use ambient mode is as follows:

1. The `istio-cni` plugin configuration exclude list configured in `cni.values.excludeNamespaces` is used to skip namespaces in the exclude list.
1. `ambient` mode is used for a pod if:
    * The namespace or pod has the label `istio.io/dataplane-mode=ambient`
    * The pod does not have the opt-out label `istio.io/dataplane-mode=none`
    * The annotation `sidecar.istio.io/status` is not present on the pod

The simplest option to avoid a configuration conflict is to ensure that each namespace either has the label for sidecar injection (`istio-injection=enabled`) or for ambient mode (`istio.io/dataplane-mode=ambient`), but never both.

## Label reference {#ambient-labels}

The following labels control whether a resource is included in the mesh in ambient mode, whether a waypoint proxy is used to enforce L7 policy for your resource, and how traffic is sent to the waypoint.

| Name | Feature Status | Resource | Description |
| --- | --- | --- | --- |
| `istio.io/dataplane-mode` | Beta | `Namespace` or `Pod` (latter has precedence) | Add your resource to an ambient mesh. Valid values: `ambient` or `none`. |
| `istio.io/use-waypoint` | Beta | `Namespace`, `Service` or `Pod` | Use a waypoint for traffic to the labeled resource for L7 policy enforcement. Valid values: `{waypoint-name}` or `none`. |
| `istio.io/waypoint-for` | Alpha | `Gateway` | Specifies what types of endpoints the waypoint will process traffic for. Valid values: `service`, `workload`, `none` or `all`. This label is optional; the default value is `service`. |

In order for your `istio.io/use-waypoint` label value to be effective, you have to ensure the waypoint is configured for the resource types it will be handling traffic for. By default, waypoints accept traffic for services. For example, when you label a pod to use a specific waypoint via the `istio.io/use-waypoint` label, the waypoint should be labeled `istio.io/waypoint-for` with the value `workload` or `all`.
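To illustrate how the two labels interact, a sketch (the pod name `mypod` and waypoint name `workload-waypoint` are hypothetical): deploy a waypoint configured for workload traffic, then enroll the pod:

{{< text syntax=bash >}}
$ istioctl waypoint apply -n default --name workload-waypoint --for workload
waypoint default/workload-waypoint applied
$ kubectl label pod mypod -n default istio.io/use-waypoint=workload-waypoint
{{< /text >}}

Because this waypoint carries `istio.io/waypoint-for: workload`, traffic addressed directly to the pod's IP will transit it; a waypoint left at the default `service` setting would not be used for such traffic.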
By adding a waypoint proxy to your traffic flow you can enable more of [Istio's features](/docs/concepts). Waypoints are configured using the {{< gloss "gateway api" >}}Kubernetes Gateway API{{< /gloss >}}.

{{< warning >}}
Usage of `VirtualService` with the ambient data plane mode is considered Alpha. Mixing it with Gateway API configuration is not supported, and will lead to undefined behavior.
{{< /warning >}}

{{< warning >}}
`EnvoyFilter` is Istio's break-glass API for advanced configuration of Envoy proxies. Please note that *`EnvoyFilter` is not currently supported for any existing Istio version with waypoint proxies*. While it may be possible to use `EnvoyFilter` with waypoints in limited scenarios, its use is not supported, and is actively discouraged by the maintainers. The alpha API may break in future releases as it evolves. We expect official support will be provided at a later date.
{{< /warning >}}

## Route and policy attachment

The Gateway API defines the relationship between objects (such as routes and gateways) in terms of *attachment*.

* Route objects (such as [HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/)) include a way to reference the **parent** resources they want to attach to.
* Policy objects are considered [*metaresources*](https://gateway-api.sigs.k8s.io/geps/gep-713/): objects that augment the behavior of a **target** object in a standard way.

The tables below show the type of attachment that is configured for each object.
## Traffic routing

With a waypoint proxy deployed, you can use the following traffic route types:

| Name | Feature Status | Attachment |
| --- | --- | --- |
| [`HTTPRoute`](https://gateway-api.sigs.k8s.io/guides/http-routing/) | Beta | `parentRefs` |
| [`TLSRoute`](https://gateway-api.sigs.k8s.io/guides/tls) | Alpha | `parentRefs` |
| [`TCPRoute`](https://gateway-api.sigs.k8s.io/guides/tcp/) | Alpha | `parentRefs` |

Refer to the [traffic management](/docs/tasks/traffic-management/) documentation to see the range of features that can be implemented using these routes.

## Security

Without a waypoint installed, you can only use [Layer 4 security policies](/docs/ambient/usage/l4-policy/). By adding a waypoint, you gain access to the following policies:

| Name | Feature Status | Attachment |
| --- | --- | --- |
| [`AuthorizationPolicy`](/docs/reference/config/security/authorization-policy/) (including L7 features) | Beta | `targetRefs` |
| [`RequestAuthentication`](/docs/reference/config/security/request_authentication/) | Beta | `targetRefs` |

### Considerations for authorization policies {#considerations}

In ambient mode, authorization policies can either be *targeted* (for ztunnel enforcement) or *attached* (for waypoint enforcement). For an authorization policy to be attached to a waypoint it must have a `targetRef` which refers to the waypoint, or a Service which uses that waypoint.

The ztunnel cannot enforce L7 policies. If a policy with rules matching L7 attributes is targeted with a workload selector (rather than attached with a `targetRef`), such that it is enforced by a ztunnel, it will fail safe by becoming a `DENY` policy.

See [the L4 policy guide](/docs/ambient/usage/l4-policy/) for more information, including when to attach policies to waypoints for TCP-only use cases.

## Observability

The [full set of Istio traffic metrics](/docs/reference/config/metrics/) is exported by a waypoint proxy.
## Extension

As the waypoint proxy is a deployment of {{< gloss >}}Envoy{{< /gloss >}}, some of the extension mechanisms available for Envoy in {{< gloss "sidecar">}}sidecar mode{{< /gloss >}} are also available to waypoint proxies.

| Name | Feature Status | Attachment |
| --- | --- | --- |
| `WasmPlugin` † | Alpha | `targetRefs` |

† [Read more on how to extend waypoints with WebAssembly plugins](/docs/ambient/usage/extend-waypoint-wasm/).

Extension configurations are considered policy by the Gateway API definition.

## Scoping routes or policies

A route or policy can be scoped to apply to all traffic traversing a waypoint proxy, or only specific services.

### Attach to the entire waypoint proxy

To attach a route or a policy to the entire waypoint — so that it
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/usage/l7-features/index.md
applies to all traffic enrolled to use it — set `Gateway` as the `parentRefs` or `targetRefs` value, depending on the type.

To scope an `AuthorizationPolicy` policy to apply to the waypoint named `default` for the `default` namespace:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: view-only
  namespace: default
spec:
  targetRefs:
  - kind: Gateway
    group: gateway.networking.k8s.io
    name: default
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["default", "istio-system"]
    to:
    - operation:
        methods: ["GET"]
{{< /text >}}

### Attach to a specific service

You can also attach a route to one or more specific services within the waypoint. Set `Service` as the `parentRefs` or `targetRefs` value, as appropriate.

To apply the `reviews` HTTPRoute to the `reviews` service in the `default` namespace:

{{< text yaml >}}
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews
  namespace: default
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: reviews
    port: 9080
  rules:
  - backendRefs:
    - name: reviews-v1
      port: 9080
      weight: 90
    - name: reviews-v2
      port: 9080
      weight: 10
{{< /text >}}
The Layer 4 (L4) features of Istio's [security policies](/docs/concepts/security) are supported by {{< gloss >}}ztunnel{{< /gloss >}}, and are available in {{< gloss "ambient" >}}ambient mode{{< /gloss >}}. [Kubernetes Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) also continue to work if your cluster has a {{< gloss >}}CNI{{< /gloss >}} plugin that supports them, and can be used to provide defense-in-depth.

The layering of ztunnel and {{< gloss "waypoint" >}}waypoint proxies{{< /gloss >}} gives you a choice as to whether or not you want to enable Layer 7 (L7) processing for a given workload. To use L7 policies, and Istio's traffic routing features, you can [deploy a waypoint](/docs/ambient/usage/waypoint) for your workloads. Because policy can now be enforced in two places, there are [considerations](#considerations) that need to be understood.

## Policy enforcement using ztunnel

The ztunnel proxy can perform authorization policy enforcement when a workload is enrolled in {{< gloss "Secure L4 Overlay" >}}secure overlay mode{{< /gloss >}}. The enforcement point is the receiving (server-side) ztunnel proxy in the path of a connection.

A basic L4 authorization policy looks like this:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-curl-to-httpbin
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/ambient-demo/sa/curl
{{< /text >}}

This policy can be used in both {{< gloss "sidecar" >}}sidecar mode{{< /gloss >}} and ambient mode. The L4 (TCP) features of the Istio `AuthorizationPolicy` API have the same functional behavior in ambient mode as in sidecar mode. When there is no authorization policy provisioned, the default action is `ALLOW`. Once a policy is provisioned, pods targeted by the policy only permit traffic which is explicitly allowed.
In the above example, pods with the label `app: httpbin` only permit traffic from sources with an identity principal of `cluster.local/ns/ambient-demo/sa/curl`. Traffic from all other sources will be denied.

## Targeting policies

Sidecar mode and L4 policies in ambient are *targeted* in the same fashion: they are scoped by the namespace in which the policy object resides, and an optional `selector` in the `spec`. If the policy is in the Istio root namespace (traditionally `istio-system`), then it will target all namespaces. If it is in any other namespace, it will target only that namespace.

L7 policies in ambient mode are enforced by waypoints, which are configured with the {{< gloss "gateway api" >}}Kubernetes Gateway API{{< /gloss >}}. They are *attached* using the `targetRef` field.

## Allowed policy attributes

Authorization policy rules can contain [source](/docs/reference/config/security/authorization-policy/#Source) (`from`), [operation](/docs/reference/config/security/authorization-policy/#Operation) (`to`), and [condition](/docs/reference/config/security/authorization-policy/#Condition) (`when`) clauses. This list of attributes determines whether a policy is considered L4-only:

| Type | Attribute | Positive match | Negative match |
| --- | --- | --- | --- |
| Source | Peer identity | `principals` | `notPrincipals` |
| Source | Namespace | `namespaces` | `notNamespaces` |
| Source | IP block | `ipBlocks` | `notIpBlocks` |
| Operation | Destination port | `ports` | `notPorts` |
| Condition | Source IP | `source.ip` | n/a |
| Condition | Source namespace | `source.namespace` | n/a |
| Condition | Source identity | `source.principal` | n/a |
| Condition | Remote IP | `destination.ip` | n/a |
| Condition | Remote port | `destination.port` | n/a |

### Policies with Layer 7 conditions

The ztunnel cannot enforce L7 policies. If a policy with rules matching L7 attributes (i.e.
those not listed in the table above) is targeted such that it will be enforced by a receiving ztunnel, it will fail safe by becoming a `DENY` policy.

This example adds a check for the HTTP GET method:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-curl-to-httpbin
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/ambient-demo/sa/curl
    to:
    - operation:
        methods: ["GET"]
{{< /text >}}

Even if the identity of the client pod is correct, the presence of an L7 attribute causes the ztunnel to deny the connection:

{{< text plain >}}
command terminated with exit code 56
{{< /text >}}

## Choosing enforcement points when waypoints are introduced {#considerations}

When a waypoint proxy is added to a workload, you now have two possible places where you can enforce L4 policy. (L7 policy can only be enforced at the waypoint proxy.)

With only the secure overlay, traffic appears at the destination ztunnel with the identity of the *source* workload. Waypoint proxies do not impersonate the identity of the source workload. Once you have introduced a waypoint to the traffic path, the destination ztunnel will see traffic with the *waypoint's* identity, not the source identity.

This means that when you have a waypoint installed, **the ideal place to enforce policy shifts**. Even if you only wish to enforce policy against L4 attributes, if you are dependent on the source identity, you should attach your policy to your waypoint proxy. A second policy may be targeted at your workload to make its ztunnel enforce policies like "in-mesh traffic must come from my waypoint in order to reach my application".

## Peer authentication

Istio's [peer authentication policies](/docs/concepts/security/#peer-authentication), which configure mutual TLS (mTLS) modes, are supported by ztunnel.
The default policy for ambient mode is `PERMISSIVE`, which allows pods to accept both mTLS-encrypted traffic (from within the mesh) and plain text traffic (from outside it). Enabling `STRICT` mode means that pods will only accept mTLS-encrypted traffic.

As ztunnel and {{< gloss >}}HBONE{{< /gloss >}} imply the use of mTLS, it is not possible to use the `DISABLE` mode in a policy. Such policies will be ignored.
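As a minimal sketch, a mesh-wide `STRICT` policy — placed in the Istio root namespace (assumed here to be `istio-system`, per the targeting rules described earlier) so that it applies to all namespaces — could look like:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
{{< /text >}}

Placing the same resource in a workload namespace instead would scope the `STRICT` requirement to that namespace only.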
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/usage/l4-policy/index.md
A **waypoint proxy** is an optional deployment of the Envoy-based proxy to add Layer 7 (L7) processing to a defined set of workloads. Waypoint proxies are installed, upgraded and scaled independently from applications; an application owner should be unaware of their existence. Compared to the sidecar {{< gloss >}}data plane{{< /gloss >}} mode, which runs an instance of the Envoy proxy alongside each workload, the number of proxies required can be substantially reduced.

A waypoint, or set of waypoints, can be shared between applications with a similar security boundary. This might be all the instances of a particular workload, or all the workloads in a namespace.

As opposed to {{< gloss >}}sidecar{{< /gloss >}} mode, in ambient mode policies are enforced by the **destination** waypoint. In many ways, the waypoint acts as a gateway to a resource (a namespace, service or pod). Istio enforces that all traffic coming into the resource goes through the waypoint, which then enforces all policies for that resource.

## Do you need a waypoint proxy?

The layered approach of ambient allows users to adopt Istio in a more incremental fashion, smoothly transitioning from no mesh, to the secure L4 overlay, to full L7 processing. Most of the features of ambient mode are provided by the ztunnel node proxy. Ztunnel is scoped to only process traffic at Layer 4 (L4), so that it can safely operate as a shared component. When you configure redirection to a waypoint, traffic will be forwarded by ztunnel to that waypoint.
If your applications require any of the following L7 mesh functions, you will need to use a waypoint proxy:

* **Traffic management**: HTTP routing & load balancing, circuit breaking, rate limiting, fault injection, retries, timeouts
* **Security**: Rich authorization policies based on L7 primitives such as request type or HTTP header
* **Observability**: HTTP metrics, access logging, tracing

## Deploy a waypoint proxy

Waypoint proxies are deployed using Kubernetes Gateway resources.

{{< boilerplate gateway-api-install-crds >}}

You can use `istioctl waypoint` subcommands to generate, apply or list these resources. After the waypoint is deployed, the entire namespace (or whichever services or pods you choose) must be [enrolled](#useawaypoint) to use it.

Before you deploy a waypoint proxy for a specific namespace, confirm the namespace is labeled with `istio.io/dataplane-mode: ambient`:

{{< text syntax=bash snip_id=check_ns_label >}}
$ kubectl get ns -L istio.io/dataplane-mode
NAME              STATUS   AGE   DATAPLANE-MODE
istio-system      Active   24h
default           Active   24h   ambient
{{< /text >}}

`istioctl` can generate a Kubernetes Gateway resource for a waypoint proxy. For example, to generate a waypoint proxy named `waypoint` for the `default` namespace that can process traffic for services in the namespace:

{{< text syntax=bash snip_id=gen_waypoint_resource >}}
$ istioctl waypoint generate --for service -n default
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  labels:
    istio.io/waypoint-for: service
  name: waypoint
  namespace: default
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
{{< /text >}}

Note the Gateway resource has a `gatewayClassName` of `istio-waypoint`, which instantiates an Istio-managed waypoint. The Gateway resource is labeled with `istio.io/waypoint-for: service`, indicating the waypoint can process traffic for services, which is the default.
To deploy a waypoint proxy directly, use `apply` instead of `generate`:

{{< text syntax=bash snip_id=apply_waypoint >}}
$ istioctl waypoint apply -n default
waypoint default/waypoint applied
{{< /text >}}

Or, you can deploy the generated Gateway resource:

{{< text syntax=bash >}}
$ istioctl waypoint generate -n default | kubectl apply -f -
{{< /text >}}

After the Gateway resource is applied, Istiod will monitor the resource, and deploy and manage the corresponding waypoint deployment and service for users automatically.

### Waypoint traffic types

By default, a waypoint will only handle traffic destined for **services** in its namespaces. This choice was made because
https://github.com/istio/istio.io/blob/master//content/en/docs/ambient/usage/waypoint/index.md
traffic directed at a pod alone is rare, and often used for internal purposes such as Prometheus scraping, and the extra overhead of L7 processing may not be desired. It is also possible for the waypoint to handle all traffic, only handle traffic sent directly to **workloads** (pods or VMs) in the cluster, or no traffic at all. The types of traffic that will be redirected to the waypoint are determined by the `istio.io/waypoint-for` label on the `Gateway` object. Use the `--for` argument to `istioctl waypoint apply` to change the types of traffic that can be redirected to the waypoint:

| `waypoint-for` value | Original destination type |
| -------------------- | ------------------------- |
| `service` | Kubernetes services |
| `workload` | Pod IPs or VM IPs |
| `all` | Both service and workload traffic |
| `none` | No traffic (useful for testing) |

Waypoint selection occurs based on the destination type, `service` or `workload`, to which traffic was _originally addressed_. If traffic is addressed to a service which does not have a waypoint, a waypoint will not be transited, even if the workload it eventually reaches _does_ have an attached waypoint.

## Use a waypoint proxy {#useawaypoint}

When a waypoint proxy is deployed, it is not used by any resources until you explicitly configure those resources to use it. To enable a namespace, service or pod to use a waypoint, add the `istio.io/use-waypoint` label with a value of the waypoint name.

{{< tip >}}
Most users will want to apply a waypoint to an entire namespace, and we recommend you start with this approach.
{{< /tip >}}

If you use `istioctl` to deploy your namespace waypoint, you can use the `--enroll-namespace` parameter to automatically label a namespace:

{{< text syntax=bash snip_id=enroll_ns_waypoint >}}
$ istioctl waypoint apply -n default --enroll-namespace
waypoint default/waypoint applied
namespace default labeled with "istio.io/use-waypoint: waypoint"
{{< /text >}}

Alternatively, you may add the `istio.io/use-waypoint: waypoint` label to the `default` namespace using `kubectl`:

{{< text syntax=bash >}}
$ kubectl label ns default istio.io/use-waypoint=waypoint
namespace/default labeled
{{< /text >}}

After a namespace is enrolled to use a waypoint, any requests from any pods using the ambient data plane mode, to any service running in that namespace, will be routed through the waypoint for L7 processing and policy enforcement.

If you prefer more granularity than using a waypoint for an entire namespace, you can enroll only a specific service or pod to use a waypoint. This may be useful if you only need L7 features for some services in a namespace, if you only want an extension like a `WasmPlugin` to apply to a specific service, or if you are calling a Kubernetes [headless service](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) by its pod IP address.

{{< tip >}}
If the `istio.io/use-waypoint` label exists on both a namespace and a service, the service waypoint takes precedence over the namespace waypoint, as long as the service waypoint can handle `service` or `all` traffic. Similarly, a label on a pod will take precedence over a namespace label.
{{< /tip >}}

### Configure a service to use a specific waypoint

Using the services from the sample [bookinfo application](/docs/examples/bookinfo/), we can deploy a waypoint called `reviews-svc-waypoint` for the `reviews` service:

{{< text syntax=bash >}}
$ istioctl waypoint apply -n default --name reviews-svc-waypoint
waypoint default/reviews-svc-waypoint applied
{{< /text
>}}

Label the `reviews` service to use the `reviews-svc-waypoint` waypoint:

{{< text syntax=bash >}}
$ kubectl label service reviews istio.io/use-waypoint=reviews-svc-waypoint
service/reviews labeled
{{< /text >}}

Any requests from pods in the mesh to the `reviews` service will now be routed through the `reviews-svc-waypoint` waypoint.

### Configure a pod to use a specific waypoint

Deploy a waypoint called `reviews-v2-pod-waypoint` for the `reviews-v2` pod.

{{< tip >}}
Recall the default for waypoints is to target services; as we explicitly want to target a pod, we need to use the `istio.io/waypoint-for: workload` label, which we can generate by using the `--for workload` parameter to istioctl.
{{< /tip >}}

{{< text syntax=bash >}}
$ istioctl waypoint apply -n default --name reviews-v2-pod-waypoint --for workload
waypoint default/reviews-v2-pod-waypoint applied
{{< /text >}}

Label the `reviews-v2` pod to use the `reviews-v2-pod-waypoint` waypoint:

{{< text syntax=bash >}}
$ kubectl label pod -l version=v2,app=reviews istio.io/use-waypoint=reviews-v2-pod-waypoint
pod/reviews-v2-5b667bcbf8-spnnh labeled
{{< /text >}}

Any requests from pods in the ambient mesh to the `reviews-v2` pod IP will now be routed through the `reviews-v2-pod-waypoint` waypoint for L7 processing and policy enforcement.

{{< tip >}}
The original destination type of the traffic is used to determine whether a service or workload waypoint will be used. By using the original destination type, the ambient mesh avoids having traffic transit a waypoint twice, even if both the service and the workload have attached waypoints.
For instance, traffic which is addressed to a service, even though ultimately resolved to a pod IP, is always treated by the ambient mesh as to-service and would use a service-attached waypoint.
{{< /tip >}}

## Cross-namespace waypoint use {#usewaypointnamespace}

Straight out of the box, a waypoint proxy is usable by resources within the same namespace. Beginning with Istio 1.23, it is possible to use waypoints in different namespaces. In this section, we will examine the gateway configuration required to enable cross-namespace use and how to configure your resources to use a waypoint from a different namespace.

### Configure a waypoint for cross-namespace use

In order to enable cross-namespace use of a waypoint, the `Gateway` should be configured to [allow routes](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io%2fv1.AllowedRoutes) from other namespaces.

{{< tip >}}
The keyword `All` may be specified as the value for `allowedRoutes.namespaces.from` in order to allow routes from any namespace.
{{< /tip >}}

The following `Gateway` would allow resources in a namespace called `cross-namespace-waypoint-consumer` to use this `egress-gateway`:

{{< text syntax=yaml >}}
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: egress-gateway
  namespace: common-infrastructure
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            kubernetes.io/metadata.name: cross-namespace-waypoint-consumer
{{< /text >}}

### Configure resources to use a cross-namespace waypoint proxy

By default, the Istio control plane will look for a waypoint specified using the `istio.io/use-waypoint` label in the same namespace as the resource the label is applied to. It is possible to use a waypoint in another namespace by adding a new label, `istio.io/use-waypoint-namespace`.
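As a sketch of the `All` keyword mentioned in the tip above (using the same hypothetical `egress-gateway`), the listener's `allowedRoutes` stanza would simply become:

{{< text syntax=yaml >}}
    allowedRoutes:
      namespaces:
        from: All
{{< /text >}}

This opens the waypoint to every namespace, so prefer the `Selector` form when you want to bound which consumers may use shared infrastructure.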
`istio.io/use-waypoint-namespace` works for all resources which support the `istio.io/use-waypoint` label. Together, the two labels specify the name and namespace of your waypoint, respectively. For example, to configure a `ServiceEntry` named `istio-site` to use a waypoint named `egress-gateway` in the namespace named `common-infrastructure`, you could use the following commands:

{{< text syntax=bash >}}
$ kubectl label serviceentries.networking.istio.io istio-site istio.io/use-waypoint=egress-gateway
serviceentries.networking.istio.io/istio-site labeled
$ kubectl label serviceentries.networking.istio.io istio-site istio.io/use-waypoint-namespace=common-infrastructure
serviceentries.networking.istio.io/istio-site labeled
{{< /text >}}

### Cleaning
For example, to configure a `ServiceEntry` named `istio-site` to use a waypoint named `egress-gateway` in the namespace named `common-infrastructure`, you could use the following commands: {{< text syntax=bash >}} $ kubectl label serviceentries.networking.istio.io istio-site istio.io/use-waypoint=egress-gateway serviceentries.networking.istio.io/istio-site labeled $ kubectl label serviceentries.networking.istio.io istio-site istio.io/use-waypoint-namespace=common-infrastructure serviceentries.networking.istio.io/istio-site labeled {{< /text >}} ### Cleaning up You can remove all waypoints from a namespace by doing the following: {{< text syntax=bash snip\_id=delete\_waypoint >}} $ istioctl waypoint delete --all -n default $ kubectl label ns default istio.io/use-waypoint- {{< /text >}} {{< boilerplate gateway-api-remove-crds >}}
This guide describes what to do if you have enrolled a namespace, service or workload in a waypoint proxy, but you are not seeing the expected behavior. ## Problems with traffic routing or security policy To send some requests to the `reviews` service via the `productpage` service from the `curl` pod: {{< text bash >}} $ kubectl exec deploy/curl -- curl -s http://productpage:9080/productpage {{< /text >}} To send some requests to the `reviews` `v2` pod from the `curl` pod: {{< text bash >}} $ export REVIEWS\_V2\_POD\_IP=$(kubectl get pod -l version=v2,app=reviews -o jsonpath='{.items[0].status.podIP}') $ kubectl exec deploy/curl -- curl -s http://$REVIEWS\_V2\_POD\_IP:9080/reviews/1 {{< /text >}} Requests to the `reviews` service should be enforced by the `reviews-svc-waypoint` for any L7 policies. Requests to the `reviews` `v2` pod should be enforced by the `reviews-v2-pod-waypoint` for any L7 policies. 1. If your L7 configuration isn't applied, run `istioctl analyze` first to check if your configuration has a validation issue. {{< text bash >}} $ istioctl analyze ✔ No validation issues found when analyzing namespace: default. {{< /text >}} 1. Determine which waypoint is implementing the L7 configuration for your service or pod. If your source calls the destination using the service's hostname or IP, use the `istioctl ztunnel-config service` command to confirm your waypoint is used by the destination service. Following the example earlier, the `reviews` service should use the `reviews-svc-waypoint` while all other services in the `default` namespace should use the namespace `waypoint`.
{{< text bash >}} $ istioctl ztunnel-config service NAMESPACE SERVICE NAME SERVICE VIP WAYPOINT default bookinfo-gateway-istio 10.43.164.194 waypoint default bookinfo-gateway-istio 10.43.164.194 waypoint default bookinfo-gateway-istio 10.43.164.194 waypoint default bookinfo-gateway-istio 10.43.164.194 waypoint default details 10.43.160.119 waypoint default kubernetes 10.43.0.1 waypoint default productpage 10.43.172.254 waypoint default ratings 10.43.71.236 waypoint default reviews 10.43.162.105 reviews-svc-waypoint ... {{< /text >}} If your source calls the destination using a pod IP, use the `istioctl ztunnel-config workload` command to confirm your waypoint is used by the destination pod. Following the example earlier, the `reviews` `v2` pod should use the `reviews-v2-pod-waypoint` while all other pods in the `default` namespace should not have any waypoints, because by default [a waypoint only handles traffic addressed to services](/docs/ambient/usage/waypoint/#waypoint-traffic-types). {{< text bash >}} $ istioctl ztunnel-config workload NAMESPACE POD NAME IP NODE WAYPOINT PROTOCOL default bookinfo-gateway-istio-7c57fc4647-wjqvm 10.42.2.8 k3d-k3s-default-server-0 None TCP default details-v1-698d88b-wwsnv 10.42.2.4 k3d-k3s-default-server-0 None HBONE default productpage-v1-675fc69cf-fp65z 10.42.2.6 k3d-k3s-default-server-0 None HBONE default ratings-v1-6484c4d9bb-crjtt 10.42.0.4 k3d-k3s-default-agent-0 None HBONE default reviews-svc-waypoint-c49f9f569-b492t 10.42.2.10 k3d-k3s-default-server-0 None TCP default reviews-v1-5b5d6494f4-nrvfx 10.42.2.5 k3d-k3s-default-server-0 None HBONE default reviews-v2-5b667bcbf8-gj7nz 10.42.0.5 k3d-k3s-default-agent-0 reviews-v2-pod-waypoint HBONE ... {{< /text >}} If the value for the pod's waypoint column isn't correct, verify your pod is labeled with `istio.io/use-waypoint` and the label's value is the name of a waypoint that can process workload traffic. 
For example, if your `reviews` `v2` pod uses a waypoint that can only process service traffic, you will not see any waypoint used by that pod. If the `istio.io/use-waypoint` label on your pod looks correct, verify that the Gateway resource for your waypoint is labeled with a compatible value for `istio.io/waypoint-for`. In the case of a pod, suitable values would be `all` or `workload`. 1. Check the waypoint's proxy status via the `istioctl proxy-status` command. {{< text bash >}} $ istioctl proxy-status NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION bookinfo-gateway-istio-7c57fc4647-wjqvm.default Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-795d55fc6d-vqtjx 1.23-alpha.75c6eafc5bc8d160b5643c3ea18acb9785855564
reviews-svc-waypoint-c49f9f569-b492t.default Kubernetes SYNCED SYNCED SYNCED NOT SENT NOT SENT istiod-795d55fc6d-vqtjx 1.23-alpha.75c6eafc5bc8d160b5643c3ea18acb9785855564 reviews-v2-pod-waypoint-7f5dbd597-7zzw7.default Kubernetes SYNCED SYNCED NOT SENT NOT SENT NOT SENT istiod-795d55fc6d-vqtjx 1.23-alpha.75c6eafc5bc8d160b5643c3ea18acb9785855564 waypoint-6f7b665c89-6hppr.default Kubernetes SYNCED SYNCED SYNCED NOT SENT NOT SENT istiod-795d55fc6d-vqtjx 1.23-alpha.75c6eafc5bc8d160b5643c3ea18acb9785855564 ... {{< /text >}} 1. Enable Envoy's [access log](/docs/tasks/observability/logs/access-log/) and check the logs of the waypoint proxy after sending some requests: {{< text bash >}} $ kubectl logs deploy/waypoint {{< /text >}} If there is not enough information, you can enable the debug logs for the waypoint proxy: {{< text bash >}} $ istioctl pc log deploy/waypoint --level debug {{< /text >}} 1. Check the envoy configuration for the waypoint via the `istioctl proxy-config` command, which shows all the information related to the waypoint such as clusters, endpoints, listeners, routes and secrets: {{< text bash >}} $ istioctl proxy-config all deploy/waypoint {{< /text >}} Refer to the [deep dive into Envoy configuration](/docs/ops/diagnostic-tools/proxy-cmd/#deep-dive-into-envoy-configuration) section for more information regarding how to debug Envoy since waypoint proxies are based on Envoy.
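When a namespace has many pods, the `istioctl ztunnel-config workload` table shown earlier can also be checked mechanically. A rough sketch that flags pods with no waypoint, assuming the whitespace-separated column layout shown above (this parser is illustrative, not an istioctl feature):

```python
def pods_without_waypoint(output: str) -> list:
    """Given `istioctl ztunnel-config workload` output, return the names
    of pods whose WAYPOINT column is 'None'."""
    pods = []
    for line in output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        # Columns: NAMESPACE, POD NAME, IP, NODE, WAYPOINT, PROTOCOL
        if len(fields) >= 6 and fields[4] == "None":
            pods.append(fields[1])
    return pods
```

Running this over the sample output above would list every pod except `reviews-v2`, which is expected, since by default only that pod is waypoint-enabled.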
Kubernetes [NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) allows you to control how layer 4 traffic reaches your pods. `NetworkPolicy` is typically enforced by the {{< gloss >}}CNI{{< /gloss >}} installed in your cluster. Istio is not a CNI; it does not enforce or manage `NetworkPolicy`, and it respects it in all cases - ambient does not and will never bypass Kubernetes `NetworkPolicy` enforcement. An implication of this is that it is possible to create a Kubernetes `NetworkPolicy` that blocks Istio traffic or otherwise impedes Istio functionality, so when using `NetworkPolicy` and ambient together, there are some things to keep in mind. ## Ambient traffic overlay and Kubernetes NetworkPolicy Once you have added applications to the ambient mesh, ambient's secure L4 overlay will tunnel traffic between your pods over port 15008. Once secured traffic enters the target pod with a destination port of 15008, the traffic is proxied back to the original destination port. However, `NetworkPolicy` is enforced on the host, outside the pod. This means that if you have a preexisting `NetworkPolicy` that, for example, denies inbound traffic to an ambient pod on every port but 443, you will have to add an exception to that `NetworkPolicy` for port 15008. Sidecar workloads receiving traffic will also need to allow inbound traffic on port 15008, so that ambient workloads can communicate with them.
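To audit existing policies for this gap, you can check each ingress rule for the HBONE port. A sketch over policies already loaded as Python dicts (for example from `kubectl get networkpolicy -o json`); the helper name is invented for illustration:

```python
def ingress_allows_hbone(policy: dict) -> bool:
    """True if a NetworkPolicy dict admits inbound TCP traffic on 15008."""
    for rule in policy.get("spec", {}).get("ingress", []):
        if "ports" not in rule:
            return True  # a rule without a ports stanza allows every port
        for p in rule["ports"]:
            if p.get("port") == 15008 and p.get("protocol", "TCP") == "TCP":
                return True
    return False
```

A policy that only opens the application port would fail this check, which is the situation the example below illustrates.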
For example, the following `NetworkPolicy` will block incoming {{< gloss >}}HBONE{{< /gloss >}} traffic to `my-app` on port 15008: {{< text syntax=yaml snip\_id=none >}} apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: my-app-allow-ingress-web spec: podSelector: matchLabels: app.kubernetes.io/name: my-app ingress: - ports: - port: 8080 protocol: TCP {{< /text >}} and should be changed to {{< text syntax=yaml snip\_id=none >}} apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: my-app-allow-ingress-web spec: podSelector: matchLabels: app.kubernetes.io/name: my-app ingress: - ports: - port: 8080 protocol: TCP - port: 15008 protocol: TCP {{< /text >}} if `my-app` is added to the mesh. ## Ambient, health probes, and Kubernetes NetworkPolicy Kubernetes health check probes present a problem and create a special case for Kubernetes traffic policy in general. They originate from the kubelet running as a process on the node, and not some other pod in the cluster. They are plaintext and unsecured. Neither the kubelet or the Kubernetes node typically have their own cryptographic identity, so access control isn't possible. It's not enough to simply allow all traffic through on the health probe port, as malicious traffic could use that port just as easily as the kubelet could. In addition, many apps use the same port for health probes and legitimate application traffic, so simple port-based allows are unacceptable. Various CNI implementations solve this in different ways and seek to either work around the problem by silently excluding kubelet health probes from normal policy enforcement, or configuring policy exceptions for them. 
In Istio ambient, this problem is solved by using a combination of iptables rules and source network address translation (SNAT) to rewrite only packets that provably originate from the local node with a fixed link-local IP, so that they can be explicitly ignored by Istio policy enforcement as unsecured health probe traffic. A link-local IP was chosen as the default since they are typically ignored for ingress-egress controls, and [by IETF standard](https://datatracker.ietf.org/doc/html/rfc3927) are not routable outside of the local subnetwork. This behavior is transparently enabled when you add pods to the ambient mesh, and by default ambient uses the link-local addresses `169.254.7.127` (IPv4) and `fd16:9254:7127:1337:ffff:ffff:ffff:ffff` (IPv6) to identify and correctly allow kubelet health probe packets. Note: If your workload, namespace, or cluster enforces Kubernetes `NetworkPolicy`, you must allow both the IPv4 and IPv6 addresses used by ambient mode.
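Because the two addresses are fixed defaults, tooling that inspects traffic or policy can match them directly. A small sketch using Python's standard `ipaddress` module (the classifier function is illustrative):

```python
import ipaddress

# Default addresses ambient SNATs kubelet health probes to.
PROBE_SNAT_V4 = ipaddress.ip_address("169.254.7.127")
PROBE_SNAT_V6 = ipaddress.ip_address("fd16:9254:7127:1337:ffff:ffff:ffff:ffff")

def is_snatted_health_probe(src: str) -> bool:
    """Classify a packet source as a rewritten kubelet health probe."""
    return ipaddress.ip_address(src) in (PROBE_SNAT_V4, PROBE_SNAT_V6)
```

Note that `PROBE_SNAT_V4.is_link_local` is `True`, which reflects why the address was chosen: link-local addresses are not routable beyond the local subnetwork.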
Depending on your CNI, packets with these addresses may otherwise be blocked, which will cause application pod health probes to fail once they join the ambient mesh. For instance, applying the following `NetworkPolicy` in a namespace would block all traffic (Istio or otherwise) to the `my-app` pod, \*\*including\*\* kubelet health probes. Depending on your CNI, kubelet probes and link-local addresses may be ignored by this policy, or be blocked by it: {{< text syntax=yaml snip\_id=none >}} apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-ingress spec: podSelector: matchLabels: app.kubernetes.io/name: my-app policyTypes: - Ingress {{< /text >}} Once the pod is enrolled in the ambient mesh, health probe packets will begin to be assigned a link-local address via SNAT, which means health probes may begin to be blocked by your CNI's `NetworkPolicy` implementation. To allow ambient health probes to bypass `NetworkPolicy`, explicitly allow traffic from the host node to your pod by allow-listing the link-local addresses ambient uses for this traffic: {{< text syntax=yaml snip\_id=none >}} apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-ingress-allow-kubelet-healthprobes spec: podSelector: matchLabels: app.kubernetes.io/name: my-app ingress: - from: - ipBlock: cidr: 169.254.7.127/32 {{< /text >}} Note: If you are using a dual-stack cluster or an IPv6-only cluster, make sure to update your `NetworkPolicy` with the IPv6 ipBlock (`fd16:9254:7127:1337:ffff:ffff:ffff:ffff/128`) in addition to, or instead of, the IPv4 entry.
\*\*HBONE\*\* (or HTTP-Based Overlay Network Environment) is a secure tunneling protocol used between Istio components. HBONE is an Istio-specific term. It is a mechanism to transparently multiplex TCP streams related to many different application connections over a single, mTLS encrypted network connection: an encrypted tunnel. In its current implementation within Istio, the HBONE protocol composes three open standards: - [HTTP/2](https://httpwg.org/specs/rfc7540.html) - [HTTP CONNECT](https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/CONNECT) - [Mutual TLS (mTLS)](https://datatracker.ietf.org/doc/html/rfc8446) HTTP CONNECT is used to establish a tunnel connection, mTLS is used to secure and encrypt that connection, and HTTP/2 is used to multiplex application connection streams over that single secured and encrypted tunnel, and convey additional stream-level metadata. ## Security and tenancy As enforced by the mTLS spec, each underlying tunnel connection must have a unique source and unique destination identity, and those identities must be used to establish encryption for that connection. This means that application connections over the HBONE protocol to the same destination identity will be multiplexed across the same shared, encrypted and secured underlying HTTP/2 connection - in effect, each unique source and destination must get their own dedicated, secure tunnel connection, even if that underlying dedicated connection is handling multiple application-level connections. ## Implementation details By Istio convention, ztunnel and other proxies that understand the HBONE protocol expose listeners on TCP port 15008. 
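The tenancy rule can be modeled as a connection pool keyed by the (source identity, destination identity) pair: streams for the same pair share one tunnel, while any new pair gets its own. An illustrative Python sketch (class and method names are invented for exposition):

```python
class TunnelPool:
    """One shared mTLS tunnel per (source identity, destination identity)
    pair; many application streams multiplex over each tunnel."""

    def __init__(self):
        self._tunnels = {}  # (src_id, dst_id) -> list of stream ids

    def open_stream(self, src_id: str, dst_id: str, stream_id: str):
        # Reuses the existing tunnel for this identity pair, or creates one.
        self._tunnels.setdefault((src_id, dst_id), []).append(stream_id)

    def tunnel_count(self) -> int:
        return len(self._tunnels)
```

Opening two streams between the same pair of identities leaves `tunnel_count()` at 1; a stream to a different destination identity creates a second tunnel.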
As HBONE is merely a combination of HTTP/2, HTTP CONNECT, and mTLS, the HBONE tunnel packets that flow between HBONE-enabled proxies look like the following figure: {{< image width="100%" link="hbone-packet.svg" caption="HBONE L3 packet format" >}} An important property of the HBONE tunnel is that the original application request can be proxied transparently without altering the underlying application traffic stream in any way. This means metadata about a connection can be conveyed to destination proxies without altering the application request - removing the need to append Istio-specific headers to application traffic, for example. Additional use cases of HBONE and HTTP tunneling (such as UDP) will be investigated in the future as ambient mode and standards evolve.
In the context of ambient mode, \_traffic redirection\_ refers to data plane functionality that intercepts traffic sent to and from ambient-enabled workloads, routing it through the {{< gloss >}}ztunnel{{< /gloss >}} node proxies that handle the core data path. Sometimes the term \_traffic capture\_ is also used. As ztunnel aims to transparently encrypt and route application traffic, a mechanism is needed to capture all traffic entering and leaving "in mesh" pods. This is a security critical task: if the ztunnel can be bypassed, authorization policies can be bypassed. ## Istio's in-pod traffic redirection model The core design principle underlying ambient mode's in-pod traffic redirection is that the ztunnel proxy has the ability to perform data path capture inside the Linux network namespace of the workload pod. This is achieved via a cooperation of functionality between the [`istio-cni` node agent](/docs/setup/additional-setup/cni/) and the ztunnel node proxy. A key benefit of this model is that it enables Istio's ambient mode to work alongside any Kubernetes CNI plugin, transparently, and without impacting Kubernetes networking features. The following figure illustrates the sequence of events when a new workload pod is started in (or added to) an ambient-enabled namespace. {{< image width="100%" link="./pod-added-to-ambient.svg" alt="pod added to the ambient mesh flow" >}} The `istio-cni` node agent responds to CNI events such as pod creation and deletion, and also watches the underlying Kubernetes API server for events such as the ambient label being added to a pod or namespace. The `istio-cni` node agent additionally installs a chained CNI plugin that is executed by the container runtime after the primary CNI plugin within that Kubernetes cluster executes. 
Its only purpose is to notify the `istio-cni` node agent when a new pod is created by the container runtime in a namespace that is already enrolled in ambient mode, and propagate the new pod context to `istio-cni`. Once the `istio-cni` node agent is notified that a pod needs to be added to the mesh (either from the CNI plugin, if the pod is brand new, or from the Kubernetes API server, if the pod is already running but needs to be added), the following sequence of operations is performed: - `istio-cni` enters the pod’s network namespace and establishes network redirection rules, such that packets entering and leaving the pod are intercepted and transparently redirected to the node-local ztunnel proxy instance listening on [well-known ports](https://github.com/istio/ztunnel/blob/master/ARCHITECTURE.md#ports) (15008, 15006, 15001). - The `istio-cni` node agent then informs the ztunnel proxy, over a Unix domain socket, that it should establish local proxy listening ports inside the pod’s network namespace (on ports 15008, 15006, and 15001), and provides ztunnel with a low-level Linux [file descriptor](https://en.wikipedia.org/wiki/File\_descriptor) representing the pod’s network namespace. - While typically sockets are created within a Linux network namespace by the process actually running inside that network namespace, it is perfectly possible to leverage Linux’s low-level socket API to allow a process running in one network namespace to create listening sockets in another network namespace, assuming the target network namespace is known at creation time. - The node-local ztunnel internally spins up a new logical proxy instance and listen port set, dedicated to the newly-added pod. Note that this is still running within the same process, and is merely a dedicated task for the pod. - Once the in-pod redirect rules are in place and the ztunnel has established the listen ports, the pod is added in the mesh and traffic begins flowing through the node-local ztunnel. 
Traffic to and from pods in the mesh will be fully encrypted with mTLS by default. Data will now enter and leave the pod network namespace encrypted.
Every pod in the mesh has the ability to enforce mesh policy and securely encrypt traffic, even though the user application running in the pod has no awareness of either. This diagram illustrates how encrypted traffic flows between pods in the ambient mesh in the new model: {{< image width="100%" link="./traffic-flows-between-pods-in-ambient.svg" alt="HBONE traffic flows between pods in the ambient mesh" >}} ## Observing and debugging traffic redirection in ambient mode If traffic redirection is not working correctly in ambient mode, some quick checks can be made to help narrow down the problem. We recommend that troubleshooting begin with the steps described in the [ztunnel debugging guide](/docs/ambient/usage/troubleshoot-ztunnel/). ### Check the ztunnel proxy logs When an application pod is part of an ambient mesh, you can check the ztunnel proxy logs to confirm the mesh is redirecting traffic. As shown in the example below, the ztunnel logs related to `inpod` indicate that in-pod redirection mode is enabled, the proxy has received the network namespace (netns) information about an ambient application pod, and has started proxying for it.
{{< text bash >}} $ kubectl logs ds/ztunnel -n istio-system | grep inpod Found 3 pods, using pod/ztunnel-hl94n inpod\_enabled: true inpod\_uds: /var/run/ztunnel/ztunnel.sock inpod\_port\_reuse: true inpod\_mark: 1337 2024-02-21T22:01:49.916037Z INFO ztunnel::inpod::workloadmanager: handling new stream 2024-02-21T22:01:49.919944Z INFO ztunnel::inpod::statemanager: pod WorkloadUid("1e054806-e667-4109-a5af-08b3e6ba0c42") received netns, starting proxy 2024-02-21T22:01:49.925997Z INFO ztunnel::inpod::statemanager: pod received snapshot sent 2024-02-21T22:03:49.074281Z INFO ztunnel::inpod::statemanager: pod delete request, draining proxy 2024-02-21T22:04:58.446444Z INFO ztunnel::inpod::statemanager: pod WorkloadUid("1e054806-e667-4109-a5af-08b3e6ba0c42") received netns, starting proxy {{< /text >}} ### Confirm the state of sockets Follow the steps below to confirm that the sockets on ports 15001, 15006, and 15008 are open and in the listening state. {{< text bash >}} $ kubectl debug $(kubectl get pod -l app=curl -n ambient-demo -o jsonpath='{.items[0].metadata.name}') -it -n ambient-demo --image nicolaka/netshoot -- ss -ntlp Defaulting debug container name to debugger-nhd4d. State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess LISTEN 0 128 127.0.0.1:15080 0.0.0.0:\* LISTEN 0 128 \*:15006 \*:\* LISTEN 0 128 \*:15001 \*:\* LISTEN 0 128 \*:15008 \*:\* {{< /text >}} ### Check the iptables rules setup To view the iptables rules setup inside one of the application pods, execute this command: {{< text bash >}} $ kubectl debug $(kubectl get pod -l app=curl -n ambient-demo -o jsonpath='{.items[0].metadata.name}') -it --image gcr.io/istio-release/base --profile=netadmin -n ambient-demo -- iptables-save Defaulting debug container name to debugger-m44qc. 
# Generated by iptables-save \*mangle :PREROUTING ACCEPT [320:53261] :INPUT ACCEPT [23753:267657744] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [23352:134432712] :POSTROUTING ACCEPT [23352:134432712] :ISTIO\_OUTPUT - [0:0] :ISTIO\_PRERT - [0:0] -A PREROUTING -j ISTIO\_PRERT -A OUTPUT -j ISTIO\_OUTPUT -A ISTIO\_OUTPUT -m connmark --mark 0x111/0xfff -j CONNMARK --restore-mark --nfmask 0xffffffff --ctmask 0xffffffff -A ISTIO\_PRERT -m mark --mark 0x539/0xfff -j CONNMARK --set-xmark 0x111/0xfff -A ISTIO\_PRERT -s 169.254.7.127/32 -p tcp -m tcp -j ACCEPT -A ISTIO\_PRERT ! -d 127.0.0.1/32 -i lo -p tcp -j ACCEPT -A ISTIO\_PRERT -p tcp -m tcp --dport 15008 -m mark ! --mark 0x539/0xfff -j TPROXY --on-port 15008 --on-ip 0.0.0.0 --tproxy-mark 0x111/0xfff -A ISTIO\_PRERT -p tcp -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A ISTIO\_PRERT ! -d 127.0.0.1/32 -p tcp -m mark
! --mark 0x539/0xfff -j TPROXY --on-port 15006 --on-ip 0.0.0.0 --tproxy-mark 0x111/0xfff COMMIT # Completed # Generated by iptables-save \*nat :PREROUTING ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [175:13694] :POSTROUTING ACCEPT [205:15494] :ISTIO\_OUTPUT - [0:0] -A OUTPUT -j ISTIO\_OUTPUT -A ISTIO\_OUTPUT -d 169.254.7.127/32 -p tcp -m tcp -j ACCEPT -A ISTIO\_OUTPUT -p tcp -m mark --mark 0x111/0xfff -j ACCEPT -A ISTIO\_OUTPUT ! -d 127.0.0.1/32 -o lo -j ACCEPT -A ISTIO\_OUTPUT ! -d 127.0.0.1/32 -p tcp -m mark ! --mark 0x539/0xfff -j REDIRECT --to-ports 15001 COMMIT {{< /text >}} The command output shows that additional Istio-specific chains are added to the NAT and Mangle tables in netfilter/iptables within the application pod's network namespace. All TCP traffic coming into the pod is redirected to the ztunnel proxy for ingress processing. If the traffic is plaintext (destination port != 15008), it will be redirected to the in-pod ztunnel plaintext listening port 15006. If the traffic is HBONE (destination port == 15008), it will be redirected to the in-pod ztunnel HBONE listening port 15008. Any TCP traffic leaving the pod is redirected to ztunnel's port 15001 for egress processing, before being sent out by ztunnel using HBONE encapsulation.
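The decision those chains encode can be restated compactly. A Python paraphrase of the redirect logic (illustrative; the actual enforcement is the TPROXY/REDIRECT rules above, and 0x539 is the decimal 1337 packet mark ztunnel applies to its own traffic):

```python
ZTUNNEL_MARK = 0x539  # 1337: traffic originated by ztunnel itself

def inbound_redirect_port(dst_port: int, mark: int = 0):
    """Where the in-pod rules send an incoming TCP packet."""
    if mark == ZTUNNEL_MARK:
        return None      # ztunnel's own traffic is not re-captured
    if dst_port == 15008:
        return 15008     # HBONE -> ztunnel's in-pod HBONE listener
    return 15006         # plaintext -> ztunnel's plaintext listener

def outbound_redirect_port(mark: int = 0):
    """Where the in-pod rules send an outgoing TCP packet."""
    return None if mark == ZTUNNEL_MARK else 15001  # egress processing
```

This mirrors the summary above: plaintext inbound traffic lands on 15006, HBONE inbound traffic on 15008, and all other outbound TCP traffic on 15001.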
In {{< gloss "ambient" >}}ambient mode{{< /gloss >}}, workloads can fall into 3 categories: 1. \*\*Out of Mesh\*\*: a standard pod without any mesh features enabled. Istio and the ambient {{< gloss >}}data plane{{< /gloss >}} are not enabled. 1. \*\*In Mesh\*\*: a pod that is included in the ambient {{< gloss >}}data plane{{< /gloss >}}, and has traffic intercepted at the Layer 4 level by {{< gloss >}}ztunnel{{< /gloss >}}. In this mode, L4 policies can be enforced for pod traffic. This mode can be enabled by setting the `istio.io/dataplane-mode=ambient` label. See [labels](/docs/ambient/usage/add-workloads/#ambient-labels) for more details. 1. \*\*In Mesh, Waypoint enabled\*\*: a pod that is \_in mesh\_ \*and\* has a {{< gloss "waypoint" >}}waypoint proxy{{< /gloss >}} deployed. In this mode, L7 policies can be enforced for pod traffic. This mode can be enabled by setting the `istio.io/use-waypoint` label. See [labels](/docs/ambient/usage/add-workloads/#ambient-labels) for more details. Depending on which category a workload is in, the traffic path will be different. ## In Mesh Routing ### Outbound When a pod in an ambient mesh makes an outbound request, it will be [transparently redirected](/docs/ambient/architecture/traffic-redirection) to the node-local ztunnel which will determine where and how to forward the request. In general, the traffic routing behaves just like Kubernetes default traffic routing; requests to a `Service` will be sent to an endpoint within the `Service` while requests directly to a `Pod` IP will go directly to that IP. However, depending on the destination's capabilities, different behavior will occur. If the destination is also added in the mesh, or otherwise has Istio proxy capabilities (such as a sidecar), the request will be upgraded to an encrypted {{< gloss "HBONE" >}}HBONE tunnel{{< /gloss >}}. 
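The outbound selection described here boils down to a three-way decision. Sketched as a Python function (the attribute names and return labels are invented for illustration):

```python
def outbound_path(dst_in_mesh: bool, dst_has_waypoint: bool) -> str:
    """How a source ztunnel forwards an outbound request."""
    if dst_has_waypoint:
        # L7 policy applies: HBONE to the destination's waypoint proxy.
        return "hbone-via-waypoint"
    if dst_in_mesh:
        # L4 only: HBONE tunnel straight to the destination's ztunnel.
        return "hbone-direct"
    # No Istio proxy capability at the destination: plain TCP.
    return "plaintext"
```

The waypoint check comes first because a waypoint-enabled destination must have L7 policy enforced before traffic reaches it.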
If the destination has a waypoint proxy, in addition to being upgraded to HBONE, the request will be forwarded to that waypoint for L7 policy enforcement. Note that in the case of a request to a `Service`, if the service \*has\* a waypoint, the request will be sent to its waypoint to apply L7 policies to the traffic. Similarly, in the case of a request to a `Pod` IP, if the pod \*has\* a waypoint, the request will be sent to its waypoint to apply L7 policies to the traffic. Since it is possible to vary the labels associated with pods in a `Deployment`, it is technically possible for some pods to use a waypoint while others do not. Users are generally recommended to avoid this advanced use case. ### Inbound When a pod in an ambient mesh receives an inbound request, it will be [transparently redirected](/docs/ambient/architecture/traffic-redirection) to the node-local ztunnel. When ztunnel receives the request, it will apply Authorization Policies and forward the request only if the request passes these checks. A pod can receive HBONE traffic or plaintext traffic. By default, both will be accepted by ztunnel. Requests from sources out of mesh will have no peer identity when Authorization Policies are evaluated; a user can set a policy requiring an identity (either \*any\* identity, or a specific one) to block all plaintext traffic. When the destination is waypoint enabled, if the source is in the ambient mesh, the source's ztunnel ensures the request \*\*will\*\* go through the waypoint where policy is enforced. However, a workload outside of the mesh doesn't know anything about waypoint proxies, therefore it sends requests directly to
L4 TCP traffic between workloads in the ambient mesh is secured by the data plane, using mTLS via {{< gloss >}}HBONE{{< /gloss >}}, ztunnel, and x509 certificates. As enforced by {{< gloss "mutual tls authentication" >}}mTLS{{< /gloss >}}, the source and destination must have unique x509 identities, and those identities must be used to establish the encrypted channel for that connection. This requires ztunnel to manage multiple distinct workload certificates, on behalf of the proxied workloads - one for each unique identity (service account) for every node-local pod. Ztunnel's own identity is never used for mTLS connections between workloads. When fetching certificates, ztunnel will authenticate to the CA with its own identity, but request the identity of another workload. Critically, the CA must enforce that the ztunnel has permission to request that identity. Requests for identities not running on the node are rejected. This is critical to ensure that a compromised node does not compromise the entire mesh. This CA enforcement is done by Istio's CA using a Kubernetes Service Account JWT token, which encodes the pod information. This enforcement is also a requirement for any alternative CAs integrating with ztunnel. Ztunnel will request certificates for all identities on the node. It determines this based on the {{< gloss >}}control plane{{< /gloss >}} configuration it receives. When a new identity is discovered on the node, it will be queued for fetching at a low priority, as an optimization. However, if a request needs a certain identity that has not been fetched yet, it will be immediately requested.
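Concretely, each identity that ztunnel fetches is a SPIFFE ID derived from the mesh trust domain, the pod's namespace, and its service account. As a sketch (trust domain, namespace, and service account names are illustrative):

```text
spiffe://cluster.local/ns/default/sa/bookinfo-productpage
```

Because the identity is keyed on the service account, pods sharing a service account share a workload identity, while pods in different namespaces never do.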
Ztunnel will additionally handle the rotation of these certificates as they approach expiry. ##### Telemetry Ztunnel emits the full set of [Istio Standard TCP Metrics](/docs/reference/config/metrics/). ##### Dataplane example for Layer 4 traffic The L4 ambient datapath is depicted in the following figure. {{< image width="100%" link="ztunnel-datapath-1.png" caption="Basic ztunnel L4-only datapath" >}} The figure shows several workloads added to the ambient mesh, running on nodes W1 and W2 of a Kubernetes cluster. There is a single instance of the ztunnel proxy on each node. In this scenario, application client pods C1, C2 and C3 need to access a service provided by pod S1. There is no requirement for advanced L7 features such as L7 traffic routing or L7 traffic management, so an L4 data plane is sufficient to obtain {{< gloss "mutual tls authentication" >}}mTLS{{< /gloss >}} and L4 policy enforcement - no waypoint proxy is required. The figure shows that pods C1 and C2, running on node W1, connect with pod S1 running on node W2. The TCP traffic for C1 and C2 is securely tunneled via ztunnel-created {{< gloss >}}HBONE{{< /gloss >}} connections. {{< gloss "mutual tls authentication" >}}Mutual TLS (mTLS){{< /gloss >}} is used for encryption as well as mutual authentication of traffic being tunneled. [SPIFFE](https://github.com/spiffe/spiffe/blob/main/standards/SPIFFE.md) identities are used to identify the workloads on each side of the connection. For more details on the tunneling protocol and traffic redirection mechanism, refer to the guides on [HBONE](/docs/ambient/architecture/hbone) and [ztunnel traffic redirection](/docs/ambient/architecture/traffic-redirection). {{< tip >}} Note: Although the figure shows the HBONE tunnels to be between the two ztunnel proxies, the tunnels are in fact between the source and destination pods.
Traffic is HBONE encapsulated and encrypted in the network namespace of the source pod itself, and eventually decapsulated and decrypted in
the network namespace of the destination pod on the destination worker node. The ztunnel proxy still logically handles both the control plane and data plane needed for HBONE transport; however, it is able to do that from inside the network namespaces of the source and destination pods. {{< /tip >}} Note that local traffic - shown in the figure from pod C3 to destination pod S1 on worker node W2 - also traverses the local ztunnel proxy instance, so that L4 traffic management functions such as L4 Authorization and L4 Telemetry will be enforced identically on traffic, whether or not it crosses a node boundary. ## In Mesh Routing with Waypoint Enabled Istio waypoints exclusively receive HBONE traffic. Upon receiving a request, the waypoint will ensure that the traffic is for a `Pod` or `Service` which uses it. Having accepted the traffic, the waypoint will then enforce L7 policies (such as `AuthorizationPolicy`, `RequestAuthentication`, `WasmPlugin`, `Telemetry`, etc.) before forwarding. For direct requests to a `Pod`, the requests are simply forwarded directly after policy is applied. For requests to a `Service`, the waypoint will also apply routing and load balancing. By default, a `Service` will simply route to itself, performing L7 load balancing across its endpoints. This can be overridden with Routes for that `Service`.
For example, the below policy will ensure that requests to the `echo` service are forwarded to `echo-v1`: {{< text yaml >}} apiVersion: gateway.networking.k8s.io/v1 kind: HTTPRoute metadata: name: echo spec: parentRefs: - group: "" kind: Service name: echo rules: - backendRefs: - name: echo-v1 port: 80 {{< /text >}} The following figure shows the datapath between ztunnel and a waypoint, if one is configured for L7 policy enforcement. Here ztunnel uses HBONE tunneling to send traffic to a waypoint proxy for L7 processing. After processing, the waypoint sends traffic via a second HBONE tunnel to the ztunnel on the node hosting the selected service destination pod. In general the waypoint proxy may or may not be located on the same nodes as the source or destination pod. {{< image width="100%" link="ztunnel-waypoint-datapath.png" caption="Ztunnel datapath via an interim waypoint" >}}
Like all Istio {{< gloss >}}data plane{{< /gloss >}} modes, ambient mode uses the Istio {{< gloss >}}control plane{{< /gloss >}}. In ambient mode, the control plane communicates with the {{< gloss >}}ztunnel{{< /gloss >}} proxy on each Kubernetes node. The figure shows an overview of the control plane related components and flows between the ztunnel proxy and the `istiod` control plane. {{< image width="100%" link="ztunnel-architecture.svg" caption="Ztunnel architecture" >}} The ztunnel proxy uses xDS APIs to communicate with the Istio control plane (`istiod`). This enables the fast, dynamic configuration updates required in modern distributed systems. The ztunnel proxy also obtains {{< gloss "mutual tls authentication" >}}mTLS{{< /gloss >}} certificates for the Service Accounts of all pods that are scheduled on its Kubernetes node using xDS. A single ztunnel proxy may implement L4 data plane functionality on behalf of any pod sharing its node, which requires efficiently obtaining relevant configuration and certificates. This multi-tenant architecture contrasts sharply with the sidecar model, where each application pod has its own proxy. It is also worth noting that in ambient mode, a simplified set of resources is used in the xDS APIs for ztunnel proxy configuration. This results in improved performance (having to transmit and process a much smaller set of information that is sent from istiod to the ztunnel proxies) and improved troubleshooting.
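If you want to inspect the configuration a ztunnel proxy has received over xDS, recent versions of `istioctl` ship a `ztunnel-config` command group. The subcommands below are an assumption based on current releases — confirm with `istioctl ztunnel-config --help` on your version:

```bash
$ istioctl ztunnel-config workload      # workloads this ztunnel knows about
$ istioctl ztunnel-config certificate   # certificates held for node-local identities
```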
This guide lets you quickly evaluate Istio's {{< gloss "ambient" >}}ambient mode{{< /gloss >}}. You'll need a Kubernetes cluster to proceed. If you don't have a cluster, you can use [kind](/docs/setup/platform-setup/kind) or any other [supported Kubernetes platform](/docs/setup/platform-setup). These steps require you to have a {{< gloss >}}cluster{{< /gloss >}} running a [supported version](/docs/releases/supported-releases#support-status-of-istio-releases) of Kubernetes ({{< supported\_kubernetes\_versions >}}). ## Download the Istio CLI Istio is configured using a command line tool called `istioctl`. Download it, along with the Istio sample applications: {{< text syntax=bash snip\_id=none >}} $ curl -L https://istio.io/downloadIstio | sh - $ cd istio-{{< istio\_full\_version >}} $ export PATH=$PWD/bin:$PATH {{< /text >}} Check that you are able to run `istioctl` by printing the version of the command. At this point, Istio is not installed in your cluster, so you will see that there are no pods ready. {{< text syntax=bash snip\_id=none >}} $ istioctl version Istio is not present in the cluster: no running Istio pods in namespace "istio-system" client version: {{< istio\_full\_version >}} {{< /text >}} ## Install Istio onto your cluster `istioctl` supports a number of [configuration profiles](/docs/setup/additional-setup/config-profiles/) that include different default options, and can be customized for your production needs. Support for ambient mode is included in the `ambient` profile. Install Istio with the following command: {{< text syntax=bash snip\_id=install\_ambient >}} $ istioctl install --set profile=ambient --skip-confirmation {{< /text >}} Once the installation completes, you'll get the following output indicating all components have been installed successfully.
{{< text syntax=plain snip\_id=none >}} ✔ Istio core installed ✔ Istiod installed ✔ CNI installed ✔ Ztunnel installed ✔ Installation complete {{< /text >}} ## Install the Kubernetes Gateway API CRDs You will use the Kubernetes Gateway API to configure traffic routing. {{< boilerplate gateway-api-install-crds >}} ## Next steps Congratulations! You've successfully installed Istio with support for ambient mode. Continue to the next step to [install a sample application](/docs/ambient/getting-started/deploy-sample-app/).
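The `gateway-api-install-crds` boilerplate referenced above installs the Gateway API CRDs only if they are not already present in the cluster. It is roughly equivalent to the following (the release version shown is a placeholder — use the Gateway API version supported by your Istio release):

```bash
$ kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
  kubectl apply -f "https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml"
```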
To explore Istio, you will install the sample [Bookinfo application](/docs/examples/bookinfo/), composed of four separate microservices used to demonstrate various Istio features. {{< image width="50%" link="./bookinfo.svg" caption="Istio's Bookinfo sample application is written in many different languages" >}} As part of this guide, you'll deploy the Bookinfo application and expose the `productpage` service using an ingress gateway. ## Deploy the Bookinfo application Start by deploying the application: {{< text bash >}} $ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ $ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-versions.yaml@ {{< /text >}} To verify that the application is running, check the status of the pods: {{< text syntax=bash snip\_id=none >}} $ kubectl get pods NAME READY STATUS RESTARTS AGE details-v1-cf74bb974-nw94k 1/1 Running 0 42s productpage-v1-87d54dd59-wl7qf 1/1 Running 0 42s ratings-v1-7c4bbf97db-rwkw5 1/1 Running 0 42s reviews-v1-5fd6d4f8f8-66j45 1/1 Running 0 42s reviews-v2-6f9b55c5db-6ts96 1/1 Running 0 42s reviews-v3-7d99fd7978-dm6mx 1/1 Running 0 42s {{< /text >}} To access the `productpage` service from outside the cluster, you need to configure an ingress gateway. ## Deploy and configure the ingress gateway You will use the Kubernetes Gateway API to deploy a gateway called `bookinfo-gateway`: {{< text syntax=bash snip\_id=deploy\_bookinfo\_gateway >}} $ kubectl apply -f @samples/bookinfo/gateway-api/bookinfo-gateway.yaml@ {{< /text >}} By default, Istio creates a `LoadBalancer` service for a gateway. As you will access this gateway by a tunnel, you don't need a load balancer. 
Change the service type to `ClusterIP` by annotating the gateway: {{< text syntax=bash snip\_id=annotate\_bookinfo\_gateway >}} $ kubectl annotate gateway bookinfo-gateway networking.istio.io/service-type=ClusterIP --namespace=default {{< /text >}} To check the status of the gateway, run: {{< text bash >}} $ kubectl get gateway NAME CLASS ADDRESS PROGRAMMED AGE bookinfo-gateway istio bookinfo-gateway-istio.default.svc.cluster.local True 42s {{< /text >}} Wait for the gateway to show as programmed before continuing. ## Access the application You will connect to the Bookinfo `productpage` service through the gateway you just provisioned. To access the gateway, you need to use the `kubectl port-forward` command: {{< text syntax=bash snip\_id=none >}} $ kubectl port-forward svc/bookinfo-gateway-istio 8080:80 {{< /text >}} Open your browser and navigate to `http://localhost:8080/productpage` to view the Bookinfo application. {{< image width="80%" link="./bookinfo-browser.png" caption="Bookinfo Application" >}} If you refresh the page, you should see the display of the book ratings changing as the requests are distributed across the different versions of the `reviews` service. ## Next steps [Continue to the next section](../secure-and-visualize/) to add the application to the mesh, and learn how to secure and visualize the communication between the applications.
Now that you have a waypoint proxy installed, you will learn how to split traffic between services. ## Split traffic between services The Bookinfo application has three versions of the `reviews` service. You can split traffic between these versions to test new features or perform A/B testing. Let's configure traffic routing to send 90% of requests to `reviews` v1 and 10% to `reviews` v2: {{< text syntax=bash snip\_id=deploy\_httproute >}} $ kubectl apply -f - <}} To confirm that roughly 10% of the traffic from 100 requests goes to `reviews-v2`, you can run the following command: {{< text syntax=bash snip\_id=test\_traffic\_split >}} $ kubectl exec deploy/curl -- sh -c "for i in \$(seq 1 100); do curl -s http://productpage:9080/productpage | grep reviews-v.-; done" {{< /text >}} You'll notice the majority of requests go to `reviews-v1`. You can confirm the same if you open the Bookinfo application in your browser and refresh the page multiple times. Notice that the responses from `reviews-v1` don't have any stars, while the responses from `reviews-v2` have black stars. ## Next steps This section concludes the Getting Started guide for Istio's ambient mode. You can continue to the [Cleanup](/docs/ambient/getting-started/cleanup) section to remove Istio or continue exploring the [ambient mode user guides](/docs/ambient/usage/) to learn more about Istio's features and capabilities.
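The route manifest itself is elided in this excerpt. As a sketch, a weighted `HTTPRoute` producing the 90/10 split described above could look like the following — the service names come from the Bookinfo sample, while the exact field values are assumptions based on the Gateway API's weighted `backendRefs` semantics:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: reviews
    port: 9080
  rules:
  - backendRefs:
    - name: reviews-v1
      port: 9080
      weight: 90
    - name: reviews-v2
      port: 9080
      weight: 10
```

Because the weights are proportional, roughly 90 of every 100 requests resolve to `reviews-v1`.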
After you have added your application to the ambient mesh, you can secure application access using Layer 4 authorization policies. This feature lets you control access to and from a service based on the client workload identities that are automatically issued to all workloads in the mesh. ## Enforce Layer 4 authorization policy Let's create an [authorization policy](/docs/reference/config/security/authorization-policy/) that restricts which services can communicate with the `productpage` service. The policy is applied to pods with the `app: productpage` label, and it allows calls only from the service account `cluster.local/ns/default/sa/bookinfo-gateway-istio`. This is the service account that is used by the Bookinfo gateway you deployed in the previous step. {{< text syntax=bash snip\_id=deploy\_l4\_policy >}} $ kubectl apply -f - <}} If you open the Bookinfo application in your browser (`http://localhost:8080/productpage`), you will see the product page, just as before. However, if you try to access the `productpage` service from a different service account, you should see an error. Let's try accessing the Bookinfo application from a different client in the cluster: {{< text syntax=bash snip\_id=deploy\_curl >}} $ kubectl apply -f @samples/curl/curl.yaml@ {{< /text >}} Since the `curl` pod is using a different service account, it will not have access to the `productpage` service: {{< text bash >}} $ kubectl exec deploy/curl -- curl -s "http://productpage:9080/productpage" command terminated with exit code 56 {{< /text >}} ## Enforce Layer 7 authorization policy To enforce Layer 7 policies, you first need a {{< gloss "waypoint" >}}waypoint proxy{{< /gloss >}} for the namespace. This proxy will handle all Layer 7 traffic entering the namespace. {{< text syntax=bash snip\_id=deploy\_waypoint >}} $ istioctl waypoint apply --enroll-namespace --wait ✅ waypoint default/waypoint applied ✅ waypoint default/waypoint is ready!
✅ namespace default labeled with "istio.io/use-waypoint: waypoint" {{< /text >}} You can view the waypoint proxy and make sure it has the `Programmed=True` status: {{< text bash >}} $ kubectl get gtw waypoint NAME CLASS ADDRESS PROGRAMMED AGE waypoint istio-waypoint 10.96.58.95 True 42s {{< /text >}} Adding an [L7 authorization policy](/docs/ambient/usage/l7-features/) will explicitly allow the `curl` service to send `GET` requests to the `productpage` service, but perform no other operations: {{< text syntax=bash snip\_id=deploy\_l7\_policy >}} $ kubectl apply -f - <}} Note that the `targetRefs` field is used to specify the target service for the authorization policy of a waypoint proxy. The rules section is similar to before, but this time you added the `to` section to specify the operation that is allowed. Remember that our L4 policy instructed the ztunnel to only allow connections from the gateway? We now need to update it to also allow connections from the waypoint. {{< text syntax=bash snip\_id=update\_l4\_policy >}} $ kubectl apply -f - <}} {{< tip >}} To learn about how to enable more of Istio's features, read the [Layer 7 features user guide](/docs/ambient/usage/l7-features/).
{{< /tip >}} Confirm the new waypoint proxy is enforcing the updated authorization policy: {{< text bash >}} $ # This fails with an RBAC error because you're not using a GET operation $ kubectl exec deploy/curl -- curl -s "http://productpage:9080/productpage" -X DELETE RBAC: access denied {{< /text >}} {{< text bash >}} $ # This fails with an RBAC error because the identity of the reviews-v1 service is not allowed $ kubectl exec deploy/reviews-v1 -- curl -s http://productpage:9080/productpage RBAC: access denied {{< /text >}} {{< text bash >}} $ # This works as you're explicitly allowing GET requests from the curl pod $ kubectl exec deploy/curl -- curl -s http://productpage:9080/productpage | grep -o ".\*" Simple Bookstore App {{< /text >}} ## Next steps With the waypoint proxy in place, you can now enforce Layer 7 policies in the namespace. In addition to
authorization policies, [you can use the waypoint proxy to split traffic between services](../manage-traffic/). This is useful when doing canary deployments or A/B testing.
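The policy manifests in this excerpt are elided. As a sketch, the L7 policy described above — allowing only `GET` requests from the `curl` service account to `productpage` — could look like the following; the policy name matches the one deleted in the cleanup steps, but the exact shape is an assumption based on the surrounding description:

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: productpage-viewer
spec:
  targetRefs:
  - group: ""
    kind: Service
    name: productpage
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/default/sa/curl
    to:
    - operation:
        methods: ["GET"]
```

Because this policy is attached to the waypoint via `targetRefs`, it is evaluated at L7; a method match like this would not be possible in a ztunnel-enforced (L4-only) policy.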
If you no longer need Istio and associated resources, you can delete them by following the steps in this section. ## Remove waypoint proxies To remove all waypoint proxies run the following commands: {{< text bash >}} $ kubectl label namespace default istio.io/use-waypoint- $ istioctl waypoint delete --all {{< /text >}} ## Remove the namespace from the ambient data plane The label that instructs Istio to automatically include applications in the `default` namespace to the ambient mesh is not removed when you remove Istio. Use the following command to remove it: {{< text bash >}} $ kubectl label namespace default istio.io/dataplane-mode- {{< /text >}} You must remove workloads from the ambient data plane before uninstalling Istio. ## Remove the sample application To delete the Bookinfo sample application and the `curl` deployment, run the following: {{< text bash >}} $ kubectl delete httproute reviews $ kubectl delete authorizationpolicy productpage-viewer $ kubectl delete -f @samples/curl/curl.yaml@ $ kubectl delete -f @samples/bookinfo/platform/kube/bookinfo.yaml@ $ kubectl delete -f @samples/bookinfo/platform/kube/bookinfo-versions.yaml@ $ kubectl delete -f @samples/bookinfo/gateway-api/bookinfo-gateway.yaml@ {{< /text >}} ## Uninstall Istio To uninstall Istio: {{< text syntax=bash snip\_id=none >}} $ istioctl uninstall -y --purge $ kubectl delete namespace istio-system {{< /text >}} ## Remove the Kubernetes Gateway API CRDs {{< boilerplate gateway-api-remove-crds >}}
Adding applications to an ambient mesh is as simple as labeling the namespace where the application resides. By adding the applications to the mesh, you automatically secure the communication between them and Istio starts gathering TCP telemetry. And no, you don't need to restart or redeploy the applications! ## Add Bookinfo to the mesh You can enable all pods in a given namespace to be part of an ambient mesh by simply labeling the namespace: {{< text bash >}} $ kubectl label namespace default istio.io/dataplane-mode=ambient namespace/default labeled {{< /text >}} Congratulations! You have successfully added all pods in the default namespace to the ambient mesh. 🎉 If you open the Bookinfo application in your browser, you will see the product page, just like before. The difference this time is that the communication between the Bookinfo application pods is encrypted using mTLS. Additionally, Istio is gathering TCP telemetry for all traffic between the pods. {{< tip >}} You now have mTLS encryption between all your pods — without even restarting or redeploying any of the applications! {{< /tip >}} ## Visualize the application and metrics Using Istio's dashboard, Kiali, and the Prometheus metrics engine, you can visualize the Bookinfo application. Deploy them both: {{< text syntax=bash snip\_id=none >}} $ kubectl apply -f @samples/addons/prometheus.yaml@ $ kubectl apply -f @samples/addons/kiali.yaml@ {{< /text >}} You can access the Kiali dashboard by running the following command: {{< text syntax=bash snip\_id=none >}} $ istioctl dashboard kiali {{< /text >}} Let's send some traffic to the Bookinfo application, so Kiali generates the traffic graph: {{< text bash >}} $ for i in $(seq 1 100); do curl -sSI -o /dev/null http://localhost:8080/productpage; done {{< /text >}} Next, click on the Traffic Graph and select "Default" from the "Select Namespaces" drop-down. 
You should see the Bookinfo application: {{< image link="./kiali-ambient-bookinfo.png" caption="Kiali dashboard" >}} {{< tip >}} If you don't see the traffic graph, try re-sending the traffic to the Bookinfo application and make sure you have selected the \*\*default\*\* namespace in the \*\*Namespace\*\* drop-down in Kiali. To see the mTLS status between the services, click the \*\*Display\*\* drop-down and click \*\*Security\*\*. {{< /tip >}} If you click on the line connecting two services on the dashboard, you can see the inbound and outbound traffic metrics gathered by Istio. {{< image link="./kiali-tcp-traffic.png" caption="L4 traffic" >}} In addition to the TCP metrics, Istio has created a strong identity for each service: a SPIFFE ID. This identity can be used for creating authorization policies. ## Next steps Now that you have identities assigned to the services, let's [enforce authorization policies](/docs/ambient/getting-started/enforce-auth-policies/) to secure access to the application.
In \*\*ambient mode\*\*, Istio implements its [features](/docs/concepts) using a per-node Layer 4 (L4) proxy, and optionally a per-namespace Layer 7 (L7) proxy. This layered approach allows you to adopt Istio in a more incremental fashion, smoothly transitioning from no mesh, to a secure L4 overlay, to full L7 processing and policy — on a per-namespace basis, as needed. Furthermore, workloads running in different Istio {{< gloss >}}data plane{{< /gloss >}} modes interoperate seamlessly, allowing users to mix and match capabilities based on their particular needs as they change over time. Since workload pods no longer require proxies running in sidecars in order to participate in the mesh, ambient mode is often informally referred to as "sidecar-less mesh". ## How it works Ambient mode splits Istio’s functionality into two distinct layers. At the base, the \*\*ztunnel\*\* secure overlay handles routing and zero trust security for traffic. Above that, when needed, users can enable L7 \*\*waypoint proxies\*\* to get access to the full range of Istio features. Waypoint proxies, while heavier than the ztunnel overlay alone, still run as an ambient component of the infrastructure, requiring no modifications to application pods. {{< tip >}} Pods and workloads using sidecar mode can co-exist within the same mesh as pods that use ambient mode. The term "ambient mesh" refers to an Istio mesh that was installed with support for ambient mode, and so can support mesh pods that use either type of data plane. {{< /tip >}} For details on the design of ambient mode, and how it interacts with the Istio {{< gloss >}}control plane{{< /gloss >}}, see the [data plane](/docs/ambient/architecture/data-plane) and [control plane](/docs/ambient/architecture/control-plane) architecture documentation. ## ztunnel The ztunnel (Zero Trust tunnel) component is a purpose-built, per-node proxy that powers Istio's ambient data plane mode. 
Ztunnel is responsible for securely connecting and authenticating workloads within the mesh. The ztunnel proxy is written in Rust and is intentionally scoped to handle L3 and L4 functions such as mTLS, authentication, L4 authorization and telemetry. Ztunnel does not terminate workload HTTP traffic or parse workload HTTP headers. The ztunnel ensures L3 and L4 traffic is efficiently and securely transported either directly to workloads, other ztunnel proxies, or to waypoint proxies. The term "secure overlay" is used to collectively describe the set of L4 networking functions implemented in an ambient mesh via the ztunnel proxy. At the transport layer, this is implemented via an HTTP CONNECT-based traffic tunneling protocol called [HBONE](/docs/ambient/architecture/hbone). ## Waypoint proxies The waypoint proxy is a deployment of the {{< gloss >}}Envoy{{< /gloss >}} proxy; the same engine that Istio uses for its sidecar data plane mode. Waypoint proxies run outside of application pods. They are installed, upgraded, and scaled independently from applications. Some use cases of Istio in ambient mode may be addressed solely via the L4 secure overlay features, and will not need L7 features, thereby not requiring deployment of a waypoint proxy. Use cases requiring advanced traffic management and L7 networking features will require deployment of a waypoint. | Application deployment use case | Ambient mode configuration | | ------------------------------- | -------------------------- | | Zero Trust networking via mutual-TLS, encrypted and tunneled data transport of client application traffic, L4 authorization, L4 telemetry | ztunnel only (default) | | As above, plus advanced Istio traffic management features (including L7 authorization, telemetry and VirtualService routing) | ztunnel and waypoint proxies |
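The HBONE protocol mentioned above carries workload traffic inside an HTTP/2 CONNECT stream over mTLS on a well-known port (15008). A rough sketch of the shape of one tunneled connection — the addresses shown are hypothetical:

```text
mTLS (SPIFFE workload identities) to <destination>:15008
  HTTP/2 CONNECT stream
    :method    = CONNECT
    :authority = 10.244.1.7:9080    # the original destination IP:port
  ... stream payload carries the original TCP bytes ...
```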
As you saw in the previous module, Istio enhances Kubernetes by giving you the functionality to more effectively operate your microservices.

In this module you enable Istio on a single microservice, `productpage`. The rest of the application will continue to operate as before. Note that you can enable Istio gradually, microservice by microservice. Istio is enabled transparently to the microservices: you do not change the microservices' code or disrupt your application, and it continues to run and serve user requests.

1. Apply the default destination rules:

    {{< text bash >}}
    $ kubectl apply -f {{< github_file >}}/samples/bookinfo/networking/destination-rule-all.yaml
    {{< /text >}}

1. Redeploy the `productpage` microservice, Istio-enabled:

    {{< tip >}}
    This step uses manual sidecar injection to demonstrate enabling Istio service by service for instructional purposes. [Automatic sidecar injection](/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection) is the recommended method for production use.
    {{< /tip >}}

    {{< text bash >}}
    $ curl -s {{< github_file >}}/samples/bookinfo/platform/kube/bookinfo.yaml | istioctl kube-inject -f - | sed 's/replicas: 1/replicas: 3/g' | kubectl apply -l app=productpage,version=v1 -f -
    deployment.apps/productpage-v1 configured
    {{< /text >}}

1. Access the application's webpage and verify that the application continues to work. Istio was added without changing the code of the original application.

1. Check the `productpage` pods and see that now each replica has two containers.
    The first container is the microservice itself and the second one is the sidecar proxy attached to it:

    {{< text bash >}}
    $ kubectl get pods
    details-v1-68868454f5-8nbjv       1/1   Running   0   7h
    details-v1-68868454f5-nmngq       1/1   Running   0   7h
    details-v1-68868454f5-zmj7j       1/1   Running   0   7h
    productpage-v1-6dcdf77948-6tcbf   2/2   Running   0   7h
    productpage-v1-6dcdf77948-t9t97   2/2   Running   0   7h
    productpage-v1-6dcdf77948-tjq5d   2/2   Running   0   7h
    ratings-v1-76f4c9765f-khlvv       1/1   Running   0   7h
    ratings-v1-76f4c9765f-ntvkx       1/1   Running   0   7h
    ratings-v1-76f4c9765f-zd5mp       1/1   Running   0   7h
    reviews-v2-56f6855586-cnrjp       1/1   Running   0   7h
    reviews-v2-56f6855586-lxc49       1/1   Running   0   7h
    reviews-v2-56f6855586-qh84k       1/1   Running   0   7h
    curl-88ddbcfdd-cc85s              1/1   Running   0   7h
    {{< /text >}}

1. Kubernetes replaced the original pods of `productpage` with the Istio-enabled pods, transparently and incrementally, performing a [rolling update](https://kubernetes.io/docs/tutorials/kubernetes-basics/update-intro/). Kubernetes terminated an old pod only when a new pod started to run, and it transparently switched the traffic to the new pods, one by one. That is, it did not terminate more than one pod before it started a new pod. All this was done to prevent disruption of your application, so it continued to work during the injection of Istio.

1. Check the logs of the Istio sidecar of `productpage`:

    {{< text bash >}}
    $ kubectl logs -l app=productpage -c istio-proxy | grep GET
    ...
    [2019-02-15T09:06:04.079Z] "GET /details/0 HTTP/1.1" 200 - 0 178 5 3 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15" "18710783-58a1-9e5f-992c-9ceff05b74c5" "details:9080" "172.30.230.51:9080" outbound|9080||details.tutorial.svc.cluster.local - 172.21.109.216:9080 172.30.146.104:58698 -
    [2019-02-15T09:06:04.088Z] "GET /reviews/0 HTTP/1.1" 200 - 0 379 22 22 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15" "18710783-58a1-9e5f-992c-9ceff05b74c5" "reviews:9080" "172.30.230.27:9080" outbound|9080||reviews.tutorial.svc.cluster.local - 172.21.185.48:9080 172.30.146.104:41442 -
    [2019-02-15T09:06:04.053Z] "GET /productpage HTTP/1.1" 200 - 0 5723 90 83 "10.127.220.66" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15" "18710783-58a1-9e5f-992c-9ceff05b74c5" "tutorial.bookinfo.com" "127.0.0.1:9080" inbound|9080|http|productpage.tutorial.svc.cluster.local - 172.30.146.104:9080 10.127.220.66:0 -
    {{< /text >}}

1. Output the name of your namespace. You will need it to recognize your microservices in the Istio dashboard:

    {{< text bash >}}
    $ echo $(kubectl config view -o jsonpath="{.contexts[?(@.name == \"$(kubectl config current-context)\")].context.namespace}")
    tutorial
    {{< /text >}}

1. Check the Istio dashboard, using the custom URL you set in your `/etc/hosts` file [previously](/docs/examples/microservices-istio/bookinfo-kubernetes/#update-your-etc-hosts-configuration-file):

    {{< text plain >}}
    http://my-istio-dashboard.io/dashboard/db/istio-mesh-dashboard
    {{< /text >}}

    In the top left drop-down menu, select _Istio Mesh Dashboard_.

    {{< image width="80%" link="dashboard-select-dashboard.png" caption="Select Istio Mesh Dashboard from the top left drop-down menu" >}}

    Notice the `productpage` service from your namespace, its
name should be `productpage..svc.cluster.local`.

    {{< image width="80%" link="dashboard-mesh.png" caption="Istio Mesh Dashboard" >}}

1. In the _Istio Mesh Dashboard_, under the `Service` column, click the `productpage` service.

    {{< image width="80%" link="dashboard-service-select-productpage.png" caption="Istio Service Dashboard, `productpage` selected" >}}

    Scroll down to the _Service Workloads_ section. Observe that the dashboard graphs are updated.

    {{< image width="80%" link="dashboard-service.png" caption="Istio Service Dashboard" >}}

This is the immediate benefit of applying Istio on a single microservice. You receive logs of traffic to and from the microservice, including time, HTTP method, path, and response code. You can monitor your microservice using the Istio dashboard.

In the next modules, you will learn about the functionality Istio can provide to your applications. While some Istio functionality is beneficial when applied to a single microservice, you will learn how to apply Istio on the whole application to realize its full potential.

You are ready to [enable Istio on all the microservices](/docs/examples/microservices-istio/enable-istio-all-microservices).
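The sidecar logs shown earlier follow Envoy's default access log layout, which starts with the timestamp, the request line, and the response code. As a small sketch (not part of the tutorial), here is how you might pull those fields out of one entry; the sample line is copied from the log output above, with the longer trailing fields shortened:

```python
import re

# One entry from the productpage sidecar logs above. Envoy's default
# format begins: [START_TIME] "METHOD PATH PROTOCOL" RESPONSE_CODE ...
line = ('[2019-02-15T09:06:04.079Z] "GET /details/0 HTTP/1.1" 200 - 0 178 5 3 '
        '"-" "Mozilla/5.0" "18710783-58a1-9e5f-992c-9ceff05b74c5" "details:9080"')

entry = re.match(
    r'^\[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) (?P<protocol>[^"]+)" '
    r'(?P<code>\d+)',
    line,
)
print(entry.group('time'))    # 2019-02-15T09:06:04.079Z
print(entry.group('method'))  # GET
print(entry.group('path'))    # /details/0
print(entry.group('code'))    # 200
```

These are exactly the fields (time, HTTP method, path, response code) that the dashboard surfaces for you, so grepping the raw sidecar logs is a useful fallback when the dashboard is unavailable.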
{{< boilerplate work-in-progress >}}

In this module you prepare your local computer for the tutorial.

1. Install [`curl`](https://curl.haxx.se/download.html).

1. Install [Node.js](https://nodejs.org/en/download/).

1. Install [Docker](https://docs.docker.com/install/).

1. Install [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/).

1. Set the `KUBECONFIG` environment variable for the configuration file you received from the tutorial instructors, or created yourself in the previous module.

    {{< text bash >}}
    $ export KUBECONFIG=
    {{< /text >}}

1. Verify that the configuration took effect by printing the current namespace:

    {{< text bash >}}
    $ kubectl config view -o jsonpath="{.contexts[?(@.name==\"$(kubectl config current-context)\")].context.namespace}"
    tutorial
    {{< /text >}}

    You should see in the output the name of the namespace allocated for you by the instructors, or allocated by yourself in the previous module.

1. Download one of the [Istio release archives](https://github.com/istio/istio/releases), extract the `istioctl` command line tool from the `bin` directory, and verify that you can run `istioctl` with the following command:

    {{< text bash >}}
    $ istioctl version
    client version: 1.22.0
    control plane version: 1.22.0
    data plane version: 1.22.0 (4 proxies)
    {{< /text >}}

Congratulations, you configured your local computer! You are ready to [run a single service locally](/docs/examples/microservices-istio/single/).
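As a quick sanity check after the steps above, you can confirm that the required command line tools are on your `PATH`. This is a convenience sketch, not part of the tutorial; the tool list mirrors the installation steps in this module:

```python
import shutil

# CLI tools this module installs; istioctl comes from the Istio release
# archive's bin/ directory. shutil.which searches PATH like a shell would.
required = ["curl", "docker", "kubectl", "istioctl"]
status = {tool: shutil.which(tool) is not None for tool in required}
for tool, found in sorted(status.items()):
    print(f"{tool}: {'found' if found else 'MISSING'}")
```

Any tool reported as `MISSING` should be installed before continuing, since later modules assume all of them work.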
Monitoring is crucial to support transitioning to the microservices architecture style. With Istio, you gain monitoring of the traffic between microservices by default. You can use the Istio Dashboard for monitoring your microservices in real time.

Istio is integrated out-of-the-box with the [Prometheus time series database and monitoring system](https://prometheus.io). Prometheus collects various traffic-related metrics and provides [a rich query language](https://prometheus.io/docs/prometheus/latest/querying/basics/) for them. See below several examples of Prometheus Istio-related queries.

1. Access the Prometheus UI at [http://my-istio-logs-database.io](http://my-istio-logs-database.io). (The `my-istio-logs-database.io` URL should be in your `/etc/hosts` file; you set it [previously](/docs/examples/microservices-istio/bookinfo-kubernetes/#update-your-etc-hosts-configuration-file).)

    {{< image width="80%" link="prometheus.png" caption="Prometheus Query UI" >}}

1. Run the following example queries in the _Expression_ input box. Push the _Execute_ button to see query results in the _Console_ tab. The queries use `tutorial` as the name of the application's namespace; substitute it with the name of your namespace. For best results, run the real-time traffic simulator described in the previous steps when querying data.

    1. Get all the requests in your namespace:

        {{< text plain >}}
        istio_requests_total{destination_service_namespace="tutorial", reporter="destination"}
        {{< /text >}}

    1. Get the sum of all the requests in your namespace:

        {{< text plain >}}
        sum(istio_requests_total{destination_service_namespace="tutorial", reporter="destination"})
        {{< /text >}}

    1. Get the requests to the `reviews` microservice:

        {{< text plain >}}
        istio_requests_total{destination_service_namespace="tutorial", reporter="destination", destination_service_name="reviews"}
        {{< /text >}}
    1. [Rate](https://prometheus.io/docs/prometheus/latest/querying/functions/#rate) of requests over the past 5 minutes to all instances of the `reviews` microservice:

        {{< text plain >}}
        rate(istio_requests_total{destination_service_namespace="tutorial", reporter="destination", destination_service_name="reviews"}[5m])
        {{< /text >}}

The queries above use the `istio_requests_total` metric, which is a standard Istio metric. You can observe other metrics, in particular, the ones of Envoy ([Envoy](https://www.envoyproxy.io) is the sidecar proxy of Istio). You can see the collected metrics in the _insert metric at cursor_ drop-down menu.

## Next steps

Congratulations on completing the tutorial!

These tasks are a great place for beginners to further evaluate Istio's features using this `demo` installation:

- [Request routing](/docs/tasks/traffic-management/request-routing/)
- [Fault injection](/docs/tasks/traffic-management/fault-injection/)
- [Traffic shifting](/docs/tasks/traffic-management/traffic-shifting/)
- [Querying metrics](/docs/tasks/observability/metrics/querying-metrics/)
- [Visualizing metrics](/docs/tasks/observability/metrics/using-istio-dashboard/)
- [Accessing external services](/docs/tasks/traffic-management/egress/egress-control/)
- [Visualizing your mesh](/docs/tasks/observability/kiali/)

Before you customize Istio for production use, see these resources:

- [Deployment models](/docs/ops/deployment/deployment-models/)
- [Deployment best practices](/docs/ops/best-practices/deployment/)
- [Pod requirements](/docs/ops/deployment/application-requirements/)
- [General installation instructions](/docs/setup/)

## Join the Istio community

We welcome you to ask questions and give us feedback by joining the [Istio community](/get-involved/).
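A note on the `rate()` query used earlier: `istio_requests_total` is a monotonically increasing counter, and `rate(...[5m])` is, roughly, the increase over the window divided by the window length, giving requests per second. A sketch with made-up sample values (the numbers are illustrative, not from a real mesh):

```python
# Two samples of a counter such as istio_requests_total, as
# (timestamp_seconds, counter_value) pairs. Illustrative numbers only.
samples = [
    (0.0, 1200.0),
    (300.0, 1500.0),  # 5 minutes (one [5m] window) later
]

# rate() is approximately the counter increase over the window,
# divided by the window length in seconds.
(t0, v0), (t1, v1) = samples[0], samples[-1]
rate_per_second = (v1 - v0) / (t1 - t0)
print(rate_per_second)  # 1.0 request/second
```

(Real `rate()` also extrapolates to the window boundaries and handles counter resets, but the per-second intuition above is what matters when reading the query results.)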
In this module, you deploy a new version of the `reviews` service, _v2_, which will return the number and star color of ratings provided by reviewers. In a real-world scenario, before you deploy, you would perform static analysis tests, unit tests, integration tests, end-to-end tests, and tests in a staging environment.

1. Deploy the new version of the `reviews` microservice without the `app=reviews` label. Without that label, the new version will not be selected to provide the `reviews` service. As such, it will not be called by the production code. Run the following command to deploy the `reviews` microservice version 2, while replacing the label `app=reviews` with `app=reviews_test`:

    {{< text bash >}}
    $ curl -s {{< github_file >}}/samples/bookinfo/platform/kube/bookinfo.yaml | sed 's/app: reviews/app: reviews_test/' | kubectl apply -l app=reviews_test,version=v2 -f -
    deployment.apps/reviews-v2 created
    {{< /text >}}

1. Access your application to ensure the deployed microservice did not disrupt it.

1. Test the new version of your microservice from inside the cluster using the testing container you deployed earlier. Note that your new version accesses the production pods of the `ratings` microservice during the test. Also note that you have to use the pod IP to access your new version of the microservice, because it is not selected for the `reviews` service.

    1. Get the IP of the pod:

        {{< text bash >}}
        $ REVIEWS_V2_POD_IP=$(kubectl get pod -l app=reviews_test,version=v2 -o jsonpath='{.items[0].status.podIP}')
        $ echo $REVIEWS_V2_POD_IP
        {{< /text >}}

    1. Send a request to the pod and see that it returns the correct result:

        {{< text bash >}}
        $ kubectl exec $(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}') -- curl -sS "$REVIEWS_V2_POD_IP:9080/reviews/7"
        {"id": "7","reviews": [{ "reviewer": "Reviewer1", "text": "An extremely entertaining play by Shakespeare.
        The slapstick humour is refreshing!", "rating": {"stars": 5, "color": "black"}},{ "reviewer": "Reviewer2", "text": "Absolutely fun and entertaining. The play lacks thematic depth when compared to other plays by Shakespeare.", "rating": {"stars": 4, "color": "black"}}]}
        {{< /text >}}

    1. Perform primitive load testing by sending a request 10 times in a row:

        {{< text bash >}}
        $ kubectl exec $(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}') -- sh -c "for i in 1 2 3 4 5 6 7 8 9 10; do curl -o /dev/null -s -w '%{http_code}\n' $REVIEWS_V2_POD_IP:9080/reviews/7; done"
        200
        200
        ...
        {{< /text >}}

1. The previous steps ensure that your new version of `reviews` works and you can deploy it. You will deploy a single replica of the service into production, so real production traffic will start to arrive at your new service version. With the current setting, 75% of the traffic will arrive at the old version (three pods of the old version) and 25% will arrive at the new version (a single pod). To deploy _reviews v2_, redeploy the new version with the `app=reviews` label, so it will become addressable by the `reviews` service.

    {{< text bash >}}
    $ kubectl label pods -l version=v2 app=reviews --overwrite
    pod "reviews-v2-79c8c8c7c5-4p4mn" labeled
    {{< /text >}}

1. Now, access the application web page and observe that black stars appear for ratings. Access the page several times and see that sometimes the page is returned with stars (approximately 25% of the time) and sometimes without stars (approximately 75% of the time).

    {{< image width="80%" link="bookinfo-reviews-v2.png" caption="Bookinfo Web Application with black stars as ratings" >}}
1. If you encounter any problems with the new version in a real-world scenario, you could quickly undeploy the new version, so only the old version will be used:

    {{< text bash >}}
    $ kubectl delete deployment reviews-v2
    $ kubectl delete pod -l app=reviews,version=v2
    deployment.apps "reviews-v2" deleted
    pod "reviews-v2-79c8c8c7c5-4p4mn" deleted
    {{< /text >}}

    Allow time for the configuration change to propagate through the system. Then, access your application's webpage several times and see that now black stars do not appear.

    To restore the new version:

    {{< text bash >}}
    $ kubectl apply -l app=reviews,version=v2 -f {{< github_file >}}/samples/bookinfo/platform/kube/bookinfo.yaml
    deployment.apps/reviews-v2 created
    {{< /text >}}

    Access your application's webpage several times and see that now the black stars are present approximately 25% of the time.

1. Next, increase the replicas of your new version. You can do it gradually, carefully checking that the number of errors does not increase:

    {{< text bash >}}
    $ kubectl scale deployment reviews-v2 --replicas=3
    deployment.apps/reviews-v2 scaled
    {{< /text >}}

    Now, access your application's webpage several times and see that the black stars appear approximately half the time.

1. Now, you can decommission the old version:

    {{< text bash >}}
    $ kubectl delete deployment reviews-v1
    deployment.apps "reviews-v1" deleted
    {{< /text >}}

    Accessing the web page of the application will return `reviews` with black stars only.

In the previous steps, you performed the update of `reviews`. First, you deployed the new version without sending it simulated production traffic. You tested it in the production environment using test traffic and checked that the new version provides correct results. You released the new version, gradually increasing the production traffic to it. Finally, you decommissioned the old version.

From here, you can improve your deployment strategy using the following example tasks.
First, test the new version end-to-end in production. This requires the ability to drive traffic to your new version using request parameters, for example using the user name stored in a cookie. In addition, perform shadowing of the production traffic to your new version and check whether your new version produces incorrect results or errors. Finally, gain more detailed control of the rollout. For example, you can deploy at 1%, then increase by 1% per hour as long as there does not appear to be degradation in the service. Istio enhances the value of Kubernetes by helping you perform these tasks in a straightforward way. For more detailed information and best practices about deployment, see [Deployment models](/docs/ops/deployment/deployment-models/).

From here, you have two choices:

1. Use a _service mesh_. In a service mesh, you put all the reporting, routing, policy and security logic in _sidecar_ proxies, injected *transparently* into your application pods. The business logic remains in the code of the application; no changes are required to the application code.

1. Implement the required functionality in the application code. Much of the functionality is already available in various libraries, for example in Netflix's [Hystrix](https://github.com/Netflix/Hystrix) library for the Java programming language. However, you now have to change your code to use the libraries. You have to put in additional effort, your code will bloat, and business logic will be mixed with reporting, routing, policy and networking logic. Since your microservices use different programming languages, you have to learn, use, and update multiple libraries.

See [The Istio service mesh](/about/service-mesh/) to learn how Istio can perform the tasks mentioned here and more. In the next modules, you explore various Istio features. You are ready to [enable Istio on `productpage`](/docs/examples/microservices-istio/add-istio/).
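The rollout percentages quoted in this module (25%, then roughly 50%, then 100%) fall directly out of pod arithmetic: the `reviews` Kubernetes service balances load across all pods carrying the `app=reviews` label, regardless of version. A quick sketch of that arithmetic:

```python
def expected_v2_share(v1_pods: int, v2_pods: int) -> float:
    """Fraction of requests landing on v2, assuming the Kubernetes
    service balances uniformly across all selected pods."""
    return v2_pods / (v1_pods + v2_pods)

print(expected_v2_share(3, 1))  # 0.25: one v2 pod next to three v1 pods
print(expected_v2_share(3, 3))  # 0.5: after scaling reviews-v2 to 3 replicas
print(expected_v2_share(0, 3))  # 1.0: after decommissioning reviews-v1
```

This is also why plain Kubernetes cannot give you a 1% rollout without running roughly 99 old pods per new pod; Istio's traffic shifting decouples the traffic split from the replica counts.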
{{< boilerplate work-in-progress >}}

For this tutorial you need a Kubernetes cluster with a namespace for the tutorial's modules, and a local computer to run the commands. If you have your own cluster, ensure your cluster satisfies the prerequisites. If you are in a workshop and the instructors provide a cluster, let them handle the cluster prerequisites, while you skip ahead to set up your local computer.

## Kubernetes cluster

Ensure the following conditions are met:

- You have administrator privileges for a Kubernetes cluster named `tutorial-cluster` and for the virtual machine it runs on.
- You can create a namespace in the cluster for each participant.

## Local computer

Ensure the following conditions are met:

- You have write access to the local computer's `/etc/hosts` file.
- You have the ability and permission to download, install and run command line tools on the local computer.
- You have Internet connectivity for the duration of the tutorial.
{{< boilerplate work-in-progress >}}

This module shows you an application composed of four microservices written in different programming languages: `productpage`, `details`, `ratings` and `reviews`. We call the composed application `Bookinfo`, and you can learn more about it on the [Bookinfo example](/docs/examples/bookinfo) page.

The [Bookinfo example](/docs/examples/bookinfo) shows the final state of the application, in which the `reviews` microservice has three versions: `v1`, `v2`, `v3`. In this module, the application only uses the `v1` version of the `reviews` microservice. The next modules enhance the application by deploying newer versions of the `reviews` microservice.

## Deploy the application and a testing pod

1. Set the `MYHOST` environment variable to hold the URL of the application:

    {{< text bash >}}
    $ export MYHOST=$(kubectl config view -o jsonpath={.contexts..namespace}).bookinfo.com
    {{< /text >}}

1. Skim [`bookinfo.yaml`]({{< github_blob >}}/samples/bookinfo/platform/kube/bookinfo.yaml). This is the Kubernetes deployment spec of the app. Notice the services and the deployments.

1. Deploy the application to your Kubernetes cluster:

    {{< text bash >}}
    $ kubectl apply -l version!=v2,version!=v3 -f {{< github_file >}}/samples/bookinfo/platform/kube/bookinfo.yaml
    service/details created
    serviceaccount/bookinfo-details created
    deployment.apps/details-v1 created
    service/ratings created
    serviceaccount/bookinfo-ratings created
    deployment.apps/ratings-v1 created
    service/reviews created
    serviceaccount/bookinfo-reviews created
    deployment.apps/reviews-v1 created
    service/productpage created
    serviceaccount/bookinfo-productpage created
    deployment.apps/productpage-v1 created
    {{< /text >}}
1. Check the status of the pods:

    {{< text bash >}}
    $ kubectl get pods
    NAME                            READY   STATUS    RESTARTS   AGE
    details-v1-6d86fd9949-q8rrf     1/1     Running   0          10s
    productpage-v1-c9965499-tjdjx   1/1     Running   0          8s
    ratings-v1-7bf577cb77-pq9kg     1/1     Running   0          9s
    reviews-v1-77c65dc5c6-kjvxs     1/1     Running   0          9s
    {{< /text >}}

1. After the four pods achieve the `Running` status, you can scale the deployment. To let each version of each microservice run in three pods, execute the following command:

    {{< text bash >}}
    $ kubectl scale deployments --all --replicas 3
    deployment.apps/details-v1 scaled
    deployment.apps/productpage-v1 scaled
    deployment.apps/ratings-v1 scaled
    deployment.apps/reviews-v1 scaled
    {{< /text >}}

1. Check the pods' status. Notice that each microservice has three pods:

    {{< text bash >}}
    $ kubectl get pods
    NAME                            READY   STATUS    RESTARTS   AGE
    details-v1-6d86fd9949-fr59p     1/1     Running   0          50s
    details-v1-6d86fd9949-mksv7     1/1     Running   0          50s
    details-v1-6d86fd9949-q8rrf     1/1     Running   0          1m
    productpage-v1-c9965499-hwhcn   1/1     Running   0          50s
    productpage-v1-c9965499-nccwq   1/1     Running   0          50s
    productpage-v1-c9965499-tjdjx   1/1     Running   0          1m
    ratings-v1-7bf577cb77-cbdsg     1/1     Running   0          50s
    ratings-v1-7bf577cb77-cz6jm     1/1     Running   0          50s
    ratings-v1-7bf577cb77-pq9kg     1/1     Running   0          1m
    reviews-v1-77c65dc5c6-5wt8g     1/1     Running   0          49s
    reviews-v1-77c65dc5c6-kjvxs     1/1     Running   0          1m
    reviews-v1-77c65dc5c6-r55tl     1/1     Running   0          49s
    {{< /text >}}

1. After the services achieve the `Running` status, deploy a testing pod, [curl]({{< github_tree >}}/samples/curl), to use for sending requests to your microservices:

    {{< text bash >}}
    $ kubectl apply -f {{< github_file >}}/samples/curl/curl.yaml
    {{< /text >}}
1. To confirm that the Bookinfo application is running, send a request to it with a curl command from your testing pod:

    {{< text bash >}}
    $ kubectl exec $(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}') -c curl -- curl -sS productpage:9080/productpage | grep -o ".*"
    Simple Bookstore App
    {{< /text >}}

## Enable external access to the application

Once your application is running, enable clients from outside the cluster to access it. Once you configure the steps below successfully, you can access the application from your laptop's browser.

{{< warning >}}
If your cluster runs on GKE, change the `productpage` service type to `LoadBalancer`:

{{< text bash >}}
$ kubectl patch svc productpage -p '{"spec": {"type": "LoadBalancer"}}'
service/productpage patched
{{< /text >}}
{{< /warning >}}

### Configure the Kubernetes Ingress resource and access your application's webpage

1. Create a Kubernetes Ingress resource:

    {{< text bash >}}
    $ kubectl apply -f - <}}
    {{< /text >}}

### Update your `/etc/hosts` configuration file

1. Get the IP address for the Kubernetes ingress named `bookinfo`:

    {{< text bash >}}
    $ kubectl get ingress bookinfo
    {{< /text >}}

1. In your `/etc/hosts` file,
add the previous IP address to the host entries provided by the following command. You should have [Superuser](https://en.wikipedia.org/wiki/Superuser) privileges and probably use [`sudo`](https://en.wikipedia.org/wiki/Sudo) to edit `/etc/hosts`.

    {{< text bash >}}
    $ echo $(kubectl get ingress istio-system -n istio-system -o jsonpath='{..ip} {..host}') $(kubectl get ingress bookinfo -o jsonpath='{..host}')
    {{< /text >}}

### Access your application

1. Access the application's home page from the command line:

    {{< text bash >}}
    $ curl -s $MYHOST/productpage | grep -o ".*"
    Simple Bookstore App
    {{< /text >}}

1. Paste the output of the following command in your browser address bar:

    {{< text bash >}}
    $ echo http://$MYHOST/productpage
    {{< /text >}}

    You should see the following webpage:

    {{< image width="80%" link="bookinfo.png" caption="Bookinfo Web Application" >}}

1. Observe how microservices call each other. For example, `reviews` calls the `ratings` microservice using the `http://ratings:9080/ratings` URL. See the [code of `reviews`]({{< github_blob >}}/samples/bookinfo/src/reviews/reviews-application/src/main/java/application/rest/LibertyRestEndpoint.java):

    {{< text java >}}
    private final static String ratings_service = "http://ratings:9080/ratings";
    {{< /text >}}

1. Set an infinite loop in a separate terminal window to send traffic to your application to simulate the constant user traffic in the real world:

    {{< text bash >}}
    $ while :; do curl -s $MYHOST/productpage | grep -o ".*"; sleep 1; done
    Simple Bookstore App
    Simple Bookstore App
    Simple Bookstore App
    Simple Bookstore App
    ...
{{< /text >}} You are ready to [test the application](/docs/examples/microservices-istio/production-testing).
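The `/etc/hosts` update above boils down to composing one `IP host1 host2 …` line and appending it with superuser rights. A minimal sketch, where the IP is a placeholder for whatever `kubectl get ingress` reports in your cluster and the hostnames are the dashboard hosts used later in this tutorial:

```shell
# Sketch only: INGRESS_IP is a placeholder for the address reported by
# `kubectl get ingress bookinfo -o jsonpath='{..ip}'` against a real cluster.
INGRESS_IP="203.0.113.10"
HOSTS="my-istio-dashboard.io my-kiali.io"
ENTRY="$INGRESS_IP $HOSTS"
echo "$ENTRY"
# Appending to /etc/hosts needs superuser rights; commented out so this
# sketch has no side effects:
# echo "$ENTRY" | sudo tee -a /etc/hosts
```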
https://github.com/istio/istio.io/blob/master//content/en/docs/examples/microservices-istio/bookinfo-kubernetes/index.md
{{< boilerplate work-in-progress >}}

This module shows how you create a [Docker](https://www.docker.com) image and run it locally.

1. Download the [`Dockerfile`](https://docs.docker.com/engine/reference/builder/) for the `ratings` microservice.

    {{< text bash >}}
    $ curl -s {{< github_file >}}/samples/bookinfo/src/ratings/Dockerfile -o Dockerfile
    {{< /text >}}

1. Observe the `Dockerfile`.

    {{< text bash >}}
    $ cat Dockerfile
    {{< /text >}}

    Note that it copies the files into the container's filesystem and then runs the `npm install` command you ran in the previous module. The `CMD` command instructs Docker to run the `ratings` service on port `9080`.

1. Create an environment variable to store your user ID, which will be used to tag the Docker image for the `ratings` service. For example, `user`.

    {{< text bash >}}
    $ export USER=user
    {{< /text >}}

1. Build a Docker image from the `Dockerfile`:

    {{< text bash >}}
    $ docker build -t $USER/ratings .
    ...
    Step 9/9 : CMD node /opt/microservices/ratings.js 9080
    ---> Using cache
    ---> 77c6a304476c
    Successfully built 77c6a304476c
    Successfully tagged user/ratings:latest
    {{< /text >}}

1. Run ratings in Docker. The following [docker run](https://docs.docker.com/engine/reference/commandline/run/) command instructs Docker to expose port `9080` of the container on port `9081` of your computer, allowing you to access the `ratings` microservice on port `9081`.

    {{< text bash >}}
    $ docker run --name my-ratings --rm -d -p 9081:9080 $USER/ratings
    {{< /text >}}

1. Access [http://localhost:9081/ratings/7](http://localhost:9081/ratings/7) in your browser or use the following `curl` command:

    {{< text bash >}}
    $ curl localhost:9081/ratings/7
    {"id":7,"ratings":{"Reviewer1":5,"Reviewer2":4}}
    {{< /text >}}

1. Observe the running container. Run the [docker ps](https://docs.docker.com/engine/reference/commandline/ps/) command to list all the running containers and notice the container with the image `/ratings`.

    {{< text bash >}}
    $ docker ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
    47e8c1fe6eca        user/ratings        "docker-entrypoint.s…"   2 minutes ago       Up 2 minutes        0.0.0.0:9081->9080/tcp   elated_stonebraker
    ...
    {{< /text >}}

1. Stop the running container:

    {{< text bash >}}
    $ docker stop my-ratings
    {{< /text >}}

You have learned how to package a single service into a container. The next step is to learn how to [deploy the whole application to a Kubernetes cluster](/docs/examples/microservices-istio/bookinfo-kubernetes).
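The `-p 9081:9080` flag above maps host port 9081 to container port 9080. A small sketch of how the resulting URL is formed; the live checks are shown commented, since they need the container running:

```shell
HOST_PORT=9081       # left-hand side of -p: the port on your computer
CONTAINER_PORT=9080  # right-hand side of -p: the port inside the container
URL="http://localhost:${HOST_PORT}/ratings/7"
echo "$URL"
# With the container up, these would confirm the mapping and the service:
# docker port my-ratings ${CONTAINER_PORT}
# curl "$URL"
```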
https://github.com/istio/istio.io/blob/master//content/en/docs/examples/microservices-istio/package-service/index.md
{{< boilerplate work-in-progress >}}

Before the advent of microservice architecture, development teams built, deployed and ran the whole application as one large chunk of software. To test a small change in their module beyond mere unit testing, the developers had to build the whole application, so builds took a large amount of time. After the build, the developers deployed their version of the application into a test server. The developers ran the server either on a remote machine, or on their local computer. In the latter case, the developers had to install and operate a rather complex environment on their local computer.

In the era of microservice architecture, the developers write, build, test and run small software services. Builds are fast. With modern frameworks like [Node.js](https://nodejs.org/en/) there is no need to install and operate complex server environments to test a single service, since the service runs as a regular process. You do not have to deploy your service to some environment merely to test it; you just build your service and run it immediately on your local computer.

This module covers the different aspects involved in developing a single service on a local machine. You don't need to write code, though. Instead, you build, run, and test an existing service: `ratings`.

The `ratings` service is a small web app written in [Node.js](https://nodejs.org/en/) that can run on its own. It performs similar actions to those of other web apps:

- Listen to the port it receives as a parameter.
- Expect `HTTP GET` requests on the `/ratings/{productID}` path and return the ratings of the product matching the value the client specifies for `productID`.
- Expect `HTTP POST` requests on the `/ratings/{productID}` path and update the ratings of the product matching the value you specify for `productID`.

Follow these steps to download the code of the app, install its dependencies, and run it locally:

1. Download [the service's code]({{< github_blob >}}/samples/bookinfo/src/ratings/ratings.js) and [the package file]({{< github_blob >}}/samples/bookinfo/src/ratings/package.json) into a separate directory:

    {{< text bash >}}
    $ mkdir ratings
    $ cd ratings
    $ curl -s {{< github_file >}}/samples/bookinfo/src/ratings/ratings.js -o ratings.js
    $ curl -s {{< github_file >}}/samples/bookinfo/src/ratings/package.json -o package.json
    {{< /text >}}

1. Skim the service's code and note the following elements:

    - The web server's features:
        - listening to a port
        - handling requests and responses
    - The aspects related to HTTP:
        - headers
        - path
        - status code

    {{< tip >}}
    In Node.js, the web server's functionality is embedded in the code of the application. A Node.js web application runs as a standalone process.
    {{< /tip >}}

1. Node.js applications are written in JavaScript, which means that there is no explicit compilation step. Instead, they use [just-in-time compilation](https://en.wikipedia.org/wiki/Just-in-time_compilation). Building a Node.js application therefore means installing its dependencies. Install the dependencies of the `ratings` service in the same folder where you stored the service code and the package file:

    {{< text bash >}}
    $ npm install
    npm notice created a lockfile as package-lock.json. You should commit this file.
    npm WARN ratings No description
    npm WARN ratings No repository field.
    npm WARN ratings No license field.
    added 24 packages in 2.094s
    {{< /text >}}

1. Run the service, passing `9080` as a parameter. The application then listens on port 9080.

    {{< text bash >}}
    $ npm start 9080
    > @ start /tmp/ratings
    > node ratings.js "9080"
    Server listening on: http://0.0.0.0:9080
    {{< /text >}}

    {{< tip >}}
    The `ratings` service is a web app and you can communicate with it as you would with any other web app. You can use a browser or a command line web client like [`curl`](https://curl.haxx.se)
https://github.com/istio/istio.io/blob/master//content/en/docs/examples/microservices-istio/single/index.md
start /tmp/ratings > node ratings.js "9080" Server listening on: http://0.0.0.0:9080 {{< /text >}} {{< tip >}} The `ratings` service is a web app and you can communicate with it as you would with any other web app. You can use a browser or a command line web client like [`curl`](https://curl.haxx.se) or [`Wget`](https://www.gnu.org/software/wget/). Since you run the `ratings` service locally, you can also access it via the `localhost` hostname. {{< /tip >}} 1. Open [http://localhost:9080/ratings/7](http://localhost:9080/ratings/7) in your browser or access `ratings` using the `curl` command from a different terminal window: {{< text bash >}} $ curl localhost:9080/ratings/7 {"id":7,"ratings":{"Reviewer1":5,"Reviewer2":4}} {{< /text >}} 1. Use the `POST` method of the `curl` command to set the ratings for the product to `1`: {{< text bash >}} $ curl -X POST localhost:9080/ratings/7 -d '{"Reviewer1":1,"Reviewer2":1}' {"id":7,"ratings":{"Reviewer1":1,"Reviewer2":1}} {{< /text >}} 1. Check the updated ratings: {{< text bash >}} $ curl localhost:9080/ratings/7 {"id":7,"ratings":{"Reviewer1":1,"Reviewer2":1}} {{< /text >}} 1. Use `Ctrl-C` in the terminal running the service to stop it. Congratulations, you can now build, test, and run a service on your local computer! You are ready to [package the service](/docs/examples/microservices-istio/package-service) into a container.
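The GET/POST round trip above can be wrapped in a tiny smoke-test script. This is a sketch, not part of the tutorial; the live `curl` calls are commented out since they need `npm start 9080` running:

```shell
BASE_URL="http://localhost:9080"  # where `npm start 9080` serves locally
PRODUCT_ID=7
PAYLOAD='{"Reviewer1":1,"Reviewer2":1}'
echo "POST ${BASE_URL}/ratings/${PRODUCT_ID} with ${PAYLOAD}"
# With the service running:
# curl -X POST "${BASE_URL}/ratings/${PRODUCT_ID}" -d "${PAYLOAD}"
# curl "${BASE_URL}/ratings/${PRODUCT_ID}"   # should return the updated ratings
```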
https://github.com/istio/istio.io/blob/master//content/en/docs/examples/microservices-istio/single/index.md
Previously, you enabled Istio on a single microservice, `productpage`. You can proceed to enable Istio on the microservices incrementally to get the Istio functionality for more microservices. For the purpose of this tutorial, you will enable Istio on all the remaining microservices in one step.

1. For the purpose of this tutorial, scale the deployments of the microservices down to 1:

    {{< text bash >}}
    $ kubectl scale deployments --all --replicas 1
    {{< /text >}}

1. Redeploy the Bookinfo application, Istio-enabled. The service `productpage` will not be redeployed since it already has Istio injected, and its pods will not be changed. This time you will use only a single replica of each microservice.

    {{< text bash >}}
    $ curl -s {{< github_file >}}/samples/bookinfo/platform/kube/bookinfo.yaml | istioctl kube-inject -f - | kubectl apply -l app!=reviews -f -
    $ curl -s {{< github_file >}}/samples/bookinfo/platform/kube/bookinfo.yaml | istioctl kube-inject -f - | kubectl apply -l app=reviews,version=v2 -f -
    service/details unchanged
    serviceaccount/bookinfo-details unchanged
    deployment.apps/details-v1 configured
    service/ratings unchanged
    serviceaccount/bookinfo-ratings unchanged
    deployment.apps/ratings-v1 configured
    serviceaccount/bookinfo-reviews unchanged
    service/productpage unchanged
    serviceaccount/bookinfo-productpage unchanged
    deployment.apps/productpage-v1 configured
    deployment.apps/reviews-v2 configured
    {{< /text >}}

1. Access the application's webpage several times. Note that Istio was added **transparently**: the original application did not change. It was added on the fly, without the need to undeploy and redeploy the whole application.

1. Check the application pods and verify that each pod now has two containers. One container is the microservice itself, the other is the sidecar proxy attached to it:

    {{< text bash >}}
    $ kubectl get pods
    details-v1-58c68b9ff-kz9lf        2/2   Running   0   2m
    productpage-v1-59b4f9f8d5-d4prx   2/2   Running   0   2m
    ratings-v1-b7b7fbbc9-sggxf        2/2   Running   0   2m
    reviews-v2-dfbcf859c-27dvk        2/2   Running   0   2m
    curl-88ddbcfdd-cc85s              1/1   Running   0   7h
    {{< /text >}}

1. Access the Istio dashboard using the custom URL you set in your `/etc/hosts` file [previously](/docs/examples/microservices-istio/bookinfo-kubernetes/#update-your-etc-hosts-configuration-file):

    {{< text plain >}}
    http://my-istio-dashboard.io/dashboard/db/istio-mesh-dashboard
    {{< /text >}}

1. In the top left drop-down menu, select _Istio Mesh Dashboard_. Note that now all the services from your namespace appear in the list of services.

    {{< image width="80%" link="dashboard-mesh-all.png" caption="Istio Mesh Dashboard" >}}

1. Check some other microservice in the _Istio Service Dashboard_, e.g. `ratings`:

    {{< image width="80%" link="dashboard-ratings.png" caption="Istio Service Dashboard" >}}

1. Visualize your application's topology by using the [Kiali](https://www.kiali.io) console, which is not a part of Istio, but is installed as part of the `demo` configuration. Access the dashboard using the custom URL you set in your `/etc/hosts` file [previously](/docs/examples/microservices-istio/bookinfo-kubernetes/#update-your-etc-hosts-configuration-file):

    {{< text plain >}}
    http://my-kiali.io/kiali/console
    {{< /text >}}

    If you installed Kiali as part of the [getting started](/docs/setup/getting-started/) instructions, your Kiali console user name is `admin` and the password is `admin`.

1. Click on the Graph tab and select your namespace in the _Namespace_ drop-down menu in the top left corner. In the _Display_ drop-down menu, mark the _Traffic Animation_ check box to see some cool traffic animation.

    {{< image width="80%" link="kiali-display-menu.png" caption="Kiali Graph Tab, display drop-down menu" >}}

1. Try different options in the _Edge Labels_ drop-down menu. Hover with the mouse over the nodes and edges of the graph. Notice the traffic metrics on the right.

    {{< image width="80%" link="kiali-edge-labels-menu.png" caption="Kiali Graph Tab, edge labels drop-down menu" >}}

    {{< image width="80%" link="kiali-initial.png" caption="Kiali Graph Tab" >}}

You are ready to [configure the Istio Ingress Gateway](/docs/examples/microservices-istio/istio-ingress-gateway).
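One hedged way to confirm the two-containers-per-pod state described above is to print the container names per pod with a jsonpath template. The `kubectl` call needs a live cluster, so it is shown commented; the template itself is plain text:

```shell
# Template producing one "<pod>: <container> <container> ..." line per pod.
JSONPATH='{range .items[*]}{.metadata.name}{": "}{range .spec.containers[*]}{.name}{" "}{end}{"\n"}{end}'
echo "$JSONPATH"
# Against a live cluster:
# kubectl get pods -o jsonpath="$JSONPATH"
# Each injected pod should list its app container plus "istio-proxy".
```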
https://github.com/istio/istio.io/blob/master//content/en/docs/examples/microservices-istio/enable-istio-all-microservices/index.md
{{< boilerplate work-in-progress >}}

In this module, you set up a Kubernetes cluster that has Istio installed and a namespace to use throughout the tutorial.

{{< warning >}}
If you are in a workshop and the instructors provide a cluster for you, proceed to [setting up your local computer](/docs/examples/microservices-istio/setup-local-computer).
{{< /warning >}}

1. Ensure you have access to a [Kubernetes cluster](https://kubernetes.io/docs/tutorials/kubernetes-basics/). You can use the [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/docs/quickstart) or the [IBM Cloud Kubernetes Service](https://cloud.ibm.com/docs/containers?topic=containers-getting-started).

1. Create an environment variable to store the name of a namespace that you will use when you run the tutorial commands. You can use any name, for example `tutorial`.

    {{< text bash >}}
    $ export NAMESPACE=tutorial
    {{< /text >}}

1. Create the namespace:

    {{< text bash >}}
    $ kubectl create namespace $NAMESPACE
    {{< /text >}}

    {{< tip >}}
    If you are an instructor, you should allocate a separate namespace for each participant. The tutorial supports work in multiple namespaces simultaneously by multiple participants.
    {{< /tip >}}

1. [Install Istio](/docs/setup/getting-started/) using the `demo` profile.

1. The [Kiali](/docs/ops/integrations/kiali/) and [Prometheus](/docs/ops/integrations/prometheus/) addons are used in this example and need to be installed. All addons are installed using:

    {{< text bash >}}
    $ kubectl apply -f @samples/addons@
    {{< /text >}}

    {{< tip >}}
    If there are errors trying to install the addons, try running the command again. There may be some timing issues which will be resolved when the command is run again.
    {{< /tip >}}

1. Create a Kubernetes Ingress resource for these common Istio services using the `kubectl` command shown. It is not necessary to be familiar with each of these services at this point in the tutorial.

    - [Grafana](https://grafana.com/docs/guides/getting_started/)
    - [Jaeger](https://www.jaegertracing.io/docs/1.13/getting-started/)
    - [Prometheus](https://prometheus.io/docs/prometheus/latest/getting_started/)
    - [Kiali](https://kiali.io/docs/installation/quick-start/)

    The `kubectl` command can accept an in-line configuration to create the Ingress resources for each service:

    {{< text bash >}}
    $ kubectl apply -f - <<EOF
    ...
    EOF
    {{< /text >}}

1. Create a role to provide read access to the `istio-system` namespace. This role is required to limit the permissions of the participants in the steps below.

    {{< text bash >}}
    $ kubectl apply -f - <<EOF
    ...
    EOF
    {{< /text >}}

1. Create a service account for each participant:

    {{< text bash >}}
    $ kubectl apply -f - <<EOF
    ...
    EOF
    {{< /text >}}

1. Limit each participant's permissions. During the tutorial, participants only need to create resources in their namespace and to read resources from the `istio-system` namespace. It is a good practice, even if using your own cluster, to avoid interfering with other namespaces in your cluster. Create a role to allow read-write access to each participant's namespace. Bind the participant's service account to this role and to the role for reading resources from `istio-system`:

    {{< text bash >}}
    $ kubectl apply -f - <<EOF
    ...
    EOF
    {{< /text >}}

1. Each participant needs to use their own Kubernetes configuration file. This configuration file specifies the cluster details, the service account, the credentials and the namespace of the participant. The `kubectl` command uses the configuration file to operate on the cluster. Generate a Kubernetes configuration file for each participant:

    {{< tip >}}
    This command assumes your cluster is named `tutorial-cluster`. If your cluster is named differently, replace all references with the name of your cluster.
    {{< /tip >}}

    {{< text bash >}}
    $ cat <<EOF > ./${NAMESPACE}-user-config.yaml
    apiVersion: v1
    kind: Config
    preferences: {}
    clusters:
    - cluster:
        certificate-authority-data: $(kubectl get secret $(kubectl get sa ${NAMESPACE}-user -n $NAMESPACE -o jsonpath={.secrets..name}) -n $NAMESPACE -o jsonpath='{.data.ca\.crt}')
        server: $(kubectl config view -o jsonpath="{.clusters[?(.name==\"$(kubectl config view -o jsonpath="{.contexts[?(.name==\"$(kubectl config current-context)\")].context.cluster}")\")].cluster.server}")
      name: ${NAMESPACE}-cluster
    users:
    - name: ${NAMESPACE}-user
      user:
        as-user-extra: {}
        client-key-data: $(kubectl get secret $(kubectl get sa ${NAMESPACE}-user -n $NAMESPACE -o jsonpath={.secrets..name}) -n $NAMESPACE -o jsonpath='{.data.ca\.crt}')
        token: $(kubectl get secret $(kubectl get sa ${NAMESPACE}-user -n $NAMESPACE -o jsonpath={.secrets..name}) -n $NAMESPACE -o
https://github.com/istio/istio.io/blob/master//content/en/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md
config view -o jsonpath="{.clusters[?(.name==\"$(kubectl config view -o jsonpath="{.contexts[?(.name==\"$(kubectl config current-context)\")].context.cluster}")\")].cluster.server}") name: ${NAMESPACE}-cluster users: - name: ${NAMESPACE}-user user: as-user-extra: {} client-key-data: $(kubectl get secret $(kubectl get sa ${NAMESPACE}-user -n $NAMESPACE -o jsonpath={.secrets..name}) -n $NAMESPACE -o jsonpath='{.data.ca\.crt}') token: $(kubectl get secret $(kubectl get sa ${NAMESPACE}-user -n $NAMESPACE -o jsonpath={.secrets..name}) -n $NAMESPACE -o jsonpath={.data.token} | base64 --decode) contexts: - context: cluster: ${NAMESPACE}-cluster namespace: ${NAMESPACE} user: ${NAMESPACE}-user name: ${NAMESPACE} current-context: ${NAMESPACE} EOF {{< /text >}} 1. Set the `KUBECONFIG` environment variable for the `${NAMESPACE}-user-config.yaml` configuration file: {{< text bash >}} $ export KUBECONFIG=$PWD/${NAMESPACE}-user-config.yaml {{< /text >}} 1. Verify that the configuration took effect by printing the current namespace: {{< text bash >}} $ kubectl config view -o jsonpath="{.contexts[?(@.name==\"$(kubectl config current-context)\")].context.namespace}" tutorial {{< /text >}} You should see the name of your namespace in the output. 1. If you are setting up the cluster for yourself, copy the `${NAMESPACE}-user-config.yaml` file mentioned in the previous steps to your local computer, where `${NAMESPACE}` is the name of the namespace you provided in the previous steps. For example, `tutorial-user-config.yaml`. You will need this file later in the tutorial. If you are an instructor, send the generated configuration files to each participant. The participants must copy their configuration file to their local computer. Congratulations, you configured your cluster for the tutorial! You are ready to [set up a local computer](/docs/examples/microservices-istio/setup-local-computer).
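To sanity-check the restricted permissions this kubeconfig grants, `kubectl auth can-i` is handy. A sketch, assuming the file name pattern above; the cluster calls are commented out since they need the generated file and a live cluster:

```shell
NAMESPACE="${NAMESPACE:-tutorial}"
USER_CONFIG="${PWD}/${NAMESPACE}-user-config.yaml"
echo "would test with: $USER_CONFIG"
# Expected answers given the roles above:
# KUBECONFIG="$USER_CONFIG" kubectl auth can-i create deployments           # yes (own namespace)
# KUBECONFIG="$USER_CONFIG" kubectl auth can-i get pods -n istio-system     # yes (read-only role)
# KUBECONFIG="$USER_CONFIG" kubectl auth can-i create pods -n istio-system  # no
```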
https://github.com/istio/istio.io/blob/master//content/en/docs/examples/microservices-istio/setup-kubernetes-cluster/index.md
Until now, you used a Kubernetes Ingress to access your application from the outside. In this module, you configure the traffic to enter through an Istio ingress gateway, in order to apply Istio control on the traffic to your microservices.

1. Store the name of your namespace in the `NAMESPACE` environment variable. You will need it to recognize your microservices in the logs:

    {{< text bash >}}
    $ export NAMESPACE=$(kubectl config view -o jsonpath="{.contexts[?(@.name == \"$(kubectl config current-context)\")].context.namespace}")
    $ echo $NAMESPACE
    tutorial
    {{< /text >}}

1. Create an environment variable for the hostname of the Istio ingress gateway:

    {{< text bash >}}
    $ export MY_INGRESS_GATEWAY_HOST=istio.$NAMESPACE.bookinfo.com
    $ echo $MY_INGRESS_GATEWAY_HOST
    istio.tutorial.bookinfo.com
    {{< /text >}}

1. Configure an Istio ingress gateway:

    {{< text bash >}}
    $ kubectl apply -f - <<EOF
    ...
    EOF
    {{< /text >}}

1. Set `INGRESS_HOST` and `INGRESS_PORT` using the instructions in the [Determining the Ingress IP and ports](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports) section.

1. Add the output of this command to your `/etc/hosts` file:

    {{< text bash >}}
    $ echo $INGRESS_HOST $MY_INGRESS_GATEWAY_HOST
    {{< /text >}}

1. Access the application's home page from the command line:

    {{< text bash >}}
    $ curl -s $MY_INGRESS_GATEWAY_HOST:$INGRESS_PORT/productpage | grep -o "<title>.*</title>"
    <title>Simple Bookstore App</title>
    {{< /text >}}

1. Paste the output of the following command in your browser address bar:

    {{< text bash >}}
    $ echo http://$MY_INGRESS_GATEWAY_HOST:$INGRESS_PORT/productpage
    {{< /text >}}

1. Simulate real-world user traffic to your application by setting an infinite loop in a new terminal window:

    {{< text bash >}}
    $ while :; do curl -s $MY_INGRESS_GATEWAY_HOST:$INGRESS_PORT/productpage | grep -o "<title>.*</title>"; sleep 1; done
    <title>Simple Bookstore App</title>
    <title>Simple Bookstore App</title>
    <title>Simple Bookstore App</title>
    <title>Simple Bookstore App</title>
    ...
    {{< /text >}}

1. Check the graph of your namespace in the Kiali console, `my-kiali.io/kiali/console`. (The `my-kiali.io` URL should be in your `/etc/hosts` file that you set [previously](/docs/examples/microservices-istio/bookinfo-kubernetes/#update-your-etc-hosts-configuration-file).) This time, you can see that traffic arrives from two sources: `unknown` (the Kubernetes Ingress) and `istio-ingressgateway istio-system` (the Istio Ingress Gateway).

    {{< image width="80%" link="kiali-ingress-gateway.png" caption="Kiali Graph Tab with Istio Ingress Gateway" >}}

1. At this point you can stop sending requests through the Kubernetes Ingress and use the Istio Ingress Gateway only. Stop the infinite loop (`Ctrl-C` in the terminal window) you set in the previous steps. In a real production environment, you would update the DNS entry of your application to contain the IP of the Istio ingress gateway, or configure your external load balancer.

1. Delete the Kubernetes Ingress resource:

    {{< text bash >}}
    $ kubectl delete ingress bookinfo
    ingress.extensions "bookinfo" deleted
    {{< /text >}}

1. In a new terminal window, restart the real-world user traffic simulation as described in the previous steps.

1. Check your graph in the Kiali console. After about a minute, you will see the Istio Ingress Gateway as a single source of traffic for your application.

    {{< image width="80%" link="kiali-ingress-gateway-only.png" caption="Kiali Graph Tab with Istio Ingress Gateway as a single source of traffic" >}}

You are ready to configure [logging with Istio](/docs/examples/microservices-istio/logs-istio).
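For reference, `INGRESS_HOST` and `INGRESS_PORT` for a `LoadBalancer` environment are usually derived roughly as below; see the linked "Determining the Ingress IP and ports" task for the authoritative, environment-specific steps. This sketch uses placeholder values, with the real `kubectl` queries commented out:

```shell
# Against a live cluster with a LoadBalancer ingress gateway:
# INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
#   -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
#   -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
INGRESS_HOST="203.0.113.20"  # placeholder for the load balancer IP
INGRESS_PORT="80"            # placeholder for the http2 service port
echo "http://istio.tutorial.bookinfo.com:${INGRESS_PORT}/productpage"
```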
https://github.com/istio/istio.io/blob/master//content/en/docs/examples/microservices-istio/istio-ingress-gateway/index.md
{{< boilerplate work-in-progress >}}

Test your microservice, in production!

## Testing individual microservices

1. Issue an HTTP request from the testing pod to one of your services:

    {{< text bash >}}
    $ kubectl exec $(kubectl get pod -l app=curl -o jsonpath='{.items[0].metadata.name}') -- curl -sS http://ratings:9080/ratings/7
    {{< /text >}}

## Chaos testing

Perform some [chaos testing](http://www.boyter.org/2016/07/chaos-testing-engineering/) in production and see how your application reacts. After each chaos operation, access the application's webpage and see if anything changed. Check the pods' status with `kubectl get pods`.

1. Terminate the `details` service in one pod.

    {{< text bash >}}
    $ kubectl exec $(kubectl get pods -l app=details -o jsonpath='{.items[0].metadata.name}') -- pkill ruby
    {{< /text >}}

1. Check the pods' status:

    {{< text bash >}}
    $ kubectl get pods
    NAME                            READY   STATUS    RESTARTS   AGE
    details-v1-6d86fd9949-fr59p     1/1     Running   1          47m
    details-v1-6d86fd9949-mksv7     1/1     Running   0          47m
    details-v1-6d86fd9949-q8rrf     1/1     Running   0          48m
    productpage-v1-c9965499-hwhcn   1/1     Running   0          47m
    productpage-v1-c9965499-nccwq   1/1     Running   0          47m
    productpage-v1-c9965499-tjdjx   1/1     Running   0          48m
    ratings-v1-7bf577cb77-cbdsg     1/1     Running   0          47m
    ratings-v1-7bf577cb77-cz6jm     1/1     Running   0          47m
    ratings-v1-7bf577cb77-pq9kg     1/1     Running   0          48m
    reviews-v1-77c65dc5c6-5wt8g     1/1     Running   0          47m
    reviews-v1-77c65dc5c6-kjvxs     1/1     Running   0          48m
    reviews-v1-77c65dc5c6-r55tl     1/1     Running   0          47m
    curl-88ddbcfdd-l9zq4            1/1     Running   0          47m
    {{< /text >}}

    Note that the first pod was restarted once.

1. Terminate the `details` service in all its pods:

    {{< text bash >}}
    $ for pod in $(kubectl get pods -l app=details -o jsonpath='{.items[*].metadata.name}'); do echo terminating "$pod"; kubectl exec "$pod" -- pkill ruby; done
    {{< /text >}}

1. Check the webpage of the application:

    {{< image width="80%" link="bookinfo-details-unavailable.png" caption="Bookinfo Web Application, details unavailable" >}}

    Note that the details section contains error messages instead of book details.

1. Check the pods' status:

    {{< text bash >}}
    $ kubectl get pods
    NAME                            READY   STATUS    RESTARTS   AGE
    details-v1-6d86fd9949-fr59p     1/1     Running   2          48m
    details-v1-6d86fd9949-mksv7     1/1     Running   1          48m
    details-v1-6d86fd9949-q8rrf     1/1     Running   1          49m
    productpage-v1-c9965499-hwhcn   1/1     Running   0          48m
    productpage-v1-c9965499-nccwq   1/1     Running   0          48m
    productpage-v1-c9965499-tjdjx   1/1     Running   0          48m
    ratings-v1-7bf577cb77-cbdsg     1/1     Running   0          48m
    ratings-v1-7bf577cb77-cz6jm     1/1     Running   0          48m
    ratings-v1-7bf577cb77-pq9kg     1/1     Running   0          49m
    reviews-v1-77c65dc5c6-5wt8g     1/1     Running   0          48m
    reviews-v1-77c65dc5c6-kjvxs     1/1     Running   0          49m
    reviews-v1-77c65dc5c6-r55tl     1/1     Running   0          48m
    curl-88ddbcfdd-l9zq4            1/1     Running   0          48m
    {{< /text >}}

    The first pod restarted twice and two other `details` pods restarted once. You may experience the `Error` and the `CrashLoopBackOff` statuses until the pods reach the `Running` status.

1. Use `Ctrl-C` in the terminal to stop the infinite loop that is running to simulate traffic.

In both cases, the application did not crash. The crash in the `details` microservice did not cause other microservices to fail. This behavior means you did not have a **cascading failure** in this situation. Instead, you had **gradual service degradation**: despite one microservice crashing, the application could still provide useful functionality. It displayed the reviews and the basic information about the book.

You are ready to [add a new version of the reviews application](/docs/examples/microservices-istio/add-new-microservice-version).
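During the chaos test above it helps to watch the restart counts directly rather than eyeballing the `kubectl get pods` table. A sketch; the live command needs a cluster, so a stub stands in for its output here:

```shell
restart_counts() {
  # Live version (needs a cluster):
  # kubectl get pods -l app=details -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.containerStatuses[0].restartCount}{"\n"}{end}'
  printf 'details-v1-6d86fd9949-fr59p 2\n'  # stubbed sample output
}
restart_counts
```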
https://github.com/istio/istio.io/blob/master//content/en/docs/examples/microservices-istio/production-testing/index.md
This example deploys a sample application composed of four separate microservices used to demonstrate various Istio features. {{< tip >}} If you installed Istio using the [Getting Started](/docs/setup/getting-started/) instructions, you already have Bookinfo installed and you can skip most of these steps and go directly to [Define the service versions](/docs/examples/bookinfo/#define-the-service-versions). {{< /tip >}} The application displays information about a book, similar to a single catalog entry of an online book store. Displayed on the page is a description of the book, book details (ISBN, number of pages, and so on), and a few book reviews. The Bookinfo application is broken into four separate microservices: \* `productpage`. The `productpage` microservice calls the `details` and `reviews` microservices to populate the page. \* `details`. The `details` microservice contains book information. \* `reviews`. The `reviews` microservice contains book reviews. It also calls the `ratings` microservice. \* `ratings`. The `ratings` microservice contains book ranking information that accompanies a book review. There are 3 versions of the `reviews` microservice: \* Version v1 doesn't call the `ratings` service. \* Version v2 calls the `ratings` service, and displays each rating as 1 to 5 black stars. \* Version v3 calls the `ratings` service, and displays each rating as 1 to 5 red stars. The end-to-end architecture of the application is shown below. {{< image width="80%" link="./noistio.svg" caption="Bookinfo Application without Istio" >}} This application is polyglot, i.e., the microservices are written in different languages. It’s worth noting that these services have no dependencies on Istio, but make an interesting service mesh example, particularly because of the multitude of services, languages and versions for the `reviews` service. 
## Before you begin If you haven't already done so, setup Istio by following the instructions in the [installation guide](/docs/setup/). {{< boilerplate gateway-api-support >}} ## Deploying the application To run the sample with Istio requires no changes to the application itself. Instead, you simply need to configure and run the services in an Istio-enabled environment, with Envoy sidecars injected along side each service. The resulting deployment will look like this: {{< image width="80%" link="./withistio.svg" caption="Bookinfo Application" >}} All of the microservices will be packaged with an Envoy sidecar that intercepts incoming and outgoing calls for the services, providing the hooks needed to externally control, via the Istio control plane, routing, telemetry collection, and policy enforcement for the application as a whole. ### Start the application services {{< tip >}} If you use GKE, please ensure your cluster has at least 4 standard GKE nodes. If you use Minikube, please ensure you have at least 4GB RAM. {{< /tip >}} 1. Change directory to the root of the Istio installation. 1. The default Istio installation uses [automatic sidecar injection](/docs/setup/additional-setup/sidecar-injection/#automatic-sidecar-injection). Label the namespace that will host the application with `istio-injection=enabled`: {{< text bash >}} $ kubectl label namespace default istio-injection=enabled {{< /text >}} 1. Deploy your application using the `kubectl` command: {{< text bash >}} $ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@ {{< /text >}} The command launches all four services shown in the `bookinfo` application architecture diagram. All 3 versions of the reviews service, v1, v2, and v3, are started. {{< tip >}} In a realistic deployment, new versions of a microservice are deployed over time instead of deploying all versions simultaneously. {{< /tip >}} 1. 
Confirm all services and pods are correctly defined and running:
{{< text bash >}}
$ kubectl get services
NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.0.0.31    <none>        9080/TCP   6m
kubernetes    ClusterIP   10.0.0.1     <none>        443/TCP    7d
productpage   ClusterIP   10.0.0.120   <none>        9080/TCP   6m
ratings       ClusterIP   10.0.0.15    <none>        9080/TCP   6m
reviews       ClusterIP   10.0.0.170   <none>        9080/TCP   6m
{{< /text >}}

and

{{< text bash >}}
$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
details-v1-1520924117-48z17      2/2     Running   0          6m
productpage-v1-560495357-jk1lz   2/2     Running   0          6m
ratings-v1-734492171-rnr5l       2/2     Running   0          6m
reviews-v1-874083890-f0qf0       2/2     Running   0          6m
reviews-v2-1343845940-b34q5      2/2     Running   0          6m
reviews-v3-1813607990-8ch52      2/2     Running   0          6m
{{< /text >}}

1. To confirm that the Bookinfo application is running, send a request to it by a `curl` command from some pod, for example from `ratings`:

    {{< text bash >}}
    $ kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
    <title>Simple Bookstore App</title>
    {{< /text >}}

### Determine the ingress IP and port

Now that the Bookinfo services are up and running, you need to make the application accessible from outside of your Kubernetes cluster, e.g., from a browser. A gateway is used for this purpose.

1.
Create a gateway for the Bookinfo application:

{{< tabset category-name="config-api" >}}

{{< tab name="Istio APIs" category-value="istio-apis" >}}

Create an [Istio Gateway](/docs/concepts/traffic-management/#gateways) using the following command:

{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
{{< /text >}}

Confirm the gateway has been created:

{{< text bash >}}
$ kubectl get gateway
NAME               AGE
bookinfo-gateway   32s
{{< /text >}}

Follow [these instructions](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports) to set the `INGRESS_HOST` and `INGRESS_PORT` variables for accessing the gateway. Return here when they are set.

{{< /tab >}}

{{< tab name="Gateway API" category-value="gateway-api" >}}

{{< boilerplate external-loadbalancer-support >}}

Create a [Kubernetes Gateway](https://gateway-api.sigs.k8s.io/api-types/gateway/) using the following command:

{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/gateway-api/bookinfo-gateway.yaml@
gateway.gateway.networking.k8s.io/bookinfo-gateway created
httproute.gateway.networking.k8s.io/bookinfo created
{{< /text >}}

Because creating a Kubernetes `Gateway` resource will also [deploy an associated proxy service](/docs/tasks/traffic-management/ingress/gateway-api/#automated-deployment), run the following command to wait for the gateway to be ready:

{{< text bash >}}
$ kubectl wait --for=condition=programmed gtw bookinfo-gateway
{{< /text >}}

Get the gateway address and port from the bookinfo gateway resource:

{{< text bash >}}
$ export INGRESS_HOST=$(kubectl get gtw bookinfo-gateway -o jsonpath='{.status.addresses[0].value}')
$ export INGRESS_PORT=$(kubectl get gtw bookinfo-gateway -o jsonpath='{.spec.listeners[?(@.name=="http")].port}')
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

1.
Set `GATEWAY_URL`:

{{< text bash >}}
$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
{{< /text >}}

## Confirm the app is accessible from outside the cluster

To confirm that the Bookinfo application is accessible from outside the cluster, run the following `curl` command:

{{< text bash >}}
$ curl -s "http://${GATEWAY_URL}/productpage" | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
{{< /text >}}

You can also point your browser to `http://$GATEWAY_URL/productpage` to view the Bookinfo web page. If you refresh the page several times, you should see different versions of reviews shown in `productpage`, presented in a round robin style (red stars, black stars, no stars), since we haven't yet used Istio to control the version routing.

## Define the service versions

Before you can use Istio to control the Bookinfo version routing, you need to define the available versions.

{{< tabset category-name="config-api" >}}

{{< tab name="Istio APIs" category-value="istio-apis" >}}

Istio uses *subsets*, in [destination rules](/docs/concepts/traffic-management/#destination-rules), to define versions of a service. Run the following command to create default destination rules for the Bookinfo services:

{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/networking/destination-rule-all.yaml@
{{< /text >}}

{{< tip >}}
The `default` and `demo` [configuration profiles](/docs/setup/additional-setup/config-profiles/) have [auto mutual TLS](/docs/tasks/security/authentication/authn-policy/#auto-mutual-tls) enabled by default. To enforce mutual TLS, use the destination rules in `samples/bookinfo/networking/destination-rule-all-mtls.yaml`.
{{< /tip >}}

Wait a few seconds for the destination
rules to propagate. You can display the destination rules with the following command:

{{< text bash >}}
$ kubectl get destinationrules -o yaml
{{< /text >}}

{{< /tab >}}

{{< tab name="Gateway API" category-value="gateway-api" >}}

Unlike the Istio API, which uses `DestinationRule` subsets to define the versions of a service, the Kubernetes Gateway API uses backend service definitions for this purpose. Run the following command to create backend service definitions for the three versions of the `reviews` service:

{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-versions.yaml@
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

## What's next

You can now use this sample to experiment with Istio's features for traffic routing, fault injection, rate limiting, etc. To proceed, refer to one or more of the [Istio Tasks](/docs/tasks), depending on your interest. [Configuring Request Routing](/docs/tasks/traffic-management/request-routing/) is a good place to start for beginners.

## Cleanup

When you're finished experimenting with the Bookinfo sample, uninstall and clean it up using the following command:

{{< text bash >}}
$ @samples/bookinfo/platform/kube/cleanup.sh@
{{< /text >}}
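For reference, the subsets created by `destination-rule-all.yaml` in the "Define the service versions" step above look roughly like this for the `reviews` service. This is a sketch; consult the sample file shipped with your Istio release for the authoritative contents and API version:

```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
```

Each subset selects pods by their `version` label, which is how virtual services can later route to a single version of `reviews`.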
This example deploys the Bookinfo application across Kubernetes with one service running on a virtual machine (VM), and illustrates how to control this infrastructure as a single mesh.

## Overview

{{< image width="80%" link="./vm-bookinfo.svg" caption="Bookinfo running on VMs" >}}

## Before you begin

- Set up Istio by following the instructions in the [Virtual Machine Installation guide](/docs/setup/install/virtual-machine/).
- Deploy the [Bookinfo](/docs/examples/bookinfo/) sample application (in the `bookinfo` namespace).
- Create a VM and add it to the `vm` namespace, following the steps in [Configure the virtual machine](/docs/setup/install/virtual-machine/#configure-the-virtual-machine).

## Running MySQL on the VM

We will first install MySQL on the VM, and configure it as a backend for the ratings service. All commands below should be run on the VM.

Install `mariadb`:

{{< text bash >}}
$ sudo apt-get update && sudo apt-get install -y mariadb-server
$ sudo sed -i '/bind-address/c\bind-address = 0.0.0.0' /etc/mysql/mariadb.conf.d/50-server.cnf
{{< /text >}}

Set up authentication:

{{< text bash >}}
$ cat <<EOF | sudo mysql
...
EOF
{{< /text >}}

You can find details of configuring MySQL at [Mysql](https://mariadb.com/kb/en/library/download/).

On the VM, add the ratings database to mysql.
{{< text bash >}}
$ curl -LO {{< github_file >}}/samples/bookinfo/src/mysql/mysqldb-init.sql
$ mysql -u root -ppassword < mysqldb-init.sql
{{< /text >}}

To make it easy to visually inspect the difference in the output of the Bookinfo application, you can change the ratings that are generated. Use the following command to inspect the ratings:

{{< text bash >}}
$ mysql -u root -ppassword test -e "select * from ratings;"
+----------+--------+
| ReviewID | Rating |
+----------+--------+
|        1 |      5 |
|        2 |      4 |
+----------+--------+
{{< /text >}}

and to change the ratings:

{{< text bash >}}
$ mysql -u root -ppassword test -e "update ratings set rating=1 where reviewid=1;select * from ratings;"
+----------+--------+
| ReviewID | Rating |
+----------+--------+
|        1 |      1 |
|        2 |      4 |
+----------+--------+
{{< /text >}}

## Expose the mysql service to the mesh

When the virtual machine is started, it will automatically be registered into the mesh. However, just like when creating a Pod, we still need to create a Service before we can easily access it.

{{< text bash >}}
$ cat <<EOF | kubectl apply -f -
...
EOF
{{< /text >}}

## Using the mysql service

The ratings service in Bookinfo will use the DB on the machine. To verify that it works, create version 2 of the ratings service that uses the mysql db on the VM. Then specify route rules that force the review service to use the ratings version 2.

{{< text bash >}}
$ kubectl apply -n bookinfo -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2-mysql-vm.yaml@
{{< /text >}}

Create route rules that will force Bookinfo to use the ratings back end:

{{< text bash >}}
$ kubectl apply -n bookinfo -f @samples/bookinfo/networking/virtual-service-ratings-mysql-vm.yaml@
{{< /text >}}

You can verify that the output of the Bookinfo application shows 1 star from Reviewer1 and 4 stars from Reviewer2, or change the ratings on your VM and see the results.

## Reaching Kubernetes services from the virtual machine

In the above example, we treated our virtual machine as only a server.
We can also seamlessly call Kubernetes services from our virtual machine:

{{< text bash >}}
$ curl productpage.bookinfo:9080/productpage
...
<title>Simple Bookstore App</title>
...
{{< /text >}}

Istio's [DNS proxying](/docs/ops/configuration/traffic-management/dns-proxy/) automatically configures DNS for the virtual machine, allowing us to make calls to Kubernetes hostnames.

## Cleanup

- Delete the `Bookinfo` sample application and its configuration following the steps in [`Bookinfo` cleanup](/docs/examples/bookinfo/#cleanup).
- Delete the `mysqldb` Service:

    {{< text syntax=bash snip_id=none >}}
    $ kubectl delete service mysqldb
    {{< /text >}}

- Clean up the VM following the steps in [virtual-machine uninstall](/docs/setup/install/virtual-machine/#uninstall).
Breaking down a monolithic application into atomic services offers various benefits, including better agility, better scalability and better ability to reuse services. However, microservices also have particular security needs:

- To defend against man-in-the-middle attacks, they need traffic encryption.
- To provide flexible service access control, they need mutual TLS and fine-grained access policies.
- To determine who did what at what time, they need auditing tools.

Istio Security provides a comprehensive security solution to solve these issues. This page gives an overview on how you can use Istio security features to secure your services, wherever you run them. In particular, Istio security mitigates both insider and external threats against your data, endpoints, communication, and platform.

{{< image width="75%" link="./overview.svg" caption="Security overview" >}}

The Istio security features provide strong identity, powerful policy, transparent TLS encryption, and authentication, authorization and audit (AAA) tools to protect your services and data. The goals of Istio security are:

- Security by default: no changes needed to application code and infrastructure
- Defense in depth: integrate with existing security systems to provide multiple layers of defense
- Zero-trust network: build security solutions on distrusted networks

Visit our [mutual TLS Migration docs](/docs/tasks/security/authentication/mtls-migration/) to start using Istio security features with your deployed services. Visit our [Security Tasks](/docs/tasks/security/) for detailed instructions to use the security features.
## High-level architecture

Security in Istio involves multiple components:

- A Certificate Authority (CA) for key and certificate management
- The configuration API server distributes to the proxies:
    - [authentication policies](/docs/concepts/security/#authentication-policies)
    - [authorization policies](/docs/concepts/security/#authorization-policies)
    - [secure naming information](/docs/concepts/security/#secure-naming)
- Sidecar and perimeter proxies work as [Policy Enforcement Points](https://csrc.nist.gov/glossary/term/policy_enforcement_point) (PEPs) to secure communication between clients and servers.
- A set of Envoy proxy extensions to manage telemetry and auditing

The control plane handles configuration from the API server and configures the PEPs in the data plane. The PEPs are implemented using Envoy. The following diagram shows the architecture.

{{< image width="75%" link="./arch-sec.svg" caption="Security Architecture" >}}

In the following sections, we introduce the Istio security features in detail.

## Istio identity

Identity is a fundamental concept of any security infrastructure. At the beginning of a workload-to-workload communication, the two parties must exchange credentials with their identity information for mutual authentication purposes. On the client side, the server's identity is checked against the [secure naming](/docs/concepts/security/#secure-naming) information to see if it is an authorized runner of the workload. On the server side, the server can determine what information the client can access based on the [authorization policies](/docs/concepts/security/#authorization-policies), audit who accessed what at what time, charge clients based on the workloads they used, and reject any clients who failed to pay their bill from accessing the workloads.

The Istio identity model uses the first-class `service identity` to determine the identity of a request's origin.
This model allows for great flexibility and granularity for service identities to represent a human user, an individual workload, or a group of workloads. On platforms without a service identity, Istio can use other identities that can group workload instances, such as service names.

The following list shows examples of service identities that you can use on different platforms:

- Kubernetes: Kubernetes service account
- GCE: GCP service account
- On-premises (non-Kubernetes): user account, custom service account, service name, Istio service account, or GCP service account. The custom service account refers to an existing service account managed by the customer's Identity Directory.

## Identity and certificate management {#pki}

Istio securely provisions strong identities to every workload with X.509 certificates. Istio agents, running alongside each Envoy proxy, work together with `istiod` to automate key and certificate rotation at scale. The
following diagram shows the identity provisioning flow.

{{< image width="40%" link="./id-prov.svg" caption="Identity Provisioning Workflow" >}}

Istio provisions keys and certificates through the following flow:

1. `istiod` offers a gRPC service to take [certificate signing requests](https://en.wikipedia.org/wiki/Certificate_signing_request) (CSRs).
1. When started, the Istio agent creates the private key and CSR, and then sends the CSR with its credentials to `istiod` for signing.
1. The CA in `istiod` validates the credentials carried in the CSR. Upon successful validation, it signs the CSR to generate the certificate.
1. When a workload is started, Envoy requests the certificate and key from the Istio agent in the same container via the [Envoy secret discovery service (SDS)](https://www.envoyproxy.io/docs/envoy/latest/configuration/security/secret#secret-discovery-service-sds) API.
1. The Istio agent sends the certificates received from `istiod` and the private key to Envoy via the Envoy SDS API.
1. The Istio agent monitors the expiration of the workload certificate. The above process repeats periodically for certificate and key rotation.

## ClusterTrustBundle

`ClusterTrustBundle` is a Kubernetes Custom Resource Definition (CRD) introduced to help manage trusted Certificate Authority (CA) bundles cluster-wide. It is primarily used to distribute and trust public X.509 certificates across the entire cluster. This concept is especially useful in environments where components and workloads need to validate TLS certificates signed by non-standard or private CAs.
Istio has added experimental support for this in recent versions, making it easier to manage trust for services.

### Enabling the feature

To use `ClusterTrustBundle` in Istio, you must enable it by setting a flag during installation. Here's how:

1. Ensure your Kubernetes cluster is version 1.27 or later and that [`ClusterTrustBundles` are enabled](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#cluster-trust-bundles).
1. Add this to your Istio configuration:

    {{< text yaml >}}
    values:
      pilot:
        env:
          ENABLE_CLUSTER_TRUST_BUNDLE_API: "true"
    {{< /text >}}

### Creating and Using ClusterTrustBundles

You create `ClusterTrustBundles` as Kubernetes resources, for example:

{{< text yaml >}}
apiVersion: certificates.k8s.io/v1alpha1
kind: ClusterTrustBundle
metadata:
  name: my-trust-bundle
spec:
  trustBundle: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
{{< /text >}}

Once created, the Istio control plane will use these for validating certificates in secure communications, like mutual TLS (mTLS).

### Important notes

- This is experimental, so expect changes in future versions.
- Make sure the Istio service account has the right permissions to access `ClusterTrustBundles`, or you may encounter errors.

## Authentication

Istio provides two types of authentication:

- Peer authentication: used for service-to-service authentication to verify the client making the connection. Istio offers [mutual TLS](https://en.wikipedia.org/wiki/Mutual_authentication) as a full stack solution for transport authentication, which can be enabled without requiring service code changes. This solution:
    - Provides each service with a strong identity representing its role to enable interoperability across clusters and clouds.
    - Secures service-to-service communication.
    - Provides a key management system to automate key and certificate generation, distribution, and rotation.
- Request authentication: used for end-user authentication to verify the credential attached to the request. Istio enables request-level authentication with JSON Web Token (JWT) validation and a streamlined developer experience using a custom authentication provider or any OpenID Connect providers, for example:
    - [ORY Hydra](https://www.ory.sh/)
    - [Keycloak](https://www.keycloak.org/)
    - [Auth0](https://auth0.com/)
    - [Firebase Auth](https://firebase.google.com/docs/auth/)
    - [Google Auth](https://developers.google.com/identity/protocols/OpenIDConnect)

In all cases, Istio stores the authentication policies in the `Istio config store` via a custom Kubernetes API. {{< gloss >}}Istiod{{< /gloss >}} keeps them up-to-date for each proxy, along with the keys where appropriate. Additionally, Istio supports authentication in permissive mode to help you understand
how a policy change can affect your security posture before it is enforced.

### Mutual TLS authentication

Istio tunnels service-to-service communication through the client- and server-side PEPs, which are implemented as [Envoy proxies](https://www.envoyproxy.io/). When a workload sends a request to another workload using mutual TLS authentication, the request is handled as follows:

1. Istio re-routes the outbound traffic from a client to the client's local sidecar Envoy.
1. The client side Envoy starts a mutual TLS handshake with the server side Envoy. During the handshake, the client side Envoy also does a [secure naming](/docs/concepts/security/#secure-naming) check to verify that the service account presented in the server certificate is authorized to run the target service.
1. The client side Envoy and the server side Envoy establish a mutual TLS connection, and Istio forwards the traffic from the client side Envoy to the server side Envoy.
1. The server side Envoy authorizes the request. If authorized, it forwards the traffic to the backend service through local TCP connections.

Istio configures `TLSv1_2` as the minimum TLS version for both client and server with the following cipher suites:

- `ECDHE-ECDSA-AES256-GCM-SHA384`
- `ECDHE-RSA-AES256-GCM-SHA384`
- `ECDHE-ECDSA-AES128-GCM-SHA256`
- `ECDHE-RSA-AES128-GCM-SHA256`
- `AES256-GCM-SHA384`
- `AES128-GCM-SHA256`

#### Permissive mode

Istio mutual TLS has a permissive mode, which allows a service to accept both plaintext traffic and mutual TLS traffic at the same time.
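Permissive mode is selected with the `mtls.mode` field of a `PeerAuthentication` policy. A minimal sketch (the policy name and `foo` namespace are illustrative):

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: example-permissive-policy
  namespace: foo
spec:
  mtls:
    mode: PERMISSIVE
```

With no `selector`, a policy like this applies to every workload in its namespace, following the scoping rules described later on this page.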
This feature greatly improves the mutual TLS onboarding experience. Many non-Istio clients communicating with a non-Istio server presents a problem for an operator who wants to migrate that server to Istio with mutual TLS enabled. Commonly, the operator cannot install an Istio sidecar for all clients at the same time or does not even have the permissions to do so on some clients. Even after installing the Istio sidecar on the server, the operator cannot enable mutual TLS without breaking existing communications.

With the permissive mode enabled, the server accepts both plaintext and mutual TLS traffic. The mode provides greater flexibility for the on-boarding process. The server's installed Istio sidecar takes mutual TLS traffic immediately without breaking existing plaintext traffic. As a result, the operator can gradually install and configure the client's Istio sidecars to send mutual TLS traffic. Once the configuration of the clients is complete, the operator can configure the server to mutual TLS only mode. For more information, visit the [Mutual TLS Migration tutorial](/docs/tasks/security/authentication/mtls-migration).

#### Secure naming

Server identities are encoded in certificates, but service names are retrieved through the discovery service or DNS. The secure naming information maps the server identities to the service names. A mapping of identity `A` to service name `B` means "`A` is authorized to run service `B`". The control plane watches the `apiserver`, generates the secure naming mappings, and distributes them securely to the PEPs. The following example explains why secure naming is critical in authentication.

Suppose the legitimate servers that run the service `datastore` only use the `infra-team` identity. A malicious user has the certificate and key for the `test-team` identity. The malicious user intends to impersonate the service to inspect the data sent from the clients.
The malicious user deploys a forged server with the certificate and key for the `test-team` identity. Suppose the malicious user successfully hijacked (through DNS spoofing, BGP/route hijacking, ARP spoofing, etc.) the traffic sent to the `datastore` and redirected it to the
forged server. When a client calls the `datastore` service, it extracts the `test-team` identity from the server's certificate, and checks whether `test-team` is allowed to run `datastore` with the secure naming information. The client detects that `test-team` is not allowed to run the `datastore` service and the authentication fails.

Note that, for non-HTTP/HTTPS traffic, secure naming doesn't protect from DNS spoofing, in which case the attacker modifies the destination IPs for the service. Since TCP traffic does not contain `Host` information and Envoy can only rely on the destination IP for routing, Envoy may route traffic to services on the hijacked IPs. This DNS spoofing can happen even before the client-side Envoy receives the traffic.

### Authentication architecture

You can specify authentication requirements for workloads receiving requests in an Istio mesh using peer and request authentication policies. The mesh operator uses `.yaml` files to specify the policies. The policies are saved in the Istio configuration storage once deployed. The Istio controller watches the configuration storage.

Upon any policy changes, the new policy is translated to the appropriate configuration telling the PEP how to perform the required authentication mechanisms. The control plane may fetch the public key and attach it to the configuration for JWT validation. Alternatively, Istiod provides the path to the keys and certificates the Istio system manages and installs them to the application pod for mutual TLS. You can find more info in the [Identity and certificate management section](#pki).
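For the JWT case just described, the issuer and the JWKS endpoint from which the public keys are fetched are declared in a `RequestAuthentication` policy. A minimal sketch (the issuer URL, JWKS URI, and `httpbin` label are illustrative):

```yaml
apiVersion: security.istio.io/v1
kind: RequestAuthentication
metadata:
  name: example-jwt
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  jwtRules:
  - issuer: "https://accounts.example.com"
    jwksUri: "https://accounts.example.com/.well-known/jwks.json"
```

The control plane resolves `jwksUri` to obtain the public keys, so the proxies themselves can validate incoming JWTs without contacting the identity provider on every request.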
Istio sends configurations to the targeted endpoints asynchronously. Once the proxy receives the configuration, the new authentication requirement takes effect immediately on that pod.

Client services, those that send requests, are responsible for following the necessary authentication mechanism. For request authentication, the application is responsible for acquiring and attaching the JWT credential to the request. For peer authentication, Istio automatically upgrades all traffic between two PEPs to mutual TLS. If authentication policies disable mutual TLS mode, Istio continues to use plain text between PEPs. To override this behavior, explicitly disable mutual TLS mode with [destination rules](/docs/concepts/traffic-management/#destination-rules). You can find out more about how mutual TLS works in the [Mutual TLS authentication section](/docs/concepts/security/#mutual-tls-authentication).

{{< image width="50%" link="./authn.svg" caption="Authentication Architecture" >}}

Istio outputs identities with both types of authentication, as well as other claims in the credential if applicable, to the next layer: [authorization](/docs/concepts/security/#authorization).

### Authentication policies

This section provides more details about how Istio authentication policies work. As you'll remember from the [Architecture section](/docs/concepts/security/#authentication-architecture), authentication policies apply to requests that a service receives. To specify client-side authentication rules in mutual TLS, you need to specify the `TLSSettings` in the `DestinationRule`. You can find more information in our [TLS settings reference docs](/docs/reference/config/networking/destination-rule#ClientTLSSettings).

Like other Istio configurations, you can specify authentication policies in `.yaml` files. You deploy policies using `kubectl`.
The following example authentication policy specifies that transport authentication for the workloads with the `app:reviews` label must use mutual TLS:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "example-peer-policy"
  namespace: "foo"
spec:
  selector:
    matchLabels:
      app: reviews
  mtls:
    mode: STRICT
{{< /text >}}

#### Policy storage

Istio stores mesh-scope policies in the root namespace. These policies have an empty selector and apply to all workloads in the mesh. Policies that have a namespace scope are stored in the corresponding namespace. They only apply to workloads within their namespace. If you configure
a `selector` field, the authentication policy only applies to workloads matching the conditions you configured. Peer and request authentication policies are stored separately by kind, `PeerAuthentication` and `RequestAuthentication` respectively.

#### Selector field

Peer and request authentication policies use `selector` fields to specify the label of the workloads to which the policy applies. The following example shows the selector field of a policy that applies to workloads with the `app:product-page` label:

{{< text yaml >}}
selector:
  matchLabels:
    app: product-page
{{< /text >}}

If you don't provide a value for the `selector` field, Istio matches the policy to all workloads in the storage scope of the policy. Thus, the `selector` fields help you specify the scope of the policies:

- Mesh-wide policy: A policy specified for the root namespace, without a `selector` field or with an empty one.
- Namespace-wide policy: A policy specified for a non-root namespace, without a `selector` field or with an empty one.
- Workload-specific policy: A policy defined in a regular namespace, with a non-empty `selector` field.

Peer and request authentication policies follow the same hierarchy principles for the `selector` fields, but Istio combines and applies them in slightly different ways.

There can be only one mesh-wide peer authentication policy, and only one namespace-wide peer authentication policy per namespace. When you configure multiple mesh- or namespace-wide peer authentication policies for the same mesh or namespace, Istio ignores the newer policies. When more than one workload-specific peer authentication policy matches, Istio picks the oldest one.
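For instance, a mesh-wide peer authentication policy is simply one stored in the root namespace with no `selector`. A minimal sketch, assuming the default root namespace `istio-system` and a hypothetical policy name:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: mesh-wide-mtls
  namespace: istio-system  # the root namespace
spec:
  # no selector: applies to every workload in the mesh
  mtls:
    mode: STRICT
{{< /text >}}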
Istio applies the narrowest matching policy for each workload using the following order:

1. workload-specific
1. namespace-wide
1. mesh-wide

Istio can combine all matching request authentication policies to work as if they come from a single request authentication policy. Thus, you can have multiple mesh-wide or namespace-wide policies in a mesh or namespace. However, it is still a good practice to avoid having multiple mesh-wide or namespace-wide request authentication policies.

#### Peer authentication

Peer authentication policies specify the mutual TLS mode Istio enforces on target workloads. The following modes are supported:

- PERMISSIVE: Workloads accept both mutual TLS and plain text traffic. This mode is most useful during migrations when workloads without sidecars cannot use mutual TLS. Once workloads are migrated with sidecar injection, you should switch the mode to STRICT.
- STRICT: Workloads only accept mutual TLS traffic.
- DISABLE: Mutual TLS is disabled. From a security perspective, you shouldn't use this mode unless you provide your own security solution.

When the mode is unset, the mode of the parent scope is inherited. Mesh-wide peer authentication policies with an unset mode use the `PERMISSIVE` mode by default.

The following peer authentication policy requires all workloads in namespace `foo` to use mutual TLS:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "example-policy"
  namespace: "foo"
spec:
  mtls:
    mode: STRICT
{{< /text >}}

With workload-specific peer authentication policies, you can specify different mutual TLS modes for different ports. You can only use ports that workloads have claimed for port-wide mutual TLS configuration.
The following example disables mutual TLS on port `80` for the `app:example-app` workload, and uses the mutual TLS settings of the namespace-wide peer authentication policy for all other ports:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "example-workload-policy"
  namespace: "foo"
spec:
  selector:
    matchLabels:
      app: example-app
  portLevelMtls:
    80:
      mode: DISABLE
{{< /text >}}

The peer
authentication policy above works only because the service configuration below bound the requests from the `example-app` workload to port `80` of the `example-service`:

{{< text yaml >}}
apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: foo
spec:
  ports:
  - name: http
    port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: example-app
{{< /text >}}

#### Request authentication

Request authentication policies specify the values needed to validate a JSON Web Token (JWT). These values include, among others, the following:

- The location of the token in the request
- The issuer of the request
- The public JSON Web Key Set (JWKS)

Istio checks the presented token, if presented, against the rules in the request authentication policy, and rejects requests with invalid tokens. When requests carry no token, they are accepted by default. To reject requests without tokens, provide authorization rules that specify the restrictions for specific operations, for example paths or actions.

Request authentication policies can specify more than one JWT if each uses a unique location. When more than one policy matches a workload, Istio combines all rules as if they were specified as a single policy. This behavior is useful to program workloads to accept JWT from different providers. However, requests with more than one valid JWT are not supported because the output principal of such requests is undefined.

#### Principals

When you use peer authentication policies and mutual TLS, Istio extracts the identity from the peer authentication into the `source.principal`.
Similarly, when you use request authentication policies, Istio assigns the identity from the JWT to the `request.auth.principal`. Use these principals to set authorization policies and as telemetry output.

### Updating authentication policies

You can change an authentication policy at any time and Istio pushes the new policies to the workloads almost in real time. However, Istio can't guarantee that all workloads receive the new policy at the same time. The following recommendations help avoid disruption when updating your authentication policies:

- Use intermediate peer authentication policies using the `PERMISSIVE` mode when changing the mode from `DISABLE` to `STRICT` and vice-versa. When all workloads switch successfully to the desired mode, you can apply the policy with the final mode. You can use Istio telemetry to verify that workloads have switched successfully.
- When migrating request authentication policies from one JWT to another, add the rule for the new JWT to the policy without removing the old rule. Workloads then accept both types of JWT, and you can remove the old rule when all traffic switches to the new JWT. However, each JWT has to use a different location.

## Authorization

Istio's authorization features provide mesh-, namespace-, and workload-wide access control for your workloads in the mesh. This level of control provides the following benefits:

- Workload-to-workload and end-user-to-workload authorization.
- A simple API: it includes a single [`AuthorizationPolicy` CRD](/docs/reference/config/security/authorization-policy/), which is easy to use and maintain.
- Flexible semantics: operators can define custom conditions on Istio attributes, and use `CUSTOM`, `DENY` and `ALLOW` actions.
- High performance: Istio authorization (`ALLOW` and `DENY`) is enforced natively on Envoy.
- High compatibility: supports gRPC, HTTP, HTTPS and HTTP/2 natively, as well as any plain TCP protocols.
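The JWT migration pattern described earlier — accepting tokens from both the old and the new provider during rollout — could be sketched as the following `RequestAuthentication`. The issuer URLs, JWKS URIs, and header name are hypothetical:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: RequestAuthentication
metadata:
  name: jwt-migration
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  jwtRules:
  # old provider: token in the default Authorization header
  - issuer: "https://old-issuer.example.com"
    jwksUri: "https://old-issuer.example.com/.well-known/jwks.json"
  # new provider: token in a dedicated header, because each JWT
  # has to use a different location
  - issuer: "https://new-issuer.example.com"
    jwksUri: "https://new-issuer.example.com/.well-known/jwks.json"
    fromHeaders:
    - name: "X-New-Jwt"
{{< /text >}}

Once all traffic carries the new token, the first rule can be removed.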
### Authorization architecture

The authorization policy enforces access control to the inbound traffic in the server side Envoy proxy. Each Envoy proxy runs an authorization
engine that authorizes requests at runtime. When a request comes to the proxy, the authorization engine evaluates the request context against the current authorization policies, and returns the authorization result, either `ALLOW` or `DENY`. Operators specify Istio authorization policies using `.yaml` files.

{{< image width="50%" link="./authz.svg" caption="Authorization Architecture" >}}

### Implicit enablement

You don't need to explicitly enable Istio's authorization features; they are available after installation. To enforce access control to your workloads, you apply an authorization policy. For workloads without authorization policies applied, Istio allows all requests.

Authorization policies support `ALLOW`, `DENY` and `CUSTOM` actions. You can apply multiple policies, each with a different action, as needed to secure access to your workloads. Istio checks for matching policies in layers, in this order: `CUSTOM`, `DENY`, and then `ALLOW`. For each type of action, Istio first checks if there is a policy with the action applied, and then checks if the request matches the policy's specification. If a request doesn't match a policy in one of the layers, the check continues to the next layer.

The following graph shows the policy precedence in detail:

{{< image width="50%" link="./authz-eval.svg" caption="Authorization Policy Precedence">}}

When you apply multiple authorization policies to the same workload, Istio applies them additively.

### Authorization policies

To configure an authorization policy, you create an [`AuthorizationPolicy` custom resource](/docs/reference/config/security/authorization-policy/).
An authorization policy includes a selector, an action, and a list of rules:

- The `selector` field specifies the target of the policy
- The `action` field specifies whether to allow or deny the request
- The `rules` specify when to trigger the action
    - The `from` field in the `rules` specifies the sources of the request
    - The `to` field in the `rules` specifies the operations of the request
    - The `when` field specifies the conditions needed to apply the rule

The following example shows an authorization policy that allows two sources, the `cluster.local/ns/default/sa/curl` service account and the `dev` namespace, to access the workloads with the `app: httpbin` and `version: v1` labels in the `foo` namespace when requests sent have a valid JWT token.

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
      version: v1
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/curl"]
    - source:
        namespaces: ["dev"]
    to:
    - operation:
        methods: ["GET"]
    when:
    - key: request.auth.claims[iss]
      values: ["https://accounts.google.com"]
{{< /text >}}

The following example shows an authorization policy that denies requests if the source is not the `foo` namespace:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: httpbin-deny
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
      version: v1
  action: DENY
  rules:
  - from:
    - source:
        notNamespaces: ["foo"]
{{< /text >}}

The deny policy takes precedence over the allow policy. Requests matching allow policies can be denied if they match a deny policy. Istio evaluates deny policies first to ensure that an allow policy can't bypass a deny policy.

#### Policy Target

You can specify a policy's scope or target with the `metadata/namespace` field and an optional `selector` field. A policy applies to the namespace in the `metadata/namespace` field.
If you set its value to the root namespace, the policy applies to all namespaces in a mesh. The value of the root namespace is configurable, and the default is `istio-system`. If you set it to any other namespace, the policy only applies to
the specified namespace. You can use a `selector` field to further restrict policies to apply to specific workloads. The `selector` uses labels to select the target workload. The selector contains a list of `{key: value}` pairs, where the `key` is the name of the label. If not set, the authorization policy applies to all workloads in the same namespace as the authorization policy.

For example, the `allow-read` policy allows `"GET"` and `"HEAD"` access to the workload with the `app: products` label in the `default` namespace.

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-read
  namespace: default
spec:
  selector:
    matchLabels:
      app: products
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET", "HEAD"]
{{< /text >}}

#### Value matching

Most fields in authorization policies support all the following matching schemas:

- Exact match: exact string match.
- Prefix match: a string with an ending `"*"`. For example, `"test.abc.*"` matches `"test.abc.com"`, `"test.abc.com.cn"`, `"test.abc.org"`, etc.
- Suffix match: a string with a starting `"*"`. For example, `"*.abc.com"` matches `"eng.abc.com"`, `"test.eng.abc.com"`, etc.
- Presence match: `*` is used to specify anything but not empty. To specify that a field must be present, use the `fieldname: ["*"]` format. This is different from leaving a field unspecified, which means match anything, including empty.

There are a few exceptions.
For example, the following fields only support exact match:

- The `key` field under the `when` section
- The `ipBlocks` under the `source` section
- The `ports` field under the `to` section

The following example policy allows access at paths with the `/test/*` prefix or the `*/info` suffix.

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: tester
  namespace: default
spec:
  selector:
    matchLabels:
      app: products
  action: ALLOW
  rules:
  - to:
    - operation:
        paths: ["/test/*", "*/info"]
{{< /text >}}

#### Exclusion matching

To match negative conditions like `notValues` in the `when` field, `notIpBlocks` in the `source` field, and `notPorts` in the `to` field, Istio supports exclusion matching. The following example requires a valid request principal, which is derived from JWT authentication, if the request path is not `/healthz`. Thus, the policy excludes requests to the `/healthz` path from the JWT authentication:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: disable-jwt-for-healthz
  namespace: default
spec:
  selector:
    matchLabels:
      app: products
  action: ALLOW
  rules:
  - to:
    - operation:
        notPaths: ["/healthz"]
    from:
    - source:
        requestPrincipals: ["*"]
{{< /text >}}

The following example denies the request to the `/admin` path for requests without request principals:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: enable-jwt-for-admin
  namespace: default
spec:
  selector:
    matchLabels:
      app: products
  action: DENY
  rules:
  - to:
    - operation:
        paths: ["/admin"]
    from:
    - source:
        notRequestPrincipals: ["*"]
{{< /text >}}

#### `allow-nothing`, `deny-all` and `allow-all` policy

The following example shows an `ALLOW` policy that matches nothing. If there are no other `ALLOW` policies, requests will always be denied because of the "deny by default" behavior.
Note the "deny by default" behavior applies only if the workload has at least one authorization policy with the `ALLOW` action.

{{< tip >}}
It is a good security practice to start with the `allow-nothing` policy and incrementally add more `ALLOW` policies to open more access to the workload.
{{< /tip >}}

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-nothing
spec:
  action: ALLOW
  # the rules field is not specified, and the policy will never match.
{{< /text >}}

The following example shows a `DENY` policy that explicitly denies all access. It will always deny the request even if there is another `ALLOW` policy allowing the request because the `DENY` policy takes precedence over the `ALLOW` policy. This is useful if you want to temporarily disable all access to the workload.

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: deny-all
spec:
  action: DENY
  # the rules field has an empty rule, and the policy will always match.
  rules:
  - {}
{{< /text >}}

The following example shows an `ALLOW` policy that allows full access to the workload. It will make other `ALLOW` policies useless as it will always allow the request. It might be useful if you want to temporarily expose full access to the workload. Note the request could still be denied due to `CUSTOM` and `DENY` policies.

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-all
spec:
  action: ALLOW
  # This matches everything.
  rules:
  - {}
{{< /text >}}

#### Custom conditions

You can also use the `when` section to specify additional conditions. For example, the following `AuthorizationPolicy` definition includes a condition that `request.headers[version]` is either `"v1"` or `"v2"`. In this case, the key is `request.headers[version]`, which is an entry in the Istio attribute `request.headers`, which is a map.
{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
      version: v1
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/curl"]
    to:
    - operation:
        methods: ["GET"]
    when:
    - key: request.headers[version]
      values: ["v1", "v2"]
{{< /text >}}

The supported `key` values of a condition are listed on the [conditions page](/docs/reference/config/security/conditions/).

#### Authenticated and unauthenticated identity

If you want to make a workload publicly accessible, you need to leave the `source` section empty. This allows sources from all (both authenticated and unauthenticated) users and workloads, for example:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
      version: v1
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET", "POST"]
{{< /text >}}

To allow only authenticated users, set `principals` to `"*"` instead, for example:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
      version: v1
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["*"]
    to:
    - operation:
        methods: ["GET", "POST"]
{{< /text >}}

### Using Istio authorization on plain TCP protocols

Istio authorization supports workloads using any plain TCP protocols, such as MongoDB. In this case, you configure the authorization policy in the same way you did for the HTTP workloads. The difference is that certain fields and conditions are only applicable to HTTP workloads.
These fields include:

- The `request_principals` field in the source section of the authorization policy object
- The `hosts`, `methods` and `paths` fields in the operation section of the authorization policy object

The supported conditions are listed in the [conditions page](/docs/reference/config/security/conditions/). If you use any HTTP-only fields for a TCP workload, Istio will ignore them in the authorization policy.

Assuming you have a MongoDB service on port `27017`, the following example configures an authorization policy to allow only the `bookinfo-ratings-v2` service in the Istio mesh to access the MongoDB workload.

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: mongodb-policy
  namespace: default
spec:
  selector:
    matchLabels:
      app: mongodb
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/bookinfo-ratings-v2"]
    to:
    - operation:
        ports: ["27017"]
{{< /text >}}

### Dependency on mutual TLS

Istio uses mutual TLS to securely pass some information from the client to the server. Mutual TLS must be enabled before using any of the following fields in the authorization policy:

- the `principals` and `notPrincipals` field under the `source` section
- the `namespaces` and `notNamespaces` field under the `source` section
- the `source.principal` custom condition
- the `source.namespace` custom condition

Note it is strongly recommended to always use these fields with **strict** mutual TLS mode in the `PeerAuthentication` to avoid potential unexpected request rejection or policy bypass when plain text traffic is used with the permissive mutual TLS mode. Check the [security advisory](/news/security/istio-security-2021-004) for more details and alternatives if you cannot enable strict mutual TLS mode.

## Learn more

After learning the basic concepts, there are more resources to review:

- Try out the security policy by following the [authentication](/docs/tasks/security/authentication) and [authorization](/docs/tasks/security/authorization) tasks.
- Learn some security [policy examples](/docs/ops/configuration/security/security-policy-examples) that could be used to improve security in your mesh.
- Read [common problems](/docs/ops/common-problems/security-issues/) to better troubleshoot security policy issues when something goes wrong.
WebAssembly is a sandboxing technology which can be used to extend the Istio proxy (Envoy). The Proxy-Wasm sandbox API replaces Mixer as the primary extension mechanism in Istio.

WebAssembly sandbox goals:

- **Efficiency** - An extension adds low latency, CPU, and memory overhead.
- **Function** - An extension can enforce policy, collect telemetry, and perform payload mutations.
- **Isolation** - A programming error or crash in one plugin doesn't affect other plugins.
- **Configuration** - The plugins are configured using an API that is consistent with other Istio APIs. An extension can be configured dynamically.
- **Operator** - An extension can be canaried and deployed as log-only, fail-open or fail-close.
- **Extension developer** - The plugin can be written in several programming languages.

This [video talk](https://youtu.be/XdWmm_mtVXI) is an introduction to the architecture of the WebAssembly integration.

## High-level architecture

Istio extensions (Proxy-Wasm plugins) have several components:

- **Filter Service Provider Interface (SPI)** for building Proxy-Wasm plugins for filters.
- **Sandbox** V8 Wasm Runtime embedded in Envoy.
- **Host APIs** for headers, trailers and metadata.
- **Call out APIs** for gRPC and HTTP calls.
- **Stats and Logging APIs** for metrics and monitoring.

{{< image width="80%" link="./extending.svg" caption="Extending Istio/Envoy" >}}

## Example

An example C++ Proxy-Wasm plugin for a filter can be found [here](https://github.com/istio-ecosystem/wasm-extensions/tree/master/example). You can follow [this guide](https://github.com/istio-ecosystem/wasm-extensions/blob/master/doc/write-a-wasm-extension-with-cpp.md) to implement a Wasm extension with C++.
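Once a plugin like this is built and pushed to a registry, it can be loaded into the mesh with Istio's `WasmPlugin` API. A minimal sketch, where the OCI image URL and target workload label are hypothetical:

{{< text yaml >}}
apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: example-filter
  namespace: istio-system
spec:
  selector:
    matchLabels:
      app: productpage  # hypothetical target workload
  url: oci://registry.example.com/wasm/example-filter:v1  # hypothetical image
  phase: AUTHN
  pluginConfig:
    verbose: true  # passed to the plugin as its configuration
{{< /text >}}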
## Ecosystem

- [Istio Ecosystem Wasm Extensions](https://github.com/istio-ecosystem/wasm-extensions)
- [Proxy-Wasm ABI specification](https://github.com/proxy-wasm/spec)
- [Proxy-Wasm C++ SDK](https://github.com/proxy-wasm/proxy-wasm-cpp-sdk)
- [Proxy-Wasm Go SDK](https://github.com/proxy-wasm/proxy-wasm-go-sdk)
- [Proxy-Wasm Rust SDK](https://github.com/proxy-wasm/proxy-wasm-rust-sdk)
- [Proxy-Wasm AssemblyScript SDK](https://github.com/solo-io/proxy-runtime)
- [WebAssembly Hub](https://webassemblyhub.io/)
- [WebAssembly Extensions For Network Proxies (video)](https://www.youtube.com/watch?v=OIUPf8m7CGA)
Istio generates detailed telemetry for all service communications within a mesh. This telemetry provides *observability* of service behavior, empowering operators to troubleshoot, maintain, and optimize their applications -- without imposing any additional burdens on service developers. Through Istio, operators gain a thorough understanding of how monitored services are interacting, both with other services and with the Istio components themselves.

Istio generates the following types of telemetry in order to provide overall service mesh observability:

- [**Metrics**](#metrics). Istio generates a set of service metrics based on the four "golden signals" of monitoring (latency, traffic, errors, and saturation). Istio also provides detailed metrics for the [mesh control plane](/docs/ops/deployment/architecture/). A default set of mesh monitoring dashboards built on top of these metrics is also provided.
- [**Distributed Traces**](#distributed-traces). Istio generates distributed trace spans for each service, providing operators with a detailed understanding of call flows and service dependencies within a mesh.
- [**Access Logs**](#access-logs). As traffic flows into a service within a mesh, Istio can generate a full record of each request, including source and destination metadata. This information enables operators to audit service behavior down to the individual [workload instance](/docs/reference/glossary/#workload-instance) level.

## Metrics

Metrics provide a way of monitoring and understanding behavior in aggregate. To monitor service behavior, Istio generates metrics for all service traffic in, out, and within an Istio service mesh. These metrics provide information on behaviors such as the overall volume of traffic, the error rates within the traffic, and the response times for requests.

In addition to monitoring the behavior of services within a mesh, it is also important to monitor the behavior of the mesh itself.
Istio components export metrics on their own internal behaviors to provide insight on the health and function of the mesh control plane.

### Proxy-level metrics

Istio metrics collection begins with the sidecar proxies (Envoy). Each proxy generates a rich set of metrics about all traffic passing through the proxy (both inbound and outbound). The proxies also provide detailed statistics about the administrative functions of the proxy itself, including configuration and health information.

Envoy-generated metrics provide monitoring of the mesh at the granularity of Envoy resources (such as listeners and clusters). As a result, understanding the connection between mesh services and Envoy resources is required for monitoring the Envoy metrics.

Istio enables operators to select which of the Envoy metrics are generated and collected at each workload instance. By default, Istio enables only a small subset of the Envoy-generated statistics to avoid overwhelming metrics backends and to reduce the CPU overhead associated with metrics collection. However, operators can easily expand the set of collected proxy metrics when required. This enables targeted debugging of networking behavior, while reducing the overall cost of monitoring across the mesh.

The [Envoy documentation site](https://www.envoyproxy.io/docs/envoy/latest/) includes a detailed overview of [Envoy statistics collection](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/observability/statistics.html?highlight=statistics). The operations guide on [Envoy Statistics](/docs/ops/configuration/telemetry/envoy-stats/) provides more information on controlling the generation of proxy-level metrics.
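Expanding the set of collected proxy metrics, as described above, is typically done through mesh configuration. A minimal sketch using the `proxyStatsMatcher` setting; the prefixes shown are illustrative:

{{< text yaml >}}
meshConfig:
  defaultConfig:
    proxyStatsMatcher:
      # collect additional Envoy stats matching these prefixes
      inclusionPrefixes:
      - "cluster.outbound"
      - "listener"
{{< /text >}}

The same matcher can also be applied per workload instead of mesh-wide.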
Example proxy-level metrics:

{{< text json >}}
envoy_cluster_internal_upstream_rq{response_code_class="2xx",cluster_name="xds-grpc"} 7163
envoy_cluster_upstream_rq_completed{cluster_name="xds-grpc"} 7164
envoy_cluster_ssl_connection_error{cluster_name="xds-grpc"} 0
envoy_cluster_lb_subsets_removed{cluster_name="xds-grpc"} 0
envoy_cluster_internal_upstream_rq{response_code="503",cluster_name="xds-grpc"} 1
{{< /text >}}

### Service-level metrics

In addition to the proxy-level metrics, Istio provides a set of service-oriented metrics for monitoring service communications. These metrics cover the four basic service monitoring needs: latency, traffic, errors, and saturation. Istio ships with a default set of [dashboards](/docs/tasks/observability/metrics/using-istio-dashboard/) for monitoring service behaviors based on these metrics. The [standard Istio metrics](/docs/reference/config/metrics/) are exported to [Prometheus](/docs/ops/integrations/prometheus/) by default.

Use of the service-level metrics is entirely optional. Operators may choose to turn off generation and collection of these metrics to meet their individual needs.

Example service-level metric:

{{< text json >}}
istio_requests_total{
  connection_security_policy="mutual_tls",
  destination_app="details",
  destination_canonical_service="details",
  destination_canonical_revision="v1",
  destination_principal="cluster.local/ns/default/sa/default",
  destination_service="details.default.svc.cluster.local",
  destination_service_name="details",
  destination_service_namespace="default",
  destination_version="v1",
  destination_workload="details-v1",
  destination_workload_namespace="default",
  reporter="destination",
  request_protocol="http",
  response_code="200",
  response_flags="-",
  source_app="productpage",
  source_canonical_service="productpage",
  source_canonical_revision="v1",
  source_principal="cluster.local/ns/default/sa/default",
  source_version="v1",
  source_workload="productpage-v1",
  source_workload_namespace="default"
} 214
{{< /text >}}

### Control plane metrics

The Istio control plane also provides a collection of self-monitoring metrics. These metrics allow monitoring of the behavior of Istio itself (as distinct from that of the services within the mesh). For more information on which metrics are maintained, please refer to the [reference documentation](/docs/reference/commands/pilot-discovery/#metrics).

## Distributed traces

Distributed tracing provides a way to monitor and understand behavior by monitoring individual requests as they flow through a mesh. Traces empower mesh operators to understand service dependencies and the sources of latency within their service mesh. Istio supports distributed tracing through the Envoy proxies.
The proxies automatically generate trace spans on behalf of the applications they proxy, requiring only that the applications forward the appropriate request context. Istio supports a number of tracing backends, including [Zipkin](/docs/tasks/observability/distributed-tracing/zipkin/), [Jaeger](/docs/tasks/observability/distributed-tracing/jaeger/), and many tools and services that support [OpenTelemetry](/docs/tasks/observability/distributed-tracing/opentelemetry/).

Operators control the sampling rate for trace generation (that is, the rate at which tracing data is generated per request). This allows operators to control the amount and rate of tracing data being produced for their mesh. More information about Distributed Tracing with Istio is found in our [FAQ on Distributed Tracing](/about/faq/#distributed-tracing).

Example Istio-generated distributed trace for a single request:

{{< image link="/docs/tasks/observability/distributed-tracing/zipkin/istio-tracing-details-zipkin.png" caption="Distributed Trace for a single request" >}}

## Access logs

Access logs provide a way to monitor and understand behavior from the perspective of an individual workload instance. Istio can generate access logs for service traffic in a configurable set of formats, providing operators with full control of the how, what, when and where of logging. For more information, please refer to [Getting Envoy's Access Logs](/docs/tasks/observability/logs/access-log/).

Example Istio access log:

{{< text plain >}}
[2019-03-06T09:31:27.360Z] "GET /status/418 HTTP/1.1" 418 - "-" 0 135 5 2 "-" "curl/7.60.0" "d209e46f-9ed5-9b61-bbdd-43e22662702a" "httpbin:8000" "127.0.0.1:80" inbound|8000|http|httpbin.default.svc.cluster.local - 172.30.146.73:80 172.30.146.82:38618 outbound_.8000_._.httpbin.default.svc.cluster.local
{{< /text >}}
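Mesh-wide access logging like the example above can be enabled with the Telemetry API. A minimal sketch, assuming the built-in `envoy` text log provider is available in your mesh configuration:

```yaml
# Sketch: enable Envoy access logs for the whole mesh.
# Assumes the built-in "envoy" log provider; applying a Telemetry
# resource in the root namespace (istio-system) makes it mesh-wide.
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
  - providers:
    - name: envoy
```

A Telemetry resource applied in a regular namespace instead scopes the setting to the workloads in that namespace.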
Istio’s traffic routing rules let you easily control the flow of traffic and API calls between services. Istio simplifies configuration of service-level properties like circuit breakers, timeouts, and retries, and makes it easy to set up important tasks like A/B testing, canary rollouts, and staged rollouts with percentage-based traffic splits. It also provides out-of-box reliability features that help make your application more resilient against failures of dependent services or the network.

Istio’s traffic management model relies on the {{< gloss >}}Envoy{{< /gloss >}} proxies that are deployed along with your services. All traffic that your mesh services send and receive ({{< gloss >}}data plane{{< /gloss >}} traffic) is proxied through Envoy, making it easy to direct and control traffic around your mesh without making any changes to your services.

If you’re interested in the details of how the features described in this guide work, you can find out more about Istio’s traffic management implementation in the [architecture overview](/docs/ops/deployment/architecture/). The rest of this guide introduces Istio’s traffic management features.

## Introducing Istio traffic management

In order to direct traffic within your mesh, Istio needs to know where all your endpoints are, and which services they belong to. To populate its own {{< gloss >}}service registry{{< /gloss >}}, Istio connects to a service discovery system. For example, if you've installed Istio on a Kubernetes cluster, then Istio automatically detects the services and endpoints in that cluster. Using this service registry, the Envoy proxies can then direct traffic to the relevant services.

Most microservice-based applications have multiple instances of each service workload to handle service traffic, sometimes referred to as a load balancing pool.
By default, the Envoy proxies distribute traffic across each service’s load balancing pool using a least requests model, where each request is routed to the host with fewer active requests from a random selection of two hosts from the pool; in this way the most heavily loaded host will not receive requests until it is no more loaded than any other host.

While Istio's basic service discovery and load balancing gives you a working service mesh, it’s far from all that Istio can do. In many cases you might want more fine-grained control over what happens to your mesh traffic. You might want to direct a particular percentage of traffic to a new version of a service as part of A/B testing, or apply a different load balancing policy to traffic for a particular subset of service instances. You might also want to apply special rules to traffic coming into or out of your mesh, or add an external dependency of your mesh to the service registry. You can do all this and more by adding your own traffic configuration to Istio using Istio’s traffic management API.

Like other Istio configuration, the API is specified using Kubernetes custom resource definitions ({{< gloss >}}CRDs{{< /gloss >}}), which you can configure using YAML, as you’ll see in the examples.

The rest of this guide examines each of the traffic management API resources and what you can do with them. These resources are:

- [Virtual services](#virtual-services)
- [Destination rules](#destination-rules)
- [Gateways](#gateways)
- [Service entries](#service-entries)
- [Sidecars](#sidecars)

This guide also gives an overview of some of the [network resilience and testing features](#network-resilience-and-testing) that are built in to the API resources.

## Virtual services {#virtual-services}

[Virtual services](/docs/reference/config/networking/virtual-service/#VirtualService), along with [destination rules](#destination-rules), are the key building blocks of Istio’s traffic routing functionality.
A virtual service lets you configure how requests are routed to a service within an Istio service mesh, building on the basic connectivity and discovery provided by Istio and your platform. Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific real destination within the mesh. Your mesh can require multiple virtual services or none depending on your use case.

### Why use virtual services? {#why-use-virtual-services}

Virtual services play a key role in making Istio’s traffic management flexible and powerful. They do this by strongly decoupling where clients send their requests from the destination workloads that actually implement them. Virtual services also provide a rich way of specifying different traffic routing rules for sending traffic to those workloads.

Why is this so useful? Without virtual services, Envoy distributes traffic using least requests load balancing between all service instances, as described in the introduction. You can improve this behavior with what you know about the workloads. For example, some might represent a different version. This can be useful in A/B testing, where you might want to configure traffic routes based on percentages across different service versions, or to direct traffic from your internal users to a particular set of instances.

With a virtual service, you can specify traffic behavior for one or more hostnames. You use routing rules in the virtual service that tell Envoy how to send the virtual service’s traffic to appropriate destinations. Route destinations can be different versions of the same service or entirely different services.

A typical use case is to send traffic to different versions of a service, specified as service subsets.
Clients send requests to the virtual service host as if it was a single entity, and Envoy then routes the traffic to the different versions depending on the virtual service rules: for example, "20% of calls go to the new version" or "calls from these users go to version 2". This allows you to, for instance, create a canary rollout where you gradually increase the percentage of traffic that’s sent to a new service version. The traffic routing is completely separate from the instance deployment, meaning that the number of instances implementing the new service version can scale up and down based on traffic load without referring to traffic routing at all. By contrast, container orchestration platforms like Kubernetes only support traffic distribution based on instance scaling, which quickly becomes complex. You can read more about how virtual services help with canary deployments in [Canary Deployments using Istio](/blog/2017/0.1-canary/).

Virtual services also let you:

- Address multiple application services through a single virtual service. If your mesh uses Kubernetes, for example, you can configure a virtual service to handle all services in a specific namespace. Mapping a single virtual service to multiple "real" services is particularly useful in facilitating turning a monolithic application into a composite service built out of distinct microservices without requiring the consumers of the service to adapt to the transition. Your routing rules can specify "calls to these URIs of `monolith.com` go to `microservice A`", and so on. You can see how this works in [one of our examples below](#more-about-routing-rules).
- Configure traffic rules in combination with [gateways](/docs/concepts/traffic-management/#gateways) to control ingress and egress traffic. In some cases you also need to configure destination rules to use these features, as these are where you specify your service subsets.
Specifying service subsets and other destination-specific policies in a separate object lets you reuse these cleanly between virtual services. You can find out more about destination rules in the next section.

### Virtual service example {#virtual-service-example}

The following virtual service routes requests to different versions of a service depending on whether the request comes from a particular user.

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v3
{{< /text >}}

#### The hosts field {#the-hosts-field}

The `hosts` field lists the virtual service’s hosts - in other words, the user-addressable destination or destinations that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.

{{< text yaml >}}
hosts:
- reviews
{{< /text >}}

The virtual service hostname can be an IP address, a DNS name, or, depending on the platform, a short name (such as a Kubernetes service short name) that resolves, implicitly or explicitly, to a fully qualified domain name (FQDN). You can also use wildcard ("*") prefixes, letting you create a single set of routing rules for all matching services. Virtual service hosts don't actually have to be part of the Istio service registry, they are simply virtual destinations. This lets you model traffic for virtual hosts that don't have routable entries inside the mesh.
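A wildcard prefix as described above might look like the following sketch; the domain is an illustrative assumption:

```yaml
# Sketch: one set of routing rules for every host matching the prefix.
# The domain below is an illustrative assumption.
hosts:
- "*.bookinfo.example.com"
```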
#### Routing rules {#routing-rules}

The `http` section contains the virtual service’s routing rules, describing match conditions and actions for routing HTTP/1.1, HTTP2, and gRPC traffic sent to the destination(s) specified in the hosts field (you can also use `tcp` and `tls` sections to configure routing rules for [TCP](/docs/reference/config/networking/virtual-service/#TCPRoute) and unterminated [TLS](/docs/reference/config/networking/virtual-service/#TLSRoute) traffic). A routing rule consists of the destination where you want the traffic to go and zero or more match conditions, depending on your use case.

##### Match condition {#match-condition}

The first routing rule in the example has a condition and so begins with the `match` field. In this case you want this routing to apply to all requests from the user "jason", so you use the `headers`, `end-user`, and `exact` fields to select the appropriate requests.

{{< text yaml >}}
- match:
  - headers:
      end-user:
        exact: jason
{{< /text >}}

##### Destination {#destination}

The route section’s `destination` field specifies the actual destination for traffic that matches this condition. Unlike the virtual service’s host(s), the destination’s host must be a real destination that exists in Istio’s service registry or Envoy won’t know where to send traffic to it. This can be a mesh service with proxies or a non-mesh service added using a service entry. In this case we’re running on Kubernetes and the host name is a Kubernetes service name:

{{< text yaml >}}
route:
- destination:
    host: reviews
    subset: v2
{{< /text >}}

Note in this and the other examples on this page, we use a Kubernetes short name for the destination hosts for simplicity. When this rule is evaluated, Istio adds a domain suffix based on the namespace of the virtual service that contains the routing rule to get the fully qualified name for the host. Using short names in our examples also means that you can copy and try them in any namespace you like.
{{< warning >}}
Using short names like this only works if the destination hosts and the virtual service are actually in the same Kubernetes namespace. Because using the Kubernetes short name can result in misconfigurations, we recommend that you specify fully qualified host names in production environments.
{{< /warning >}}

The destination section also specifies which subset of this Kubernetes service you want requests that match this rule’s conditions to go to, in this case the subset named v2. You’ll see how you define a service subset in the section on [destination rules](#destination-rules) below.

#### Routing rule precedence {#routing-rule-precedence}

Routing rules are **evaluated in sequential order from top to bottom**, with the first rule in the virtual service definition being given highest priority. In this case you want anything that doesn't match the first routing rule to go to a default destination, specified in the second rule. Because of this, the second rule has no match conditions and just directs traffic to the v3 subset.

{{< text yaml >}}
- route:
  - destination:
      host: reviews
      subset: v3
{{< /text >}}

We recommend providing a default "no condition" or weight-based rule (described below) like this as the last rule in each virtual service to ensure that traffic to the virtual service always has at least one matching route.

### More about routing rules {#more-about-routing-rules}

As you saw above, routing rules are a powerful tool for routing particular subsets of traffic to particular destinations. You can set match conditions on traffic ports, header fields, URIs, and more.
For example, this virtual service lets users send traffic to two separate services, ratings and reviews, as if they were part of a bigger virtual service at `http://bookinfo.com/`. The virtual service rules match traffic based on request URIs and direct requests to the appropriate service.

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - bookinfo.com
  http:
  - match:
    - uri:
        prefix: /reviews
    route:
    - destination:
        host: reviews
  - match:
    - uri:
        prefix: /ratings
    route:
    - destination:
        host: ratings
{{< /text >}}

For some match conditions, you can also choose to select them using the exact value, a prefix, or a regex.

You can add multiple match conditions to the same `match` block to AND your conditions, or add multiple match blocks to the same rule to OR your conditions. You can also have multiple routing rules for any given virtual service. This lets you make your routing conditions as complex or simple as you like within a single virtual service. A full list of match condition fields and their possible values can be found in the [`HTTPMatchRequest` reference](/docs/reference/config/networking/virtual-service/#HTTPMatchRequest).

In addition to using match conditions, you can distribute traffic by percentage "weight". This is useful for A/B testing and canary rollouts:

{{< text yaml >}}
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 75
    - destination:
        host: reviews
        subset: v2
      weight: 25
{{< /text >}}

You can also use routing rules to perform some actions on the traffic, for example:

- Append or remove headers.
- Rewrite the URL.
- Set a [retry policy](#retries) for calls to this destination.

To learn more about the actions available, see the [`HTTPRoute` reference](/docs/reference/config/networking/virtual-service/#HTTPRoute).
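The AND/OR semantics described above can be sketched as follows; the URI prefixes and header value are illustrative assumptions:

```yaml
# Sketch: combining match conditions. Prefixes and header value are
# illustrative assumptions.
http:
- match:
  # Conditions inside one match block are ANDed:
  # URI starts with /reviews AND the end-user header equals jason.
  - uri:
      prefix: /reviews
    headers:
      end-user:
        exact: jason
  # A second match block in the same rule is ORed with the first.
  - uri:
      prefix: /legacy-reviews
  route:
  - destination:
      host: reviews
```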
## Destination rules {#destination-rules}

Along with [virtual services](#virtual-services), [destination rules](/docs/reference/config/networking/destination-rule/#DestinationRule) are a key part of Istio’s traffic routing functionality. You can think of virtual services as how you route your traffic **to** a given destination, and then you use destination rules to configure what happens to traffic **for** that destination. Destination rules are applied after virtual service routing rules are evaluated, so they apply to the traffic’s "real" destination.

In particular, you use destination rules to specify named service subsets, such as grouping all a given service’s instances by version. You can then use these service subsets in the routing rules of virtual services to control the traffic to different instances of your services.

Destination rules also let you customize Envoy’s traffic policies when calling the entire destination service or a particular service subset, such as your preferred load balancing model, TLS security mode, or circuit breaker settings. You can see a complete list of destination rule options in the [Destination Rule reference](/docs/reference/config/networking/destination-rule/).

### Load balancing options

By default, Istio uses a least requests load balancing policy, where requests are distributed among the instances with the least number of requests. Istio also supports the following models, which you can specify in destination rules for requests to a particular service or service subset.

- Random: Requests are forwarded at random to instances in the pool.
- Weighted: Requests are forwarded to instances in the pool according to a specific percentage.
- Round robin: Requests are forwarded to each instance in sequence.
- Consistent hash: Provides soft session affinity based on HTTP headers, cookies or other properties.
- Ring hash: Implements consistent hashing to upstream hosts using the [Ketama algorithm](https://www.metabrew.com/article/libketama-consistent-hashing-algo-memcached-clients).
- Maglev: Implements consistent hashing to upstream hosts as described in the [Maglev paper](https://research.google/pubs/maglev-a-fast-and-reliable-software-network-load-balancer/).

See the [Envoy load balancing documentation](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/load_balancers) for more information about each option.

### Destination rule example {#destination-rule-example}

The following example destination rule configures three different subsets for the `my-svc` destination service, with different load balancing policies:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: my-destination-rule
spec:
  host: my-svc
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
  - name: v3
    labels:
      version: v3
{{< /text >}}

Each subset is defined based on one or more `labels`, which in Kubernetes are key/value pairs that are attached to objects such as Pods. These labels are applied in the Kubernetes service’s deployment as `metadata` to identify different versions.

As well as defining subsets, this destination rule has both a default traffic policy for all subsets in this destination and a subset-specific policy that overrides it for just that subset. The default policy, defined above the `subsets` field, sets a simple random load balancer for the `v1` and `v3` subsets. In the `v2` policy, a round-robin load balancer is specified in the corresponding subset’s field.
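The consistent hash option described above is configured through the `consistentHash` field of the destination rule's load balancer settings. A sketch that pins each client to one instance by HTTP cookie; the resource and cookie names are illustrative assumptions:

```yaml
# Sketch: soft session affinity via consistent hashing on an HTTP cookie.
# The resource name and cookie name are illustrative assumptions.
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: my-svc-affinity
spec:
  host: my-svc
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: session-cookie
          ttl: 0s   # generate a session cookie (no expiry) if absent
```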
## Gateways {#gateways}

You use a [gateway](/docs/reference/config/networking/gateway/#Gateway) to manage inbound and outbound traffic for your mesh, letting you specify which traffic you want to enter or leave the mesh. Gateway configurations are applied to standalone Envoy proxies that are running at the edge of the mesh, rather than sidecar Envoy proxies running alongside your service workloads.

Unlike other mechanisms for controlling traffic entering your systems, such as the Kubernetes Ingress APIs, Istio gateways let you use the full power and flexibility of Istio’s traffic routing. You can do this because Istio’s Gateway resource just lets you configure layer 4-6
load balancing properties such as ports to expose, TLS settings, and so on. Then instead of adding application-layer traffic routing (L7) to the same API resource, you bind a regular Istio [virtual service](#virtual-services) to the gateway. This lets you basically manage gateway traffic like any other data plane traffic in an Istio mesh.

Gateways are primarily used to manage ingress traffic, but you can also configure egress gateways. An egress gateway lets you configure a dedicated exit node for the traffic leaving the mesh, letting you limit which services can or should access external networks, or to enable [secure control of egress traffic](/blog/2019/egress-traffic-control-in-istio-part-1/) to add security to your mesh, for example. You can also use a gateway to configure a purely internal proxy.

Istio provides some preconfigured gateway proxy deployments (`istio-ingressgateway` and `istio-egressgateway`) that you can use - both are deployed if you use our [demo installation](/docs/setup/getting-started/), while just the ingress gateway is deployed with our [default profile](/docs/setup/additional-setup/config-profiles/). You can apply your own gateway configurations to these deployments or deploy and configure your own gateway proxies.
### Gateway example {#gateway-example}

The following example shows a possible gateway configuration for external HTTPS ingress traffic:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: ext-host-gwy
spec:
  selector:
    app: my-gateway-controller
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - ext-host.example.com
    tls:
      mode: SIMPLE
      credentialName: ext-host-cert
{{< /text >}}

This gateway configuration lets HTTPS traffic from `ext-host.example.com` into the mesh on port 443, but doesn’t specify any routing for the traffic.

To specify routing and for the gateway to work as intended, you must also bind the gateway to a virtual service. You do this using the virtual service’s `gateways` field, as shown in the following example:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: virtual-svc
spec:
  hosts:
  - ext-host.example.com
  gateways:
  - ext-host-gwy
{{< /text >}}

You can then configure the virtual service with routing rules for the external traffic.

## Service entries {#service-entries}

You use a [service entry](/docs/reference/config/networking/service-entry/#ServiceEntry) to add an entry to the service registry that Istio maintains internally. After you add the service entry, the Envoy proxies can send traffic to the service as if it was a service in your mesh. Configuring service entries allows you to manage traffic for services running outside of the mesh, including the following tasks:

- Redirect and forward traffic for external destinations, such as APIs consumed from the web, or traffic to services in legacy infrastructure.
- Define [retry](#retries), [timeout](#timeouts), and [fault injection](#fault-injection) policies for external destinations.
- Run a mesh service in a Virtual Machine (VM) by [adding VMs to your mesh](/docs/examples/virtual-machines/).

You don’t need to add a service entry for every external service that you want your mesh services to use.
By default, Istio configures the Envoy proxies to passthrough requests to unknown services. However, you can’t use Istio features to control the traffic to destinations that aren't registered in the mesh.

### Service entry example {#service-entry-example}

The following example mesh-external service entry adds the `ext-svc.example.com` external dependency to Istio’s service registry:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: svc-entry
spec:
  hosts:
  - ext-svc.example.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  location: MESH_EXTERNAL
  resolution: DNS
{{< /text >}}

You specify the external resource using the `hosts` field. You can qualify it fully or
use a wildcard prefixed domain name. You can configure virtual services and destination rules to control traffic to a service entry in a more granular way, in the same way you configure traffic for any other service in the mesh.

For example, the following destination rule adjusts the TCP connection timeout for requests to the `ext-svc.example.com` external service that we configured using the service entry:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: ext-res-dr
spec:
  host: ext-svc.example.com
  trafficPolicy:
    connectionPool:
      tcp:
        connectTimeout: 1s
{{< /text >}}

See the [Service Entry reference](/docs/reference/config/networking/service-entry) for more possible configuration options.

## Sidecars {#sidecars}

By default, Istio configures every Envoy proxy to accept traffic on all the ports of its associated workload, and to reach every workload in the mesh when forwarding traffic. You can use a [sidecar](/docs/reference/config/networking/sidecar/#Sidecar) configuration to do the following:

- Fine-tune the set of ports and protocols that an Envoy proxy accepts.
- Limit the set of services that the Envoy proxy can reach.

You might want to limit sidecar reachability like this in larger applications, where having every proxy configured to reach every other service in the mesh can potentially affect mesh performance due to high memory usage. You can specify that you want a sidecar configuration to apply to all workloads in a particular namespace, or choose specific workloads using a `workloadSelector`.
For example, the following sidecar configuration configures all services in the `bookinfo` namespace to only reach services running in the same namespace and the Istio control plane (needed by Istio’s egress and telemetry features):

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: Sidecar
metadata:
  name: default
  namespace: bookinfo
spec:
  egress:
  - hosts:
    - "./*"
    - "istio-system/*"
{{< /text >}}

See the [Sidecar reference](/docs/reference/config/networking/sidecar/) for more details.

## Network resilience and testing {#network-resilience-and-testing}

As well as helping you direct traffic around your mesh, Istio provides opt-in failure recovery and fault injection features that you can configure dynamically at runtime. Using these features helps your applications operate reliably, ensuring that the service mesh can tolerate failing nodes and preventing localized failures from cascading to other nodes.

### Timeouts {#timeouts}

A timeout is the amount of time that an Envoy proxy should wait for replies from a given service, ensuring that services don’t hang around waiting for replies indefinitely and that calls succeed or fail within a predictable timeframe. The Envoy timeout for HTTP requests is disabled in Istio by default.

For some applications and services, Istio’s default timeout might not be appropriate. For example, a timeout that is too long could result in excessive latency from waiting for replies from failing services, while a timeout that is too short could result in calls failing unnecessarily while waiting for an operation involving multiple services to return. To find and use your optimal timeout settings, Istio lets you easily adjust timeouts dynamically on a per-service basis using [virtual services](#virtual-services) without having to edit your service code.
Here’s a virtual service that specifies a 10 second timeout for calls to the v1 subset of the ratings service:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
    timeout: 10s
{{< /text >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/concepts/traffic-management/index.md
### Retries {#retries}

A retry setting specifies the maximum number of times an Envoy proxy attempts to connect to a service if the initial call fails. Retries can enhance service availability and application performance by making sure that calls don’t fail permanently because of transient problems such as a temporarily overloaded service or network. The interval between retries (25ms+) is variable and determined automatically by Istio, preventing the called service from being overwhelmed with requests. The default retry behavior for HTTP requests is to retry twice before returning the error.

Like timeouts, Istio’s default retry behavior might not suit your application needs in terms of latency (too many retries to a failed service can slow things down) or availability. Also like timeouts, you can adjust your retry settings on a per-service basis in [virtual services](#virtual-services) without having to touch your service code. You can also further refine your retry behavior by adding per-retry timeouts, specifying the amount of time you want to wait for each retry attempt to successfully connect to the service.

The following example configures a maximum of 3 retries to connect to this service subset after an initial call failure, each with a 2 second timeout.

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
    retries:
      attempts: 3
      perTryTimeout: 2s
{{< /text >}}

### Circuit breakers {#circuit-breakers}

Circuit breakers are another useful mechanism Istio provides for creating resilient microservice-based applications.
In a circuit breaker, you set limits for calls to individual hosts within a service, such as the number of concurrent connections or how many times calls to this host have failed. Once that limit has been reached the circuit breaker "trips" and stops further connections to that host. Using a circuit breaker pattern enables fast failure rather than clients trying to connect to an overloaded or failing host.

As circuit breaking applies to "real" mesh destinations in a load balancing pool, you configure circuit breaker thresholds in [destination rules](#destination-rules), with the settings applying to each individual host in the service. The following example limits the number of concurrent connections for the `reviews` service workloads of the v1 subset to 100:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
    trafficPolicy:
      connectionPool:
        tcp:
          maxConnections: 100
{{< /text >}}

You can find out more about creating circuit breakers in [Circuit Breaking](/docs/tasks/traffic-management/circuit-breaking/).

### Fault injection {#fault-injection}

After you’ve configured your network, including failure recovery policies, you can use Istio’s fault injection mechanisms to test the failure recovery capacity of your application as a whole. Fault injection is a testing method that introduces errors into a system to ensure that it can withstand and recover from error conditions. Using fault injection can be particularly useful to ensure that your failure recovery policies aren’t incompatible or too restrictive, potentially resulting in critical services being unavailable.
{{< warning >}}
Currently, the fault injection configuration cannot be combined with retry or timeout configuration on the same virtual service, see [Traffic Management Problems](/docs/ops/common-problems/network-issues/#virtual-service-with-fault-injection-and-retrytimeout-policies-not-working-as-expected).
{{< /warning >}}
Unlike other mechanisms for introducing errors, such as delaying packets or killing pods at the network layer, Istio lets you inject faults at the application layer. This lets you inject more relevant failures, such as HTTP error codes, to get more relevant results.

You can inject two types of faults, both configured using a [virtual service](#virtual-services):

- Delays: Delays are timing failures. They mimic increased network latency or an overloaded upstream service.
- Aborts: Aborts are crash failures. They mimic failures in upstream services. Aborts usually manifest in the form of HTTP error codes or TCP connection failures.

For example, this virtual service introduces a 5 second delay for 1 out of every 1000 requests to the `ratings` service.

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      delay:
        percentage:
          value: 0.1
        fixedDelay: 5s
    route:
    - destination:
        host: ratings
        subset: v1
{{< /text >}}

For detailed instructions on how to configure delays and aborts, see [Fault Injection](/docs/tasks/traffic-management/fault-injection/).

### Working with your applications {#working-with-your-applications}

Istio failure recovery features are completely transparent to the application. Applications don’t know if an Envoy sidecar proxy is handling failures for a called service before returning a response. This means that if you are also setting failure recovery policies in your application code you need to keep in mind that both work independently, and therefore might conflict.

For example, suppose you have two timeouts, one configured in a virtual service and another in the application. The application sets a 2 second timeout for an API call to a service.
However, you configured a 3 second timeout with 1 retry in your virtual service. In this case, the application’s timeout kicks in first, so your Envoy timeout and retry attempt have no effect.

While Istio failure recovery features improve the reliability and availability of services in the mesh, applications must handle the failure or errors and take appropriate fallback actions. For example, when all instances in a load balancing pool have failed, Envoy returns an `HTTP 503` code. The application must implement any fallback logic needed to handle the `HTTP 503` error code.
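To exercise that kind of fallback logic without taking real instances down, an abort fault can force the error path. This is a sketch following the delay example above; the 100% percentage and the 503 status are illustrative choices, not values from the original page:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - fault:
      abort:
        percentage:
          value: 100
        httpStatus: 503
    route:
    - destination:
        host: ratings
        subset: v1
{{< /text >}}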
From [Kubernetes mutating and validating webhook mechanisms](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/):

{{< tip >}}
Admission webhooks are HTTP callbacks that receive admission requests and do something with them. You can define two types of admission webhooks, validating admission webhook and mutating admission webhook. With validating admission webhooks, you may reject requests to enforce custom admission policies. With mutating admission webhooks, you may change requests to enforce custom defaults.
{{< /tip >}}

Istio uses `ValidatingAdmissionWebhooks` for validating Istio configuration and `MutatingAdmissionWebhooks` for automatically injecting the sidecar proxy into user pods.

The webhook setup guides assume general familiarity with Kubernetes Dynamic Admission Webhooks. Consult the Kubernetes API references for detailed documentation of the [Mutating Webhook Configuration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#mutatingwebhookconfiguration-v1-admissionregistration-k8s-io) and [Validating Webhook Configuration](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.29/#validatingwebhookconfiguration-v1-admissionregistration-k8s-io).

## Verify dynamic admission webhook prerequisites

See the [platform setup instructions](/docs/setup/platform-setup/) for Kubernetes provider specific setup instructions. Webhooks will not function properly if the cluster is misconfigured. You can follow these steps once the cluster has been configured and dynamic webhooks and dependent features are not functioning properly.

1. Verify you’re using a [supported version](/docs/releases/supported-releases#support-status-of-istio-releases) ({{< supported_kubernetes_versions >}}) of [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) and of the Kubernetes server:

    {{< text bash >}}
    $ kubectl version --short
    Client Version: v1.29.0
    Server Version: v1.29.1
    {{< /text >}}
1. Verify that `admissionregistration.k8s.io/v1` is enabled:

    {{< text bash >}}
    $ kubectl api-versions | grep admissionregistration.k8s.io/v1
    admissionregistration.k8s.io/v1
    {{< /text >}}

1. Verify `MutatingAdmissionWebhook` and `ValidatingAdmissionWebhook` plugins are listed in the `kube-apiserver --enable-admission-plugins`. Access to this flag is [provider specific](/docs/setup/platform-setup/).

1. Verify the Kubernetes api-server has network connectivity to the webhook pod. For example, incorrect `http_proxy` settings can interfere with api-server operation (see related issues [here](https://github.com/kubernetes/kubernetes/pull/58698#discussion_r163879443) and [here](https://github.com/kubernetes/kubeadm/issues/666) for more information).
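As a further sanity check, you can confirm that Istio's webhook configurations are actually registered in the cluster. This is a sketch; the exact configuration names depend on your installation method and revision, so the `grep` here only filters for anything Istio-related:

{{< text bash >}}
$ kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations | grep istio
{{< /text >}}

If nothing is returned, the webhooks were never created and injection/validation cannot work regardless of connectivity.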
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/configuration/mesh/webhook/index.md
[Kubernetes liveness and readiness probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) describes several ways to configure liveness and readiness probes:

1. [Command](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command)
1. [HTTP request](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request)
1. [TCP probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-tcp-liveness-probe)
1. [gRPC probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-grpc-liveness-probe)

The command approach works with no changes required, but HTTP requests, TCP probes, and gRPC probes require Istio to make changes to the pod configuration.

The health check requests to the `liveness-http` service are sent by Kubelet. This becomes a problem when mutual TLS is enabled, because the Kubelet does not have an Istio issued certificate. Therefore the health check requests will fail.

TCP probe checks need special handling, because Istio redirects all incoming traffic into the sidecar, and so all TCP ports appear open. The Kubelet simply checks if some process is listening on the specified port, and so the probe will always succeed as long as the sidecar is running.

Istio solves both these problems by rewriting the application `PodSpec` readiness/liveness probe, so that the probe request is sent to the [sidecar agent](/docs/reference/commands/pilot-agent/).

## Liveness probe rewrite example

To demonstrate how the readiness/liveness probe is rewritten at the application `PodSpec` level, let us use the [liveness-http-same-port sample]({{< github_file >}}/samples/health-check/liveness-http-same-port.yaml).
First create and label a namespace for the example:

{{< text bash >}}
$ kubectl create namespace istio-io-health-rewrite
$ kubectl label namespace istio-io-health-rewrite istio-injection=enabled
{{< /text >}}

And deploy the sample application:

{{< text bash yaml >}}
$ kubectl apply -f - <<EOF
...
EOF
{{< /text >}}

Once deployed, you can inspect the pod's application container to see the changed path:

{{< text bash json >}}
$ kubectl get pod "$LIVENESS_POD" -n istio-io-health-rewrite -o json | jq '.spec.containers[0].livenessProbe.httpGet'
{
  "path": "/app-health/liveness-http/livez",
  "port": 15020,
  "scheme": "HTTP"
}
{{< /text >}}

The original `livenessProbe` path is now mapped against the new path in the sidecar container environment variable `ISTIO_KUBE_APP_PROBERS`:

{{< text bash json >}}
$ kubectl get pod "$LIVENESS_POD" -n istio-io-health-rewrite -o=jsonpath="{.spec.containers[1].env[?(@.name=='ISTIO_KUBE_APP_PROBERS')]}"
{
  "name":"ISTIO_KUBE_APP_PROBERS",
  "value":"{\"/app-health/liveness-http/livez\":{\"httpGet\":{\"path\":\"/foo\",\"port\":8001,\"scheme\":\"HTTP\"},\"timeoutSeconds\":1}}"
}
{{< /text >}}

For HTTP and gRPC requests, the sidecar agent redirects the request to the application and strips the response body, only returning the response code. For TCP probes, the sidecar agent will then do the port check while avoiding the traffic redirection.

The rewriting of problematic probes is enabled by default in all built-in Istio [configuration profiles](/docs/setup/additional-setup/config-profiles/) but can be disabled as described below.

## Liveness and readiness probes using the command approach

Istio provides a [liveness sample]({{< github_file >}}/samples/health-check/liveness-command.yaml) that implements this approach.
To demonstrate it working with mutual TLS enabled, first create a namespace for the example:

{{< text bash >}}
$ kubectl create ns istio-io-health
{{< /text >}}

To configure strict mutual TLS, run:

{{< text bash >}}
$ kubectl apply -f - <<EOF
...
EOF
{{< /text >}}

Next, change directory to the root of the Istio installation and run the following command to deploy the sample service:

{{< text bash >}}
$ kubectl -n istio-io-health apply -f <(istioctl kube-inject -f @samples/health-check/liveness-command.yaml@)
{{< /text >}}

To confirm that the liveness probes are working, check the status of the sample pod to verify that it is running.

{{< text bash >}}
$ kubectl -n istio-io-health get pod
NAME                        READY     STATUS    RESTARTS   AGE
liveness-6857c8775f-zdv9r   2/2       Running   0         4m
{{< /text >}}

## Liveness and readiness probes using the HTTP, TCP, and gRPC approach {#liveness-and-readiness-probes-using-the-http-request-approach}

As stated previously, Istio uses probe rewrite to implement HTTP, TCP, and gRPC probes by default. You can disable this feature either for specific pods, or globally.
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/configuration/mesh/app-health-check/index.md
### Disable the probe rewrite for a pod {#disable-the-http-probe-rewrite-for-a-pod}

You can [annotate the pod](/docs/reference/config/annotations/) with `sidecar.istio.io/rewriteAppHTTPProbers: "false"` to disable the probe rewrite option. Make sure you add the annotation to the [pod resource](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/) because it will be ignored anywhere else (for example, on an enclosing deployment resource).

{{< tabset category-name="disable-probe-rewrite" >}}

{{< tab name="HTTP Probe" category-value="http-probe" >}}

{{< text yaml >}}
kubectl apply -f - <<EOF
...
EOF
{{< /text >}}

{{< /tab >}}

{{< tab name="gRPC Probe" category-value="grpc-probe" >}}

{{< text yaml >}}
kubectl apply -f - <<EOF
...
EOF
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

This approach allows you to disable the health check probe rewrite gradually on individual deployments, without reinstalling Istio.

### Disable the probe rewrite globally

[Install Istio](/docs/setup/install/istioctl/) using `--set values.sidecarInjectorWebhook.rewriteAppHTTPProbe=false` to disable the probe rewrite globally.

**Alternatively**, update the configuration map for the Istio sidecar injector:

{{< text bash >}}
$ kubectl get cm istio-sidecar-injector -n istio-system -o yaml | sed -e 's/"rewriteAppHTTPProbe": true/"rewriteAppHTTPProbe": false/' | kubectl apply -f -
{{< /text >}}

## Cleanup

Remove the namespaces used for the examples:

{{< text bash >}}
$ kubectl delete ns istio-io-health istio-io-health-rewrite
{{< /text >}}
In order to program the service mesh, the Istio control plane (Istiod) reads a variety of configurations, including core Kubernetes types like `Service` and `Node`, and Istio's own types like `Gateway`. These are then sent to the data plane (see [Architecture](/docs/ops/deployment/architecture/) for more information).

By default, the control plane will read all configuration in all namespaces. Each proxy instance will receive configuration for all namespaces as well. This includes information about workloads that are not enrolled in the mesh. This default ensures correct behavior out of the box, but comes with a scalability cost. Each configuration has a cost (in CPU and memory, primarily) to maintain and keep up to date. At large scales, it is critical to limit the configuration scope to avoid excessive resource consumption.

## Scoping mechanisms

Istio offers a few tools to help control the scope of a configuration to meet different use cases. Depending on your requirements, these can be used alone or together.

* `Sidecar` provides a mechanism for specific workloads to _import_ a set of configurations
* `exportTo` provides a mechanism to _export_ a configuration to a set of workloads
* `discoverySelectors` provides a mechanism to let Istio completely ignore a set of configurations

### `Sidecar` import

The [`egress.hosts`](/docs/reference/config/networking/sidecar/#IstioEgressListener) field in `Sidecar` allows specifying a list of configurations to import. Only configurations matching the specified criteria will be seen by sidecars impacted by the `Sidecar` resource.
For example:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: Sidecar
metadata:
  name: default
spec:
  egress:
  - hosts:
    - "./*" # Import all configuration from our own namespace
    - "bookinfo/*" # Import all configuration from the bookinfo namespace
    - "external-services/example.com" # Import only 'example.com' from the external-services namespace
{{< /text >}}

### `exportTo`

Istio's `VirtualService`, `DestinationRule`, and `ServiceEntry` provide a `spec.exportTo` field. Similarly, `Service` can be configured with the `networking.istio.io/exportTo` annotation.

Unlike `Sidecar` which allows a workload owner to control what dependencies it has, `exportTo` works in the opposite way, and allows the service owners to control their own service's visibility. For example, this configuration makes the `details` `Service` only visible to its own namespace, and the `client` namespace:

{{< text yaml >}}
apiVersion: v1
kind: Service
metadata:
  name: details
  annotations:
    networking.istio.io/exportTo: ".,client"
spec: ...
{{< /text >}}

### `DiscoverySelectors`

While the previous controls operate on a workload or service owner level, [`DiscoverySelectors`](/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig) provides mesh wide control over configuration visibility. Discovery selectors allow specifying criteria for which namespaces should be visible to the control plane. Any namespaces not matching are ignored by the control plane entirely.

This can be configured as part of `meshConfig` during installation. For example:

{{< text yaml >}}
meshConfig:
  discoverySelectors:
  - matchLabels: # Allow any namespaces with `istio-discovery=enabled`
      istio-discovery: enabled
  - matchLabels: # Allow "kube-system"; Kubernetes automatically adds this label to each namespace
      kubernetes.io/metadata.name: kube-system
{{< /text >}}

{{< warning >}}
Istiod will always open a watch to Kubernetes for all namespaces.
However, discovery selectors will ignore objects that are not selected very early in its processing, minimizing costs.
{{< /warning >}}

## Frequently asked questions

### How can I understand the cost of a certain configuration?

In order to get the best return-on-investment for scoping down configuration, it can be helpful to understand the cost of each object. Unfortunately, there is not a straightforward answer; scalability depends on a large number of factors. However, there are a few general guidelines:
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/configuration/mesh/configuration-scoping/index.md
Configuration *changes* are expensive in Istio, as they require recomputation. While `Endpoints` changes (generally from a Pod scaling up or down) are heavily optimized, most other configurations are fairly expensive. This can be especially harmful when controllers are constantly making changes to an object (sometimes this happens accidentally!). Some tools to detect which configurations are changing:

* Istiod will log each change like: `Push debounce stable 1 for config Gateway/default/gateway: ..., full=true`. This shows a `Gateway` object in the `default` namespace changed. `full=false` would represent an optimized update such as `Endpoint`. Note: changes to `Service` and `Endpoints` will all show as `ServiceEntry`.
* Istiod exposes metrics `pilot_k8s_cfg_events` and `pilot_k8s_reg_events` for each change.
* `kubectl get --watch -oyaml --show-managed-fields` can show changes to an object (or objects) to help understand what is changing, and by whom.

[Headless services](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) (besides ones declared as [HTTP](/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection)) scale with the number of instances. This makes large headless services expensive, and a good candidate for exclusion with `exportTo` or equivalent.

### What happens if I connect to a service outside of my scope?

When connecting to a service that has been excluded through one of the scoping mechanisms, the data plane will not know anything about the destination, so it will be treated as [Unmatched traffic](/docs/ops/configuration/traffic-management/traffic-routing/#unmatched-traffic).

### What about Gateways?

While [Gateways](/docs/setup/additional-setup/gateway/) will respect `exportTo` and `DiscoverySelectors`, `Sidecar` objects do not impact Gateways. However, unlike sidecars, gateways do not have configuration for the entire cluster by default.
Instead, each configuration is explicitly attached to the gateway, which mostly avoids this problem. However, [currently](https://github.com/istio/istio/issues/29131) part of the data plane configuration (a "cluster", in Envoy terms) is always sent for the entire cluster, even if it is not referenced explicitly.
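The Istiod change log lines described earlier can be tailed directly when hunting for noisy configuration. This is a sketch assuming the default `istiod` deployment name in the `istio-system` namespace; adjust for revisioned installs:

{{< text bash >}}
$ kubectl logs deploy/istiod -n istio-system | grep "Push debounce"
{{< /text >}}

Frequent lines for the same object are a strong hint that a controller is repeatedly rewriting it.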
Istio's [default images](https://hub.docker.com/r/istio/base) are based on `ubuntu` with some extra tools added. An alternative image based on [distroless images](https://github.com/GoogleContainerTools/distroless) is also available. These images strip all non-essential executables and libraries, offering the following benefits:

- The attack surface is reduced as they include the smallest possible set of vulnerabilities.
- The images are smaller, which allows faster start-up.

See also the [Why should I use distroless images?](https://github.com/GoogleContainerTools/distroless#why-should-i-use-distroless-images) section in the official distroless README.

## Install distroless images

Follow the [Installation Steps](/docs/setup/install/istioctl/) to set up Istio. Add the `variant` option to use the *distroless images*.

{{< text bash >}}
$ istioctl install --set values.global.variant=distroless
{{< /text >}}

If you are only interested in using distroless images for injected proxy images, you can also use the `image.imageType` field in [Proxy Config](/docs/reference/config/networking/proxy-config/#ProxyImage). Note the above `variant` flag will automatically set this for you.

## Debugging

Distroless images are missing all debugging tools (including a shell!). While great for security, this limits the ability to do ad-hoc debugging using `kubectl exec` into the proxy container. Fortunately, [Ephemeral Containers](https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/) can help here. `kubectl debug` can attach a temporary container to a pod. By using an image with extra tools, we can debug as we used to:

{{< text shell >}}
$ kubectl debug --image istio/base --target istio-proxy -it app-65c6749c9d-t549t
Defaulting debug container name to debugger-cdftc.
If you don't see a command prompt, try pressing enter.
root@app-65c6749c9d-t549t:/# curl example.com
{{< /text >}}

This deploys a new ephemeral container using the `istio/base`.
This is the same base image used in non-distroless Istio images, and contains a variety of tools useful to debug Istio. However, any image will work. The container is also attached to the process namespace of the sidecar proxy (`--target istio-proxy`) and the network namespace of the pod.
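The per-workload `image.imageType` route mentioned above can be sketched with a `ProxyConfig` resource. This is an illustrative example only; the resource name and `my-app` namespace are hypothetical, and you should confirm the field layout against the Proxy Config reference for your Istio version:

{{< text yaml >}}
apiVersion: networking.istio.io/v1beta1
kind: ProxyConfig
metadata:
  name: distroless-proxies
  namespace: my-app # hypothetical namespace; applies to proxies injected there
spec:
  image:
    imageType: distroless
{{< /text >}}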
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/configuration/security/harden-docker-images/index.md
## Background

This page shows common patterns of using Istio security policies. You may find them useful in your deployment or use this as a quick reference to example policies.

The policies demonstrated here are just examples and require changes to adapt to your actual environment before applying.

Also read the [authentication](/docs/tasks/security/authentication/authn-policy) and [authorization](/docs/tasks/security/authorization) tasks for a hands-on tutorial of using the security policy in more detail.

## Require different JWT issuer per host

JWT validation is common on the ingress gateway and you may want to require different JWT issuers for different hosts. You can use the authorization policy for fine grained JWT validation in addition to the [request authentication](/docs/tasks/security/authentication/authn-policy/#end-user-authentication) policy.

Use the following policy if you want to allow access to the given hosts if the JWT principal matches. Access to other hosts will always be denied.

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: jwt-per-host
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: ALLOW
  rules:
  - from:
    - source:
        # the JWT token must have issuer with suffix "@example.com"
        requestPrincipals: ["*@example.com"]
    to:
    - operation:
        hosts: ["example.com", "*.example.com"]
  - from:
    - source:
        # the JWT token must have issuer with suffix "@another.org"
        requestPrincipals: ["*@another.org"]
    to:
    - operation:
        hosts: ["another.org", "*.another.org"]
{{< /text >}}

## Namespace isolation

The following two policies enable strict mTLS on namespace `foo`, and allow traffic from the same namespace.
{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: foo
spec:
  mtls:
    mode: STRICT
---
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: foo-isolation
  namespace: foo
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["foo"]
{{< /text >}}

## Namespace isolation with ingress exception

The following two policies enable strict mTLS on namespace `foo`, and allow traffic from the same namespace and also from the ingress gateway.

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: foo
spec:
  mtls:
    mode: STRICT
---
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: ns-isolation-except-ingress
  namespace: foo
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["foo"]
    - source:
        principals: ["cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]
{{< /text >}}

## Require mTLS in authorization layer (defense in depth)

You have configured `PeerAuthentication` to `STRICT`, but want to make sure the traffic is indeed protected by mTLS with an extra check in the authorization layer, i.e., defense in depth.

The following policy denies the request if the principal is empty. The principal will be empty if plain text is used. In other words, the policy allows requests only if the principal is non-empty. `"*"` means a non-empty match, and using it with `notPrincipals` means matching on an empty principal.

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: require-mtls
  namespace: foo
spec:
  action: DENY
  rules:
  - from:
    - source:
        notPrincipals: ["*"]
{{< /text >}}

## Require mandatory authorization check with `DENY` policy

You can use the `DENY` policy if you want to require a mandatory authorization check that must be satisfied and cannot be bypassed by another, more permissive `ALLOW` policy.
This works because the `DENY` policy takes precedence over the `ALLOW` policy and can deny a request early, before any `ALLOW` policies are evaluated.

Use the following policy to enforce mandatory JWT validation in addition to the [request authentication](/docs/tasks/security/authentication/authn-policy/#end-user-authentication) policy. The policy denies the request if the request principal is empty. The request principal will be empty if JWT validation failed. In other words, the policy allows requests only if the request principal is non-empty. `"*"` means a non-empty match, and using it with `notRequestPrincipals` means matching on an empty request principal.

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: DENY
  rules:
  - from:
    - source:
        notRequestPrincipals: ["*"]
{{< /text >}}
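The precedence described above (`DENY` evaluated before `ALLOW`, allow-by-default only when no `ALLOW` policies exist) can be sketched as a small evaluation function. This is an illustrative model of the documented evaluation order, not Istio's implementation; the dictionary-based policy and request shapes are simplified assumptions for the sketch.

```python
def matches(policy, request):
    # A policy "matches" if every field it specifies equals the request's value.
    return all(request.get(k) == v for k, v in policy.items())

def evaluate(request, deny_policies, allow_policies):
    # DENY policies are checked first; any match denies the request outright.
    if any(matches(p, request) for p in deny_policies):
        return "denied"
    # With no ALLOW policies, the request is allowed by default.
    if not allow_policies:
        return "allowed"
    # Otherwise at least one ALLOW policy must match.
    return "allowed" if any(matches(p, request) for p in allow_policies) else "denied"

# A DENY policy matching an empty request principal cannot be bypassed
# even by an allow-all ALLOW policy:
deny = [{"requestPrincipal": ""}]   # models notRequestPrincipals: ["*"]
allow = [{}]                        # allow-all

evaluate({"requestPrincipal": ""}, deny, allow)                   # "denied"
evaluate({"requestPrincipal": "user@example.com"}, deny, allow)   # "allowed"
```

Note that the allow-all `ALLOW` policy has no effect on unauthenticated requests: the `DENY` check runs first, which is exactly why a `DENY` policy makes the check mandatory.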
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/configuration/security/security-policy-examples/index.md
Similarly, use the following policy to require mandatory namespace isolation while also allowing requests from the ingress gateway. The policy denies the request if the namespace is not `foo` and the principal is not `cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account`. In other words, the policy allows the request only if the namespace is `foo` or the principal is `cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account`.

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: ns-isolation-except-ingress
  namespace: foo
spec:
  action: DENY
  rules:
  - from:
    - source:
        notNamespaces: ["foo"]
        notPrincipals: ["cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]
{{< /text >}}
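Because both negative conditions live in the same `source` block, they are ANDed: the `DENY` rule matches only when the namespace is not `foo` *and* the principal is not the ingress gateway's service account. A minimal truth-table sketch of that logic (an illustrative model, not Istio's implementation; the function names are ours):

```python
INGRESS_SA = "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"

def deny_rule_matches(namespace: str, principal: str) -> bool:
    # Conditions within a single `source` are ANDed, so the rule matches
    # only when BOTH negative conditions hold.
    return namespace != "foo" and principal != INGRESS_SA

def allowed(namespace: str, principal: str) -> bool:
    # The request is allowed whenever the DENY rule does not match.
    return not deny_rule_matches(namespace, principal)

allowed("foo", "cluster.local/ns/foo/sa/default")   # True: same namespace
allowed("bar", INGRESS_SA)                          # True: ingress gateway
allowed("bar", "cluster.local/ns/bar/sa/default")   # False: denied
```

Had the two conditions been placed in separate `source` blocks instead, they would be ORed, and the policy would deny far more traffic than intended.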