The Envoy proxy keeps detailed statistics about network traffic.

Envoy's statistics only cover the traffic for a particular Envoy instance. See [Observability](/docs/tasks/observability/) for persistent per-service Istio telemetry. The statistics the Envoy proxies record can provide more information about specific pod instances.

To see the statistics for a pod:

{{< text syntax=bash snip_id=get_stats >}}
$ kubectl exec "$POD" -c istio-proxy -- pilot-agent request GET stats
{{< /text >}}

Envoy generates statistics about its behavior, scoping the statistics by proxy function. Examples include:

- [Upstream connection](https://www.envoyproxy.io/docs/envoy/latest/configuration/upstream/cluster_manager/cluster_stats)
- [Listener](https://www.envoyproxy.io/docs/envoy/latest/configuration/listeners/stats)
- [HTTP Connection Manager](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/stats)
- [TCP proxy](https://www.envoyproxy.io/docs/envoy/latest/configuration/listeners/network_filters/tcp_proxy_filter#statistics)
- [Router](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/router_filter.html?highlight=vhost#statistics)

By default, Istio configures Envoy to record a minimal set of statistics to reduce the overall CPU and memory footprint of the installed proxies. The default collection keys are:

- `cluster_manager`
- `listener_manager`
- `server`
- `cluster.xds-grpc`

To see the Envoy settings for statistics data collection, use [`istioctl proxy-config bootstrap`](/docs/reference/commands/istioctl/#istioctl-proxy-config-bootstrap) and follow the [deep dive into Envoy configuration](/docs/ops/diagnostic-tools/proxy-cmd/#deep-dive-into-envoy-configuration). Envoy only collects statistical data on items matching the `inclusion_list` within the `stats_matcher` JSON element.

{{< tip >}}
Note: The names of Envoy statistics can vary based on the composition of the Envoy configuration. As a result, the exposed names of statistics for Envoys managed by Istio are subject to the configuration behavior of Istio. If you build or maintain dashboards or alerts based on Envoy statistics, it is **strongly recommended** that you examine the statistics in a canary environment **before upgrading Istio**.
{{< /tip >}}

To configure the Istio proxy to record additional statistics, you can add [`ProxyConfig.ProxyStatsMatcher`](/docs/reference/config/istio.mesh.v1alpha1/#ProxyStatsMatcher) to your mesh config. For example, to enable stats for circuit breakers, request retries, upstream connections, and request timeouts globally, you can specify the stats matcher as follows:

{{< tip >}}
The proxy needs to restart to pick up the stats matcher configuration.
{{< /tip >}}

{{< text syntax=yaml snip_id=proxyStatsMatcher >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyStatsMatcher:
        inclusionRegexps:
        - ".*outlier_detection.*"
        - ".*upstream_rq_retry.*"
        - ".*upstream_cx_.*"
        inclusionSuffixes:
        - "upstream_rq_timeout"
{{< /text >}}

You can also override the global stats matching configuration per proxy by using the `proxy.istio.io/config` annotation. For example, to configure the same stats generation inclusion as above, you can add the annotation to a gateway proxy or a workload as follows:

{{< text syntax=yaml snip_id=proxyIstioConfig >}}
metadata:
  annotations:
    proxy.istio.io/config: |-
      proxyStatsMatcher:
        inclusionRegexps:
        - ".*outlier_detection.*"
        - ".*upstream_rq_retry.*"
        - ".*upstream_cx_.*"
        inclusionSuffixes:
        - "upstream_rq_timeout"
{{< /text >}}

{{< tip >}}
Note: If you are using `sidecar.istio.io/statsInclusionPrefixes`, `sidecar.istio.io/statsInclusionRegexps`, or `sidecar.istio.io/statsInclusionSuffixes`, consider switching to the `ProxyConfig`-based configuration, as it provides a global default and a uniform way to override it at both gateway and sidecar proxies.
{{< /tip >}}
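The effect of inclusion patterns like those above can be sketched offline. The stat names below are illustrative, not taken from a live proxy, and the single `grep -E` expression only approximates the combined `inclusionRegexps` and `inclusionSuffixes` matching:

```shell
# Hypothetical Envoy stat lines (name: value); not from a real proxy
stats='cluster.outbound.reviews.outlier_detection.ejections_active: 0
cluster.outbound.reviews.upstream_rq_retry: 3
cluster.outbound.reviews.upstream_cx_total: 12
listener_manager.listener_added: 4
cluster.outbound.reviews.upstream_rq_timeout: 1'

# Approximate the matcher: three regexes plus one suffix match on the stat name
matched=$(printf '%s\n' "$stats" | grep -E 'outlier_detection|upstream_rq_retry|upstream_cx_|upstream_rq_timeout: ')
printf '%s\n' "$matched"
```

Only the `listener_manager` line is filtered out; the other four stat names match one of the inclusion patterns.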
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/configuration/telemetry/envoy-stats/index.md
## Overview

This guide is meant to provide operational guidance on how to configure monitoring of Istio meshes comprised of two or more individual Kubernetes clusters. It is not meant to establish the *only* possible path forward, but rather to demonstrate a workable approach to multicluster telemetry with Prometheus.

Our recommendation for multicluster monitoring of Istio with Prometheus is built upon the foundation of Prometheus [hierarchical federation](https://prometheus.io/docs/prometheus/latest/federation/#hierarchical-federation). Prometheus instances that are deployed locally to each cluster by Istio act as initial collectors that then federate up to a production mesh-wide Prometheus instance. That mesh-wide Prometheus can either live outside of the mesh (external), or in one of the clusters within the mesh.

## Multicluster Istio setup

Follow the [multicluster installation](/docs/setup/install/multicluster/) section to set up your Istio clusters in one of the supported [multicluster deployment models](/docs/ops/deployment/deployment-models/#multiple-clusters). For the purpose of this guide, any of those approaches will work, with the following caveat:

**Ensure that a cluster-local Istio Prometheus instance is installed in each cluster.**

Individual Istio deployment of Prometheus in each cluster is required to form the basis of cross-cluster monitoring by way of federation to a production-ready instance of Prometheus that runs externally or in one of the clusters.

Validate that you have an instance of Prometheus running in each cluster:

{{< text bash >}}
$ kubectl -n istio-system get services prometheus
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
prometheus   ClusterIP   10.8.4.109   <none>        9090/TCP   20h
{{< /text >}}

## Configure Prometheus federation

### External production Prometheus

There are several reasons why you may want to have a Prometheus instance running outside of your Istio deployment. Perhaps you want long-term monitoring disjoint from the cluster being monitored. Perhaps you want to monitor multiple separate meshes in a single place. Or maybe you have other motivations. Whatever your reason is, you'll need some special configurations to make it all work.

{{< image width="80%" link="./external-production-prometheus.svg" alt="Architecture of external Production Prometheus for monitoring multicluster Istio." caption="External Production Prometheus for monitoring multicluster Istio" >}}

{{< warning >}}
This guide demonstrates connectivity to cluster-local Prometheus instances, but does not address security considerations. For production use, secure access to each Prometheus endpoint with HTTPS. In addition, take precautions, such as using an internal load-balancer instead of a public endpoint and the appropriate configuration of firewall rules.
{{< /warning >}}

Istio provides a way to expose cluster services externally via [Gateways](/docs/reference/config/networking/gateway/). You can configure an ingress gateway for the cluster-local Prometheus, providing external connectivity to the in-cluster Prometheus endpoint.

For each cluster, follow the appropriate instructions from the [Remotely Accessing Telemetry Addons](/docs/tasks/observability/gateways/#option-1-secure-access-https) task. Also note that you **SHOULD** establish secure (HTTPS) access.

Next, configure your external Prometheus instance to access the cluster-local Prometheus instances using a configuration like the following (replacing the ingress domain and cluster name):

{{< text yaml >}}
scrape_configs:
- job_name: 'federate-{{CLUSTER_NAME}}'
  scrape_interval: 15s
  honor_labels: true
  metrics_path: '/federate'
  params:
    'match[]':
    - '{job="kubernetes-pods"}'
  static_configs:
  - targets:
    - 'prometheus.{{INGRESS_DOMAIN}}'
    labels:
      cluster: '{{CLUSTER_NAME}}'
{{< /text >}}

Notes:

* `CLUSTER_NAME` should be set to the same value that you used to create the cluster (set via `values.global.multiCluster.clusterName`).
* No authentication to the Prometheus endpoint(s) is provided. This means that anyone can query your cluster-local Prometheus instances. This may not be desirable.
* Without proper HTTPS configuration of the gateway, everything is transported in plaintext. This may not be desirable.
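When you federate several clusters, each one needs a nearly identical scrape job. A small shell sketch can stamp the jobs out from the template above; the cluster names and ingress domains here are hypothetical placeholders:

```shell
# Hypothetical cluster list: "<name>:<ingress domain>" pairs
clusters='cluster-1:east.example.com
cluster-2:west.example.com'

scrape_configs='scrape_configs:'
while IFS=: read -r name domain; do
  # Append one federation job per cluster, mirroring the template above
  scrape_configs="$scrape_configs
- job_name: 'federate-$name'
  scrape_interval: 15s
  honor_labels: true
  metrics_path: '/federate'
  params:
    'match[]':
    - '{job=\"kubernetes-pods\"}'
  static_configs:
  - targets:
    - 'prometheus.$domain'
    labels:
      cluster: '$name'"
done <<EOF
$clusters
EOF

printf '%s\n' "$scrape_configs"
```

The generated block can then be pasted into the external Prometheus configuration; one job per cluster keeps the `cluster` label distinct for each federated source.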
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/configuration/telemetry/monitoring-multicluster-prometheus/index.md
### Production Prometheus on an in-mesh cluster

If you prefer to run the production Prometheus in one of the clusters, you need to establish connectivity from it to the other cluster-local Prometheus instances in the mesh. This is really just a variation of the configuration for external federation. In this case, the configuration on the cluster running the production Prometheus is different from the configuration for remote cluster Prometheus scraping.

{{< image width="80%" link="./in-mesh-production-prometheus.svg" alt="Architecture of in-mesh Production Prometheus for monitoring multicluster Istio." caption="In-mesh Production Prometheus for monitoring multicluster Istio" >}}

Configure your production Prometheus to access both the *local* and *remote* Prometheus instances.

First execute the following command:

{{< text bash >}}
$ kubectl -n istio-system edit cm prometheus -o yaml
{{< /text >}}

Then add configurations for the *remote* clusters (replacing the ingress domain and cluster name for each cluster) and add one configuration for the *local* cluster:

{{< text yaml >}}
scrape_configs:
- job_name: 'federate-{{REMOTE_CLUSTER_NAME}}'
  scrape_interval: 15s
  honor_labels: true
  metrics_path: '/federate'
  params:
    'match[]':
    - '{job="kubernetes-pods"}'
  static_configs:
  - targets:
    - 'prometheus.{{REMOTE_INGRESS_DOMAIN}}'
    labels:
      cluster: '{{REMOTE_CLUSTER_NAME}}'
- job_name: 'federate-local'
  honor_labels: true
  metrics_path: '/federate'
  metric_relabel_configs:
  - replacement: '{{CLUSTER_NAME}}'
    target_label: cluster
  kubernetes_sd_configs:
  - role: pod
    namespaces:
      names: ['istio-system']
  params:
    'match[]':
    - '{__name__=~"istio_(.*)"}'
    - '{__name__=~"pilot(.*)"}'
{{< /text >}}
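The `match[]` selectors in the `federate-local` job keep only Istio-generated series. A quick offline sketch of that filter, using hypothetical metric names:

```shell
# Hypothetical metric names, for illustration only
metrics='istio_requests_total
istio_request_duration_milliseconds_bucket
pilot_xds_pushes
envoy_cluster_upstream_cx_total
process_cpu_seconds_total'

# Approximates {__name__=~"istio_(.*)"} and {__name__=~"pilot(.*)"}
kept=$(printf '%s\n' "$metrics" | grep -E '^(istio_|pilot)')
printf '%s\n' "$kept"
```

Only the `istio_*` and `pilot*` series survive; generic Envoy and process metrics are not federated by these selectors.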
{{< boilerplate alpha >}}

The [WasmPlugin API](/docs/reference/config/proxy_extensions/wasm-plugin) provides a method for [distributing Wasm modules](/docs/tasks/extensibility/wasm-module-distribution) to proxies. Since each proxy will pull Wasm modules from a remote registry or an HTTP server, understanding how Istio chooses to pull modules is important in terms of usability as well as performance.

## Image pull policy and exceptions

Analogous to the `ImagePullPolicy` of Kubernetes, [WasmPlugin](/docs/reference/config/proxy_extensions/wasm-plugin/#WasmPlugin) also has the notion of `IfNotPresent` and `Always`, which mean "use the cached module" and "always pull the module regardless of the cache", respectively. Users explicitly configure the behavior for Wasm module retrieval with the `ImagePullPolicy` field. However, user-provided behavior can be overridden by Istio in the following scenarios:

1. If the user sets `sha256` in [WasmPlugin](/docs/reference/config/proxy_extensions/wasm-plugin/#WasmPlugin), the `IfNotPresent` policy is used regardless of `ImagePullPolicy`.
1. If the `url` field points to an OCI image and it has a digest suffix (e.g., `gcr.io/foo/bar@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef`), the `IfNotPresent` policy is used.

When `ImagePullPolicy` is not specified for a resource, Istio defaults to `IfNotPresent` behavior. However, if the provided `url` field specifies an OCI image that has a tag value of `latest`, Istio will use `Always` behavior.

## Lifecycle of cached modules

Each proxy, whether a sidecar proxy or a gateway, caches Wasm modules. The lifetime of a cached Wasm module is therefore bounded by the lifetime of the corresponding pod. In addition, there is an expiration mechanism for keeping the proxy memory footprint to a minimum: if a cached Wasm module is not used for a certain amount of time, the module is purged. This expiration can be configured via the environment variables `WASM_MODULE_EXPIRY` and `WASM_PURGE_INTERVAL` of [pilot-agent](/docs/reference/commands/pilot-agent/#envvars), which are the duration of expiration and the interval for checking the expiration, respectively.

## The meaning of "Always"

In Kubernetes, `ImagePullPolicy: Always` means that an image is pulled directly from its source each time a pod is created. Every time a new pod is started, Kubernetes pulls the image anew. For a `WasmPlugin`, `ImagePullPolicy: Always` means that Istio will pull an image directly from its source each time the corresponding `WasmPlugin` Kubernetes resource is created or changed. Please note that a change not only in `spec` but also in `metadata` triggers the pulling of a Wasm module when the `Always` policy is used. This can mean that an image is pulled from source several times over the lifetime of a pod, and over the lifetime of an individual proxy.
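The precedence rules above can be summarized as a small decision function. This is only a sketch of the documented behavior, not Istio code; the function name and argument order are invented for illustration:

```shell
# effective_policy <ImagePullPolicy or ""> <sha256 or ""> <url>
# Sketch of the WasmPlugin pull-policy precedence described above.
effective_policy() {
  policy=$1 sha256=$2 url=$3
  # 1. sha256 set on the WasmPlugin wins, regardless of ImagePullPolicy
  if [ -n "$sha256" ]; then echo IfNotPresent; return; fi
  # 2. A digest suffix in the OCI URL also forces IfNotPresent
  case $url in
    *@sha256:*) echo IfNotPresent; return ;;
  esac
  # 3. Unspecified policy: default IfNotPresent, except :latest -> Always
  if [ -z "$policy" ]; then
    case $url in
      *:latest) echo Always ;;
      *)        echo IfNotPresent ;;
    esac
    return
  fi
  echo "$policy"
}

effective_policy "" "" "gcr.io/foo/bar:latest"           # Always
effective_policy "Always" "" "gcr.io/foo/bar@sha256:abc" # IfNotPresent
```

The second call shows the override: even an explicit `Always` is reduced to `IfNotPresent` when the URL pins a digest, since the content can never change.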
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/configuration/extensibility/wasm-pull-policy/index.md
One of Istio's most important features is the ability to lock down and secure network traffic to, from, and within the mesh. However, configuring TLS settings can be confusing and a common source of misconfiguration. This document attempts to explain the various connections involved when sending requests in Istio and how their associated TLS settings are configured. Refer to [TLS configuration mistakes](/docs/ops/common-problems/network-issues/#tls-configuration-mistakes) for a summary of some of the most common TLS configuration problems.

## Sidecars

Sidecar traffic has a variety of associated connections. Let's break them down one at a time.

{{< image width="100%" link="sidecar-connections.svg" alt="Sidecar proxy network connections" title="Sidecar connections" caption="Sidecar proxy network connections" >}}

1. **External inbound traffic**

    This is traffic coming from an outside client that is captured by the sidecar. If the client is inside the mesh, this traffic may be encrypted with Istio mutual TLS. By default, the sidecar will be configured to accept both mTLS and non-mTLS traffic, known as `PERMISSIVE` mode. The mode can alternatively be configured to `STRICT`, where traffic must be mTLS, or `DISABLE`, where traffic must be plaintext. The mTLS mode is configured using a [`PeerAuthentication` resource](/docs/reference/config/security/peer_authentication/).

1. **Local inbound traffic**

    This is traffic going to your application service, from the sidecar. This traffic will always be forwarded as-is. Note that this does not mean it's always plaintext; the sidecar may pass a TLS connection through. It just means that a new TLS connection will never be originated from the sidecar.

1. **Local outbound traffic**

    This is outgoing traffic from your application service that is intercepted by the sidecar. Your application may be sending plaintext or TLS traffic. If [automatic protocol selection](/docs/ops/configuration/traffic-management/protocol-selection/#automatic-protocol-selection) is enabled, Istio will automatically detect the protocol. Otherwise you should use the port name in the destination service to [manually specify the protocol](/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection).

1. **External outbound traffic**

    This is traffic leaving the sidecar to some external destination. Traffic can be forwarded as-is, or a TLS connection can be initiated (mTLS or standard TLS). This is controlled using the TLS mode setting in the `trafficPolicy` of a [`DestinationRule` resource](/docs/reference/config/networking/destination-rule/). A mode setting of `DISABLE` will send plaintext, while `SIMPLE`, `MUTUAL`, and `ISTIO_MUTUAL` will originate a TLS connection.

The key takeaways are:

- `PeerAuthentication` is used to configure what type of mTLS traffic the sidecar will accept.
- `DestinationRule` is used to configure what type of TLS traffic the sidecar will send.
- Port names, or automatic protocol selection, determine which protocol the sidecar will parse traffic as.

## Auto mTLS

As described above, a `DestinationRule` controls whether outgoing traffic uses mTLS or not. However, configuring this for every workload can be tedious. Typically, you want Istio to always use mTLS wherever possible, and only send plaintext to workloads that are not part of the mesh (i.e., ones without sidecars). Istio makes this easy with a feature called "Auto mTLS". If TLS settings are not explicitly configured in a `DestinationRule`, the sidecar will automatically determine if [Istio mutual TLS](/about/faq/#difference-between-mutual-and-istio-mutual) should be sent. This means that without any configuration, all inter-mesh traffic will be mTLS encrypted.

## Gateways

Any given request to a gateway will have two connections.

{{< image width="100%" link="gateway-connections.svg" alt="Gateway network connections" title="Gateway connections" caption="Gateway network connections" >}}

1. The inbound request, initiated by some client such as `curl` or a web browser. This is often called the "downstream" connection.
1. The outbound request, initiated by the gateway to some backend. This is often called the "upstream" connection.

Both of these connections have independent TLS configurations.

Note that the configuration of ingress and egress gateways is identical.
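To make the inbound/outbound split concrete, the two resources described above pair roughly like this. The namespace and host names below are hypothetical, shown only to illustrate how `PeerAuthentication` (inbound mTLS acceptance) and `DestinationRule` (outbound TLS origination) fit together:

```yaml
# Hypothetical sketch; namespace and host are placeholders
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: foo
spec:
  mtls:
    mode: STRICT          # inbound: sidecar accepts only mTLS
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-mtls
  namespace: foo
spec:
  host: reviews.foo.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL  # outbound: sidecar originates Istio mTLS
```

With auto mTLS, the `DestinationRule` half is usually unnecessary; it is shown here only to make the outbound setting explicit.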
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/configuration/traffic-management/tls-configuration/index.md
The `istio-ingress-gateway` and `istio-egress-gateway` are just two specialized gateway deployments. The difference is that the client of an ingress gateway is running outside of the mesh, while in the case of an egress gateway, the destination is outside of the mesh.

### Inbound

As part of the inbound request, the gateway must decode the traffic in order to apply routing rules. This is done based on the server configuration in a [`Gateway` resource](/docs/reference/config/networking/gateway/). For example, if an inbound connection is plaintext HTTP, the port protocol is configured as `HTTP`:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: Gateway
...
servers:
- port:
    number: 80
    name: http
    protocol: HTTP
{{< /text >}}

Similarly, for raw TCP traffic, the protocol would be set to `TCP`.

For TLS connections, there are a few more options:

1. What protocol is encapsulated? If the connection is HTTPS, the server protocol should be configured as `HTTPS`. Otherwise, for a raw TCP connection encapsulated with TLS, the protocol should be set to `TLS`.

1. Is the TLS connection terminated or passed through? For passthrough traffic, configure the TLS mode field to `PASSTHROUGH`:

    {{< text yaml >}}
    apiVersion: networking.istio.io/v1
    kind: Gateway
    ...
    servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: PASSTHROUGH
    {{< /text >}}

    In this mode, Istio will route based on SNI information and forward the connection as-is to the destination.

1. Should mutual TLS be used? Mutual TLS can be configured through the TLS mode `MUTUAL`. When this is configured, a client certificate will be requested and verified against the configured `caCertificates` or `credentialName`:

    {{< text yaml >}}
    apiVersion: networking.istio.io/v1
    kind: Gateway
    ...
    servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: MUTUAL
        caCertificates: ...
    {{< /text >}}

### Outbound

While the inbound side configures what type of traffic to expect and how to process it, the outbound configuration controls what type of traffic the gateway will send. This is configured by the TLS settings in a `DestinationRule`, just like external outbound traffic from [sidecars](#sidecars), or [auto mTLS](#auto-mtls) by default.

The only difference is that you should be careful to consider the `Gateway` settings when configuring this. For example, if the `Gateway` is configured with TLS `PASSTHROUGH` while the `DestinationRule` configures TLS origination, you will end up with [double encryption](/docs/ops/common-problems/network-issues/#double-tls). This works, but is often not the desired behavior.

A `VirtualService` bound to the gateway needs care as well to [ensure it is consistent](/docs/ops/common-problems/network-issues/#gateway-mismatch) with the `Gateway` definition.
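As an illustrative sketch of the outbound side, a `DestinationRule` like the following would make a gateway originate standard TLS toward a backend. The rule name and host are hypothetical placeholders, not from the source document:

```yaml
# Hypothetical example: originate standard TLS from the gateway upstream
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: originate-tls
spec:
  host: api.example.com
  trafficPolicy:
    tls:
      mode: SIMPLE   # gateway sends standard (one-way) TLS upstream
```

Pairing this with a `Gateway` server in `PASSTHROUGH` mode would produce the double-encryption situation mentioned above, so check the inbound configuration before adding TLS origination.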
Istio supports proxying any TCP traffic. This includes HTTP, HTTPS, gRPC, as well as raw TCP protocols. In order to provide additional capabilities, such as routing and rich metrics, the protocol must be determined. This can be done automatically or explicitly specified.

Non-TCP based protocols, such as UDP, are not proxied. These protocols will continue to function as normal, without any interception by the Istio proxy, but cannot be used in proxy-only components such as ingress or egress gateways.

## Automatic protocol selection

Istio can automatically detect HTTP and HTTP/2 traffic. If the protocol cannot be automatically determined, traffic will be treated as plain TCP traffic.

{{< tip >}}
Server first protocols, such as MySQL, are incompatible with automatic protocol selection. See [Server first protocols](/docs/ops/deployment/application-requirements#server-first-protocols) for more information.
{{< /tip >}}

## Explicit protocol selection

Protocols can be specified manually in the Service definition. This can be configured in two ways:

- By the name of the port: `name: <protocol>[-<suffix>]`.
- In Kubernetes 1.18+, by the `appProtocol` field: `appProtocol: <protocol>`.

If both are defined, `appProtocol` takes precedence over the port name. Note that behavior at the Gateway differs in some cases, as the gateway can terminate TLS and the protocol can be negotiated.

The following protocols are supported:

| Protocol | Sidecar Purpose | Gateway Purpose |
|----------|-----------------|-----------------|
| `http` | Plaintext HTTP/1.1 traffic | Plaintext HTTP (1.1 or 2) traffic |
| `http2` | Plaintext HTTP/2 traffic | Plaintext HTTP (1.1 or 2) traffic |
| `https` | TLS Encrypted data. Because the sidecar does not decrypt TLS traffic, this is the same as `tls` | TLS Encrypted HTTP (1.1 or 2) traffic |
| `tcp` | Opaque TCP data stream | Opaque TCP data stream |
| `tls` | TLS Encrypted data | TLS Encrypted data |
| `grpc`, `grpc-web` | Same as `http2` | Same as `http2` |
| `mongo`, `mysql`, `redis` | Experimental application protocol support. To enable them, configure the corresponding [environment variables](/docs/reference/commands/pilot-discovery/#envvars). If not enabled, treated as opaque TCP data stream | Experimental application protocol support. To enable them, configure the corresponding [environment variables](/docs/reference/commands/pilot-discovery/#envvars). If not enabled, treated as opaque TCP data stream |

Below is an example of a Service that defines an `https` port by `appProtocol` and an `http` port by name:

{{< text yaml >}}
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - port: 3306
    name: database
    appProtocol: https
  - port: 80
    name: http-web
{{< /text >}}

## HTTP gateway protocol selection

Unlike sidecars, gateways are by default unable to automatically detect the specific HTTP protocol to use when forwarding requests to backend services. Therefore, unless explicit protocol selection is used to specify HTTP/1.1 (`http`) or HTTP/2 (`http2` or `grpc`), gateways will forward all incoming HTTP requests using HTTP/1.1.

Instead of using explicit protocol selection, you can instruct gateways to forward requests using the same protocol as the incoming request by setting the [`useClientProtocol`](/docs/reference/config/networking/destination-rule/#ConnectionPoolSettings-HTTPSettings) option for a Service. Note, however, that using this option with services that do not support HTTP/2 can be risky, because HTTPS gateways always [advertise](https://en.wikipedia.org/wiki/Application-Layer_Protocol_Negotiation) support for HTTP/1.1 and HTTP/2. So even when a backend service doesn't support HTTP/2, modern clients will think it does and often choose to use it.
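The `name: <protocol>[-<suffix>]` convention above takes the protocol from the segment before the first `-`. A minimal sketch of that parsing (the function name is invented, and this is not Istio's actual parser):

```shell
# Sketch of the port-name convention: protocol is the part before the first "-"
protocol_from_port_name() {
  case $1 in
    *-*) echo "${1%%-*}" ;;
    *)   echo "$1" ;;
  esac
}

protocol_from_port_name http-web  # http
protocol_from_port_name grpc      # grpc
protocol_from_port_name database  # database (not a known protocol, so treated as opaque TCP)
```

In the Service example above, `http-web` therefore selects `http`, while `database` would fall back to opaque TCP if `appProtocol` were not set.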
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/configuration/traffic-management/protocol-selection/index.md
{{< boilerplate alpha >}}
{{< boilerplate gateway-api-support >}}

## Forwarding external client attributes (IP address, certificate info) to destination workloads

Many applications require knowing the client IP address and certificate information of the originating request to behave properly. Notable cases include logging and audit tools that require the client IP to be populated, and security tools, such as Web Application Firewalls (WAF), that need this information to apply rule sets properly. The ability to provide client attributes to services has long been a staple of reverse proxies. To forward these client attributes to destination workloads, proxies use the `X-Forwarded-For` (XFF) and `X-Forwarded-Client-Cert` (XFCC) headers.

Today's networks vary widely in nature, but support for these attributes is a requirement no matter what the network topology is. This information should be preserved and forwarded whether the network uses cloud-based load balancers, on-premise load balancers, gateways that are exposed directly to the internet, gateways that serve many intermediate proxies, or other deployment topologies not specified.

While Istio provides an [ingress gateway](/docs/tasks/traffic-management/ingress/ingress-control/), given the variety of architectures mentioned above, reasonable defaults cannot be shipped that support the proper forwarding of client attributes to the destination workloads. This becomes ever more vital as Istio multicluster deployment models become more common.

For more information on `X-Forwarded-For`, see the IETF's [RFC](https://tools.ietf.org/html/rfc7239).

## Configuring network topologies

Configuration of XFF and XFCC headers can be set globally for all gateway workloads via `MeshConfig`, or per gateway using a pod annotation. For example, to configure globally during install or upgrade when using an `IstioOperator` custom resource:

{{< text syntax=yaml snip_id=none >}}
spec:
  meshConfig:
    defaultConfig:
      gatewayTopology:
        numTrustedProxies: <VALUE>
        forwardClientCertDetails: <ENUM_VALUE>
{{< /text >}}

You can also configure both of these settings by adding the `proxy.istio.io/config` annotation to the Pod spec of your Istio ingress gateway.

{{< text syntax=yaml snip_id=none >}}
...
metadata:
  annotations:
    "proxy.istio.io/config": '{"gatewayTopology" : { "numTrustedProxies": <VALUE>, "forwardClientCertDetails": <ENUM_VALUE> } }'
{{< /text >}}

### Configuring X-Forwarded-For Headers

Applications rely on reverse proxies to forward client attributes in a request, such as the `X-Forwarded-For` header. However, due to the variety of network topologies that Istio can be deployed in, you must set `numTrustedProxies` to the number of trusted proxies deployed in front of the Istio gateway proxy, so that the client address can be extracted correctly. This controls the value populated by the ingress gateway in the `X-Envoy-External-Address` header, which can be reliably used by upstream services to access the client's original IP address.

For example, if you have a cloud-based load balancer and a reverse proxy in front of your Istio gateway, set `numTrustedProxies` to `2`.

{{< idea >}}
Note that all proxies in front of the Istio gateway proxy must parse HTTP traffic and append to the `X-Forwarded-For` header at each hop. If the number of entries in the `X-Forwarded-For` header is less than the number of trusted hops configured, Envoy falls back to using the immediate downstream address as the trusted client address. Please refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#x-forwarded-for) to understand how `X-Forwarded-For` headers and trusted client addresses are determined.
{{< /idea >}}

#### Example using X-Forwarded-For capability with httpbin

1. Run the following command to create a file named `topology.yaml` with `numTrustedProxies` set to `2` and install Istio:

    {{< text syntax=bash snip_id=install_num_trusted_proxies_two >}}
    $ cat <<EOF > topology.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      meshConfig:
        defaultConfig:
          gatewayTopology:
            numTrustedProxies: 2
    EOF
    $ istioctl install -f topology.yaml
    {{< /text >}}

    {{< idea >}}
    If you previously installed an Istio ingress gateway, restart all ingress gateway pods after step 1.
    {{< /idea >}}

1. Create an `httpbin` namespace:

    {{< text syntax=bash snip_id=create_httpbin_namespace >}}
    $ kubectl create namespace httpbin
    namespace/httpbin created
    {{< /text >}}
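The trusted-client selection can be sketched offline. This is a deliberate simplification of Envoy's actual algorithm (which also considers the immediate peer address and falls back when the header is too short, as noted above): with `numTrustedProxies` set to N, take the Nth entry from the right of `X-Forwarded-For`. The addresses are made up for illustration:

```shell
# Simplified sketch, not Envoy's full algorithm:
# with N trusted proxies, take the Nth entry from the right of XFF.
trusted_client() {
  n=$1 xff=$2
  echo "$xff" | tr ',' '\n' | tr -d ' ' | tail -n "$n" | head -n 1
}

# Made-up addresses: client, then two proxy hops appended along the way
trusted_client 2 "203.0.113.7, 198.51.100.9, 192.0.2.10"  # 198.51.100.9
```

With two trusted proxies, the rightmost entry (`192.0.2.10`) is skipped as a trusted hop and `198.51.100.9` is taken; the Envoy documentation linked above describes the authoritative behavior.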
1. Set the `istio-injection` label to `enabled` for sidecar injection:

    {{< text syntax=bash snip\_id=label\_httpbin\_namespace >}}
    $ kubectl label --overwrite namespace httpbin istio-injection=enabled
    namespace/httpbin labeled
    {{< /text >}}

1. Deploy `httpbin` in the `httpbin` namespace:

    {{< text syntax=bash snip\_id=apply\_httpbin >}}
    $ kubectl apply -n httpbin -f @samples/httpbin/httpbin.yaml@
    {{< /text >}}

1. Deploy a gateway associated with `httpbin`:

    {{< tabset category-name="config-api" >}}

    {{< tab name="Istio APIs" category-value="istio-apis" >}}

    {{< text syntax=bash snip\_id=deploy\_httpbin\_gateway >}}
    $ kubectl apply -n httpbin -f @samples/httpbin/httpbin-gateway.yaml@
    {{< /text >}}

    {{< /tab >}}

    {{< tab name="Gateway API" category-value="gateway-api" >}}

    {{< text syntax=bash snip\_id=deploy\_httpbin\_k8s\_gateway >}}
    $ kubectl apply -n httpbin -f @samples/httpbin/gateway-api/httpbin-gateway.yaml@
    $ kubectl wait --for=condition=programmed gtw -n httpbin httpbin-gateway
    {{< /text >}}

    {{< /tab >}}

    {{< /tabset >}}

1. Set a local `GATEWAY\_URL` environment variable based on your Istio ingress gateway's IP address:

    {{< tabset category-name="config-api" >}}

    {{< tab name="Istio APIs" category-value="istio-apis" >}}

    {{< text syntax=bash snip\_id=export\_gateway\_url >}}
    $ export GATEWAY\_URL=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    {{< /text >}}

    {{< /tab >}}

    {{< tab name="Gateway API" category-value="gateway-api" >}}

    {{< text syntax=bash snip\_id=export\_k8s\_gateway\_url >}}
    $ export GATEWAY\_URL=$(kubectl get
gateways.gateway.networking.k8s.io httpbin-gateway -n httpbin -ojsonpath='{.status.addresses[0].value}')
    {{< /text >}}

    {{< /tab >}}

    {{< /tabset >}}

1. Run the following `curl` command to simulate a request with proxy addresses in the `X-Forwarded-For` header:

    {{< text syntax=bash snip\_id=curl\_xff\_headers >}}
    $ curl -s -H 'X-Forwarded-For: 56.5.6.7, 72.9.5.6, 98.1.2.3' "$GATEWAY\_URL/get?show\_env=true" | jq '.headers["X-Forwarded-For"][0]'
    "56.5.6.7, 72.9.5.6, 98.1.2.3,10.244.0.1"
    {{< /text >}}

    {{< tip >}}
    In the above example, `$GATEWAY\_URL` resolved to 10.244.0.1. This will not be the case in your environment.
    {{< /tip >}}

The above output shows the request headers that the `httpbin` workload received. When the Istio gateway received this request, it set the `X-Envoy-External-Address` header to the second-to-last (`numTrustedProxies: 2`) address in the `X-Forwarded-For` header from your curl command. Additionally, the gateway appends its own IP to the `X-Forwarded-For` header before forwarding it to the httpbin workload.

### Configuring X-Forwarded-Client-Cert Headers

From [Envoy's documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http\_conn\_man/headers#x-forwarded-client-cert) regarding XFCC:

{{< quote >}}
x-forwarded-client-cert (XFCC) is a proxy header which indicates certificate information of part or all of the clients or proxies that a request has flowed through, on its way from the client to the server. A proxy may choose to sanitize/append/forward the XFCC header before proxying the request.
{{< /quote >}}

To configure how XFCC headers are handled, set `forwardClientCertDetails` in your `IstioOperator`:

{{< text syntax=yaml snip\_id=none >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      gatewayTopology:
        forwardClientCertDetails: <ENUM\_VALUE>
{{< /text >}}

where `<ENUM\_VALUE>` can be one of the following:
| `ENUM\_VALUE`           | Description |
|-------------------------|-------------|
| `UNDEFINED`             | Field is not set. |
| `SANITIZE`              | Do not send the XFCC header to the next hop. |
| `FORWARD\_ONLY`         | When the client connection is mTLS (Mutual TLS), forward the XFCC header in the request. |
| `APPEND\_FORWARD`       | When the client connection is mTLS, append the client certificate information to the request's XFCC header and forward it. |
| `SANITIZE\_SET`         | When the client connection is mTLS, reset the XFCC header with the client certificate information and send it to the next hop. This is the default value for a gateway. |
| `ALWAYS\_FORWARD\_ONLY` | Always forward the XFCC header in the request, regardless of whether the client connection is mTLS. |

See the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http\_conn\_man/headers#x-forwarded-client-cert) for examples of using this capability.

## PROXY Protocol
The [PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) allows client attributes to be exchanged and preserved between TCP proxies, without relying on L7 protocols such as HTTP and the `X-Forwarded-For` and `X-Envoy-External-Address` headers. It is intended for scenarios where an external TCP load balancer needs to proxy TCP traffic through an Istio gateway to a backend TCP service and still expose client attributes, such as the source IP, to upstream TCP service endpoints. PROXY protocol can be enabled via `EnvoyFilter`.

{{< warning >}}
PROXY protocol is only supported for TCP traffic forwarding by Envoy. See the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/intro/arch\_overview/other\_features/ip\_transparency#proxy-protocol) for more details, along with some important performance caveats. PROXY protocol should not be used for L7 traffic, or for Istio gateways behind L7 load balancers.
{{< /warning >}}

If your external TCP load balancer is configured to forward TCP traffic and use the PROXY protocol, the Istio gateway TCP listener must also be configured to accept the PROXY protocol. To enable PROXY protocol on all TCP listeners on the gateways, set `proxyProtocol` in your `IstioOperator`.
For example:

{{< text syntax=yaml snip\_id=none >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      gatewayTopology:
        proxyProtocol: {}
{{< /text >}}

Alternatively, deploy a gateway with the following pod annotation:

{{< text yaml >}}
metadata:
  annotations:
    "proxy.istio.io/config": '{"gatewayTopology" : { "proxyProtocol": {} }}'
{{< /text >}}

The client IP is retrieved from the PROXY protocol by the gateway and set (or appended) in the `X-Forwarded-For` and `X-Envoy-External-Address` headers. Note that the PROXY protocol is mutually exclusive with L7 headers like `X-Forwarded-For` and `X-Envoy-External-Address`. When the PROXY protocol is used in conjunction with the `gatewayTopology` configuration, `numTrustedProxies` and the received `X-Forwarded-For` header take precedence in determining the trusted client addresses, and the PROXY protocol client information will be ignored. Note that the above example only configures the gateway to accept incoming PROXY protocol TCP traffic. See the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/intro/arch\_overview/other\_features/ip\_transparency#proxy-protocol) for examples of how to configure Envoy itself to communicate with upstream services using the PROXY protocol.
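To make the wire format concrete, here is a minimal sketch of the human-readable PROXY protocol v1 header defined by the HAProxy specification. This is not Istio or Envoy code; the helper names are illustrative:

```python
def build_proxy_v1_header(src_ip: str, dst_ip: str,
                          src_port: int, dst_port: int) -> bytes:
    # PROXY protocol v1: a single CRLF-terminated ASCII line sent
    # before any application bytes on the connection.
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n".encode()

def parse_proxy_v1_header(data: bytes) -> dict:
    # Split the header line off the front of the stream and recover
    # the original client attributes the load balancer saw.
    line, _, rest = data.partition(b"\r\n")
    parts = line.decode().split(" ")
    assert parts[0] == "PROXY" and parts[1] in ("TCP4", "TCP6")
    return {"src_ip": parts[2], "dst_ip": parts[3],
            "src_port": int(parts[4]), "dst_port": int(parts[5]),
            "payload": rest}

hdr = build_proxy_v1_header("203.0.113.7", "10.0.0.5", 54321, 443)
print(parse_proxy_v1_header(hdr + b"hello")["src_ip"])  # 203.0.113.7
```

This is why the protocol survives opaque TCP forwarding: the client attributes travel in-band at the very start of the byte stream, so no HTTP parsing is required at intermediate hops.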
In addition to capturing application traffic, Istio can also capture DNS requests to improve the performance and usability of your mesh. When proxying DNS, all DNS requests from an application are redirected to the sidecar or ztunnel proxy, which stores a local mapping of domain names to IP addresses. If the request can be handled by the proxy, it directly returns a response to the application, avoiding a roundtrip to the upstream DNS server. Otherwise, the request is forwarded upstream following the standard `/etc/resolv.conf` DNS configuration.

While Kubernetes provides DNS resolution for Kubernetes `Service`s out of the box, any custom `ServiceEntry`s will not be recognized. With this feature, `ServiceEntry` addresses can be resolved without requiring custom configuration of a DNS server. For Kubernetes `Service`s, the DNS response will be the same, but with reduced load on `kube-dns` and increased performance. This functionality is also available for services running outside of Kubernetes. This means that all internal services can be resolved without clunky workarounds to expose Kubernetes DNS entries outside of the cluster.

## Getting started

Istio will generally route traffic based on HTTP headers. If routing based on an HTTP header is not possible - in ambient mode, or with TCP traffic in sidecar mode - DNS proxying can be enabled. In ambient mode, the ztunnel only sees traffic at Layer 4 and does not have access to HTTP headers. Therefore, DNS proxying is required to enable resolution of `ServiceEntry` addresses, especially in the case of [sending egress traffic to waypoints](https://github.com/istio/istio/wiki/Troubleshooting-Istio-Ambient#scenario-ztunnel-is-not-sending-egress-traffic-to-waypoints).

### Ambient mode

DNS proxying is enabled by default in ambient mode from Istio 1.25 onwards.
For versions prior to 1.25, you can enable DNS capture by setting `values.cni.ambient.dnsCapture=true` and `values.pilot.env.PILOT\_ENABLE\_IP\_AUTOALLOCATE=true` during installation. Individual pods may opt out of global ambient mode DNS capture by applying the `ambient.istio.io/dns-capture=false` annotation.

### Sidecar mode

This feature is not currently enabled by default. To enable it, install Istio with the following settings:

{{< text bash >}}
$ cat <<EOF | istioctl install -y -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable basic DNS proxying
        ISTIO\_META\_DNS\_CAPTURE: "true"
EOF
{{< /text >}}

This can also be enabled on a per-pod basis with the [`proxy.istio.io/config` annotation](/docs/reference/config/annotations/):

{{< text syntax=yaml snip\_id=none >}}
kind: Deployment
metadata:
  name: curl
spec:
...
  template:
    metadata:
      annotations:
        proxy.istio.io/config: |
          proxyMetadata:
            ISTIO\_META\_DNS\_CAPTURE: "true"
...
{{< /text >}}

{{< tip >}}
When deploying to a VM using [`istioctl workload entry configure`](/docs/setup/install/virtual-machine/), basic DNS proxying will be enabled by default.
{{< /tip >}}

## DNS capture in action

To try out the DNS capture, first set up a `ServiceEntry` for some external service:

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: external-address
spec:
  hosts:
  - address.internal
  addresses:
  - 198.51.100.1
  ports:
  - number: 80
    name: http
    protocol: HTTP
EOF
{{< /text >}}

Bring up a client application to initiate the DNS request:

{{< text bash >}}
$ kubectl label namespace default istio-injection=enabled --overwrite
$ kubectl apply -f @samples/curl/curl.yaml@
{{< /text >}}

Without the DNS capture, a request to `address.internal` would likely fail to resolve. Once this is enabled, you should instead get a response back based on the configured `address`:

{{< text bash >}}
$ kubectl exec deploy/curl -- curl -sS -v address.internal
\* Trying 198.51.100.1:80...
{{< /text >}}

## Address auto-allocation

In the above example, you had a predefined IP address for the service to which you sent the request. However, it's common to access external services that do not have stable addresses, and instead rely on DNS.
In this case, the DNS proxy will not have enough information to return a response, and will need to forward DNS requests upstream. This is especially problematic with TCP traffic. Unlike HTTP requests, which are routed based on `Host` headers, TCP carries much less information; you can only route on the destination IP and port number. Because you don't have a stable IP for
the backend, you cannot route based on that either, leaving only the port number, which leads to conflicts when multiple `ServiceEntry`s for TCP services share the same port. Refer to [the following section](#external-tcp-services-without-vips) for more details.

To work around these issues, the DNS proxy additionally supports automatically allocating addresses for `ServiceEntry`s that do not explicitly define one. The DNS response will include a distinct, automatically assigned address for each `ServiceEntry`. The proxy is then configured to match requests to this IP address and forward the request to the corresponding `ServiceEntry`. Istio will automatically allocate non-routable VIPs (from the Class E subnet) to such services as long as they do not use a wildcard host. The Istio agent on the sidecar will use the VIPs as responses to the DNS lookup queries from the application. Envoy can now clearly distinguish traffic bound for each external TCP service and forward it to the right target.

{{< warning >}}
Because this feature modifies DNS responses, it may not be compatible with all applications.
{{< /warning >}}

To try this out, configure another `ServiceEntry`:

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: external-auto
spec:
  hosts:
  - auto.internal
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
EOF
{{< /text >}}

Now, send a request:

{{< text bash >}}
$ kubectl exec deploy/curl -- curl -sS -v auto.internal
\* Trying 240.240.0.1:80...
{{< /text >}}

As you can see, the request is sent to an automatically allocated address, `240.240.0.1`. These addresses are picked from the `240.240.0.0/16` reserved IP address range to avoid conflicting with real services.
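A simplified model of auto-allocation from the reserved range can be sketched as follows. Istio's real allocator assigns each `ServiceEntry` a stable VIP; this sequential version is only illustrative, and the class and method names are made up:

```python
import ipaddress

class VipAllocator:
    """Hand out one distinct VIP per hostname from 240.240.0.0/16
    (Class E space), skipping the network address, and return the
    same VIP for repeated lookups of the same hostname."""

    def __init__(self, cidr: str = "240.240.0.0/16"):
        self._hosts = ipaddress.ip_network(cidr).hosts()
        self._assigned: dict[str, str] = {}

    def vip_for(self, hostname: str) -> str:
        if hostname not in self._assigned:
            self._assigned[hostname] = str(next(self._hosts))
        return self._assigned[hostname]

alloc = VipAllocator()
print(alloc.vip_for("auto.internal"))  # 240.240.0.1
print(alloc.vip_for("auto.internal"))  # same VIP on repeat lookups
```

The key property, mirrored from the text above, is that every hostname gets a distinct, stable address out of a range that never collides with real services.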
Users also have the flexibility for more granular configuration by adding the label `networking.istio.io/enable-autoallocate-ip="true/false"` to their `ServiceEntry`. This label configures whether a `ServiceEntry` without any `spec.addresses` set should get an IP address automatically allocated for it.

To try this out, update the existing `ServiceEntry` with the opt-out label:

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: external-auto
  labels:
    networking.istio.io/enable-autoallocate-ip: "false"
spec:
  hosts:
  - auto.internal
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
EOF
{{< /text >}}

Now, send a request and verify that the auto-allocation is no longer happening:

{{< text bash >}}
$ kubectl exec deploy/curl -- curl -sS -v auto.internal
\* Could not resolve host: auto.internal
\* Store negative name resolve for auto.internal:80
\* shutting down connection #0
{{< /text >}}

## External TCP services without VIPs

By default, Istio has a limitation when routing external TCP traffic: it is unable to distinguish between multiple TCP services on the same port. This limitation is particularly apparent when using third-party databases such as AWS Relational Database Service or any database setup with geographical redundancy. Similar, but distinct, external TCP services cannot be handled separately by default. For the sidecar to distinguish traffic between two different TCP services that are outside of the mesh, the services must be on different ports or they need to have globally unique VIPs.

For example, if you have two external database services, `mysql-instance1` and `mysql-instance2`, and you create service entries for both, client sidecars will still have a single listener on `0.0.0.0:{port}` that looks up the IP address of only `mysql-instance1` from public DNS servers and forwards traffic to it. It cannot route traffic to `mysql-instance2` because it has no way of distinguishing whether traffic arriving at `0.0.0.0:{port}` is bound for `mysql-instance1` or `mysql-instance2`. The following example shows how DNS proxying can be used to solve this problem.
A virtual IP address will be assigned to every service entry so that client sidecars can
clearly distinguish traffic bound for each external TCP service.

1. Update the Istio configuration specified in the [Getting Started](#getting-started) section to also configure `discoverySelectors` that restrict the mesh to namespaces with `istio-injection` enabled. This will let us use any other namespaces in the cluster to run TCP services outside of the mesh.

    {{< text bash >}}
    $ cat <<EOF | istioctl install -y -f -
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      meshConfig:
        defaultConfig:
          proxyMetadata:
            ISTIO\_META\_DNS\_CAPTURE: "true"
        discoverySelectors:
        - matchLabels:
            istio-injection: enabled
    EOF
    {{< /text >}}

1. Deploy the first external sample TCP application:

    {{< text bash >}}
    $ kubectl create ns external-1
    $ kubectl -n external-1 apply -f @samples/tcp-echo/tcp-echo.yaml@
    {{< /text >}}

1. Deploy the second external sample TCP application:

    {{< text bash >}}
    $ kubectl create ns external-2
    $ kubectl -n external-2 apply -f @samples/tcp-echo/tcp-echo.yaml@
    {{< /text >}}

1. Configure a `ServiceEntry` to reach each external service:

    {{< text bash >}}
    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1
    kind: ServiceEntry
    metadata:
      name: tcp-echo-external-1
    spec:
      hosts:
      - tcp-echo.external-1.svc.cluster.local
      ports:
      - number: 9000
        name: tcp
        protocol: TCP
      resolution: DNS
    ---
    apiVersion: networking.istio.io/v1
    kind: ServiceEntry
    metadata:
      name: tcp-echo-external-2
    spec:
      hosts:
      - tcp-echo.external-2.svc.cluster.local
      ports:
      - number: 9000
        name: tcp
        protocol: TCP
      resolution: DNS
    EOF
    {{< /text >}}

1. Verify listeners are configured separately for each service at the client side:

    {{< text bash >}}
    $ istioctl pc listener deploy/curl | grep tcp-echo | awk '{printf "ADDRESS=%s, DESTINATION=%s %s\n", $1, $4, $5}'
    ADDRESS=240.240.105.94, DESTINATION=Cluster: outbound|9000||tcp-echo.external-2.svc.cluster.local
    ADDRESS=240.240.69.138, DESTINATION=Cluster: outbound|9000||tcp-echo.external-1.svc.cluster.local
    {{< /text >}}

## Cleanup

{{< text bash >}}
$ kubectl -n external-1 delete -f @samples/tcp-echo/tcp-echo.yaml@
$ kubectl -n external-2 delete -f @samples/tcp-echo/tcp-echo.yaml@
$ kubectl delete -f @samples/curl/curl.yaml@
$ istioctl uninstall --purge -y
$ kubectl delete ns istio-system external-1 external-2
$ kubectl label namespace default istio-injection-
{{< /text >}}
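Stepping back, the effect of the per-`ServiceEntry` VIPs can be illustrated with a toy routing table keyed on destination address and port. The addresses are taken from the listener output above; in your cluster the auto-allocated VIPs will differ, and this is only a sketch of the idea, not Envoy's matching logic:

```python
# With unique VIPs, (dst_ip, dst_port) alone is enough to disambiguate
# two external TCP services that share the same port.
routes = {
    ("240.240.69.138", 9000): "outbound|9000||tcp-echo.external-1.svc.cluster.local",
    ("240.240.105.94", 9000): "outbound|9000||tcp-echo.external-2.svc.cluster.local",
}

def route_tcp(dst_ip: str, dst_port: int) -> str:
    cluster = routes.get((dst_ip, dst_port))
    if cluster is None:
        raise LookupError(f"no listener for {dst_ip}:{dst_port}")
    return cluster

print(route_tcp("240.240.69.138", 9000))
```

Without distinct VIPs, both services would collapse onto the single key `("0.0.0.0", 9000)` and only one of them could ever be reached.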
Within a multicluster mesh, traffic rules specific to the cluster topology may be desirable. This document describes a few ways to manage traffic in a multicluster mesh. Before reading this guide:

1. Read [Deployment Models](/docs/ops/deployment/deployment-models/#multiple-clusters)
1. Make sure your deployed services follow the concept of {{< gloss "namespace sameness" >}}namespace sameness{{< /gloss >}}.

## Keeping traffic in-cluster

In some cases the default cross-cluster load balancing behavior is not desirable. To keep traffic "cluster-local" (i.e. traffic sent from `cluster-a` will only reach destinations in `cluster-a`), mark hostnames or wildcards as `clusterLocal` using [`MeshConfig.serviceSettings`](/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-ServiceSettings-Settings). For example, you can enforce cluster-local traffic for an individual service, all services in a particular namespace, or globally for all services in the mesh, as follows:

{{< tabset category-name="meshconfig" >}}

{{< tab name="per-service" category-value="service" >}}

{{< text yaml >}}
serviceSettings:
  - settings:
      clusterLocal: true
    hosts:
      - "mysvc.myns.svc.cluster.local"
{{< /text >}}

{{< /tab >}}

{{< tab name="per-namespace" category-value="namespace" >}}

{{< text yaml >}}
serviceSettings:
  - settings:
      clusterLocal: true
    hosts:
      - "\*.myns.svc.cluster.local"
{{< /text >}}

{{< /tab >}}

{{< tab name="global" category-value="global" >}}

{{< text yaml >}}
serviceSettings:
  - settings:
      clusterLocal: true
    hosts:
      - "\*"
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

You can also refine service access by setting a global cluster-local rule and adding explicit exceptions, which can be specific hosts or wildcards.
In the following example, all services in the cluster are kept cluster-local, except services in the `myns` namespace:

{{< text yaml >}}
serviceSettings:
  - settings:
      clusterLocal: true
    hosts:
      - "\*"
  - settings:
      clusterLocal: false
    hosts:
      - "\*.myns.svc.cluster.local"
{{< /text >}}

## Partitioning Services {#partitioning-services}

[`DestinationRule.subsets`](/docs/reference/config/networking/destination-rule/#Subset) allows partitioning a service by selecting labels. These labels can be the labels from Kubernetes metadata, or from [built-in labels](/docs/reference/config/labels/). Using one of these built-in labels, `topology.istio.io/cluster`, in the subset selector for a `DestinationRule` allows creating per-cluster subsets.

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: mysvc-per-cluster-dr
spec:
  host: mysvc.myns.svc.cluster.local
  subsets:
  - name: cluster-1
    labels:
      topology.istio.io/cluster: cluster-1
  - name: cluster-2
    labels:
      topology.istio.io/cluster: cluster-2
{{< /text >}}

Using these subsets you can create various routing rules based on the cluster, such as [mirroring](/docs/tasks/traffic-management/mirroring/) or [shifting](/docs/tasks/traffic-management/traffic-shifting/).
This provides another option to create cluster-local traffic rules, by restricting the destination subset in a `VirtualService`:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: mysvc-cluster-local-vs
spec:
  hosts:
  - mysvc.myns.svc.cluster.local
  http:
  - name: "cluster-1-local"
    match:
    - sourceLabels:
        topology.istio.io/cluster: "cluster-1"
    route:
    - destination:
        host: mysvc.myns.svc.cluster.local
        subset: cluster-1
  - name: "cluster-2-local"
    match:
    - sourceLabels:
        topology.istio.io/cluster: "cluster-2"
    route:
    - destination:
        host: mysvc.myns.svc.cluster.local
        subset: cluster-2
{{< /text >}}

Using subset-based routing this way to control cluster-local traffic, as opposed to [`MeshConfig.serviceSettings`](/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-ServiceSettings-Settings), has the downside of mixing service-level policy with topology-level policy. For example, a rule that sends 10% of traffic to `v2` of a service will need twice the number of subsets (e.g., `cluster-1-v2`, `cluster-2-v2`). This approach is best limited to situations where more granular control of cluster-based routing is needed.
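To see why mixing topology and version policy inflates configuration, note that the number of subsets grows with the cross product of clusters and versions. A back-of-the-envelope sketch, with illustrative names:

```python
clusters = ["cluster-1", "cluster-2"]
versions = ["v1", "v2"]

# Once both dimensions are encoded in subsets, you need one subset per
# (cluster, version) pair, instead of one per version.
subsets = [f"{c}-{v}" for c in clusters for v in versions]
print(len(subsets), subsets)  # 4 subsets instead of 2
```

With three clusters and three versions this would already be nine subsets, which is why the document recommends `MeshConfig.serviceSettings` unless per-cluster routing control is genuinely needed.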
Istio interacts with DNS in different ways that can be confusing to understand. This document provides a deep dive into how Istio and DNS work together.

{{< warning >}}
This document describes low level implementation details. For a higher level overview, check out the traffic management [Concepts](/docs/concepts/traffic-management/) or [Tasks](/docs/tasks/traffic-management/) pages.
{{< /warning >}}

## Life of a request

In these examples, we will walk through what happens when an application runs `curl example.com`. While `curl` is used here, the same applies to almost all clients.

When you send a request to a domain, a client will do DNS resolution to resolve that to an IP address. This happens regardless of any Istio settings, as Istio only intercepts networking traffic; it cannot change your application's behavior or decision to send a DNS request. In the example below, `example.com` resolved to `192.0.2.0`.

{{< text bash >}}
$ curl example.com -v
\* Trying 192.0.2.0:80...
{{< /text >}}

Next, the request will be intercepted by Istio. At this point, Istio will see both the hostname (from the `Host: example.com` header) and the destination address (`192.0.2.0:80`). Istio uses this information to determine the intended destination. [Understanding Traffic Routing](/docs/ops/configuration/traffic-management/traffic-routing/) gives a deep dive into how this behavior works.

If the client was unable to resolve the DNS request, the request would terminate before Istio receives it. This means that if a request is sent to a hostname which is known to Istio (for example, by a `ServiceEntry`) but not to the DNS server, the request will fail. Istio [DNS proxying](#dns-proxying) can change this behavior.

Once Istio has identified the intended destination, it must choose which address to send to. Because of Istio's advanced [load balancing capabilities](/docs/concepts/traffic-management/#load-balancing-options), this is often not the original IP address the client sent.
Depending on the service configuration, there are a few different ways Istio does this:

\* Use the original IP address of the client (`192.0.2.0`, in the example above). This is the case for `ServiceEntry` of type `resolution: NONE` (the default) and [headless `Services`](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services).
\* Load balance over a set of static IP addresses. This is the case for `ServiceEntry` of type `resolution: STATIC`, where all `spec.endpoints` will be used, or for standard `Services`, where all `Endpoints` will be used.
\* Periodically resolve an address using DNS, and load balance across all results. This is the case for `ServiceEntry` of type `resolution: DNS`.

Note that in all cases, DNS resolution within the Istio proxy is orthogonal to DNS resolution in a user application. Even when the client does DNS resolution, the proxy may ignore the resolved IP address and use its own, which could be from a static list of IPs or by doing its own DNS resolution (potentially of the same hostname or a different one).

## Proxy DNS resolution

Unlike most clients, which will do DNS requests on demand at the time of requests (and then typically cache the results), the Istio proxy never does synchronous DNS requests. When a `resolution: DNS` type `ServiceEntry` is configured, the proxy will periodically resolve the configured hostnames and use those for all requests. This interval is fixed at 30 seconds and cannot be changed at this time. This happens even if the proxy never sends any requests to these applications.

For meshes with many proxies or many `resolution: DNS` type `ServiceEntries`, especially when low `TTL`s are used, this may cause a high load on DNS servers. In these cases, the following can help reduce the load:

\* Switch to `resolution: NONE` to avoid proxy DNS lookups entirely. This is suitable for many use cases.
\* If you control the domains being
resolved, increase their TTL.
\* If your `ServiceEntry` is only needed by a few workloads, limit its scope with `exportTo` or a [`Sidecar`](/docs/reference/config/networking/sidecar/).

## DNS Proxying

Istio offers a feature to [proxy DNS requests](/docs/ops/configuration/traffic-management/dns-proxy/). This allows Istio to capture DNS requests sent by the client and return a response directly. This can improve DNS latency, reduce load, and allow `ServiceEntries`, which otherwise would not be known to `kube-dns`, to be resolved. Note this proxying only applies to DNS requests sent by user applications; when `resolution: DNS` type `ServiceEntries` are used, the proxy has no impact on the DNS resolution of the Istio proxy.
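Conceptually, the DNS proxying described above reduces to a local-table lookup with upstream fallback. This is an illustrative sketch of the idea, not the sidecar's or ztunnel's actual code; the names are made up:

```python
def resolve(hostname, local_table, upstream_resolve):
    """Answer from the proxy's local mapping when possible; otherwise
    forward the query upstream per /etc/resolv.conf."""
    ip = local_table.get(hostname)
    if ip is not None:
        return ip                      # answered locally, no upstream roundtrip
    return upstream_resolve(hostname)  # standard upstream resolution

# Local table seeded from Kubernetes Services and ServiceEntrys:
table = {"address.internal": "198.51.100.1"}
print(resolve("address.internal", table, lambda h: "0.0.0.0"))  # 198.51.100.1
```

The latency and load benefits come from the first branch: any hostname already in the local table never generates an upstream query at all.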
One of the goals of Istio is to act as a "transparent proxy" which can be dropped into an existing cluster, allowing traffic to continue to flow as before. However, there are powerful ways Istio can manage traffic differently than a typical Kubernetes cluster because of additional features such as request load balancing. To understand what is happening in your mesh, it is important to understand how Istio routes traffic.

{{< warning >}}
This document describes low level implementation details. For a higher level overview, check out the traffic management [Concepts](/docs/concepts/traffic-management/) or [Tasks](/docs/tasks/traffic-management/).
{{< /warning >}}

## Frontends and backends

In traffic routing in Istio, there are two primary phases:

\* The "frontend" refers to how we match the type of traffic we are handling. This is necessary to identify which backend to route traffic to, and which policies to apply. For example, we may read the `Host` header of `http.ns.svc.cluster.local` and identify the request is intended for the `http` Service. More information on how this matching works can be found below.
\* The "backend" refers to where we send traffic once we have matched it. Using the example above, after identifying the request as targeting the `http` Service, we would send it to an endpoint in that Service. However, this selection is not always so simple; Istio allows customization of this logic, through `VirtualService` routing rules.

Standard Kubernetes networking has these same concepts, too, but they are much simpler and generally hidden. When a `Service` is created, there is typically an associated frontend: the automatically created DNS name (such as `http.ns.svc.cluster.local`), and an automatically created IP address to represent the service (the `ClusterIP`). Similarly, a backend is also created, the `Endpoints` or `EndpointSlice`, which represents all of the pods selected by the service.
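The frontend/backend split above can be caricatured as a two-step lookup: match the request to a service, then pick an endpoint. This is purely illustrative (Envoy's real matching is far richer), and all names are made up:

```python
import random

# Frontend: map an identifying attribute (here, the Host header) to a service.
frontends = {"http.ns.svc.cluster.local": "http.ns"}

# Backend: the set of endpoints selected for each service.
backends = {"http.ns": ["10.1.0.4:80", "10.1.0.5:80"]}

def route(host_header: str) -> str:
    service = frontends[host_header]          # which service is this request for?
    return random.choice(backends[service])   # which endpoint should receive it?

print(route("http.ns.svc.cluster.local"))
```

Istio's customizations slot into both steps: protocol-aware matching enriches the frontend lookup, while `VirtualService` rules replace the naive endpoint choice in the backend step.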
## Protocols

Unlike Kubernetes, Istio has the ability to process application level protocols such as HTTP and TLS. This allows for different types of [frontend](#frontends-and-backends) matching than is available in Kubernetes. In general, there are three classes of protocols Istio understands:

* HTTP, which includes HTTP/1.1, HTTP/2, and gRPC. Note that this does not include TLS encrypted traffic (HTTPS).
* TLS, which includes HTTPS.
* Raw TCP bytes.

The [protocol selection](/docs/ops/configuration/traffic-management/protocol-selection/) document describes how Istio decides which protocol is used.

The use of "TCP" can be confusing, as in other contexts it is used to distinguish it from other L4 protocols, such as UDP. When referring to the TCP protocol in Istio, this typically means we are treating it as a raw stream of bytes, and not parsing application level protocols such as TLS or HTTP.

## Traffic Routing

When an Envoy proxy receives a request, it must decide where, if anywhere, to forward it to. By default, this will be to the original service that was requested, unless [customized](/docs/tasks/traffic-management/traffic-shifting/). How this works depends on the protocol used.

### TCP

When processing TCP traffic, Istio has only a small amount of useful information with which to route the connection: the destination IP and port. These attributes are used to determine the intended Service; the proxy is configured to listen on each service `IP:port` pair and forward traffic to the upstream service. For customizations, a TCP `VirtualService` can be configured, which allows [matching on specific IPs and ports](/docs/reference/config/networking/virtual-service/#L4MatchAttributes) and routing the traffic to different upstream services than requested.

### TLS

When processing TLS traffic, Istio has slightly more information available than for raw TCP: we can inspect the [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication) field presented during the TLS handshake.
For standard Services, the same IP:Port matching is used as for raw TCP. However, for services that do not have
a Service IP defined, such as [ExternalName services](#externalname-services), the SNI field will be used for routing. Additionally, custom routing can be configured with a TLS `VirtualService` to [match on SNI](/docs/reference/config/networking/virtual-service/#TLSMatchAttributes) and route requests to custom destinations.

### HTTP

HTTP allows much richer routing than TCP and TLS. With HTTP, you can route individual HTTP requests, rather than just connections. In addition, a [number of rich attributes](/docs/reference/config/networking/virtual-service/#HTTPMatchRequest) are available, such as host, path, headers, query parameters, etc. While TCP and TLS traffic generally behave the same with or without Istio (assuming no configuration has been applied to customize the routing), HTTP has significant differences:

* Istio will load balance individual requests. In general, this is highly desirable, especially in scenarios with long-lived connections such as gRPC and HTTP/2, where connection level load balancing is ineffective.
* Requests are routed based on the port and *`Host` header*, rather than the port and IP. This means the destination IP address is effectively ignored. For example, `curl 8.8.8.8 -H "Host: productpage.default.svc.cluster.local"` would be routed to the `productpage` Service.

## Unmatched traffic

If traffic cannot be matched using one of the methods described above, it is treated as [passthrough traffic](/docs/tasks/traffic-management/egress/egress-control/#envoy-passthrough-to-external-services).
By default, these requests will be forwarded as-is, which ensures that traffic to services that Istio is not aware of (such as external services that do not have a `ServiceEntry` created) continues to function. Note that when these requests are forwarded, mutual TLS will not be used and telemetry collection is limited.

## Service types

Along with standard `ClusterIP` Services, Istio supports the full range of Kubernetes Services, with some caveats.

### `LoadBalancer` and `NodePort` Services

These Services are supersets of `ClusterIP` Services, and are mostly concerned with allowing access from external clients. These service types are supported and behave exactly like standard `ClusterIP` Services.

### Headless Services

A [headless Service](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) is a Service that does not have a `ClusterIP` assigned. Instead, the DNS response will contain the IP addresses of each endpoint (i.e. the Pod IP) that is part of the Service. In general, Istio does not configure listeners for each Pod IP, as it works at the Service level. However, to support headless services, listeners are set up for each `IP:port` pair in the headless service. An exception to this is for protocols declared as HTTP, which will match traffic by the `Host` header.

{{< warning >}}
Without Istio, the `ports` field of a headless service is not strictly required, because requests go directly to pod IPs, which can accept traffic on all ports. However, with Istio the port must be declared in the Service, or it will [not be matched](/docs/ops/configuration/traffic-management/traffic-routing/#unmatched-traffic).
{{< /warning >}}

### ExternalName Services

An [ExternalName Service](https://kubernetes.io/docs/concepts/services-networking/service/#externalname) is essentially just a DNS alias.
To make things more concrete, consider the following example:

{{< text yaml >}}
apiVersion: v1
kind: Service
metadata:
  name: alias
spec:
  type: ExternalName
  externalName: concrete.example.com
{{< /text >}}

Because there is no `ClusterIP` nor pod IPs to match on, for TCP traffic there are no changes at all to traffic matching in Istio. When Istio receives a request, it will see the IP for `concrete.example.com`. If this is a service Istio knows about, it will be routed as described [above](#tcp). If not, it will be handled as [unmatched traffic](#unmatched-traffic).

For HTTP and TLS, which match on hostname, things
are a bit different. If the target service (`concrete.example.com`) is a service Istio knows about, then the alias hostname (`alias.default.svc.cluster.local`) will be added as an _additional_ match to the [TLS](#tls) or [HTTP](#http) matching. If not, there will be no changes, so it will be handled as [unmatched traffic](#unmatched-traffic).

An `ExternalName` service can never be a [backend](#frontends-and-backends) on its own. Instead, it is only ever used as an additional [frontend](#frontends-and-backends) match for existing Services. If one is explicitly used as a backend, such as in a `VirtualService` destination, the same aliasing applies. That is, if `alias.default.svc.cluster.local` is set as the destination, then requests will go to `concrete.example.com`. If that hostname is not known to Istio, the requests will fail; in this case, a `ServiceEntry` for `concrete.example.com` would make this configuration work.

### ServiceEntry

In addition to Kubernetes Services, [Service Entries](/docs/reference/config/networking/service-entry/#ServiceEntry) can be created to extend the set of services known to Istio. This can be useful to ensure that traffic to external services, such as `example.com`, gets the functionality of Istio. A `ServiceEntry` with `addresses` set will perform routing just like a `ClusterIP` Service. However, for Service Entries without any `addresses`, all IPs on the port will be matched. This may prevent [unmatched traffic](#unmatched-traffic) on the same port from being forwarded correctly. As such, it is best to avoid these where possible, or use dedicated ports when needed.
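As a sketch, a minimal `ServiceEntry` that would make the example host `concrete.example.com` known to Istio might look like the following (the resource name, port, and protocol here are illustrative assumptions, not taken from the original example):

```yaml
# Illustrative sketch: registers the external host concrete.example.com with
# Istio, so traffic to it (including via an ExternalName alias) is matched
# rather than treated as unmatched passthrough traffic.
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: concrete-example
spec:
  hosts:
  - concrete.example.com
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
```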
HTTP and TLS do not share this constraint, as routing is done based on the hostname/SNI.

{{< tip >}}
The `addresses` field and the `endpoints` field are often confused. `addresses` refers to the IPs that will be matched against, while `endpoints` refers to the set of IPs we will send traffic to. For example, the Service Entry below would match traffic for `1.1.1.1`, and send the request to `2.2.2.2` and `3.3.3.3` following the configured load balancing policy:

{{< text yaml >}}
addresses: [1.1.1.1]
resolution: STATIC
endpoints:
- address: 2.2.2.2
- address: 3.3.3.3
{{< /text >}}
{{< /tip >}}
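For context, the `addresses`/`endpoints` fragment in the tip above could sit inside a complete manifest along these lines (the `hosts` value, resource name, and port are hypothetical additions for illustration):

```yaml
# Hypothetical complete manifest around the fragment above: connections to
# 1.1.1.1:443 are matched by this entry, and traffic is load balanced across
# the static endpoints 2.2.2.2 and 3.3.3.3.
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: example-static
spec:
  hosts:
  - example.internal   # assumed name; used for HTTP/TLS hostname matching
  addresses: [1.1.1.1]
  ports:
  - number: 443
    name: tcp
    protocol: TCP
  resolution: STATIC
  endpoints:
  - address: 2.2.2.2
  - address: 3.3.3.3
```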
{{< boilerplate experimental >}}

Many users need to manage the types of the certificates used within their environment. For example, some users require the use of Elliptic Curve Cryptography (ECC) while others may need to use a stronger bit length for RSA certificates. Configuring certificates within your environment can be a daunting task for most users.

This document is only intended to be used for in-mesh communication. For managing certificates at your Gateway, see the [Secure Gateways](/docs/tasks/traffic-management/ingress/secure-ingress/) document. For managing the CA used by istiod to generate workload certificates, see the [Plugin CA Certificates](/docs/tasks/security/cert-management/plugin-ca-cert/) document.

## istiod

When Istio is installed without a root CA certificate, istiod will generate a self-signed CA certificate using RSA 2048. To change the self-signed CA certificate's bit length, you will need to modify either the `IstioOperator` manifest provided to `istioctl` or the values file used during the Helm installation of the [istio-discovery]({{< github_tree >}}/manifests/charts/istio-control/istio-discovery) chart.

{{< tip >}}
While there are many environment variables that can be changed for [pilot-discovery](/docs/reference/commands/pilot-discovery/), this document will only outline some of them.
{{< /tip >}}

{{< tabset category-name="certificates" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

{{< text yaml >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    pilot:
      env:
        CITADEL_SELF_SIGNED_CA_RSA_KEY_SIZE: 4096
{{< /text >}}

{{< /tab >}}

{{< tab name="Helm" category-value="helm" >}}

{{< text yaml >}}
pilot:
  env:
    CITADEL_SELF_SIGNED_CA_RSA_KEY_SIZE: 4096
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

## Sidecars

Since sidecars manage their own certificates for in-mesh communication, the sidecars are responsible for managing their private keys and generated Certificate Signing Requests (CSRs).
The sidecar injector needs to be modified to inject the environment variables to be used for this purpose.

{{< tip >}}
While there are many environment variables that can be changed for [pilot-agent](/docs/reference/commands/pilot-agent/), this document will only outline some of them.
{{< /tip >}}

{{< tabset category-name="gateway-install-type" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

{{< text yaml >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        CITADEL_SELF_SIGNED_CA_RSA_KEY_SIZE: 4096
{{< /text >}}

{{< /tab >}}

{{< tab name="Helm" category-value="helm" >}}

{{< text yaml >}}
meshConfig:
  defaultConfig:
    proxyMetadata:
      CITADEL_SELF_SIGNED_CA_RSA_KEY_SIZE: 4096
{{< /text >}}

{{< /tab >}}

{{< tab name="Annotation" category-value="annotation" >}}

{{< text yaml >}}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: curl
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        ...
        proxy.istio.io/config: |
          CITADEL_SELF_SIGNED_CA_RSA_KEY_SIZE: 4096
    spec:
      ...
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

### Signature Algorithm

By default, the sidecars will create RSA certificates. If you want to change to ECC, you need to set `ECC_SIGNATURE_ALGORITHM` to `ECDSA`.

{{< tabset category-name="gateway-install-type" >}}

{{< tab name="IstioOperator" category-value="iop" >}}

{{< text yaml >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        ECC_SIGNATURE_ALGORITHM: "ECDSA"
{{< /text >}}

{{< /tab >}}

{{< tab name="Helm" category-value="helm" >}}

{{< text yaml >}}
meshConfig:
  defaultConfig:
    proxyMetadata:
      ECC_SIGNATURE_ALGORITHM: "ECDSA"
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

Only P256 and P384 are supported via `ECC_CURVE`. If you prefer to retain RSA signature algorithms and want to modify the RSA key size, you can change the value of `WORKLOAD_RSA_KEY_SIZE`.
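Following the same `proxyMetadata` pattern shown above, a sketch of setting the workload RSA key size might look like this (4096 is an illustrative value, not a recommendation from this document):

```yaml
# Illustrative sketch using the proxyMetadata pattern above; 4096 is an
# example value for the workload certificate's RSA key size.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        WORKLOAD_RSA_KEY_SIZE: 4096
```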
Istio components are built with a flexible logging framework which provides a number of features and controls to help operate these components and facilitate diagnostics. You control these logging features by passing command-line options when starting the components.

## Logging scopes

Logging messages output by a component are categorized by *scopes*. A scope represents a set of related log messages which you can control as a whole. Different components have different scopes, depending on the features the component provides. All components have the `default` scope, which is used for non-categorized log messages.

As an example, as of this writing, `istioctl` has 25 scopes, representing different functional areas within the command:

`ads`, `adsc`, `all`, `analysis`, `authn`, `authorization`, `ca`, `cache`, `cli`, `default`, `installer`, `klog`, `mcp`, `model`, `patch`, `processing`, `resource`, `source`, `spiffe`, `tpath`, `translator`, `util`, `validation`, `validationController`, `wle`

Pilot-Agent, Pilot-Discovery, and the Istio Operator have their own scopes, which you can discover by looking at their [reference documentation](/docs/reference/commands/).

Each scope has a unique output level which is one of:

1. none
1. error
1. warn
1. info
1. debug

where `none` produces no output for the scope, and `debug` produces the maximum amount of output. The default level for all scopes is `info`, which is intended to provide the right amount of logging information for operating Istio in normal conditions.

To control the output level, use the `--log_output_level` command-line option. For example:

{{< text bash >}}
$ istioctl analyze --log_output_level klog:none,cli:info
{{< /text >}}

In addition to controlling the output level from the command line, you can also control the output level of a running component by using its [ControlZ](/docs/ops/diagnostic-tools/controlz) interface.
## Controlling output

Log messages are normally sent to a component's standard output stream. The `--log_target` option lets you direct the output to any number of different locations. You give the option a comma-separated list of file system paths, along with the special values `stdout` and `stderr` to indicate the standard output and standard error streams, respectively.

Log messages are normally output in a human-friendly format. The `--log_as_json` option can be used to force the output into JSON, which can be easier for tools to process.

## Log rotation

Istio control plane components can automatically manage log rotation, which makes it simple to break up large logs into smaller log files. The `--log_rotate` option lets you specify the base file name to use for rotation. Derived names will be used for individual log files.

The `--log_rotate_max_age` option lets you specify the maximum number of days before file rotation takes place, while the `--log_rotate_max_size` option lets you specify the maximum size in megabytes before file rotation takes place. Finally, the `--log_rotate_max_backups` option lets you control the maximum number of rotated files to keep; older files are automatically deleted.

## Component debugging

The `--log_caller` and `--log_stacktrace_level` options let you control whether log information includes programmer-level information. This is useful when trying to track down bugs in a component, but is not normally used in day-to-day operation.
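As a command-line sketch, the output and rotation options above might be combined when starting a component; the file names, size, age, and backup limits below are arbitrary example values, not defaults:

```bash
# Illustrative example only: arbitrary file names and limits.
# Write logs to stdout and a file, rotating the file at 100 MB or 7 days,
# keeping at most 5 rotated backups.
pilot-discovery discovery \
  --log_target stdout,./discovery.log \
  --log_rotate ./discovery.log \
  --log_rotate_max_size 100 \
  --log_rotate_max_age 7 \
  --log_rotate_max_backups 5
```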
`istioctl analyze` is a diagnostic tool that can detect potential issues with your Istio configuration. It can run against a live cluster or a set of local configuration files. It can also run against a combination of the two, allowing you to catch problems before you apply changes to a cluster.

## Getting started in under a minute

You can analyze your current live Kubernetes cluster by running:

{{< text syntax=bash snip_id=analyze_all_namespaces >}}
$ istioctl analyze --all-namespaces
{{< /text >}}

And that’s it! It’ll give you any recommendations that apply.

For example, if you forgot to enable Istio injection (a very common issue), you would get the following 'Info' message:

{{< text syntax=plain snip_id=analyze_all_namespace_sample_response >}}
Info [IST0102] (Namespace default) The namespace is not enabled for Istio injection. Run 'kubectl label namespace default istio-injection=enabled' to enable it, or 'kubectl label namespace default istio-injection=disabled' to explicitly mark it as not needing injection.
{{< /text >}}

Fix the issue:

{{< text syntax=bash snip_id=fix_default_namespace >}}
$ kubectl label namespace default istio-injection=enabled
{{< /text >}}

Then try again:

{{< text syntax=bash snip_id=try_with_fixed_namespace >}}
$ istioctl analyze --namespace default
✔ No validation issues found when analyzing namespace: default.
{{< /text >}}

## Analyzing live clusters, local files, or both

Analyze the current live cluster, simulating the effect of applying additional yaml files like `bookinfo-gateway.yaml` and `destination-rule-all.yaml` in the `samples/bookinfo/networking` directory:

{{< text syntax=bash snip_id=analyze_sample_destrule >}}
$ istioctl analyze @samples/bookinfo/networking/bookinfo-gateway.yaml@ @samples/bookinfo/networking/destination-rule-all.yaml@
Error [IST0101] (Gateway default/bookinfo-gateway samples/bookinfo/networking/bookinfo-gateway.yaml:9) Referenced selector not found: "istio=ingressgateway"
Error [IST0101] (VirtualService default/bookinfo samples/bookinfo/networking/bookinfo-gateway.yaml:41) Referenced host not found: "productpage"
Warning [IST0174] (DestinationRule default/details samples/bookinfo/networking/destination-rule-all.yaml:49) The host details defined in the DestinationRule does not match any services in the mesh.
Warning [IST0174] (DestinationRule default/productpage samples/bookinfo/networking/destination-rule-all.yaml:1) The host productpage defined in the DestinationRule does not match any services in the mesh.
Warning [IST0174] (DestinationRule default/ratings samples/bookinfo/networking/destination-rule-all.yaml:29) The host ratings defined in the DestinationRule does not match any services in the mesh.
Warning [IST0174] (DestinationRule default/reviews samples/bookinfo/networking/destination-rule-all.yaml:12) The host reviews defined in the DestinationRule does not match any services in the mesh.
Error: Analyzers found issues when analyzing namespace: default. See https://istio.io/v{{< istio_version >}}/docs/reference/config/analysis for more information about causes and resolutions.
{{< /text >}}

Analyze the entire `networking` folder:

{{< text syntax=bash snip_id=analyze_networking_directory >}}
$ istioctl analyze samples/bookinfo/networking/
{{< /text >}}

Analyze all yaml files in the `networking` folder:

{{< text syntax=bash snip_id=analyze_all_networking_yaml >}}
$ istioctl analyze samples/bookinfo/networking/*.yaml
{{< /text >}}

The above examples are doing analysis on a live cluster. The tool also supports performing analysis of a set of local Kubernetes yaml configuration files, or on a combination of local files and a live cluster. When analyzing a set of local files, the file-set is expected to be fully self-contained. Typically, this is used to analyze the entire set of configuration files that are intended to be deployed to a cluster. To use this feature, simply add the `--use-kube=false` flag.

Analyze all yaml files in the `networking` folder:

{{< text syntax=bash snip_id=analyze_all_networking_yaml_no_kube >}}
$ istioctl analyze --use-kube=false samples/bookinfo/networking/*.yaml
{{< /text >}}

You can run `istioctl analyze --help` to see the full set of options.

## Advanced

### Enabling validation messages for resource status

{{< boilerplate experimental-feature-warning >}}

Starting with v1.5, Istio can be set up to perform configuration analysis alongside the configuration distribution that it is primarily responsible for, via the `istiod.enableAnalysis` flag. This analysis uses the same logic and error messages as when using `istioctl analyze`. Validation messages from the analysis are written to the status subresource of the affected Istio resource.

For example, if you have a misconfigured gateway on your "ratings" virtual service, running `kubectl get virtualservice ratings` would give you something like:
{{< text syntax=yaml snip_id=vs_yaml_with_status >}}
apiVersion: networking.istio.io/v1
kind: VirtualService
...
spec:
  gateways:
  - bogus-gateway
  hosts:
  - ratings
...
status:
  observedGeneration: "1"
  validationMessages:
  - documentationUrl: https://istio.io/v{{< istio_version >}}/docs/reference/config/analysis/ist0101/
    level: ERROR
    type:
      code: IST0101
{{< /text >}}

`enableAnalysis` runs in the background, and will keep the status field of a resource up to date with its current validation status. Note that this isn't a replacement for `istioctl analyze`:

- Not all resources have a custom status field (e.g. Kubernetes `namespace` resources), so messages attached to those resources won't show validation messages.
- `enableAnalysis` only works on Istio versions starting with 1.5, while `istioctl analyze` can be used with older versions.
- While it makes it easy to see what's wrong with a particular resource, it's harder to get a holistic view of validation status in the mesh.

You can enable this feature with:

{{< text syntax=bash snip_id=install_with_custom_config_analysis >}}
$ istioctl install --set values.global.istiod.enableAnalysis=true
{{< /text >}}

### Ignoring specific analyzer messages via CLI

Sometimes you might find it useful to hide or ignore analyzer messages in certain cases. For example, imagine a situation where a message is emitted about a resource you don't have permissions to update:

{{< text syntax=bash snip_id=analyze_k_frod >}}
$ istioctl analyze -k --namespace frod
Info [IST0102] (Namespace frod) The namespace is not enabled for Istio injection.
Run 'kubectl label namespace frod istio-injection=enabled' to enable it, or 'kubectl label namespace frod istio-injection=disabled' to explicitly mark it as not needing injection.
{{< /text >}}

Because you don't have permissions to update the namespace, you cannot resolve the message by annotating the namespace. Instead, you can direct `istioctl analyze` to suppress the above message on the resource:

{{< text syntax=bash snip_id=analyze_suppress0102 >}}
$ istioctl analyze -k --namespace frod --suppress "IST0102=Namespace frod"
✔ No validation issues found when analyzing namespace: frod.
{{< /text >}}

The syntax used for suppression is the same syntax used throughout `istioctl` when referring to resources: `<kind> <name>.<namespace>`, or just `<kind> <name>` for cluster-scoped resources like `Namespace`. If you want to suppress multiple objects, you can either repeat the `--suppress` argument or use wildcards:

{{< text syntax=bash snip_id=analyze_suppress_frod_0107_baz >}}
$ # Suppress code IST0102 on namespace frod and IST0107 on all pods in namespace baz
$ istioctl analyze -k --all-namespaces --suppress "IST0102=Namespace frod" --suppress "IST0107=Pod *.baz"
{{< /text >}}

### Ignoring specific analyzer messages via annotations

You can also ignore specific analyzer messages using an annotation on the resource. For example, to ignore code IST0107 (`MisplacedAnnotation`) on resource `deployment/my-deployment`:

{{< text syntax=bash snip_id=annotate_for_deployment_suppression >}}
$ kubectl annotate deployment my-deployment galley.istio.io/analyze-suppress=IST0107
{{< /text >}}

To ignore multiple codes for a resource, separate each code with a comma:

{{< text syntax=bash snip_id=annotate_for_deployment_suppression_107 >}}
$ kubectl annotate deployment my-deployment galley.istio.io/analyze-suppress=IST0107,IST0002
{{< /text >}}

## Helping us improve this tool

We're continuing to add more analysis capability and we'd love your help in identifying more use cases.
If you've discovered some Istio configuration "gotcha", some tricky situation that caused you problems, open an issue and let us know. We might be able to automatically flag this problem so that others can discover and avoid it in the first place. To do this, [open an issue](https://github.com/istio/istio/issues) describing your scenario. For example:

- Look at all the virtual services
- For each, look at their list of gateways
- If some of
the gateways don’t exist, produce an error

We already have an analyzer for this specific scenario, so this is just an example to illustrate what kind of information you should provide.

## Q&A

- **What Istio release does this tool target?** Like other `istioctl` tools, we generally recommend using a downloaded version that matches the version deployed in your cluster. For the time being, analysis is generally backwards compatible, so that you can, for example, run the {{< istio_version >}} version of `istioctl analyze` against a cluster running an older Istio 1.x version and expect to get useful feedback. Analysis rules that are not meaningful with an older Istio release will be skipped. If you decide to use the latest `istioctl` for analysis purposes on a cluster running an older Istio version, we suggest that you keep it in a separate folder from the version of the binary used to manage your deployed Istio release.

- **What analyzers are supported today?** We're still working on documenting the analyzers. In the meantime, you can see all the analyzers in the [Istio source]({{< github_tree >}}/pkg/config/analysis/analyzers). You can also see what [configuration analysis messages](/docs/reference/config/analysis/) are supported to get an idea of what is currently covered.

- **Can analysis do anything harmful to my cluster?** Analysis never changes configuration state. It is a completely read-only operation that will never alter the state of a cluster.

- **What about analysis that goes beyond configuration?** Today, the analysis is purely based on Kubernetes configuration, but in the future we’d like to expand beyond that.
  For example, we could allow analyzers to also look at logs to generate recommendations.

- **Where can I find out how to fix the errors I'm getting?** The set of [configuration analysis messages](/docs/reference/config/analysis/) contains descriptions of each message along with suggested fixes.
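The `galley.istio.io/analyze-suppress` annotation described earlier can also be declared directly in a manifest rather than applied with `kubectl annotate`. A hypothetical sketch (the Deployment name matches the earlier example; the rest of the Deployment spec is elided):

```yaml
# Hypothetical sketch: the analyze-suppress annotation set declaratively on
# the resource instead of via kubectl annotate. The codes IST0107 and IST0002
# are the ones used in the examples above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  annotations:
    galley.istio.io/analyze-suppress: IST0107,IST0002
```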
Istio provides two very valuable commands to help diagnose traffic management configuration problems, the [`proxy-status`](/docs/reference/commands/istioctl/#istioctl-proxy-status) and [`proxy-config`](/docs/reference/commands/istioctl/#istioctl-proxy-config) commands. The `proxy-status` command allows you to get an overview of your mesh and identify the proxy causing a problem. Then `proxy-config` can be used to inspect Envoy configuration and diagnose the issue.

If you want to try the commands described below, you can either:

* Have a Kubernetes cluster with Istio and Bookinfo installed (as described in the [installation steps](/docs/setup/getting-started/) and [Bookinfo installation steps](/docs/examples/bookinfo/#deploying-the-application)), or
* Use similar commands against your own application running in a Kubernetes cluster.

## Get an overview of your mesh

The `proxy-status` command allows you to get an overview of your mesh. If you suspect one of your sidecars isn't receiving configuration or is out of sync, `proxy-status` will tell you this.
{{< text bash >}}
$ istioctl proxy-status
NAME                                                   CDS        LDS        EDS        RDS          ISTIOD                      VERSION
details-v1-558b8b4b76-qzqsg.default                    SYNCED     SYNCED     SYNCED     SYNCED       istiod-6cf8d4f9cb-wm7x6     1.7.0
istio-ingressgateway-66c994c45c-cmb7x.istio-system     SYNCED     SYNCED     SYNCED     NOT SENT     istiod-6cf8d4f9cb-wm7x6     1.7.0
productpage-v1-6987489c74-nc7tj.default                SYNCED     SYNCED     SYNCED     SYNCED       istiod-6cf8d4f9cb-wm7x6     1.7.0
prometheus-7bdc59c94d-hcp59.istio-system               SYNCED     SYNCED     SYNCED     SYNCED       istiod-6cf8d4f9cb-wm7x6     1.7.0
ratings-v1-7dc98c7588-5m6xj.default                    SYNCED     SYNCED     SYNCED     SYNCED       istiod-6cf8d4f9cb-wm7x6     1.7.0
reviews-v1-7f99cc4496-rtsqn.default                    SYNCED     SYNCED     SYNCED     SYNCED       istiod-6cf8d4f9cb-wm7x6     1.7.0
reviews-v2-7d79d5bd5d-tj6kf.default                    SYNCED     SYNCED     SYNCED     SYNCED       istiod-6cf8d4f9cb-wm7x6     1.7.0
reviews-v3-7dbcdcbc56-t8wrx.default                    SYNCED     SYNCED     SYNCED     SYNCED       istiod-6cf8d4f9cb-wm7x6     1.7.0
{{< /text >}}

If a proxy is missing from this list it means that it is not currently connected to an Istiod instance and so will not receive any configuration.

* `SYNCED` means that Envoy has acknowledged the last configuration {{< gloss >}}Istiod{{< /gloss >}} has sent to it.
* `NOT SENT` means that Istiod hasn't sent anything to Envoy. This usually is because Istiod has nothing to send.
* `STALE` means that Istiod has sent an update to Envoy but has not received an acknowledgement. This usually indicates a networking issue between Envoy and Istiod or a bug with Istio itself.

## Retrieve diffs between Envoy and Istiod

The `proxy-status` command can also be used to retrieve a diff between the configuration Envoy has loaded and the configuration Istiod would send, by providing a proxy ID. This can help you determine exactly what is out of sync and where the issue may lie.
{{< text bash json >}} $ istioctl proxy-status details-v1-6dcc6fbb9d-wsjz4.default --- Istiod Clusters +++ Envoy Clusters @@ -374,36 +374,14 @@ "edsClusterConfig": { "edsConfig": { "ads": { } }, "serviceName": "outbound|443||public-cr0bdc785ce3f14722918080a97e1f26be-alb1.kube-system.svc.cluster.local" - }, - "connectTimeout": "1.000s", - "circuitBreakers": { - "thresholds": [ - { - - } - ] - } - } - }, - { - "cluster": { - "name": "outbound|53||kube-dns.kube-system.svc.cluster.local", - "type": "EDS", - "edsClusterConfig": { - "edsConfig": { - "ads": { - - } - }, - "serviceName": "outbound|53||kube-dns.kube-system.svc.cluster.local" }, "connectTimeout": "1.000s", "circuitBreakers": { "thresholds": [ { } Listeners Match Routes Match (RDS last loaded at Tue, 04 Aug 2020 11:52:54 IST) {{< /text >}} Here you can see that the listeners and routes match but the clusters are out of sync. ## Deep dive into Envoy configuration The `proxy-config` command can be used to see how a given Envoy instance is configured. This can then be used to pinpoint any issues you are unable to detect by just looking through your Istio configuration and custom resources. To get a basic summary of clusters, listeners or routes for a given pod use the command as follows (changing clusters for listeners or routes when required): {{< text bash >}} $ istioctl proxy-config cluster -n istio-system istio-ingressgateway-7d6874b48f-qxhn5 SERVICE FQDN PORT SUBSET DIRECTION TYPE DESTINATION RULE BlackHoleCluster - - - STATIC agent - - - STATIC details.default.svc.cluster.local 9080 - outbound EDS details.default istio-ingressgateway.istio-system.svc.cluster.local 80 - outbound EDS istio-ingressgateway.istio-system.svc.cluster.local 443 - outbound
EDS istio-ingressgateway.istio-system.svc.cluster.local 15021 - outbound EDS istio-ingressgateway.istio-system.svc.cluster.local 15443 - outbound EDS istiod.istio-system.svc.cluster.local 443 - outbound EDS istiod.istio-system.svc.cluster.local 853 - outbound EDS istiod.istio-system.svc.cluster.local 15010 - outbound EDS istiod.istio-system.svc.cluster.local 15012 - outbound EDS istiod.istio-system.svc.cluster.local 15014 - outbound EDS kube-dns.kube-system.svc.cluster.local 53 - outbound EDS kube-dns.kube-system.svc.cluster.local 9153 - outbound EDS kubernetes.default.svc.cluster.local 443 - outbound EDS ... productpage.default.svc.cluster.local 9080 - outbound EDS prometheus\_stats - - - STATIC ratings.default.svc.cluster.local 9080 - outbound EDS reviews.default.svc.cluster.local 9080 - outbound EDS sds-grpc - - - STATIC xds-grpc - - - STRICT\_DNS zipkin - - - STRICT\_DNS {{< /text >}} In order to debug Envoy you need to understand Envoy clusters/listeners/routes/endpoints and how they all interact. We will use the `proxy-config` command with the `-o json` and filtering flags to follow Envoy as it determines where to send a request from the `productpage` pod to the `reviews` pod at `reviews:9080`. 1. If you query the listener summary on a pod you will notice Istio generates the following listeners: \* A listener on `0.0.0.0:15006` that receives all inbound traffic to the pod and a listener on `0.0.0.0:15001` that receives all outbound traffic to the pod, then hands the request over to a virtual listener.
\* A virtual listener per service IP, per each non-HTTP for outbound TCP/HTTPS traffic. \* A virtual listener on the pod IP for each exposed port for inbound traffic. \* A virtual listener on `0.0.0.0` per each HTTP port for outbound HTTP traffic. {{< text bash >}} $ istioctl proxy-config listeners productpage-v1-6c886ff494-7vxhs ADDRESS PORT MATCH DESTINATION 10.96.0.10 53 ALL Cluster: outbound|53||kube-dns.kube-system.svc.cluster.local 0.0.0.0 80 App: HTTP Route: 80 0.0.0.0 80 ALL PassthroughCluster 10.100.93.102 443 ALL Cluster: outbound|443||istiod.istio-system.svc.cluster.local 10.111.121.13 443 ALL Cluster: outbound|443||istio-ingressgateway.istio-system.svc.cluster.local 10.96.0.1 443 ALL Cluster: outbound|443||kubernetes.default.svc.cluster.local 10.100.93.102 853 App: HTTP Route: istiod.istio-system.svc.cluster.local:853 10.100.93.102 853 ALL Cluster: outbound|853||istiod.istio-system.svc.cluster.local 0.0.0.0 9080 App: HTTP Route: 9080 0.0.0.0 9080 ALL PassthroughCluster 0.0.0.0 9090 App: HTTP Route: 9090 0.0.0.0 9090 ALL PassthroughCluster 10.96.0.10 9153 App: HTTP Route: kube-dns.kube-system.svc.cluster.local:9153 10.96.0.10 9153 ALL Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local 0.0.0.0 15001 ALL PassthroughCluster 0.0.0.0 15006 Addr: 10.244.0.22/32:15021 inbound|15021|mgmt-15021|mgmtCluster 0.0.0.0 15006 Addr: 10.244.0.22/32:9080 Inline Route: /\* 0.0.0.0 15006 Trans: tls; App: HTTP TLS; Addr: 0.0.0.0/0 Inline Route: /\* 0.0.0.0 15006 App: HTTP; Addr: 0.0.0.0/0 Inline Route: /\* 0.0.0.0 15006 App: Istio HTTP Plain; Addr: 10.244.0.22/32:9080 Inline Route: /\* 0.0.0.0 15006 Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4 0.0.0.0 15006 Trans: tls; App: TCP TLS; Addr: 0.0.0.0/0 InboundPassthroughClusterIpv4 0.0.0.0 15010 App: HTTP Route: 15010 0.0.0.0 15010 ALL PassthroughCluster 10.100.93.102 15012 ALL Cluster: outbound|15012||istiod.istio-system.svc.cluster.local 0.0.0.0 15014 App: HTTP Route: 15014 0.0.0.0 15014 ALL PassthroughCluster 
0.0.0.0 15021 ALL Inline Route: /healthz/ready\* 10.111.121.13 15021 App: HTTP Route: istio-ingressgateway.istio-system.svc.cluster.local:15021 10.111.121.13 15021 ALL Cluster: outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local 0.0.0.0 15090 ALL Inline Route: /stats/prometheus\* 10.111.121.13 15443 ALL Cluster: outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local {{< /text >}} 1. From the above summary you can see that every sidecar has a listener bound to `0.0.0.0:15006` which is where IP tables routes all inbound pod traffic to and a listener bound to `0.0.0.0:15001` which is where IP tables routes all outbound pod traffic to. The `0.0.0.0:15001` listener hands the request over to the virtual listener that best matches the
original destination of the request, if it can find a matching one. Otherwise, it sends the request to the `PassthroughCluster` which connects to the destination directly. {{< text bash json >}} $ istioctl proxy-config listeners productpage-v1-6c886ff494-7vxhs --port 15001 -o json [ { "name": "virtualOutbound", "address": { "socketAddress": { "address": "0.0.0.0", "portValue": 15001 } }, "filterChains": [ { "filters": [ { "name": "istio.stats", "typedConfig": { "@type": "type.googleapis.com/udpa.type.v1.TypedStruct", "typeUrl": "type.googleapis.com/envoy.extensions.filters.network.wasm.v3.Wasm", "value": { "config": { "configuration": "{\n \"debug\": \"false\",\n \"stat\_prefix\": \"istio\"\n}\n", "root\_id": "stats\_outbound", "vm\_config": { "code": { "local": { "inline\_string": "envoy.wasm.stats" } }, "runtime": "envoy.wasm.runtime.null", "vm\_id": "tcp\_stats\_outbound" } } } } }, { "name": "envoy.tcp\_proxy", "typedConfig": { "@type": "type.googleapis.com/envoy.config.filter.network.tcp\_proxy.v2.TcpProxy", "statPrefix": "PassthroughCluster", "cluster": "PassthroughCluster" } } ], "name": "virtualOutbound-catchall-tcp" } ], "trafficDirection": "OUTBOUND", "hiddenEnvoyDeprecatedUseOriginalDst": true } ] {{< /text >}} 1. Our request is an outbound HTTP request to port `9080` this means it gets handed off to the `0.0.0.0:9080` virtual listener. This listener then looks up the route configuration in its configured RDS. In this case it will be looking up route `9080` in RDS configured by Istiod (via ADS). {{< text bash json >}} $ istioctl proxy-config listeners productpage-v1-6c886ff494-7vxhs -o json --address 0.0.0.0 --port 9080 ... "rds": { "configSource": { "ads": {}, "resourceApiVersion": "V3" }, "routeConfigName": "9080" } ... {{< /text >}} 1. The `9080` route configuration only has a virtual host for each service. Our request is heading to the reviews service so Envoy will select the virtual host to which our request matches a domain. 
Once matched on domain Envoy looks for the first route that matches the request. In this case we don't have any advanced routing so there is only one route that matches on everything. This route tells Envoy to send the request to the `outbound|9080||reviews.default.svc.cluster.local` cluster. {{< text bash json >}} $ istioctl proxy-config routes productpage-v1-6c886ff494-7vxhs --name 9080 -o json [ { "name": "9080", "virtualHosts": [ { "name": "reviews.default.svc.cluster.local:9080", "domains": [ "reviews.default.svc.cluster.local", "reviews", "reviews.default.svc", "reviews.default", "10.98.88.0", ], "routes": [ { "name": "default", "match": { "prefix": "/" }, "route": { "cluster": "outbound|9080||reviews.default.svc.cluster.local", "timeout": "0s", } } ] ... {{< /text >}} 1. This cluster is configured to retrieve the associated endpoints from Istiod (via ADS). So Envoy will then use the `serviceName` field as a key to look up the list of Endpoints and proxy the request to one of them. {{< text bash json >}} $ istioctl proxy-config cluster productpage-v1-6c886ff494-7vxhs --fqdn reviews.default.svc.cluster.local -o json [ { "name": "outbound|9080||reviews.default.svc.cluster.local", "type": "EDS", "edsClusterConfig": { "edsConfig": { "ads": {}, "resourceApiVersion": "V3" }, "serviceName": "outbound|9080||reviews.default.svc.cluster.local" }, "connectTimeout": "10s", "circuitBreakers": { "thresholds": [ { "maxConnections": 4294967295, "maxPendingRequests": 4294967295, "maxRequests": 4294967295, "maxRetries": 4294967295 } ] }, } ] {{< /text >}} 1. To see the endpoints currently available for this cluster use the `proxy-config` endpoints command. 
{{< text bash json >}} $ istioctl proxy-config endpoints productpage-v1-6c886ff494-7vxhs --cluster "outbound|9080||reviews.default.svc.cluster.local" ENDPOINT STATUS OUTLIER CHECK CLUSTER 172.17.0.7:9080 HEALTHY OK outbound|9080||reviews.default.svc.cluster.local 172.17.0.8:9080 HEALTHY OK outbound|9080||reviews.default.svc.cluster.local 172.17.0.9:9080 HEALTHY OK outbound|9080||reviews.default.svc.cluster.local {{< /text >}} ## Inspecting bootstrap configuration So far we have looked at configuration retrieved (mostly) from Istiod, however Envoy requires some bootstrap configuration that includes information like where Istiod can be found. To view this use the following command: {{< text bash json >}} $ istioctl proxy-config bootstrap -n istio-system istio-ingressgateway-7d6874b48f-qxhn5 { "bootstrap": { "node": { "id": "router~172.30.86.14~istio-ingressgateway-7d6874b48f-qxhn5.istio-system~istio-system.svc.cluster.local", "cluster": "istio-ingressgateway", "metadata": { "CLUSTER\_ID": "Kubernetes", "EXCHANGE\_KEYS": "NAME,NAMESPACE,INSTANCE\_IPS,LABELS,OWNER,PLATFORM\_METADATA,WORKLOAD\_NAME,MESH\_ID,SERVICE\_ACCOUNT,CLUSTER\_ID", "INSTANCE\_IPS": "10.244.0.7", "ISTIO\_PROXY\_SHA": "istio-proxy:f98b7e538920abc408fbc91c22a3b32bc854d9dc", "ISTIO\_VERSION": "1.7.0", "LABELS": { "app": "istio-ingressgateway", "chart": "gateways", "heritage": "Tiller", "istio": "ingressgateway", "pod-template-hash": "68bf7d7f94", "release": "istio", "service.istio.io/canonical-name": "istio-ingressgateway", "service.istio.io/canonical-revision": "latest" }, "MESH\_ID": "cluster.local", "NAME": "istio-ingressgateway-68bf7d7f94-sp226", "NAMESPACE": "istio-system", "OWNER": "kubernetes://apis/apps/v1/namespaces/istio-system/deployments/istio-ingressgateway", "ROUTER\_MODE": "sni-dnat", "SDS": "true", "SERVICE\_ACCOUNT": "istio-ingressgateway-service-account", "WORKLOAD\_NAME": "istio-ingressgateway" }, "userAgentBuildVersion": { "version": { "majorNumber": 1, "minorNumber": 15
}, "metadata": { "build.type": "RELEASE", "revision.sha": "f98b7e538920abc408fbc91c22a3b32bc854d9dc", "revision.status": "Clean", "ssl.version": "BoringSSL" } }, }, ... {{< /text >}}

## Verifying connectivity to Istiod

Verifying connectivity to Istiod is a useful troubleshooting step. Every proxy container in the service mesh should be able to communicate with Istiod. This can be accomplished in a few simple steps:

1. Create a `curl` pod:

{{< text bash >}}
$ kubectl create namespace foo
$ kubectl apply -f <(istioctl kube-inject -f samples/curl/curl.yaml) -n foo
{{< /text >}}

1. Test connectivity to Istiod using `curl`. The following example calls the Istiod version endpoint using default Istiod configuration parameters and mutual TLS enabled:

{{< text bash >}}
$ kubectl exec $(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name}) -c curl -n foo -- curl -sS istiod.istio-system:15014/version
{{< /text >}}

You should receive a response listing the version of Istiod.

## What Envoy version is Istio using?
To find out the Envoy version used in deployment, you can `exec` into the container and query the `server_info` endpoint:

{{< text bash >}}
$ kubectl exec -it productpage-v1-6b746f74dc-9stvs -c istio-proxy -n default -- pilot-agent request GET server_info --log_as_json | jq '{version}'
{
  "version": "2d4ec97f3ac7b3256d060e1bb8aa6c415f5cef63/1.17.0/Clean/RELEASE/BoringSSL"
}
{{< /text >}}
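The version string above packs several build fields into one slash-delimited value; the bootstrap metadata shown earlier (`revision.sha`, `revision.status`, `build.type`, `ssl.version`) suggests the layout `<build SHA>/<Envoy version>/<tree status>/<build type>/<TLS library>`. Assuming that layout, a quick shell sketch can pull out individual fields:

```shell
# Version string as returned by the proxy's server_info endpoint.
# Assumed layout: <build SHA>/<Envoy version>/<tree status>/<build type>/<TLS library>
version_string="2d4ec97f3ac7b3256d060e1bb8aa6c415f5cef63/1.17.0/Clean/RELEASE/BoringSSL"

envoy_version=$(printf '%s' "$version_string" | cut -d/ -f2)   # second field
build_type=$(printf '%s' "$version_string" | cut -d/ -f4)      # fourth field
echo "Envoy ${envoy_version} (${build_type})"
```

On a live cluster you would feed in the output of the `pilot-agent request GET server_info` command shown above instead of a literal string.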
`istioctl experimental check-inject` is a diagnostic tool that helps you verify if specific webhooks will perform Istio sidecar injection in your pods. Use this tool to check if the sidecar injection configuration is correctly applied to a live cluster.

## Quick Start

To check why Istio sidecar injection did/didn't (or will/won't) occur for a specific pod, run:

{{< text syntax=bash >}}
$ istioctl experimental check-inject -n <namespace> <pod-name>
{{< /text >}}

For a deployment, run:

{{< text syntax=bash >}}
$ istioctl experimental check-inject -n <namespace> deploy/<deployment-name>
{{< /text >}}

Or, for label pairs:

{{< text syntax=bash >}}
$ istioctl experimental check-inject -n <namespace> -l <label>=<value>
{{< /text >}}

For example, if you have a deployment named `httpbin` in the `hello` namespace and a pod named `httpbin-1234` with the label `app=httpbin`, the following commands are equivalent:

{{< text syntax=bash >}}
$ istioctl experimental check-inject -n hello httpbin-1234
$ istioctl experimental check-inject -n hello deploy/httpbin
$ istioctl experimental check-inject -n hello -l app=httpbin
{{< /text >}}

Example results:

{{< text plain >}}
WEBHOOK                       REVISION  INJECTED  REASON
istio-revision-tag-default    default   ✔         Namespace label istio-injection=enabled matches
istio-sidecar-injector-1-18   1-18      ✘         No matching namespace labels (istio.io/rev=1-18) or pod labels (istio.io/rev=1-18)
{{< /text >}}

If the `INJECTED` field is marked as `✔`, the webhook in that row will perform the injection, along with the reason why. If the `INJECTED` field is marked as `✘`, the webhook in that row will not perform the injection, and the reason is also shown. Possible reasons the webhook won't perform injection or the injection will have errors:

1. **No matching namespace labels or pod labels**: Ensure proper labels are set on the namespace or pod.
1. **No matching namespace labels or pod labels for a specific revision**: Set correct labels to match the desired Istio revision.
1.
**Pod label preventing injection**: Remove the label or set it to the appropriate value.
1. **Namespace label preventing injection**: Change the label to the appropriate value.
1. **Multiple webhooks injecting sidecars**: Ensure only one webhook is enabled for injection, or set appropriate labels on the namespace or pod to target a specific webhook.
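The decision table above ultimately comes down to label matching. The following is a minimal sketch of the namespace-label half of that decision (the label keys `istio-injection` and `istio.io/rev` are the real injection labels, but this logic is a deliberate simplification of what the webhooks actually evaluate, and the sample label string is illustrative):

```shell
# Illustrative namespace label set, as a comma-free space of key=value pairs.
ns_labels="istio-injection=enabled"

decision="no webhook will inject"
case ",$ns_labels," in
  # Default-revision webhook matches the istio-injection=enabled label.
  *",istio-injection=enabled,"*) decision="default webhook will inject" ;;
  # A revisioned webhook matches an istio.io/rev=<revision> label instead.
  *",istio.io/rev="*)            decision="revisioned webhook will inject" ;;
esac
echo "$decision"
```

Pod labels, revision tags, and label conflicts add further cases, which is exactly why `check-inject` exists: it evaluates the real webhook selectors against the live cluster rather than a simplified rule like this one.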
{{< boilerplate experimental-feature-warning >}} In Istio 1.3, we included the [`istioctl experimental describe`](/docs/reference/commands/istioctl/#istioctl-experimental-describe-pod) command. This CLI command provides you with the information needed to understand the configuration impacting a {{< gloss >}}pod{{< /gloss >}}. This guide shows you how to use this experimental sub-command to see if a pod is in the mesh and verify its configuration. The basic usage of the command is as follows:

{{< text bash >}}
$ istioctl experimental describe pod <pod-name>[.<namespace>]
{{< /text >}}

Appending a namespace to the pod name has the same effect as using the `-n` option of `istioctl` to specify a non-default namespace.

{{< tip >}}
Just like all other `istioctl` commands, you can replace `experimental` with `x` for convenience.
{{< /tip >}}

This guide assumes you have deployed the [Bookinfo](/docs/examples/bookinfo/) sample in your mesh. If you haven't already done so, [start the application's services](/docs/examples/bookinfo/#start-the-application-services) and [determine the IP and port of the ingress](/docs/examples/bookinfo/#determine-the-ingress-ip-and-port) before continuing.

## Verify a pod is in the mesh

The `istioctl describe` command returns a warning if the {{< gloss >}}Envoy{{< /gloss >}} proxy is not present in a pod or if the proxy has not started. Additionally, the command warns if some of the [Istio requirements for pods](/docs/ops/deployment/application-requirements/) are not met.
For example, the following command produces a warning indicating a `kube-dns` pod is not part of the service mesh because it has no sidecar: {{< text bash >}} $ export KUBE\_POD=$(kubectl -n kube-system get pod -l k8s-app=kube-dns -o jsonpath='{.items[0].metadata.name}') $ istioctl x describe pod -n kube-system $KUBE\_POD Pod: coredns-f9fd979d6-2zsxk Pod Ports: 53/UDP (coredns), 53 (coredns), 9153 (coredns) WARNING: coredns-f9fd979d6-2zsxk is not part of mesh; no Istio sidecar -------------------- 2021-01-22T16:10:14.080091Z error klog an error occurred forwarding 42785 -> 15000: error forwarding port 15000 to pod 692362a4fe313005439a873a1019a62f52ecd02c3de9a0957cd0af8f947866e5, uid : failed to execute portforward in network namespace "/var/run/netns/cni-3c000d0a-fb1c-d9df-8af8-1403e6803c22": failed to dial 15000: dial tcp4 127.0.0.1:15000: connect: connection refused[] Error: failed to execute command on sidecar: failure running port forward process: Get "http://localhost:42785/config\_dump": EOF {{< /text >}} The command will not produce such a warning for a pod that is part of the mesh, the Bookinfo `ratings` service for example, but instead will output the Istio configuration applied to the pod: {{< text bash >}} $ export RATINGS\_POD=$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') $ istioctl experimental describe pod $RATINGS\_POD Pod: ratings-v1-7dc98c7588-8jsbw Pod Ports: 9080 (ratings), 15090 (istio-proxy) -------------------- Service: ratings Port: http 9080/HTTP targets pod port 9080 {{< /text >}} The output shows the following information: - The ports of the service container in the pod, `9080` for the `ratings` container in this example. - The ports of the `istio-proxy` container in the pod, `15090` in this example. - The protocol used by the service in the pod, `HTTP` over port `9080` in this example. 
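A quick way to approximate the "is this pod in the mesh?" check without `istioctl` is to look for the `istio-proxy` container in the pod spec. A minimal sketch follows; the container lists are illustrative stand-ins, and on a live cluster you would populate them with `kubectl get pod <pod> -o jsonpath='{.spec.containers[*].name}'`:

```shell
# Illustrative container-name lists for a meshed and an unmeshed pod.
containers_meshed="ratings istio-proxy"
containers_plain="coredns"

in_mesh() {
  # Report whether the istio-proxy sidecar appears among the pod's containers.
  case " $1 " in
    *" istio-proxy "*) echo "part of mesh" ;;
    *)                 echo "no Istio sidecar" ;;
  esac
}

in_mesh "$containers_meshed"
in_mesh "$containers_plain"
```

This only detects the sidecar's presence; `istioctl describe` goes further and also checks that the proxy has started and that the pod meets Istio's requirements.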
## Verify destination rule configurations You can use `istioctl describe` to see what [destination rules](/docs/concepts/traffic-management/#destination-rules) apply to requests to a pod. For example, apply the Bookinfo [mutual TLS destination rules]({{< github\_file >}}/samples/bookinfo/networking/destination-rule-all-mtls.yaml): {{< text bash >}} $ kubectl apply -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@ {{< /text >}} Now describe the `ratings` pod again: {{< text bash >}} $ istioctl x describe pod $RATINGS\_POD Pod: ratings-v1-f745cf57b-qrxl2 Pod Ports: 9080 (ratings), 15090 (istio-proxy) -------------------- Service: ratings Port: http 9080/HTTP DestinationRule: ratings for "ratings" Matching subsets: v1 (Non-matching subsets v2,v2-mysql,v2-mysql-vm) Traffic Policy TLS Mode: ISTIO\_MUTUAL {{< /text >}}
The command now shows additional output:

- The `ratings` destination rule applies to requests to the `ratings` service.
- The subset of the `ratings` destination rule that matches the pod, `v1` in this example.
- The other subsets defined by the destination rule.
- The pod accepts either HTTP or mutual TLS requests but clients use mutual TLS.

## Verify virtual service configurations

When [virtual services](/docs/concepts/traffic-management/#virtual-services) configure routes to a pod, `istioctl describe` will also include the routes in its output. For example, apply the [Bookinfo virtual services]({{< github_file >}}/samples/bookinfo/networking/virtual-service-all-v1.yaml) that route all requests to `v1` pods: {{< text bash >}} $ kubectl apply -f @samples/bookinfo/networking/virtual-service-all-v1.yaml@ {{< /text >}} Then, describe a pod implementing `v1` of the `reviews` service: {{< text bash >}} $ export REVIEWS_V1_POD=$(kubectl get pod -l app=reviews,version=v1 -o jsonpath='{.items[0].metadata.name}') $ istioctl x describe pod $REVIEWS_V1_POD ... VirtualService: reviews 1 HTTP route(s) {{< /text >}} The output contains similar information to that shown previously for the `ratings` pod, but it also includes the virtual service's routes to the pod. The `istioctl describe` command doesn't just show the virtual services impacting the pod. If a virtual service configures the service host of a pod but no traffic will reach it, the command's output includes a warning. This case can occur if the virtual service actually blocks traffic by never routing traffic to the pod's subset. For example: {{< text bash >}} $ export REVIEWS_V2_POD=$(kubectl get pod -l app=reviews,version=v2 -o jsonpath='{.items[0].metadata.name}') $ istioctl x describe pod $REVIEWS_V2_POD ...
VirtualService: reviews WARNING: No destinations match pod subsets (checked 1 HTTP routes) Route to non-matching subset v1 for (everything) {{< /text >}} The warning includes the cause of the problem, how many routes were checked, and even gives you information about the other routes in place. In this example, no traffic arrives at the `v2` pod because the route in the virtual service directs all traffic to the `v1` subset. If you now delete the Bookinfo destination rules: {{< text bash >}} $ kubectl delete -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@ {{< /text >}} You can see another useful feature of `istioctl describe`: {{< text bash >}} $ istioctl x describe pod $REVIEWS\_V1\_POD ... VirtualService: reviews WARNING: No destinations match pod subsets (checked 1 HTTP routes) Warning: Route to subset v1 but NO DESTINATION RULE defining subsets! {{< /text >}} The output shows you that you deleted the destination rule but not the virtual service that depends on it. The virtual service routes traffic to the `v1` subset, but there is no destination rule defining the `v1` subset. Thus, traffic destined for version `v1` can't flow to the pod. If you refresh the browser to send a new request to Bookinfo at this point, you would see the following message: `Error fetching product reviews`. To fix the problem, reapply the destination rule: {{< text bash >}} $ kubectl apply -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@ {{< /text >}} Reloading the browser shows the app working again and running `istioctl experimental describe pod $REVIEWS\_V1\_POD` no longer produces warnings. ## Verifying traffic routes The `istioctl describe` command shows split traffic weights too. 
For example, run the following command to route 90% of traffic to the `v1` subset and 10% to the `v2` subset of the `reviews` service: {{< text bash >}} $ kubectl apply -f @samples/bookinfo/networking/virtual-service-reviews-90-10.yaml@ {{< /text >}} Now describe the `reviews 
v1` pod: {{< text bash >}} $ istioctl x describe pod $REVIEWS_V1_POD ... VirtualService: reviews Weight 90% {{< /text >}} The output shows that the `reviews` virtual service has a weight of 90% for the `v1` subset. This function is also helpful for other types of routing. For example, you can deploy header-specific routing: {{< text bash >}} $ kubectl apply -f @samples/bookinfo/networking/virtual-service-reviews-jason-v2-v3.yaml@ {{< /text >}} Then, describe the pod again: {{< text bash >}} $ istioctl x describe pod $REVIEWS_V1_POD ... VirtualService: reviews WARNING: No destinations match pod subsets (checked 2 HTTP routes) Route to non-matching subset v2 for (when headers are end-user=jason) Route to non-matching subset v3 for (everything) {{< /text >}} The output produces a warning since you are describing a pod in the `v1` subset. However, the virtual service configuration you applied routes traffic to the `v2` subset if the header contains `end-user=jason` and to the `v3` subset in all other cases.

## Verifying strict mutual TLS

Following the [mutual TLS migration](/docs/tasks/security/authentication/mtls-migration/) instructions, you can enable strict mutual TLS for the `ratings` service:

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: ratings-strict
spec:
  selector:
    matchLabels:
      app: ratings
  mtls:
    mode: STRICT
EOF
{{< /text >}}

Run the following command to describe the `ratings` pod:

{{< text bash >}}
$ istioctl x describe pod $RATINGS_POD
Pilot reports that pod enforces mTLS and clients speak mTLS
{{< /text >}}

The output reports that requests to the `ratings` pod are now locked down and secure. Sometimes, however, a deployment breaks when switching mutual TLS to `STRICT`. The likely cause is that the destination rule didn't match the new configuration.
For example, suppose you configure the Bookinfo clients not to use mutual TLS by applying the [plain HTTP destination rules]({{< github_file >}}/samples/bookinfo/networking/destination-rule-all.yaml): {{< text bash >}} $ kubectl apply -f @samples/bookinfo/networking/destination-rule-all.yaml@ {{< /text >}} If you open Bookinfo in your browser, you see `Ratings service is currently unavailable`. To learn why, run the following command: {{< text bash >}} $ istioctl x describe pod $RATINGS_POD ... WARNING Pilot predicts TLS Conflict on ratings-v1-f745cf57b-qrxl2 port 9080 (pod enforces mTLS, clients speak HTTP) Check DestinationRule ratings/default and AuthenticationPolicy ratings-strict/default {{< /text >}} The output includes a warning describing the conflict between the destination rule and the authentication policy. You can restore correct behavior by applying a destination rule that uses mutual TLS: {{< text bash >}} $ kubectl apply -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@ {{< /text >}} ## Conclusion and cleanup Our goal with the `istioctl x describe` command is to help you understand the traffic and security configurations in your Istio mesh. We would love to hear your ideas for improvements! Please join us at [https://discuss.istio.io](https://discuss.istio.io). To remove the Bookinfo pods and configurations used in this guide, run the following commands: {{< text bash >}} $ kubectl delete -f @samples/bookinfo/platform/kube/bookinfo.yaml@ $ kubectl delete -f @samples/bookinfo/networking/bookinfo-gateway.yaml@ $ kubectl delete -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@ $ kubectl delete -f @samples/bookinfo/networking/virtual-service-all-v1.yaml@ {{< /text >}}
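The four cleanup commands follow one pattern, so they can be scripted. The sketch below only prints each command (a dry run), so it executes without a cluster; drop the `echo` to actually delete the resources.

```shell
#!/bin/sh
# Dry-run sketch of the Bookinfo cleanup: print each delete command.
# Remove the `echo` to execute against a live cluster.
FILES="samples/bookinfo/platform/kube/bookinfo.yaml
samples/bookinfo/networking/bookinfo-gateway.yaml
samples/bookinfo/networking/destination-rule-all-mtls.yaml
samples/bookinfo/networking/virtual-service-all-v1.yaml"

COUNT=0
for f in $FILES; do
  echo "kubectl delete -f $f"
  COUNT=$((COUNT + 1))
done
echo "$COUNT files processed"
```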
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/diagnostic-tools/istioctl-describe/index.md
Istiod is built with a flexible introspection framework, called ControlZ, which makes it easy to inspect and manipulate the internal state of an istiod instance. Istiod opens a port which can be used from a web browser to get an interactive view into its state, or via REST for access and control from external tools. When Istiod starts, a message is logged indicating the IP address and port to connect to in order to interact with ControlZ. {{< text plain >}} 2020-08-04T23:28:48.889370Z info ControlZ available at 100.76.122.230:9876 {{< /text >}} Here's a sample of the ControlZ interface: {{< image width="90%" link="./ctrlz.png" caption="ControlZ User Interface" >}} To access the ControlZ page of istiod, you can port-forward its ControlZ endpoint locally and connect through your local browser: {{< text bash >}} $ istioctl dashboard controlz deployment/istiod.istio-system {{< /text >}} This forwards the component's ControlZ page to `http://localhost:9876` for remote access.
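If you prefer plain `kubectl`, you can build the equivalent port-forward yourself. This sketch only prints the command so it runs anywhere; the deployment name and port follow the defaults shown above.

```shell
#!/bin/sh
# Construct the manual equivalent of `istioctl dashboard controlz`.
# Printed rather than executed so the sketch runs without a cluster.
NS="istio-system"
PORT=9876   # ControlZ port from the istiod startup log above
CMD="kubectl -n $NS port-forward deployment/istiod $PORT:$PORT"
echo "$CMD"
echo "then browse to http://localhost:$PORT"
```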
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/diagnostic-tools/controlz/index.md
This page describes how to troubleshoot issues with the Istio CNI plugin. Before reading this, you should read the [CNI installation and operation guide](/docs/setup/additional-setup/cni/). ## Log The Istio CNI plugin log provides information about how the plugin configures application pod traffic redirection based on `PodSpec`. The plugin runs in the container runtime process space, so you can see CNI log entries in the `kubelet` log. To make debugging easier, the CNI plugin also sends its log to the `istio-cni-node` DaemonSet. The default log level for the CNI plugin is `info`. To get more detailed log output, you can change the level by editing the `values.cni.logLevel` installation option and restarting the CNI DaemonSet pod. The Istio CNI DaemonSet pod log also provides information about CNI plugin installation and [race condition repair](/docs/setup/additional-setup/cni/#race-condition--mitigation). ## Monitoring The CNI DaemonSet [generates metrics](/docs/reference/commands/install-cni/#metrics), which can be used to monitor CNI installation, readiness, and race condition mitigation. Prometheus scraping annotations (`prometheus.io/port`, `prometheus.io/path`) are added to the `istio-cni-node` DaemonSet pod by default. You can collect the generated metrics via standard Prometheus configuration. ## DaemonSet readiness Readiness of the CNI DaemonSet indicates that the Istio CNI plugin is properly installed and configured. If the Istio CNI DaemonSet is not ready, something is wrong; look at the `istio-cni-node` DaemonSet logs to diagnose. You can also track CNI installation readiness via the `istio_cni_install_ready` metric. ## Race condition repair By default, the Istio CNI DaemonSet has [race condition mitigation](/docs/setup/additional-setup/cni/#race-condition--mitigation) enabled, which will evict a pod that was started before the CNI plugin was ready.
To understand which pods were evicted, look for log lines like the following: {{< text plain >}} 2021-07-21T08:32:17.362512Z info Deleting broken pod: service-graph00/svc00-0v1-95b5885bf-zhbzm {{< /text >}} You can also track repaired pods via the `istio_cni_repair_pods_repaired_total` metric. ## Diagnose pod start-up failure A common issue with the CNI plugin is that a pod fails to start due to container network set-up failure. Typically the failure reason is written to the pod events, and is visible via the pod description: {{< text bash >}} $ kubectl describe pod POD_NAME -n POD_NAMESPACE {{< /text >}} If a pod keeps hitting an init error, check the init container `istio-validation` log for "connection refused" errors like the following: {{< text bash >}} $ kubectl logs POD_NAME -n POD_NAMESPACE -c istio-validation ... 2021-07-20T05:30:17.111930Z error Error connecting to 127.0.0.6:15002: dial tcp 127.0.0.1:0->127.0.0.6:15002: connect: connection refused 2021-07-20T05:30:18.112503Z error Error connecting to 127.0.0.6:15002: dial tcp 127.0.0.1:0->127.0.0.6:15002: connect: connection refused ... 2021-07-20T05:30:22.111676Z error validation timeout {{< /text >}} The `istio-validation` init container sets up a local dummy server which listens on the inbound and outbound traffic redirection target ports, and checks whether test traffic can be redirected to the dummy server. When pod traffic redirection is not set up correctly by the CNI plugin, the `istio-validation` init container blocks pod startup to prevent traffic bypass. To see if there were any errors or unexpected network setup behaviors, search the `istio-cni-node` logs for the pod ID. Another symptom of a malfunctioning CNI plugin is that the application pod is continuously evicted at start-up time. This is typically because the plugin is not properly installed, so pod traffic redirection cannot be set up.
The CNI [race repair logic](/docs/setup/additional-setup/cni/#race-condition--mitigation) considers the pod broken due to the race condition and evicts it continuously. When running into this issue, check the CNI DaemonSet log for information on why the plugin could not be properly installed.
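The "Deleting broken pod" lines shown earlier can be tallied with standard text tools. A minimal sketch over a captured sample line; in a live cluster you would pipe in the `istio-cni-node` DaemonSet logs instead (the namespace and label selector in the comment are assumptions — check your installation).

```shell
#!/bin/sh
# Extract namespace/pod from CNI race-repair log lines.
# Live input would be something like (namespace/label are assumptions):
#   kubectl -n kube-system logs -l k8s-app=istio-cni-node
LOG='2021-07-21T08:32:17.362512Z info Deleting broken pod: service-graph00/svc00-0v1-95b5885bf-zhbzm'
EVICTED=$(printf '%s\n' "$LOG" | grep 'Deleting broken pod:' | awk '{print $NF}')
echo "evicted: $EVICTED"
```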
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/diagnostic-tools/cni/index.md
You can gain insights into what individual components are doing by inspecting their [logs](/docs/ops/diagnostic-tools/component-logging/) or peering inside via [introspection](/docs/ops/diagnostic-tools/controlz/). If that's insufficient, the steps below explain how to get under the hood. The [`istioctl`](/docs/reference/commands/istioctl) tool is a configuration command line utility that allows service operators to debug and diagnose their Istio service mesh deployments. The Istio project also includes two helpful scripts for `istioctl` that enable auto-completion for Bash and Zsh. Both of these scripts provide support for the currently available `istioctl` commands. {{< tip >}} `istioctl` only has auto-completion enabled for non-deprecated commands. {{< /tip >}} ## Before you begin We recommend you use an `istioctl` version that is the same version as your Istio control plane. Using matching versions helps avoid unforeseen issues. {{< tip >}} If you have already [downloaded the Istio release](/docs/setup/additional-setup/download-istio-release/), you should already have `istioctl` and do not need to install it again. {{< /tip >}} ## Install {{< istioctl >}} Install the `istioctl` binary with `curl`: 1. Download the latest release with the command: {{< text bash >}} $ curl -sL https://istio.io/downloadIstioctl | sh - {{< /text >}} 1. Add the `istioctl` client to your path, on a macOS or Linux system: {{< text bash >}} $ export PATH=$HOME/.istioctl/bin:$PATH {{< /text >}} 1. You can optionally enable the [auto-completion option](#enabling-auto-completion) when working with a Bash or Zsh console. ## Get an overview of your mesh You can get an overview of your mesh using the `proxy-status` or `ps` command: {{< text bash >}} $ istioctl proxy-status {{< /text >}} If a proxy is missing from the output list, it means it is not currently connected to an istiod instance and so will not receive any configuration.
Additionally, if it is marked stale, it likely means there are networking issues or istiod needs to be scaled. ## Get proxy configuration [`istioctl`](/docs/reference/commands/istioctl) allows you to retrieve information about proxy configuration using the `proxy-config` or `pc` command. For example, to retrieve information about cluster configuration for the Envoy instance in a specific pod: {{< text bash >}} $ istioctl proxy-config cluster [flags] {{< /text >}} To retrieve information about bootstrap configuration for the Envoy instance in a specific pod: {{< text bash >}} $ istioctl proxy-config bootstrap [flags] {{< /text >}} To retrieve information about listener configuration for the Envoy instance in a specific pod: {{< text bash >}} $ istioctl proxy-config listener [flags] {{< /text >}} To retrieve information about route configuration for the Envoy instance in a specific pod: {{< text bash >}} $ istioctl proxy-config route [flags] {{< /text >}} To retrieve information about endpoint configuration for the Envoy instance in a specific pod: {{< text bash >}} $ istioctl proxy-config endpoints [flags] {{< /text >}} See [Debugging Envoy and Istiod](/docs/ops/diagnostic-tools/proxy-cmd/) for more advice on interpreting this information. ## `istioctl` auto-completion {{< tabset category-name="prereqs" >}} {{< tab name="macOS" category-value="macos" >}} If you are using the macOS operating system with the Zsh terminal shell, make sure that the `zsh-completions` package is installed. 
With the [brew](https://brew.sh) package manager for macOS, you can check to see if the `zsh-completions` package is installed with the following command: {{< text bash >}} $ brew list zsh-completions /usr/local/Cellar/zsh-completions/0.34.0/share/zsh-completions/ (147 files) {{< /text >}} If you receive `Error: No such keg: /usr/local/Cellar/zsh-completion`, proceed with installing the `zsh-completions` package with the following command: {{< text bash >}} $ brew install zsh-completions {{< /text >}} Once the `zsh-completions` package has been installed on your macOS system, add the following to your `~/.zshrc` file: {{< text plain >}} if type brew &>/dev/null; then FPATH=$(brew --prefix)/share/zsh-completions:$FPATH autoload -Uz compinit compinit fi {{< /text >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/diagnostic-tools/istioctl/index.md
You may also need to force rebuild `zcompdump`: {{< text bash >}} $ rm -f ~/.zcompdump; compinit {{< /text >}} Additionally, if you receive `Zsh compinit: insecure directories` warnings when attempting to load these completions, you may need to run this: {{< text bash >}} $ chmod -R go-w "$(brew --prefix)/share" {{< /text >}} {{< /tab >}} {{< tab name="Linux" category-value="linux" >}} If you are using a Linux-based operating system, you can install the Bash completion package with the `apt-get install bash-completion` command for Debian-based Linux distributions or `yum install bash-completion` for RPM-based Linux distributions, the two most common cases. Once the `bash-completion` package has been installed on your Linux system, add the following line to your `~/.bash_profile` file: {{< text plain >}} [[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh" {{< /text >}} {{< /tab >}} {{< /tabset >}} ### Enabling auto-completion To enable `istioctl` completion on your system, follow the steps for your preferred shell: {{< warning >}} You will need to download the full Istio release containing the auto-completion files (in the `/tools` directory). If you haven't already done so, [download the full release](/docs/setup/additional-setup/download-istio-release/) now. {{< /warning >}} {{< tabset category-name="profile" >}} {{< tab name="Bash" category-value="bash" >}} Installing the Bash auto-completion file: if you are using Bash, the `istioctl` auto-completion file is located in the `tools` directory.
To use it, copy the `istioctl.bash` file to your home directory, then add the following line to source the `istioctl` tab completion file from your `.bashrc` file: {{< text bash >}} $ source ~/istioctl.bash {{< /text >}} {{< /tab >}} {{< tab name="Zsh" category-value="zsh" >}} Installing the Zsh auto-completion file: for Zsh users, the `istioctl` auto-completion file is located in the `tools` directory. Copy the `_istioctl` file to your home directory, or any directory of your choosing (update the directory in the script snippet below), and source the `istioctl` auto-completion file in your `.zshrc` file as follows: {{< text zsh >}} source ~/_istioctl {{< /text >}} You may also add the `_istioctl` file to a directory listed in the `fpath` variable. To achieve this, place the `_istioctl` file in an existing directory in the `fpath`, or create a new directory and add it to the `fpath` variable in your `~/.zshrc` file. {{< tip >}} If you get an error like `complete:13: command not found: compdef`, then add the following to the beginning of your `~/.zshrc` file: {{< text bash >}} $ autoload -Uz compinit $ compinit {{< /text >}} If your auto-completion is not working, try again after restarting your terminal. If auto-completion still does not work, try resetting the completion cache using the above commands in your terminal. {{< /tip >}} {{< /tab >}} {{< /tabset >}} ### Using auto-completion If the `istioctl` completion file has been installed correctly, press the Tab key while writing an `istioctl` command, and it should return a set of command suggestions for you to choose from: {{< text bash >}} $ istioctl proxy- proxy-config proxy-status {{< /text >}}
This page describes how to troubleshoot issues with Istio deployed to multiple clusters and/or networks. Before reading this, you should take the steps in [Multicluster Installation](/docs/setup/install/multicluster/) and read the [Deployment Models](/docs/ops/deployment/deployment-models/) guide. ## Cross-Cluster Load Balancing The most common, but also broadest, problem with multi-network installations is that cross-cluster load balancing doesn't work. Usually this manifests itself as only seeing responses from the cluster-local instance of a Service: {{< text bash >}} $ for i in $(seq 10); do kubectl --context=$CTX_CLUSTER1 -n sample exec curl-dd98b5f48-djwdw -c curl -- curl -s helloworld:5000/hello; done Hello version: v1, instance: helloworld-v1-578dd69f69-j69pf Hello version: v1, instance: helloworld-v1-578dd69f69-j69pf Hello version: v1, instance: helloworld-v1-578dd69f69-j69pf ... {{< /text >}} When following the guide to [verify multicluster installation](/docs/setup/install/multicluster/verify/) we would expect both `v1` and `v2` responses, indicating traffic is going to both clusters. There are many possible causes of the problem: ### Connectivity and firewall issues In some environments it may not be apparent that a firewall is blocking traffic between your clusters. It's possible that `ICMP` (ping) traffic may succeed, but HTTP and other types of traffic do not. This can appear as a timeout, or in some cases a more confusing error such as: {{< text plain >}} upstream connect error or disconnect/reset before headers. reset reason: local reset, transport failure reason: TLS error: 268435612:SSL routines:OPENSSL_internal:HTTP_REQUEST {{< /text >}} While Istio provides service discovery capabilities to make it easier, cross-cluster traffic should still succeed if pods in each cluster are on a single network without Istio. To rule out issues with TLS/mTLS, you can do a manual traffic test using pods without Istio sidecars.
In each cluster, create a new namespace for this test. Do _not_ enable sidecar injection: {{< text bash >}} $ kubectl create --context="${CTX_CLUSTER1}" namespace uninjected-sample $ kubectl create --context="${CTX_CLUSTER2}" namespace uninjected-sample {{< /text >}} Then deploy the same apps used in [verify multicluster installation](/docs/setup/install/multicluster/verify/): {{< text bash >}} $ kubectl apply --context="${CTX_CLUSTER1}" \ -f samples/helloworld/helloworld.yaml \ -l service=helloworld -n uninjected-sample $ kubectl apply --context="${CTX_CLUSTER2}" \ -f samples/helloworld/helloworld.yaml \ -l service=helloworld -n uninjected-sample $ kubectl apply --context="${CTX_CLUSTER1}" \ -f samples/helloworld/helloworld.yaml \ -l version=v1 -n uninjected-sample $ kubectl apply --context="${CTX_CLUSTER2}" \ -f samples/helloworld/helloworld.yaml \ -l version=v2 -n uninjected-sample $ kubectl apply --context="${CTX_CLUSTER1}" \ -f samples/curl/curl.yaml -n uninjected-sample $ kubectl apply --context="${CTX_CLUSTER2}" \ -f samples/curl/curl.yaml -n uninjected-sample {{< /text >}} Verify that there is a helloworld pod running in `cluster2`, using the `-o wide` flag, so we can get the Pod IP: {{< text bash >}} $ kubectl --context="${CTX_CLUSTER2}" -n uninjected-sample get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES curl-557747455f-jdsd8 1/1 Running 0 41s 10.100.0.2 node-2 helloworld-v2-54df5f84b-z28p5 1/1 Running 0 43s 10.100.0.1 node-1 {{< /text >}} Take note of the `IP` column for `helloworld`.
In this case, it is `10.100.0.1`: {{< text bash >}} $ REMOTE_POD_IP=10.100.0.1 {{< /text >}} Next, attempt to send traffic from the `curl` pod in `cluster1` directly to this Pod IP: {{< text bash >}} $ kubectl exec --context="${CTX_CLUSTER1}" -n uninjected-sample -c curl \ "$(kubectl get pod --context="${CTX_CLUSTER1}" -n uninjected-sample -l \ app=curl -o jsonpath='{.items[0].metadata.name}')" \ -- curl -sS $REMOTE_POD_IP:5000/hello Hello version: v2, instance: helloworld-v2-54df5f84b-z28p5 {{< /text >}} If successful, there should be responses only from `helloworld-v2`. Repeat the steps, but send traffic from `cluster2` to `cluster1`. If this succeeds, you can rule out connectivity issues. If it does not, the cause of the problem may lie outside your Istio configuration. ### Locality Load Balancing [Locality load balancing](/docs/tasks/traffic-management/locality-load-balancing/failover/#configure-locality-failover) can be used to make clients prefer that traffic go to the nearest destination. If the clusters are in different localities (region/zone), locality load balancing will prefer the local-cluster and is working as intended.
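To make the curl loop at the top of this section easier to judge, you can count how many distinct versions answered; two distinct versions indicates both clusters are serving. A sketch over captured sample output — in a live test, pipe the loop's output in instead.

```shell
#!/bin/sh
# Count distinct `Hello version: vN` responses; 2 distinct versions
# suggests cross-cluster load balancing is working.
RESPONSES='Hello version: v1, instance: helloworld-v1-578dd69f69-j69pf
Hello version: v2, instance: helloworld-v2-54df5f84b-z28p5
Hello version: v1, instance: helloworld-v1-578dd69f69-j69pf'
DISTINCT=$(printf '%s\n' "$RESPONSES" \
  | sed -n 's/Hello version: \(v[0-9]*\),.*/\1/p' \
  | sort -u | wc -l | tr -d ' ')
echo "$DISTINCT distinct versions responded"
```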
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/diagnostic-tools/multicluster/index.md
If locality load balancing is disabled, or the clusters are in the same locality, there may be another issue. ### Trust Configuration Cross-cluster traffic, as with intra-cluster traffic, relies on a common root of trust between the proxies. By default, each Istio installation uses its own individually generated root certificate authority. For multi-cluster, we must manually configure a shared root of trust. Follow Plug-in Certs below or read [Identity and Trust Models](/docs/ops/deployment/deployment-models/#identity-and-trust-models) to learn more. **Plug-in Certs:** To verify certs are configured correctly, you can compare the root-cert in each cluster: {{< text bash >}} $ diff \ <(kubectl --context="${CTX_CLUSTER1}" -n istio-system get secret cacerts -ojsonpath='{.data.root-cert\.pem}') \ <(kubectl --context="${CTX_CLUSTER2}" -n istio-system get secret cacerts -ojsonpath='{.data.root-cert\.pem}') {{< /text >}} If the root-certs do not match or the secret does not exist at all, you can follow the [Plugin CA Certs](/docs/tasks/security/cert-management/plugin-ca-cert/) guide, ensuring to run the steps for every cluster. ### Step-by-step Diagnosis If you've gone through the sections above and are still having issues, then it's time to dig a little deeper. The following steps assume you're following the [HelloWorld verification](/docs/setup/install/multicluster/verify/). Before continuing, make sure both `helloworld` and `curl` are deployed in each cluster.
From each cluster, find the endpoints the `curl` service has for `helloworld`: {{< text bash >}} $ istioctl --context $CTX_CLUSTER1 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld {{< /text >}} Troubleshooting information differs based on the cluster that is the source of traffic: {{< tabset category-name="source-cluster" >}} {{< tab name="Primary cluster" category-value="primary" >}} {{< text bash >}} $ istioctl --context $CTX_CLUSTER1 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld 10.0.0.11:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local {{< /text >}} Only one endpoint is shown, indicating the control plane cannot read endpoints from the remote cluster. Verify that remote secrets are configured properly. {{< text bash >}} $ kubectl get secrets --context=$CTX_CLUSTER1 -n istio-system -l "istio/multiCluster=true" {{< /text >}}
* If the secret is missing, create it.
* If the secret is present:
    * Look at the config in the secret. Make sure the cluster name is used as the data key for the remote `kubeconfig`.
    * If the secret looks correct, check the logs of `istiod` for connectivity or permissions issues reaching the remote Kubernetes API server. Log messages may include `Failed to add remote cluster from secret` along with an error reason.

{{< /tab >}} {{< tab name="Remote cluster" category-value="remote" >}} {{< text bash >}} $ istioctl --context $CTX_CLUSTER2 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld 10.0.1.11:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local {{< /text >}} Only one endpoint is shown, indicating the control plane cannot read endpoints from the remote cluster. Verify that remote secrets are configured properly. {{< text bash >}} $ kubectl get secrets --context=$CTX_CLUSTER1 -n istio-system -l "istio/multiCluster=true" {{< /text >}}
* If the secret is missing, create it.
* If the secret is present and the endpoint is a Pod in the **primary** cluster:
    * Look at the config in the secret. Make sure the cluster name is used as the data key for the remote `kubeconfig`.
    * If the secret looks correct, check the logs of `istiod` for connectivity or permissions issues reaching the remote Kubernetes API server. Log messages may include `Failed to add remote cluster from secret` along with an error reason.
* If the secret is present and the endpoint is a Pod in the **remote** cluster:
    * The proxy is reading configuration from an istiod inside the remote cluster. When a remote cluster has an in-cluster istiod, it is only meant for sidecar injection and CA. You can verify this is the problem by looking for a Service named `istiod-remote` in the `istio-system` namespace. If it's missing, reinstall, making sure `values.global.remotePilotAddress` is set.

{{< /tab >}} {{< tab name="Multi-Network" category-value="multi-primary" >}} The steps for Primary and Remote clusters still apply for multi-network, although multi-network has an additional case: {{< text bash >}} $ istioctl --context $CTX_CLUSTER1 proxy-config endpoint curl-dd98b5f48-djwdw.sample | grep helloworld 10.0.5.11:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local 10.0.6.13:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local {{< /text >}} In multi-network, we expect one of the endpoint IPs to match the remote cluster's east-west gateway public IP. Seeing multiple Pod IPs indicates one of two things:
* The address of the gateway for the remote network cannot be determined.
* The network of either the client or server pod cannot be determined.
**The address of the gateway for the remote network cannot be determined:** In the remote cluster that cannot be reached, check that the Service has an External IP: {{< text bash >}} $ kubectl -n istio-system get service -l "istio=eastwestgateway" NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE istio-eastwestgateway LoadBalancer 10.8.17.119 15021:31781/TCP,15443:30498/TCP,15012:30879/TCP,15017:30336/TCP 76m {{< /text >}} If the `EXTERNAL-IP` is stuck in `<pending>`, the environment may not support `LoadBalancer` services. In this case, it may be necessary to customize the `spec.externalIPs` section of the Service to manually give the Gateway an IP reachable from outside the cluster. If the external IP is present, check that the Service includes a `topology.istio.io/network` label with the correct value. If that is incorrect, reinstall the gateway and make sure to set the `--network` flag on the generation script. **The network of either the client or server cannot be determined.** On the source pod, check the proxy metadata. {{< text bash >}} $ kubectl get pod $CURL_POD_NAME \ -o jsonpath="{.spec.containers[*].env[?(@.name=='ISTIO_META_NETWORK')].value}" {{< /text >}} {{< text bash >}} $ kubectl get pod $HELLOWORLD_POD_NAME \ -o jsonpath="{.metadata.labels.topology\.istio\.io/network}" {{< /text >}} If either of these values isn't set, or has the wrong value, istiod may treat the client and server proxies as being on the same network and send network-local endpoints. When these aren't set, check that `values.global.network` was set properly during install, or that the injection webhook is configured correctly. Istio determines the network of a Pod using the `topology.istio.io/network` label, which is set during injection. For non-injected Pods, Istio relies on the `topology.istio.io/network` label set on the system namespace in the cluster.
In each cluster, check the network: {{< text bash >}} $ kubectl --context="${CTX_CLUSTER1}" get ns istio-system -ojsonpath='{.metadata.labels.topology\.istio\.io/network}' {{< /text >}} If the above command doesn't output the expected network name, set the label: {{< text bash >}} $ kubectl --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1 {{< /text >}} {{< /tab >}} {{< /tabset >}}
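The two label lookups feed a simple decision, which can be sketched as below; the network values are hard-coded samples standing in for the results of the jsonpath queries above.

```shell
#!/bin/sh
# Compare the network recorded on each cluster's istio-system namespace.
# In a live mesh these values come from the jsonpath queries shown above.
NET1="network1"   # sample value for cluster1
NET2="network2"   # sample value for cluster2
if [ -z "$NET1" ] || [ -z "$NET2" ]; then
  RESULT="missing network label: set it with kubectl label namespace"
elif [ "$NET1" = "$NET2" ]; then
  RESULT="same network: endpoints are sent as pod IPs"
else
  RESULT="different networks: traffic should cross the east-west gateway"
fi
echo "$RESULT"
```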
This page describes how to troubleshoot issues with Istio deployed to Virtual Machines. Before reading this, you should take the steps in [Virtual Machine Installation](/docs/setup/install/virtual-machine/). Additionally, [Virtual Machine Architecture](/docs/ops/deployment/vm-architecture/) can help you understand how the components interact. Troubleshooting an Istio Virtual Machine installation is similar to troubleshooting issues with proxies running inside Kubernetes, but there are some key differences to be aware of. While much of the same information is available on both platforms, accessing this information differs. ## Monitoring health The Istio sidecar is typically run as a `systemd` unit. To ensure it's running properly, you can check its status: {{< text bash >}} $ systemctl status istio {{< /text >}} Additionally, the sidecar health can be programmatically checked at its health endpoint: {{< text bash >}} $ curl localhost:15021/healthz/ready -I {{< /text >}} ## Logs Logs for the Istio proxy can be found in a few places. To access the `systemd` logs, which have details about the initialization of the proxy: {{< text bash >}} $ journalctl -f -u istio -n 1000 {{< /text >}} The proxy will redirect `stderr` and `stdout` to `/var/log/istio/istio.err.log` and `/var/log/istio/istio.log`, respectively. To view these in a format similar to `kubectl`: {{< text bash >}} $ tail /var/log/istio/istio.err.log /var/log/istio/istio.log -Fq -n 100 {{< /text >}} Log levels can be modified by changing the `cluster.env` configuration file. Make sure to restart `istio` if it is already running: {{< text bash >}} $ echo "ISTIO_AGENT_FLAGS=\"--log_output_level=dns:debug --proxyLogLevel=debug\"" >> /var/lib/istio/envoy/cluster.env $ systemctl restart istio {{< /text >}} ## Iptables To ensure `iptables` rules have been successfully applied: {{< text bash >}} $ sudo iptables-save ...
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN -A ISTIO_OUTPUT -j ISTIO_REDIRECT {{< /text >}} ## Istioctl Most `istioctl` commands will function properly with virtual machines. For example, `istioctl proxy-status` can be used to view all connected proxies: {{< text bash >}} $ istioctl proxy-status NAME CDS LDS EDS RDS ISTIOD VERSION vm-1.default SYNCED SYNCED SYNCED SYNCED istiod-789ffff8-f2fkt {{< istio_full_version >}} {{< /text >}} However, `istioctl proxy-config` relies on functionality in Kubernetes to connect to a proxy, which will not work for virtual machines. Instead, a file containing the configuration dump from Envoy can be passed. For example: {{< text bash >}} $ curl -s localhost:15000/config_dump | istioctl proxy-config clusters --file - SERVICE FQDN PORT SUBSET DIRECTION TYPE istiod.istio-system.svc.cluster.local 443 - outbound EDS istiod.istio-system.svc.cluster.local 15010 - outbound EDS istiod.istio-system.svc.cluster.local 15012 - outbound EDS istiod.istio-system.svc.cluster.local 15014 - outbound EDS {{< /text >}} ## Automatic registration When a virtual machine connects to Istiod, a `WorkloadEntry` will automatically be created. This enables the virtual machine to become part of a `Service`, similar to an `Endpoint` in Kubernetes. To check that these are created correctly: {{< text bash >}} $ kubectl get workloadentries NAME AGE ADDRESS vm-10.128.0.50 14m 10.128.0.50 {{< /text >}} ## Certificates Virtual machines handle certificates differently than Kubernetes Pods, which use a Kubernetes-provided service account token to authenticate and renew mTLS certificates. Instead, existing mTLS credentials are used to authenticate with the certificate authority and renew certificates. 
The status of these certificates can be viewed in the same way as in Kubernetes: {{< text bash >}} $ curl -s localhost:15000/config_dump | ./istioctl proxy-config secret --file - RESOURCE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE default Cert Chain ACTIVE true 251932493344649542420616421203546836446 2021-01-29T18:07:21Z 2021-01-28T18:07:21Z ROOTCA CA ACTIVE true 81663936513052336343895977765039160718 2031-01-26T17:54:44Z 2021-01-28T17:54:44Z {{< /text >}} Additionally, these are persisted to disk to ensure downtime or restarts do not lose state. {{< text bash >}} $ ls /etc/certs cert-chain.pem key.pem root-cert.pem {{< /text >}}
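Since the on-disk credentials are standard PEM files, their expiry can also be checked directly with `openssl`. The helper below is a minimal sketch, not part of Istio: `cert_not_after` is a hypothetical function name, and it assumes `openssl` is installed on the VM and that the certificate lives at the `/etc/certs/cert-chain.pem` path shown above.

```shell
# Sketch: print the expiry (notAfter) date of a PEM certificate.
# cert_not_after is a hypothetical helper, not an Istio or openssl command.
cert_not_after() {
  # -noout suppresses the certificate body; -enddate prints "notAfter=<date>"
  openssl x509 -in "$1" -noout -enddate | cut -d= -f2
}

# On the VM:
# cert_not_after /etc/certs/cert-chain.pem
```

Comparing the printed date against the `NOT AFTER` column of `istioctl proxy-config secret` is a quick way to confirm the proxy is serving the certificate that is on disk.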
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/diagnostic-tools/virtual-machines/index.md
master
istio
[ 0.045775990933179855, 0.016316721215844154, 0.01913968287408352, 0.017461948096752167, -0.015469277277588844, -0.0648324266076088, -0.021104846149683, 0.053476687520742416, -0.08313526958227158, 0.028477849438786507, 0.025302574038505554, -0.09929980337619781, -0.09352869540452957, -0.0047...
0.508302
[Kiali](https://kiali.io/) is an observability console for Istio with service mesh configuration and validation capabilities. It helps you understand the structure and health of your service mesh by monitoring traffic flow to infer the topology and report errors. Kiali provides detailed metrics and a basic [Grafana](/docs/ops/integrations/grafana) integration, which can be used for advanced queries. Distributed tracing is provided by integration with [Jaeger](/docs/ops/integrations/jaeger). ## Installation ### Option 1: Quick start Istio provides a basic sample installation to quickly get Kiali up and running: {{< text bash >}} $ kubectl apply -f {{< github_file >}}/samples/addons/kiali.yaml {{< /text >}} This will deploy Kiali into your cluster. This is intended for demonstration only, and is not tuned for performance or security. {{< idea >}} If you use this sample YAML and plan to publicly expose the resulting Kiali installation, be sure to change the `signing_key` in Kiali's ConfigMap when using an authentication strategy other than `anonymous`. {{< /idea >}} ### Option 2: Customizable install The Kiali project offers its own [quick start guide](https://kiali.io/docs/installation/quick-start) and [customizable installation methods](https://kiali.io/docs/installation/installation-guide). We recommend production users follow those instructions to ensure they stay up to date with the latest versions and best practices. ## Usage For more information about using Kiali, see the [Visualizing Your Mesh](/docs/tasks/observability/kiali/) task.
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/integrations/kiali/index.md
master
istio
[ -0.0034232742618769407, -0.010591474361717701, 0.0034348717890679836, 0.00765129504725337, 0.0020529618486762047, -0.051004525274038315, -0.004259465262293816, 0.03806290775537491, -0.006821202579885721, -0.0049166129902005196, -0.02601003460586071, -0.1318599432706833, 0.02204333245754242, ...
0.543403
[Jaeger](https://www.jaegertracing.io/) is an open source, end-to-end distributed tracing system, allowing users to monitor and troubleshoot transactions in complex distributed systems. ## Installation ### Option 1: Quick start Istio provides a basic sample installation to quickly get Jaeger up and running: {{< text bash >}} $ kubectl apply -f {{< github_file >}}/samples/addons/jaeger.yaml {{< /text >}} This will deploy Jaeger into your cluster. This is intended for demonstration only, and is not tuned for performance or security. ### Option 2: Customizable install Consult the [Jaeger documentation](https://www.jaegertracing.io/) to get started. No special changes are needed for Jaeger to work with Istio. ## Usage For information on using Jaeger, please refer to the [Jaeger task](/docs/tasks/observability/distributed-tracing/jaeger/).
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/integrations/jaeger/index.md
master
istio
[ 0.0034375987015664577, -0.022648708894848824, 0.0022099358029663563, -0.013815979473292828, -0.01830120198428631, -0.018579382449388504, -0.0030423211865127087, 0.05285053700208664, -0.0025375396944582462, 0.02471814677119255, 0.007526748348027468, -0.11493240296840668, -0.004532734863460064...
0.436536
[Grafana](https://grafana.com/) is an open source monitoring solution that can be used to configure dashboards for Istio. You can use Grafana to monitor the health of Istio and of applications within the service mesh. ## Configuration While you can build your own dashboards, Istio offers a set of preconfigured dashboards for all of the most important metrics for the mesh and for the control plane. * [Mesh Dashboard](https://grafana.com/grafana/dashboards/7639) provides an overview of all services in the mesh. * [Service Dashboard](https://grafana.com/grafana/dashboards/7636) provides a detailed breakdown of metrics for a service. * [Workload Dashboard](https://grafana.com/grafana/dashboards/7630) provides a detailed breakdown of metrics for a workload. * [Performance Dashboard](https://grafana.com/grafana/dashboards/11829) monitors the resource usage of the mesh. * [Control Plane Dashboard](https://grafana.com/grafana/dashboards/7645) monitors the health and performance of the control plane. * [WASM Extension Dashboard](https://grafana.com/grafana/dashboards/13277) provides an overview of mesh wide WebAssembly extension runtime and loading state. There are a few ways to configure Grafana to use these dashboards: ### Option 1: Quick start Istio provides a basic sample installation to quickly get Grafana up and running, bundled with all of the Istio dashboards already installed: {{< text bash >}} $ kubectl apply -f {{< github_file >}}/samples/addons/grafana.yaml {{< /text >}} This will deploy Grafana into your cluster. This is intended for demonstration only, and is not tuned for performance or security. ### Option 2: Import from `grafana.com` into an existing deployment To quickly import the Istio dashboards to an existing Grafana instance, you can use the [**Import** button in the Grafana UI](https://grafana.com/docs/grafana/latest/reference/export_import/#importing-a-dashboard) to add the dashboard links above. 
When you import the dashboards, note that you must select a Prometheus data source. You can also use a script to import all dashboards at once. For example: {{< text bash >}} $ # Address of Grafana $ GRAFANA_HOST="http://localhost:3000" $ # Login credentials, if authentication is used $ GRAFANA_CRED="USER:PASSWORD" $ # The name of the Prometheus data source to use $ GRAFANA_DATASOURCE="Prometheus" $ # The version of Istio to deploy $ VERSION={{< istio_full_version >}} $ # Import all Istio dashboards $ for DASHBOARD in 7639 11829 7636 7630 7645 13277; do $ REVISION="$(curl -s https://grafana.com/api/dashboards/${DASHBOARD}/revisions -s | jq ".items[] | select(.description | contains(\"${VERSION}\")) | .revision" | tail -n 1)" $ curl -s https://grafana.com/api/dashboards/${DASHBOARD}/revisions/${REVISION}/download > /tmp/dashboard.json $ echo "Importing $(cat /tmp/dashboard.json | jq -r '.title') (revision ${REVISION}, id ${DASHBOARD})..." $ curl -s -k -u "$GRAFANA_CRED" -XPOST \ $ -H "Accept: application/json" \ $ -H "Content-Type: application/json" \ $ -d "{\"dashboard\":$(cat /tmp/dashboard.json),\"overwrite\":true, \ $ \"inputs\":[{\"name\":\"DS_PROMETHEUS\",\"type\":\"datasource\", \ $ \"pluginId\":\"prometheus\",\"value\":\"$GRAFANA_DATASOURCE\"}]}" \ $ $GRAFANA_HOST/api/dashboards/import $ echo -e "\nDone\n" $ done {{< /text >}} {{< tip >}} A new revision of the dashboards is created for each version of Istio. To ensure compatibility, it is recommended that you select the appropriate revision for the Istio version you are deploying. {{< /tip >}} ### Option 3: Implementation-specific methods Grafana can be installed and configured through other methods. To import Istio dashboards, refer to the documentation for the installation method. For example: * [Grafana provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/#dashboards) official documentation. 
* [Importing dashboards](https://github.com/grafana/helm-charts/tree/main/charts/grafana#import-dashboards) for the `stable/grafana` Helm chart.
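The trickiest part of the import script above is selecting the right dashboard revision for a given Istio version. The sketch below isolates that step as a standalone helper so it can be tested without hitting grafana.com; `pick_revision` is a hypothetical name, and the helper assumes `jq` is installed and that the input is a grafana.com revisions JSON document on stdin.

```shell
# Sketch: print the newest revision whose description mentions the given
# version, mirroring the REVISION= line of the import script above.
# pick_revision is a hypothetical helper, not a Grafana or Istio command.
pick_revision() {
  version=$1
  # Filter revisions whose description contains the version, keep the last one.
  jq -r ".items[] | select(.description | contains(\"${version}\")) | .revision" | tail -n 1
}

# Usage (against the live API):
# curl -s https://grafana.com/api/dashboards/7639/revisions | pick_revision "$VERSION"
```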
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/integrations/grafana/index.md
master
istio
[ -0.06744829565286636, -0.005838957615196705, -0.10059723258018494, 0.0017527559539303184, 0.009331168606877327, -0.06378194689750671, 0.02633288875222206, 0.07435186207294464, 0.030214950442314148, 0.056782063096761703, -0.05391712859272957, -0.0677046924829483, -0.033162180334329605, 0.05...
0.484695
[SPIRE](https://spiffe.io/docs/latest/spire-about/spire-concepts/) is a production-ready implementation of the SPIFFE specification that performs node and workload attestation in order to securely issue cryptographic identities to workloads running in heterogeneous environments. SPIRE can be configured as a source of cryptographic identities for Istio workloads through an integration with [Envoy's SDS API](https://www.envoyproxy.io/docs/envoy/latest/configuration/security/secret). Istio can detect the existence of a UNIX Domain Socket that implements the Envoy SDS API on a defined socket path, allowing Envoy to communicate and fetch identities directly from it. This integration with SPIRE provides flexible attestation options not available with the default Istio identity management while harnessing Istio's powerful service management. For example, SPIRE's plugin architecture enables diverse workload attestation options beyond the Kubernetes namespace and service account attestation offered by Istio. SPIRE's node attestation extends attestation to the physical or virtual hardware on which workloads run. For a quick demo of how this SPIRE integration with Istio works, see [Integrating SPIRE as a CA through Envoy's SDS API]({{< github_tree >}}/samples/security/spire). ## Install SPIRE We recommend you follow SPIRE's installation instructions and best practices for installing SPIRE, and for deploying SPIRE in production environments. For the examples in this guide, the [SPIRE Helm charts](https://artifacthub.io/packages/helm/spiffe/spire) will be used with upstream defaults, to focus on just the configuration necessary to integrate SPIRE and Istio. 
{{< text syntax=bash snip_id=install_spire_crds >}} $ helm upgrade --install -n spire-server spire-crds spire-crds --repo https://spiffe.github.io/helm-charts-hardened/ --create-namespace {{< /text >}} {{< text syntax=bash snip_id=install_spire_istio_overrides >}} $ helm upgrade --install -n spire-server spire spire --repo https://spiffe.github.io/helm-charts-hardened/ --wait --set global.spire.trustDomain="example.org" {{< /text >}} {{< tip >}} See the [SPIRE Helm chart](https://artifacthub.io/packages/helm/spiffe/spire) documentation for other values you can configure for your installation. It is important that SPIRE and Istio are configured with the exact same trust domain, to prevent authentication and authorization errors, and that the [SPIFFE CSI driver](https://github.com/spiffe/spiffe-csi) is enabled and installed. {{< /tip >}} By default, the above will also install: - The [SPIFFE CSI driver](https://github.com/spiffe/spiffe-csi), which is used to mount an Envoy-compatible SDS socket into proxies. Using the SPIFFE CSI driver to mount SDS sockets is strongly recommended by both Istio and SPIRE, as `hostMounts` are a larger security risk and introduce operational hurdles. This guide assumes the use of the SPIFFE CSI driver. - The [SPIRE Controller Manager](https://github.com/spiffe/spire-controller-manager), which eases the creation of SPIFFE registrations for workloads. ## Register workloads By design, SPIRE only grants identities to workloads that have been registered with the SPIRE server; this includes user workloads, as well as Istio components. Istio sidecars and gateways, once configured for SPIRE integration, cannot get identities, and therefore cannot reach READY status, unless there is a preexisting, matching SPIRE registration created for them ahead of time. 
See the [SPIRE docs on registering workloads](https://spiffe.io/docs/latest/deploying/registering/) for more information on using multiple selectors to strengthen attestation criteria, and on the selectors available. This section describes the options available for registering Istio workloads in a SPIRE Server and provides some example workload registrations. {{< warning >}} Istio currently requires a specific SPIFFE ID format for workloads. All registrations must follow the Istio SPIFFE ID pattern: `spiffe://<trust-domain>/ns/<namespace>/sa/<service-account>` {{< /warning >}} ### Option 1: Auto-registration using the SPIRE Controller Manager New entries will be automatically registered for each new pod that matches the selector defined in a [ClusterSPIFFEID](https://github.com/spiffe/spire-controller-manager/blob/main/docs/clusterspiffeid-crd.md) custom resource. Both Istio sidecars and Istio gateways need to be registered with SPIRE, so that they can request identities. #### Istio Gateway `ClusterSPIFFEID` The following will create a `ClusterSPIFFEID`, which will auto-register any Istio Ingress gateway pod with SPIRE if it is scheduled into the `istio-system` namespace, and has a service account named `istio-ingressgateway-service-account`. These selectors are used as a simple example; consult the [SPIRE
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/integrations/spire/index.md
master
istio
[ -0.10005246102809906, -0.0036477160174399614, -0.03401774540543556, 0.0491490513086319, -0.013647740706801414, -0.0433877632021904, 0.029468638822436333, 0.00014784441736992449, 0.017016658559441566, 0.010528765618801117, -0.05432427302002907, -0.06822709739208221, -0.002948981476947665, 0...
0.447171
they can request identities. #### Istio Gateway `ClusterSPIFFEID` The following will create a `ClusterSPIFFEID`, which will auto-register any Istio Ingress gateway pod with SPIRE if it is scheduled into the `istio-system` namespace, and has a service account named `istio-ingressgateway-service-account`. These selectors are used as a simple example; consult the [SPIRE Controller Manager documentation](https://github.com/spiffe/spire-controller-manager/blob/main/docs/clusterspiffeid-crd.md) for more details. {{< text syntax=bash snip_id=spire_csid_istio_gateway >}} $ kubectl apply -f - <}} #### Istio Sidecar `ClusterSPIFFEID` The following will create a `ClusterSPIFFEID` which will auto-register any pod with the `spiffe.io/spire-managed-identity: true` label that is deployed into the `default` namespace with SPIRE. These selectors are used as a simple example; consult the [SPIRE Controller Manager documentation](https://github.com/spiffe/spire-controller-manager/blob/main/docs/clusterspiffeid-crd.md) for more details. {{< text syntax=bash snip_id=spire_csid_istio_sidecar >}} $ kubectl apply -f - <}} ### Option 2: Manual Registration If you wish to manually create your SPIRE registrations, rather than use the SPIRE Controller Manager mentioned in [the recommended option](#option-1-auto-registration-using-the-spire-controller-manager), refer to the [SPIRE documentation on manual registration](https://spiffe.io/docs/latest/deploying/registering/). Below are the equivalent manual registrations based on the automatic registrations in [Option 1](#option-1-auto-registration-using-the-spire-controller-manager). The following steps assume you have [already followed the SPIRE documentation to manually register your SPIRE agent and node attestation](https://spiffe.io/docs/latest/deploying/registering/#1-defining-the-spiffe-id-of-the-agent) and that your SPIRE agent was registered with the SPIFFE identity `spiffe://example.org/ns/spire/sa/spire-agent`. 1. 
Get the `spire-server` pod: {{< text syntax=bash snip_id=set_spire_server_pod_name_var >}} $ SPIRE_SERVER_POD=$(kubectl get pod -l statefulset.kubernetes.io/pod-name=spire-server-0 -n spire-server -o jsonpath="{.items[0].metadata.name}") {{< /text >}} 1. Register an entry for the Istio Ingress gateway pod: {{< text bash >}} $ kubectl exec -n spire "$SPIRE_SERVER_POD" -- \ /opt/spire/bin/spire-server entry create \ -spiffeID spiffe://example.org/ns/istio-system/sa/istio-ingressgateway-service-account \ -parentID spiffe://example.org/ns/spire/sa/spire-agent \ -selector k8s:sa:istio-ingressgateway-service-account \ -selector k8s:ns:istio-system \ -socketPath /run/spire/sockets/server.sock Entry ID : 6f2fe370-5261-4361-ac36-10aae8d91ff7 SPIFFE ID : spiffe://example.org/ns/istio-system/sa/istio-ingressgateway-service-account Parent ID : spiffe://example.org/ns/spire/sa/spire-agent Revision : 0 TTL : default Selector : k8s:ns:istio-system Selector : k8s:sa:istio-ingressgateway-service-account {{< /text >}} 1. Register an entry for workloads injected with an Istio sidecar: {{< text bash >}} $ kubectl exec -n spire "$SPIRE_SERVER_POD" -- \ /opt/spire/bin/spire-server entry create \ -spiffeID spiffe://example.org/ns/default/sa/curl \ -parentID spiffe://example.org/ns/spire/sa/spire-agent \ -selector k8s:ns:default \ -selector k8s:pod-label:spiffe.io/spire-managed-identity:true \ -socketPath /run/spire/sockets/server.sock {{< /text >}} ## Install Istio 1. [Download the Istio release](/docs/setup/additional-setup/download-istio-release/). 1. Create the Istio configuration with custom patches for the Ingress Gateway and `istio-proxy`. The Ingress Gateway component includes the `spiffe.io/spire-managed-identity: "true"` label. 
{{< text syntax=bash snip_id=define_istio_operator_for_auto_registration >}} $ cat <<EOF > ./istio.yaml apiVersion: install.istio.io/v1alpha1 kind: IstioOperator metadata: namespace: istio-system spec: profile: default meshConfig: trustDomain: example.org values: # This is used to customize the sidecar template. # It adds both the label to indicate that SPIRE should manage the # identity of this pod, as well as the CSI driver mounts. sidecarInjectorWebhook: templates: spire: | labels: spiffe.io/spire-managed-identity: "true" spec: containers: - name: istio-proxy volumeMounts: - name: workload-socket mountPath: /run/secrets/workload-spiffe-uds readOnly: true volumes: - name: workload-socket csi: driver: "csi.spiffe.io" readOnly: true components: ingressGateways: - name: istio-ingressgateway enabled: true label: istio: ingressgateway k8s: overlays: # This is used to customize the ingress gateway template. # It adds the CSI driver mounts, as well as an init container # to stall gateway startup until the CSI driver mounts the socket. - apiVersion: apps/v1 kind: Deployment name: istio-ingressgateway patches: - path: spec.template.spec.volumes.[name:workload-socket] value: name: workload-socket csi: driver: "csi.spiffe.io" readOnly: true - path: spec.template.spec.containers.[name:istio-proxy].volumeMounts.[name:workload-socket] value: name: workload-socket mountPath: "/run/secrets/workload-spiffe-uds" readOnly: true EOF {{< /text >}} {{< warning >}} If you are using Kubernetes 1.33 **and** have not disabled support for [native sidecars](/blog/2023/native-sidecars/) in the Istio control plane, you must use `initContainers` in the injection template for sidecars. This is required because native sidecar support changes how sidecars are injected. **NOTE:** The SPIRE injection template for gateways should continue to use regular `containers` as before. {{< /warning >}} 1. Apply the
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/integrations/spire/index.md
master
istio
[ -0.058188751339912415, -0.02758978307247162, -0.06405404955148697, 0.06150399148464203, -0.015604124404489994, 0.04653623327612877, 0.08186576515436172, 0.007639849092811346, 0.006109678186476231, 0.0035906340926885605, -0.02281828597187996, -0.11257890611886978, 0.008195973001420498, -0.0...
0.331361
[native sidecars](/blog/2023/native-sidecars/) in the Istio control plane, you must use `initContainers` in the injection template for sidecars. This is required because native sidecar support changes how sidecars are injected. **NOTE:** The SPIRE injection template for gateways should continue to use regular `containers` as before. {{< /warning >}} 1. Apply the configuration: {{< text syntax=bash snip_id=apply_istio_operator_configuration >}} $ istioctl install --skip-confirmation -f ./istio.yaml {{< /text >}} 1. Check Ingress Gateway pod state: {{< text syntax=bash snip_id=none >}} $ kubectl get pods -n istio-system NAME READY STATUS RESTARTS AGE istio-ingressgateway-5b45864fd4-lgrxs 1/1 Running 0 17s istiod-989f54d9c-sg7sn 1/1 Running 0 23s {{< /text >}} The Ingress Gateway pod is `Ready` since the corresponding registration entry is automatically created for it on the SPIRE Server. Envoy is able to fetch cryptographic identities from SPIRE. This configuration also adds an `initContainer` to the gateway that will wait for SPIRE to create the UNIX Domain Socket before starting the `istio-proxy`. If the SPIRE agent is not ready, or has not been properly configured with the same socket path, the Ingress Gateway `initContainer` will wait forever. 1. Deploy an example workload: {{< text syntax=bash snip_id=apply_curl >}} $ istioctl kube-inject --filename @samples/security/spire/curl-spire.yaml@ | kubectl apply -f - {{< /text >}} In addition to needing the `spiffe.io/spire-managed-identity` label, the workload will need the SPIFFE CSI Driver volume to access the SPIRE Agent socket. To accomplish this, you can leverage the `spire` pod annotation template from the [Install Istio](#install-istio) section or add the CSI volume to the deployment spec of your workload. 
Both of these alternatives are highlighted in the example snippet below: {{< text syntax=yaml snip_id=none >}} apiVersion: apps/v1 kind: Deployment metadata: name: curl spec: replicas: 1 selector: matchLabels: app: curl template: metadata: labels: app: curl # Injects custom sidecar template annotations: inject.istio.io/templates: "sidecar,spire" spec: terminationGracePeriodSeconds: 0 serviceAccountName: curl containers: - name: curl image: curlimages/curl command: ["/bin/sleep", "3650d"] imagePullPolicy: IfNotPresent volumeMounts: - name: tmp mountPath: /tmp securityContext: runAsUser: 1000 volumes: - name: tmp emptyDir: {} # CSI volume - name: workload-socket csi: driver: "csi.spiffe.io" readOnly: true {{< /text >}} The Istio configuration shares the `spiffe-csi-driver` with the Ingress Gateway and the sidecars that are going to be injected on workload pods, granting them access to the SPIRE Agent's UNIX Domain Socket. See [Verifying that identities were created for workloads](#verifying-that-identities-were-created-for-workloads) to check issued identities. 
## Verifying that identities were created for workloads Use the following command to confirm that identities were created for the workloads: {{< text syntax=bash snip_id=none >}} $ kubectl exec -t "$SPIRE_SERVER_POD" -n spire-server -c spire-server -- ./bin/spire-server entry show Found 2 entries Entry ID : c8dfccdc-9762-4762-80d3-5434e5388ae7 SPIFFE ID : spiffe://example.org/ns/istio-system/sa/istio-ingressgateway-service-account Parent ID : spiffe://example.org/spire/agent/k8s_psat/demo-cluster/bea19580-ae04-4679-a22e-472e18ca4687 Revision : 0 X509-SVID TTL : default JWT-SVID TTL : default Selector : k8s:pod-uid:88b71387-4641-4d9c-9a89-989c88f7509d Entry ID : af7b53dc-4cc9-40d3-aaeb-08abbddd8e54 SPIFFE ID : spiffe://example.org/ns/default/sa/curl Parent ID : spiffe://example.org/spire/agent/k8s_psat/demo-cluster/bea19580-ae04-4679-a22e-472e18ca4687 Revision : 0 X509-SVID TTL : default JWT-SVID TTL : default Selector : k8s:pod-uid:ee490447-e502-46bd-8532-5a746b0871d6 {{< /text >}} Check the Ingress-gateway pod state: {{< text syntax=bash snip_id=none >}} $ kubectl get pods -n istio-system NAME READY STATUS RESTARTS AGE istio-ingressgateway-5b45864fd4-lgrxs 1/1 Running 0 60s istiod-989f54d9c-sg7sn 1/1 Running 0 45s {{< /text >}} After registering an entry for the Ingress-gateway pod, Envoy receives the identity issued by SPIRE and uses it for all TLS and mTLS communications. ### Check that the workload identity was issued by SPIRE 1. Get pod information: {{< text syntax=bash snip_id=set_curl_pod_var >}} $ CURL_POD=$(kubectl get pod -l app=curl -o jsonpath="{.items[0].metadata.name}") {{< /text >}} 1. Retrieve curl's SVID identity document using the `istioctl proxy-config secret` command: {{< text syntax=bash snip_id=get_curl_svid >}} $ istioctl proxy-config secret "$CURL_POD" -o json | jq -r \ '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | base64 --decode >
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/integrations/spire/index.md
master
istio
[ 0.016335252672433853, 0.01897050067782402, -0.0024465268943458796, 0.020157387480139732, -0.03943152725696564, 0.007722191512584686, 0.0055005671456456184, 0.053849056363105774, -0.027191011235117912, 0.03290495648980141, -0.004968225955963135, -0.124048613011837, -0.06531962752342224, 0.0...
0.469457
information: {{< text syntax=bash snip_id=set_curl_pod_var >}} $ CURL_POD=$(kubectl get pod -l app=curl -o jsonpath="{.items[0].metadata.name}") {{< /text >}} 1. Retrieve curl's SVID identity document using the `istioctl proxy-config secret` command: {{< text syntax=bash snip_id=get_curl_svid >}} $ istioctl proxy-config secret "$CURL_POD" -o json | jq -r \ '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | base64 --decode > chain.pem {{< /text >}} 1. Inspect the certificate and verify that SPIRE was the issuer: {{< text syntax=bash snip_id=get_svid_subject >}} $ openssl x509 -in chain.pem -text | grep SPIRE Subject: C = US, O = SPIRE, CN = curl-5f4d47c948-njvpk {{< /text >}} ## SPIFFE federation SPIRE Servers are able to authenticate SPIFFE identities originating from different trust domains. This is known as SPIFFE federation. SPIRE Agent can be configured to push federated bundles to Envoy through the Envoy SDS API, allowing Envoy to use [validation context](https://spiffe.io/docs/latest/microservices/envoy/#validation-context) to verify peer certificates and trust a workload from another trust domain. To enable Istio to federate SPIFFE identities through SPIRE integration, consult [SPIRE Agent SDS configuration](https://github.com/spiffe/spire/blob/main/doc/spire_agent.md#sds-configuration) and set the following SDS configuration values for your SPIRE Agent configuration file. 
| Configuration | Description | Resource Name |
|---------------|-------------|---------------|
| `default_svid_name` | The TLS Certificate resource name to use for the default `X509-SVID` with Envoy SDS | default |
| `default_bundle_name` | The Validation Context resource name to use for the default X.509 bundle with Envoy SDS | null |
| `default_all_bundles_name` | The Validation Context resource name to use for all bundles (including federated) with Envoy SDS | ROOTCA |

This will allow Envoy to get federated bundles directly from SPIRE. ### Create federated registration entries - If using the SPIRE Controller Manager, create federated entries for workloads by setting the `federatesWith` field of the [ClusterSPIFFEID CR](https://github.com/spiffe/spire-controller-manager/blob/main/docs/clusterspiffeid-crd.md) to the trust domains you want the pod to federate with: {{< text syntax=yaml snip_id=none >}} apiVersion: spire.spiffe.io/v1alpha1 kind: ClusterSPIFFEID metadata: name: federation spec: spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}" podSelector: matchLabels: spiffe.io/spire-managed-identity: "true" federatesWith: ["example.io", "example.ai"] {{< /text >}} - For manual registration see [Create Registration Entries for Federation](https://spiffe.io/docs/latest/architecture/federation/readme/#create-registration-entries-for-federation). ## Cleanup SPIRE Remove SPIRE by uninstalling its Helm charts: {{< text syntax=bash snip_id=uninstall_spire >}} $ helm delete -n spire-server spire {{< /text >}} {{< text syntax=bash snip_id=uninstall_spire_crds >}} $ helm delete -n spire-server spire-crds {{< /text >}}
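Every registration in this guide (the `-spiffeID` flags, the `spiffeIDTemplate`, and the SVIDs shown by `entry show`) follows Istio's fixed SPIFFE ID pattern. As a minimal sketch of composing such an ID from its parts (`istio_spiffe_id` is a hypothetical helper name, not an Istio or SPIRE command):

```shell
# Sketch: compose an Istio-style SPIFFE ID from trust domain, namespace,
# and service account. Hypothetical helper for illustration only.
istio_spiffe_id() {
  trust_domain=$1; namespace=$2; service_account=$3
  echo "spiffe://${trust_domain}/ns/${namespace}/sa/${service_account}"
}

# Example:
# istio_spiffe_id example.org default curl
# → spiffe://example.org/ns/default/sa/curl
```

Keeping the trust domain here identical to the one configured in both SPIRE and Istio (`example.org` in this guide) is what makes the issued SVIDs match the registrations.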
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/integrations/spire/index.md
master
istio
[ -0.016366634517908096, 0.1290581226348877, -0.051294345408678055, 0.0309502761811018, -0.00916328001767397, -0.032298147678375244, 0.0027309833094477654, 0.006137280259281397, 0.08848472684621811, 0.010139086283743382, -0.015731047838926315, -0.08329153805971146, -0.008737798780202866, 0.0...
0.162296
[Zipkin](https://zipkin.io/) is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in service architectures. Features include both the collection and lookup of this data. ## Installation ### Option 1: Quick start Istio provides a basic sample installation to quickly get Zipkin up and running: {{< text bash >}} $ kubectl apply -f {{< github_file >}}/samples/addons/extras/zipkin.yaml {{< /text >}} This will deploy Zipkin into your cluster. This is intended for demonstration only, and is not tuned for performance or security. ### Option 2: Customizable install Consult the [Zipkin documentation](https://zipkin.io/) to get started. No special changes are needed for Zipkin to work with Istio. ## Usage For information on using Zipkin, please refer to the [Zipkin task](/docs/tasks/observability/distributed-tracing/zipkin/).
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/integrations/zipkin/index.md
[cert-manager](https://cert-manager.io/) is a tool that automates certificate management. It can be integrated with Istio gateways to manage TLS certificates.

## Configuration

Consult the [cert-manager installation documentation](https://cert-manager.io/docs/installation/kubernetes/) to get started. No special changes are needed to work with Istio.

## Usage

### Istio Gateway

cert-manager can be used to write a secret to Kubernetes, which can then be referenced by a Gateway.

1. To get started, configure an `Issuer` resource, following the [cert-manager issuer documentation](https://cert-manager.io/docs/configuration/). `Issuer`s are Kubernetes resources that represent certificate authorities (CAs) able to generate signed certificates by honoring certificate signing requests. For example, an `Issuer` may look like:

    {{< text yaml >}}
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: ca-issuer
      namespace: istio-system
    spec:
      ca:
        secretName: ca-key-pair
    {{< /text >}}

    {{< tip >}}
    For a common Issuer type, ACME, a pod and service are created to respond to challenge requests in order to verify the client owns the domain. To respond to those challenges, an endpoint at `http:///.well-known/acme-challenge/` will need to be reachable. That configuration may be implementation specific.
    {{< /tip >}}

1. Next, configure a `Certificate` resource, following the [cert-manager documentation](https://cert-manager.io/docs/usage/certificate/). The `Certificate` should be created in the same namespace as the `istio-ingressgateway` deployment. For example, a `Certificate` may look like:

    {{< text yaml >}}
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: ingress-cert
      namespace: istio-system
    spec:
      secretName: ingress-cert
      commonName: my.example.com
      dnsNames:
      - my.example.com
      ...
    {{< /text >}}

1. Once the certificate has been created, we should see the secret created in the `istio-system` namespace.
This can then be referenced in the `tls` config for a Gateway under `credentialName`:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: ingress-cert # This should match the Certificate secretName
    hosts:
    - my.example.com # This should match a DNS name in the Certificate
{{< /text >}}

### Kubernetes Ingress

cert-manager provides direct integration with Kubernetes Ingress by configuring an [annotation on the Ingress object](https://cert-manager.io/docs/usage/ingress/). If this method is used, the Ingress must reside in the same namespace as the `istio-ingressgateway` deployment, as secrets will only be read within the same namespace.

Alternatively, a `Certificate` can be created as described in [Istio Gateway](#istio-gateway), then referenced in the `Ingress` object:

{{< text yaml >}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: my.example.com
    http: ...
  tls:
  - hosts:
    - my.example.com # This should match a DNS name in the Certificate
    secretName: ingress-cert # This should match the Certificate secretName
{{< /text >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/integrations/certmanager/index.md
Istio provides both an ingress and service mesh implementation, which can be used together or separately. While these are designed to work together seamlessly, there are times when integrating with a third party ingress is required. This could be for migration purposes, feature requirements, or personal preference.

## Integration Modes

In "standalone" mode, the third party ingress sends directly to backends. In this case, the backends presumably have Istio sidecars injected.

{{< mermaid >}}
graph LR
cc((Client))
tpi(Third Party Ingress)
a(Backend)
cc-->tpi-->a
{{< /mermaid >}}

In this mode, things mostly just work. Clients in a service mesh do not need to be aware that the backend they are connecting to has a sidecar. However, the ingress will not use mTLS, which may lead to undesirable behavior. As a result, most of the configuration for this setup is around enabling mTLS.

In "chained" mode, we use both the third party ingress *and* Istio's own Gateway in sequence. This can be useful when you want the functionality of both layers. In particular, this is useful with managed cloud load balancers, which have features like global addresses and managed certificates.

{{< mermaid >}}
graph LR
cc((Client))
tpi(Third Party Ingress)
ii(Istio Gateway)
a(Backend)
cc-->tpi
tpi-->ii
ii-->a
{{< /mermaid >}}

## Cloud Load Balancers

Generally, cloud load balancers will work out of the box in standalone mode without mTLS. Vendor-specific configuration is required to support chained mode or standalone mode with mTLS.

### Google HTTP(S) Load Balancer

Integration with Google HTTP(S) Load Balancers only works out of the box in standalone mode when mTLS is not required, as mTLS is not supported. Chained mode is possible. See the [Google documentation](https://cloud.google.com/architecture/exposing-service-mesh-apps-through-gke-ingress) for setup instructions.

## In-Cluster Load Balancers

Generally, in-cluster load balancers will work out of the box in standalone mode without mTLS.
Standalone mode with mTLS can be achieved by injecting a sidecar into the Pod of the in-cluster load balancer. This typically involves two steps beyond standard sidecar injection:

1. Disable inbound traffic redirection. While not required, typically we only want to use the sidecar for *outbound* traffic; inbound connections from clients are already handled by the load balancer itself. This also allows preserving the original client IP address, which would otherwise be lost by the sidecar. This mode can be enabled by adding the `traffic.sidecar.istio.io/includeInboundPorts: ""` annotation on the load balancer `Pod`s.
1. Enable Service routing. Istio sidecars can only function properly when requests are sent to Services, not to specific pod IPs. Most load balancers send to specific pod IPs by default, breaking mTLS. Steps to change this are vendor specific; a few examples are listed below, but consulting the specific vendor's documentation is recommended.

    Alternatively, setting the `Host` header to the service name can also work. However, this can result in unexpected behavior; the load balancer will pick a specific pod, but Istio will ignore it. See [here](/docs/ops/configuration/traffic-management/traffic-routing/#http) for more information on why this works.

### ingress-nginx

`ingress-nginx` can be configured to do service routing by adding an annotation on `Ingress` resources:

{{< text yaml >}}
nginx.ingress.kubernetes.io/service-upstream: "true"
{{< /text >}}

### Emissary-Ingress

Emissary-ingress defaults to using Service routing, so no additional steps are required.
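For illustration, here is the `service-upstream` annotation in the context of a full `Ingress` resource. This is a sketch: the hostname and backend Service name are placeholders, not values from this document:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    # Route through the Service VIP so the Istio sidecar can apply mTLS,
    # instead of sending directly to individual pod IPs.
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: my.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
```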
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/integrations/loadbalancers/index.md
[Prometheus](https://prometheus.io/) is an open source monitoring system and time series database. You can use Prometheus with Istio to record metrics that track the health of Istio and of applications within the service mesh. You can visualize metrics using tools like [Grafana](/docs/ops/integrations/grafana/) and [Kiali](/docs/tasks/observability/kiali/).

## Installation

### Option 1: Quick start

Istio provides a basic sample installation to quickly get Prometheus up and running:

{{< text bash >}}
$ kubectl apply -f {{< github_file >}}/samples/addons/prometheus.yaml
{{< /text >}}

This will deploy Prometheus into your cluster. This is intended for demonstration only, and is not tuned for performance or security.

{{< warning >}}
While the quick-start configuration is well-suited for small clusters and monitoring for short time horizons, it is not suitable for large-scale meshes or monitoring over a period of days or weeks. In particular, the introduced labels can increase metrics cardinality, requiring a large amount of storage. And, when trying to identify trends and differences in traffic over time, access to historical data can be paramount.
{{< /warning >}}

### Option 2: Customizable install

Consult the [Prometheus documentation](https://www.prometheus.io/) to get started deploying Prometheus into your environment. See [Configuration](#configuration) for more information on configuring Prometheus to scrape Istio deployments.

## Configuration

In an Istio mesh, each component exposes an endpoint that emits metrics. Prometheus works by scraping these endpoints and collecting the results. This is configured through the [Prometheus configuration file](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) which controls settings for which endpoints to query, the port and path to query, TLS settings, and more.

To gather metrics for the entire mesh, configure Prometheus to scrape:

1. The control plane (`istiod` deployment)
1. Ingress and Egress gateways
1. The Envoy sidecar
1. The user applications (if they expose Prometheus metrics)

To simplify the configuration of metrics, Istio offers two modes of operation.

### Option 1: Metrics merging

To simplify configuration, Istio has the ability to control scraping entirely by `prometheus.io` annotations. This allows Istio scraping to work out of the box with standard configurations such as the ones provided by the [Helm `stable/prometheus`](https://github.com/helm/charts/tree/master/stable/prometheus) charts.

{{< tip >}}
While `prometheus.io` annotations are not a core part of Prometheus, they have become the de facto standard to configure scraping.
{{< /tip >}}

This option is enabled by default but can be disabled by passing `--set meshConfig.enablePrometheusMerge=false` during [installation](/docs/setup/install/istioctl/). When enabled, appropriate `prometheus.io` annotations will be added to all data plane pods to set up scraping. If these annotations already exist, they will be overwritten. With this option, the Envoy sidecar will merge Istio's metrics with the application metrics. The merged metrics will be scraped from `:15020/stats/prometheus`.

This option exposes all the metrics in plain text. This feature may not suit your needs in the following situations:

* You need to scrape metrics using TLS.
* Your application exposes metrics with the same names as Istio metrics. For example, your application metrics expose an `istio_requests_total` metric. This might happen if the application is itself running Envoy.
* Your Prometheus deployment is not configured to scrape based on standard `prometheus.io` annotations.

If required, this feature can be disabled per workload by adding a `prometheus.istio.io/merge-metrics: "false"` annotation on a pod.

### Option 2: Customized scraping configurations

To configure an existing Prometheus instance to scrape stats generated by Istio, several jobs need to be added.
* To scrape `Istiod` stats, the following example job can be added to scrape its `http-monitoring` port:

    {{< text yaml >}}
    - job_name: 'istiod'
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - istio-system
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istiod;http-monitoring
    {{< /text >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/integrations/prometheus/index.md
* To scrape Envoy stats, including sidecar proxies and gateway proxies, the following job can be added to scrape ports that end with `-envoy-prom`:

    {{< text yaml >}}
    - job_name: 'envoy-stats'
      metrics_path: /stats/prometheus
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        action: keep
        regex: '.*-envoy-prom'
    {{< /text >}}

* For application stats, if [Strict mTLS](/docs/tasks/security/authentication/authn-policy/#globally-enabling-istio-mutual-tls-in-strict-mode) is not enabled, your existing scraping configuration should still work. Otherwise, Prometheus needs to be configured to [scrape with Istio certs](#tls-settings).

#### TLS settings

The control plane, gateway, and Envoy sidecar metrics will all be scraped over cleartext. However, the application metrics will follow whatever [Istio authentication policy](/docs/tasks/security/authentication/authn-policy) has been configured for the workload.

* If you use `STRICT` mode, then Prometheus will need to be configured to scrape using Istio certificates as described below.
* If you use `PERMISSIVE` mode, the workload typically accepts TLS and cleartext. However, Prometheus cannot send the special variant of TLS Istio requires for `PERMISSIVE` mode. As a result, you must *not* configure TLS in Prometheus.
* If you use `DISABLE` mode, no TLS configuration is required for Prometheus.

{{< tip >}}
Note this only applies to Istio-terminated TLS. If your application directly handles TLS:

* `STRICT` mode is not supported, as Prometheus would need to send two layers of TLS, which it cannot do.
* `PERMISSIVE` mode and `DISABLE` mode should be configured the same as if Istio was not present. See [Understanding TLS Configuration](/docs/ops/configuration/traffic-management/tls-configuration/) for more information.
{{< /tip >}}

One way to provision Istio certificates for Prometheus is by injecting a sidecar which will rotate SDS certificates and output them to a volume that can be shared with Prometheus. However, the sidecar should not intercept requests for Prometheus, because Prometheus's model of direct endpoint access is incompatible with Istio's sidecar proxy model.

To achieve this, configure a cert volume mount on the Prometheus server container:

{{< text yaml >}}
containers:
- name: prometheus-server
  ...
  volumeMounts:
  - mountPath: /etc/prom-certs/
    name: istio-certs
volumes:
- emptyDir:
    medium: Memory
  name: istio-certs
{{< /text >}}

Then add the following annotations to the Prometheus deployment pod template, and deploy it with [sidecar injection](/docs/setup/additional-setup/sidecar-injection/).
This configures the sidecar to write a certificate to the shared volume, but without configuring traffic redirection:

{{< text yaml >}}
spec:
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/includeInboundPorts: ""   # do not intercept any inbound ports
        traffic.sidecar.istio.io/includeOutboundIPRanges: ""   # do not intercept any outbound traffic
        proxy.istio.io/config: |  # configure an env variable `OUTPUT_CERTS` to write certificates to the given folder
          proxyMetadata:
            OUTPUT_CERTS: /etc/istio-output-certs
        sidecar.istio.io/userVolumeMount: '[{"name": "istio-certs", "mountPath": "/etc/istio-output-certs"}]' # mount the shared volume at sidecar proxy
{{< /text >}}

Finally, set the scraping job TLS context as follows:

{{< text yaml >}}
scheme: https
tls_config:
  ca_file: /etc/prom-certs/root-cert.pem
  cert_file: /etc/prom-certs/cert-chain.pem
  key_file: /etc/prom-certs/key.pem
insecure_skip_verify: true  # Prometheus does not support Istio security naming, thus skip verifying target pod certificate
{{< /text >}}

## Best practices

For larger meshes, advanced configuration might help Prometheus scale. See [Using Prometheus for production-scale monitoring](/docs/ops/best-practices/observability/#using-prometheus-for-production-scale-monitoring) for more information.
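Putting the pieces together, a complete application-metrics scrape job using the Istio-provisioned certificates might look like the following. This is a sketch: the job name and the annotation-based `keep` rule are assumptions; only the `tls_config` paths come from the setup above:

```yaml
- job_name: 'app-metrics-mtls'
  scheme: https
  tls_config:
    ca_file: /etc/prom-certs/root-cert.pem
    cert_file: /etc/prom-certs/cert-chain.pem
    key_file: /etc/prom-certs/key.pem
    insecure_skip_verify: true  # Istio security naming is not supported by Prometheus
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # Keep only pods that opt in via the de facto prometheus.io/scrape annotation
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"
```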
[Apache SkyWalking](http://skywalking.apache.org) is an application performance monitoring (APM) system, especially designed for microservices, cloud native and container-based architectures. SkyWalking is a one-stop solution for observability: it not only provides distributed tracing like Jaeger and Zipkin, metrics like Prometheus and Grafana, and logging like Kiali, it also extends observability to many other scenarios, such as associating logs with traces, collecting system events and associating those events with metrics, and service performance profiling based on eBPF.

## Installation

### Option 1: Quick start

Istio provides a basic sample installation to quickly get SkyWalking up and running:

{{< text bash >}}
$ kubectl apply -f @samples/addons/extras/skywalking.yaml@
{{< /text >}}

This will deploy SkyWalking into your cluster. This is intended for demonstration only, and is not tuned for performance or security.

By default, Istio proxies don't send traces to SkyWalking. You will also need to enable the SkyWalking tracing extension provider by adding the following fields to your configuration:

{{< text yaml >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    extensionProviders:
    - skywalking:
        service: tracing.istio-system.svc.cluster.local
        port: 11800
      name: skywalking
    defaultProviders:
      tracing:
      - "skywalking"
{{< /text >}}

### Option 2: Customizable install

Consult the [SkyWalking documentation](http://skywalking.apache.org) to get started. No special changes are needed for SkyWalking to work with Istio. Once SkyWalking is installed, remember to modify the option `--set meshConfig.extensionProviders[0].skywalking.service` to point to the `skywalking-oap` deployment. See [`ProxyConfig.Tracing`](/docs/reference/config/istio.mesh.v1alpha1/#Tracing) for advanced configuration such as TLS settings.
## Usage

For information on using SkyWalking, please refer to the [SkyWalking task](/docs/tasks/observability/distributed-tracing/skywalking/).
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/integrations/skywalking/index.md
Before reading this document, be sure to review [Istio's architecture](/docs/ops/deployment/architecture/) and [deployment models](/docs/ops/deployment/deployment-models/). This page builds on those documents to explain how Istio can be extended to support joining virtual machines into the mesh.

Istio's virtual machine support allows connecting workloads outside of a Kubernetes cluster to the mesh. This enables legacy applications, or applications not suitable to run in a containerized environment, to get all the benefits that Istio provides to applications running inside Kubernetes.

For workloads running on Kubernetes, the Kubernetes platform itself provides various features like service discovery, DNS resolution, and health checks which are often missing in virtual machine environments. Istio enables these features for workloads running on virtual machines, and in addition allows these workloads to utilize Istio functionality such as mutual TLS (mTLS), rich telemetry, and advanced traffic management capabilities.

The following diagram shows the architecture of a mesh with virtual machines:

{{< tabset category-name="network-mode" >}}

{{< tab name="Single-Network" category-value="single" >}}

In this mesh, there is a single [network](/docs/ops/deployment/deployment-models/#network-models), where pods and virtual machines can communicate directly with each other. Control plane traffic, including XDS configuration and certificate signing, is sent through a Gateway in the cluster. This ensures that the virtual machines have a stable address to connect to when they are bootstrapping. Pods and virtual machines can communicate directly with each other without requiring any intermediate Gateway.
{{< image width="75%" link="single-network.svg" alt="A service mesh with a single network and virtual machines" title="Single network" caption="A service mesh with a single network and virtual machines" >}}

{{< /tab >}}

{{< tab name="Multi-Network" category-value="multiple" >}}

In this mesh, there are multiple [networks](/docs/ops/deployment/deployment-models/#network-models), where pods and virtual machines are not able to communicate directly with each other. Control plane traffic, including XDS configuration and certificate signing, is sent through a Gateway in the cluster. Similarly, all communication between pods and virtual machines goes through the gateway, which acts as a bridge between the two networks.

{{< image width="75%" link="multi-network.svg" alt="A service mesh with multiple networks and virtual machines" title="Multiple networks" caption="A service mesh with multiple networks and virtual machines" >}}

{{< /tab >}}

{{< /tabset >}}

## Service association

Istio provides two mechanisms to represent virtual machine workloads:

* [`WorkloadGroup`](/docs/reference/config/networking/workload-group/) represents a logical group of virtual machine workloads that share common properties. This is similar to a `Deployment` in Kubernetes.
* [`WorkloadEntry`](/docs/reference/config/networking/workload-entry/) represents a single instance of a virtual machine workload. This is similar to a `Pod` in Kubernetes.

Creating these resources (`WorkloadGroup` and `WorkloadEntry`) does not result in provisioning of any resources or running of any virtual machine workloads. Rather, these resources just reference the workloads and inform Istio how to configure the mesh appropriately.
When adding a virtual machine workload to the mesh, you will need to create a `WorkloadGroup` that acts as a template for each `WorkloadEntry` instance:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: WorkloadGroup
metadata:
  name: product-vm
spec:
  metadata:
    labels:
      app: product
  template:
    serviceAccount: default
  probe:
    httpGet:
      port: 8080
{{< /text >}}

Once a virtual machine has been [configured and added to the mesh](/docs/setup/install/virtual-machine/#configure-the-virtual-machine), a corresponding `WorkloadEntry` will be automatically created by the Istio control plane. For example:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: WorkloadEntry
metadata:
  annotations:
    istio.io/autoRegistrationGroup: product-vm
  labels:
    app: product
  name: product-vm-1.2.3.4
spec:
  address: 1.2.3.4
  labels:
    app: product
  serviceAccount: default
{{< /text >}}

This `WorkloadEntry` resource describes a single instance of a workload, similar to a pod in Kubernetes. When the workload is removed from the mesh, the `WorkloadEntry` resource will be automatically removed. Additionally, if any probes are configured in the `WorkloadGroup` resource, the Istio control plane automatically updates the health status of associated `WorkloadEntry` instances.
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/deployment/vm-architecture/index.md
In order for consumers to reliably call your workload, it's recommended to declare a `Service` association. This allows clients to reach a stable hostname, like `product.default.svc.cluster.local`, rather than an ephemeral IP address. This also enables you to use advanced routing capabilities in Istio via the `DestinationRule` and `VirtualService` APIs.

Any Kubernetes service can transparently select workloads across both pods and virtual machines via the selector fields, which are matched with pod and `WorkloadEntry` labels respectively. For example, a `Service` named `product` is composed of a `Pod` and a `WorkloadEntry`:

{{< image width="30%" link="service-selector.svg" title="Service Selection" >}}

With this configuration, requests to `product` would be load-balanced across both the pod and virtual machine workload instances.

## DNS

Kubernetes provides DNS resolution in pods for `Service` names, allowing pods to easily communicate with one another by stable hostnames. For virtual machine expansion, Istio provides similar functionality via a [DNS Proxy](/docs/ops/configuration/traffic-management/dns-proxy/). This feature redirects all DNS queries from the virtual machine workload to the Istio proxy, which maintains a mapping of hostnames to IP addresses. As a result, workloads running on virtual machines can transparently call `Service`s (similar to pods) without requiring any additional configuration.
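A minimal sketch of such a `Service`, assuming the `app: product` label from the `WorkloadGroup` example; the port numbers here are chosen for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: product
spec:
  # Matches both Pod labels and WorkloadEntry labels, so requests are
  # load-balanced across Kubernetes pods and virtual machine instances.
  selector:
    app: product
  ports:
  - name: http
    port: 80
    targetPort: 8080
```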
## Kernel Module Requirements on Cluster Nodes

Regardless of the Istio {{< gloss >}}data plane{{< /gloss >}} mode, in Kubernetes contexts Istio generally requires Kubernetes nodes running Linux kernels with support for traffic interception and routing. Istio supports two backends for traffic management: `iptables` (default) and `nftables`. The majority of Linux kernels released in the past decade include built-in support for the features Istio uses, either as kernel modules that are auto-loaded when required, or compiled in. The specific kernel modules required depend on which backend you choose to use.

### iptables Backend

When using the `iptables` backend (the default), the following kernel modules are required for Istio to function correctly:

#### Primary iptables Modules

| Module | Remark |
| --- | --- |
| `br_netfilter` | |
| `ip6table_mangle` | Only needed for IPv6/dual-stack clusters |
| `ip6table_nat` | Only needed for IPv6/dual-stack clusters |
| `ip6table_raw` | Only needed for IPv6/dual-stack clusters |
| `iptable_mangle` | |
| `iptable_nat` | |
| `iptable_raw` | Only needed for `DNS` interception in sidecar mode |
| `xt_REDIRECT` | |
| `xt_connmark` | Needed for ambient dataplane mode, and sidecar dataplane mode with `TPROXY` interception (default) |
| `xt_conntrack` | |
| `xt_mark` | Needed for ambient dataplane mode, and sidecar dataplane mode with `TPROXY` interception (default) |
| `xt_owner` | |
| `xt_tcpudp` | |
| `xt_multiport` | |
| `ip_set` | Needed for ambient dataplane mode |

The following additional modules are used by the modules listed above and should also be loaded on the cluster node:

| Module | Remark |
| --- | --- |
| `bridge` | |
| `ip6_tables` | Only needed for IPv6/dual-stack clusters |
| `ip_tables` | |
| `nf_conntrack` | |
| `nf_conntrack_ipv4` | |
| `nf_conntrack_ipv6` | Only needed for IPv6/dual-stack clusters |
| `nf_nat` | |
| `nf_nat_ipv4` | |
| `nf_nat_ipv6` | Only needed for IPv6/dual-stack clusters |
| `nf_nat_redirect` | |
| `x_tables` | |
| `ip_set_hash_ip` | Needed for ambient dataplane mode |

### nftables Backend

The `nftables` framework is a modern replacement for `iptables`, offering improved performance and flexibility. Istio relies on the `nft` CLI tool to configure `nftables` rules. The `nft` binary must be version 1.0.1 or later, and it requires Linux kernel version 5.13 or higher. For the `nft` CLI to function correctly, the following kernel modules must be available on the host system.

| Module | Remark |
| --- | --- |
| `nf_tables` | Core nftables module |
| `nf_conntrack` | Needed for connection tracking support |
| `nft_ct` | |
| `nf_defrag_ipv4` | |
| `nf_defrag_ipv6` | Only needed for IPv6/dual-stack clusters |
| `nft_nat` | |
| `nft_socket` | |
| `nft_tproxy` | |
| `nft_redir` | |

### Kernel Module Issues

While uncommon, the use of custom or nonstandard Linux kernels or distributions may result in scenarios where the specific modules listed above are not available on the host, or cannot be loaded automatically. For example, this [`selinux` issue](https://www.suse.com/support/kb/doc/?id=000020241) describes a scenario in some RHEL releases where `selinux` configuration may prevent the automatic loading of some of the above-mentioned kernel modules. For more details on the specific Istio components that perform traffic interception and routing configuration, see the relevant data plane mode documentation.
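As a quick way to audit a node, the following hypothetical helper compares a list of required modules against `lsmod` output. Note that modules compiled directly into the kernel do not appear in `lsmod`, so a module reported as missing here may still be available as a built-in:

```shell
#!/bin/sh
# Print each required module that does not appear in `lsmod`-style input.
#   $1:    whitespace-separated list of required module names
#   stdin: output of `lsmod` (module name in the first column)
check_missing_modules() {
  # Skip the "Module Size Used by" header and keep the first column
  loaded=$(awk 'NR > 1 { print $1 }')
  for mod in $1; do
    echo "$loaded" | grep -qx "$mod" || echo "$mod"
  done
}

# Typical usage on a cluster node (iptables backend, IPv4-only subset):
#   lsmod | check_missing_modules "br_netfilter iptable_mangle iptable_nat xt_REDIRECT xt_conntrack"
```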
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/deployment/platform-requirements/index.md
Multicluster deployments with ambient mode enable you to offer truly globally resilient applications at scale with minimal overhead. In addition to its normal functions, the Istio control plane creates watches on all remote clusters to keep an up-to-date listing of the global services each cluster offers. The Istio data plane can route traffic to these remote global services, either as part of normal traffic distribution, or specifically when the local service is unavailable.

## Control plane performance

As documented [here](/docs/ops/deployment/performance-and-scalability), the Istio control plane generally scales as the product of deployment changes, configuration changes, and the number of connected proxies. Ambient multicluster adds two new dimensions to the control plane scalability story: the number of remote clusters, and the number of remote services. Because the control plane is not programming proxies for remote clusters (assuming a multi-primary deployment topology), adding 10 remote services to the mesh has substantially lower impact on control plane performance than adding 10 local services.

Our multicluster control plane load test created 300 services with 4000 endpoints in each of 10 clusters, and added these clusters to the mesh one at a time. The approximate control plane impact of adding a remote cluster at this scale was **1% of a CPU core, and 180 MB of memory**. At this scale, it should be safe to scale well beyond 10 clusters in a mesh with a properly scaled control plane.

One item to note is that for multicluster scalability, horizontally scaling the control plane will not help, as each control plane instance maintains a complete cache of remote services. Instead, we recommend modifying the resource requests and limits of the control plane to scale vertically to meet the needs of your multicluster mesh.
## Data plane performance

When traffic is routed to a remote cluster, the originating data plane establishes an encrypted tunnel to the destination cluster's east/west gateway. It then establishes a secondary encrypted tunnel inside the first, which is terminated at the destination data plane. This use of inner and outer tunnels allows the data plane to securely communicate with the remote cluster without knowing the details of which pod IPs represent which services. This double encryption does, however, carry some overhead.

The data plane load test measures the response latency of traffic between pods in the same cluster, versus those in two different clusters, to understand the impact of double encryption on latency. Additionally, double encryption requires double handshakes, which disproportionately affects the latency of new connections to the remote cluster. As you can see below, our initial connections observed an average of 2.2 milliseconds (346%) of additional latency, while requests using existing connections observed an increase of 0.13 milliseconds (72%).

While these numbers appear significant, it is expected that most multicluster traffic will cross availability zones or regions, and the observed increase in overhead latency will be minimal compared to the overall transit latency between data centers.

{{< image link="./ambient-mc-dataplane-reconnect.png" caption="request latency with reconnect" width="90%" >}}

{{< image link="./ambient-mc-dataplane-existing.png" caption="request latency without reconnect" width="90%" >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/deployment/ambient-mc-perf/index.md
Istio provides a great deal of functionality to applications with little or no impact on the application code itself. Many Kubernetes applications can be deployed in an Istio-enabled cluster without any changes at all. However, there are some implications of Istio's sidecar model that may need special consideration when deploying an Istio-enabled application. This document describes these application considerations and specific requirements of Istio enablement.

## Pod requirements

To be part of a mesh, Kubernetes pods must satisfy the following requirements:

- **Application UIDs**: Ensure your pods do **not** run applications as a user with the user ID (UID) value of `1337` because `1337` is reserved for the sidecar proxy.

- **`NET_ADMIN` and `NET_RAW` capabilities**: If [pod security policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) are [enforced](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#enabling-pod-security-policies) in your cluster, and unless you use the [Istio CNI Plugin](/docs/setup/additional-setup/cni/), your pods must have the `NET_ADMIN` and `NET_RAW` capabilities allowed. The initialization containers of the Envoy proxies require these capabilities.

    To check if the `NET_ADMIN` and `NET_RAW` capabilities are allowed for your pods, you need to check if their [service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) can use a pod security policy that allows the `NET_ADMIN` and `NET_RAW` capabilities. If you haven't specified a service account in your pods' deployment, the pods run using the `default` service account in their deployment's namespace.
    To list the capabilities for a service account, replace `` and `` with your values in the following command:

    {{< text bash >}}
    $ for psp in $(kubectl get psp -o jsonpath="{range .items[*]}{@.metadata.name}{'\n'}{end}"); do if [ $(kubectl auth can-i use psp/$psp --as=system:serviceaccount::) = yes ]; then kubectl get psp/$psp --no-headers -o=custom-columns=NAME:.metadata.name,CAPS:.spec.allowedCapabilities; fi; done
    {{< /text >}}

    For example, to check for the `default` service account in the `default` namespace, run the following command:

    {{< text bash >}}
    $ for psp in $(kubectl get psp -o jsonpath="{range .items[*]}{@.metadata.name}{'\n'}{end}"); do if [ $(kubectl auth can-i use psp/$psp --as=system:serviceaccount:default:default) = yes ]; then kubectl get psp/$psp --no-headers -o=custom-columns=NAME:.metadata.name,CAPS:.spec.allowedCapabilities; fi; done
    {{< /text >}}

    If you see `NET_ADMIN` and `NET_RAW` or `*` in the list of capabilities of one of the allowed policies for your service account, your pods have permission to run the Istio init containers. Otherwise, you will need to [provide the permission](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#authorizing-policies).

- **Pod labels**: We recommend explicitly declaring pods with an application identifier and version by using a pod label. These labels add contextual information to the metrics and telemetry that Istio collects. Each of these values is read from multiple labels, ordered from highest to lowest precedence:

    - Application name: `service.istio.io/canonical-name`, `app.kubernetes.io/name`, or `app`.
    - Application version: `service.istio.io/canonical-revision`, `app.kubernetes.io/version`, or `version`.

- **Named service ports**: Service ports may optionally be named to explicitly specify a protocol. See [Protocol Selection](/docs/ops/configuration/traffic-management/protocol-selection/) for more details.
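The pod label recommendation above can be sketched in a workload manifest like the following; the workload name and image are hypothetical examples, not values from this document:

{{< text yaml >}}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1              # hypothetical workload name
spec:
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      labels:
        app: reviews            # application name, used in Istio telemetry
        version: v1             # application version
    spec:
      containers:
      - name: reviews
        image: example.com/reviews:v1   # hypothetical image
{{< /text >}}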
If a pod belongs to multiple [Kubernetes services](https://kubernetes.io/docs/concepts/services-networking/service/), the services cannot use the same port number for different protocols, for instance HTTP and TCP.

## Ports used by Istio

The following ports and protocols are used by the Istio sidecar proxy (Envoy).

{{< warning >}}
To avoid port conflicts with sidecars, applications should not use any of the ports used by Envoy.
{{< /warning >}}

| Port | Protocol | Description | Pod-internal only |
|----|----|----|----|
| 15000 | TCP | Envoy admin port (commands/diagnostics) | Yes |
| 15001 | TCP | Envoy outbound | No |
| 15002 | TCP | Listen port for failure detection | Yes |
| 15004 | HTTP | Debug port | Yes |
| 15006 | TCP | Envoy inbound | No |
| 15008 | HTTP2 | {{< gloss >}}HBONE{{< /gloss >}} mTLS tunnel port | No |
| 15020 | HTTP | Merged Prometheus telemetry from Istio agent, Envoy, and application | No |
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/deployment/application-requirements/index.md
| 15021 | HTTP | Health checks | No |
| 15053 | DNS | DNS port, if capture is enabled | Yes |
| 15090 | HTTP | Envoy Prometheus telemetry | No |

The following ports and protocols are used by the Istio control plane (istiod).

| Port | Protocol | Description | Local host only |
|----|----|----|----|
| 443 | HTTPS | Webhooks service port | No |
| 8080 | HTTP | Debug interface (deprecated, container port only) | No |
| 15010 | GRPC | XDS and CA services (Plaintext, only for secure networks) | No |
| 15012 | GRPC | XDS and CA services (TLS and mTLS, recommended for production use) | No |
| 15014 | HTTP | Control plane monitoring | No |
| 15017 | HTTPS | Webhook container port, forwarded from 443 | No |

## Server First Protocols

Some protocols are "server first" protocols, which means the server will send the first bytes. This may have an impact on [`PERMISSIVE`](/docs/reference/config/security/peer_authentication/#PeerAuthentication-MutualTLS-Mode) mTLS and [automatic protocol selection](/docs/ops/configuration/traffic-management/protocol-selection/#automatic-protocol-selection). Both of these features work by inspecting the initial bytes of a connection to determine the protocol, which is incompatible with server first protocols.

In order to support these cases, follow the [explicit protocol selection](/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection) steps to declare the protocol of the application as `TCP`.
The following ports are known to commonly carry server first protocols, and are automatically assumed to be `TCP`:

| Protocol | Port |
|--------|----|
| SMTP | 25 |
| DNS | 53 |
| MySQL | 3306 |
| MongoDB | 27017 |

Because TLS communication is not server first, TLS-encrypted server first traffic will work with automatic protocol detection, as long as you make sure that all traffic subjected to TLS sniffing is encrypted:

1. Configure `mTLS` mode `STRICT` for the server. This will enforce TLS encryption for all requests.
1. Configure `mTLS` mode `DISABLE` for the server. This will disable the TLS sniffing, allowing server first protocols to be used.
1. Configure all clients to send `TLS` traffic, generally through a [`DestinationRule`](/docs/reference/config/networking/destination-rule/#ClientTLSSettings) or by relying on auto mTLS.
1. Configure your application to send TLS traffic directly.

## Outbound traffic

In order to support Istio's traffic routing capabilities, traffic leaving a pod may be routed differently than when a sidecar is not deployed.

For HTTP-based traffic, traffic is routed based on the `Host` header. This may lead to unexpected behavior if the destination IP and `Host` header are not aligned. For example, a request like `curl 1.2.3.4 -H "Host: httpbin.default"` will be routed to the `httpbin` service, rather than `1.2.3.4`.

For non-HTTP-based traffic (including HTTPS), Istio does not have access to a `Host` header, so routing decisions are based on the Service IP address. One implication of this is that direct calls to pods (for example, `curl `), rather than Services, will not be matched. While the traffic may be [passed through](/docs/tasks/traffic-management/egress/egress-control/#envoy-passthrough-to-external-services), it will not get the full Istio functionality, including mTLS encryption, traffic routing, and telemetry.

See the [Traffic Routing](/docs/ops/configuration/traffic-management/traffic-routing) page for more information.
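Explicit protocol selection for a server first workload can be sketched with a named service port; the `mysql` service below is a hypothetical example:

{{< text yaml >}}
apiVersion: v1
kind: Service
metadata:
  name: mysql          # hypothetical server first workload
spec:
  selector:
    app: mysql
  ports:
  - name: tcp-mysql    # the "tcp-" name prefix declares the protocol as TCP
    port: 3306
    protocol: TCP
{{< /text >}}

The Kubernetes `appProtocol` field on the port is another way to declare the protocol, as described on the Protocol Selection page.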
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/deployment/application-requirements/index.md
An Istio service mesh is logically split into a **data plane** and a **control plane**.

* The **data plane** is composed of a set of intelligent proxies ([Envoy](https://www.envoyproxy.io/)) deployed as sidecars. These proxies mediate and control all network communication between microservices. They also collect and report telemetry on all mesh traffic.

* The **control plane** manages and configures the proxies to route traffic.

The following diagram shows the different components that make up each plane:

{{< image link="./arch.svg" alt="The overall architecture of an Istio-based application." caption="Istio architecture in sidecar mode" >}}

## Components

The following sections provide a brief overview of each of Istio's core components.

### Envoy

Istio uses an extended version of the [Envoy](https://www.envoyproxy.io/) proxy. Envoy is a high-performance proxy developed in C++ to mediate all inbound and outbound traffic for all services in the service mesh. Envoy proxies are the only Istio components that interact with data plane traffic.

Envoy proxies are deployed as sidecars to services, logically augmenting the services with Envoy's many built-in features, for example:

* Dynamic service discovery
* Load balancing
* TLS termination
* HTTP/2 and gRPC proxies
* Circuit breakers
* Health checks
* Staged rollouts with %-based traffic split
* Fault injection
* Rich metrics

This sidecar deployment allows Istio to enforce policy decisions and extract rich telemetry which can be sent to monitoring systems to provide information about the behavior of the entire mesh. The sidecar proxy model also allows you to add Istio capabilities to an existing deployment without requiring you to rearchitect or rewrite code.

Some of the Istio features and tasks enabled by Envoy proxies include:

* Traffic control features: enforce fine-grained traffic control with rich routing rules for HTTP, gRPC, WebSocket, and TCP traffic.
* Network resiliency features: set up retries, failovers, circuit breakers, and fault injection.
* Security and authentication features: enforce security policies and enforce access control and rate limiting defined through the configuration API.
* Pluggable extensions model based on WebAssembly that allows for custom policy enforcement and telemetry generation for mesh traffic.

### Istiod

Istiod provides service discovery, configuration, and certificate management.

Istiod converts high level routing rules that control traffic behavior into Envoy-specific configurations, and propagates them to the sidecars at runtime. It abstracts platform-specific service discovery mechanisms and synthesizes them into a standard format that any sidecar conforming with the [Envoy API](https://www.envoyproxy.io/docs/envoy/latest/api/api) can consume.

Istio can support discovery for multiple environments such as Kubernetes or VMs.

You can use Istio's [Traffic Management API](/docs/concepts/traffic-management/#introducing-istio-traffic-management) to instruct Istiod to refine the Envoy configuration to exercise more granular control over the traffic in your service mesh.

Istiod [security](/docs/concepts/security/) enables strong service-to-service and end-user authentication with built-in identity and credential management. You can use Istio to upgrade unencrypted traffic in the service mesh. Using Istio, operators can enforce policies based on service identity rather than on relatively unstable layer 3 or layer 4 network identifiers. Additionally, you can use [Istio's authorization feature](/docs/concepts/security/#authorization) to control who can access your services.

Istiod acts as a Certificate Authority (CA) and generates certificates to allow secure mTLS communication in the data plane.
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/deployment/architecture/index.md
Kubernetes has a unique and permissive networking model. In order to configure L2-L4 networking between Pods, [a Kubernetes cluster requires an _interface_ Container Network Interface (CNI) plugin](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/). This plugin runs whenever a new pod is created, and sets up the network environment for that pod. If you are using a hosted Kubernetes provider, you usually have limited choice in what CNI plugin you get in your cluster: it is an implementation detail of the hosted implementation.

In order to configure mesh traffic redirection, regardless of what CNI you or your provider choose to use for L2-L4 networking, Istio includes a _chained_ CNI plugin, which runs after all configured CNI interface plugins. The API for defining chained and interface plugins, and for sharing data between them, is part of the [CNI specification](https://www.cni.dev/).

Istio works with all CNI implementations that follow the CNI standard, in both sidecar and ambient mode. The Istio CNI plugin is optional in sidecar mode, and required in {{< gloss >}}ambient{{< /gloss >}} mode.

* [Learn how to install Istio with a CNI plugin](/docs/setup/additional-setup/cni/)
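As a sketch, enabling the chained Istio CNI plugin at install time can be done through the `IstioOperator` `cni` component; see the linked setup page for the full, authoritative procedure:

{{< text bash >}}
$ istioctl install --set components.cni.enabled=true
{{< /text >}}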
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/deployment/architecture/cni/index.md
Istio makes it easy to create a network of deployed services with rich routing, load balancing, service-to-service authentication, monitoring, and more - all without any changes to the application code. Istio strives to provide these benefits with minimal resource overhead and aims to support very large meshes with high request rates while adding minimal latency.

The Istio data plane components, the Envoy proxies, handle data flowing through the system. The Istio control plane component, Istiod, configures the data plane. The data plane and control plane have distinct performance concerns.

## Performance summary for Istio 1.24

The [Istio load tests](https://github.com/istio/tools/tree/{{< source_branch_name >}}/perf/load) mesh consists of **1000** services and **2000** pods in an Istio mesh with 70,000 mesh-wide requests per second.

## Control plane performance

Istiod configures sidecar proxies based on user authored configuration files and the current state of the system. In a Kubernetes environment, Custom Resource Definitions (CRDs) and deployments constitute the configuration and state of the system. The Istio configuration objects, like gateways and virtual services, provide the user-authored configuration. To produce the configuration for the proxies, Istiod processes the combined configuration and system state from the Kubernetes environment and the user-authored configuration.

The control plane supports thousands of services, spread across thousands of pods, with a similar number of user authored virtual services and other configuration objects. Istiod's CPU and memory requirements scale with the amount of configuration and possible system states. The CPU consumption scales with the following factors:

- The rate of deployment changes.
- The rate of configuration changes.
- The number of proxies connecting to Istiod.

However, this part is inherently horizontally scalable.
You can increase the number of Istiod instances to reduce the amount of time it takes for the configuration to reach all proxies. At large scale, [configuration scoping](/docs/ops/configuration/mesh/configuration-scoping) is highly recommended.

## Data plane performance

Data plane performance depends on many factors, for example:

- Number of client connections
- Target request rate
- Request size and response size
- Number of proxy worker threads
- Protocol
- CPU cores
- Various proxy features. In particular, telemetry filters (logging, tracing, and metrics) are known to have a moderate impact.

The latency, throughput, and the proxies' CPU and memory consumption are measured as a function of said factors.

### Sidecar and ztunnel resource usage

Since the sidecar proxy performs additional work on the data path, it consumes CPU and memory. In Istio 1.24, with 1000 http requests per second containing 1 KB of payload each:

- a single sidecar proxy with 2 worker threads consumes about 0.20 vCPU and 60 MB of memory.
- a single waypoint proxy with 2 worker threads consumes about 0.25 vCPU and 60 MB of memory.
- a single ztunnel proxy consumes about 0.06 vCPU and 12 MB of memory.

The memory consumption of the proxy depends on the total configuration state the proxy holds. A large number of listeners, clusters, and routes can increase memory usage.

### Latency

Since Istio adds a sidecar proxy or ztunnel proxy on the data path, latency is an important consideration. Every feature Istio adds also adds to the path length inside the proxy and potentially affects latency.

The Envoy proxy collects raw telemetry data after a response is sent to the client. The time spent collecting raw telemetry for a request does not contribute to the total time taken to complete that request. However, since the worker is busy handling the request, the worker won't start handling the next request immediately.
This process adds to the queue wait time of the next request and affects average and tail latencies. The
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/deployment/performance-and-scalability/index.md
actual tail latency depends on the traffic pattern.

### Latency for Istio 1.24

In sidecar mode, a request will pass through the client sidecar proxy and then the server sidecar proxy before reaching the server, and vice versa. In ambient mode, a request will pass through the client node ztunnel and then the server node ztunnel before reaching the server. With waypoints configured, a request will go through a waypoint proxy between the ztunnels.

The following charts show the P90 and P99 latency of http/1.1 requests traveling through various data plane modes. To run the tests, we used a bare-metal cluster of 5 [M3 Large](https://deploy.equinix.com/product/servers/m3-large/) machines and [Flannel](https://github.com/flannel-io/flannel) as the primary CNI. We obtained these results using the [Istio benchmarks](https://github.com/istio/tools/tree/{{< source_branch_name >}}/perf/benchmark) for the `http/1.1` protocol, with a 1 KB payload at 500, 750, 1000, 1250, and 1500 requests per second, using 4 client connections, 2 proxy workers, and mutual TLS enabled.

Note: This testing was performed on the [CNCF Community Infrastructure Lab](https://github.com/cncf/cluster). Different hardware will give different values.

{{< image link="./istio-1.24.0-fortio-90.png" caption="P90 latency vs client connections" width="90%" >}}

{{< image link="./istio-1.24.0-fortio-99.png" caption="P99 latency vs client connections" width="90%" >}}

- `no mesh`: Client pod directly calls the server pod, no pods in Istio service mesh.
- `ambient: L4`: Default ambient mode with the {{< gloss >}}secure L4 overlay{{< /gloss >}}.
- `ambient: L4+L7`: Default ambient mode with the secure L4 overlay and waypoints enabled for the namespace.
- `sidecar`: Client and server sidecars.

### Benchmarking tools

Istio uses the following tools for benchmarking:

- [`fortio.org`](https://fortio.org/) - a constant throughput load testing tool.
- [`nighthawk`](https://github.com/envoyproxy/nighthawk) - a load testing tool based on Envoy.
- [`isotope`](https://github.com/istio/tools/tree/{{< source_branch_name >}}/isotope) - a synthetic application with configurable topology.
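A load profile like the one described above can be approximated with `fortio` directly; the target URL below is a hypothetical in-mesh service, not one used in the published benchmarks:

{{< text bash >}}
$ fortio load -qps 1000 -c 4 -t 60s http://httpbin.default:8000/get
{{< /text >}}

Here `-qps` sets the constant request rate, `-c` the number of client connections, and `-t` the test duration.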
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/deployment/performance-and-scalability/index.md
When configuring a production deployment of Istio, you need to answer a number of questions. Will the mesh be confined to a single {{< gloss >}}cluster{{< /gloss >}} or distributed across multiple clusters? Will all the services be located in a single fully connected network, or will gateways be required to connect services across multiple networks? Is there a single {{< gloss >}}control plane{{< /gloss >}}, potentially shared across clusters, or are there multiple control planes deployed to ensure high availability (HA)? Are all clusters going to be connected into a single {{< gloss >}}multicluster{{< /gloss >}} service mesh, or will they be federated into a {{< gloss >}}multi-mesh{{< /gloss >}} deployment?

All of these questions, among others, represent independent dimensions of configuration for an Istio deployment:

1. single or multiple cluster
1. single or multiple network
1. single or multiple control plane
1. single or multiple mesh

In a production environment involving multiple clusters, you can use a mix of deployment models. For example, having more than one control plane is recommended for HA, but you could achieve this for a 3 cluster deployment by deploying 2 clusters with a single shared control plane and then adding the third cluster with a second control plane in a different network. All three clusters could then be configured to share both control planes so that all the clusters have 2 sources of control to ensure HA.

Choosing the right deployment model depends on the isolation, performance, and HA requirements for your use case. This guide describes the various options and considerations when configuring your Istio deployment.

## Cluster models

The workload instances of your application run in one or more {{< gloss "cluster" >}}clusters{{< /gloss >}}. For isolation, performance, and high availability, you can confine clusters to availability zones and regions.
Production systems, depending on their requirements, can run across multiple clusters spanning a number of zones or regions, leveraging cloud load balancers to handle things like locality and zonal or regional fail over.

In most cases, clusters represent boundaries for configuration and endpoint discovery. For example, each Kubernetes cluster has an API Server which manages the configuration for the cluster as well as serving {{< gloss >}}service endpoint{{< /gloss >}} information as pods are brought up or down. Since Kubernetes configures this behavior on a per-cluster basis, this approach helps limit the potential problems caused by incorrect configurations.

In Istio, you can configure a single service mesh to span any number of clusters.

### Single cluster

In the simplest case, you can confine an Istio mesh to a single {{< gloss >}}cluster{{< /gloss >}}. A cluster usually operates over a [single network](#single-network), but it varies between infrastructure providers. A single cluster and single network model includes a control plane, which results in the simplest Istio deployment.

{{< image width="50%" link="single-cluster.svg" alt="A service mesh with a single cluster" title="Single cluster" caption="A service mesh with a single cluster" >}}

Single cluster deployments offer simplicity, but lack other features, for example, fault isolation and fail over. If you need higher availability, you should use multiple clusters.

### Multiple clusters

You can configure a single mesh to include multiple {{< gloss "cluster" >}}clusters{{< /gloss >}}. Using a {{< gloss >}}multicluster{{< /gloss >}} deployment within a single mesh affords the following capabilities beyond that of a single cluster deployment:

- Fault isolation and fail over: `cluster-1` goes down, fail over to `cluster-2`.
- Location-aware routing and fail over: Send requests to the nearest service.
- Various [control plane models](#control-plane-models): Support different levels of availability.
- Team or project isolation: Each team runs its own set
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/deployment/deployment-models/index.md
of clusters.

{{< image width="75%" link="multi-cluster.svg" alt="A service mesh with multiple clusters" title="Multicluster" caption="A service mesh with multiple clusters" >}}

Multicluster deployments give you a greater degree of isolation and availability, but increase complexity. If your systems have high availability requirements, you likely need clusters across multiple zones and regions. You can canary configuration changes or new binary releases in a single cluster, where the configuration changes only affect a small amount of user traffic. Additionally, if a cluster has a problem, you can temporarily route traffic to nearby clusters until you address the issue.

You can configure inter-cluster communication based on the [network](#network-models) and the options supported by your cloud provider. For example, if two clusters reside on the same underlying network, you can enable cross-cluster communication by simply configuring firewall rules.

Within a multicluster mesh, all services are shared by default, according to the concept of {{< gloss "namespace sameness" >}}namespace sameness{{< /gloss >}}. [Traffic management rules](/docs/ops/configuration/traffic-management/multicluster) provide fine-grained control over the behavior of multicluster traffic.

### DNS with multiple clusters

When a client application makes a request to some host, it must first perform a DNS lookup for the hostname to obtain an IP address before it can proceed with the request. In Kubernetes, the DNS server residing within the cluster typically handles this DNS lookup, based on the configured `Service` definitions.
Istio uses the virtual IP returned by the DNS lookup to load balance across the list of active endpoints for the requested service, taking into account any Istio configured routing rules. Istio uses either Kubernetes `Service`/`Endpoint` or Istio `ServiceEntry` to configure its internal mapping of hostname to workload IP addresses.

This two-tiered naming system becomes more complicated when you have multiple clusters. Istio is inherently multicluster-aware, but Kubernetes is not (today). Because of this, the client cluster must have a DNS entry for the service in order for the DNS lookup to succeed, and a request to be successfully sent. This is true even if there are no instances of that service's pods running in the client cluster.

To ensure that DNS lookup succeeds, you must deploy a Kubernetes `Service` to each cluster that consumes that service. This ensures that regardless of where the request originates, it will pass DNS lookup and be handed to Istio for proper routing. This can also be achieved with an Istio `ServiceEntry`, rather than a Kubernetes `Service`. However, a `ServiceEntry` does not configure the Kubernetes DNS server. This means that DNS will need to be configured either manually or with automated tooling such as the [Address auto allocation](/docs/ops/configuration/traffic-management/dns-proxy/#address-auto-allocation) feature of [Istio DNS Proxying](/docs/ops/configuration/traffic-management/dns-proxy/).

{{< tip >}}
There are a few efforts in progress that will help simplify the DNS story:

- [Admiral](https://github.com/istio-ecosystem/admiral) is an Istio community project that provides a number of multicluster capabilities. If you need to support multi-network topologies, managing this configuration across multiple clusters at scale is challenging. Admiral takes an opinionated view on this configuration and provides automatic provisioning and synchronization across clusters.
- [Kubernetes Multi-Cluster Services](https://github.com/kubernetes/enhancements/tree/master/keps/sig-multicluster/1645-multi-cluster-services-api) is a Kubernetes Enhancement Proposal (KEP) that defines an API for exporting services to multiple clusters. This effectively pushes the responsibility of service visibility and DNS resolution for the entire `clusterset` onto Kubernetes. There is also work in progress to build layers of `MCS` support into Istio, which would allow
Istio to work with any cloud vendor `MCS` controller or even act as the `MCS` controller for the entire mesh.
{{< /tip >}}

## Network models

Istio uses a simplified definition of {{< gloss >}}network{{< /gloss >}} to refer to {{< gloss >}}workload instance{{< /gloss >}}s that have direct reachability. For example, by default all workload instances in a single cluster are on the same network.

Many production systems require multiple networks or subnets for isolation and high availability. Istio supports spanning a service mesh over a variety of network topologies. This approach allows you to select the network model that fits your existing network topology.

### Single network

In the simplest case, a service mesh operates over a single fully connected network. In a single network model, all {{< gloss "workload instance" >}}workload instances{{< /gloss >}} can reach each other directly without an Istio gateway.

A single network allows Istio to configure service consumers in a uniform way across the mesh, with the ability to directly address workload instances. Note, however, that for a single network across multiple clusters, services and endpoints cannot have overlapping IP addresses.

{{< image width="50%" link="single-net.svg" alt="A service mesh with a single network" title="Single network" caption="A service mesh with a single network" >}}

### Multiple networks

You can span a single service mesh across multiple networks; such a configuration is known as **multi-network**.
Multiple networks afford the following capabilities beyond that of single networks:

- Overlapping IP or VIP ranges for **service endpoints**
- Crossing of administrative boundaries
- Fault tolerance
- Scaling of network addresses
- Compliance with standards that require network segmentation

In this model, the workload instances in different networks can only reach each other through one or more [Istio gateways](/docs/concepts/traffic-management/#gateways). Istio uses **partitioned service discovery** to provide consumers a different view of {{< gloss >}}service endpoint{{< /gloss >}}s. The view depends on the network of the consumers.

{{< image width="50%" link="multi-net.svg" alt="A service mesh with multiple networks" title="Multi-network deployment" caption="A service mesh with multiple networks" >}}

This solution requires exposing all services (or a subset) through the gateway. Cloud vendors may provide options that do not require exposing services on the public internet. Such an option, if it exists and meets your requirements, will likely be the best choice.

{{< tip >}}
To ensure secure communications in a multi-network scenario, Istio only supports cross-network communication to workloads with an Istio proxy. This is because Istio exposes services at the Ingress Gateway with TLS pass-through, which enables mTLS directly to the workload. A workload without an Istio proxy, however, will likely not be able to participate in mutual authentication with other workloads. For this reason, Istio filters out-of-network endpoints for proxyless services.
{{< /tip >}}

## Control plane models

An Istio mesh uses the {{< gloss >}}control plane{{< /gloss >}} to configure all communication between workload instances within the mesh. Workload instances connect to a control plane instance to get their configuration.

In the simplest case, you can run your mesh with a control plane on a single cluster.
{{< image width="50%" link="single-cluster.svg" alt="A single cluster with a control plane" title="Single control plane" caption="A single cluster with a control plane" >}}

A cluster like this one, with its own local control plane, is referred to
as a {{< gloss >}}primary cluster{{< /gloss >}}.

Multicluster deployments can also share control plane instances. In this case, the control plane instances can reside in one or more primary clusters. Clusters without their own control plane are referred to as {{< gloss "remote cluster" >}}remote clusters{{< /gloss >}}.

{{< image width="75%" link="shared-control.svg" alt="A service mesh with a primary and a remote cluster" title="Shared control plane" caption="A service mesh with a primary and a remote cluster" >}}

To support remote clusters in a multicluster mesh, the control plane in a primary cluster must be accessible via a stable IP (e.g., a cluster IP). For clusters spanning networks, this can be achieved by exposing the control plane through an Istio gateway. Cloud vendors may provide options, such as internal load balancers, for providing this capability without exposing the control plane on the public internet. Such an option, if it exists and meets your requirements, will likely be the best choice.

In multicluster deployments with more than one primary cluster, each primary cluster receives its configuration (i.e., `Service`, `ServiceEntry`, `DestinationRule`, etc.) from the Kubernetes API server residing in the same cluster. Each primary cluster, therefore, has an independent source of configuration. This duplication of configuration across primary clusters requires additional steps when rolling out changes. Large production systems may automate this process with tooling, such as CI/CD systems, to manage configuration rollout.
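Rolling out the same configuration to every primary cluster can be scripted; a minimal sketch, assuming kubeconfig contexts named `cluster1` and `cluster2` and a hypothetical `routing.yaml` file:

{{< text syntax=bash >}}
$ for ctx in cluster1 cluster2; do kubectl --context="$ctx" apply -f routing.yaml; done
{{< /text >}}

A CI/CD system typically replaces a loop like this, but the underlying step is the same: apply identical configuration against each primary cluster's API server.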
Instead of running control planes in primary clusters inside the mesh, a service mesh composed entirely of remote clusters can be controlled by an {{< gloss >}}external control plane{{< /gloss >}}. This provides isolated management and complete separation of the control plane deployment from the data plane services that comprise the mesh.

{{< image width="100%" link="single-cluster-external-control-plane.svg" alt="A single cluster with an external control plane" title="External control plane" caption="A single cluster with an external control plane" >}}

A cloud vendor's {{< gloss >}}managed control plane{{< /gloss >}} is a typical example of an external control plane.

For high availability, you should deploy multiple control planes across clusters, zones, or regions.

{{< image width="75%" link="multi-control.svg" alt="A service mesh with control plane instances for each region" title="Multiple control planes" caption="A service mesh with control plane instances for each region" >}}

This model affords the following benefits:

- Improved availability: If a control plane becomes unavailable, the scope of the outage is limited to only workloads in clusters managed by that control plane.
- Configuration isolation: You can make configuration changes in one cluster, zone, or region without impacting others.
- Controlled rollout: You have more fine-grained control over configuration rollout (e.g., one cluster at a time). You can also canary configuration changes in a sub-section of the mesh controlled by a given primary cluster.
- Selective service visibility: You can restrict service visibility to part of the mesh, helping to establish service-level isolation. For example, an administrator may choose to deploy the `HelloWorld` service to Cluster A, but not Cluster B. Any attempt to call `HelloWorld` from Cluster B will fail the DNS lookup.
The following list ranks control plane deployment examples by availability:

- One cluster per region (**lowest availability**)
- Multiple clusters per region
- One cluster per zone
- Multiple clusters per zone
- Each cluster (**highest availability**)

### Endpoint discovery with multiple control planes

An
Istio control plane manages traffic within the mesh by providing each proxy with the list of service endpoints. To make this work in a multicluster scenario, each control plane must observe endpoints from the API server in every cluster.

To enable endpoint discovery for a cluster, an administrator generates a `remote secret` and deploys it to each primary cluster in the mesh. The `remote secret` contains credentials granting access to the API server in the cluster. The control planes will then connect and discover the service endpoints for the cluster, enabling cross-cluster load balancing for these services.

{{< image width="75%" link="endpoint-discovery.svg" caption="Primary clusters with endpoint discovery" >}}

By default, Istio will load balance requests evenly between endpoints in each cluster. In large systems that span geographic regions, it may be desirable to use [locality load balancing](/docs/tasks/traffic-management/locality-load-balancing) to prefer that traffic stay in the same zone or region.

In some advanced scenarios, load balancing across clusters may not be desired. For example, in a blue/green deployment, you may deploy different versions of the system to different clusters. In this case, each cluster is effectively operating as an independent mesh. This behavior can be achieved in a couple of ways:

- Do not exchange remote secrets between the clusters. This offers the strongest isolation between the clusters.
- Use `VirtualService` and `DestinationRule` to disallow routing between the two versions of the services.

In either case, cross-cluster load balancing is prevented.
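The `VirtualService` and `DestinationRule` approach above can be sketched as follows; this is a hedged example, not a complete blue/green setup, and the host, subset, and label names are illustrative. Applied in the cluster running `v1`, it pins all traffic for the service to the `v1` subset:

{{< text syntax=yaml >}}
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: mysvc-versions
spec:
  host: mysvc.foo.svc.cluster.local  # illustrative host
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: mysvc-pin-v1
spec:
  hosts:
  - mysvc.foo.svc.cluster.local
  http:
  - route:
    - destination:
        host: mysvc.foo.svc.cluster.local
        subset: v1   # keep traffic on the local version
{{< /text >}}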
External traffic can be routed to one cluster or the other using an external load balancer.

## Identity and trust models

When a workload instance is created within a service mesh, Istio assigns the workload an {{< gloss >}}identity{{< /gloss >}}.

The Certificate Authority (CA) creates and signs the certificates used to verify the identities used within the mesh. You can verify the identity of the message sender with the public key of the CA that created and signed the certificate for that identity. A **trust bundle** is the set of all CA public keys used by an Istio mesh. With a mesh's trust bundle, anyone can verify the sender of any message coming from that mesh.

### Trust within a mesh

Within a single Istio mesh, Istio ensures each workload instance has an appropriate certificate representing its own identity, and the trust bundle necessary to recognize all identities within the mesh and any federated meshes. The CA creates and signs the certificates for those identities. This model allows workload instances in the mesh to authenticate each other when communicating.

{{< image width="50%" link="single-trust.svg" alt="A service mesh with a common certificate authority" title="Trust within a mesh" caption="A service mesh with a common certificate authority" >}}

### Trust between meshes

To enable communication between two meshes with different CAs, you must exchange the trust bundles of the meshes. Istio does not provide any tooling to exchange trust bundles across meshes. You can exchange the trust bundles either manually or automatically using a protocol such as [SPIFFE Trust Domain Federation](https://github.com/spiffe/spiffe/blob/main/standards/SPIFFE_Federation.md). Once you import a trust bundle to a mesh, you can configure local policies for those identities.
{{< image width="50%" link="multi-trust.svg" alt="Multiple service meshes with different certificate authorities" title="Trust between meshes" caption="Multiple service meshes with different certificate authorities" >}}

## Mesh models

Istio supports having all of your services in a {{< gloss
"service mesh" >}}mesh{{< /gloss >}}, or federating multiple meshes together, which is also known as {{< gloss >}}multi-mesh{{< /gloss >}}.

### Single mesh

The simplest Istio deployment is a single mesh. Within a mesh, service names are unique. For example, only one service can have the name `mysvc` in the `foo` namespace. Additionally, workload instances share a common identity, since service account names are unique within a namespace, just like service names.

A single mesh can span [one or more clusters](#cluster-models) and [one or more networks](#network-models). Within a mesh, [namespaces](#namespace-tenancy) are used for [tenancy](#tenancy-models).

### Multiple meshes

Multiple mesh deployments result from {{< gloss >}}mesh federation{{< /gloss >}}.

Multiple meshes afford the following capabilities beyond that of a single mesh:

- Organizational boundaries: lines of business
- Service name or namespace reuse: multiple distinct uses of the `default` namespace
- Stronger isolation: isolating test workloads from production workloads

You can enable inter-mesh communication with {{< gloss >}}mesh federation{{< /gloss >}}. When federating, each mesh can expose a set of services and identities, which all participating meshes can recognize.

{{< image width="50%" link="multi-mesh.svg" alt="Multiple service meshes" title="Multi-mesh" caption="Multiple service meshes" >}}

To avoid service naming collisions, you can give each mesh a globally unique **mesh ID**, to ensure that the fully qualified domain name (FQDN) for each service is distinct.
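As a hedged sketch, a mesh ID can be set at install time through an `IstioOperator` resource; the `mesh1` values below are illustrative, not prescriptive:

{{< text syntax=yaml >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1            # illustrative, globally unique mesh ID
  meshConfig:
    trustDomain: mesh1.example.com  # illustrative trust domain for this mesh
{{< /text >}}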
When federating two meshes that do not share the same {{< gloss >}}trust domain{{< /gloss >}}, you must federate {{< gloss >}}identity{{< /gloss >}} and **trust bundles** between them. See the section on [Trust between meshes](#trust-between-meshes) for more details.

## Tenancy models

In Istio, a **tenant** is a group of users that share common access and privileges for a set of deployed workloads. Tenants can be used to provide a level of isolation between different teams.

You can configure tenancy models to satisfy the following organizational requirements for isolation:

- Security
- Policy
- Capacity
- Cost
- Performance

Istio supports three types of tenancy models:

- [Namespace tenancy](#namespace-tenancy)
- [Cluster tenancy](#cluster-tenancy)
- [Mesh tenancy](#mesh-tenancy)

### Namespace tenancy

A cluster can be shared across multiple teams, each using a different namespace. You can grant a team permission to deploy its workloads only to a given namespace or set of namespaces.

By default, services from multiple namespaces can communicate with each other, but you can increase isolation by selectively choosing which services to expose to other namespaces. You can configure authorization policies for exposed services to restrict access to only the appropriate callers.

{{< image width="50%" link="exp-ns.svg" alt="A service mesh with two namespaces and an exposed service" title="Namespaces with an exposed service" caption="A service mesh with two namespaces and an exposed service" >}}

Namespace tenancy can extend beyond a single cluster. When using [multiple clusters](#multiple-clusters), the namespaces in each cluster sharing the same name are considered the same namespace by default. For example, `Service B` in the `Team-1` namespace of cluster `West` and `Service B` in the `Team-1` namespace of cluster `East` refer to the same service, and Istio merges their endpoints for service discovery and load balancing.
{{< image width="50%" link="cluster-ns.svg" alt="A service mesh with two clusters with the same namespace" title="Multicluster namespaces" caption="A service mesh with clusters with the same namespace" >}}

### Cluster tenancy

Istio supports using clusters as a unit of tenancy. In this case, you can give
each team a dedicated cluster or set of clusters to deploy their workloads. Permissions for a cluster are usually limited to the members of the team that owns it. You can set various roles for finer-grained control, for example:

- Cluster administrator
- Developer

To use cluster tenancy with Istio, you configure each team's cluster with its own {{< gloss >}}control plane{{< /gloss >}}, allowing each team to manage its own configuration. Alternatively, you can use Istio to implement a group of clusters as a single tenant using {{< gloss "remote cluster" >}}remote clusters{{< /gloss >}} or multiple synchronized {{< gloss "primary cluster" >}}primary clusters{{< /gloss >}}. Refer to [control plane models](#control-plane-models) for details.

### Mesh tenancy

In a multi-mesh deployment with {{< gloss >}}mesh federation{{< /gloss >}}, each mesh can be used as the unit of isolation.

{{< image width="50%" link="cluster-iso.svg" alt="Two isolated service meshes with two clusters and two namespaces" title="Cluster isolation" caption="Two isolated service meshes with two clusters and two namespaces" >}}

Since a different team or organization operates each mesh, service naming is rarely distinct. For example, `Service C` in the `foo` namespace of cluster `Team-1` and `Service C` in the `foo` namespace of cluster `Team-2` will not refer to the same service. The most common example is the scenario in Kubernetes where many teams deploy their workloads to the `default` namespace.

When each team has its own mesh, cross-mesh communication follows the concepts described in the [multiple meshes](#multiple-meshes) model.
This document aims to describe the security posture of Istio's various components, and how possible attacks can impact the system.

## Components

Istio comes with a variety of optional components that will be covered here. For a high level overview, see [Istio Architecture](/docs/ops/deployment/architecture/). Note that Istio deployments are highly flexible; below, we will primarily assume the worst case scenarios.

### Istiod

Istiod serves as the core control plane component of Istio, often serving the role of the [XDS serving component](/docs/concepts/traffic-management/) as well as the mesh [mTLS Certificate Authority](/docs/concepts/security/).

Istiod is considered a highly privileged component, similar to that of the Kubernetes API server itself:

* It has high Kubernetes RBAC privileges, typically including `Secret` read access and webhook write access.
* When acting as the CA, it can provision arbitrary certificates.
* When acting as the XDS control plane, it can program proxies to perform arbitrary behavior.

As such, the security of the cluster is tightly coupled to the security of Istiod. Following [Kubernetes security best practices](https://kubernetes.io/docs/concepts/security/) around Istiod access is paramount.

### Istio CNI plugin

Istio can optionally be deployed with the [Istio CNI Plugin `DaemonSet`](/docs/setup/additional-setup/cni/). This `DaemonSet` is responsible for setting up networking rules in Istio to ensure traffic is transparently redirected as needed. This is an alternative to the `istio-init` container discussed [below](#sidecar-proxies).

Because the CNI `DaemonSet` modifies networking rules on the node, it requires an elevated `securityContext`. However, unlike [Istiod](#istiod), this is a **node-local** privilege. The implications of this are discussed [below](#node-compromise).
Because this consolidates the elevated privileges required to set up networking into a single pod, rather than *every* pod, this option is generally recommended.

### Sidecar Proxies

Istio may [optionally](/docs/overview/dataplane-modes/) deploy a sidecar proxy next to an application. The sidecar proxy needs the network to be programmed to direct all traffic through the proxy. This can be done with the [Istio CNI plugin](#istio-cni-plugin) or by deploying an `initContainer` (`istio-init`) on the pod (this is done automatically if the CNI plugin is not deployed).

The `istio-init` container requires the `NET_ADMIN` and `NET_RAW` capabilities. However, these capabilities are only present during initialization - the primary sidecar container is completely unprivileged. Additionally, the sidecar proxy does not require any associated Kubernetes RBAC privileges at all.

Each sidecar proxy is authorized to request a certificate for the associated pod's service account.

### Gateways and Waypoints

{{< gloss "gateway" >}}Gateways{{< /gloss >}} and {{< gloss "waypoint">}}Waypoints{{< /gloss >}} act as standalone proxy deployments. Unlike [sidecars](#sidecar-proxies), they do not require any networking modifications, and thus don't require any privilege. These components run with their own service accounts, distinct from application identities.

### Ztunnel

{{< gloss "ztunnel" >}}Ztunnel{{< /gloss >}} acts as a node-level proxy. This role requires the `NET_ADMIN`, `SYS_ADMIN`, and `NET_RAW` capabilities. Like the [Istio CNI plugin](#istio-cni-plugin), these are **node-local** privileges only. Ztunnel does not have any associated Kubernetes RBAC privileges.

Ztunnel is authorized to request certificates for any service accounts of pods running on the same node. Similar to the [kubelet](https://kubernetes.io/docs/reference/access-authn-authz/node/), this explicitly does not allow requesting arbitrary certificates.
This, again, ensures these privileges are **node-local** only.

## Traffic Capture Properties

When a pod is enrolled in the mesh, all incoming TCP traffic will be redirected to the proxy. This includes both mTLS/{{< gloss >}}HBONE{{< /gloss >}} traffic and plaintext traffic. Any applicable [policies](/docs/tasks/security/authorization/) for the workload will be enforced before forwarding the traffic to the workload.

However, Istio does not currently guarantee that _outgoing_ traffic is redirected to the proxy. See [traffic capture limitations](/docs/ops/best-practices/security/#understand-traffic-capture-limitations). As such, care must be taken to follow the [securing egress traffic](/docs/ops/best-practices/security/#securing-egress-traffic) steps if outbound policies are required.

## Mutual TLS Properties

[Mutual TLS](/docs/concepts/security/#mutual-tls-authentication)
provides the basis for much of Istio's security posture. Below explains the various properties mutual TLS provides for the security posture of Istio.

### Certificate Authority

Istio comes out of the box with its own Certificate Authority. By default, the CA allows authenticating clients based on any of the options below:

* A Kubernetes JWT token, with an audience of `istio-ca`, verified with a Kubernetes `TokenReview`. This is the default method in Kubernetes Pods.
* An existing mutual TLS certificate.
* Custom JWT tokens, verified using OIDC (requires configuration).

The CA will only issue certificates that are requested for identities that a client is authenticated for.

Istio can also integrate with a variety of third party CAs; please refer to their security documentation for more information on how they behave.

### Client mTLS

{{< tabset category-name="dataplane" >}}

{{< tab name="Sidecar mode" category-value="sidecar" >}}

In sidecar mode, the client sidecar will [automatically use TLS](/docs/ops/configuration/traffic-management/tls-configuration/#auto-mtls) when connecting to a service that is detected to support mTLS. This can also be [explicitly configured](/docs/ops/configuration/traffic-management/tls-configuration/#sidecars). Note that this automatic detection relies on Istio associating the traffic to a Service.
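For example, a hedged sketch of explicitly requiring Istio mutual TLS for a destination (the host name is illustrative):

{{< text syntax=yaml >}}
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: mysvc-mtls
spec:
  host: mysvc.foo.svc.cluster.local  # illustrative host
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL  # use Istio-provisioned client certificates
{{< /text >}}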
[Unsupported traffic types](/docs/ops/configuration/traffic-management/traffic-routing/#unmatched-traffic) or [configuration scoping](/docs/ops/configuration/mesh/configuration-scoping/) can prevent this.

When [connecting to a backend](/docs/concepts/security/#secure-naming), the set of allowed identities is computed, at the Service level, based on the union of all backends' identities.

{{< /tab >}}

{{< tab name="Ambient mode" category-value="ambient" >}}

In ambient mode, Istio will automatically use mTLS when connecting to any backend that supports mTLS, and verify that the identity of the destination matches the identity the workload is expected to be running as.

These properties differ from sidecar mode in that they are properties of individual workloads, rather than of the service. This enables more fine-grained authentication checks, as well as supporting a wider variety of workloads.

{{< /tab >}}

{{< /tabset >}}

### Server mTLS

By default, Istio will accept both mTLS and non-mTLS traffic (often called "permissive mode"). Users can opt in to strict enforcement by writing `PeerAuthentication` or `AuthorizationPolicy` rules requiring mTLS.

When mTLS connections are established, the peer certificate is verified. Additionally, the peer identity is verified to be within the same trust domain. To verify that only specific identities are allowed, an `AuthorizationPolicy` can be used.

## Compromise types explored

Based on the above overview, we will consider the impact on the cluster if various parts of the system are compromised. In the real world, there are a variety of different variables around any security attack:

* How easy it is to execute
* What prior privileges are required
* How often it can be exploited
* What the impact is (total remote execution, denial of service, etc.)

In this document, we will primarily consider the worst case scenario: a compromised component means an attacker has complete remote code execution capabilities.
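The `PeerAuthentication`-based opt-in to strict mTLS described above might look like the following sketch (the namespace `foo` is illustrative):

{{< text syntax=yaml >}}
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: strict-mtls
  namespace: foo   # illustrative namespace scope
spec:
  mtls:
    mode: STRICT   # reject non-mTLS traffic to workloads in this namespace
{{< /text >}}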
### Workload compromise

In this scenario, an application workload (pod) is compromised. A pod [*may* have access](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#opt-out-of-api-credential-automounting) to its service account token. If so, a workload compromise can move laterally from a single pod to compromising the entire service account.

{{< tabset category-name="dataplane" >}}

{{< tab name="Sidecar mode" category-value="sidecar" >}}

In the sidecar model, the proxy is co-located with the pod, and runs within the same trust boundary. A compromised application can tamper with the proxy through the admin API or other surfaces, including exfiltration of private key material, allowing
another agent to impersonate the workload. It should be assumed that a compromised workload also includes a compromise of the sidecar proxy.

Given this, a compromised workload may:

* Send arbitrary traffic, with or without mutual TLS. These may bypass any proxy configuration, or even the proxy entirely. Note that Istio does not offer egress-based authorization policies, so there is no egress authorization policy bypass occurring.
* Accept traffic that was already destined for the application. It may bypass policies that were configured in the sidecar proxy.

The key takeaway here is that while the compromised workload may behave maliciously, this does not give it the ability to bypass policies in _other_ workloads.

{{< /tab >}}

{{< tab name="Ambient mode" category-value="ambient" >}}

In ambient mode, the node proxy is not co-located with the pod, and runs in another trust boundary as part of an independent pod.

A compromised application may send arbitrary traffic. However, it does not have control over the node proxy, which will choose how to handle incoming and outbound traffic.

Additionally, as the pod itself doesn't have access to a service account token to request a mutual TLS certificate, lateral movement possibilities are reduced.

{{< /tab >}}

{{< /tabset >}}

Istio offers a variety of features that can limit the impact of such a compromise:

* [Observability](/docs/tasks/observability/) features can be used to identify the attack.
* [Policies](/docs/tasks/security/authorization/) can be used to restrict what type of traffic a workload can send or receive.
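As a hedged sketch of such a restriction (the workload, namespace, and service account names are illustrative), an `AuthorizationPolicy` can limit which callers may reach a workload:

{{< text syntax=yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-only
  namespace: foo
spec:
  selector:
    matchLabels:
      app: mysvc            # illustrative workload selector
  action: ALLOW
  rules:
  - from:
    - source:
        # only the illustrative "frontend" service account may call this workload
        principals: ["cluster.local/ns/foo/sa/frontend"]
{{< /text >}}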
### Proxy compromise - Sidecars

In this scenario, a sidecar proxy is compromised. Because the sidecar and application reside in the same trust domain, this is functionally equivalent to the [Workload compromise](#workload-compromise).

### Proxy compromise - Waypoint

In this scenario, a [waypoint proxy](#gateways-and-waypoints) is compromised. While waypoints do not have any privileges for a hacker to exploit, they do serve (potentially) many different services and workloads. A compromised waypoint will receive all traffic for these, which it can view, modify, or drop.

Istio offers the flexibility of [configuring the granularity of a waypoint deployment](/docs/ambient/usage/waypoint/#useawaypoint). Users may consider deploying more isolated waypoints if they require stronger isolation. Because waypoints run with a distinct identity from the applications they serve, a compromised waypoint does not imply that the user's applications can be impersonated.

### Proxy compromise - Ztunnel

In this scenario, a [ztunnel](#ztunnel) proxy is compromised. A compromised ztunnel gives the attacker control of the networking of the node. Ztunnel has access to private key material for each application running on its node; a compromised ztunnel could exfiltrate these keys for use elsewhere. However, lateral movement to identities beyond co-located workloads is not possible; each ztunnel is only authorized to access certificates for workloads running on its node, scoping the blast radius of a compromised ztunnel.

### Node compromise

In this scenario, the Kubernetes node is compromised. Both [Kubernetes](https://kubernetes.io/docs/reference/access-authn-authz/node/) and Istio are designed to limit the blast radius of a single node compromise, such that the compromise of a single node does not lead to a [cluster-wide compromise](#cluster-api-server-compromise). However, the attacker does have complete control over any workloads running on that node.
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/deployment/security-model/index.md
master
istio
For instance, it can compromise any co-located [waypoints](#proxy-compromise---waypoint), the local [ztunnel](#proxy-compromise---ztunnel), any [sidecars](#proxy-compromise---sidecars), any co-located [Istiod instances](#istiod-compromise), etc.

### Cluster (API Server) compromise

A compromise of the Kubernetes API Server effectively means the entire cluster and mesh are compromised. Unlike most other attack vectors, there isn't much Istio can do to control the blast radius of such an attack. A compromised API Server gives a hacker complete control over the cluster, including actions such as running `kubectl exec` on arbitrary pods, removing any Istio `AuthorizationPolicies`, or even uninstalling Istio entirely.

### Istiod compromise

A compromise of Istiod generally leads to the same result as an [API Server compromise](#cluster-api-server-compromise). Istiod is a highly privileged component that should be strongly protected. Following the [security best practices](/docs/ops/best-practices/security) is crucial to maintaining a secure cluster.
## Requests are rejected by Envoy

Requests may be rejected for various reasons. The best way to understand why requests are being rejected is to inspect Envoy's access logs. By default, access logs are output to the standard output of the container. Run the following command to see the log:

{{< text bash >}}
$ kubectl logs PODNAME -c istio-proxy -n NAMESPACE
{{< /text >}}

In the default access log format, Envoy response flags are located after the response code. If you are using a custom log format, make sure to include `%RESPONSE_FLAGS%`. Refer to the [Envoy response flags](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#config-access-log-format-response-flags) for details of response flags.

Common response flags are:

- `NR`: No route configured; check your `DestinationRule` or `VirtualService`.
- `UO`: Upstream overflow with circuit breaking; check your circuit breaker configuration in `DestinationRule`.
- `UF`: Failed to connect to upstream; if you're using Istio authentication, check for a [mutual TLS configuration conflict](#503-errors-after-setting-destination-rule).

## Route rules don't seem to affect traffic flow

With the current Envoy sidecar implementation, up to 100 requests may be required for weighted version distribution to be observed.

If route rules are working perfectly for the [Bookinfo](/docs/examples/bookinfo/) sample, but similar version routing rules have no effect on your own application, it may be that your Kubernetes services need to be changed slightly. Kubernetes services must adhere to certain restrictions in order to take advantage of Istio's L7 routing features. Refer to the [Requirements for Pods and Services](/docs/ops/deployment/application-requirements/) for details.

Another potential issue is that the route rules may simply be slow to take effect.
The Istio implementation on Kubernetes utilizes an eventually consistent algorithm to ensure all Envoy sidecars have the correct configuration, including all route rules. A configuration change will take some time to propagate to all the sidecars. With large deployments the propagation will take longer and there may be a lag time on the order of seconds.

## 503 errors after setting destination rule

{{< tip >}}
You should only see this error if you disabled [automatic mutual TLS](/docs/tasks/security/authentication/authn-policy/#auto-mutual-tls) during install.
{{< /tip >}}

If requests to a service immediately start generating HTTP 503 errors after you applied a `DestinationRule`, and the errors continue until you remove or revert the `DestinationRule`, then the `DestinationRule` is probably causing a TLS conflict for the service.

For example, if you configure mutual TLS in the cluster globally, the `DestinationRule` must include the following `trafficPolicy`:

{{< text yaml >}}
trafficPolicy:
  tls:
    mode: ISTIO_MUTUAL
{{< /text >}}

Otherwise, the mode defaults to `DISABLE`, causing client proxy sidecars to make plain HTTP requests instead of TLS encrypted requests. Thus, the requests conflict with the server proxy because the server proxy expects encrypted requests.

Whenever you apply a `DestinationRule`, ensure the `trafficPolicy` TLS mode matches the global server configuration.

## Route rules have no effect on ingress gateway requests

Let's assume you are using an ingress `Gateway` and corresponding `VirtualService` to access an internal service.
For example, your `VirtualService` looks something like this:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "myapp.com" # or maybe "*" if you are testing without DNS using the ingress-gateway IP (e.g., http://1.2.3.4/hello)
  gateways:
  - myapp-gateway
  http:
  - match:
    - uri:
        prefix: /hello
    route:
    - destination:
        host: helloworld.default.svc.cluster.local
  - match:
    ...
{{< /text >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/common-problems/network-issues/index.md
master
istio
You also have a `VirtualService` which routes traffic for the helloworld service to a particular subset:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: helloworld.default.svc.cluster.local
        subset: v1
{{< /text >}}

In this situation you will notice that requests to the helloworld service via the ingress gateway will not be directed to subset v1 but instead will continue to use default round-robin routing. The ingress requests are using the gateway host (e.g., `myapp.com`), which will activate the rules in the myapp `VirtualService` that routes to any endpoint of the helloworld service. Only internal requests with the host `helloworld.default.svc.cluster.local` will use the helloworld `VirtualService`, which directs traffic exclusively to subset v1. To control the traffic from the gateway, you need to also include the subset rule in the myapp `VirtualService`:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "myapp.com" # or maybe "*" if you are testing without DNS using the ingress-gateway IP (e.g., http://1.2.3.4/hello)
  gateways:
  - myapp-gateway
  http:
  - match:
    - uri:
        prefix: /hello
    route:
    - destination:
        host: helloworld.default.svc.cluster.local
        subset: v1
  - match:
    ...
{{< /text >}}

Alternatively, you can combine both `VirtualServices` into one unit if possible:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - myapp.com # cannot use "*" here since this is being combined with the mesh services
  - helloworld.default.svc.cluster.local
  gateways:
  - mesh # applies internally as well as externally
  - myapp-gateway
  http:
  - match:
    - uri:
        prefix: /hello
      gateways:
      - myapp-gateway # restricts this rule to apply only to ingress gateway
    route:
    - destination:
        host: helloworld.default.svc.cluster.local
        subset: v1
  - match:
    - gateways:
      - mesh # applies to all services inside the mesh
    route:
    - destination:
        host: helloworld.default.svc.cluster.local
        subset: v1
{{< /text >}}

## Envoy is crashing under load

Check your `ulimit -a`. Many systems have a 1024 open file descriptor limit by default, which will cause Envoy to assert and crash with:

{{< text plain >}}
[2017-05-17 03:00:52.735][14236][critical][assert] assert failure: fd_ != -1: external/envoy/source/common/network/connection_impl.cc:58
{{< /text >}}

Make sure to raise your ulimit. Example: `ulimit -n 16384`

## Envoy won't connect to my HTTP/1.0 service

Envoy requires `HTTP/1.1` or `HTTP/2` traffic for upstream services. For example, when using [NGINX](https://www.nginx.com/) for serving traffic behind Envoy, you will need to set the [proxy_http_version](https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version) directive in your NGINX configuration to "1.1", since the NGINX default is 1.0.

Example configuration:

{{< text plain >}}
upstream http_backend {
    server 127.0.0.1:8080;
    keepalive 16;
}

server {
    ...
    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
{{< /text >}}

## 503 error while accessing headless services

Assume Istio is installed with the following configuration:

- `mTLS mode` set to `STRICT` within the mesh
- `meshConfig.outboundTrafficPolicy.mode` set to `ALLOW_ANY`

Consider `nginx` is deployed as a `StatefulSet` in the default namespace and a corresponding `Headless Service` is defined as shown below:

{{< text yaml >}}
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: http-web # Explicitly defining an http port
  clusterIP: None # Creates a Headless Service
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.k8s.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
{{< /text >}}
The port name `http-web` in the Service definition explicitly specifies the http protocol for that port. Let us assume we have a [curl]({{< github_tree >}}/samples/curl) pod `Deployment` as well in the default namespace. When `nginx` is accessed from this `curl` pod using its Pod IP (this is one of the common ways to access a headless service), the request goes via the `PassthroughCluster` to the server side, but the sidecar proxy on the server side fails to find the route entry to `nginx` and fails with `HTTP 503 UC`.

{{< text bash >}}
$ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')
$ kubectl exec -it $SOURCE_POD -c curl -- curl 10.1.1.171 -s -o /dev/null -w "%{http_code}"
503
{{< /text >}}

`10.1.1.171` is the Pod IP of one of the replicas of `nginx` and the service is accessed on `containerPort` 80.

Here are some of the ways to avoid this 503 error:

1. Specify the correct Host header:

    The Host header in the curl request above will be the Pod IP by default. Specifying the Host header as `nginx.default` in our request to `nginx` successfully returns `HTTP 200 OK`.

    {{< text bash >}}
    $ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')
    $ kubectl exec -it $SOURCE_POD -c curl -- curl -H "Host: nginx.default" 10.1.1.171 -s -o /dev/null -w "%{http_code}"
    200
    {{< /text >}}

1. Set port name to `tcp` or `tcp-web` or `tcp-`:

    Here the protocol is explicitly specified as `tcp`. In this case, only the `TCP Proxy` network filter on the sidecar proxy is used, both on the client side and the server side. The HTTP Connection Manager is not used at all and therefore no kind of header is expected in the request. A request to `nginx` with or without explicitly setting the Host header successfully returns `HTTP 200 OK`. This is useful in certain scenarios where a client may not be able to include header information in the request.
    {{< text bash >}}
    $ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')
    $ kubectl exec -it $SOURCE_POD -c curl -- curl 10.1.1.171 -s -o /dev/null -w "%{http_code}"
    200
    {{< /text >}}

    {{< text bash >}}
    $ kubectl exec -it $SOURCE_POD -c curl -- curl -H "Host: nginx.default" 10.1.1.171 -s -o /dev/null -w "%{http_code}"
    200
    {{< /text >}}

1. Use domain name instead of Pod IP:

    A specific instance of a headless service can also be accessed using just the domain name.

    {{< text bash >}}
    $ export SOURCE_POD=$(kubectl get pod -l app=curl -o jsonpath='{.items..metadata.name}')
    $ kubectl exec -it $SOURCE_POD -c curl -- curl web-0.nginx.default -s -o /dev/null -w "%{http_code}"
    200
    {{< /text >}}

    Here `web-0` is the pod name of one of the 3 replicas of `nginx`.

Refer to this [traffic routing](/docs/ops/configuration/traffic-management/traffic-routing/) page for some additional information on headless services and traffic routing behavior for different protocols.

## TLS configuration mistakes

Many traffic management problems are caused by incorrect [TLS configuration](/docs/ops/configuration/traffic-management/tls-configuration/). The following sections describe some of the most common misconfigurations.

### Sending HTTPS to an HTTP port

If your application sends an HTTPS request to a service declared to be HTTP, the Envoy sidecar will attempt to parse the request as HTTP while forwarding it, which will fail because the HTTP data is unexpectedly encrypted.

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 443
    name: http
    protocol: HTTP
  resolution: DNS
{{< /text >}}

Although the above configuration may be correct if you are intentionally sending plaintext on port 443 (e.g., `curl http://httpbin.org:443`), generally port 443 is dedicated for HTTPS traffic.
Sending an HTTPS request like `curl https://httpbin.org`, which defaults to port 443, will result in an error like `curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number`. The access logs may also show an error like `400 DPE`.

To fix this, you should change the port protocol to HTTPS:

{{< text yaml >}}
spec:
  ports:
  - number: 443
    name: https
    protocol: HTTPS
{{< /text >}}

### Gateway to virtual service TLS mismatch {#gateway-mismatch}

There are two common TLS mismatches that can occur when binding a virtual service to a gateway.

1. The gateway terminates TLS while the virtual service configures TLS routing.
1. The gateway does TLS passthrough while the virtual service configures HTTP routing.

#### Gateway with TLS termination

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: SIMPLE
      credentialName: sds-credential
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - "*.example.com"
  gateways:
  - istio-system/gateway
  tls:
  - match:
    - sniHosts:
      - "*.example.com"
    route:
    - destination:
        host: httpbin.org
{{< /text >}}

In this example, the gateway is terminating TLS (the `tls.mode` configuration of the gateway is `SIMPLE`, not `PASSTHROUGH`) while the virtual service is using TLS-based routing. Routing rules are evaluated after the gateway terminates TLS, so the TLS rule will have no effect because the request is then HTTP rather than HTTPS.
With this misconfiguration, you will end up getting 404 responses because the requests will be sent to HTTP routing but there are no HTTP routes configured. You can confirm this using the `istioctl proxy-config routes` command.

To fix this problem, you should switch the virtual service to specify `http` routing, instead of `tls`:

{{< text yaml >}}
spec:
  ...
  http:
  - match:
    - headers:
        ":authority":
          regex: "*.example.com"
{{< /text >}}

#### Gateway with TLS passthrough

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - "*"
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: virtual-service
spec:
  gateways:
  - gateway
  hosts:
  - httpbin.example.com
  http:
  - route:
    - destination:
        host: httpbin.org
{{< /text >}}

In this configuration, the virtual service is attempting to match HTTP traffic against TLS traffic passed through the gateway. This will result in the virtual service configuration having no effect. You can observe that the HTTP route is not applied using the `istioctl proxy-config listener` and `istioctl proxy-config route` commands.

To fix this, you should switch the virtual service to configure `tls` routing:

{{< text yaml >}}
spec:
  tls:
  - match:
    - sniHosts: ["httpbin.example.com"]
    route:
    - destination:
        host: httpbin.org
{{< /text >}}

Alternatively, you could terminate TLS, rather than passing it through, by switching the `tls` configuration in the gateway:

{{< text yaml >}}
spec:
  ...
  tls:
    credentialName: sds-credential
    mode: SIMPLE
{{< /text >}}

### Double TLS (TLS origination for a TLS request) {#double-tls}

When configuring Istio to perform {{< gloss >}}TLS origination{{< /gloss >}}, you need to make sure that the application sends plaintext requests to the sidecar, which will then originate the TLS.
The following `DestinationRule` originates TLS for requests to the `httpbin.org` service, but the corresponding `ServiceEntry` defines the protocol as HTTPS on port 443.

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
---
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: originate-tls
spec:
  host: httpbin.org
  trafficPolicy:
    tls:
      mode: SIMPLE
{{< /text >}}

With this configuration, the sidecar expects the application to send TLS traffic on port 443 (e.g., `curl https://httpbin.org`), but it will also perform TLS origination before forwarding requests. This will cause the requests to be double encrypted. For example, sending a request like `curl https://httpbin.org` will result in an error: `(35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number`.

You can fix this example by changing the port protocol in the `ServiceEntry` to HTTP:

{{< text yaml >}}
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 443
    name: http
    protocol: HTTP
{{< /text >}}

Note that with this configuration your application will need to send plaintext requests to port 443, like `curl http://httpbin.org:443`, because TLS origination does not change the port.
However, starting in Istio 1.8, you can expose HTTP port 80 to the application (e.g., `curl http://httpbin.org`) and then redirect requests to `targetPort` 443 for the TLS origination:

{{< text yaml >}}
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 80
    name: http
    protocol: HTTP
    targetPort: 443
{{< /text >}}

### 404 errors occur when multiple gateways configured with same TLS certificate

Configuring more than one gateway using the same TLS certificate will cause browsers that leverage [HTTP/2 connection reuse](https://httpwg.org/specs/rfc7540.html#reuse) (i.e., most browsers) to produce 404 errors when accessing a second host after a connection to another host has already been established.

For example, let's say you have 2 hosts that share the same TLS certificate like this:

- Wildcard certificate `*.test.com` installed in `istio-ingressgateway`
- `Gateway` configuration `gw1` with host `service1.test.com`, selector `istio: ingressgateway`, and TLS using the gateway's mounted (wildcard) certificate
- `Gateway` configuration `gw2` with host `service2.test.com`, selector `istio: ingressgateway`, and TLS using the gateway's mounted (wildcard) certificate
- `VirtualService` configuration `vs1` with host `service1.test.com` and gateway `gw1`
- `VirtualService` configuration `vs2` with host `service2.test.com` and gateway `gw2`

Since both gateways are served by the same workload (i.e., selector `istio: ingressgateway`), requests to both services (`service1.test.com` and `service2.test.com`) will resolve to the same IP. If `service1.test.com` is accessed first, it will return the wildcard certificate (`*.test.com`), indicating that connections to `service2.test.com` can use the same certificate. Browsers like Chrome and Firefox will consequently reuse the existing connection for requests to `service2.test.com`. Since the gateway (`gw1`) has no route for `service2.test.com`, it will then return a 404 (Not Found) response.
You can avoid this problem by configuring a single wildcard `Gateway`, instead of two (`gw1` and `gw2`), and then binding both `VirtualServices` to it like this:

- `Gateway` configuration `gw` with host `*.test.com`, selector `istio: ingressgateway`, and TLS using the gateway's mounted (wildcard) certificate
- `VirtualService` configuration `vs1` with host `service1.test.com` and gateway `gw`
- `VirtualService` configuration `vs2` with host `service2.test.com` and gateway `gw`

### Configuring SNI routing when not sending SNI

An HTTPS `Gateway` that specifies the `hosts` field will perform an [SNI](https://en.wikipedia.org/wiki/Server_Name_Indication) match on incoming requests. For example, the following configuration would only allow requests that match `*.example.com` in the SNI:

{{< text yaml >}}
servers:
- port:
    number: 443
    name: https
    protocol: HTTPS
  hosts:
  - "*.example.com"
{{< /text >}}

This may cause certain requests to fail.
For example, if you do not have DNS set up and are instead directly setting the host header, such as `curl 1.2.3.4 -H "Host: app.example.com"`, no SNI will be set, causing the request to fail. Instead, you can set up DNS or use the `--resolve` flag of `curl`. See the [Secure Gateways](/docs/tasks/traffic-management/ingress/secure-ingress/) task for more information.

Another common issue is load balancers in front of Istio. Most cloud load balancers will not forward the SNI, so if you are terminating TLS in your cloud load balancer you may need to do one of the following:

- Configure the cloud load balancer to instead pass through the TLS connection
- Disable SNI matching in the `Gateway` by setting the `hosts` field to `*`

A common symptom of this is for the load balancer health checks to succeed while real traffic fails.

## Unchanged Envoy filter configuration suddenly stops working

An `EnvoyFilter` configuration that specifies an insert position relative to another filter can be very fragile because, by default, the order of evaluation is based on the creation time of the filters. Consider a filter with the following specification:

{{< text yaml >}}
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: SIDECAR_OUTBOUND
      listener:
        portNumber: 443
        filterChain:
          filter:
            name: istio.stats
    patch:
      operation: INSERT_BEFORE
      value:
        ...
{{< /text >}}

To work properly, this filter configuration depends on the `istio.stats` filter having an older creation time than it does. Otherwise, the `INSERT_BEFORE` operation will be silently ignored. There will be nothing in the error log to indicate that this filter has not been added to the chain.
This is particularly problematic when matching filters, like `istio.stats`, that are version specific (i.e., that include the `proxyVersion` field in their match criteria). Such filters may be removed or replaced by newer ones when upgrading Istio. As a result, an `EnvoyFilter` like the one above may initially work perfectly, but after upgrading Istio to a newer version it will no longer be included in the network filter chain of the sidecars.

To avoid this issue, you can either change the operation to one that does not depend on the presence of another filter (e.g., `INSERT_FIRST`), or set an explicit priority in the `EnvoyFilter` to override the default creation-time-based ordering. For example, adding `priority: 10` to the above filter will ensure that it is processed after the `istio.stats` filter, which has a default priority of 0.

## Virtual service with fault injection and retry/timeout policies not working as expected

Currently, Istio does not support configuring fault injections and retry or timeout policies on the same `VirtualService`. Consider the following configuration:

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    fault:
      abort:
        httpStatus: 500
        percentage:
          value: 50
    retries:
      attempts: 5
      retryOn: 5xx
    route:
    - destination:
        host: helloworld
        port:
          number: 5000
{{< /text >}}

You would expect that, given the configured five retry attempts, the user would almost never see any errors when calling the `helloworld` service. However, since both fault and retries are configured on the same `VirtualService`, the retry configuration does not take effect, resulting in a 50% failure rate.
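The 50% figure can be sanity-checked with a little arithmetic: because the retry policy is ignored, each request gets exactly one attempt against the 50% abort fault. A minimal sketch (the function name and numbers are illustrative, not part of Istio):

```python
def failure_rate(fault_probability: float, total_attempts: int) -> float:
    """Chance a request ultimately fails when every attempt
    independently hits the injected abort fault."""
    return fault_probability ** total_attempts

# Retries ignored: one attempt per request, so half of them fail.
assert failure_rate(0.5, 1) == 0.5

# If the 5 retries did take effect (6 attempts total), failures
# would drop to 0.5**6, roughly 1.6%.
assert abs(failure_rate(0.5, 6) - 0.015625) < 1e-12
```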
To work around this issue, you may remove the fault config from your `VirtualService` and inject the fault to the upstream Envoy proxy using `EnvoyFilter` instead:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: hello-world-filter
spec:
  workloadSelector:
    labels:
      app: helloworld
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND # will match inbound listeners in all sidecars
      listener:
        filterChain:
          filter:
            name: "envoy.filters.network.http_connection_manager"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.fault
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.http.fault.v3.HTTPFault"
          abort:
            http_status: 500
            percentage:
              numerator: 50
              denominator: HUNDRED
{{< /text >}}

This works because the retry policy is configured for the client proxy while the fault injection is configured for the upstream proxy.
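To convince yourself that splitting the two policies restores the expected behavior, the interaction can be simulated: the client proxy retries on 5xx while the upstream aborts 50% of attempts. A rough sketch with hypothetical numbers (not Istio code):

```python
import random

def request_succeeds(retry_attempts: int, abort_probability: float,
                     rng: random.Random) -> bool:
    # Client-side retry policy: the original try plus `retry_attempts`
    # retries, each independently hitting the server-side abort fault.
    for _ in range(retry_attempts + 1):
        if rng.random() >= abort_probability:
            return True  # upstream returned a non-5xx response
    return False

rng = random.Random(42)
successes = sum(request_succeeds(5, 0.5, rng) for _ in range(10_000))

# Expected failure rate is 0.5**6, roughly 1.6%, so the vast
# majority of requests now succeed despite the injected fault.
assert successes > 9_600
```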
## The result of sidecar injection was not what I expected This includes an injected sidecar when it wasn't expected and a lack of injected sidecar when it was. 1. Ensure your pod is not in the `kube-system` or `kube-public` namespace. Automatic sidecar injection will be ignored for pods in these namespaces. 1. Ensure your pod does not have `hostNetwork: true` in its pod spec. Automatic sidecar injection will be ignored for pods that are on the host network. The sidecar model assumes that the iptables changes required for Envoy to intercept traffic are within the pod. For pods on the host network this assumption is violated, and this can lead to routing failures at the host level. 1. Check the webhook's `namespaceSelector` to determine whether the webhook is scoped to opt-in or opt-out for the target namespace. The `namespaceSelector` for opt-in will look like the following: {{< text bash yaml >}} $ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml | grep "namespaceSelector:" -A5 namespaceSelector: matchLabels: istio-injection: enabled rules: - apiGroups: - "" {{< /text >}} The injection webhook will be invoked for pods created in namespaces with the `istio-injection=enabled` label. {{< text bash >}} $ kubectl get namespace -L istio-injection NAME STATUS AGE ISTIO-INJECTION default Active 18d enabled istio-system Active 3d kube-public Active 18d kube-system Active 18d {{< /text >}} The `namespaceSelector` for opt-out will look like the following: {{< text bash >}} $ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml | grep "namespaceSelector:" -A5 namespaceSelector: matchExpressions: - key: istio-injection operator: NotIn values: - disabled rules: - apiGroups: - "" {{< /text >}} The injection webhook will be invoked for pods created in namespaces without the `istio-injection=disabled` label. 
{{< text bash >}} $ kubectl get namespace -L istio-injection NAME STATUS AGE ISTIO-INJECTION default Active 18d istio-system Active 3d disabled kube-public Active 18d disabled kube-system Active 18d disabled {{< /text >}} Verify the application pod's namespace is labeled properly and relabel accordingly, e.g. {{< text bash >}} $ kubectl label namespace istio-system istio-injection=disabled --overwrite {{< /text >}} (repeat for all namespaces in which the injection webhook should be invoked for new pods) {{< text bash >}} $ kubectl label namespace default istio-injection=enabled --overwrite {{< /text >}} 1. Check default policy Check the default injection policy in the `istio-sidecar-injector` configmap. {{< text bash yaml >}} $ kubectl -n istio-system get configmap istio-sidecar-injector -o jsonpath='{.data.config}' | grep policy: policy: enabled {{< /text >}} Allowed policy values are `disabled` and `enabled`. The default policy only applies if the webhook’s `namespaceSelector` matches the target namespace. Unrecognized policy causes injection to be disabled completely. 1. Check the per-pod override label The default policy can be overridden with the `sidecar.istio.io/inject` label in the \_pod template spec’s metadata\_. The deployment’s metadata is ignored. A label value of `true` forces the sidecar to be injected, while a value of `false` forces the sidecar to \_not\_ be injected. The following label overrides whatever the default `policy` was to force the sidecar to be injected: {{< text bash yaml >}} $ kubectl get deployment curl -o yaml | grep "sidecar.istio.io/inject:" -B4 template: metadata: labels: app: curl sidecar.istio.io/inject: "true" {{< /text >}} ## Pods cannot be created at all Run `kubectl describe -n namespace deployment name` on the failing pod's deployment. Failure to invoke the injection webhook will typically be captured in the event log.
### x509 certificate related errors {{< text plain >}} Warning FailedCreate 3m (x17 over 8m) replicaset-controller Error creating: Internal error occurred: \ failed calling admission webhook "sidecar-injector.istio.io": Post https://istiod.istio-system.svc:443/inject: \ x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying \ to verify candidate
authority certificate "Kubernetes.cluster.local") {{< /text >}} `x509: certificate signed by unknown authority` errors are typically caused by an empty `caBundle` in the webhook configuration. Verify the `caBundle` in the `mutatingwebhookconfiguration` matches the root certificate mounted in the `istiod` pod. {{< text bash >}} $ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | md5sum 4b95d2ba22ce8971c7c92084da31faf0 - $ kubectl -n istio-system get configmap istio-ca-root-cert -o jsonpath='{.data.root-cert\.pem}' | base64 -w 0 | md5sum 4b95d2ba22ce8971c7c92084da31faf0 - {{< /text >}} The CA certificates should match. If they do not, restart the istiod pods. {{< text bash >}} $ kubectl -n istio-system patch deployment istiod \ -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}" deployment.extensions "istiod" patched {{< /text >}} ### Errors in deployment status When automatic sidecar injection is enabled for a pod, and the injection fails for any reason, the pod creation will also fail. In such cases, you can check the deployment status of the pod to identify the error. The errors will also appear in the events of the namespace associated with the deployment. For example, if the `istiod` control plane pod was not running when you tried to deploy your pod, the events would show the following error: {{< text bash >}} $ kubectl get events -n curl ...
23m Normal SuccessfulCreate replicaset/curl-9454cc476 Created pod: curl-9454cc476-khp45 22m Warning FailedCreate replicaset/curl-9454cc476 Error creating: Internal error occurred: failed calling webhook "namespace.sidecar-injector.istio.io": failed to call webhook: Post "https://istiod.istio-system.svc:443/inject?timeout=10s": dial tcp 10.96.44.51:443: connect: connection refused {{< /text >}} {{< text bash >}} $ kubectl -n istio-system get pod -lapp=istiod NAME READY STATUS RESTARTS AGE istiod-7d46d8d9db-jz2mh 1/1 Running 0 2d {{< /text >}} {{< text bash >}} $ kubectl -n istio-system get endpoints istiod NAME ENDPOINTS AGE istiod 10.244.2.8:15012,10.244.2.8:15010,10.244.2.8:15017 + 1 more... 3h18m {{< /text >}} If the istiod pod or endpoints aren't ready, check the pod logs and status for any indication about why the webhook pod is failing to start and serve traffic. {{< text bash >}} $ for pod in $(kubectl -n istio-system get pod -lapp=istiod -o jsonpath='{.items[\*].metadata.name}'); do \ kubectl -n istio-system logs ${pod}; \ done $ for pod in $(kubectl -n istio-system get pod -l app=istiod -o name); do \ kubectl -n istio-system describe ${pod}; \ done $ {{< /text >}} ## Automatic sidecar injection fails if the Kubernetes API server has proxy settings When the Kubernetes API server includes proxy settings such as: {{< text yaml >}} env: - name: http\_proxy value: http://proxy-wsa.esl.foo.com:80 - name: https\_proxy value: http://proxy-wsa.esl.foo.com:80 - name: no\_proxy value: 127.0.0.1,localhost,dockerhub.foo.com,devhub-docker.foo.com,10.84.100.125,10.84.100.126,10.84.100.127 {{< /text >}} With these settings, sidecar injection fails.
The only related failure log can be found in the `kube-apiserver` log: {{< text plain >}} W0227 21:51:03.156818 1 admission.go:257] Failed calling webhook, failing open sidecar-injector.istio.io: failed calling admission webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject: Service Unavailable {{< /text >}} Make sure both pod and service CIDRs are not proxied according to `\*\_proxy` variables. Check the `kube-apiserver` files and logs to verify the configuration and whether any requests are being proxied. One workaround is to remove the proxy settings from the `kube-apiserver` manifest; another is to include `istio-sidecar-injector.istio-system.svc` or `.svc` in the `no\_proxy` value. Make sure that `kube-apiserver` is restarted after each workaround. An [issue](https://github.com/kubernetes/kubeadm/issues/666) was filed with Kubernetes related to this and has since been closed. [https://github.com/kubernetes/kubernetes/pull/58698#discussion\_r163879443](https://github.com/kubernetes/kubernetes/pull/58698#discussion\_r163879443)
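As a sketch, the second workaround would change the `kube-apiserver` environment to something like the following; the addresses are the illustrative values from above, and appending `.svc` alone would instead cover all cluster-local services:

```yaml
env:
- name: no_proxy
  # Append the injection webhook service so API server -> webhook calls bypass the proxy.
  value: 127.0.0.1,localhost,dockerhub.foo.com,devhub-docker.foo.com,10.84.100.125,10.84.100.126,10.84.100.127,istio-sidecar-injector.istio-system.svc
```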
## Limitations for using Tcpdump in pods Tcpdump doesn't work in the sidecar pod - the container doesn't run as root. However, any other container in the same pod will see all the packets, since the network namespace is shared. `iptables` will also see the pod-wide configuration. Communication between Envoy and the app happens on 127.0.0.1, and is not encrypted. ## Cluster is not scaled down automatically Due to the fact that the sidecar container mounts a local storage volume, the node autoscaler is unable to evict nodes with the injected pods. This is a [known issue](https://github.com/kubernetes/autoscaler/issues/3947). The workaround is to add a pod annotation `"cluster-autoscaler.kubernetes.io/safe-to-evict": "true"` to the injected pods. ## Pod or containers start with network issues if istio-proxy is not ready Many applications execute commands or checks during startup, which require network connectivity. This can cause application containers to hang or restart if the `istio-proxy` sidecar container is not ready. To avoid this, set `holdApplicationUntilProxyStarts` to `true`. This causes the sidecar injector to inject the sidecar at the start of the pod’s container list, and configures it to block the start of all other containers until the proxy is ready. This can be added as a global config option: {{< text yaml >}} values.global.proxy.holdApplicationUntilProxyStarts: true {{< /text >}} or as a pod annotation: {{< text yaml >}} proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }' {{< /text >}}
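For the per-pod variant, the annotation belongs in the pod template's metadata, not the `Deployment` metadata. A sketch with a hypothetical `myapp` workload:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp            # hypothetical workload
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        # Block app containers until the istio-proxy sidecar reports ready.
        proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'
    spec:
      containers:
      - name: myapp
        image: myapp:latest   # placeholder image
```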
## No traces appearing in Zipkin when running Istio locally on Mac Istio is installed and everything seems to be working, except there are no traces showing up in Zipkin when there should be. This may be caused by a known [Docker issue](https://github.com/docker/for-mac/issues/1260) where the time inside containers may skew significantly from the time on the host machine. If this is the case, when you select a very long date range in Zipkin you will see the traces appearing as much as several days too early. You can also confirm this problem by comparing the date inside a Docker container to outside: {{< text bash >}} $ docker run --entrypoint date gcr.io/istio-testing/ubuntu-16-04-slave:latest Sun Jun 11 11:44:18 UTC 2017 {{< /text >}} {{< text bash >}} $ date -u Thu Jun 15 02:25:42 UTC 2017 {{< /text >}} To fix the problem, you'll need to shut down and then restart Docker before reinstalling Istio. ## Missing Grafana output If you're unable to get Grafana output when connecting from a local web client to a remotely hosted Istio, you should validate that the client and server date and time match. The time of the web client (e.g. Chrome) affects the output from Grafana. A simple solution to this problem is to verify that a time synchronization service is running correctly within the Kubernetes cluster and that the web client machine is also correctly using a time synchronization service. Some common time synchronization systems are NTP and Chrony. This is especially problematic in engineering labs with firewalls. In these scenarios, NTP may not be configured properly to point at the lab-based NTP services. ## Verify Istio CNI pods are running (if used) The Istio CNI plugin performs the Istio mesh pod traffic redirection in the Kubernetes pod lifecycle’s network setup phase, thereby removing the [requirement for the `NET\_ADMIN` and `NET\_RAW` capabilities](/docs/ops/deployment/application-requirements/) for users deploying pods into the Istio mesh.
The Istio CNI plugin replaces the functionality provided by the `istio-init` container. 1. Verify that the `istio-cni-node` pods are running: {{< text bash >}} $ kubectl -n kube-system get pod -l k8s-app=istio-cni-node {{< /text >}} 1. If `PodSecurityPolicy` is being enforced in your cluster, ensure the `istio-cni` service account can use a `PodSecurityPolicy` which [allows the `NET\_ADMIN` and `NET\_RAW` capabilities](/docs/ops/deployment/application-requirements/).
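Returning to the Zipkin clock-skew symptom above: the skew between a container timestamp and the host timestamp can be computed directly. This sketch uses the sample dates from that section and assumes GNU `date` (for the `-d` flag):

```shell
# Sample dates from the Zipkin troubleshooting section above.
container="Sun Jun 11 11:44:18 UTC 2017"   # date reported inside the container
host="Thu Jun 15 02:25:42 UTC 2017"        # date reported by the host

# Convert both to epoch seconds and subtract to get the skew.
skew=$(( $(date -u -d "$host" +%s) - $(date -u -d "$container" +%s) ))
echo "clock skew: ${skew}s"                # → clock skew: 312084s (about 3.6 days)
```

A skew this large explains traces appearing days early in the Zipkin UI.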
## End-user authentication fails With Istio, you can enable authentication for end users through [request authentication policies](/docs/tasks/security/authentication/authn-policy/#end-user-authentication). Follow these steps to troubleshoot the policy specification. 1. If `jwksUri` isn’t set, make sure the JWT issuer is in URL format and that `url + /.well-known/openid-configuration` can be opened in a browser; for example, if the JWT issuer is `https://accounts.google.com`, make sure `https://accounts.google.com/.well-known/openid-configuration` is a valid URL that can be opened in a browser. {{< text yaml >}} apiVersion: security.istio.io/v1 kind: RequestAuthentication metadata: name: "example-3" spec: selector: matchLabels: app: httpbin jwtRules: - issuer: "testing@secure.istio.io" jwksUri: "{{< github\_file >}}/security/tools/jwt/samples/jwks.json" {{< /text >}} 1. If the JWT token is placed in the Authorization header in http requests, make sure the JWT token is valid (not expired, etc). The fields in a JWT token can be decoded by using online JWT parsing tools, e.g., [jwt.io](https://jwt.io/). 1. Verify the Envoy proxy configuration of the target workload using the `istioctl proxy-config` command. With the example policy above applied, use the following command to check the `listener` configuration on the inbound port `80`. You should see the `envoy.filters.http.jwt\_authn` filter with settings matching the issuer and JWKS as specified in the policy.
{{< text bash >}} $ POD=$(kubectl get pod -l app=httpbin -n foo -o jsonpath={.items..metadata.name}) $ istioctl proxy-config listener ${POD} -n foo --port 80 --type HTTP -o json { "name": "envoy.filters.http.jwt\_authn", "typedConfig": { "@type": "type.googleapis.com/envoy.config.filter.http.jwt\_authn.v2alpha.JwtAuthentication", "providers": { "origins-0": { "issuer": "testing@secure.istio.io", "localJwks": { "inlineString": "\*redacted\*" }, "payloadInMetadata": "testing@secure.istio.io" } }, "rules": [ { "match": { "prefix": "/" }, "requires": { "requiresAny": { "requirements": [ { "providerName": "origins-0" }, { "allowMissing": {} } ] } } } ] } }, {{< /text >}} ## Authorization is too restrictive or permissive ### Make sure there are no typos in the policy YAML file One common mistake is specifying multiple items unintentionally in the YAML. Take the following policy as an example: {{< text yaml >}} apiVersion: security.istio.io/v1 kind: AuthorizationPolicy metadata: name: example namespace: foo spec: action: ALLOW rules: - to: - operation: paths: - /foo - from: - source: namespaces: - foo {{< /text >}} You may expect the policy to allow requests if the path is `/foo` \*\*and\*\* the source namespace is `foo`. However, the policy actually allows requests if the path is `/foo` \*\*or\*\* the source namespace is `foo`, which is more permissive. In the YAML syntax, the `-` in front of the `from:` means it's a new element in the list. This creates 2 rules in the policy instead of 1. In authorization policy, multiple rules have the semantics of `OR`. To fix the problem, just remove the extra `-` to make the policy have only 1 rule that allows requests if the path is `/foo` \*\*and\*\* the source namespace is `foo`, which is more restrictive. ### Make sure you are NOT using HTTP-only fields on TCP ports The authorization policy will be more restrictive because HTTP-only fields (e.g. `host`, `path`, `headers`, JWT, etc.) do not exist in the raw TCP connections. 
In the case of `ALLOW` policy, these fields are never matched. In the case of `DENY` and `CUSTOM` action, these fields are considered always matched. The final effect is a more restrictive policy that could cause unexpected denies. Check the Kubernetes service definition to verify that the port is properly [named with the correct protocol](/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection). If you are using HTTP-only fields on the port, make sure the port name has the `http-` prefix.
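For explicit protocol selection, the port name in the Kubernetes `Service` carries the protocol. A sketch with illustrative names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpbin            # illustrative service
spec:
  selector:
    app: httpbin
  ports:
  - name: http-web         # the http- prefix tells Istio to treat this port as HTTP
    port: 8000
    targetPort: 80
```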
### Make sure the policy is applied to the correct target Check the workload selector and namespace to confirm it's applied to the correct targets. You can determine the authorization policy in effect by running `istioctl x authz check POD-NAME.POD-NAMESPACE`. ### Pay attention to the action specified in the policy - If not specified, the policy defaults to use action `ALLOW`. - When a workload has multiple actions (`CUSTOM`, `ALLOW` and `DENY`) applied at the same time, all actions must be satisfied to allow a request. In other words, a request is denied if any of the actions denies it, and is allowed only if all actions allow it. - The `AUDIT` action does not enforce access control and will not deny the request in any case. Read [authorization implicit enablement](/docs/concepts/security/#implicit-enablement) for more details of the evaluation order. ## Ensure Istiod accepts the policies Istiod converts and distributes your authorization policies to the proxies. The following steps help you ensure Istiod is working as expected: 1. Run the following command to enable the debug logging in istiod: {{< text bash >}} $ istioctl admin log --level authorization:debug {{< /text >}} 1. Get the Istiod log with the following command: {{< tip >}} You probably need to first delete and then re-apply your authorization policies so that the debug output is generated for these policies. {{< /tip >}} {{< text bash >}} $ kubectl logs $(kubectl -n istio-system get pods -l app=istiod -o jsonpath='{.items[0].metadata.name}') -c discovery -n istio-system {{< /text >}} 1. Check the output and verify there are no errors.
For example, you might see something similar to the following: {{< text plain >}} 2021-04-23T20:53:29.507314Z info ads Push debounce stable[31] 1: 100.981865ms since last change, 100.981653ms since last push, full=true 2021-04-23T20:53:29.507641Z info ads XDS: Pushing:2021-04-23T20:53:29Z/23 Services:15 ConnectedEndpoints:2 Version:2021-04-23T20:53:29Z/23 2021-04-23T20:53:29.507911Z debug authorization Processed authorization policy for httpbin-74fb669cc6-lpscm.foo with details: \* found 0 CUSTOM actions 2021-04-23T20:53:29.508077Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details: \* found 0 CUSTOM actions 2021-04-23T20:53:29.508128Z debug authorization Processed authorization policy for httpbin-74fb669cc6-lpscm.foo with details: \* found 1 DENY actions, 0 ALLOW actions, 0 AUDIT actions \* generated config from rule ns[foo]-policy[deny-path-headers]-rule[0] on HTTP filter chain successfully \* built 1 HTTP filters for DENY action \* added 1 HTTP filters to filter chain 0 \* added 1 HTTP filters to filter chain 1 2021-04-23T20:53:29.508158Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details: \* found 0 DENY actions, 0 ALLOW actions, 0 AUDIT actions 2021-04-23T20:53:29.509097Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details: \* found 0 CUSTOM actions 2021-04-23T20:53:29.509167Z debug authorization Processed authorization policy for curl-557747455f-6dxbl.foo with details: \* found 0 DENY actions, 0 ALLOW actions, 0 AUDIT actions 2021-04-23T20:53:29.509501Z debug authorization Processed authorization policy for httpbin-74fb669cc6-lpscm.foo with details: \* found 0 CUSTOM actions 2021-04-23T20:53:29.509652Z debug authorization Processed authorization policy for httpbin-74fb669cc6-lpscm.foo with details: \* found 1 DENY actions, 0 ALLOW actions, 0 AUDIT actions \* generated config from rule ns[foo]-policy[deny-path-headers]-rule[0] on HTTP 
filter chain successfully \* built 1 HTTP filters for DENY action \* added 1 HTTP filters to filter chain 0 \* added 1 HTTP filters to filter chain 1 \* generated config from rule ns[foo]-policy[deny-path-headers]-rule[0] on TCP filter chain successfully \* built 1 TCP filters for DENY action \* added 1 TCP filters to filter chain 2 \* added 1 TCP filters to filter chain 3 \* added 1 TCP filters to filter chain 4 2021-04-23T20:53:29.510903Z info ads LDS: PUSH for node:curl-557747455f-6dxbl.foo resources:18 size:85.0kB 2021-04-23T20:53:29.511487Z info ads LDS: PUSH for node:httpbin-74fb669cc6-lpscm.foo resources:18 size:86.4kB {{< /text >}} This shows that Istiod generated:
- An HTTP filter config with policy `ns[foo]-policy[deny-path-headers]-rule[0]` for workload `httpbin-74fb669cc6-lpscm.foo`. - A TCP filter config with policy `ns[foo]-policy[deny-path-headers]-rule[0]` for workload `httpbin-74fb669cc6-lpscm.foo`. ## Ensure Istiod distributes policies to proxies correctly Istiod distributes the authorization policies to proxies. The following steps help you ensure istiod is working as expected: {{< tip >}} The command below assumes you have deployed `httpbin`; if you are not using `httpbin`, replace `"-l app=httpbin"` with your actual pod. {{< /tip >}} 1. Run the following command to get the proxy configuration dump for the `httpbin` workload: {{< text bash >}} $ kubectl exec $(kubectl get pods -l app=httpbin -o jsonpath='{.items[0].metadata.name}') -c istio-proxy -- pilot-agent request GET config\_dump {{< /text >}} 1. Check the log and verify: - The log includes an `envoy.filters.http.rbac` filter to enforce the authorization policy on each incoming request. - Istio updates the filter accordingly after you update your authorization policy. 1. The following output means the proxy of `httpbin` has enabled the `envoy.filters.http.rbac` filter with rules that reject any request to the path `/headers`.
{{< text plain >}} { "name": "envoy.filters.http.rbac", "typed\_config": { "@type": "type.googleapis.com/envoy.extensions.filters.http.rbac.v3.RBAC", "rules": { "action": "DENY", "policies": { "ns[foo]-policy[deny-path-headers]-rule[0]": { "permissions": [ { "and\_rules": { "rules": [ { "or\_rules": { "rules": [ { "url\_path": { "path": { "exact": "/headers" } } } ] } } ] } } ], "principals": [ { "and\_ids": { "ids": [ { "any": true } ] } } ] } } }, "shadow\_rules\_stat\_prefix": "istio\_dry\_run\_allow\_" } }, {{< /text >}} ## Ensure proxies enforce policies correctly Proxies eventually enforce the authorization policies. The following steps help you ensure the proxy is working as expected: {{< tip >}} The command below assumes you have deployed `httpbin`; if you are not using `httpbin`, replace `"-l app=httpbin"` with your actual pod. {{< /tip >}} 1. Turn on the authorization debug logging in the proxy with the following command: {{< text bash >}} $ istioctl proxy-config log deploy/httpbin --level "rbac:debug" {{< /text >}} 1. Verify you see the following output: {{< text plain >}} active loggers: ... ... rbac: debug ... ... {{< /text >}} 1. Send some requests to the `httpbin` workload to generate some logs. 1. Print the proxy logs with the following command: {{< text bash >}} $ kubectl logs $(kubectl get pods -l app=httpbin -o jsonpath='{.items[0].metadata.name}') -c istio-proxy {{< /text >}} 1. Check the output and verify: - The output log shows either `enforced allowed` or `enforced denied` depending on whether the request was allowed or denied respectively. - Your authorization policy expects the data extracted from the request. 1. The following is an example output for a request at path `/headers`: {{< text plain >}} ...
2021-04-23T20:43:18.552857Z debug envoy rbac checking request: requestedServerName: outbound\_.8000\_.\_.httpbin.foo.svc.cluster.local, sourceIP: 10.44.3.13:46180, directRemoteIP: 10.44.3.13:46180, remoteIP: 10.44.3.13:46180,localAddress: 10.44.1.18:80, ssl: uriSanPeerCertificate: spiffe://cluster.local/ns/foo/sa/curl, dnsSanPeerCertificate: , subjectPeerCertificate: , headers: ':authority', 'httpbin:8000' ':path', '/headers' ':method', 'GET' ':scheme', 'http' 'user-agent', 'curl/7.76.1-DEV' 'accept', '\*/\*' 'x-forwarded-proto', 'http' 'x-request-id', '672c9166-738c-4865-b541-128259cc65e5' 'x-envoy-attempt-count', '1' 'x-b3-traceid', '8a124905edf4291a21df326729b264e9' 'x-b3-spanid', '21df326729b264e9' 'x-b3-sampled', '0' 'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=d64cd6750a3af8685defbbe4dd8c467ebe80f6be4bfe9ca718e81cd94129fc1d;Subject="";URI=spiffe://cluster.local/ns/foo/sa/curl' , dynamicMetadata: filter\_metadata { key: "istio\_authn" value
{ fields { key: "request.auth.principal" value { string_value: "cluster.local/ns/foo/sa/curl" } } fields { key: "source.namespace" value { string_value: "foo" } } fields { key: "source.principal" value { string_value: "cluster.local/ns/foo/sa/curl" } } fields { key: "source.user" value { string_value: "cluster.local/ns/foo/sa/curl" } } } }
2021-04-23T20:43:18.552910Z debug envoy rbac enforced denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0]
...
{{< /text >}}

    The log `enforced denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0]` means the request was rejected by the policy `ns[foo]-policy[deny-path-headers]-rule[0]`.

1. The following is an example output for an authorization policy in [dry-run mode](/docs/tasks/security/authorization/authz-dry-run):

    {{< text plain >}}
    ...
    2021-04-23T20:59:11.838468Z debug envoy rbac checking request: requestedServerName: outbound_.8000_._.httpbin.foo.svc.cluster.local, sourceIP: 10.44.3.13:49826, directRemoteIP: 10.44.3.13:49826, remoteIP: 10.44.3.13:49826, localAddress: 10.44.1.18:80, ssl: uriSanPeerCertificate: spiffe://cluster.local/ns/foo/sa/curl, dnsSanPeerCertificate: , subjectPeerCertificate: , headers: ':authority', 'httpbin:8000' ':path', '/headers' ':method', 'GET' ':scheme', 'http' 'user-agent', 'curl/7.76.1-DEV' 'accept', '*/*' 'x-forwarded-proto', 'http' 'x-request-id', 'e7b2fdb0-d2ea-4782-987c-7845939e6313' 'x-envoy-attempt-count', '1' 'x-b3-traceid', '696607fc4382b50017c1f7017054c751' 'x-b3-spanid', '17c1f7017054c751' 'x-b3-sampled', '0' 'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=d64cd6750a3af8685defbbe4dd8c467ebe80f6be4bfe9ca718e81cd94129fc1d;Subject="";URI=spiffe://cluster.local/ns/foo/sa/curl', dynamicMetadata: filter_metadata { key: "istio_authn" value { fields { key: "request.auth.principal" value { string_value: "cluster.local/ns/foo/sa/curl" } } fields { key: "source.namespace" value { string_value: "foo" } } fields { key: "source.principal" value { string_value: "cluster.local/ns/foo/sa/curl" } } fields { key: "source.user" value { string_value: "cluster.local/ns/foo/sa/curl" } } } }
    2021-04-23T20:59:11.838529Z debug envoy rbac shadow denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0]
    2021-04-23T20:59:11.838538Z debug envoy rbac no engine, allowed by default
    ...
    {{< /text >}}

    The log `shadow denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0]` means the request would be rejected by the **dry-run** policy `ns[foo]-policy[deny-path-headers]-rule[0]`. The log `no engine, allowed by default` means the request is actually allowed because the dry-run policy is the only policy on the workload.

## Keys and certificates errors

If you suspect that some of the keys and/or certificates used by Istio aren't correct, you can inspect the contents from any pod:

{{< text bash >}}
$ istioctl proxy-config secret curl-8f795f47d-4s4t7
RESOURCE NAME     TYPE           STATUS     VALID CERT     SERIAL NUMBER                               NOT AFTER                NOT BEFORE
default           Cert Chain     ACTIVE     true           138092480869518152837211547060273851586     2020-11-11T16:39:48Z     2020-11-10T16:39:48Z
ROOTCA            CA             ACTIVE     true           288553090258624301170355571152070165215     2030-11-08T16:34:52Z     2020-11-10T16:34:52Z
{{< /text >}}

By passing the `-o json` flag, you can pass the full certificate content to `openssl` to analyze its contents:

{{< text bash >}}
$ istioctl proxy-config secret curl-8f795f47d-4s4t7 -o json | jq '[.dynamicActiveSecrets[] | select(.name == "default")][0].secret.tlsCertificate.certificateChain.inlineBytes' -r | base64 -d | openssl x509 -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            99:59:6b:a2:5a:f4:20:f4:03:d7:f0:bc:59:f5:d8:40
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: O = k8s.cluster.local
        Validity
            Not Before: Jun  4 20:38:20 2018 GMT
            Not After : Sep  2 20:38:20 2018 GMT
...
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Alternative Name:
                URI:spiffe://cluster.local/ns/my-ns/sa/my-sa
...
{{< /text >}}

Make sure the displayed certificate contains valid information. In particular, the `Subject Alternative Name` field should be `URI:spiffe://cluster.local/ns/my-ns/sa/my-sa`.

## Mutual TLS errors

If you suspect problems with mutual TLS, first ensure that istiod is healthy, and second ensure that [keys and certificates are being delivered](#keys-and-certificates-errors) to sidecars properly.

If everything appears to be working so far, the next step is to verify that the right [authentication policy](/docs/tasks/security/authentication/authn-policy/) is applied and the right destination rules are in place.

If you suspect the client-side sidecar may send mutual TLS or plaintext traffic incorrectly, check the [Grafana Workload dashboard](/docs/ops/integrations/grafana/). Outbound requests are annotated to show whether mTLS is used or not. After checking this, if you still believe the client sidecars are misbehaving, report an issue on GitHub.
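When debugging mutual TLS, the `Subject Alternative Name` check described above can also be scripted. The following is a minimal sketch, assuming a local `openssl` (1.1.1 or newer); it generates a throwaway certificate with a SPIFFE URI SAN as a stand-in for a certificate dumped via `istioctl proxy-config secret`, then compares the SAN against the expected workload identity:

```shell
# Create a throwaway certificate carrying a SPIFFE URI SAN, standing in for
# the chain dumped by `istioctl proxy-config secret`.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/test-key.pem \
  -out /tmp/test-cert.pem -days 1 -subj "/O=cluster.local" \
  -addext "subjectAltName=URI:spiffe://cluster.local/ns/my-ns/sa/my-sa" 2>/dev/null

# Extract the SAN and compare it with the identity the workload should have.
expected="URI:spiffe://cluster.local/ns/my-ns/sa/my-sa"
actual=$(openssl x509 -in /tmp/test-cert.pem -noout -ext subjectAltName | grep -o 'URI:[^, ]*')
if [ "$actual" = "$expected" ]; then
  echo "SAN matches workload identity"
else
  echo "unexpected SAN: $actual"
fi
```

Against a live mesh you would replace the generated file with the output of the `istioctl ... | jq ... | base64 -d` pipeline shown earlier.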
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/common-problems/security-issues/index.md
## EnvoyFilter migration

`EnvoyFilter` is an alpha API that is tightly coupled to the implementation details of Istio xDS configuration generation. Production use of the `EnvoyFilter` alpha API must be carefully curated during the upgrade of Istio's control or data plane. In many instances, `EnvoyFilter` can be replaced with a first-class Istio API which carries substantially lower upgrade risks.

### Use Telemetry API for metrics customization

The usage of `IstioOperator` to customize Prometheus metrics generation has been replaced by the [Telemetry API](/docs/tasks/observability/metrics/customize-metrics/), because `IstioOperator` relies on a template `EnvoyFilter` to change the metrics filter configuration. Note that the two methods are incompatible, and the Telemetry API does not work with `EnvoyFilter` or `IstioOperator` metric customization configuration.

As an example, the following `IstioOperator` configuration adds a `destination_port` tag:

{{< text yaml >}}
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    telemetry:
      v2:
        prometheus:
          configOverride:
            inboundSidecar:
              metrics:
                - name: requests_total
                  dimensions:
                    destination_port: string(destination.port)
{{< /text >}}

The following `Telemetry` configuration replaces the above:

{{< text yaml >}}
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: namespace-metrics
spec:
  metrics:
  - providers:
    - name: prometheus
    overrides:
    - match:
        metric: REQUEST_COUNT
        mode: SERVER
      tagOverrides:
        destination_port:
          value: "string(destination.port)"
{{< /text >}}

### Use the WasmPlugin API for Wasm data plane extensibility

The usage of `EnvoyFilter` to inject Wasm filters has been replaced by the [WasmPlugin API](/docs/tasks/extensibility/wasm-module-distribution). The WasmPlugin API allows dynamic loading of the plugins from artifact registries, URLs, or local files. The "Null" plugin runtime is no longer a recommended option for deployment of Wasm code.
### Use gateway topology to set the number of the trusted hops

The usage of `EnvoyFilter` to configure the number of the trusted hops in the HTTP connection manager has been replaced by the [`gatewayTopology`](/docs/reference/config/istio.mesh.v1alpha1/#Topology) field in [`ProxyConfig`](/docs/ops/configuration/traffic-management/network-topologies). For example, the following `EnvoyFilter` configuration should be replaced with an annotation on the pod, or set as the mesh-wide default.

Instead of:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: ingressgateway-redirect-config
spec:
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
    patch:
      operation: MERGE
      value:
        typed_config:
          '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          xff_num_trusted_hops: 1
  workloadSelector:
    labels:
      istio: ingress-gateway
{{< /text >}}

Use the equivalent ingress gateway pod proxy configuration annotation:

{{< text yaml >}}
metadata:
  annotations:
    "proxy.istio.io/config": '{"gatewayTopology" : { "numTrustedProxies": 1 }}'
{{< /text >}}

### Use gateway topology to enable PROXY protocol on the ingress gateways

The usage of `EnvoyFilter` to enable [PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt) on the ingress gateways has been replaced by the [`gatewayTopology`](/docs/reference/config/istio.mesh.v1alpha1/#Topology) field in [`ProxyConfig`](/docs/ops/configuration/traffic-management/network-topologies). For example, the following `EnvoyFilter` configuration should be replaced with an annotation on the pod, or set as the mesh-wide default.
Instead of:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: proxy-protocol
spec:
  configPatches:
  - applyTo: LISTENER_FILTER
    patch:
      operation: INSERT_FIRST
      value:
        name: proxy_protocol
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.listener.proxy_protocol.v3.ProxyProtocol"
  workloadSelector:
    labels:
      istio: ingress-gateway
{{< /text >}}

Use the equivalent ingress gateway pod proxy configuration annotation:

{{< text yaml >}}
metadata:
  annotations:
    "proxy.istio.io/config": '{"gatewayTopology" : { "proxyProtocol": {} }}'
{{< /text >}}

### Use a proxy annotation to customize the histogram bucket sizes

The usage of `EnvoyFilter` and the experimental bootstrap discovery service to configure the bucket sizes for the histogram metrics has been replaced by the proxy annotation `sidecar.istio.io/statsHistogramBuckets`. For example, the following `EnvoyFilter` configuration should be replaced with an annotation on the pod.

Instead of:

{{< text yaml >}}
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: envoy-stats-1
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: BOOTSTRAP
    patch:
      operation: MERGE
      value:
        stats_config:
          histogram_bucket_settings:
            - match:
                prefix: istiocustom
              buckets: [1,5,50,500,5000,10000]
{{< /text >}}

Use the equivalent pod annotation:

{{< text yaml >}}
metadata:
  annotations:
    "sidecar.istio.io/statsHistogramBuckets": '{"istiocustom":[1,5,50,500,5000,10000]}'
{{< /text >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/common-problems/upgrade-issues/index.md
## Seemingly valid configuration is rejected

Use [istioctl validate -f](/docs/reference/commands/istioctl/#istioctl-validate) and [istioctl analyze](/docs/reference/commands/istioctl/#istioctl-analyze) for more insight into why the configuration is rejected. Use an _istioctl_ CLI with a similar version to the control plane version.

The most commonly reported problems with configuration are YAML indentation and array notation (`-`) mistakes. Manually verify your configuration is correct, cross-referencing the [Istio API reference](/docs/reference/config) when necessary.

## Invalid configuration is accepted

Verify that a `validatingwebhookconfiguration` named `istio-validator-`, followed by the revision name and `-` if not the default revision, followed by the Istio system namespace (e.g., `istio-validator-myrev-istio-system`) exists and is correct. The `apiVersion`, `apiGroup`, and `resource` of the invalid configuration should be listed in the `webhooks` section of the `validatingwebhookconfiguration`.

{{< text bash yaml >}}
$ kubectl get validatingwebhookconfiguration istio-validator-istio-system -o yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app: istiod
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio: istiod
    istio.io/rev: default
    operator.istio.io/component: Pilot
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: unknown
    release: istio
  name: istio-validator-istio-system
  resourceVersion: "615569"
  uid: 112fed62-93e7-41c9-8cb1-b2665f392dd7
webhooks:
- admissionReviewVersions:
  - v1beta1
  - v1
  clientConfig:
    # caBundle should be non-empty. This is periodically (re)patched
    # every second by the webhook service using the ca-cert
    # from the mounted service account secret.
    caBundle: LS0t...
    # service corresponds to the Kubernetes service that implements the webhook
    service:
      name: istiod
      namespace: istio-system
      path: /validate
      port: 443
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: rev.validation.istio.io
  namespaceSelector: {}
  objectSelector:
    matchExpressions:
    - key: istio.io/rev
      operator: In
      values:
      - default
  rules:
  - apiGroups:
    - security.istio.io
    - networking.istio.io
    - telemetry.istio.io
    - extensions.istio.io
    apiVersions:
    - '*'
    operations:
    - CREATE
    - UPDATE
    resources:
    - '*'
    scope: '*'
  sideEffects: None
  timeoutSeconds: 10
{{< /text >}}

If the `istio-validator-` webhook does not exist, verify the `global.configValidation` installation option is set to `true`.

The validation configuration is fail-close. If configuration exists and is scoped properly, the webhook will be invoked. A missing `caBundle`, bad certificate, or network connectivity problem will produce an error message when the resource is created or updated. If you don't see any error message, yet the webhook wasn't invoked and the webhook configuration is valid, your cluster is misconfigured.

## Creating configuration fails with x509 certificate errors

`x509: certificate signed by unknown authority` related errors are typically caused by an empty `caBundle` in the webhook configuration. Verify that it is not empty (see [verify webhook configuration](#invalid-configuration-is-accepted)). Istio consciously reconciles webhook configuration using the `istio-validation` `configmap` and root certificate.

1. Verify the `istiod` pod(s) are running:

    {{< text bash >}}
    $ kubectl -n istio-system get pod -lapp=istiod
    NAME                            READY     STATUS    RESTARTS   AGE
    istiod-5dbbbdb746-d676g         1/1       Running   0          2d
    {{< /text >}}

1. Check the pod logs for errors. Failing to patch the `caBundle` should print an error.

    {{< text bash >}}
    $ for pod in $(kubectl -n istio-system get pod -lapp=istiod -o jsonpath='{.items[*].metadata.name}'); do \
        kubectl -n istio-system logs ${pod} \
    done
    {{< /text >}}

1. If the patching failed, verify the RBAC configuration for Istiod:

    {{< text bash yaml >}}
    $ kubectl get clusterrole istiod-istio-system -o yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    name: istiod-istio-system
    rules:
    - apiGroups:
      - admissionregistration.k8s.io
      resources:
      - validatingwebhookconfigurations
      verbs:
      - '*'
    {{< /text >}}

    Istio needs `validatingwebhookconfigurations` write access to create and update the `validatingwebhookconfiguration`.

## Creating configuration fails with `no such hosts` or `no endpoints available` errors

Validation is fail-close. If the `istiod` pod is not ready, configuration cannot be created or updated. In such cases you'll see an error about `no endpoints available`.

Verify the `istiod` pod(s) are running and endpoints are ready.

{{< text bash >}}
$ kubectl -n istio-system get pod -lapp=istiod
NAME                            READY     STATUS    RESTARTS   AGE
istiod-5dbbbdb746-d676g         1/1       Running   0          2d
{{< /text >}}

{{< text bash >}}
$ kubectl -n istio-system get endpoints istiod
NAME      ENDPOINTS                            AGE
istiod    10.48.6.108:15014,10.48.6.108:443    3d
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/common-problems/validation/index.md
{{< /text >}}

If the pods or endpoints aren't ready, check the pod logs and status for any indication about why the webhook pod is failing to start and serve traffic.

{{< text bash >}}
$ for pod in $(kubectl -n istio-system get pod -lapp=istiod -o jsonpath='{.items[*].metadata.name}'); do \
    kubectl -n istio-system logs ${pod} \
done
{{< /text >}}

{{< text bash >}}
$ for pod in $(kubectl -n istio-system get pod -lapp=istiod -o name); do \
    kubectl -n istio-system describe ${pod} \
done
{{< /text >}}
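The `caBundle` check described under "Invalid configuration is accepted" can also be scripted. The following sketch runs `jq` over a saved copy of the webhook configuration; a minimal inline sample stands in here for the output of `kubectl get validatingwebhookconfiguration ... -o json` so the snippet runs without a cluster:

```shell
# Minimal stand-in for:
#   kubectl get validatingwebhookconfiguration istio-validator-istio-system -o json
cat <<'EOF' > /tmp/vwc.json
{
  "webhooks": [
    {
      "name": "rev.validation.istio.io",
      "clientConfig": { "caBundle": "LS0t...", "service": { "name": "istiod" } }
    }
  ]
}
EOF

# Fail if any webhook has an empty or missing caBundle.
if jq -e '[.webhooks[] | select((.clientConfig.caBundle // "") == "")] | length == 0' /tmp/vwc.json > /dev/null; then
  echo "caBundle present in all webhooks"
else
  echo "empty caBundle found"
fi
```

Against a live cluster, replace the sample file with the real `kubectl get ... -o json` output.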
Istio security features provide strong identity, powerful policy, transparent TLS encryption, and authentication, authorization and audit (AAA) tools to protect your services and data. However, to fully make use of these features securely, care must be taken to follow best practices. It is recommended to review the [Security overview](/docs/concepts/security/) before proceeding.

## Mutual TLS

Istio will [automatically](/docs/ops/configuration/traffic-management/tls-configuration/#auto-mtls) encrypt traffic using [Mutual TLS](/docs/concepts/security/#mutual-tls-authentication) whenever possible. However, proxies are configured in [permissive mode](/docs/concepts/security/#permissive-mode) by default, meaning they will accept both mutual TLS and plaintext traffic. While this is required for incremental adoption or allowing traffic from clients without an Istio sidecar, it also weakens the security stance. It is recommended to [migrate to strict mode](/docs/tasks/security/authentication/mtls-migration/) when possible, to enforce that mutual TLS is used.

Mutual TLS alone is not always enough to fully secure traffic, however, as it provides only authentication, not authorization. This means that anyone with a valid certificate can still access a service.

To fully lock down traffic, it is recommended to configure [authorization policies](/docs/tasks/security/authorization/). These allow creating fine-grained policies to allow or deny traffic. For example, you can allow only requests from the `app` namespace to access the `hello-world` service.

## Authorization policies

Istio [authorization](/docs/concepts/security/#authorization) plays a critical part in Istio security. It takes effort to configure the correct authorization policies to best protect your clusters. It is important to understand the implications of these configurations as Istio cannot determine the proper authorization for all users. Please follow this section in its entirety.
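As a sketch of the strict-mode migration target mentioned above, a mesh-wide `PeerAuthentication` resource in the root namespace (assumed here to be `istio-system`) enforces mutual TLS for all workloads:

```yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace, so the policy applies mesh-wide
spec:
  mtls:
    mode: STRICT            # reject plaintext traffic from all clients
```

Narrower `PeerAuthentication` resources in individual namespaces can override this during an incremental migration.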
### Safer Authorization Policy Patterns

#### Use default-deny patterns

We recommend you define your Istio authorization policies following the default-deny pattern to enhance your cluster's security posture. The default-deny authorization pattern means your system denies all requests by default, and you define the conditions in which the requests are allowed. In case you miss some conditions, traffic will be unexpectedly denied, instead of traffic being unexpectedly allowed. The latter is typically a security incident, while the former may result in a poor user experience, a service outage, or a missed SLO/SLA.

For example, in the [authorization for HTTP traffic task](/docs/tasks/security/authorization/authz-http/), the authorization policy named `allow-nothing` makes sure all traffic is denied by default. From there, other authorization policies allow traffic based on specific conditions.

#### Default-deny pattern with waypoints

Istio's new ambient data plane mode introduced a new split dataplane architecture. In this architecture, the waypoint proxy is configured using the Kubernetes Gateway API, which uses more explicit binding to gateways using `parentRef` and `targetRef`. Because waypoints adhere more closely to the principles of the Kubernetes Gateway API, the default-deny pattern is enabled in a slightly different way when policy is applied to waypoints.

Beginning with Istio 1.25, you may bind `AuthorizationPolicy` resources to the `istio-waypoint` `GatewayClass`. By binding `AuthorizationPolicy` to the `GatewayClass`, you can configure all gateways which implement that `GatewayClass` with a default policy. It is important to note that `GatewayClass` is a cluster-scoped resource, and binding namespace-scoped policies to it requires special care. Istio requires that policies which are bound to a `GatewayClass` reside in the root namespace, typically `istio-system`.
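The `allow-nothing` policy referenced above is simply an `ALLOW` policy with an empty spec: it matches no requests, and with at least one `ALLOW` policy present, everything else in the namespace is denied by default (the `foo` namespace below is a placeholder):

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-nothing
  namespace: foo   # placeholder namespace
spec: {}           # empty spec matches nothing, so all traffic is denied
```

From this baseline, each additional `ALLOW` policy opens a specific, intentional path.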
For waypoints, the standard allow-nothing policy would be:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-nothing-istio-waypoint
  namespace: istio-system
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: GatewayClass
    name: istio-waypoint
{{< /text >}}

{{< tip >}}
When using the default-deny pattern with waypoints, the policy bound to the `istio-waypoint` `GatewayClass` should be used in addition to the "classic" default-deny policy. The "classic" default-deny policy will be enforced by ztunnel against the workloads in your mesh and still provides meaningful value.
{{< /tip >}}
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/best-practices/security/index.md
master
istio
[ -0.10444407910108566, 0.04442395269870758, 0.0063989171758294106, 0.003912695217877626, -0.05003242567181587, -0.08561703562736511, 0.07464641332626343, 0.01889079436659813, 0.016776176169514656, -0.022628486156463623, -0.004138598218560219, -0.05280894413590431, 0.0106004374101758, 0.0755...
0.471498
#### Use `ALLOW-with-positive-matching` and `DENY-with-negative-matching` patterns

Use the `ALLOW-with-positive-matching` or `DENY-with-negative-matching` patterns whenever possible. These authorization policy patterns are safer because the worst result in the case of policy mismatch is an unexpected 403 rejection instead of an authorization policy bypass.

The `ALLOW-with-positive-matching` pattern is to use the `ALLOW` action only with **positive** matching fields (e.g. `paths`, `values`) and not use any of the **negative** matching fields (e.g. `notPaths`, `notValues`).

The `DENY-with-negative-matching` pattern is to use the `DENY` action only with **negative** matching fields (e.g. `notPaths`, `notValues`) and not use any of the **positive** matching fields (e.g. `paths`, `values`).

For example, the authorization policy below uses the `ALLOW-with-positive-matching` pattern to allow requests to path `/public`:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: foo
spec:
  action: ALLOW
  rules:
  - to:
    - operation:
        paths: ["/public"]
{{< /text >}}

The above policy explicitly lists the allowed path (`/public`). This means the request path must be exactly `/public` for the request to be allowed. Any other request is rejected by default, eliminating the risk of unknown normalization behavior causing a policy bypass.
The following is an example using the `DENY-with-negative-matching` pattern to achieve the same result:

{{< text yaml >}}
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: foo
spec:
  action: DENY
  rules:
  - to:
    - operation:
        notPaths: ["/public"]
{{< /text >}}

### Understand path normalization in authorization policy

The enforcement point for authorization policies is the Envoy proxy instead of the usual resource access point in the backend application. A policy mismatch happens when the Envoy proxy and the backend application interpret the request differently.

A mismatch can lead to either unexpected rejection or a policy bypass. The latter is usually a security incident that needs to be fixed immediately, and it's also why we need path normalization in the authorization policy.

For example, consider an authorization policy to reject requests with path `/data/secret`. A request with path `/data//secret` will not be rejected because it does not match the path defined in the authorization policy, due to the extra forward slash `/` in the path. The request goes through, and later the backend application returns the same response that it returns for the path `/data/secret`, because the backend application normalizes the path `/data//secret` to `/data/secret`, as it considers the double forward slashes `//` equivalent to a single forward slash `/`.

In this example, the policy enforcement point (Envoy proxy) had a different understanding of the path than the resource access point (backend application). The different understanding caused the mismatch and subsequently the bypass of the authorization policy.

This becomes a complicated problem because of the following factors:

* Lack of a clear standard for the normalization.

* Backends and frameworks in different layers have their own special normalization.

* Applications can even have arbitrary normalizations for their own use cases.
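The bypass described above can be illustrated outside of Envoy with plain string matching. This sketch uses standard shell tools (not Envoy's actual implementation) to show why the raw path misses an exact-match rule, and how slash merging and percent-decoding close the gap:

```shell
policy_path="/data/secret"
request_path="/data//secret"

# 1. Without normalization, an exact string match on the raw path fails,
#    so a DENY rule for /data/secret would not fire.
if [ "$request_path" != "$policy_path" ]; then
  echo "raw path does not match: request would bypass the DENY rule"
fi

# 2. Merging duplicate slashes (what the MERGE_SLASHES option does) makes
#    the policy's view agree with a backend that treats // as /.
merged=$(printf '%s' "$request_path" | tr -s '/')
if [ "$merged" = "$policy_path" ]; then
  echo "normalized path matches: DENY rule now fires"
fi

# 3. DECODE_AND_MERGE_SLASHES additionally percent-decodes %2F/%2f first.
encoded="/data/%2Fsecret"
decoded=$(printf '%s' "$encoded" | sed 's/%2[Ff]/\//g' | tr -s '/')
echo "decoded and merged: $decoded"
```

Envoy's real normalization also handles backslashes, dot segments, and `%5C`/`%5c`; this sketch covers only the two cases discussed above.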
Istio authorization policy implements built-in support of various basic normalization options to help you better address the problem:

* Refer to [Guideline on configuring the path normalization option](/docs/ops/best-practices/security/#guideline-on-configuring-the-path-normalization-option) to understand which normalization options you may want to use.

* Refer to [Customize your system on path normalization](/docs/ops/best-practices/security/#customize-your-system-on-path-normalization) to understand the detail of each normalization option.

* Refer to [Mitigation for unsupported normalization](/docs/ops/best-practices/security/#mitigation-for-unsupported-normalization) for alternative solutions in case you need any unsupported normalization options.
### Guideline on configuring the path normalization option

#### Case 1: You do not need normalization at all

Before diving into the details of configuring normalization, you should first make sure that normalization is needed.

You do not need normalization if you don't use authorization policies or if your authorization policies don't use any `path` fields. You may not need normalization if all your authorization policies follow the [safer authorization pattern](/docs/ops/best-practices/security/#safer-authorization-policy-patterns) which, in the worst case, results in unexpected rejection instead of policy bypass.

#### Case 2: You need normalization but are not sure which option to use

You need normalization but have no idea which option to use. The safest choice is the strictest normalization option that provides the maximum level of normalization in the authorization policy. This is often the case because complicated multi-layered systems make it practically impossible to figure out what normalization is actually happening to a request beyond the enforcement point.

You could use a less strict normalization option if it already satisfies your requirements and you are sure of its implications. For either option, make sure you write both positive and negative tests specifically for your requirements to verify the normalization is working as expected. The tests are useful in catching potential bypass issues caused by a misunderstanding or incomplete knowledge of the normalization happening to your request.
Refer to [Customize your system on path normalization](/docs/ops/best-practices/security/#customize-your-system-on-path-normalization) for more details on configuring the normalization option.

#### Case 3: You need an unsupported normalization option

If you need a specific normalization option that is not supported by Istio yet, please follow [Mitigation for unsupported normalization](/docs/ops/best-practices/security/#mitigation-for-unsupported-normalization) for customized normalization support, or create a feature request for the Istio community.

### Customize your system on path normalization

Istio authorization policies can be based on the URL paths in the HTTP request. [Path normalization (a.k.a. URI normalization)](https://en.wikipedia.org/wiki/URI_normalization) modifies and standardizes the incoming requests' paths so that the normalized paths can be processed in a standard way. Syntactically different paths may be equivalent after path normalization.

Istio supports the following normalization schemes on the request paths, before evaluating against the authorization policies and routing the requests:

| Option | Description | Example |
| --- | --- | --- |
| `NONE` | No normalization is done. Anything received by Envoy will be forwarded exactly as-is to any backend service. | `../%2Fa../b` is evaluated by the authorization policies and sent to your service. |
| `BASE` | This is currently the option used in the *default* installation of Istio. This applies the [`normalize_path`](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#envoy-v3-api-field-extensions-filters-network-http-connection-manager-v3-httpconnectionmanager-normalize-path) option on Envoy proxies, which follows [RFC 3986](https://tools.ietf.org/html/rfc3986) with extra normalization to convert backslashes to forward slashes. | `/a/../b` is normalized to `/b`. `\da` is normalized to `/da`. |
| `MERGE_SLASHES` | Slashes are merged after the _BASE_ normalization. | `/a//b` is normalized to `/a/b`. |
| `DECODE_AND_MERGE_SLASHES` | The most strict setting when you allow all traffic by default. This setting is recommended, with the caveat that you will need to thoroughly test your authorization policies and routes. [Percent-encoded](https://tools.ietf.org/html/rfc3986#section-2.1) slash and backslash characters (`%2F`, `%2f`, `%5C` and `%5c`) are decoded to `/` or `\`, before the `MERGE_SLASHES` normalization. | `/a%2fb` is normalized to `/a/b`. |

{{< tip >}}
The configuration is specified via the [`pathNormalization`](/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-ProxyPathNormalization) field in the [mesh config](/docs/reference/config/istio.mesh.v1alpha1/).
{{< /tip >}}

To emphasize, the normalization algorithms are conducted in the following order:

1. Percent-decode `%2F`,
https://github.com/istio/istio.io/blob/master//content/en/docs/ops/best-practices/security/index.md