# AWS SQS

## Parameters

This notification service is capable of sending simple messages to an AWS SQS queue.

* `queue` - name of the queue you intend to send messages to. Can be overridden with the target destination annotation.
* `region` - region of the SQS queue. Can be provided via the env variable AWS_DEFAULT_REGION.
* `key` - optional, AWS access key. Must be referenced either from a secret via a variable or via the env variable AWS_ACCESS_KEY_ID.
* `secret` - optional, AWS access secret. Must be referenced either from a secret via a variable or via the env variable AWS_SECRET_ACCESS_KEY.
* `account` - optional, external accountId of the queue.
* `endpointUrl` - optional, useful for development with localstack.

## Example

### Using a Secret for credential retrieval

Resource Annotation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  annotations:
    notifications.argoproj.io/subscribe.on-deployment-ready.awssqs: "overwrite-myqueue"
```

* ConfigMap

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.awssqs: |
    region: "us-east-2"
    queue: "myqueue"
    account: "1234567"
    key: "$awsaccess_key"
    secret: "$awsaccess_secret"
  template.deployment-ready: |
    message: |
      Deployment {{.obj.metadata.name}} is ready!
  trigger.on-deployment-ready: |
    - when: any(obj.status.conditions, {.type == 'Available' && .status == 'True'})
      send: [deployment-ready]
    - oncePer: obj.metadata.annotations["generation"]
```

Secret

```yaml
apiVersion: v1
kind: Secret
metadata:
  name:
stringData:
  awsaccess_key: test
  awsaccess_secret: test
```

### Minimal configuration using AWS env variables

Ensure the following environment variables are injected via OIDC or another method, and that the SQS queue is local to the account. You may then skip using a secret for sensitive data and omit the other parameters. (Setting parameters via the ConfigMap takes precedence.)
Variables:

```bash
export AWS_ACCESS_KEY_ID="test"
export AWS_SECRET_ACCESS_KEY="test"
export AWS_DEFAULT_REGION="us-east-1"
```

Resource Annotation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  annotations:
    notifications.argoproj.io/subscribe.on-deployment-ready.awssqs: ""
```

* ConfigMap

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.awssqs: |
    queue: "myqueue"
  template.deployment-ready: |
    message: |
      Deployment {{.obj.metadata.name}} is ready!
  trigger.on-deployment-ready: |
    - when: any(obj.status.conditions, {.type == 'Available' && .status == 'True'})
      send: [deployment-ready]
    - oncePer: obj.metadata.annotations["generation"]
```

## FIFO SQS Queues

FIFO queues require a [MessageGroupId](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html#SQS-SendMessage-request-MessageGroupId) to be sent along with every message; messages with a matching MessageGroupId are processed one by one, in order. To send to a FIFO SQS queue you must include a `messageGroupId` in the template, as in the example below:

```yaml
template.deployment-ready: |
  message: |
    Deployment {{.obj.metadata.name}} is ready!
  messageGroupId: {{.obj.metadata.name}}-deployment
```

*Source: https://github.com/argoproj/argo-cd/blob/master//docs/operator-manual/notifications/services/awssqs.md*
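As a rough illustration of the FIFO requirement described above, the sketch below builds the boto3-style `send_message` keyword arguments a notification might ultimately map onto. The helper name and queue URLs are hypothetical; this is not Argo CD's actual implementation.

```python
def build_send_message_args(queue_url, message, message_group_id=None):
    """Build kwargs in the shape expected by boto3's sqs.send_message."""
    args = {"QueueUrl": queue_url, "MessageBody": message}
    # FIFO queue names end in ".fifo" and require a MessageGroupId;
    # messages sharing a group id are delivered strictly in order.
    if queue_url.endswith(".fifo"):
        if not message_group_id:
            raise ValueError("FIFO queues require a messageGroupId")
        args["MessageGroupId"] = message_group_id
    return args

# Standard queue: no group id needed.
std = build_send_message_args(
    "https://sqs.us-east-1.amazonaws.com/1234567/myqueue",
    "Deployment nginx-deployment is ready!")

# FIFO queue: the template's messageGroupId is passed through.
fifo = build_send_message_args(
    "https://sqs.us-east-1.amazonaws.com/1234567/myqueue.fifo",
    "Deployment nginx-deployment is ready!",
    message_group_id="nginx-deployment-deployment")
```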
# NewRelic

## Parameters

* `apiURL` - the API server URL, e.g. https://api.newrelic.com
* `apiKey` - a [NewRelic ApiKey](https://docs.newrelic.com/docs/apis/rest-api-v2/get-started/introduction-new-relic-rest-api-v2/#api_key)
* `maxIdleConns` - optional, maximum number of idle (keep-alive) connections across all hosts.
* `maxIdleConnsPerHost` - optional, maximum number of idle (keep-alive) connections per host.
* `maxConnsPerHost` - optional, maximum total connections per host.
* `idleConnTimeout` - optional, maximum amount of time an idle (keep-alive) connection will remain open before closing, e.g. '90s'.

## Configuration

1. Create a NewRelic [Api Key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/#user-api-key)
2. Store the apiKey in the `argocd-notifications-secret` Secret and configure the NewRelic integration in the `argocd-notifications-cm` ConfigMap

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.newrelic: |
    apiURL:
    apiKey: $newrelic-apiKey
```

```yaml
apiVersion: v1
kind: Secret
metadata:
  name:
stringData:
  newrelic-apiKey: apiKey
```

3. Copy the [Application ID](https://docs.newrelic.com/docs/apis/rest-api-v2/get-started/get-app-other-ids-new-relic-one/#apm)
4. Create a subscription for your NewRelic integration

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe..newrelic:
```

## Templates

* `description` - __optional__, high-level description of this deployment, visible on the [Summary](https://docs.newrelic.com/docs/apm/applications-menu/monitoring/apm-overview-page) page and on the [Deployments](https://docs.newrelic.com/docs/apm/applications-menu/events/deployments-page) page when you select an individual deployment.
  * Defaults to `message`
* `changelog` - __optional__, a summary of what changed in this deployment, visible on the [Deployments](https://docs.newrelic.com/docs/apm/applications-menu/events/deployments-page) page when you select (selected deployment) > Change log.
  * Defaults to `{{(call .repo.GetCommitMetadata .app.status.sync.revision).Message}}`
* `user` - __optional__, a username to associate with the deployment, visible on the [Summary](https://docs.newrelic.com/docs/apm/applications-menu/events/deployments-page) page and on the [Deployments](https://docs.newrelic.com/docs/apm/applications-menu/events/deployments-page) page.
  * Defaults to `{{(call .repo.GetCommitMetadata .app.status.sync.revision).Author}}`

```yaml
context: |
  argocdUrl: https://example.com/argocd

template.app-deployed: |
  message: Application {{.app.metadata.name}} has successfully deployed.
  newrelic:
    description: Application {{.app.metadata.name}} has successfully deployed
```

*Source: https://github.com/argoproj/argo-cd/blob/master//docs/operator-manual/notifications/services/newrelic.md*
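The template fields above feed a deployment-marker record that the service posts to NewRelic's REST API v2 (`POST /v2/applications/{application_id}/deployments.json`). The sketch below builds that payload shape; the helper name is illustrative and this is not Argo CD's source code.

```python
def build_deployment_payload(revision, description=None, changelog=None, user=None):
    """Assemble the {"deployment": {...}} body for the NewRelic
    deployments endpoint from the rendered template fields."""
    deployment = {"revision": revision}
    # Optional fields: the docs above note each one falls back to a
    # default rendered from the commit metadata or the generic message.
    if description:
        deployment["description"] = description
    if changelog:
        deployment["changelog"] = changelog
    if user:
        deployment["user"] = user
    return {"deployment": deployment}

payload = build_deployment_payload(
    revision="0badbeef",
    description="Application guestbook has successfully deployed",
    user="jane@example.com")
```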
# Teams Workflows

## Overview

The Teams Workflows notification service sends message notifications using Microsoft Teams Workflows (Power Automate). This is the recommended replacement for the legacy Office 365 Connectors service, which will be retired on March 31, 2026.

## Parameters

The Teams Workflows notification service requires specifying the following settings:

* `recipientUrls` - the webhook URL map, e.g. `channelName: https://api.powerautomate.com/webhook/...`

## Supported Webhook URL Formats

The service supports the following Microsoft Teams Workflows webhook URL patterns:

- `https://api.powerautomate.com/...`
- `https://api.powerplatform.com/...`
- `https://flow.microsoft.com/...`
- URLs containing `/powerautomate/` in the path

## Configuration

1. Open `Teams` and go to the channel you wish to set notifications for
2. Click on the 3 dots next to the channel name
3. Select `Workflows`
4. Click on `Manage`
5. Click `New flow`
6. Write `Send webhook alerts to a channel` in the search bar or select it from the template list
7. Choose your team and channel
8. Configure the webhook name and settings
9. Copy the webhook URL (it will be from `api.powerautomate.com`, `api.powerplatform.com`, or `flow.microsoft.com`)
10. Store it in `argocd-notifications-secret` and define it in `argocd-notifications-cm`

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.teams-workflows: |
    recipientUrls:
      channelName: $channel-workflows-url
```

```yaml
apiVersion: v1
kind: Secret
metadata:
  name:
stringData:
  channel-workflows-url: https://api.powerautomate.com/webhook/your-webhook-id
```

11. Create a subscription for your Teams Workflows integration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-succeeded.teams-workflows: channelName
```

## Channel Support

- ✅ Standard Teams channels
- ✅ Shared channels (as of December 2025)
- ✅ Private channels (as of December 2025)

Teams Workflows provides enhanced channel support compared to Office 365 Connectors, allowing you to post to shared and private channels in addition to standard channels.

## Adaptive Card Format

The Teams Workflows service uses **Adaptive Cards** exclusively, which is the modern, flexible card format for Microsoft Teams. All notifications are automatically converted to Adaptive Card format and wrapped in the required message envelope.

### Option 1: Using Template Fields (Recommended)

The service automatically converts template fields to Adaptive Card format. This is the simplest and most maintainable approach:

```yaml
template.app-sync-succeeded: |
  teams-workflows:
    # ThemeColor supports Adaptive Card semantic colors: "Good", "Warning", "Attention", "Accent"
    # or hex colors like "#000080"
    themeColor: "Good"
    title: Application {{.app.metadata.name}} has been successfully synced
    text: Application {{.app.metadata.name}} has been successfully synced at {{.app.status.operationState.finishedAt}}.
    summary: "{{.app.metadata.name}} sync succeeded"
    facts: |
      [{
        "name": "Sync Status",
        "value": "{{.app.status.sync.status}}"
      }, {
        "name": "Repository",
        "value": "{{.app.spec.source.repoURL}}"
      }]
    sections: |
      [{
        "facts": [
          {
            "name": "Namespace",
            "value": "{{.app.metadata.namespace}}"
          },
          {
            "name": "Cluster",
            "value": "{{.app.spec.destination.server}}"
          }
        ]
      }]
    potentialAction: |-
      [{
        "@type": "OpenUri",
        "name": "View in Argo CD",
        "targets": [{
          "os": "default",
          "uri": "{{.context.argocdUrl}}/applications/{{.app.metadata.name}}"
        }]
      }]
```

**How it works:**

- `title` → Converted to a large, bold TextBlock
- `text` → Converted to a regular TextBlock
- `facts` → Converted to a FactSet element
- `sections` → Facts within sections are extracted and converted to FactSet elements
- `potentialAction` → OpenUri actions are converted to Action.OpenUrl
- `themeColor` → Applied to the title TextBlock (supports semantic colors like "Good", "Warning", "Attention", "Accent" or hex colors)

### Option 2: Custom Adaptive Card JSON

For full control and advanced features, you can provide a complete Adaptive Card JSON template:

```yaml
template.app-sync-succeeded: |
  teams-workflows:
    adaptiveCard: |
      {
        "type": "AdaptiveCard",
        "version": "1.4",
        "body": [
          {
            "type": "TextBlock",
            "text": "Application {{.app.metadata.name}} synced successfully",
            "size": "Large",
            "weight": "Bolder",
            "color": "Good"
          },
          {
            "type": "TextBlock",
            "text": "Application {{.app.metadata.name}} has been successfully synced at {{.app.status.operationState.finishedAt}}.",
            "wrap": true
          },
          {
            "type": "FactSet",
            "facts": [
              { "title": "Sync Status", "value": "{{.app.status.sync.status}}" },
              { "title": "Repository", "value": "{{.app.spec.source.repoURL}}" }
            ]
          }
        ],
        "actions": [
          {
            "type": "Action.OpenUrl",
            "title": "View in Argo CD",
            "url": "{{.context.argocdUrl}}/applications/{{.app.metadata.name}}"
          }
        ]
      }
```
"color": "Good" }, { "type": "TextBlock", "text": "Application {{.app.metadata.name}} has been successfully synced at {{.app.status.operationState.finishedAt}}.", "wrap": true }, { "type": "FactSet", "facts": [ { "title": "Sync Status", "value": "{{.app.status.sync.status}}" }, { "title": "Repository", "value": "{{.app.spec.source.repoURL}}" } ] } ], "actions": [ { "type": "Action.OpenUrl", "title": "View in Argo CD", "url": "{{.context.argocdUrl}}/applications/{{.app.metadata.name}}" } ] } ``` \*\*Note:\*\* When using `adaptiveCard`, you only need to provide the AdaptiveCard JSON structure (not the full message envelope). The service automatically wraps it in the required `message` + `attachments` format for Teams Workflows. \*\*Important:\*\* If you provide `adaptiveCard`, it takes precedence over all other template fields (`title`, `text`, `facts`, etc.). ## Template Fields The Teams Workflows service supports the following template fields, which are automatically converted to Adaptive Card format: ### Standard Fields - `title` - Message title (converted to large, bold TextBlock) - `text` - Message text content (converted to TextBlock) - `summary` - Summary text (currently not used in Adaptive Cards, but preserved for compatibility) - `themeColor` - Color for the title. Supports: - Semantic colors: `"Good"` (green), `"Warning"` (yellow), `"Attention"` (red), `"Accent"` (blue) - Hex colors: `"#000080"`, `"#FF0000"`, etc. 
- `facts` - JSON array of fact key-value pairs (converted to FactSet) ```yaml facts: | [{ "name": "Status", "value": "{{.app.status.sync.status}}" }] ``` - `sections` - JSON array of sections containing facts (facts are extracted and converted to FactSet) ```yaml sections: | [{ "facts": [{ "name": "Namespace", "value": "{{.app.metadata.namespace}}" }] }] ``` - `potentialAction` - JSON array of action buttons (OpenUri actions converted to Action.OpenUrl) ```yaml potentialAction: |- [{ "@type": "OpenUri", "name": "View Details", "targets": [{ "os": "default", "uri": "{{.context.argocdUrl}}/applications/{{.app.metadata.name}}" }] }] ``` ### Advanced Fields - `adaptiveCard` - Complete Adaptive Card JSON template (takes precedence over all other fields) - Only provide the AdaptiveCard structure, not the message envelope - Supports full Adaptive Card 1.4 specification - Allows access to all Adaptive Card features (containers, columns, images, etc.) - `template` - Raw JSON template (legacy, use `adaptiveCard` instead) ### Field Conversion Details | Template Field | Adaptive Card Element | Notes | |---------------|----------------------|-------| | `title` | `TextBlock` with `size: "Large"`, `weight: "Bolder"` | ThemeColor applied to this element | | `text` | `TextBlock` with `wrap: true` | Uses `n.Message` if `text` is empty | | `facts` | `FactSet` | Each fact becomes a `title`/`value` pair | | `sections[].facts` | `FactSet` | Facts extracted from sections | | `potentialAction[OpenUri]` | `Action.OpenUrl` | Only OpenUri actions are converted | | `themeColor` | Applied to title `TextBlock.color` | Supports semantic and hex colors | ## Migration from Office 365 Connectors If you're currently using the `teams` service with Office 365 Connectors, follow these steps to migrate: 1. \*\*Create a new Workflows webhook\*\* using the configuration steps above 2. 
\*\*Update your service configuration:\*\* - Change from `service.teams` to `service.teams-workflows` - Update the webhook URL to your new Workflows webhook URL 3. \*\*Update your templates:\*\* - Change `teams:` to `teams-workflows:` in your templates - Your existing template fields (`title`, `text`, `facts`, `sections`, `potentialAction`) will automatically be converted to Adaptive Card format - No changes needed to your template structure - the conversion is automatic 4. \*\*Update your subscriptions:\*\* ```yaml # Old notifications.argoproj.io/subscribe.on-sync-succeeded.teams: channelName # New notifications.argoproj.io/subscribe.on-sync-succeeded.teams-workflows: channelName ``` 5. \*\*Test and verify:\*\* - Send a test notification to verify it works correctly - Once verified, you can remove the old Office 365 Connector configuration \*\*Note:\*\* Your existing templates will work without modification. The service automatically converts your template fields to Adaptive Card format, so you get the benefits of modern cards without changing your templates. ## Differences from Office 365 Connectors | Feature | Office 365 Connectors | | https://github.com/argoproj/argo-cd/blob/master//docs/operator-manual/notifications/services/teams-workflows.md | master | argo-cd | [
## Differences from Office 365 Connectors

| Feature | Office 365 Connectors | Teams Workflows |
|---------|----------------------|-----------------|
| Service Name | `teams` | `teams-workflows` |
| Standard Channels | ✅ | ✅ |
| Shared Channels | ❌ | ✅ (Dec 2025+) |
| Private Channels | ❌ | ✅ (Dec 2025+) |
| Card Format | messageCard (legacy) | Adaptive Cards (modern) |
| Template Conversion | N/A | Automatic conversion from template fields |
| Retirement Date | March 31, 2026 | Active |

## Adaptive Card Features

The Teams Workflows service leverages Adaptive Cards, which provide:

- **Rich Content**: Support for text, images, fact sets, and more
- **Flexible Layout**: Containers, columns, and adaptive layouts
- **Interactive Elements**: Action buttons, input fields, and more
- **Semantic Colors**: Built-in color schemes (Good, Warning, Attention, Accent)
- **Cross-Platform**: Works across Teams, Outlook, and other Microsoft 365 apps

### Example: Advanced Adaptive Card Template

For complex notifications, you can use the full Adaptive Card specification:

```yaml
template.app-sync-succeeded-advanced: |
  teams-workflows:
    adaptiveCard: |
      {
        "type": "AdaptiveCard",
        "version": "1.4",
        "body": [
          {
            "type": "Container",
            "items": [
              {
                "type": "ColumnSet",
                "columns": [
                  {
                    "type": "Column",
                    "width": "auto",
                    "items": [
                      {
                        "type": "Image",
                        "url": "https://example.com/success-icon.png",
                        "size": "Small"
                      }
                    ]
                  },
                  {
                    "type": "Column",
                    "width": "stretch",
                    "items": [
                      {
                        "type": "TextBlock",
                        "text": "Application {{.app.metadata.name}}",
                        "weight": "Bolder",
                        "size": "Large"
                      },
                      {
                        "type": "TextBlock",
                        "text": "Successfully synced",
                        "spacing": "None",
                        "isSubtle": true
                      }
                    ]
                  }
                ]
              },
              {
                "type": "FactSet",
                "facts": [
                  { "title": "Status", "value": "{{.app.status.sync.status}}" },
                  { "title": "Repository", "value": "{{.app.spec.source.repoURL}}" }
                ]
              }
            ]
          }
        ],
        "actions": [
          {
            "type": "Action.OpenUrl",
            "title": "View in Argo CD",
            "url": "{{.context.argocdUrl}}/applications/{{.app.metadata.name}}"
          }
        ]
      }
```

*Source: https://github.com/argoproj/argo-cd/blob/master//docs/operator-manual/notifications/services/teams-workflows.md*
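The automatic `facts` → FactSet conversion documented above can be sketched as follows. The helper name is illustrative (this is not the notifications-engine source); it only shows the documented mapping from messageCard-style `name`/`value` pairs to Adaptive Card `title`/`value` pairs.

```python
import json

def facts_to_factset(facts_json):
    """Convert a template's `facts` JSON array into an Adaptive Card
    FactSet element, renaming "name" keys to "title"."""
    facts = json.loads(facts_json)
    return {
        "type": "FactSet",
        "facts": [{"title": f["name"], "value": f["value"]} for f in facts],
    }

card_element = facts_to_factset(
    '[{"name": "Sync Status", "value": "Synced"},'
    ' {"name": "Repository", "value": "https://github.com/org/repo"}]')
```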
# Grafana

To be able to create Grafana annotations with argocd-notifications you have to create an [API Key](https://grafana.com/docs/grafana/latest/http_api/auth/#create-api-key) inside your [Grafana](https://grafana.com).



Available parameters:

* `apiURL` - the server url, e.g. https://grafana.example.com
* `apiKey` - the API key for the service account
* `insecureSkipVerify` - optional bool, true or false
* `maxIdleConns` - optional, maximum number of idle (keep-alive) connections across all hosts.
* `maxIdleConnsPerHost` - optional, maximum number of idle (keep-alive) connections per host.
* `maxConnsPerHost` - optional, maximum total connections per host.
* `idleConnTimeout` - optional, maximum amount of time an idle (keep-alive) connection will remain open before closing.

1. Login to your Grafana instance as `admin`
2. On the left menu, go to Configuration / API Keys
3. Click "Add API Key"
4. Fill the Key with name `ArgoCD Notification`, role `Editor` and Time to Live `10y` (for example)
5. Click on the Add button
6. Store the apiKey in the `argocd-notifications-secret` Secret, and copy your API Key and define it in the `argocd-notifications-cm` ConfigMap

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.grafana: |
    apiUrl: https://grafana.example.com/api
    apiKey: $grafana-api-key
```

```yaml
apiVersion: v1
kind: Secret
metadata:
  name:
stringData:
  grafana-api-key: api-key
```

7. Create a template in the `argocd-notifications-cm` ConfigMap (or re-use an existing one). This will be used to pass the (required) text of the annotation to Grafana. As there is no specific template for Grafana, you must use the generic `message`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  templates:
    template.app-deployed: |
      message: Application {{.app.metadata.name}} is now running new version of deployments manifests.
```
8. Create a subscription for your Grafana integration

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe..grafana: tag1|tag2 # list of tags separated with |
```

9. Change the annotations settings



*Source: https://github.com/argoproj/argo-cd/blob/master//docs/operator-manual/notifications/services/grafana.md*
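As a rough illustration of what the service does with these values, the sketch below (hypothetical helper names, not Argo CD's code) builds the request that would be POSTed to Grafana's annotations HTTP API (`{apiUrl}/annotations`): the rendered `message` becomes the annotation text and the subscription's `tag1|tag2` value becomes the tag list.

```python
def build_annotation_request(message, subscription_tags):
    """Shape the POST to Grafana's /annotations endpoint."""
    return {
        "path": "/annotations",
        # Grafana API keys are sent as a Bearer token.
        "headers": {"Authorization": "Bearer <apiKey>"},
        "body": {
            "text": message,                       # annotation text (required)
            "tags": subscription_tags.split("|"),  # tags from the subscription
        },
    }

req = build_annotation_request(
    "Application guestbook is now running new version.", "tag1|tag2")
```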
# Telegram

1. Get an API token using [@Botfather](https://t.me/Botfather).
2. Store the token in the `argocd-notifications-secret` Secret and configure the Telegram integration in the `argocd-notifications-cm` ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.telegram: |
    token: $telegram-token
```

3. Create a new Telegram [channel](https://telegram.org/blog/channels).
4. Add your bot as an administrator.
5. Use this channel's `username` (public channel) or `chatID` (private channel) in the subscription for your Telegram integration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-succeeded.telegram: username
```

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-succeeded.telegram: -1000000000000
```

If your private chat contains threads, you can optionally specify a thread id by separating it with a `|`:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-succeeded.telegram: -1000000000000|2
```

*Source: https://github.com/argoproj/argo-cd/blob/master//docs/operator-manual/notifications/services/telegram.md*
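The `chatID|threadID` convention above can be sketched as a tiny parser. The helper is hypothetical (not Argo CD's actual code); it only illustrates how the subscription value splits into a chat destination and an optional thread id.

```python
def parse_telegram_destination(value):
    """Split "chatID|threadID" into (chat, thread); thread is optional.

    The chat part is either a public channel username or a numeric
    private-chat id such as "-1000000000000"."""
    chat_id, sep, thread_id = value.partition("|")
    return (chat_id, int(thread_id)) if sep else (chat_id, None)

parse_telegram_destination("-1000000000000|2")  # private chat, thread 2
parse_telegram_destination("username")          # public channel, no thread
```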
# Alertmanager

## Parameters

The notification service is used to push events to [Alertmanager](https://github.com/prometheus/alertmanager), and the following settings need to be specified:

* `targets` - the alertmanager service address, array type
* `scheme` - optional, default is "http", e.g. http or https
* `apiPath` - optional, default is "/api/v2/alerts"
* `insecureSkipVerify` - optional, default is "false"; when the scheme is https, whether to skip CA verification
* `basicAuth` - optional, server auth
* `bearerToken` - optional, server auth
* `timeout` - optional, the timeout in seconds used when sending alerts, default is "3 seconds"
* `maxIdleConns` - optional, maximum number of idle (keep-alive) connections across all hosts.
* `maxIdleConnsPerHost` - optional, maximum number of idle (keep-alive) connections per host.
* `maxConnsPerHost` - optional, maximum total connections per host.
* `idleConnTimeout` - optional, maximum amount of time an idle (keep-alive) connection will remain open before closing.

Either `basicAuth` or `bearerToken` is used for authentication; choose one. If both are set at the same time, `basicAuth` takes precedence over `bearerToken`.

## Example

### Prometheus Alertmanager config

```yaml
global:
  resolve_timeout: 5m
route:
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: 'default'
receivers:
- name: 'default'
  webhook_configs:
  - send_resolved: false
    url: 'http://10.5.39.39:10080/api/alerts/webhook'
```

You should turn off "send_resolved" or you will receive unnecessary recovery notifications after "resolve_timeout".

### Send to one Alertmanager without auth

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.alertmanager: |
    targets:
    - 10.5.39.39:9093
```

### Send to an Alertmanager cluster with a custom api path

If your alertmanager has changed the default api, you can customize "apiPath".
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.alertmanager: |
    targets:
    - 10.5.39.39:443
    scheme: https
    apiPath: /api/events
    insecureSkipVerify: true
```

### Send to a high-availability Alertmanager with auth

Store the auth credentials in the `argocd-notifications-secret` Secret and reference them in the `argocd-notifications-cm` ConfigMap.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name:
stringData:
  alertmanager-username:
  alertmanager-password:
  alertmanager-bearer-token:
```

- with basicAuth

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.alertmanager: |
    targets:
    - 10.5.39.39:19093
    - 10.5.39.39:29093
    - 10.5.39.39:39093
    scheme: https
    apiPath: /api/v2/alerts
    insecureSkipVerify: true
    basicAuth:
      username: $alertmanager-username
      password: $alertmanager-password
```

- with bearerToken

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.alertmanager: |
    targets:
    - 10.5.39.39:19093
    - 10.5.39.39:29093
    - 10.5.39.39:39093
    scheme: https
    apiPath: /api/v2/alerts
    insecureSkipVerify: true
    bearerToken: $alertmanager-bearer-token
```

## Templates

* `labels` - at least one label pair is required; implement different notification strategies according to alertmanager routing
* `annotations` - optional, specifies a set of information labels, which can be used to store longer additional information, but only for display
* `generatorURL` - optional, default is '{{.app.spec.source.repoURL}}', a backlink used to identify the entity that caused this alert in the client

The `labels`, `annotations`, and `generatorURL` values can be templated.

```yaml
context: |
  argocdUrl: https://example.com/argocd

template.app-deployed: |
  message: Application {{.app.metadata.name}} has been healthy.
  alertmanager:
    labels:
      fault_priority: "P5"
      event_bucket: "deploy"
      event_status: "succeed"
      recipient: "{{.recipient}}"
    annotations:
      application: '[{{.app.metadata.name}}]({{.context.argocdUrl}}/applications/{{.app.metadata.name}})'
      author: "{{(call .repo.GetCommitMetadata .app.status.sync.revision).Author}}"
      message: "{{(call .repo.GetCommitMetadata .app.status.sync.revision).Message}}"
```

You can do targeted pushes to [Alertmanager](https://github.com/prometheus/alertmanager) according to labels.

```yaml
template.app-deployed: |
  message: Application {{.app.metadata.name}} has been healthy.
  alertmanager:
    labels:
      alertname: app-deployed
      fault_priority: "P5"
      event_bucket: "deploy"
```

There is a special label, `alertname`. If you don't set its value, it defaults to the template name.

*Source: https://github.com/argoproj/argo-cd/blob/master//docs/operator-manual/notifications/services/alertmanager.md*
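The `alertname` defaulting described above can be sketched as follows. The helper name is illustrative (not Argo CD's code); the field names follow the payload shape of Alertmanager's `POST /api/v2/alerts` endpoint, which accepts a JSON array of alert objects.

```python
def build_alert(template_name, labels, annotations=None, generator_url=""):
    """Assemble the one-element alert array sent to /api/v2/alerts."""
    labels = dict(labels)
    # The special `alertname` label defaults to the template name
    # when the template does not set it explicitly.
    labels.setdefault("alertname", template_name)
    alert = {"labels": labels, "generatorURL": generator_url}
    if annotations:
        alert["annotations"] = annotations
    return [alert]

alerts = build_alert(
    "app-deployed",
    {"fault_priority": "P5", "event_bucket": "deploy"},
    annotations={"author": "jane"},
    generator_url="https://github.com/org/repo")
```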
# Teams (Office 365 Connectors)

## ⚠️ Deprecation Notice

**Office 365 Connectors are being retired by Microsoft.** Microsoft is retiring the Office 365 Connectors service in Teams. The service will be fully retired by **March 31, 2026** (extended from the original timeline of December 2025).

### What this means:

- **Old Office 365 Connectors** (webhook URLs from `webhook.office.com`) will stop working after the retirement date
- **New Power Automate Workflows** (webhook URLs from `api.powerautomate.com`, `api.powerplatform.com`, or `flow.microsoft.com`) are the recommended replacement

### Migration Required:

If you are currently using Office 365 Connectors (Incoming Webhook), you should migrate to Power Automate Workflows before the retirement date. The notifications-engine automatically detects the webhook type and handles both formats, but you should plan your migration.

**Migration Resources:**

- [Microsoft Deprecation Notice](https://devblogs.microsoft.com/microsoft365dev/retirement-of-office-365-connectors-within-microsoft-teams/)
- [Create incoming webhooks with Workflows for Microsoft Teams](https://support.microsoft.com/en-us/office/create-incoming-webhooks-with-workflows-for-microsoft-teams-4b3b0b0e-0b5a-4b5a-9b5a-0b5a-4b5a-9b5a)

---

## Parameters

The Teams notification service sends message notifications using Office 365 Connectors and requires specifying the following settings:

* `recipientUrls` - the webhook URL map, e.g. `channelName: https://outlook.office.com/webhook/...`

> **⚠️ Deprecation Notice:** Office 365 Connectors will be retired by Microsoft on **March 31, 2026**. We recommend migrating to the [Teams Workflows service](./teams-workflows.md) for continued support and enhanced features.

## Configuration

> **💡 For Power Automate Workflows (Recommended):** See the [Teams Workflows documentation](./teams-workflows.md) for detailed configuration instructions.
### Office 365 Connectors (Deprecated - Retiring March 31, 2026)

> **⚠️ Warning:** This method is deprecated and will stop working after March 31, 2026. Please migrate to Power Automate Workflows.

1. Open `Teams` and go to `Apps`
2. Find the `Incoming Webhook` Microsoft app and click on it
3. Press `Add to a team` -> select team and channel -> press `Set up a connector`
4. Enter the webhook name and upload an image (optional)
5. Press `Create`, then copy the webhook url (it will be from `webhook.office.com`)
6. Store it in `argocd-notifications-secret` and define it in `argocd-notifications-cm`

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
data:
  service.teams: |
    recipientUrls:
      channelName: $channel-teams-url
```

```yaml
apiVersion: v1
kind: Secret
metadata:
  name:
stringData:
  channel-teams-url: https://webhook.office.com/webhook/your-webhook-id # Office 365 Connector (deprecated)
```

> **Note:** For Power Automate Workflows webhooks, use the [Teams Workflows service](./teams-workflows.md) instead.

### Webhook Type Detection

The `teams` service supports Office 365 Connectors (deprecated):

- **Office 365 Connectors**: URLs from `webhook.office.com` (deprecated)
  - Requires the response body to be exactly `"1"` for success
  - Will stop working after March 31, 2026

7. Create a subscription for your Teams integration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    notifications.argoproj.io/subscribe.on-sync-succeeded.teams: channelName
```

## Channel Support
Standard Teams channels only > \*\*Note:\*\* Office 365 Connectors only support standard Teams channels. For shared channels or private channels, use the [Teams Workflows service](./teams-workflows.md). ## Templates  [Notification templates](../templates.md) can be customized to leverage teams message sections, facts, themeColor, summary and potentialAction [feature](https://docs.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/connectors-using). The Teams service uses the \*\*messageCard\*\* format (MessageCard schema) which is compatible with Office 365 Connectors. ```yaml template.app-sync-succeeded: | teams: themeColor: "#000080" sections: | [{ "facts": [ { "name": "Sync Status", "value": "{{.app.status.sync.status}}" }, { "name": "Repository", "value": "{{.app.spec.source.repoURL}}" } ] }] potentialAction: |- [{ "@type":"OpenUri", "name":"Operation Details", "targets":[{ "os":"default", "uri":"{{.context.argocdUrl}}/applications/{{.app.metadata.name}}?operation=true" }] }] title: Application {{.app.metadata.name}} has been successfully synced text: Application {{.app.metadata.name}} has been successfully synced at {{.app.status.operationState.finishedAt}}. summary: "{{.app.metadata.name}} sync succeeded" ``` ### facts field You can use `facts` field instead of `sections` field. ```yaml template.app-sync-succeeded: | teams: facts: | [{ "name": "Sync Status", "value": "{{.app.status.sync.status}}" }, { "name": "Repository", "value": "{{.app.spec.source.repoURL}}" }] ``` ### theme color field You can set theme color as hex string for the message.  ```yaml template.app-sync-succeeded: | teams: themeColor: "#000080" ``` ### summary field You can set a summary of the message that will be | https://github.com/argoproj/argo-cd/blob/master//docs/operator-manual/notifications/services/teams.md | master | argo-cd | [
...] | 0.114166 |
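The template fields above (`title`, `text`, `themeColor`, `summary`, and the JSON-string `sections`/`potentialAction`) map onto a MessageCard JSON payload. A minimal sketch of that assembly (hypothetical helper, not the service's actual code; `@type`/`@context` per the MessageCard schema):

```python
import json

def build_message_card(title, text, theme_color=None, summary=None,
                       sections=None, potential_action=None):
    """Assemble a MessageCard payload from template fields (illustrative only)."""
    card = {
        "@type": "MessageCard",
        "@context": "http://schema.org/extensions",
        "title": title,
        "text": text,
    }
    if theme_color:
        card["themeColor"] = theme_color
    if summary:
        card["summary"] = summary
    if sections:  # the template supplies this field as a JSON string
        card["sections"] = json.loads(sections)
    if potential_action:  # likewise a JSON string in the template
        card["potentialAction"] = json.loads(potential_action)
    return card
```

This mirrors why the template accepts `sections` and `potentialAction` as strings: they are parsed as JSON fragments and embedded into the card.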
"name": "Sync Status", "value": "{{.app.status.sync.status}}" }, { "name": "Repository", "value": "{{.app.spec.source.repoURL}}" }] ``` ### theme color field You can set theme color as hex string for the message.  ```yaml template.app-sync-succeeded: | teams: themeColor: "#000080" ``` ### summary field You can set a summary of the message that will be shown on Notification & Activity Feed   ```yaml template.app-sync-succeeded: | teams: summary: "Sync Succeeded" ``` ## Migration to Teams Workflows If you're currently using Office 365 Connectors, see the [Teams Workflows documentation](./teams-workflows.md) for migration instructions and enhanced features. | https://github.com/argoproj/argo-cd/blob/master//docs/operator-manual/notifications/services/teams.md | master | argo-cd | [
...] | 0.059996 |
# Google Chat ## Parameters The Google Chat notification service sends message notifications to a Google Chat webhook. This service uses the following settings: \* `webhooks` - a map of the form `webhookName: webhookUrl` ## Configuration 1. Open `Google Chat` and go to the space to which you want to send messages 2. From the menu at the top of the page, select \*\*Configure Webhooks\*\* 3. Under \*\*Incoming Webhooks\*\*, click \*\*Add Webhook\*\* 4. Give a name to the webhook, optionally add an image and click \*\*Save\*\* 5. Copy the URL next to your webhook 6. Store the URL in `argocd-notifications-secret` and declare it in `argocd-notifications-cm` ```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-notifications-cm data: service.googlechat: | webhooks: spaceName: $space-webhook-url ``` ```yaml apiVersion: v1 kind: Secret metadata: name: argocd-notifications-secret stringData: space-webhook-url: https://chat.googleapis.com/v1/spaces//messages?key=&token= ``` 7. Create a subscription for your space ```yaml apiVersion: argoproj.io/v1alpha1 kind: Application metadata: annotations: notifications.argoproj.io/subscribe.on-sync-succeeded.googlechat: spaceName ``` ## Templates You can send [simple text](https://developers.google.com/chat/reference/message-formats/basic) or [card messages](https://developers.google.com/chat/reference/message-formats/cards) to a Google Chat space. A simple text message template can be defined as follows: ```yaml template.app-sync-succeeded: | message: The app {{ .app.metadata.name }} has successfully synced! ``` A card message can be defined as follows: ```yaml template.app-sync-succeeded: | googlechat: cardsV2: | - header: title: ArgoCD Bot Notification sections: - widgets: - decoratedText: text: The app {{ .app.metadata.name }} has successfully synced! 
- widgets: - decoratedText: topLabel: Repository text: {{ call .repo.RepoURLToHTTPS .app.spec.source.repoURL }} - decoratedText: topLabel: Revision text: {{ .app.spec.source.targetRevision }} - decoratedText: topLabel: Author text: {{ (call .repo.GetCommitMetadata .app.status.sync.revision).Author }} ``` All [Card fields](https://developers.google.com/chat/api/reference/rest/v1/cards#Card\_1) are supported and can be used in notifications. It is also possible to use the previous (now deprecated) `cards` key to use the legacy card fields, but this is not recommended as Google has deprecated this field and recommends using the newer `cardsV2`. The card message can be written in JSON too. ## Chat Threads It is possible to send both simple text and card messages in a chat thread by specifying a unique key for the thread. The thread key can be defined as follows: ```yaml template.app-sync-succeeded: | message: The app {{ .app.metadata.name }} has successfully synced! googlechat: threadKey: {{ .app.metadata.name }} ``` | https://github.com/argoproj/argo-cd/blob/master//docs/operator-manual/notifications/services/googlechat.md | master | argo-cd | [
...] | 0.062049 |
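Under the hood, a simple text notification is a JSON POST to the webhook URL, and a thread key is commonly carried as a `threadKey` query parameter. A sketch only (hypothetical helper, not the engine's actual code; the query-parameter name is an assumption based on the Google Chat webhook API):

```python
import json
from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

def chat_request(webhook_url, message, thread_key=None):
    """Build the URL and JSON body for a simple Google Chat text message (sketch)."""
    parts = urlparse(webhook_url)
    query = dict(parse_qsl(parts.query))
    if thread_key:
        query["threadKey"] = thread_key  # groups related messages into one thread
    url = urlunparse(parts._replace(query=urlencode(query)))
    body = json.dumps({"text": message})
    return url, body
```

For example, a webhook URL plus `thread_key="my-app"` yields the same URL with `threadKey=my-app` appended and a body of `{"text": "..."}`.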
# Mattermost ## Parameters \* `apiURL` - the server URL, e.g. https://mattermost.example.com \* `token` - the bot token \* `insecureSkipVerify` - optional bool, true or false \* `maxIdleConns` - optional, maximum number of idle (keep-alive) connections across all hosts. \* `maxIdleConnsPerHost` - optional, maximum number of idle (keep-alive) connections per host. \* `maxConnsPerHost` - optional, maximum total connections per host. \* `idleConnTimeout` - optional, maximum amount of time an idle (keep-alive) connection will remain open before closing, e.g. '90s'. ## Configuration 1. Create a bot account and copy its token after creating it  2. Invite team  3. Store the token in the `argocd-notifications-secret` Secret and configure the Mattermost integration in the `argocd-notifications-cm` ConfigMap ```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-notifications-cm data: service.mattermost: | apiURL: https://mattermost.example.com token: $mattermost-token ``` ```yaml apiVersion: v1 kind: Secret metadata: name: argocd-notifications-secret stringData: mattermost-token: token ``` 4. Copy the channel ID  5. Create a subscription for your Mattermost integration ```yaml apiVersion: argoproj.io/v1alpha1 kind: Application metadata: annotations: notifications.argoproj.io/subscribe.<trigger-name>.mattermost: <channel-id> ``` ## Templates  You can reuse Slack templates, since Mattermost is compatible with Slack attachments. See the [Mattermost Integration Guide](https://docs.mattermost.com/developer/message-attachments.html). ```yaml template.app-deployed: | message: | Application {{.app.metadata.name}} is now running new version of deployments manifests. 
mattermost: attachments: | [{ "title": "{{.app.metadata.name}}", "title\_link": "{{.context.argocdUrl}}/applications/{{.app.metadata.name}}", "color": "#18be52", "fields": [{ "title": "Sync Status", "value": "{{.app.status.sync.status}}", "short": true }, { "title": "Repository", "value": "{{.app.spec.source.repoURL}}", "short": true }] }] ``` | https://github.com/argoproj/argo-cd/blob/master//docs/operator-manual/notifications/services/mattermost.md | master | argo-cd | [
...] | 0.211007 |
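Behind the scenes, the service authenticates with the bot token and creates a post through the Mattermost REST API, with Slack-compatible attachments riding along in the post's `props`. An illustrative sketch (hypothetical helper; the `/api/v4/posts` endpoint and `props.attachments` shape follow the Mattermost v4 API):

```python
import json

def build_post(api_url, channel_id, message, attachments=None):
    """Build the endpoint URL and JSON body for a Mattermost post (sketch only)."""
    url = api_url.rstrip("/") + "/api/v4/posts"
    payload = {"channel_id": channel_id, "message": message}
    if attachments:  # Slack-compatible attachment dicts, as in the template above
        payload["props"] = {"attachments": attachments}
    return url, json.dumps(payload)
```

The request would additionally carry an `Authorization: Bearer <token>` header with the bot token from the configuration above.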
# Pushover 1. Create an app at [pushover.net](https://pushover.net/apps/build). 2. Store the API key in a Secret and define the secret name in the `argocd-notifications-cm` ConfigMap: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: argocd-notifications-cm data: service.pushover: | token: $pushover-token ``` ```yaml apiVersion: v1 kind: Secret metadata: name: stringData: pushover-token: avtc41pn13asmra6zaiyf7dh6cgx97 ``` 3. Add your user key to your Application resource: ```yaml apiVersion: argoproj.io/v1alpha1 kind: Application metadata: annotations: notifications.argoproj.io/subscribe.on-sync-succeeded.pushover: uumy8u4owy7bgkapp6mc5mvhfsvpcd ``` | https://github.com/argoproj/argo-cd/blob/master//docs/operator-manual/notifications/services/pushover.md | master | argo-cd | [
...] | 0.063907 |
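The subscription annotation key above encodes both the trigger and the service, while its value carries the recipient (here, the Pushover user key). Parsing it can be sketched as follows (hypothetical helper, shown only to make the annotation structure explicit):

```python
def parse_subscription(key, value):
    """Split a notifications subscription annotation into its parts (sketch)."""
    prefix = "notifications.argoproj.io/subscribe."
    if not key.startswith(prefix):
        raise ValueError("not a subscription annotation")
    # The service name is the last dot-separated segment; the rest is the trigger.
    trigger, service = key[len(prefix):].rsplit(".", 1)
    return {"trigger": trigger, "service": service, "recipient": value}
```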
Oh no! You found a bug? We'd love to hear about it. ## Product bugs Search our [issue database](https://github.com/istio/istio/issues/) to see if we already know about your problem and learn about when we think we can fix it. If you don't find your problem in the database, please open a [new issue](https://github.com/istio/istio/issues/new/choose) and let us know what's going on. If you think a bug is in fact a security vulnerability, please visit [Reporting Security Vulnerabilities](/docs/releases/security-vulnerabilities/) to learn what to do. ### Kubernetes cluster state archives If you're running on Kubernetes, consider including a cluster state archive with your bug report. For convenience, you can run the `istioctl bug-report` command to produce an archive containing all of the relevant state from your Kubernetes cluster: {{< text bash >}} $ istioctl bug-report {{< /text >}} Then attach the produced `bug-report.tgz` with your reported problem. If your mesh spans multiple clusters, run `istioctl bug-report` against each cluster, specifying the `--context` or `--kubeconfig` flags. {{< tip >}} The `istioctl bug-report` command is only available with `istioctl` version `1.8.0` and higher but it can be used to also collect the information from an older Istio version installed in your cluster. {{< /tip >}} {{< tip >}} If you are running `bug-report` on a large cluster, it might fail to complete. Please use the `--include ns1,ns2` option to target the collection of proxy commands and logs only for the relevant namespaces. For more bug-report options, please visit [the istioctl bug-report reference](/docs/reference/commands/istioctl/#istioctl-bug-report). 
{{< /tip >}} If you are unable to use the `bug-report` command, please attach your own archive containing: \* Output of istioctl analyze: {{< text bash >}} $ istioctl analyze --all-namespaces {{< /text >}} \* Pods, services, deployments, and endpoints across all namespaces: {{< text bash >}} $ kubectl get pods,services,deployments,endpoints --all-namespaces -o yaml > k8s\_resources.yaml {{< /text >}} \* Secret names in `istio-system`: {{< text bash >}} $ kubectl --namespace istio-system get secrets {{< /text >}} \* ConfigMaps in the `istio-system` namespace: {{< text bash >}} $ kubectl --namespace istio-system get cm -o yaml {{< /text >}} \* Current and previous logs from all Istio components and sidecars. Here are some examples of how to obtain them; please adapt them to your environment: \* Istiod logs: {{< text bash >}} $ kubectl logs -n istio-system -l app=istiod {{< /text >}} \* Ingress Gateway logs: {{< text bash >}} $ kubectl logs -l istio=ingressgateway -n istio-system {{< /text >}} \* Egress Gateway logs: {{< text bash >}} $ kubectl logs -l istio=egressgateway -n istio-system {{< /text >}} \* Sidecar logs: {{< text bash >}} $ for ns in $(kubectl get ns -o jsonpath='{.items[\*].metadata.name}') ; do kubectl logs -l service.istio.io/canonical-revision -c istio-proxy -n $ns ; done {{< /text >}} \* All Istio configuration artifacts: {{< text bash >}} $ kubectl get istio-io --all-namespaces -o yaml {{< /text >}} ## Documentation bugs Search our [documentation issue database](https://github.com/istio/istio.io/issues/) to see if we already know about your problem and learn about when we think we can fix it. If you don't find your problem in the database, please [report the issue there](https://github.com/istio/istio.io/issues/new). If you want to submit a proposed edit to a page, you will find an "Edit this Page on GitHub" link at the bottom right of every page. | https://github.com/istio/istio.io/blob/master//content/en/docs/releases/bugs/index.md | master | istio | [
...] | 0.434939 |
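The multi-cluster advice above can be scripted. This is a sketch only: the context names are hypothetical placeholders (list yours with `kubectl config get-contexts`), and `--context` and `--include` are the flags mentioned in the text for per-cluster collection and for limiting scope on large clusters.

```shell
# Collect one bug-report archive per cluster in the mesh.
# prod-east / prod-west are hypothetical context names; replace with your own.
for ctx in prod-east prod-west; do
  # --include limits collection to the relevant namespaces on large clusters.
  istioctl bug-report --context "$ctx" --include istio-system
done
```

Each run produces its own `bug-report.tgz`; move or rename the archive between runs so one cluster's report does not overwrite another's.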
This page lists the status, timeline and policy for currently supported releases. Supported releases of Istio include releases that are in the active maintenance window and are patched for security and bug fixes. Subsequent patch releases on a minor release do not contain backward incompatible changes. - [Support policy](#support-policy) - [Naming scheme](#naming-scheme) - [Control Plane/Data Plane Skew](#control-planedata-plane-skew) - [Support status of Istio releases](#support-status-of-istio-releases) - [Supported releases without known Common Vulnerabilities and Exposures (CVEs)](#supported-releases-without-known-common-vulnerabilities-and-exposures-cves) - [Supported Envoy Versions](#supported-envoy-versions) ## Support policy We produce new builds of Istio for each commit. Around once a quarter, we build a minor release and run through several additional tests as well as release qualification. We release patch versions for issues found in minor releases. The various types of releases represent a different product quality level and level of assistance from the Istio community. In this context, \*support\* means that the community will produce patch releases for critical issues and offer technical assistance. Separately, 3rd parties and partners may offer longer-term support solutions. | Type | Support Level | Quality and Recommended Use | |-------------------|-------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------| | Development Build | No support | Dangerous, may not be fully reliable. Useful to experiment with. | | Minor Release | Support provided until 6 weeks after the N+2 minor release (ex. 
1.11 supported until 6 weeks after 1.13.0 is released) | | Patch | Same as the corresponding Minor release | Users are encouraged to adopt patch releases as soon as they are available for a given release. | | Security Patch | Same as a Patch, but contains a security fix. Sometimes security patches will contain additional code/fixes in addition to the security fixes. | Given the nature of security fixes, users are \*\*strongly\*\* encouraged to adopt security patches after release. | You can find available releases on the [releases page](https://github.com/istio/istio/releases), and if you're the adventurous type, you can learn about our development builds on the [development builds wiki](https://github.com/istio/istio/wiki/Dev%20Builds). You can find high-level release notes for each minor and patch release [here](/news). ## Naming scheme Our naming scheme is as follows: {{< text plain >}} <major>.<minor>.<patch> {{< /text >}} where `<minor>` is increased for each release, and `<patch>` counts the number of patches for the current `<minor>` release. A patch is usually a small change relative to the `<minor>` release. ## Control Plane/Data Plane Skew The Istio control plane can be one version ahead of the data plane. However, the data plane cannot be ahead of the control plane. We recommend using [revisions](/docs/setup/upgrade/canary/) so that there is no skew at all. As of now, data plane to data plane is compatible across all versions; however, this may change in the future. ## Support status of Istio releases {{< support\_status\_table >}} ## Supported releases without known Common Vulnerabilities and Exposures (CVEs) {{< warning >}} Istio does not guarantee that minor releases that fall outside the support window have all known CVEs patched. Please keep up-to-date and use a supported version. 
{{< /warning >}} | Minor Releases | Patched versions with no known CVEs | |----------------|-------------------------------------| | 1.28.x | 1.28.2+ | | 1.27.x | 1.27.5+ | | 1.26.x | 1.26.8+ | ## Supported Envoy Versions Istio's data plane is based on [Envoy](https://github.com/envoyproxy/envoy). The relationship between the two project's versions: | Istio version | Envoy release branch | |---------------|----------------------| | 1.28.x | release/v1.36 | | 1.27.x | release/v1.35 | | 1.26.x | release/v1.34 | You can find the precise Envoy commit used by Istio [in the `istio/proxy` repository](https://github.com/istio/proxy/blob/{{< source\_branch\_name >}}/WORKSPACE#L26): look for the `ENVOY\_SHA` variable. | https://github.com/istio/istio.io/blob/master//content/en/docs/releases/supported-releases/index.md | master | istio | [
...] | 0.509646 |
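The support-window rule in the table (a minor release N is supported until six weeks after the N+2 minor release ships, e.g. 1.11 until six weeks after 1.13.0) is simple arithmetic. A small sketch (hypothetical helper; feed it the N+2 release date):

```python
from datetime import date, timedelta

def support_ends(n_plus_2_release_date):
    """EOL of minor release N: six weeks after the N+2 minor release ships."""
    return n_plus_2_release_date + timedelta(weeks=6)
```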
To provide clarity to our users, use the standard terms in this section consistently within the documentation. ## Service Avoid using the term \*\*service\*\*. Research shows that different folks understand different things under that term. The following table shows acceptable alternatives that provide greater specificity and clarity to readers: |Do | Don't |--------------------------------------------|----------------------------------------- | Workload A sends a request to Workload B. | Service A sends a request to Service B. | New workload instances start when ... | New service instances start when ... | The application consists of two workloads. | The service consists of two services. Our glossary establishes the agreed-upon terminology, and provides definitions to avoid confusion. ## Envoy We prefer to use "Envoy" as it's a more concrete term than "proxy" and resonates if used consistently throughout the docs. Synonyms: - "Envoy sidecar" - ok - "Envoy proxy" - ok - "The Istio proxy" -- best to avoid unless you're talking about advanced scenarios where another proxy might be used. - "Sidecar" -- mostly restricted to conceptual docs - "Proxy" -- only if context is obvious Related Terms: - Proxy agent - This is a minor infrastructural component and should only show up in low-level detail documentation. It is not a proper noun. ## Miscellaneous |Do | Don't |----------------|------ | addon | `add-on` | Bookinfo | `BookInfo`, `bookinfo` | certificate | `cert` | colocate | `co-locate` | configuration | `config` | delete | `kill` | Kubernetes | `kubernetes`, `k8s` | load balancing | `load-balancing` | Mixer | `mixer` | multicluster | `multi-cluster` | mutual TLS | `mtls` | service mesh | `Service Mesh` | sidecar | `side-car`, `Sidecar` | https://github.com/istio/istio.io/blob/master//content/en/docs/releases/contribute/terminology/index.md | master | istio | [
...] | 0.1553 |
To contribute new documentation to Istio, just follow these steps: 1. Identify the audience and intended use for the information. 1. Choose the [type of content](#content-types) you wish to contribute. 1. [Choose a title](#choosing-a-title). 1. Write your contribution following our [documentation contribution guides](/docs/releases/contribute). 1. Submit your contribution to our [GitHub repository](https://github.com/istio/istio.io). 1. Follow our [review process](/docs/releases/contribute/review) until your contribution is merged. ## Identify the audience and intended use The best documentation starts by knowing the intended readers, their knowledge, and what you expect them to do with the information. Otherwise, you cannot determine the appropriate scope and depth of information to provide, its ideal structure, or the necessary supporting information. The following examples show this principle in action: - The reader needs to perform a specific task: Tell them how to recognize when the task is necessary and provide the task itself as a list of numbered steps, donβt simply describe the task in general terms. - The reader must understand a concept before they can perform a task: Before the task, tell them about the prerequisite information and provide a link to it. - The reader needs to make a decision: Provide the conceptual information necessary to know when to make the decision, the available options, and when to choose one option instead of the other. - The reader is an administrator but not a SWE: Provide a script, not a link to a code sample in a developerβs guide. - The reader needs to extend the features of the product: Provide an example of how to extend the feature, using a simplified scenario for illustration purposes. - The reader needs to understand complex feature relationships: Provide a diagram showing the relationships, rather than writing multiple pages of content that is tedious to read and understand. 
The most important thing to avoid is the common mistake of simply giving readers all the information you have, because you are unsure about what information they need. If you need help identifying the audience for your content, we are happy to help and answer all your questions during the [Docs Working Group](https://github.com/istio/community/blob/master/WORKING-GROUPS.md#istio-working-groups) biweekly meetings. ## Content types When you understand the audience and the intended use for the information you provide, you can choose the content type that best addresses their needs. To make it easy for you to choose, the following table shows the supported content types, their intended audiences, and the goals each type strives to achieve: | Content type | Goals | Audiences | | --- | --- | --- | | Concepts | Explain some significant aspect of Istio. For example, a concept page describes the configuration model of a feature and explains its functionality. Concept pages don't include sequences of steps. Instead, provide links to corresponding tasks. | Readers that want to understand how features work with only basic knowledge of the project. | | Reference pages | Provide exhaustive and detailed technical information. Common examples include API parameters, command-line options, configuration settings, and advanced procedures. Reference content is generated from the Istio code base and tested for accuracy. | Readers with advanced and deep technical knowledge of the project that need specific bits of information to complete advanced tasks. | | Examples | Describe a working and stand-alone example that highlights a set of features, an integration of Istio with other projects, or an end-to-end solution for a use case. Examples must use an existing Istio setup as a starting point. Examples must include an automated test since they are maintained for technical accuracy. | Readers that want to quickly run the example themselves and experiment. 
Ideally, | https://github.com/istio/istio.io/blob/master//content/en/docs/releases/contribute/add-content/index.md | master | istio | [
...] | 0.506754 |
of Istio with other projects, or an end-to-end solution for a use case. Examples must use an existing Istio setup as a starting point. Examples must include an automated test since they are maintained for technical accuracy. | Readers that want to quickly run the example themselves and experiment. Ideally, readers should be able to easily change the example to produce their own solutions. | | Tasks | Shows how to achieve a single goal using Istio features. Tasks contain procedures written as a sequence of steps. Tasks provide minimal explanation of the features, but include links to the concepts that provide the related background and knowledge. Tasks must include automated tests since they are tested and maintained for technical accuracy. | Readers that want to use Istio features. | | Setup pages | Focus on the installation steps needed to complete an Istio deployment. Setup pages must include automated tests since they are tested and maintained for technical accuracy. | New and existing Istio users that want to complete a deployment. | | Blog posts | Focus on Istio or products and technologies related to it. Blog posts fall in one of the following three categories: * Posts detailing the authorβs experience using and configuring Istio, especially those that articulate a novel experience or perspective. * Posts highlighting Istio features. * Posts detailing how to accomplish a task or fulfill a specific use case using Istio. Unlike Tasks and Examples, the technical accuracy of blog posts is not maintained and tested after publication. | Readers with a basic understanding of the project who want to learn about it in an anecdotal, experiential, and more informal way. | | News entries | Focus on timely information about Istio and related events. News entries typically announce new releases or upcoming events. | Readers that want to quickly learn what's new and what's happening in the Istio community. | | FAQ entries | Provide quick answers to common questions. 
Answers don't introduce any concepts. Instead, they provide practical advice or insights. Answers must link to tasks, concepts, or examples in the documentation for readers to learn more. | Readers with specific questions who are looking for brief answers and resources to learn more. | | Operation guides | Focus on practical solutions that address specific problems encountered while running Istio in a real-world setting. | Service mesh operators that want to fix problems or implement solutions for running Istio deployments. | ## Choosing a title Choose a title for your topic that has the keywords you want search engines to find. All content files in Istio are named `index.md`, but each content file is within a folder that uses the keywords in the topic's title, separated by hyphens, all in lowercase. Keep folder names as short as possible to make cross-references easier to create and maintain. ## Submit your contribution to GitHub If you are not familiar with GitHub, see our [working with GitHub guide](/docs/releases/contribute/github) to learn how to submit documentation changes. If you want to learn more about how and when your contributions are published, see the [section on branching](/docs/releases/contribute/github#branching-strategy) to understand how we use branches and cherry picking to publish our content. | https://github.com/istio/istio.io/blob/master//content/en/docs/releases/contribute/add-content/index.md | master | istio | [
...] | 0.614791 |
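The folder-naming rule above (title keywords, hyphen-separated, all lowercase) is mechanical enough to sketch. An illustrative helper (hypothetical, not part of the site tooling):

```python
import re

def title_to_folder(title):
    """Turn a topic title into a folder name: lowercase keywords joined by hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)
```

Remember the guidance to keep folder names short, so you would typically drop filler words from the result by hand.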
Welcome to the Istio diagram guide! The guide is available as an [SVG file](./diagram-guidelines.svg) or as a [Google Drawings file](https://docs.google.com/drawings/d/1f3NyutAQIDOA8ojGNyMA5JAJllDShZGQAFfdD01XdSc/edit) to allow you to reuse the shapes and styles with ease. Use these guidelines to create SVG diagrams for the Istio website using any vector graphics tool like Google Drawings, Inkscape, or Illustrator. Please ensure that the text in your diagrams remains editable. Our goal is to drive consistency across all diagrams in our website to ensure diagrams are clear, technically accurate, and accessible. Keeping the text editable allows the community to improve and change the diagrams as needed. To create your diagrams, follow these steps: 1. Refer to the [guide](./diagram-guidelines.svg) and copy-paste from it as needed. 1. Connect the shapes with the appropriate style of line. 1. Label the shapes and lines with descriptive yet short text. 1. Add a legend for any labels that apply multiple times. 1. [Contribute](/docs/releases/contribute/add-content) your diagram to our documentation. If you create the diagram in Google Drawings, follow these steps: 1. Put your diagram in our [shared drive](https://drive.google.com/corp/drive/u/0/folders/17r1m4nfyr9xbfbpMqZsreMvFLCD4bgvx). 1. When the diagram is complete, export it as SVG and include the SVG file in your PR. 1. Leave a comment in the Markdown file containing the diagram with the URL to the Google Drawings file. If your diagram depicts a process, \*\*do not add the descriptions of the steps\*\* to the diagram. Instead, only add the numbers of the steps to the diagram and add the descriptions of the steps as a numbered list in the document. Ensure that the numbers on the list match the numbers on your diagram. This approach helps make diagrams easier to understand and the content more accessible. Thank you for contributing to the Istio documentation! 
{{< image width="75%" link="./diagram-guidelines.svg" alt="The Istio diagram creation guidelines in SVG format." title="The Istio Diagram Creation Guidelines" >}} | https://github.com/istio/istio.io/blob/master//content/en/docs/releases/contribute/diagrams/index.md | master | istio | [
...] | 0.475885 |
To remove documentation from Istio, please follow these simple steps:

1. Remove the page.
1. Reconcile the broken links.
1. Submit your contribution to GitHub.

## Remove the page

Use `git rm -rf` to remove the directory containing the `index.md` page.

## Reconcile broken links

To reconcile broken links, use this flowchart:

{{< image width="100%" link="./remove-documentation.svg" alt="Remove Istio documentation." caption="Remove Istio documentation" >}}

## Submit your contribution to GitHub

If you are not familiar with GitHub, see our [working with GitHub guide](/docs/releases/contribute/github) to learn how to submit documentation changes. If you want to learn more about how and when your contributions are published, see the [section on branching](/docs/releases/contribute/github#branching-strategy) to understand how we use branches and cherry picking to publish our content.
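Reconciling broken links amounts to finding the pages that still reference the removed path. A rough sketch of such a scan follows (this is not an official istio.io tool; the function name is ours, purely illustrative):

```python
# Hypothetical helper: after removing a page, list the Markdown files that
# still mention its path, so each stale link can be updated or removed.
import pathlib

def find_stale_links(content_dir: str, removed_path: str) -> list[str]:
    """Return the Markdown files under content_dir that still mention removed_path."""
    return sorted(
        str(md)
        for md in pathlib.Path(content_dir).rglob("*.md")
        if removed_path in md.read_text(encoding="utf-8")
    )
```

Each file this returns still links to the removed page and needs attention per the flowchart.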
Hugo shortcodes are special placeholders with a certain syntax that you can add to your content to create dynamic content experiences, such as tabs, images and icons, links to other pages, and special content layouts. This page explains the available shortcodes and how to use them in your content.

## Add images

Place image files in the same directory as the markdown file using them. To make localization easier and enhance accessibility, the preferred image format is SVG. The following example shows the shortcode with the required fields needed to add an image:

{{< text html >}}
{{</* image link="<path_to_image>" caption="<caption>" */>}}
{{< /text >}}

The `link` and `caption` fields are required, but the shortcode also supports optional fields, for example:

{{< text html >}}
{{</* image width="<width>" ratio="<ratio>" link="<path_to_image>" alt="<alt_text>" title="<title>" caption="<caption>" */>}}
{{< /text >}}

If you don't include the `title` field, Hugo uses the text set in `caption`. If you don't include the `alt` field, Hugo uses the text in `title`, or in `caption` if `title` is also not defined.

The `width` field sets the size of the image relative to the surrounding text and has a default of 100%. The `ratio` field sets the height of the image relative to its width. Hugo calculates this value automatically for image files in the folder. However, you must calculate it manually for external images. Set the value of `ratio` to `([image height]/[image width]) * 100`.

## Add icons

You can embed common icons in your content with the following shortcodes:

{{< text markdown >}}
{{</* warning_icon */>}}
{{</* idea_icon */>}}
{{</* checkmark_icon */>}}
{{</* cancel_icon */>}}
{{</* tip_icon */>}}
{{< /text >}}

The icons are rendered within the text. For example: {{< warning_icon >}}, {{< idea_icon >}}, {{< checkmark_icon >}}, {{< cancel_icon >}} and {{< tip_icon >}}.

## Add links to other pages

The Istio documentation supports three types of links depending on their target. Each type uses a different syntax to express the target.

- **External links**. These are links to pages outside of the Istio documentation or the Istio GitHub repositories. Use the standard Markdown syntax to include the URL. Use the HTTPS protocol when you reference files on the Internet, for example:

    {{< text markdown >}}
    [Descriptive text for the link](https://mysite/myfile.html)
    {{< /text >}}

- **Relative links**. These links target pages at the same level as the current file or further down the hierarchy. Start the path of relative links with a period `.`, for example:

    {{< text markdown >}}
    [This links to a sibling or child page](./sub-dir/child-page.html)
    {{< /text >}}

- **Absolute links**. These links target pages outside the hierarchy of the current page but within the Istio website. Start the path of absolute links with a slash `/`, for example:

    {{< text markdown >}}
    [This links to a page in the about section](/about/page/)
    {{< /text >}}

Regardless of type, links do not point to the `index.md` file with the content, but to the folder containing it.

### Add links to content on GitHub

There are a few ways to reference content or files on GitHub:

- **{{</* github_file */>}}** is how you reference individual files in GitHub such as YAML files. This shortcode produces a link to `https://raw.githubusercontent.com/istio/istio*`, for example:

    {{< text markdown >}}
    [liveness]({{</* github_file */>}}/samples/health-check/liveness-command.yaml)
    {{< /text >}}

- **{{</* github_tree */>}}** is how you reference a directory tree in GitHub. This shortcode produces a link to `https://github.com/istio/istio/tree*`, for example:

    {{< text markdown >}}
    [httpbin]({{</* github_tree */>}}/samples/httpbin)
    {{< /text >}}
- **{{</* github_blob */>}}** is how you reference a file in GitHub sources. This shortcode produces a link to `https://github.com/istio/istio/blob*`, for example:

    {{< text markdown >}}
    [RawVM MySQL]({{</* github_blob */>}}/samples/rawvm/README.md)
    {{< /text >}}

The shortcodes above produce links to the appropriate branch in GitHub, based on the branch the documentation is currently targeting. To verify which branch is currently targeted, you can use the `{{</* source_branch_name */>}}` shortcode to get the name of the currently targeted branch.

## Version information

To display the current Istio version in your content by retrieving the current version from the web site, use the following shortcodes:

- `{{</* istio_version */>}}`, which renders as {{< istio_version >}}
- `{{</* istio_full_version */>}}`, which renders as {{< istio_full_version >}}

## Glossary terms

When you introduce a specialized Istio term in a page, the supplemental acceptance criteria for contributions require that you include the term in the glossary and mark up its first instance using the `{{</* gloss */>}}` shortcode. The shortcode produces a special rendering that invites readers to click on the term to get a pop-up with the definition. For example:

{{< text markdown >}}
The Istio component that programs the {{</* gloss */>}}Envoy{{</* /gloss */>}} proxies, responsible for service discovery, load balancing, and routing.
{{< /text >}}

is rendered as follows:

The Istio component that programs the {{< gloss >}}Envoy{{< /gloss >}} proxies, responsible for service discovery, load balancing, and routing.

If you use a variant of the term in your text, you can still use this shortcode to include the pop-up with the definition. To specify a substitution, include the glossary entry within the shortcode. For example:

{{< text markdown >}}
{{</* gloss envoy */>}}Envoy's{{</* /gloss */>}} HTTP support was designed to first and foremost be an HTTP/2 multiplexing proxy.
{{< /text >}}

Renders with the pop-up for the `envoy` glossary entry as follows:

{{< gloss envoy >}}Envoy's{{< /gloss >}} HTTP support was designed to first and foremost be an HTTP/2 multiplexing proxy.

## Callouts

To emphasize blocks of content, you can format them as warnings, ideas, tips, or quotes. All callouts use very similar shortcodes:

{{< text markdown >}}
{{</* warning */>}}
This is an important warning
{{</* /warning */>}}

{{</* idea */>}}
This is a great idea
{{</* /idea */>}}

{{</* tip */>}}
This is a useful tip from an expert
{{</* /tip */>}}

{{</* quote */>}}
This is a quote from somewhere
{{</* /quote */>}}
{{< /text >}}

The shortcodes above render as follows:

{{< warning >}}
This is an important warning
{{< /warning >}}

{{< idea >}}
This is a great idea
{{< /idea >}}

{{< tip >}}
This is a useful tip from an expert
{{< /tip >}}

{{< quote >}}
This is a quote from somewhere
{{< /quote >}}

Use callouts sparingly. Each type of callout serves a specific purpose, and overusing them negates their intended purpose and their efficacy. Generally, you should not include more than one callout per content file.

## Use boilerplate text

To reuse content while maintaining a single source for it, use boilerplate shortcodes. To embed boilerplate text into any content file, use the `boilerplate` shortcode as follows:

{{< text markdown >}}
{{</* boilerplate example */>}}
{{< /text >}}

The shortcode above includes the following content from the `example.md` Markdown file in the `/content/en/boilerplates/` folder:

{{< boilerplate example >}}

The example shows that you need to include the filename of the Markdown file with the content you wish to insert at the current location. The existing boilerplate files are located in the `/content/en/boilerplates` directory.

## Use tabs

To display content that has multiple options or formats, use tab sets and tabs.
For example:

- Equivalent commands for different platforms
- Equivalent code samples in different languages
- Alternative configurations

To insert tabbed content, combine the `tabset` and `tabs` shortcodes, for example:

{{< text markdown >}}
{{</* tabset category-name="platform" */>}}

{{</* tab name="One" category-value="one" */>}}
ONE
{{</* /tab */>}}

{{</* tab name="Two" category-value="two" */>}}
TWO
{{</* /tab */>}}

{{</* tab name="Three" category-value="three" */>}}
THREE
{{</* /tab */>}}

{{</* /tabset */>}}
{{< /text >}}

The shortcodes above produce the following output:

{{< tabset category-name="platform" >}}

{{< tab name="One" category-value="one" >}}
ONE
{{< /tab >}}

{{< tab name="Two" category-value="two" >}}
TWO
{{< /tab >}}

{{< tab name="Three" category-value="three" >}}
THREE
{{< /tab >}}

{{< /tabset >}}

The value of the `name` attribute of each tab contains the text displayed for the tab. Within each tab, you can have normal Markdown content, but tabs have [limitations](#tab-limitations).

The `category-name` and `category-value` attributes are optional and make the selected tab stick across visits to the page. For example, when a visitor selects a tab, their selection is saved automatically with the given name and value. If multiple tab sets use the same category name and values, their selection is automatically synchronized across pages. This is particularly useful when there are many tab sets on the site that hold the same types of formats.

For example, multiple tab sets could provide options for `GCP`, `BlueMix` and `AWS`. You can set the value of the `category-name` attribute to `environment` and the values of the `category-value` attributes to `gcp`, `bluemix`, and `aws`. Then, when a reader selects a tab on one page, their choice carries throughout all tab sets across the website automatically.

### Tab limitations

You can use almost any Markdown in a tab, with the following exceptions:

- *Headers*.
Headers in a tab appear in the table of contents, but clicking the link in the table of contents won't automatically select the tab.
- *Nested tab sets*. Don't nest tab sets. Doing so leads to a terrible reading experience and can cause significant confusion.

## Use banners and stickers

To advertise upcoming events or publicize something new, you can automatically insert time-sensitive banners and stickers into the generated site. We've implemented the following shortcodes for promotions:

- **Countdown stickers**: They show how much time is left before a big event. For example: "37 days left until ServiceMeshCon on March 30". Stickers have some visual impact for readers prior to the event and should be used sparingly.
- **Banners**: They show a prominent message to readers about a significant event that is about to take place, is taking place, or has taken place. For example, "Istio 1.5 has been released, download it today!" or "Join us at ServiceMeshCon on March 30". Banners are full-screen slices displayed to readers during the event period.

To create banners and stickers, you create Markdown files in either the `events/banners` or `events/stickers` folders. Create one Markdown file per event with dedicated front-matter fields to control their behavior. The following table explains the available options:

| Field | Description |
| --- | --- |
| `title` | The name of the event. This is not displayed on the web site; it's intended for diagnostic messages. |
| `period_start` | The starting date at which to start displaying the item, in `YYYY-MM-DD` format. Instead of a date, this can also be the value `latest_release`, which then uses the latest known Istio release as the start date. This is useful when creating a banner saying "Istio x.y.z has just been released". |
| `period_end` | The last date on which to display the item, in `YYYY-MM-DD` format. This value is mutually exclusive with `period_duration` below. |
| `period_duration` | How many days to display the item to the user. This value is mutually exclusive with `period_end` above. |
| `max_impressions` | How many times to show the content to the user during the event's period. A value of 3 means the first three pages visited by the user during the period display the content, and the content is hidden on subsequent page loads. A value of 0, or omitting the field completely, results in the content being displayed on all page visits during the period. |
| `timeout` | The amount of time the content is visible to the user on a given page. After that much time passes, the item is removed from the page. |
| `link` | You can specify a URL, which turns the whole item into a clickable target. When the user clicks on the item, the item is no longer shown to the user. The special value `latest_release` can be used here to introduce a link to the current release's announcement page. |
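The period and `max_impressions` rules described in the table above can be sketched as a small helper (function and parameter names are ours, purely illustrative of the behavior, not Istio's implementation):

```python
# Sketch of the banner/sticker display rules: show the item only inside its
# event period, and honor max_impressions (0 or omitted means always show).
from datetime import date

def should_display(today: date, start: date, end: date,
                   impressions_so_far: int, max_impressions: int = 0) -> bool:
    if not (start <= today <= end):   # outside the event period
        return False
    if max_impressions == 0:          # 0 or omitted: show on every page visit
        return True
    return impressions_so_far < max_impressions

print(should_display(date(2020, 3, 15), date(2020, 3, 1), date(2020, 3, 30), 2, 3))  # True
print(should_display(date(2020, 3, 15), date(2020, 3, 1), date(2020, 3, 30), 3, 3))  # False
```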
The front matter is YAML code between triple-dashed lines at the top of each file and provides important management options for our content. For example, the front matter allows us to ensure that existing links continue to work for pages that are moved or deleted entirely. This page explains the features currently available for front matter in Istio.

The following example shows a front matter with all the required fields filled with placeholders:

{{< text yaml >}}
---
title: <title>
description: <description>
weight: <weight>
keywords: [<keyword1>,<keyword2>,...]
aliases:
    - <previously-published-at-this-URL>
---
{{< /text >}}

You can copy the example above and replace all the placeholders with the appropriate values for your page.

## Required front matter fields

The following table shows descriptions for all the **required** fields:

|Field | Description
|-------------------|------------
|`title` | The page's title.
|`description` | A one-line description of the content on the page.
|`weight` | The order of the page relative to the other pages in the directory.
|`keywords` | The keywords on the page. Hugo uses this list to create the links under "See Also".
|`aliases` | Past URLs where the page was published. See [Renaming, moving, or deleting pages](#rename-move-or-delete-pages) below for details on this item.

### Rename, move, or delete pages

When you move pages or delete them completely, you must ensure that the existing links to those pages continue to work. The `aliases` field in the front matter helps you meet this requirement. Add the path to the page before the move or deletion to the `aliases` field. Hugo implements automatic redirects from the old URL to the new URL for our users.

On the _target page_, which is the page where you want users to land, add the `<path>` of the _original page_ to the front matter as follows:

{{< text plain >}}
aliases:
    - <path>
{{< /text >}}

For example, you could find our FAQ page in the past under `/help/faq`.
To help our users find the FAQ page, we moved the page one level up to `/faq/` and changed the front matter as follows:

{{< text plain >}}
---
title: Frequently Asked Questions
description: Questions Asked Frequently.
weight: 13
aliases:
    - /help/faq
---
{{< /text >}}

The change above allows any user to access the FAQ when they visit `https://istio.io/faq/` or `https://istio.io/help/faq/`. Multiple redirects are supported, for example:

{{< text plain >}}
---
title: Frequently Asked Questions
description: Questions Asked Frequently.
weight: 13
aliases:
    - /faq
    - /faq2
    - /faq3
---
{{< /text >}}

## Optional front matter fields

Hugo supports many front matter fields, and this page only covers those implemented on istio.io. The following table shows the most commonly used **optional** fields:

|Field | Description
|-------------------|------------
|`linktitle` | A shorter version of the title that is used for links to the page.
|`subtitle` | A subtitle displayed below the main title.
|`icon` | A path to the image that appears next to the title.
|`draft` | If true, the page is not shown in the site's navigation.
|`skip_byline` | If true, Hugo doesn't show a byline under the main title.
|`skip_seealso` | If true, Hugo doesn't generate a "See also" section for the page.

Some front matter fields control the auto-generated table of contents (ToC). The following table shows the fields and explains how to use them:
|Field | Description
|--------------------|------------
|`skip_toc` | If true, Hugo doesn't generate a ToC for the page.
|`force_inline_toc` | If true, Hugo inserts an auto-generated ToC in the text instead of in the sidebar to the right.
|`max_toc_level` | Sets the heading levels used in the ToC. Values can go from 2 to 6.
|`remove_toc_prefix` | Hugo removes this string from the beginning of every entry in the ToC.

Some front matter fields only apply to so-called _bundle pages_. You can identify bundle pages because their file names begin with an underscore `_`, for example `_index.md`. In Istio, we use bundle pages as our section landing pages. The following table shows the front matter fields pertinent to bundle pages:

|Field | Description
|----------------------|------------
|`skip_list` | If true, Hugo doesn't auto-generate the content tiles of a section page.
|`simple_list` | If true, Hugo uses a simple list for the auto-generated content of a section page.
|`list_below` | If true, Hugo inserts the auto-generated content below the manually-written content.
|`list_by_publishdate` | If true, Hugo sorts the auto-generated content by publication date, instead of by weight.

Similarly, some front matter fields apply specifically to blog posts. The following table shows those fields:

|Field | Description
|-----------------|------------
|`publishdate` | Date of the post's original publication.
|`last_update` | Date when the post last received a major revision.
|`attribution` | Optional name of the post's author.
|`twitter` | Optional Twitter handle of the post's author.
|`target_release` | The release used on this blog. Normally, this value is the current major Istio release at the time the blog is authored or updated.
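The alias mechanism described earlier under "Rename, move, or delete pages" amounts to an old-URL → new-URL map. A toy sketch of the resulting behavior follows (Hugo generates real redirect pages; this is purely illustrative, and the function name is ours):

```python
# Sketch: each page maps to the list of aliases (old paths) that should
# redirect to it, mirroring the `aliases` front matter field.
def build_redirects(pages: dict[str, list[str]]) -> dict[str, str]:
    """Map every alias (old path) to the page path that now hosts the content."""
    return {alias: new for new, aliases in pages.items() for alias in aliases}

# The FAQ example from this page: /help/faq now redirects to /faq/.
redirects = build_redirects({"/faq/": ["/help/faq"]})
print(redirects["/help/faq"])  # -> /faq/
```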
The Istio documentation follows the standard [GitHub collaboration flow](https://guides.github.com/introduction/flow/) for Pull Requests (PRs). This well-established collaboration model helps open source projects manage the following types of contributions:

- [Add](/docs/releases/contribute/add-content) new files to the repository.
- [Edit](#quick-edit) existing files.
- [Review](/docs/releases/contribute/review) the added or modified files.
- Manage multiple release or development [branches](#branching-strategy).

The contribution guides assume you can complete the following tasks:

- Fork the [Istio documentation repository](https://github.com/istio/istio.io).
- Create a branch for your changes.
- Add commits to that branch.
- Open a PR to share your contribution.

## Before you begin

To contribute to the Istio documentation, you need to:

1. Create a [GitHub account](https://github.com).
1. Sign the [Contributor License Agreement](https://github.com/istio/community/blob/master/CONTRIBUTING.md#contributor-license-agreements).
1. Install [Docker](https://www.docker.com/get-started) on your authoring system to preview and test your changes.

The Istio documentation is published under the [Apache 2.0](https://github.com/istio/community/blob/master/LICENSE) license.

## Perform quick edits {#quick-edit}

Anyone with a GitHub account who signs the CLA can contribute a quick edit to any page on the Istio website. The process is very simple:

1. Visit the page you wish to edit.
1. Add `preliminary` to the beginning of the URL. For example, to edit `https://istio.io/about`, the new URL should be `https://preliminary.istio.io/about`.
1. Click the pencil icon in the lower right corner.
1. Perform your edits in the GitHub UI.
1. Submit a Pull Request with your changes.

Please see our guides on how to [contribute new content](/docs/releases/contribute/add-content) or [review content](/docs/releases/contribute/review) to learn more about submitting more substantial changes.
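The URL rewrite in step 2 of the quick-edit process can be sketched as a one-liner (a toy helper, not an official tool; the function name is ours):

```python
# Sketch of the quick-edit URL rewrite: istio.io -> preliminary.istio.io.
def preliminary_url(url: str) -> str:
    """Rewrite an istio.io page URL to its preliminary-site equivalent."""
    return url.replace("https://istio.io", "https://preliminary.istio.io", 1)

print(preliminary_url("https://istio.io/about"))  # -> https://preliminary.istio.io/about
```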
## Branching strategy {#branching-strategy}

Active content development takes place on the master branch of the `istio/istio.io` repository. On the day of an Istio release, we create a release branch from master for that release. The following button takes you to the repository on GitHub:

[Browse this site's source code](https://github.com/istio/istio.io/)

The Istio documentation repository uses multiple branches to publish documentation for all Istio releases. Each Istio release has a corresponding documentation branch. For example, there are branches called `release-1.0`, `release-1.1`, `release-1.2`, and so forth. These branches were created on the day of the corresponding release. To view the documentation for a specific release, see the [archive page](https://archive.istio.io/).

This branching strategy allows us to provide the following Istio online resources:

- The [public site](/docs/) shows the content from the current release branch.
- The preliminary site at `https://preliminary.istio.io` shows the content from the master branch.
- The [archive site](https://archive.istio.io) shows the content from all prior release branches.

Given how branching works, if you submit a change to the master branch, that change won't appear on `istio.io` until the next major Istio release happens. If your documentation change is relevant to the current Istio release, it's probably worth also applying your change to the current release branch. You can do this easily and automatically by using the special cherry-pick labels on your documentation PR.

For example, if you introduce a correction in a PR to the master branch, you can apply the `cherrypick/release-1.4` label in order to merge this change to the `release-1.4` branch. When your initial PR is merged, automation creates a new PR in the release branch which includes your changes.

On rare occasions, automatic cherry picks don't work.
When that happens, the automation leaves a note in the original PR indicating it failed, and you must manually create the cherry pick and deal with the merge issues that prevented the process from working automatically.

Note that we only ever cherry pick changes into the current release branch, and never into older branches. Older branches are considered archived and generally no longer receive any changes.
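The cherry-pick label convention described above simply encodes the target branch in the label name. A toy sketch of that mapping (the function name is ours, purely illustrative):

```python
# Sketch: a `cherrypick/<branch>` label names the release branch the
# automation should open the follow-up PR against.
from typing import Optional

def target_branch(label: str) -> Optional[str]:
    """Return the release branch a cherry-pick label points at, or None."""
    prefix = "cherrypick/"
    return label[len(prefix):] if label.startswith(prefix) else None

print(target_branch("cherrypick/release-1.4"))  # -> release-1.4
```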
## Istio community roles

Depending on your contributions and responsibilities, there are several roles you can assume. Visit our [role summary page](https://github.com/istio/community/blob/master/ROLES.md#role-summary) to learn about the roles, the related requirements and responsibilities, and the privileges associated with the roles. Visit our [community page](https://github.com/istio/community) to learn more about the Istio community in general.
This page shows the formatting standards for the Istio documentation. Istio uses Markdown to mark up the content and Hugo to build the website. To ensure consistency across our documentation, we have agreed on these formatting standards.

## Don't use capitalization for emphasis

Only use the original capitalization found in the code or configuration files when referencing those values directly. Use back-ticks `` ` `` around the referenced value to make the connection explicit. For example, use `IstioRoleBinding`, not `Istio Role Binding` or `istio role binding`.

If you are not referencing values or code directly, use normal sentence capitalization, for example, "The Istio role binding configuration takes place in a YAML file."

## Use angle brackets for placeholders

Use angle brackets for placeholders in commands or code samples. Tell the reader what the placeholder represents. For example:

{{< text markdown >}}
1. Display information about a pod:

    {{</* text bash */>}}
    $ kubectl describe pod <pod-name>
    {{</* /text */>}}

    Where `<pod-name>` is the name of one of your pods.
{{< /text >}}

## Use **bold** to emphasize user interface elements

|Do | Don't
|------------------|------
|Click **Fork**. | Click "Fork".
|Select **Other**. | Select 'Other'.

## Use _italics_ to emphasize new terms

|Do | Don't
|-------------------------------------------|---
|A _cluster_ is a set of nodes ... | A "cluster" is a set of nodes ...
|These components form the _control plane_. | These components form the **control plane**.

Use the `gloss` shortcode and add glossary entries for new terms.

## Use `back-ticks` around file names, directories, and paths

|Do | Don't
|-------------------------------------|------
|Open the `foo.yaml` file. | Open the foo.yaml file.
|Go to the `/content/en/docs/tasks` directory. | Go to the /content/en/docs/tasks directory.
|Open the `/data/args.yaml` file. | Open the /data/args.yaml file.
## Use `back-ticks` around inline code and commands

|Do | Don't
|----------------------------|------
|The `foo run` command creates a `Deployment`. | The "foo run" command creates a `Deployment`.
|For declarative management, use `foo apply`. | For declarative management, use "foo apply".

Use code-blocks for commands you intend readers to execute. Only use inline code and commands to mention specific labels, flags, values, functions, objects, variables, modules, or commands.

## Use `back-ticks` around object field names

|Do | Don't
|-----------------------------------------------------------------|------
|Set the value of the `ports` field in the configuration file. | Set the value of the "ports" field in the configuration file.
|The value of the `rule` field is a `Rule` object. | The value of the "rule" field is a `Rule` object.
The maintainers and working group leads of the Istio Docs Working Group (WG) approve all changes to the [Istio website](/docs/). A **documentation reviewer** is a trusted contributor who approves content that meets the acceptance criteria described in the [review criteria](#review-criteria). All content reviews follow the process described in [Reviewing content PRs](#review-content-prs). Only Docs Maintainers and WG Leads can merge content into the [istio.io repository](https://github.com/istio/istio.io).

Content for Istio often needs to be reviewed on short notice, and not all content has the same relevance. The last-minute nature of contributions and the finite number of reviewers make the prioritization of content reviews necessary to function at scale. This page provides clear review criteria to ensure all review work happens **consistently**, **reliably**, and follows the **same quality standards**.

## Review content PRs

Documentation reviewers, maintainers, and WG leads follow a clear process to review content PRs to ensure all reviews are consistent. The process is as follows:

1. The **Contributor** submits a new content PR to the istio.io repository.
1. The **Reviewer** performs a review of the content and determines if it meets the acceptance criteria.
1. The **Reviewer** adds any technical WG pertinent for the content if the contributor hasn't already.
1. The **Contributor** and the **Reviewer** work together until the content meets all required acceptance criteria and the issues are addressed.
1. If the content is urgent and meeting the supplemental acceptance criteria requires significant effort, the **Reviewer** files a follow-up issue on the istio.io repository to address the problems at a later date.
1. The **Contributor** addresses all required and supplemental feedback as agreed by the Reviewer and Contributor. Any feedback filed in the follow-up issues is addressed later.
1.
When a \*\*technical\*\* WG lead or maintainer approves the content PR, the \*\*Reviewer\*\* can approve the PR. 1. If a Docs WG maintainer or lead reviewed the content, they not only approve, but they also merge the content. Otherwise, maintainers and leads are automatically notified of the \*\*Reviewer's\*\* approval and prioritize approving and merging the already reviewed content. The following diagram depicts the process: {{< image width="75%" ratio="45.34%" link="./review-process.svg" alt="Documentation review process" title="Documentation review process" >}} - \*\*Contributors\*\* perform the steps in the gray shapes. - \*\*Reviewers\*\* perform the steps in the blue shapes. - \*\*Docs Maintainers and WG Leads\*\* perform the steps in the green shapes. ## Follow up issues When a \*\*Reviewer\*\* files a follow up issue as part of the [review process](#review-content-prs), the GitHub issue must include the following information: - Details about the [supplemental acceptance criteria](#supplemental-acceptance-criteria) the content failed to meet. - Link to the original PR. - Username of the technical Subject Matter Experts (SMEs). - Labels to sort the issues. - Estimate of work: Reviewers provide their best estimate of how long it would take to address the remaining issues working alongside the original contributor. ## Review criteria Our review process supports our [code of conduct](https://www.contributor-covenant.org/version/2/0/code\_of\_conduct) by making our review criteria transparent and applying it to all content contributions. Criteria has two tiers: required and supplemental. ### Required acceptance criteria - Technical accuracy: At least one technical WG lead or maintainer reviews and approves the content. - Correct markup: All linting and tests pass. - Language: Content must be clear and understandable. 
To learn more see the [highlights](https://developers.google.com/style/highlights) and [general principles](https://developers.google.com/style/tone) of the Google developer style guide. - Links and navigation: The content has no broken links and the site builds properly. ### Supplemental acceptance criteria - Content structure: Information structure enhances the readers' experience. - Consistency: Content adheres to all recommendations in the [Istio contribution guides](/docs/releases/contribute/) - Style: Content adheres | https://github.com/istio/istio.io/blob/master//content/en/docs/releases/contribute/review/index.md | master | istio | [
### Supplemental acceptance criteria

- Content structure: Information structure enhances the readers' experience.
- Consistency: Content adheres to all recommendations in the [Istio contribution guides](/docs/releases/contribute/).
- Style: Content adheres to the [Google developer style guide](https://developers.google.com/style).
- Graphic assets: Diagrams follow the Istio [diagram creation guide](/docs/releases/contribute/diagrams/).
- Code samples: Content provides relevant, testable, and working code samples.
- Content reuse: Any repeatable content follows a reusability strategy using boilerplate text.
- Glossary: New terms are added to the glossary with clear definitions.
After making your contribution to our website, ensure the changes render as you expect. To help you preview your changes locally, we provide tools that build and serve the site for you. We use automated tests to check the quality of all contributions. Before submitting your changes in a Pull Request (PR), you should run the tests locally too.

## Before you begin

To guarantee the tests you run locally use the same versions as the tests running on the Istio Continuous Integration (CI), we provide a Docker image with all the tools needed, including our site generator: [Hugo](https://gohugo.io/). To build, test, and preview the site locally, you need to install [Docker](https://www.docker.com/get-started) on your system.

## Preview your changes

To preview your changes to the site, go to the root of your fork of `istio/istio.io` and run the following command:

{{< text bash >}}
$ make serve
{{< /text >}}

If your changes have no build errors, the command builds the site and starts a local web server to host it. To see the local build of the site, go to `http://localhost:1313` in your web browser.

If you need to build and serve the site from a remote server, you can use `ISTIO_SERVE_DOMAIN` to provide the IP address or DNS domain of the server, for example:

{{< text bash >}}
$ make ISTIO_SERVE_DOMAIN=192.168.7.105 serve
{{< /text >}}

The example builds the site and starts a web server, which hosts the site on the remote server at the `192.168.7.105` IP address. As before, you can then connect to the web server at `http://192.168.7.105:1313`.

### Test your changes

We use linters and tests to ensure a quality baseline for the site's content through automated checks. These checks must pass without failure for us to approve your contribution. Make sure you run the checks locally before you submit your changes to the repository through a PR. We perform the following automated checks:

- HTML proofing: ensures all links are valid, along with other checks.
- Spell check: ensures content is spelled correctly.
- Markdown style check: ensures the markup used complies with our Markdown style rules.

To run these checks locally, use the following command:

{{< text bash >}}
$ make lint
{{< /text >}}

If the spell checker reports errors, the following are the most likely causes:

- A real typo: Fix the typo in your Markdown files.
- The error is reported for a command, field, or symbol name: Place `back-ticks` around the content with the error.
- The error is reported for a correct word or proper name not present in the tool's dictionary: Add the word to the `.spelling` file at the root of the `istio/istio.io` repository.

If you have poor Internet connectivity, you could have trouble with the link checker. In that case, you can configure the checker to skip external links: set the `INTERNAL_ONLY` environment variable to `True` when running the linter, for example:

{{< text bash >}}
$ make INTERNAL_ONLY=True lint
{{< /text >}}

When your content passes all the checks, submit it to the repository through a PR. Visit [Working with GitHub](/docs/releases/contribute/github) for more information.

Source: https://github.com/istio/istio.io/blob/master//content/en/docs/releases/contribute/build/index.md
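As a concrete illustration of the dictionary fix described above, the following sketch appends a flagged proper noun to a `.spelling`-style file. A temporary copy is used so the example is self-contained, and the word `Kiali` is just a stand-in for whatever the spell checker flagged:

```shell
# Illustration only: add a correctly spelled proper noun to the
# spell checker's dictionary file (here a temporary stand-in for
# the real .spelling file at the repo root).
spelling=$(mktemp)
printf 'Istio\nEnvoy\n' > "$spelling"   # pre-existing dictionary entries
echo "Kiali" >> "$spelling"             # add the newly flagged word
grep -c '^Kiali$' "$spelling"           # → 1
```

After updating the real `.spelling` file, re-run `make lint` to confirm the error is gone.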
All content accepted into the Istio documentation must be **clear** and **understandable**. The standard we use to make that determination is defined by the [highlights](https://developers.google.com/style/highlights) and the [general principles](https://developers.google.com/style/tone) of the Google developer style guide. See our [review page](/docs/releases/contribute/review) for details on how your content contributions are reviewed and accepted.

Here you can find all the scenarios where the Istio project has decided to follow a different practice, plus basic style guidance with Istio-specific examples.

## Use sentence case for headings

Use sentence case for the headings in your document. Only capitalize the first word of the heading, except for proper nouns or acronyms.

|Do | Don't|
|---|---|
|Configuring rate limits | Configuring Rate Limits|
|Using Envoy for ingress | Using envoy for ingress|
|Using HTTPS | Using https|

## Use title case for the value of the `title:` field of the front-matter

The text for the `title:` field of the front-matter must use title case. Capitalize the first letter of every word except conjunctions and prepositions.

## Use present tense

|Do | Don't|
|---|---|
|This command starts a proxy. | This command will start a proxy.|

Exception: Use future or past tense if it is required to convey the correct meaning. This exception is extremely rare and should be avoided.

## Use active voice

|Do | Don't|
|---|---|
|You can explore the API using a browser. | The API can be explored using a browser.|
|The YAML file specifies the replica count. | The replica count is specified in the YAML file.|

## Use simple and direct language

Use simple and direct language. Avoid using unnecessary phrases, such as saying "please."

|Do | Don't|
|---|---|
|To create a `ReplicaSet`, ... | In order to create a `ReplicaSet`, ...|
|See the configuration file. | Please see the configuration file.|
|View the Pods. | With this next command, we'll view the Pods.|

## Address the reader as "you"

|Do | Don't|
|---|---|
|You can create a `Deployment` by ... | We'll create a `Deployment` by ...|
|In the preceding output, you can see... | In the preceding output, we can see ...|

## Create useful links

There are good hyperlinks, and bad hyperlinks. The common practice of calling links *here* or *click here* is an example of bad hyperlinks. Check out [this excellent article](https://medium.com/@heyoka/dont-use-click-here-f32f445d1021) explaining what makes a good hyperlink, and try to keep these guidelines in mind when creating or reviewing site content.

## Avoid using "we"

Using "we" in a sentence can be confusing, because the reader might not know whether they're part of the "we" you're describing.

|Do | Don't|
|---|---|
|Version 1.4 includes ... | In version 1.4, we have added ...|
|Istio provides a new feature for ... | We provide a new feature ...|
|This page teaches you how to use pods. | In this page, we are going to learn about pods.|

## Avoid jargon and idioms

Some readers speak English as a second language. Avoid jargon and idioms to help make their understanding easier.

|Do | Don't|
|---|---|
|Internally, ... | Under the hood, ...|
|Create a new cluster. | Turn up a new cluster.|
|Initially, ... | Out of the box, ...|

## Avoid statements about the future

Avoid making promises or giving hints about the future. If you need to talk about a feature in development, add a boilerplate under the front matter that identifies the information accordingly.

### Avoid statements that will soon be out of date

Avoid using wording that becomes outdated quickly, like "currently" and "new". A feature that is new today is not new for long.

Source: https://github.com/istio/istio.io/blob/master//content/en/docs/releases/contribute/style-guide/index.md
|Do | Don't|
|---|---|
|In version 1.4, ... | In the current version, ...|
|The Federation feature provides ... | The new Federation feature provides ...|
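The "Create useful links" guidance above lends itself to a quick mechanical check. The following is only a rough sketch, not part of the istio.io lint suite; the file name and the exact pattern are illustrative:

```shell
# Hypothetical check (not part of the istio.io toolchain): flag markdown
# links whose visible text is "here" or "click here".
cat > /tmp/page.md <<'EOF'
Read [this guide](https://example.com/guide).
See [here](https://example.com/bad) for details.
EOF

# -n prints line numbers, -E enables extended regex.
grep -nE '\[(click here|here)\]\(' /tmp/page.md
```

Only the second line is flagged; the first link has descriptive text and passes.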
Code blocks in the Istio documentation are embedded preformatted blocks of content. We use Hugo to build our website, and it uses the `text` and `text_import` shortcodes to add code to a page. Using this markup allows us to provide our readers with a better experience: the rendered code blocks can be easily copied, printed, or downloaded. Use of these shortcodes is required for all content contributions. If your content doesn't use the appropriate shortcodes, it won't be merged until it does. This page contains several examples of embedded blocks and the formatting options available.

The most common example of code blocks are Command Line Interface (CLI) commands, for example:

{{< text markdown >}}
{{< text bash >}}
$ echo "Hello"
{{< /text >}}
{{< /text >}}

The shortcode requires you to start each CLI command with a `$`, and it renders the content as follows:

{{< text bash >}}
$ echo "Hello"
{{< /text >}}

You can have multiple commands in a code block, but the shortcode only recognizes a single output, for example:

{{< text markdown >}}
{{< text bash >}}
$ echo "Hello" >file.txt
$ cat file.txt
Hello
{{< /text >}}
{{< /text >}}

Because the `bash` attribute is set, the commands render using bash syntax highlighting and the output renders as plain text by default, for example:

{{< text bash >}}
$ echo "Hello" >file.txt
$ cat file.txt
Hello
{{< /text >}}

For readability, you can use `\` to continue long commands on new lines. The new lines must be indented, for example:

{{< text markdown >}}
{{< text bash >}}
$ echo "Hello" \
    >file.txt
$ echo "There" >>file.txt
$ cat file.txt
Hello
There
{{< /text >}}
{{< /text >}}

Hugo renders the multi-line command without issue:

{{< text bash >}}
$ echo "Hello" \
    >file.txt
$ echo "There" >>file.txt
$ cat file.txt
Hello
There
{{< /text >}}

Your workloads can be coded in various programming languages. Therefore, we have implemented support for multiple combinations of syntax highlighting in code blocks.

## Add syntax highlighting

Let's start with the following "Hello World" example:

{{< text markdown >}}
{{< text plain >}}
func HelloWorld() {
  fmt.Println("Hello World")
}
{{< /text >}}
{{< /text >}}

The `plain` attribute renders the code without syntax highlighting:

{{< text plain >}}
func HelloWorld() {
  fmt.Println("Hello World")
}
{{< /text >}}

You can set the language of the code in the block to highlight its syntax. The previous example set the syntax to `plain`, and the rendered code block doesn't have any syntax highlighting. However, you can set the syntax to Go, for example:

{{< text markdown >}}
{{< text go >}}
func HelloWorld() {
  fmt.Println("Hello World")
}
{{< /text >}}
{{< /text >}}

Then, Hugo adds the appropriate highlighting:

{{< text go >}}
func HelloWorld() {
  fmt.Println("Hello World")
}
{{< /text >}}

### Supported syntax

Code blocks in Istio support the following languages with syntax highlighting:

- `plain`
- `markdown`
- `yaml`
- `json`
- `java`
- `javascript`
- `c`
- `cpp`
- `csharp`
- `go`
- `html`
- `protobuf`
- `perl`
- `docker`
- `bash`

By default, the output of CLI commands is considered plain text and renders without syntax highlighting. If you need to add syntax highlighting to the output, you can specify the language in the shortcode.

In Istio, the most common examples are YAML or JSON outputs, for example:

{{< text markdown >}}
{{< text bash json >}}
$ kubectl -n istio-system logs $(kubectl -n istio-system get pods -l istio-mixer-type=telemetry -o jsonpath='{.items[0].metadata.name}') mixer | grep \"instance\":\"newlog.logentry.istio-system\"
{"level":"warn","ts":"2017-09-21T04:33:31.249Z","instance":"newlog.logentry.istio-system","destination":"details","latency":"6.848ms","responseCode":200,"responseSize":178,"source":"productpage","user":"unknown"}
{"level":"warn","ts":"2017-09-21T04:33:31.291Z","instance":"newlog.logentry.istio-system","destination":"ratings","latency":"6.753ms","responseCode":200,"responseSize":48,"source":"reviews","user":"unknown"}
{"level":"warn","ts":"2017-09-21T04:33:31.263Z","instance":"newlog.logentry.istio-system","destination":"reviews","latency":"39.848ms","responseCode":200,"responseSize":379,"source":"productpage","user":"unknown"}
{"level":"warn","ts":"2017-09-21T04:33:31.239Z","instance":"newlog.logentry.istio-system","destination":"productpage","latency":"67.675ms","responseCode":200,"responseSize":5599,"source":"ingress.istio-system.svc.cluster.local","user":"unknown"}
{"level":"warn","ts":"2017-09-21T04:33:31.233Z","instance":"newlog.logentry.istio-system","destination":"ingress.istio-system.svc.cluster.local","latency":"74.47ms","responseCode":200,"responseSize":5599,"source":"unknown","user":"unknown"}
{{< /text >}}
{{< /text >}}

This renders the commands with bash syntax highlighting and the output with the appropriate JSON syntax highlighting.
{{< text bash json >}}
$ kubectl -n istio-system logs $(kubectl -n istio-system get pods -l istio-mixer-type=telemetry -o jsonpath='{.items[0].metadata.name}') mixer | grep \"instance\":\"newlog.logentry.istio-system\"
{"level":"warn","ts":"2017-09-21T04:33:31.249Z","instance":"newlog.logentry.istio-system","destination":"details","latency":"6.848ms","responseCode":200,"responseSize":178,"source":"productpage","user":"unknown"}
{"level":"warn","ts":"2017-09-21T04:33:31.291Z","instance":"newlog.logentry.istio-system","destination":"ratings","latency":"6.753ms","responseCode":200,"responseSize":48,"source":"reviews","user":"unknown"}
{"level":"warn","ts":"2017-09-21T04:33:31.263Z","instance":"newlog.logentry.istio-system","destination":"reviews","latency":"39.848ms","responseCode":200,"responseSize":379,"source":"productpage","user":"unknown"}
{"level":"warn","ts":"2017-09-21T04:33:31.239Z","instance":"newlog.logentry.istio-system","destination":"productpage","latency":"67.675ms","responseCode":200,"responseSize":5599,"source":"ingress.istio-system.svc.cluster.local","user":"unknown"}
{"level":"warn","ts":"2017-09-21T04:33:31.233Z","instance":"newlog.logentry.istio-system","destination":"ingress.istio-system.svc.cluster.local","latency":"74.47ms","responseCode":200,"responseSize":5599,"source":"unknown","user":"unknown"}
{{< /text >}}

## Dynamically import code into your document

The previous examples show how to format the code in your document. However, you can also use the `text_import` shortcode to import content or code from a file. The file can be stored in the documentation repository, or in an external source with Cross-Origin Resource Sharing (CORS) enabled.
### Import code from a file in the `istio.io` repository

Use the `file` attribute to import content from a file in the Istio documentation repository, for example:

{{< text markdown >}}
{{< text_import file="test/snippet_example.txt" syntax="plain" >}}
{{< /text >}}

The example above renders the content of the file as plain text:

{{< text_import file="test/snippet_example.txt" syntax="plain" >}}

Set the language of the content through the `syntax=` field to get the appropriate syntax highlighting.

### Import code from an external source through a URL

Similarly, you can dynamically import content from the Internet. Use the `url` attribute to specify the source. The following example imports the same file, but from a URL:

{{< text markdown >}}
{{< text_import url="https://raw.githubusercontent.com/istio/istio.io/master/test/snippet_example.txt" syntax="plain" >}}
{{< /text >}}

As you can see, the content is rendered in the same way as before:

{{< text_import url="https://raw.githubusercontent.com/istio/istio.io/master/test/snippet_example.txt" syntax="plain" >}}

If the file is from a different origin site, CORS must be enabled on that site. Note that the GitHub raw content site (`raw.githubusercontent.com`) may be used here.

### Import a code snippet from a larger file {#snippets}

Sometimes, you don't need the contents of the entire file. You can control which parts of the content to render using _named snippets_. Tag the code you want in the snippet with comments containing the `$snippet SNIPPET_NAME` and `$endsnippet` tags. The content between the two tags represents the snippet. For example, take the following file:

{{< text_import file="test/snippet_example.txt" syntax="plain" >}}

The file has three separate snippets: `SNIP1`, `SNIP2`, and `SNIP3`. The convention is to name snippets using all caps.
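To make the tagging convention concrete, here is a sketch of a snippet-tagged file and a `sed` one-liner that pulls out one named snippet. On istio.io the extraction is done by the `text_import` shortcode itself; the file contents and the `sed` command below are only illustrative stand-ins:

```shell
# Illustration only: a file tagged with two named snippets
# (the real test/snippet_example.txt has three).
cat > /tmp/snippet_example.txt <<'EOF'
# $snippet SNIP1
hello from snip1
# $endsnippet
# $snippet SNIP2
hello from snip2
# $endsnippet
EOF

# Extract the body of SNIP1: select the tagged range,
# drop the tag lines themselves, print the rest.
sed -n '/\$snippet SNIP1/,/\$endsnippet/{/\$snippet/d; /\$endsnippet/d; p}' /tmp/snippet_example.txt
```

This prints only `hello from snip1`, mirroring what referencing `snippet="SNIP1"` renders on the site.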
To reference a specific snippet in your document, set the value of the `snippet` attribute in the shortcode to the name of the snippet, for example:

{{< text markdown >}}
{{< text_import file="test/snippet_example.txt" syntax="plain" snippet="SNIP1" >}}
{{< /text >}}

The resulting code block only includes the code of the `SNIP1` snippet:

{{< text_import file="test/snippet_example.txt" syntax="plain" snippet="SNIP1" >}}

You can use the `syntax` attribute of the `text_import` shortcode to specify the syntax of the snippet. For snippets containing CLI commands, you can use the `outputis` attribute to specify the output's syntax.

## Link to files in GitHub {#link-2-files}

Some code blocks need to reference files from [Istio's GitHub repository](https://github.com/istio/istio). The most common example is referencing YAML configuration files. Instead of copying the entire contents of the YAML file into your code block, you can surround the relative path name of the file with `@` symbols. This markup renders the path as a link to the file from the current release branch in GitHub, for example:

{{< text markdown >}}
{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/networking/virtual-service-reviews-v3.yaml@
{{< /text >}}
{{< /text >}}

The path renders as a link that takes you to the corresponding file:

{{< text bash >}}
$ kubectl apply -f @samples/bookinfo/networking/virtual-service-reviews-v3.yaml@
{{< /text >}}

By default, these links point to the current release branch of the `istio/istio` repository.
For the link to point to a different Istio repository instead, you can use the `repo` attribute, for example:

{{< text markdown >}}
{{< text syntax="bash" repo="api" >}}
$ cat @README.md@
{{< /text >}}
{{< /text >}}

The path renders as a link to the `README.md` file of the `istio/api` repository:

{{< text syntax="bash" repo="api" >}}
$ cat @README.md@
{{< /text >}}

Sometimes, your code block uses `@` for something else. You can turn the link expansion on and off with the `expandlinks` attribute, for example:

{{< text markdown >}}
{{}}
$ kubectl apply -f @samples/bookinfo/networking/virtual-service-reviews-v3.yaml@
{{}}
{{< /text >}}

## Advanced features

To use the more advanced features for preformatted content described in the following sections, use the extended form of the `text` shortcode rather than the simplified form shown so far.
The expanded form uses normal HTML attributes:

{{< text markdown >}}
{{}}
$ kubectl -n istio-system logs $(kubectl -n istio-system get pods -l istio-mixer-type=telemetry -o jsonpath='{.items[0].metadata.name}') mixer | grep \"instance\":\"newlog.logentry.istio-system\"
{"level":"warn","ts":"2017-09-21T04:33:31.249Z","instance":"newlog.logentry.istio-system","destination":"details","latency":"6.848ms","responseCode":200,"responseSize":178,"source":"productpage","user":"unknown"}
{"level":"warn","ts":"2017-09-21T04:33:31.291Z","instance":"newlog.logentry.istio-system","destination":"ratings","latency":"6.753ms","responseCode":200,"responseSize":48,"source":"reviews","user":"unknown"}
{"level":"warn","ts":"2017-09-21T04:33:31.263Z","instance":"newlog.logentry.istio-system","destination":"reviews","latency":"39.848ms","responseCode":200,"responseSize":379,"source":"productpage","user":"unknown"}
{"level":"warn","ts":"2017-09-21T04:33:31.239Z","instance":"newlog.logentry.istio-system","destination":"productpage","latency":"67.675ms","responseCode":200,"responseSize":5599,"source":"ingress.istio-system.svc.cluster.local","user":"unknown"}
{"level":"warn","ts":"2017-09-21T04:33:31.233Z","instance":"newlog.logentry.istio-system","destination":"ingress.istio-system.svc.cluster.local","latency":"74.47ms","responseCode":200,"responseSize":5599,"source":"unknown","user":"unknown"}
{{}}
{{< /text >}}

The available attributes are:

| Attribute | Description |
|---|---|
|`file` | The path of a file to show in the preformatted block.|
|`url` | The URL of a document to show in the preformatted block.|
|`syntax` | The syntax of the preformatted block.|
|`outputis` | When the syntax is `bash`, this specifies the command output's syntax.|
|`downloadas` | The default file name used when the user [downloads the preformatted block](#download-name).|
|`expandlinks` | Whether or not to expand [GitHub file references](#link-2-files) in the preformatted block.|
|`snippet` | The name of the [snippet](#snippets) of content to extract from the preformatted block.|
|`repo` | The repository to use for [GitHub links](#link-2-files) embedded in preformatted blocks.|

### Download name

You can define the name used when someone chooses to download the code block with the `downloadas` attribute, for example:

{{< text markdown >}}
{{}}
func HelloWorld() {
  fmt.Println("Hello World")
}
{{}}
{{< /text >}}

If you don't specify a download name, Hugo derives one automatically from one of the following:

- The title of the current page, for inline content
- The name of the file containing the imported code
- The URL of the source of the imported code

Source: https://github.com/istio/istio.io/blob/master//content/en/docs/releases/contribute/code-blocks/index.md
This page lists the relative maturity and support level of every Istio feature. Please note that the phases apply to individual features within the project, not to the project as a whole. Here is a high-level description of what these labels mean.

## Feature phase definitions

| | Experimental | Alpha | Beta | Stable |
|---|---|---|---|---|
| Purpose | Feature is under active development and user-facing APIs may change. Users should deploy experimental features with extreme caution, preferably in non-production environments, as experimental versions may require a migration effort. | Used to get feedback on a design or feature, or to see how a tentative design performs. Targeted at developers and expert users. | Used to vet a solution in production without committing to it in the long term, to assess its viability, performance, usability, etc. Targeted at all users. | Dependable, production hardened. |
| Stability | May be buggy. Using the feature may expose bugs. Not active by default. | May be buggy. Using the feature may expose bugs. Not active by default. | Code is well tested. The feature is safe for production use. | Code is well tested and stable. Safe for widespread deployment in production. |
| Security | Using the feature may have obvious security vulnerabilities. Discovered vulnerabilities may not be communicated broadly. | Using the feature may have obvious security vulnerabilities. Discovered vulnerabilities may not be communicated broadly. | Any discovered security vulnerabilities will be publicly disclosed and patched. | Any discovered security vulnerabilities will be publicly disclosed and patched. |
| Performance | Performance characteristics are unknown. Enabling the experimental feature may adversely affect performance. | Performance requirements are assessed as part of design. | Performance and scalability are characterized, but may have caveats. | Performance (latency/scale) is quantified and documented, with guarantees against regression. |
| Support | No guarantees on backward compatibility. There is no commitment from the Istio community to improve, maintain, or complete the feature, and the feature may be dropped entirely in a later software release at any time without notice. | No guarantees on backward compatibility. There is no commitment from Istio to complete the feature. The feature may be dropped entirely in a later software release at any time without notice. | The overall feature will not be dropped, though details may change. Istio commits to complete the feature, in some form, in a subsequent Stable version; typically this will happen within 3 months, but sometimes longer. Releases should simultaneously support two consecutive versions (e.g. v1alpha1 and v1beta1, or v1beta1 and v1) for at least one supported release cycle (typically 3 months) so that users have enough time to upgrade and migrate resources. | The feature will continue to be present for many subsequent releases. |
| Deprecation Policy | None, feature can be removed at any time. | None, feature can be removed at any time. | Weak, feature can be removed with 3 months of advance notice. | Strong, feature can be removed with 1 year of advance notice, but there will usually be a supported upgrade path to a replacement feature. |
| Versioning | The API version name contains dev (e.g. v1dev1). | The API version name contains alpha (e.g. v1alpha1). | The API version name contains beta (e.g. v1beta1). | The API version is `vX` where X is an integer (e.g. v1). |

Source: https://github.com/istio/istio.io/blob/master//content/en/docs/releases/feature-stages/index.md
0.013393489643931389,
-0.0023273127153515816,
-0.007136003114283085,
0.006212511099874973,
0.009415809996426105,
0.025544358417391777,
-0.08711390197277069,
0.054963137954473495,
-0.04899553954601288,
-0.009438377805054188,
-0.023676712065935135,
-0.11506734043359756,
-0.0527433417737484,
... | 0.383686 |
beta (e.g. v1beta1). | The API version is `vX` where X is an integer (e.g. v1). | | Availability | The feature may or may not be available in the main Istio repository. The feature may or may not appear in an Istio release. If it does appear in an Istio release it will be disabled by default. The feature requires an explicit flag to enable in addition to any configuration required to use the feature, in order to turn on dev features. | The feature is committed to the Istio repository. The feature appears in an official Istio release. The feature requires explicit user action to use, e.g. a flag flip, use of a config resource, an explicit installation action, or an API being called. When a feature is disabled it must not affect system stability. | In official Istio releases Enabled by default. | Same as Beta. | | Audience | Other developers closely collaborating on a feature or proof-of-concept. | The feature is targeted at developers and expert users interested in giving early feedback. | Users interested in providing feedback on features. | All users. | | Completeness | Feature has limited capabilities. This feature may act as a proof of concept. | Some API operations or CLI commands may not be implemented. The feature need not have had an API review (an intensive and targeted review of the API, on top of a normal code review). | All API operations and CLI commands should be implemented. End-to-end tests are complete and reliable. The API has had a thorough API review and is thought to be complete, though use during beta frequently turns up API issues not thought of during review. | Same as Beta. | | Documentation | Experimental features are marked as experimental in auto-generated reference docs or they are not exposed. | Alpha features are marked alpha in auto-generated reference docs. Basic documentation describing what the feature does, how to enable it, the restrictions and caveats, and a pointer to the issue or design doc the feature is based on. 
| Complete feature documentation published to istio.io. In addition to the basic alpha-level documentation, beta documentation includes samples, tutorials, glossary entries, etc. | Same as Beta. | | Upgradeability | The schema and semantics of an experimental feature may change in newer versions without any guarantees of preserving backwards compatibility requiring configuration changes during upgrades. API versions may increment faster than the release cadence and developers are not required to maintain multiple versions for backwards compatibility. Developers are encouraged to increment the API version when schema or semantics change in an incompatible way. | The schema and semantics of an alpha feature may change in a later software release, without any provision for preserving configuration objects in an existing Istio installation. API versions may increment faster than the release cadence and the developer need not maintain multiple versions. Developers should increment the API version when schema or semantics change in an incompatible way. | The schema and semantics may change in a later software release. When this happens, an upgrade path will be documented. In some cases, objects will be automatically converted to the new version. In other cases, a manual upgrade may be necessary. A manual upgrade may require downtime for anything relying on the new feature, and may require manual conversion of objects to the new version. When manual conversion is necessary, the project will provide documentation on the process. | Only strictly compatible changes are allowed in subsequent software releases. | | Testing | The presence of the feature must not | https://github.com/istio/istio.io/blob/master//content/en/docs/releases/feature-stages/index.md | master | istio | [
-0.0045056939125061035,
-0.035460617393255234,
0.0048714326694607735,
0.06347422301769257,
0.05333873629570007,
0.011725968681275845,
0.02906191162765026,
0.03974679857492447,
-0.0767548605799675,
-0.020695440471172333,
-0.01930428110063076,
-0.039837796241045,
-0.06864234060049057,
0.0287... | 0.518019 |
on the new feature, and may require manual conversion of objects to the new version. When manual conversion is necessary, the project will provide documentation on the process. | Only strictly compatible changes are allowed in subsequent software releases. | | Testing | The presence of the feature must not affect any released features. | The feature is covered by unit tests and integration tests where the feature is enabled and the tests are non-flaky. Tests may not cover all corner cases, but the most common cases have been covered. Testing code coverage >= 70%. When the feature is disabled it does not regress performance of the system. | Integration tests cover edge cases as well as common use cases. Integration tests cover all issues reported on the feature. The feature has end-to-end tests covering the samples/tutorials for the feature. Test code coverage is >= 80%. | Superset of Beta, including tests for any issues discovered during Beta. Test code coverage is >= 90%. | | Reliability | The user should not expect the feature to be reliable may be untested. | Because the feature is relatively new, and may lack complete end-to-end tests, enabling the feature via a flag might expose bugs which destabilize Istio (e.g. a bug in a control loop might rapidly create excessive numbers of objects, exhausting API storage). | Because the feature has e2e tests, enabling the feature should not create new bugs in unrelated features. Because the feature is relatively new, it may have minor bugs. | High. The feature is well tested and stable and reliable for all uses. | | | Recommended Use Cases | In short-lived development or testing environments, geared towards soliciting early feedback from users to shape development efforts. | In short-lived development or testing environments, due to complexity of upgradeability and lack of long-term support. | In development or testing environments. 
In production environments as part of an evaluation of the feature in order to provide feedback. | Any. | ## Istio features Below is our list of existing features and their current phases. This information will be updated after every release. ### Traffic management {{< features section="Traffic Management" >}} ### Observability {{< features section="Observability" >}} ### Extensibility {{< features section="Extensibility" >}} ### Security and policy enforcement {{< features section="Security and policy enforcement" >}} ### Core {{< features section="Core" >}} ### Ambient mode {{< features section="Ambient mode" >}} {{< idea >}} Please get in touch by [joining our community](/get-involved/) if there are features you'd like to see in our future releases! {{< /idea >}} | https://github.com/istio/istio.io/blob/master//content/en/docs/releases/feature-stages/index.md | master | istio | [
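The Versioning row above describes a simple naming convention (dev, alpha, beta markers, then a bare `vX` for Stable). As a sketch only — this helper is illustrative and not part of any Istio tooling; the function name and regexes are assumptions derived from the table:

```python
import re

def feature_phase(api_version: str) -> str:
    """Map an Istio API version name to its feature phase, per the table above."""
    # Stable: the API version is vX where X is an integer (e.g. "v1").
    if re.fullmatch(r"v\d+", api_version):
        return "Stable"
    # Pre-stable phases embed a marker in the version name (e.g. "v1alpha1").
    for marker, phase in (("dev", "Experimental"), ("alpha", "Alpha"), ("beta", "Beta")):
        if re.fullmatch(rf"v\d+{marker}\d+", api_version):
            return phase
    raise ValueError(f"unrecognized API version: {api_version}")
```

For example, `feature_phase("v1beta1")` returns `"Beta"` and `feature_phase("v1")` returns `"Stable"`.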
We are very grateful to the security researchers and users that report back Istio security vulnerabilities. We investigate every report thoroughly.

## Reporting a vulnerability

To make a report, send an email to the private [istio-security-vulnerability-reports@googlegroups.com](mailto:istio-security-vulnerability-reports@googlegroups.com) mailing list with the vulnerability details. For normal product bugs unrelated to latent security vulnerabilities, please head to our [Reporting Bugs](/docs/releases/bugs/) page to learn what to do.

### When to report a security vulnerability?

Send us a report whenever you:

- Think Istio has a potential security vulnerability.
- Are unsure whether or how a vulnerability affects Istio.
- Think a vulnerability is present in another project that Istio depends on. For example, Envoy, Docker, or Kubernetes.

When in doubt, please disclose privately. This includes, but is not limited to:

- Any crash, especially in Envoy
- Any security policy (like Authentication or Authorization) bypass or weakness
- Any potential Denial of Service (DoS)

### When not to report a security vulnerability?

Don't send a vulnerability report if:

- You need help tuning Istio components for security.
- You need help applying security related updates.
- Your issue is not security related.
- Your issue is related to base image dependencies (see [Base Images](#base-images))

## Evaluation

The Istio security team acknowledges and analyzes each vulnerability report within three work days.

Any vulnerability information you share with the Istio security team stays within the Istio project. We don't disseminate the information to other projects. We only share the information as needed to fix the issue.

We keep the reporter updated as the status of the security issue moves from `triaged`, to `identified fix`, to `release planning`.

## Fixing the issue

Once a security vulnerability has been fully characterized, a fix is developed by the Istio team. The development and testing for the fix happens in a private GitHub repository in order to prevent premature disclosure of the vulnerability.

## Early disclosure

The Istio project maintains a mailing list for private early disclosure of security vulnerabilities. The list is used to provide actionable information to close Istio partners. The list is not intended for individuals to find out about security issues. See [Early Disclosure of Security Vulnerabilities](https://github.com/istio/community/blob/master/EARLY-DISCLOSURE.md) to get more information.

## Public disclosure

On the day chosen for public disclosure, a sequence of activities takes place as quickly as possible:

- Changes are merged from the private GitHub repository holding the fix into the appropriate set of public branches.
- Release engineers ensure all necessary binaries are promptly built and published.
- Once the binaries are available, an announcement is sent out on the following channels:
    - The [Istio blog](/blog)
    - The [Announcements](https://discuss.istio.io/c/announcements) category on discuss.istio.io
    - The [Istio Twitter feed](https://twitter.com/IstioMesh)
    - The [#announcements channel on Slack](https://istio.slack.com/messages/CFXS256EQ/)

As much as possible this announcement will be actionable, and include any mitigating steps customers can take prior to upgrading to a fixed version. The recommended target time for these announcements is 16:00 UTC from Monday to Thursday. This means the announcement will be seen morning Pacific, early evening Europe, and late evening Asia.

## Base Images

Istio offers two sets of docker images, based on `ubuntu` (default) and based on `distroless` (see [Harden Docker Container Images](/docs/ops/configuration/security/harden-docker-images/)). These base images occasionally have CVEs. The Istio security team has automated scanning to ensure base images are kept free of CVEs. When CVEs are detected in our images, new images are automatically built and used for all future builds. Additionally, the security team analyzes the vulnerabilities to see if they are exploitable in Istio directly. In most cases, these vulnerabilities may be present in packages within the base image, but are not exploitable in the way Istio uses them. For these cases, new releases will not typically be released just to resolve these CVEs, and the fixes will be included in the next regularly scheduled release. As a result, base image CVEs should not be [reported](#reporting-a-vulnerability) unless there is evidence it may be exploitable within Istio. The [`distroless`](/docs/ops/configuration/security/harden-docker-images/) base images are strongly encouraged if reducing base image CVEs is important to you.

Source: https://github.com/istio/istio.io/blob/master//content/en/docs/releases/security-vulnerabilities/index.md
This task shows how to configure the minimum TLS version for Istio workloads. The maximum TLS version for Istio workloads is 1.3.

## Configuration of minimum TLS version for Istio workloads

* Install Istio through `istioctl` with the minimum TLS version configured. The `IstioOperator` custom resource used to configure Istio in the `istioctl install` command contains a field for the minimum TLS version for Istio workloads. The `minProtocolVersion` field specifies the minimum TLS version for the TLS connections among Istio workloads. In the following example, the minimum TLS version for Istio workloads is configured to be 1.3.

    {{< text bash >}}
    $ cat <<EOF > ./istio.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      meshConfig:
        meshMTLS:
          minProtocolVersion: TLSV1_3
    EOF
    $ istioctl install -f ./istio.yaml
    {{< /text >}}

## Check the TLS configuration of Istio workloads

After configuring the minimum TLS version of Istio workloads, you can verify that the minimum TLS version was configured and works as expected.

* Deploy two workloads: `httpbin` and `curl`. Deploy these into a single namespace, for example `foo`. Both workloads run with an Envoy proxy in front of each.

    {{< text bash >}}
    $ kubectl create ns foo
    $ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n foo
    $ kubectl apply -f <(istioctl kube-inject -f @samples/curl/curl.yaml@) -n foo
    {{< /text >}}

* Verify that `curl` successfully communicates with `httpbin` using this command:

    {{< text bash >}}
    $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl http://httpbin.foo:8000/ip -sS -o /dev/null -w "%{http_code}\n"
    200
    {{< /text >}}

{{< warning >}}
If you don't see the expected output, retry after a few seconds. Caching and propagation can cause a delay.
{{< /warning >}}

In the example, the minimum TLS version was configured to be 1.3. To check that TLS 1.3 is allowed, you can run the following command:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c istio-proxy -n foo -- openssl s_client -alpn istio -tls1_3 -connect httpbin.foo:8000 | grep "TLSv1.3"
{{< /text >}}

The text output should include:

{{< text plain >}}
TLSv1.3
{{< /text >}}

To check that TLS 1.2 is not allowed, you can run the following command:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c istio-proxy -n foo -- openssl s_client -alpn istio -tls1_2 -connect httpbin.foo:8000 | grep "Cipher is (NONE)"
{{< /text >}}

The text output should include:

{{< text plain >}}
Cipher is (NONE)
{{< /text >}}

## Cleanup

Delete sample applications `curl` and `httpbin` from the `foo` namespace:

{{< text bash >}}
$ kubectl delete -f samples/httpbin/httpbin.yaml -n foo
$ kubectl delete -f samples/curl/curl.yaml -n foo
{{< /text >}}

Uninstall Istio from the cluster:

{{< text bash >}}
$ istioctl uninstall --purge -y
{{< /text >}}

To remove the `foo` and `istio-system` namespaces:

{{< text bash >}}
$ kubectl delete ns foo istio-system
{{< /text >}}

Source: https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/security/tls-configuration/workload-min-tls-version/index.md
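The same kind of floor that `minProtocolVersion: TLSV1_3` puts on mesh mTLS can be set on an ordinary TLS client, which may help build intuition for why the TLS 1.2 handshake above fails. This is an illustrative Python `ssl` sketch, not Istio configuration:

```python
import ssl

# Build a client-side TLS context that refuses anything below TLS 1.3,
# analogous in spirit to setting minProtocolVersion: TLSV1_3 in meshConfig.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# A handshake with a server offering only TLS 1.2 would now fail,
# which is what the openssl "Cipher is (NONE)" check above demonstrates.
print(ctx.minimum_version)
```

Any connection made through `ctx` (e.g. via `ctx.wrap_socket(...)`) will negotiate TLS 1.3 or abort the handshake.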
This task shows how administrators can configure the Istio certificate authority (CA) with a root certificate, signing certificate and key.

By default the Istio CA generates a self-signed root certificate and key and uses them to sign the workload certificates. To protect the root CA key, you should use a root CA which runs on a secure machine offline, and use the root CA to issue intermediate certificates to the Istio CAs that run in each cluster. An Istio CA can sign workload certificates using the administrator-specified certificate and key, and distribute an administrator-specified root certificate to the workloads as the root of trust. The following graph demonstrates the recommended CA hierarchy in a mesh containing two clusters.

{{< image width="50%" link="ca-hierarchy.svg" caption="CA Hierarchy" >}}

This task demonstrates how to generate and plug in the certificates and key for the Istio CA. These steps can be repeated to provision certificates and keys for Istio CAs running in each cluster.

## Plug in certificates and key into the cluster

{{< warning >}}
The following instructions are for demo purposes only. For a production cluster setup, it is highly recommended to use a production-ready CA, such as [Hashicorp Vault](https://www.hashicorp.com/products/vault). It is a good practice to manage the root CA on an offline machine with strong security protection.
{{< /warning >}}

{{< warning >}}
Support for SHA-1 signatures is [disabled by default in Go 1.18](https://github.com/golang/go/issues/41682). If you are generating the certificate on macOS make sure you are using OpenSSL as described in [GitHub issue 38049](https://github.com/istio/istio/issues/38049).
{{< /warning >}}

1. In the top-level directory of the Istio installation package, create a directory to hold certificates and keys:

    {{< text bash >}}
    $ mkdir -p certs
    $ pushd certs
    {{< /text >}}

1. Generate the root certificate and key:

    {{< text bash >}}
    $ make -f ../tools/certs/Makefile.selfsigned.mk root-ca
    {{< /text >}}

    This will generate the following files:

    * `root-cert.pem`: the generated root certificate
    * `root-key.pem`: the generated root key
    * `root-ca.conf`: the configuration for `openssl` to generate the root certificate
    * `root-cert.csr`: the generated CSR for the root certificate

1. For each cluster, generate an intermediate certificate and key for the Istio CA. The following is an example for `cluster1`:

    {{< text bash >}}
    $ make -f ../tools/certs/Makefile.selfsigned.mk cluster1-cacerts
    {{< /text >}}

    This will generate the following files in a directory named `cluster1`:

    * `ca-cert.pem`: the generated intermediate certificates
    * `ca-key.pem`: the generated intermediate key
    * `cert-chain.pem`: the generated certificate chain which is used by istiod
    * `root-cert.pem`: the root certificate

    You can replace `cluster1` with a string of your choosing. For example, with the argument `cluster2-cacerts`, you can create certificates and key in a directory called `cluster2`. If you are doing this on an offline machine, copy the generated directory to a machine with access to the clusters.

1. In each cluster, create a secret `cacerts` including all the input files `ca-cert.pem`, `ca-key.pem`, `root-cert.pem` and `cert-chain.pem`. For example, for `cluster1`:

    {{< text bash >}}
    $ kubectl create namespace istio-system
    $ kubectl create secret generic cacerts -n istio-system \
        --from-file=cluster1/ca-cert.pem \
        --from-file=cluster1/ca-key.pem \
        --from-file=cluster1/root-cert.pem \
        --from-file=cluster1/cert-chain.pem
    {{< /text >}}

1. Return to the top-level directory of the Istio installation:

    {{< text bash >}}
    $ popd
    {{< /text >}}

## Deploy Istio

1. Deploy Istio using the `demo` profile. Istio's CA will read certificates and key from the secret-mount files.

    {{< text bash >}}
    $ istioctl install --set profile=demo
    {{< /text >}}

## Deploying example services

1. Deploy the `httpbin` and `curl` sample services.

    {{< text bash >}}
    $ kubectl create ns foo
    $ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n foo
    $ kubectl apply -f <(istioctl kube-inject -f samples/curl/curl.yaml) -n foo
    {{< /text >}}

1. Deploy a policy for workloads in the `foo` namespace to only accept mutual TLS traffic.

    {{< text bash >}}
    $ kubectl apply -n foo -f - <<EOF
    apiVersion: security.istio.io/v1
    kind: PeerAuthentication
    metadata:
      name: "default"
    spec:
      mtls:
        mode: STRICT
    EOF
    {{< /text >}}

## Verifying the certificates

In this section, we verify that workload certificates are signed by the certificates that we plugged into the CA. This requires you have `openssl` installed on your machine.

1. Sleep 20 seconds for the mTLS policy to take effect before retrieving the certificate chain of `httpbin`. As the CA certificate used in this example is self-signed, the `verify error:num=19:self signed certificate in certificate chain` error returned by the openssl command is expected.

    {{< text bash >}}
    $ sleep 20; kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c istio-proxy -n foo -- openssl s_client -showcerts -connect httpbin.foo:8000 > httpbin-proxy-cert.txt
    {{< /text >}}

1. Parse the certificates on the certificate chain.

    {{< text bash >}}
    $ sed -n '/-----BEGIN CERTIFICATE-----/{:start /-----END CERTIFICATE-----/!{N;b start};/.*/p}' httpbin-proxy-cert.txt > certs.pem
    $ awk 'BEGIN {counter=0;} /BEGIN CERT/{counter++} { print > "proxy-cert-" counter ".pem"}' < certs.pem
    {{< /text >}}

1. Verify the root certificate is the same as the one specified by the administrator:

    {{< text bash >}}
    $ openssl x509 -in certs/cluster1/root-cert.pem -text -noout > /tmp/root-cert.crt.txt
    $ openssl x509 -in ./proxy-cert-3.pem -text -noout > /tmp/pod-root-cert.crt.txt
    $ diff -s /tmp/root-cert.crt.txt /tmp/pod-root-cert.crt.txt
    Files /tmp/root-cert.crt.txt and /tmp/pod-root-cert.crt.txt are identical
    {{< /text >}}

1. Verify the CA certificate is the same as the one specified by the administrator:

    {{< text bash >}}
    $ openssl x509 -in certs/cluster1/ca-cert.pem -text -noout > /tmp/ca-cert.crt.txt
    $ openssl x509 -in ./proxy-cert-2.pem -text -noout > /tmp/pod-cert-chain-ca.crt.txt
    $ diff -s /tmp/ca-cert.crt.txt /tmp/pod-cert-chain-ca.crt.txt
    Files /tmp/ca-cert.crt.txt and /tmp/pod-cert-chain-ca.crt.txt are identical
    {{< /text >}}

1. Verify the certificate chain from the root certificate to the workload certificate:

    {{< text bash >}}
    $ openssl verify -CAfile <(cat certs/cluster1/ca-cert.pem certs/cluster1/root-cert.pem) ./proxy-cert-1.pem
    ./proxy-cert-1.pem: OK
    {{< /text >}}

## Cleanup

* Remove the certificates, keys, and intermediate files from your local disk:

    {{< text bash >}}
    $ rm -rf certs
    {{< /text >}}

* Remove the secret `cacerts`:

    {{< text bash >}}
    $ kubectl delete secret cacerts -n istio-system
    {{< /text >}}

* Remove the authentication policy from the `foo` namespace:

    {{< text bash >}}
    $ kubectl delete peerauthentication -n foo default
    {{< /text >}}

* Remove the sample applications `curl` and `httpbin`:

    {{< text bash >}}
    $ kubectl delete -f samples/curl/curl.yaml -n foo
    $ kubectl delete -f samples/httpbin/httpbin.yaml -n foo
    {{< /text >}}

* Uninstall Istio from the cluster:

    {{< text bash >}}
    $ istioctl uninstall --purge -y
    {{< /text >}}

* Remove the namespace `foo` and `istio-system` from the cluster:

    {{< text bash >}}
    $ kubectl delete ns foo istio-system
    {{< /text >}}

Source: https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/security/cert-management/plugin-ca-cert/index.md
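The sed/awk pipeline in the verification steps splits a concatenated PEM bundle into one file per certificate. The same splitting can be sketched in a few lines of Python — an illustrative, stdlib-only equivalent; the function name is an assumption, not part of the Istio docs:

```python
def split_pem_chain(pem_text: str) -> list:
    """Split a concatenated PEM bundle (like certs.pem above) into
    individual certificate blocks, ignoring any text between them."""
    certs, current, inside = [], [], False
    for line in pem_text.splitlines():
        if "-----BEGIN CERTIFICATE-----" in line:
            inside, current = True, [line]          # start a new certificate
        elif "-----END CERTIFICATE-----" in line and inside:
            current.append(line)
            certs.append("\n".join(current))        # close it out
            inside = False
        elif inside:
            current.append(line)                    # body of the certificate
    return certs
```

Each returned element corresponds to one of the `proxy-cert-N.pem` files the awk command writes.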
{{< boilerplate experimental >}} This feature requires Kubernetes version >= 1.18. This task shows how to provision workload certificates using a custom certificate authority that integrates with the [Kubernetes CSR API](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/). Different workloads can get their certificates signed from different cert-signers. Each cert-signer is effectively a different CA. It is expected that workloads whose certificates are issued from the same cert-signer can talk mTLS to each other while workloads signed by different signers cannot. This feature leverages [Chiron](/blog/2019/dns-cert/), a lightweight component linked with Istiod that signs certificates using the Kubernetes CSR API. For this example, we use [open-source cert-manager](https://cert-manager.io). Cert-manager has added [experimental Support for Kubernetes `CertificateSigningRequests`](https://cert-manager.io/docs/usage/kube-csr/) starting with version 1.4. ## Deploy custom CA controller in the Kubernetes cluster 1. Deploy cert-manager according to the [installation doc](https://cert-manager.io/docs/installation/). {{< warning >}} Make sure to enable feature gate: `--feature-gates=ExperimentalCertificateSigningRequestControllers=true` {{< /warning >}} {{< text bash >}} $ helm repo add jetstack https://charts.jetstack.io $ helm repo update $ helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set featureGates="ExperimentalCertificateSigningRequestControllers=true" --set installCRDs=true {{< /text >}} 1. Create three self signed cluster issuers `istio-system`, `foo` and `bar` for cert-manager. Note: Namespace issuers and other types of issuers can also be used. 
{{< text bash >}} $ cat < ./selfsigned-issuer.yaml apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: selfsigned-bar-issuer spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: bar-ca namespace: cert-manager spec: isCA: true commonName: bar secretName: bar-ca-selfsigned issuerRef: name: selfsigned-bar-issuer kind: ClusterIssuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: bar spec: ca: secretName: bar-ca-selfsigned --- apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: selfsigned-foo-issuer spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: foo-ca namespace: cert-manager spec: isCA: true commonName: foo secretName: foo-ca-selfsigned issuerRef: name: selfsigned-foo-issuer kind: ClusterIssuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: foo spec: ca: secretName: foo-ca-selfsigned --- apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: selfsigned-istio-issuer spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: cert-manager spec: isCA: true commonName: istio-system secretName: istio-ca-selfsigned issuerRef: name: selfsigned-istio-issuer kind: ClusterIssuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: istio-system spec: ca: secretName: istio-ca-selfsigned EOF $ kubectl apply -f ./selfsigned-issuer.yaml {{< /text >}} ## Verify secrets are created for each cluster issuer {{< text bash >}} $ kubectl get secret -n cert-manager -l controller.cert-manager.io/fao=true NAME TYPE DATA AGE bar-ca-selfsigned kubernetes.io/tls 3 3m36s foo-ca-selfsigned kubernetes.io/tls 3 3m36s istio-ca-selfsigned kubernetes.io/tls 3 3m38s {{< /text >}} ## Export root certificates for each cluster issuer {{< text bash >}} $ export ISTIOCA=$(kubectl get clusterissuers 
istio-system -o jsonpath='{.spec.ca.secretName}' | xargs kubectl get secret -n cert-manager -o jsonpath='{.data.ca\.crt}' | base64 -d | sed 's/^/ /') $ export FOOCA=$(kubectl get clusterissuers foo -o jsonpath='{.spec.ca.secretName}' | xargs kubectl get secret -n cert-manager -o jsonpath='{.data.ca\.crt}' | base64 -d | sed 's/^/ /') $ export BARCA=$(kubectl get clusterissuers bar -o jsonpath='{.spec.ca.secretName}' | xargs kubectl get secret -n cert-manager -o jsonpath='{.data.ca\.crt}' | base64 -d | sed 's/^/ /') {{< /text >}} ## Deploy Istio with default cert-signer info 1. Deploy Istio on the cluster using `istioctl` with the following configuration. The `ISTIO\_META\_CERT\_SIGNER` is the default cert-signer for workloads. {{< text bash >}} $ cat < ./istio.yaml apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: values: pilot: env: EXTERNAL\_CA: ISTIOD\_RA\_KUBERNETES\_API meshConfig: defaultConfig: proxyMetadata: ISTIO\_META\_CERT\_SIGNER: istio-system caCertificates: - pem: | $ISTIOCA certSigners: - clusterissuers.cert-manager.io/istio-system - pem: | $FOOCA certSigners: - clusterissuers.cert-manager.io/foo - pem: | $BARCA certSigners: - clusterissuers.cert-manager.io/bar components: pilot: k8s: env: - name: CERT\_SIGNER\_DOMAIN value: clusterissuers.cert-manager.io - name: PILOT\_CERT\_PROVIDER value: k8s.io/clusterissuers.cert-manager.io/istio-system overlays: - kind: ClusterRole name: istiod-clusterrole-istio-system patches: - path: rules[-1] value: | apiGroups: - certificates.k8s.io resourceNames: - clusterissuers.cert-manager.io/foo - clusterissuers.cert-manager.io/bar - clusterissuers.cert-manager.io/istio-system resources: - signers verbs: - approve EOF $ istioctl install --skip-confirmation -f ./istio.yaml | https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/security/cert-management/custom-ca-k8s/index.md | master | istio | [
{{< /text >}}

1. Create the `bar` and `foo` namespaces.

{{< text bash >}}
$ kubectl create ns bar
$ kubectl create ns foo
{{< /text >}}

1. Deploy the `proxyconfig-bar.yaml` in the `bar` namespace to define the cert-signer for workloads in the `bar` namespace.

{{< text bash >}}
$ cat <<EOF > ./proxyconfig-bar.yaml
apiVersion: networking.istio.io/v1beta1
kind: ProxyConfig
metadata:
  name: barpc
  namespace: bar
spec:
  environmentVariables:
    ISTIO_META_CERT_SIGNER: bar
EOF
$ kubectl apply -f ./proxyconfig-bar.yaml
{{< /text >}}

1. Deploy the `proxyconfig-foo.yaml` in the `foo` namespace to define the cert-signer for workloads in the `foo` namespace.

{{< text bash >}}
$ cat <<EOF > ./proxyconfig-foo.yaml
apiVersion: networking.istio.io/v1beta1
kind: ProxyConfig
metadata:
  name: foopc
  namespace: foo
spec:
  environmentVariables:
    ISTIO_META_CERT_SIGNER: foo
EOF
$ kubectl apply -f ./proxyconfig-foo.yaml
{{< /text >}}

1. Deploy the `httpbin` and `curl` sample applications in the `foo` and `bar` namespaces.
{{< text bash >}}
$ kubectl label ns foo istio-injection=enabled
$ kubectl label ns bar istio-injection=enabled
$ kubectl apply -f samples/httpbin/httpbin.yaml -n foo
$ kubectl apply -f samples/curl/curl.yaml -n foo
$ kubectl apply -f samples/httpbin/httpbin.yaml -n bar
{{< /text >}}

## Verify the network connectivity between `httpbin` and `curl` within the same namespace

When the workloads are deployed, they send CSR requests with the related signer info. Istiod forwards the CSR request to the custom CA for signing. The custom CA uses the correct cluster issuer to sign the certificate: workloads in the `foo` namespace use the `foo` cluster issuer, while workloads in the `bar` namespace use the `bar` cluster issuer. To verify that they have indeed been signed by the correct cluster issuers, check that workloads in the same namespace can communicate, while workloads in different namespaces cannot.

1. Set the `CURL_POD_FOO` environment variable to the name of the `curl` pod.

{{< text bash >}}
$ export CURL_POD_FOO=$(kubectl get pod -n foo -l app=curl -o jsonpath={.items..metadata.name})
{{< /text >}}

1. Check network connectivity between the `curl` and `httpbin` services in the `foo` namespace.

{{< text bash >}}
$ kubectl exec "$CURL_POD_FOO" -n foo -c curl -- curl http://httpbin.foo:8000/html
# Herman Melville - Moby-Dick

Availing himself of the mild...
{{< /text >}}

1. Check network connectivity between the `curl` service in the `foo` namespace and `httpbin` in the `bar` namespace.

{{< text bash >}}
$ kubectl exec "$CURL_POD_FOO" -n foo -c curl -- curl http://httpbin.bar:8000/html
upstream connect error or disconnect/reset before headers.
reset reason: connection failure, transport failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
{{< /text >}}

## Cleanup

* Remove the namespaces and uninstall Istio and cert-manager:

{{< text bash >}}
$ kubectl delete ns foo
$ kubectl delete ns bar
$ istioctl uninstall --purge -y
$ helm delete -n cert-manager cert-manager
$ kubectl delete ns istio-system cert-manager
$ unset ISTIOCA FOOCA BARCA
$ rm -rf istio.yaml proxyconfig-foo.yaml proxyconfig-bar.yaml selfsigned-issuer.yaml
{{< /text >}}

## Reasons to use this feature

* Custom CA integration - By specifying a signer name in the Kubernetes CSR request, this feature allows Istio to integrate with custom certificate authorities using the Kubernetes CSR API. This does require the custom CA to implement a Kubernetes controller that watches the `CertificateSigningRequest` resources and acts on them.
* Better multi-tenancy - By specifying a different cert-signer for different workloads, certificates for different tenants' workloads can be signed by different CAs.
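The signer selection used throughout this task — a mesh-wide default (`ISTIO_META_CERT_SIGNER: istio-system`) overridden per namespace by a `ProxyConfig` — can be sketched as a simple lookup. This is an illustrative model of the configuration above, not Istio's internals; the names are taken from this task's setup:

```python
DEFAULT_SIGNER = "istio-system"  # mesh-wide default from ISTIO_META_CERT_SIGNER
SIGNER_DOMAIN = "clusterissuers.cert-manager.io"  # from CERT_SIGNER_DOMAIN
# Per-namespace overrides installed by the ProxyConfig resources in this task.
NAMESPACE_OVERRIDES = {"foo": "foo", "bar": "bar"}

def resolve_signer_name(namespace: str) -> str:
    """Return the Kubernetes CSR signerName a workload in `namespace` would
    request under this setup."""
    signer = NAMESPACE_OVERRIDES.get(namespace, DEFAULT_SIGNER)
    return f"{SIGNER_DOMAIN}/{signer}"

print(resolve_signer_name("foo"))      # clusterissuers.cert-manager.io/foo
print(resolve_signer_name("default"))  # clusterissuers.cert-manager.io/istio-system
```

Because each resulting signer maps to a different cluster issuer, workloads in `foo` and `bar` end up with certificates chained to different roots, which is why the cross-namespace request above fails certificate verification.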
This task shows you how to set up an Istio authorization policy for TCP traffic in an Istio mesh.

## Before you begin

Before you begin this task, do the following:

* Read the [Istio authorization concepts](/docs/concepts/security/#authorization).

* Install Istio using the [Istio installation guide](/docs/setup/install/istioctl/).

* Deploy two workloads named `curl` and `tcp-echo` together in a namespace, for example `foo`. Both workloads run with an Envoy proxy in front of each. The `tcp-echo` workload listens on ports 9000, 9001 and 9002 and echoes back any traffic it receives with a `hello` prefix. For example, if you send "world" to `tcp-echo`, it replies with `hello world`. The `tcp-echo` Kubernetes service object only declares ports 9000 and 9001, and omits port 9002. A pass-through filter chain handles port 9002 traffic. Deploy the example namespace and workloads using the following command:

{{< text bash >}}
$ kubectl create ns foo
$ kubectl apply -f <(istioctl kube-inject -f @samples/tcp-echo/tcp-echo.yaml@) -n foo
$ kubectl apply -f <(istioctl kube-inject -f @samples/curl/curl.yaml@) -n foo
{{< /text >}}

* Verify that `curl` successfully communicates with `tcp-echo` on ports 9000 and 9001 using the following command:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" \
  -c curl -n foo -- sh -c \
  'echo "port 9000" | nc tcp-echo 9000' | grep "hello" && echo 'connection succeeded' || echo 'connection rejected'
hello port 9000
connection succeeded
{{< /text >}}

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" \
  -c curl -n foo -- sh -c \
  'echo "port 9001" | nc tcp-echo 9001' | grep "hello" && echo 'connection succeeded' || echo 'connection rejected'
hello port 9001
connection succeeded
{{< /text >}}

* Verify that `curl` successfully communicates with `tcp-echo` on port 9002.
You need to send the traffic directly to the pod IP of `tcp-echo` because port 9002 is not defined in the Kubernetes service object of `tcp-echo`. Get the pod IP address and send the request with the following command:

{{< text bash >}}
$ TCP_ECHO_IP=$(kubectl get pod "$(kubectl get pod -l app=tcp-echo -n foo -o jsonpath={.items..metadata.name})" -n foo -o jsonpath="{.status.podIP}")
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" \
  -c curl -n foo -- sh -c \
  "echo \"port 9002\" | nc $TCP_ECHO_IP 9002" | grep "hello" && echo 'connection succeeded' || echo 'connection rejected'
hello port 9002
connection succeeded
{{< /text >}}

{{< warning >}}
If you don't see the expected output, retry after a few seconds. Caching and propagation can cause a delay.
{{< /warning >}}

## Configure ALLOW authorization policy for a TCP workload

1. Create the `tcp-policy` authorization policy for the `tcp-echo` workload in the `foo` namespace. Run the following command to apply the policy to allow requests to ports 9000 and 9001:

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: tcp-policy
  namespace: foo
spec:
  selector:
    matchLabels:
      app: tcp-echo
  action: ALLOW
  rules:
  - to:
    - operation:
        ports: ["9000", "9001"]
EOF
{{< /text >}}

1. Verify that requests to port 9000 are allowed using the following command:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" \
  -c curl -n foo -- sh -c \
  'echo "port 9000" | nc tcp-echo 9000' | grep "hello" && echo 'connection succeeded' || echo 'connection rejected'
hello port 9000
connection succeeded
{{< /text >}}
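The `hello`-prefixing behavior of the `tcp-echo` sample seen in these outputs can be sketched as a minimal one-shot TCP server. This is an illustrative stand-in running on a loopback port, not the sample's actual implementation:

```python
import socket
import threading

def start_echo_server(prefix: bytes = b"hello ") -> int:
    """Start a one-shot TCP server that mimics the tcp-echo sample:
    it replies with the received payload prefixed by `prefix`."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def handle() -> None:
        conn, _ = srv.accept()
        conn.sendall(prefix + conn.recv(1024))  # echo back with the prefix
        conn.close()
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return port

port = start_echo_server()
with socket.create_connection(("127.0.0.1", port)) as cli:
    cli.sendall(b"port 9000")
    print(cli.recv(1024).decode())  # hello port 9000
```

The `grep "hello"` in the verification commands succeeds exactly when this prefixed reply makes it back through the proxies, which is why it doubles as a connectivity check.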
1. Verify that requests to port 9001 are allowed using the following command:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" \
  -c curl -n foo -- sh -c \
  'echo "port 9001" | nc tcp-echo 9001' | grep "hello" && echo 'connection succeeded' || echo 'connection rejected'
hello port 9001
connection succeeded
{{< /text >}}

1. Verify that requests to port 9002 are denied. This is enforced by the authorization policy, which also applies to the pass-through filter chain, even though the port is not declared explicitly in the `tcp-echo` Kubernetes service object. Run the following command and verify the output:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" \
  -c curl -n foo -- sh -c \
  "echo \"port 9002\" | nc $TCP_ECHO_IP 9002" | grep "hello" && echo 'connection succeeded' || echo 'connection rejected'
connection rejected
{{< /text >}}

1. Update the policy to add an HTTP-only field named `methods` for port 9000 using the following command:

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: tcp-policy
  namespace: foo
spec:
  selector:
    matchLabels:
      app: tcp-echo
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]
        ports: ["9000"]
EOF
{{< /text >}}

1. Verify that requests to port 9000 are denied. This occurs because the rule becomes invalid when it uses an HTTP-only field (`methods`) for TCP traffic. Istio ignores the invalid ALLOW rule. The final result is that the request is rejected, because it does not match any ALLOW rules. Run the following command and verify the output:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" \
  -c curl -n foo -- sh -c \
  'echo "port 9000" | nc tcp-echo 9000' | grep "hello" && echo 'connection succeeded' || echo 'connection rejected'
connection rejected
{{< /text >}}

1. Verify that requests to port 9001 are denied. This occurs because the requests do not match any ALLOW rules.
Run the following command and verify the output:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" \
  -c curl -n foo -- sh -c \
  'echo "port 9001" | nc tcp-echo 9001' | grep "hello" && echo 'connection succeeded' || echo 'connection rejected'
connection rejected
{{< /text >}}

## Configure DENY authorization policy for a TCP workload

1. Add a DENY policy with HTTP-only fields using the following command:

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: tcp-policy
  namespace: foo
spec:
  selector:
    matchLabels:
      app: tcp-echo
  action: DENY
  rules:
  - to:
    - operation:
        methods: ["GET"]
EOF
{{< /text >}}

1. Verify that requests to port 9000 are denied. This occurs because Istio doesn't understand the HTTP-only fields when creating a DENY rule for a TCP port and, due to its restrictive nature, it denies all traffic to the TCP ports:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" \
  -c curl -n foo -- sh -c \
  'echo "port 9000" | nc tcp-echo 9000' | grep "hello" && echo 'connection succeeded' || echo 'connection rejected'
connection rejected
{{< /text >}}

1. Verify that requests to port 9001 are denied for the same reason:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" \
  -c curl -n foo -- sh -c \
  'echo "port 9001" | nc tcp-echo 9001' | grep "hello" && echo 'connection succeeded' || echo 'connection rejected'
connection rejected
{{< /text >}}
1. Add a DENY policy with both TCP and HTTP fields using the following command:

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: tcp-policy
  namespace: foo
spec:
  selector:
    matchLabels:
      app: tcp-echo
  action: DENY
  rules:
  - to:
    - operation:
        methods: ["GET"]
        ports: ["9000"]
EOF
{{< /text >}}

1. Verify that requests to port 9000 are denied. This occurs because the request matches the `ports` in the above-mentioned DENY policy.

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" \
  -c curl -n foo -- sh -c \
  'echo "port 9000" | nc tcp-echo 9000' | grep "hello" && echo 'connection succeeded' || echo 'connection rejected'
connection rejected
{{< /text >}}

1. Verify that requests to port 9001 are allowed. This occurs because the requests do not match the `ports` in the DENY policy:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" \
  -c curl -n foo -- sh -c \
  'echo "port 9001" | nc tcp-echo 9001' | grep "hello" && echo 'connection succeeded' || echo 'connection rejected'
hello port 9001
connection succeeded
{{< /text >}}

## Clean up

Remove the namespace `foo`:

{{< text bash >}}
$ kubectl delete namespace foo
{{< /text >}}
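The behavior exercised in this task — ALLOW rules with HTTP-only fields are invalid for TCP and ignored, while a DENY rule keeps only its TCP fields and matches everything if none remain — can be sketched as a small decision function. This is a simplified model built from the outcomes above, not Istio's actual evaluation engine:

```python
HTTP_ONLY = {"methods", "paths", "hosts"}

def tcp_request_allowed(port, policies):
    """Simplified model of AuthorizationPolicy evaluation for TCP traffic.
    Each policy: {"action": "ALLOW"|"DENY", "rules": [{"ports": [...], ...}]}."""
    deny = [p for p in policies if p["action"] == "DENY"]
    allow = [p for p in policies if p["action"] == "ALLOW"]
    # DENY is evaluated first: HTTP-only fields are dropped; a rule left
    # with no TCP fields is treated conservatively and matches everything.
    for p in deny:
        for r in p["rules"]:
            tcp_fields = {k: v for k, v in r.items() if k not in HTTP_ONLY}
            if not tcp_fields or port in tcp_fields.get("ports", []):
                return False
    if not allow:
        return True  # no ALLOW policy selects the workload: allowed by default
    # An ALLOW rule containing an HTTP-only field is invalid for TCP: ignored.
    for p in allow:
        for r in p["rules"]:
            if HTTP_ONLY & r.keys():
                continue
            if port in r.get("ports", []):
                return True
    return False

allow_both = {"action": "ALLOW", "rules": [{"ports": [9000, 9001]}]}
print(tcp_request_allowed(9000, [allow_both]))  # True
print(tcp_request_allowed(9002, [allow_both]))  # False
bad_allow = {"action": "ALLOW", "rules": [{"ports": [9000], "methods": ["GET"]}]}
print(tcp_request_allowed(9000, [bad_allow]))   # False: invalid rule is ignored
deny_mixed = {"action": "DENY", "rules": [{"ports": [9000], "methods": ["GET"]}]}
print(tcp_request_allowed(9001, [deny_mixed]))  # True: only port 9000 is denied
```

Each printed case corresponds to one of the verification steps in this task: the plain ALLOW policy, the invalid `methods` ALLOW rule, and the DENY policy mixing TCP and HTTP fields.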
This task shows you how to set up an Istio authorization policy using a new value for the [action field](/docs/reference/config/security/authorization-policy/#AuthorizationPolicy-Action), `CUSTOM`, to delegate the access control to an external authorization system. This can be used to integrate with [OPA authorization](https://www.openpolicyagent.org/docs/envoy), [`oauth2-proxy`](https://github.com/oauth2-proxy/oauth2-proxy), your own custom external authorization server, and more.

## Before you begin

Before you begin this task, do the following:

* Read the [Istio authorization concepts](/docs/concepts/security/#authorization).

* Follow the [Istio installation guide](/docs/setup/install/istioctl/) to install Istio.

* Deploy test workloads: This task uses two workloads, `httpbin` and `curl`, both deployed in namespace `foo`. Both workloads run with an Envoy proxy sidecar. Deploy the `foo` namespace and workloads with the following command:

{{< text bash >}}
$ kubectl create ns foo
$ kubectl label ns foo istio-injection=enabled
$ kubectl apply -f @samples/httpbin/httpbin.yaml@ -n foo
$ kubectl apply -f @samples/curl/curl.yaml@ -n foo
{{< /text >}}

* Verify that `curl` can access `httpbin` with the following command:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "%{http_code}\n"
200
{{< /text >}}

{{< warning >}}
If you don't see the expected output as you follow the task, retry after a few seconds. Caching and propagation overhead can cause some delay.
{{< /warning >}}

## Deploy the external authorizer

First, you need to deploy the external authorizer. For this, you will simply deploy the sample external authorizer in a standalone pod in the mesh.
1. Run the following command to deploy the sample external authorizer:

{{< text bash >}}
$ kubectl apply -n foo -f {{< github_file >}}/samples/extauthz/ext-authz.yaml
service/ext-authz created
deployment.apps/ext-authz created
{{< /text >}}

1. Verify the sample external authorizer is up and running:

{{< text bash >}}
$ kubectl logs "$(kubectl get pod -l app=ext-authz -n foo -o jsonpath={.items..metadata.name})" -n foo -c ext-authz
2021/01/07 22:55:47 Starting HTTP server at [::]:8000
2021/01/07 22:55:47 Starting gRPC server at [::]:9000
{{< /text >}}

Alternatively, you can deploy the external authorizer as a separate container in the same pod as the application that needs the external authorization, or even deploy it outside of the mesh. In either case, you also need to create a service entry resource to register the service in the mesh and make sure it is accessible to the proxy. The following is an example service entry for an external authorizer deployed in a separate container in the same pod as the application that needs the external authorization.

{{< text yaml >}}
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: external-authz-grpc-local
spec:
  hosts:
  - "external-authz-grpc.local" # The service name to be used in the extension provider in the mesh config.
  endpoints:
  - address: "127.0.0.1"
  ports:
  - name: grpc
    number: 9191 # The port number to be used in the extension provider in the mesh config.
    protocol: GRPC
  resolution: STATIC
{{< /text >}}

## Define the external authorizer

In order to use the `CUSTOM` action in the authorization policy, you must define the external authorizer that is allowed to be used in the mesh. This is currently defined in the [extension provider](https://github.com/istio/api/blob/a205c627e4b955302bbb77dd837c8548e89e6e64/mesh/v1alpha1/config.proto#L534) in the mesh config.
Currently, the only supported extension provider type is the [Envoy `ext_authz`](https://www.envoyproxy.io/docs/envoy/v1.16.2/intro/arch_overview/security/ext_authz_filter) provider. The external authorizer must implement the corresponding Envoy `ext_authz` check API. In this task, you will use a [sample external authorizer]({{< github_tree >}}/samples/extauthz) which allows requests with the header `x-ext-authz: allow`.

1. Edit the mesh config with the following command:

{{< text bash >}}
$ kubectl edit configmap istio -n istio-system
{{< /text >}}

1. In the editor, add the extension provider definitions shown below:
The following content defines two external providers, `sample-ext-authz-grpc` and `sample-ext-authz-http`, using the same service `ext-authz.foo.svc.cluster.local`. The service implements both the HTTP and gRPC check API as defined by the Envoy `ext_authz` filter. You deployed the service in the previous step.

{{< text yaml >}}
data:
  mesh: |-
    # Add the following content to define the external authorizers.
    extensionProviders:
    - name: "sample-ext-authz-grpc"
      envoyExtAuthzGrpc:
        service: "ext-authz.foo.svc.cluster.local"
        port: "9000"
    - name: "sample-ext-authz-http"
      envoyExtAuthzHttp:
        service: "ext-authz.foo.svc.cluster.local"
        port: "8000"
        includeRequestHeadersInCheck: ["x-ext-authz"]
{{< /text >}}

Alternatively, you can modify the extension provider to control the behavior of the `ext_authz` filter for things like which headers to send to the external authorizer, which headers to send to the application backend, the status to return on error, and more. For example, the following defines an extension provider that can be used with [`oauth2-proxy`](https://github.com/oauth2-proxy/oauth2-proxy):

{{< text yaml >}}
data:
  mesh: |-
    extensionProviders:
    - name: "oauth2-proxy"
      envoyExtAuthzHttp:
        service: "oauth2-proxy.foo.svc.cluster.local"
        port: "4180" # The default port used by oauth2-proxy.
        includeRequestHeadersInCheck: ["authorization", "cookie"] # headers sent to the oauth2-proxy in the check request.
        headersToUpstreamOnAllow: ["authorization", "path", "x-auth-request-user", "x-auth-request-email", "x-auth-request-access-token"] # headers sent to backend application when request is allowed.
        headersToDownstreamOnAllow: ["set-cookie"] # headers sent back to the client when request is allowed.
        headersToDownstreamOnDeny: ["content-type", "set-cookie"] # headers sent back to the client when request is denied.
{{< /text >}}

## Enable with external authorization

The external authorizer is now ready to be used by the authorization policy.

1. Enable the external authorization with the following command, which applies an authorization policy with the `CUSTOM` action value for the `httpbin` workload. The policy enables the external authorization for requests to path `/headers` using the external authorizer defined by `sample-ext-authz-grpc`.

{{< text bash >}}
$ kubectl apply -n foo -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: ext-authz
spec:
  selector:
    matchLabels:
      app: httpbin
  action: CUSTOM
  provider:
    name: sample-ext-authz-grpc
  rules:
  - to:
    - operation:
        paths: ["/headers"]
EOF
{{< /text >}}

At runtime, requests to path `/headers` of the `httpbin` workload will be paused by the `ext_authz` filter, and a check request will be sent to the external authorizer to decide whether the request should be allowed or denied.

1. Verify a request to path `/headers` with header `x-ext-authz: deny` is denied by the sample `ext_authz` server:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/headers" -H "x-ext-authz: deny" -s
denied by ext_authz for not found header `x-ext-authz: allow` in the request
{{< /text >}}

1. Verify a request to path `/headers` with header `x-ext-authz: allow` is allowed by the sample `ext_authz` server:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/headers" -H "x-ext-authz: allow" -s | jq '.headers'
...
"X-Ext-Authz-Check-Result": [
  "allowed"
],
...
{{< /text >}}
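The decision rule of the sample `ext_authz` server exercised by these two requests can be sketched as a single header check. This is a simplified model of the sample's behavior, not its actual source:

```python
def ext_authz_check(headers: dict) -> tuple[bool, str]:
    """Simplified model of the sample ext-authz server's decision:
    allow only requests carrying the header `x-ext-authz: allow`."""
    if headers.get("x-ext-authz") == "allow":
        return True, "allowed"
    return False, ("denied by ext_authz for not found header "
                   "`x-ext-authz: allow` in the request")

print(ext_authz_check({"x-ext-authz": "allow"})[1])  # allowed
print(ext_authz_check({"x-ext-authz": "deny"})[0])   # False
```

A real authorizer would inspect whatever the extension provider forwards via `includeRequestHeadersInCheck`, which is why that list must contain `x-ext-authz` for the HTTP provider above.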
1. Verify a request to path `/ip` is allowed and does not trigger the external authorization:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/ip" -s -o /dev/null -w "%{http_code}\n"
200
{{< /text >}}

1. Check the log of the sample `ext_authz` server to confirm it was called twice (for the two requests). The first one was denied and the second one was allowed:
{{< text bash >}}
$ kubectl logs "$(kubectl get pod -l app=ext-authz -n foo -o jsonpath={.items..metadata.name})" -n foo -c ext-authz
2021/01/07 22:55:47 Starting HTTP server at [::]:8000
2021/01/07 22:55:47 Starting gRPC server at [::]:9000
2021/01/08 03:25:00 [gRPCv3][denied]: httpbin.foo:8000/headers, attributes: source:{address:{socket_address:{address:"10.44.0.22" port_value:52088}} principal:"spiffe://cluster.local/ns/foo/sa/curl"} destination:{address:{socket_address:{address:"10.44.3.30" port_value:80}} principal:"spiffe://cluster.local/ns/foo/sa/httpbin"} request:{time:{seconds:1610076306 nanos:473835000} http:{id:"13869142855783664817" method:"GET" headers:{key:":authority" value:"httpbin.foo:8000"} headers:{key:":method" value:"GET"} headers:{key:":path" value:"/headers"} headers:{key:"accept" value:"*/*"} headers:{key:"content-length" value:"0"} headers:{key:"user-agent" value:"curl/7.74.0-DEV"} headers:{key:"x-b3-sampled" value:"1"} headers:{key:"x-b3-spanid" value:"377ba0cdc2334270"} headers:{key:"x-b3-traceid" value:"635187cb20d92f62377ba0cdc2334270"} headers:{key:"x-envoy-attempt-count" value:"1"} headers:{key:"x-ext-authz" value:"deny"} headers:{key:"x-forwarded-client-cert" value:"By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=dd14782fa2f439724d271dbed846ef843ff40d3932b615da650d028db655fc8d;Subject=\"\";URI=spiffe://cluster.local/ns/foo/sa/curl"} headers:{key:"x-forwarded-proto" value:"http"} headers:{key:"x-request-id" value:"9609691a-4e9b-9545-ac71-3889bc2dffb0"} path:"/headers" host:"httpbin.foo:8000" protocol:"HTTP/1.1"}} metadata_context:{}
2021/01/08 03:25:06 [gRPCv3][allowed]: httpbin.foo:8000/headers, attributes: source:{address:{socket_address:{address:"10.44.0.22" port_value:52184}} principal:"spiffe://cluster.local/ns/foo/sa/curl"} destination:{address:{socket_address:{address:"10.44.3.30" port_value:80}}
principal:"spiffe://cluster.local/ns/foo/sa/httpbin"} request:{time:{seconds:1610076300 nanos:925912000} http:{id:"17995949296433813435" method:"GET" headers:{key:":authority" value:"httpbin.foo:8000"} headers:{key:":method" value:"GET"} headers:{key:":path" value:"/headers"} headers:{key:"accept" value:"*/*"} headers:{key:"content-length" value:"0"} headers:{key:"user-agent" value:"curl/7.74.0-DEV"} headers:{key:"x-b3-sampled" value:"1"} headers:{key:"x-b3-spanid" value:"a66b5470e922fa80"} headers:{key:"x-b3-traceid" value:"300c2f2b90a618c8a66b5470e922fa80"} headers:{key:"x-envoy-attempt-count" value:"1"} headers:{key:"x-ext-authz" value:"allow"} headers:{key:"x-forwarded-client-cert" value:"By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=dd14782fa2f439724d271dbed846ef843ff40d3932b615da650d028db655fc8d;Subject=\"\";URI=spiffe://cluster.local/ns/foo/sa/curl"} headers:{key:"x-forwarded-proto" value:"http"} headers:{key:"x-request-id" value:"2b62daf1-00b9-97d9-91b8-ba6194ef58a4"} path:"/headers" host:"httpbin.foo:8000" protocol:"HTTP/1.1"}} metadata_context:{}
{{< /text >}}

You can also tell from the log that mTLS is enabled for the connection between the `ext-authz` filter and the sample `ext-authz` server, because the source principal is populated with the value `spiffe://cluster.local/ns/foo/sa/curl`.

You can now apply another authorization policy for the sample `ext-authz` server to control who is allowed to access it.

## Clean up

1. Remove the namespace `foo` from your configuration:

{{< text bash >}}
$ kubectl delete namespace foo
{{< /text >}}

1. Remove the extension provider definition from the mesh config.

## Performance expectations

See [performance benchmarking](https://github.com/istio/tools/tree/master/perf/benchmark/configs/istio/ext_authz).
This task shows you how to enforce IP-based access control on an Istio ingress gateway using an authorization policy.

{{< boilerplate gateway-api-support >}}

## Before you begin

Before you begin this task, do the following:

* Read the [Istio authorization concepts](/docs/concepts/security/#authorization).

* Install Istio using the [Istio installation guide](/docs/setup/install/istioctl/).

* Deploy a workload, `httpbin`, in namespace `foo` with sidecar injection enabled:

{{< text bash >}}
$ kubectl create ns foo
$ kubectl label namespace foo istio-injection=enabled
$ kubectl apply -f @samples/httpbin/httpbin.yaml@ -n foo
{{< /text >}}

* Expose `httpbin` through an ingress gateway:

{{< tabset category-name="config-api" >}}

{{< tab name="Istio APIs" category-value="istio-apis" >}}

Configure the gateway:

{{< text bash >}}
$ kubectl apply -f @samples/httpbin/httpbin-gateway.yaml@ -n foo
{{< /text >}}

Turn on RBAC debugging in Envoy for the ingress gateway:

{{< text bash >}}
$ kubectl get pods -n istio-system -o name -l istio=ingressgateway | sed 's|pod/||' | while read -r pod; do istioctl proxy-config log "$pod" -n istio-system --level rbac:debug; done
{{< /text >}}

Follow the instructions in [Determining the ingress IP and ports](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports) to define the `INGRESS_PORT` and `INGRESS_HOST` environment variables.
{{< /tab >}}

{{< tab name="Gateway API" category-value="gateway-api" >}}

Create the gateway:

{{< text bash >}}
$ kubectl apply -f @samples/httpbin/gateway-api/httpbin-gateway.yaml@ -n foo
{{< /text >}}

Wait for the gateway to be ready:

{{< text bash >}}
$ kubectl wait --for=condition=programmed gtw -n foo httpbin-gateway
{{< /text >}}

Turn on RBAC debugging in Envoy for the ingress gateway:

{{< text bash >}}
$ kubectl get pods -n foo -o name -l gateway.networking.k8s.io/gateway-name=httpbin-gateway | sed 's|pod/||' | while read -r pod; do istioctl proxy-config log "$pod" -n foo --level rbac:debug; done
{{< /text >}}

Set the `INGRESS_PORT` and `INGRESS_HOST` environment variables:

{{< text bash >}}
$ export INGRESS_HOST=$(kubectl get gtw httpbin-gateway -n foo -o jsonpath='{.status.addresses[0].value}')
$ export INGRESS_PORT=$(kubectl get gtw httpbin-gateway -n foo -o jsonpath='{.spec.listeners[?(@.name=="http")].port}')
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

* Verify that the `httpbin` workload and ingress gateway are working as expected using this command:

{{< text bash >}}
$ curl "$INGRESS_HOST:$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
200
{{< /text >}}

{{< warning >}}
If you don't see the expected output, retry after a few seconds. Caching and propagation overhead can cause a delay.
{{< /warning >}}

## Getting traffic into Kubernetes and Istio

All methods of getting traffic into Kubernetes involve opening a port on all worker nodes. The main features that accomplish this are the `NodePort` service and the `LoadBalancer` service. Even the Kubernetes `Ingress` resource must be backed by an ingress controller that creates either a `NodePort` or a `LoadBalancer` service.

* A `NodePort` just opens up a port in the range 30000-32767 on each worker node and uses a label selector to identify which Pods to send the traffic to. You have to manually create some kind of load balancer in front of your worker nodes or use round-robin DNS.
\* A `LoadBalancer` is just like a `NodePort`, except it also creates an environment specific external load balancer to handle distributing traffic to the worker nodes. For example, in AWS EKS, the `LoadBalancer` service will create a Classic ELB with your worker nodes as targets. If your Kubernetes environment does not have a `LoadBalancer` implementation, then it will just behave like a `NodePort`. An Istio ingress gateway creates
a `LoadBalancer` service. What if the Pod that is handling traffic from the `NodePort` or `LoadBalancer` isn't running on the worker node that received the traffic? Kubernetes has its own internal proxy called kube-proxy that receives the packets and forwards them to the correct node. ## Source IP address of the original client If a packet goes through an external proxy load balancer and/or kube-proxy, then the original source IP address of the client is lost. The following subsections describe some strategies for preserving the original client IP for logging or security purpose for different load balancer types: 1. [TCP/UDP Proxy Load Balancer](#tcp-proxy) 1. [Network Load Balancer](#network) 1. [HTTP/HTTPS Load Balancer](#http-https) For reference, here are the types of load balancers created by Istio with a `LoadBalancer` service on popular managed Kubernetes environments: |Cloud Provider | Load Balancer Name | Load Balancer Type ----------------|-------------------------------|------------------- |AWS EKS | Classic Elastic Load Balancer | TCP Proxy |GCP GKE | TCP/UDP Network Load Balancer | Network |Azure AKS | Azure Load Balancer | Network |IBM IKS/ROKS | Network Load Balancer | Network |DO DOKS | Load Balancer | Network {{< tip >}} You can instruct AWS EKS to create a Network Load Balancer with an annotation on the gateway service: {{< tabset category-name="config-api" >}} {{< tab name="Istio APIs" category-value="istio-apis" >}} {{< text yaml >}} apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: meshConfig: accessLogEncoding: JSON accessLogFile: /dev/stdout components: ingressGateways: - enabled: true k8s: hpaSpec: maxReplicas: 10 minReplicas: 5 serviceAnnotations: service.beta.kubernetes.io/aws-load-balancer-type: "nlb" {{< /text >}} {{< /tab >}} {{< tab name="Gateway API" category-value="gateway-api" >}} {{< text yaml >}} apiVersion: gateway.networking.k8s.io/v1 kind: Gateway metadata: name: httpbin-gateway annotations: 
service.beta.kubernetes.io/aws-load-balancer-type: "nlb" spec: gatewayClassName: istio ... {{< /text >}} {{< /tab >}} {{< /tabset >}} {{< /tip >}} ### TCP/UDP Proxy Load Balancer {#tcp-proxy} If you are using a TCP/UDP Proxy external load balancer (AWS Classic ELB), it can use the [PROXY Protocol](https://www.haproxy.com/blog/haproxy/proxy-protocol/) to embed the original client IP address in the packet data. Both the external load balancer and the Istio ingress gateway must support the PROXY protocol for it to work. Here is a sample configuration that shows how to make an ingress gateway on AWS EKS support the PROXY Protocol: {{< tabset category-name="config-api" >}} {{< tab name="Istio APIs" category-value="istio-apis" >}} {{< text yaml >}} apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: meshConfig: accessLogEncoding: JSON accessLogFile: /dev/stdout defaultConfig: gatewayTopology: proxyProtocol: {} components: ingressGateways: - enabled: true name: istio-ingressgateway k8s: hpaSpec: maxReplicas: 10 minReplicas: 5 serviceAnnotations: service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "\*" ... {{< /text >}} {{< /tab >}} {{< tab name="Gateway API" category-value="gateway-api" >}} {{< text yaml >}} apiVersion: gateway.networking.k8s.io/v1 kind: Gateway metadata: name: httpbin-gateway annotations: service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "\*" proxy.istio.io/config: '{"gatewayTopology" : { "proxyProtocol": {} }}' spec: gatewayClassName: istio ... 
--- apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: httpbin-gateway spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: httpbin-gateway-istio minReplicas: 5 maxReplicas: 10 {{< /text >}} {{< /tab >}} {{< /tabset >}} ### Network Load Balancer {#network} If you are using a TCP/UDP network load balancer that preserves the client IP address (AWS Network Load Balancer, GCP External Network Load Balancer, Azure Load Balancer) or you are using Round-Robin DNS, then you can use the `externalTrafficPolicy: Local` setting to also preserve the client IP inside Kubernetes by bypassing kube-proxy and preventing it from sending traffic to other nodes. {{< warning >}} For production deployments it is strongly recommended to \*\*deploy
an ingress gateway pod to multiple nodes\*\* if you enable `externalTrafficPolicy: Local`. Otherwise, this creates a situation where \*\*only\*\* nodes with an active ingress gateway pod will be able to accept and distribute incoming NLB traffic to the rest of the cluster, creating potential ingress traffic bottlenecks and reduced internal load balancing capability, or even complete loss of ingress traffic to the cluster if the subset of nodes with ingress gateway pods go down. See [Source IP for Services with `Type=NodePort`](https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport) for more information. {{< /warning >}} Update the ingress gateway to set `externalTrafficPolicy: Local` to preserve the original client source IP on the ingress gateway using the following command: {{< tabset category-name="config-api" >}} {{< tab name="Istio APIs" category-value="istio-apis" >}} {{< text bash >}} $ kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}' {{< /text >}} {{< /tab >}} {{< tab name="Gateway API" category-value="gateway-api" >}} {{< text bash >}} $ kubectl patch svc httpbin-gateway-istio -n foo -p '{"spec":{"externalTrafficPolicy":"Local"}}' {{< /text >}} {{< /tab >}} {{< /tabset >}} ### HTTP/HTTPS Load Balancer {#http-https} If you are using an HTTP/HTTPS external load balancer (AWS ALB, GCP ), it can put the original client IP address in the X-Forwarded-For header. Istio can extract the client IP address from this header with some configuration. See [Configuring Gateway Network Topology](/docs/ops/configuration/traffic-management/network-topologies/). 
A quick example if you are using a single load balancer in front of Kubernetes: {{< text yaml >}} apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: meshConfig: accessLogEncoding: JSON accessLogFile: /dev/stdout defaultConfig: gatewayTopology: numTrustedProxies: 1 {{< /text >}} ## IP-based allow list and deny list \*\*When to use `ipBlocks` vs. `remoteIpBlocks`:\*\* If you are using the X-Forwarded-For HTTP header or the PROXY Protocol to determine the original client IP address, then you should use `remoteIpBlocks` in your `AuthorizationPolicy`. If you are using `externalTrafficPolicy: Local`, then you should use `ipBlocks` in your `AuthorizationPolicy`. |Load Balancer Type |Source of Client IP | `ipBlocks` vs. `remoteIpBlocks` --------------------|----------------------|--------------------------- | TCP Proxy | PROXY Protocol | `remoteIpBlocks` | Network | packet source address| `ipBlocks` | HTTP/HTTPS | X-Forwarded-For | `remoteIpBlocks` \* The following command creates the authorization policy, `ingress-policy`, for the Istio ingress gateway. The policy sets the `action` field to `ALLOW` to allow the IP addresses specified in `ipBlocks` to access the ingress gateway. IP addresses not in the list will be denied. `ipBlocks` supports both single IP addresses and CIDR notation.
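The policy bodies in the commands below are elided in this extract. As a minimal, non-authoritative sketch, an `ALLOW` policy of this shape could look as follows — the `istio=ingressgateway` selector label and the addresses are illustrative assumptions, not values from this task:

```yaml
# Sketch of an ALLOW policy for the ingress gateway (illustrative values).
# Only clients whose source address matches ipBlocks may reach the gateway;
# every other address receives 403.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: ingress-policy
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway   # assumed default gateway pod label
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["1.2.3.4", "5.6.7.0/24"]   # single IP and CIDR forms
```

For the `remoteIpBlocks` variant, the same rule uses `remoteIpBlocks` instead of `ipBlocks`, so it matches the original client address recovered from the PROXY protocol or `X-Forwarded-For` rather than the packet source address.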
{{< tabset category-name="config-api" >}} {{< tab name="Istio APIs" category-value="istio-apis" >}} \*\*\*ipBlocks:\*\*\* {{< text bash >}} $ kubectl apply -f - <}} \*\*\*remoteIpBlocks:\*\*\* {{< text bash >}} $ kubectl apply -f - <}} {{< /tab >}} {{< tab name="Gateway API" category-value="gateway-api" >}} \*\*\*ipBlocks:\*\*\* {{< text bash >}} $ kubectl apply -f - <}} \*\*\*remoteIpBlocks:\*\*\* {{< text bash >}} $ kubectl apply -f - <}} {{< /tab >}} {{< /tabset >}} \* Verify that a request to the ingress gateway is denied: {{< text bash >}} $ curl "$INGRESS\_HOST:$INGRESS\_PORT"/headers -s -o /dev/null -w "%{http\_code}\n" 403 {{< /text >}} \* Assign your original client IP address to an env variable. If you don't know it, you can find it in the Envoy logs using the following command: {{< tabset category-name="config-api" >}} {{< tab name="Istio APIs" category-value="istio-apis" >}} \*\*\*ipBlocks:\*\*\* {{< text bash >}} $ CLIENT\_IP=$(kubectl get pods -n istio-system -o name -l istio=ingressgateway | sed 's|pod/||' | while read -r pod; do kubectl
logs "$pod" -n istio-system | grep remoteIP; done | tail -1 | awk -F, '{print $3}' | awk -F: '{print $2}' | sed 's/ //') && echo "$CLIENT\_IP" 192.168.10.15 {{< /text >}} \*\*\*remoteIpBlocks:\*\*\* {{< text bash >}} $ CLIENT\_IP=$(kubectl get pods -n istio-system -o name -l istio=ingressgateway | sed 's|pod/||' | while read -r pod; do kubectl logs "$pod" -n istio-system | grep remoteIP; done | tail -1 | awk -F, '{print $4}' | awk -F: '{print $2}' | sed 's/ //') && echo "$CLIENT\_IP" 192.168.10.15 {{< /text >}} {{< /tab >}} {{< tab name="Gateway API" category-value="gateway-api" >}} \*\*\*ipBlocks:\*\*\* {{< text bash >}} $ CLIENT\_IP=$(kubectl get pods -n foo -o name -l gateway.networking.k8s.io/gateway-name=httpbin-gateway | sed 's|pod/||' | while read -r pod; do kubectl logs "$pod" -n foo | grep remoteIP; done | tail -1 | awk -F, '{print $3}' | awk -F: '{print $2}' | sed 's/ //') && echo "$CLIENT\_IP" 192.168.10.15 {{< /text >}} \*\*\*remoteIpBlocks:\*\*\* {{< text bash >}} $ CLIENT\_IP=$(kubectl get pods -n foo -o name -l gateway.networking.k8s.io/gateway-name=httpbin-gateway | sed 's|pod/||' | while read -r pod; do kubectl logs "$pod" -n foo | grep remoteIP; done | tail -1 | awk -F, '{print $4}' | awk -F: '{print $2}' | sed 's/ //') && echo "$CLIENT\_IP" 192.168.10.15 {{< /text >}} {{< /tab >}} {{< /tabset >}} \* Update the `ingress-policy` to include your client IP address: {{< tabset category-name="config-api" >}} {{< tab name="Istio APIs" category-value="istio-apis" >}} \*\*\*ipBlocks:\*\*\* {{< text bash >}} $ kubectl apply -f - <}} \*\*\*remoteIpBlocks:\*\*\* {{< text bash >}} $ kubectl apply -f - <}} {{< /tab >}} {{< tab name="Gateway API" category-value="gateway-api" >}} \*\*\*ipBlocks:\*\*\* {{< text bash >}} $ kubectl apply -f - <}} \*\*\*remoteIpBlocks:\*\*\* {{< text bash >}} $ kubectl apply -f - <}} {{< /tab >}} {{< /tabset >}} \* Verify that a request to the ingress gateway is allowed: {{< text bash >}} $ curl 
"$INGRESS\_HOST:$INGRESS\_PORT"/headers -s -o /dev/null -w "%{http\_code}\n" 200 {{< /text >}} \* Update the `ingress-policy` authorization policy to set the `action` key to `DENY` so that the IP addresses specified in the `ipBlocks` are not allowed to access the ingress gateway: {{< tabset category-name="config-api" >}} {{< tab name="Istio APIs" category-value="istio-apis" >}} \*\*\*ipBlocks:\*\*\* {{< text bash >}} $ kubectl apply -f - <}} \*\*\*remoteIpBlocks:\*\*\* {{< text bash >}} $ kubectl apply -f - <}} {{< /tab >}} {{< tab name="Gateway API" category-value="gateway-api" >}} \*\*\*ipBlocks:\*\*\* {{< text bash >}} $ kubectl apply -f - <}} \*\*\*remoteIpBlocks:\*\*\* {{< text bash >}} $ kubectl apply -f - <}} {{< /tab >}} {{< /tabset >}} \* Verify that a request to the ingress gateway is denied: {{< text bash >}} $ curl "$INGRESS\_HOST:$INGRESS\_PORT"/headers -s -o /dev/null -w "%{http\_code}\n" 403 {{< /text >}} \* You could use an online proxy service to access the ingress gateway using a different client IP to verify the request is allowed. 
\* If you are not getting the responses you expect, view the ingress gateway logs, which should show RBAC debugging information: {{< tabset category-name="config-api" >}} {{< tab name="Istio APIs" category-value="istio-apis" >}} {{< text bash >}} $ kubectl get pods -n istio-system -o name -l istio=ingressgateway | sed 's|pod/||' | while read -r pod; do kubectl logs "$pod" -n istio-system; done {{< /text >}} {{< /tab >}} {{< tab name="Gateway API" category-value="gateway-api" >}} {{< text bash >}}
$ kubectl get pods -n foo -o name -l gateway.networking.k8s.io/gateway-name=httpbin-gateway | sed 's|pod/||' | while read -r pod; do kubectl logs "$pod" -n foo; done {{< /text >}} {{< /tab >}} {{< /tabset >}} ## Clean up \* Remove the authorization policy: {{< tabset category-name="config-api" >}} {{< tab name="Istio APIs" category-value="istio-apis" >}} {{< text bash >}} $ kubectl delete authorizationpolicy ingress-policy -n istio-system {{< /text >}} {{< /tab >}} {{< tab name="Gateway API" category-value="gateway-api" >}} {{< text bash >}} $ kubectl delete authorizationpolicy ingress-policy -n foo {{< /text >}} {{< /tab >}} {{< /tabset >}} \* Remove the namespace `foo`: {{< text bash >}} $ kubectl delete namespace foo {{< /text >}}
{{< boilerplate alpha >}} This task shows you how to set up an Istio authorization policy using a new [experimental annotation `istio.io/dry-run`](/docs/reference/config/annotations/) to dry-run the policy without actually enforcing it. The dry-run annotation allows you to better understand the effect of an authorization policy before applying it to production traffic. This helps reduce the risk of an incorrect authorization policy breaking production traffic. ## Before you begin Before you begin this task, do the following: \* Read the [Istio authorization concepts](/docs/concepts/security/#authorization). \* Follow the [Istio installation guide](/docs/setup/install) to install Istio. \* Deploy Zipkin for checking dry-run tracing results. Follow the [Zipkin task](/docs/tasks/observability/distributed-tracing/zipkin/) to install Zipkin in the cluster. \* Deploy Prometheus for checking dry-run metric results. Follow the [Prometheus task](/docs/tasks/observability/metrics/querying-metrics/) to install Prometheus in the cluster. \* Deploy test workloads: This task uses two workloads, `httpbin` and `curl`, both deployed in namespace `foo`. Both workloads run with an Envoy proxy sidecar.
Create the `foo` namespace and deploy the workloads with the following command: {{< text bash >}} $ kubectl create ns foo $ kubectl label ns foo istio-injection=enabled $ kubectl apply -f @samples/httpbin/httpbin.yaml@ -n foo $ kubectl apply -f @samples/curl/curl.yaml@ -n foo {{< /text >}} \* Enable the proxy debug-level log for checking dry-run logging results: {{< text bash >}} $ istioctl proxy-config log deploy/httpbin.foo --level "rbac:debug" | grep rbac rbac: debug {{< /text >}} \* Verify that `curl` can access `httpbin` with the following command: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "%{http\_code}\n" 200 {{< /text >}} {{< warning >}} If you don't see the expected output as you follow the task, retry after a few seconds. Caching and propagation overhead can cause some delay. {{< /warning >}} ## Create dry-run policy 1. Create an authorization policy with the dry-run annotation `"istio.io/dry-run": "true"` with the following command: {{< text bash >}} $ kubectl apply -n foo -f - <}} You can also use the following command to quickly change an existing authorization policy to dry-run mode: {{< text bash >}} $ kubectl annotate --overwrite authorizationpolicies deny-path-headers -n foo istio.io/dry-run='true' {{< /text >}} 1. Verify that a request to path `/headers` is allowed because the policy was created in dry-run mode. Run the following command to send 20 requests from `curl` to `httpbin`; each request includes the header `X-B3-Sampled: 1` to always trigger Zipkin tracing: {{< text bash >}} $ for i in {1..20}; do kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl http://httpbin.foo:8000/headers -H "X-B3-Sampled: 1" -s -o /dev/null -w "%{http\_code}\n"; done 200 200 200 ...
{{< /text >}} ## Check dry-run result in proxy log The dry-run results can be found in the proxy debug log in the format of `shadow denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0]`. Run the following command to check the log: {{< text bash >}} $ kubectl logs "$(kubectl -n foo -l app=httpbin get pods -o jsonpath={.items..metadata.name})" -c istio-proxy -n foo | grep "shadow denied" 2021-11-19T20:20:48.733099Z debug envoy rbac shadow denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0] 2021-11-19T20:21:45.502199Z debug envoy rbac shadow denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0] 2021-11-19T20:22:33.065348Z debug envoy rbac shadow denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0] ... {{< /text >}} Also see the [troubleshooting guide](/docs/ops/common-problems/security-issues/#ensure-proxies-enforce-policies-correctly) for
more details of the logging. ## Check dry-run result in metric using Prometheus 1. Open the Prometheus dashboard with the following command: {{< text bash >}} $ istioctl dashboard prometheus {{< /text >}} 1. In the Prometheus dashboard, search for the following metric: {{< text plain >}} envoy\_http\_inbound\_0\_0\_0\_0\_80\_rbac{authz\_dry\_run\_action="deny",authz\_dry\_run\_result="denied"} {{< /text >}} 1. Verify the queried metric result as follows: {{< text plain >}} envoy\_http\_inbound\_0\_0\_0\_0\_80\_rbac{app="httpbin",authz\_dry\_run\_action="deny",authz\_dry\_run\_result="denied",instance="10.44.1.11:15020",istio\_io\_rev="default",job="kubernetes-pods",kubernetes\_namespace="foo",kubernetes\_pod\_name="httpbin-74fb669cc6-95qm8",pod\_template\_hash="74fb669cc6",security\_istio\_io\_tlsMode="istio",service\_istio\_io\_canonical\_name="httpbin",service\_istio\_io\_canonical\_revision="v1",version="v1"} 20 {{< /text >}} 1. The queried metric has value `20` (you might find a different value depending on how many requests you have sent; any value greater than 0 is expected). This means the dry-run policy applied to the `httpbin` workload on port `80` matched the requests. The policy would have rejected these requests if it were not in dry-run mode. 1. The following is a screenshot of the Prometheus dashboard: {{< image width="100%" link="./prometheus.png" caption="Prometheus dashboard" >}} ## Check dry-run result in tracing using Zipkin 1. Open the Zipkin dashboard with the following command: {{< text bash >}} $ istioctl dashboard zipkin {{< /text >}} 1. Find the trace result for the request from `curl` to `httpbin`. Try to send some more requests if you do not see the trace result, due to the delay in Zipkin. 1.
In the trace result, you should find the following custom tags indicating the request is rejected by the dry-run policy `deny-path-headers` in the namespace `foo`: {{< text plain >}} istio.authorization.dry\_run.deny\_policy.name: ns[foo]-policy[deny-path-headers]-rule[0] istio.authorization.dry\_run.deny\_policy.result: denied {{< /text >}} 1. The following is a screenshot of the Zipkin dashboard: {{< image width="100%" link="./trace.png" caption="Zipkin dashboard" >}} ## Summary The proxy debug log, Prometheus metric and Zipkin trace results indicate that the dry-run policy would reject the request. You can further change the policy if the dry-run result is not what you expect. It's recommended to keep the dry-run policy for some additional time so that it can be tested with more production traffic. When you are confident about the dry-run result, you can disable the dry-run mode so that the policy will start to actually reject requests. This can be achieved by either of the following approaches: \* Remove the dry-run annotation completely; or \* Change the value of the dry-run annotation to `false`. ## Limitations The dry-run annotation is currently in the experimental stage and has the following limitations: \* The dry-run annotation currently only supports ALLOW and DENY policies; \* There will be two separate dry-run results (i.e. log, metric and tracing tag) for ALLOW and DENY policies, because the ALLOW and DENY policies are enforced separately in the proxy. You should take both dry-run results into consideration, because a request could be allowed by an ALLOW policy but still rejected by another DENY policy; \* The dry-run results in the proxy log, metric and tracing are for manual troubleshooting purposes and should not be used as an API, because they may change anytime without prior notice. ## Clean up 1. Remove the namespace `foo` from your configuration: {{< text bash >}} $ kubectl delete namespace foo {{< /text >}} 1.
Remove Prometheus and Zipkin if no longer needed.
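The dry-run policy created at the start of this task is elided in this extract. Based on the behavior described (shadow-denying requests to path `/headers` on `httpbin` in namespace `foo`), a minimal sketch could look as follows — the rule shape is an assumption inferred from the surrounding text:

```yaml
# Sketch of a dry-run DENY policy: the istio.io/dry-run annotation makes Envoy
# report "shadow denied" in logs/metrics/traces instead of enforcing the denial.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: deny-path-headers
  namespace: foo
  annotations:
    "istio.io/dry-run": "true"
spec:
  selector:
    matchLabels:
      app: httpbin
  action: DENY
  rules:
  - to:
    - operation:
        paths: ["/headers"]
```

Removing the annotation, or setting it to `false`, turns the same policy into an enforcing one.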
This task shows you how to migrate from one trust domain to another without changing authorization policy. In Istio 1.4, we introduce an alpha feature to support {{< gloss >}}trust domain migration{{< /gloss >}} for authorization policy. This means if an Istio mesh needs to change its {{< gloss >}}trust domain{{< /gloss >}}, the authorization policy doesn't need to be changed manually. In Istio, if a {{< gloss >}}workload{{< /gloss >}} is running in namespace `foo` with the service account `bar`, and the trust domain of the system is `my-td`, the identity of said workload is `spiffe://my-td/ns/foo/sa/bar`. By default, the Istio mesh trust domain is `cluster.local`, unless you specify it during the installation. ## Before you begin Before you begin this task, do the following: 1. Read the [Istio authorization concepts](/docs/concepts/security/#authorization). 1. Install Istio with a custom trust domain and mutual TLS enabled. {{< text bash >}} $ istioctl install --set profile=demo --set meshConfig.trustDomain=old-td {{< /text >}} 1. Deploy the [httpbin]({{< github\_tree >}}/samples/httpbin) sample in the `default` namespace and the [curl]({{< github\_tree >}}/samples/curl) sample in the `default` and `curl-allow` namespaces: {{< text bash >}} $ kubectl label namespace default istio-injection=enabled $ kubectl apply -f @samples/httpbin/httpbin.yaml@ $ kubectl apply -f @samples/curl/curl.yaml@ $ kubectl create namespace curl-allow $ kubectl label namespace curl-allow istio-injection=enabled $ kubectl apply -f @samples/curl/curl.yaml@ -n curl-allow {{< /text >}} 1. Apply the authorization policy below to deny all requests to `httpbin` except from `curl` in the `curl-allow` namespace. {{< text bash >}} $ kubectl apply -f - <}} Notice that it may take tens of seconds for the authorization policy to be propagated to the sidecars. 1. Verify that requests to `httpbin` from: \* `curl` in the `default` namespace are denied.
{{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -o jsonpath={.items..metadata.name})" -c curl -- curl http://httpbin.default:8000/ip -sS -o /dev/null -w "%{http\_code}\n" 403 {{< /text >}} \* `curl` in the `curl-allow` namespace are allowed. {{< text bash >}} $ kubectl exec "$(kubectl -n curl-allow get pod -l app=curl -o jsonpath={.items..metadata.name})" -c curl -n curl-allow -- curl http://httpbin.default:8000/ip -sS -o /dev/null -w "%{http\_code}\n" 200 {{< /text >}} ## Migrate trust domain without trust domain aliases 1. Install Istio with a new trust domain. {{< text bash >}} $ istioctl install --set profile=demo --set meshConfig.trustDomain=new-td {{< /text >}} 1. Redeploy istiod to pick up the trust domain changes. {{< text bash >}} $ kubectl rollout restart deployment -n istio-system istiod {{< /text >}} The Istio mesh is now running with a new trust domain, `new-td`. 1. Redeploy the `httpbin` and `curl` applications to pick up changes from the new Istio control plane. {{< text bash >}} $ kubectl delete pod --all {{< /text >}} {{< text bash >}} $ kubectl delete pod --all -n curl-allow {{< /text >}} 1. Verify that requests to `httpbin` from both `curl` in the `default` namespace and the `curl-allow` namespace are denied. {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -o jsonpath={.items..metadata.name})" -c curl -- curl http://httpbin.default:8000/ip -sS -o /dev/null -w "%{http\_code}\n" 403 {{< /text >}} {{< text bash >}} $ kubectl exec "$(kubectl -n curl-allow get pod -l app=curl -o jsonpath={.items..metadata.name})" -c curl -n curl-allow -- curl http://httpbin.default:8000/ip -sS -o /dev/null -w "%{http\_code}\n" 403 {{< /text >}} This is because we specified an authorization policy that denies all requests to `httpbin`, except the ones with the `old-td/ns/curl-allow/sa/curl`
identity, which is the old identity of the `curl` application in the `curl-allow` namespace. When we migrated to a new trust domain above, i.e. `new-td`, the identity of this `curl` application is now `new-td/ns/curl-allow/sa/curl`, which is not the same as `old-td/ns/curl-allow/sa/curl`. Therefore, requests from the `curl` application in the `curl-allow` namespace to `httpbin` that were allowed before are now denied. Prior to Istio 1.4, the only way to make this work was to change the authorization policy manually. In Istio 1.4, we introduce an easier way, as shown below. ## Migrate trust domain with trust domain aliases 1. Install Istio with a new trust domain and trust domain aliases. {{< text bash >}} $ cat <<EOF > ./td-installation.yaml apiVersion: install.istio.io/v1alpha1 kind: IstioOperator spec: meshConfig: trustDomain: new-td trustDomainAliases: - old-td EOF $ istioctl install --set profile=demo -f td-installation.yaml -y {{< /text >}} 1. Without changing the authorization policy, verify that requests to `httpbin` from: \* `curl` in the `default` namespace are denied. {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -o jsonpath={.items..metadata.name})" -c curl -- curl http://httpbin.default:8000/ip -sS -o /dev/null -w "%{http\_code}\n" 403 {{< /text >}} \* `curl` in the `curl-allow` namespace are allowed. {{< text bash >}} $ kubectl exec "$(kubectl -n curl-allow get pod -l app=curl -o jsonpath={.items..metadata.name})" -c curl -n curl-allow -- curl http://httpbin.default:8000/ip -sS -o /dev/null -w "%{http\_code}\n" 200 {{< /text >}} ## Best practices Starting from Istio 1.4, when writing authorization policy, you should consider using the value `cluster.local` as the trust domain part in the policy. For example, instead of `old-td/ns/curl-allow/sa/curl`, it should be `cluster.local/ns/curl-allow/sa/curl`. Notice that in this case, `cluster.local` is not the Istio mesh trust domain (the trust domain is still `old-td`).
However, in authorization policy, `cluster.local` is a pointer that points to the current trust domain, i.e. `old-td` (and later `new-td`), as well as its aliases. By using `cluster.local` in the authorization policy, when you migrate to a new trust domain, Istio will detect this and treat the new trust domain as the old trust domain without you having to include the aliases. ## Clean up {{< text bash >}} $ kubectl delete authorizationpolicy service-httpbin.default.svc.cluster.local $ kubectl delete deploy httpbin; kubectl delete service httpbin; kubectl delete serviceaccount httpbin $ kubectl delete deploy curl; kubectl delete service curl; kubectl delete serviceaccount curl $ istioctl uninstall --purge -y $ kubectl delete namespace curl-allow istio-system $ rm ./td-installation.yaml {{< /text >}} | https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/security/authorization/authz-td-migration/index.md | master | istio | [
... | 0.48007 |
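The "Best practices" advice in the chunk above can be sketched as an authorization policy. This is a sketch rather than the exact manifest from the task (the heredoc bodies are elided in this dump): the policy name matches the one deleted in the clean-up step, while the `security.istio.io/v1` API version and the `httpbin` selector are assumptions.

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: service-httpbin.default.svc.cluster.local
  namespace: default
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        # "cluster.local" resolves to the current trust domain
        # (old-td, later new-td) and its trustDomainAliases.
        principals: ["cluster.local/ns/curl-allow/sa/curl"]
```

Because `cluster.local` is resolved at evaluation time, the same policy keeps working after `trustDomain` is migrated from `old-td` to `new-td`.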
This task shows you how to set up an Istio authorization policy to enforce access based on a JSON Web Token (JWT). An Istio authorization policy supports both string-typed and list-of-string-typed JWT claims. ## Before you begin Before you begin this task, do the following: * Complete the [Istio end user authentication task](/docs/tasks/security/authentication/authn-policy/#end-user-authentication). * Read the [Istio authorization concepts](/docs/concepts/security/#authorization). * Install Istio using the [Istio installation guide](/docs/setup/install/istioctl/). * Deploy two workloads: `httpbin` and `curl`. Deploy these in one namespace, for example `foo`. Both workloads run with an Envoy proxy in front of each. Deploy the example namespace and workloads using these commands: {{< text bash >}} $ kubectl create ns foo $ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n foo $ kubectl apply -f <(istioctl kube-inject -f @samples/curl/curl.yaml@) -n foo {{< /text >}} * Verify that `curl` successfully communicates with `httpbin` using this command: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl http://httpbin.foo:8000/ip -sS -o /dev/null -w "%{http_code}\n" 200 {{< /text >}} {{< warning >}} If you don't see the expected output, retry after a few seconds. Caching and propagation can cause a delay. {{< /warning >}} ## Allow requests with valid JWT and list-typed claims 1. The following command creates the `jwt-example` request authentication policy for the `httpbin` workload in the `foo` namespace. This policy for the `httpbin` workload accepts a JWT issued by `testing@secure.istio.io`: {{< text bash >}} $ kubectl apply -f - <}}/security/tools/jwt/samples/jwks.json" EOF {{< /text >}} 1.
Verify that a request with an invalid JWT is denied: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/headers" -sS -o /dev/null -H "Authorization: Bearer invalidToken" -w "%{http\_code}\n" 401 {{< /text >}} 1. Verify that a request without a JWT is allowed because there is no authorization policy: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/headers" -sS -o /dev/null -w "%{http\_code}\n" 200 {{< /text >}} 1. The following command creates the `require-jwt` authorization policy for the `httpbin` workload in the `foo` namespace. The policy requires all requests to the `httpbin` workload to have a valid JWT with `requestPrincipal` set to `testing@secure.istio.io/testing@secure.istio.io`. Istio constructs the `requestPrincipal` by combining the `iss` and `sub` of the JWT token with a `/` separator as shown: {{< text syntax="bash" expandlinks="false" >}} $ kubectl apply -f - <}} 1. Get the JWT that sets the `iss` and `sub` keys to the same value, `testing@secure.istio.io`. This causes Istio to generate the attribute `requestPrincipal` with the value `testing@secure.istio.io/testing@secure.istio.io`: {{< text syntax="bash" expandlinks="false" >}} $ TOKEN=$(curl {{< github\_file >}}/security/tools/jwt/samples/demo.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode {"exp":4685989700,"foo":"bar","iat":1532389700,"iss":"testing@secure.istio.io","sub":"testing@secure.istio.io"} {{< /text >}} 1. Verify that a request with a valid JWT is allowed: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/headers" -sS -o /dev/null -H "Authorization: Bearer $TOKEN" -w "%{http\_code}\n" 200 {{< /text >}} 1. 
Verify that a request without a JWT is denied: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/headers" -sS -o /dev/null -w "%{http\_code}\n" 403 {{< /text >}} 1. The following command updates the `require-jwt` authorization policy to also require the JWT to have a claim named `groups` containing the value `group1`: {{< text syntax="bash" expandlinks="false" >}} $ kubectl apply -f - <}} {{< warning >}} Don't | https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/security/authorization/authz-jwt/index.md | master | istio | [
... | 0.457992 |
curl "http://httpbin.foo:8000/headers" -sS -o /dev/null -w "%{http_code}\n" 403 {{< /text >}} 1. The following command updates the `require-jwt` authorization policy to also require the JWT to have a claim named `groups` containing the value `group1`: {{< text syntax="bash" expandlinks="false" >}} $ kubectl apply -f - <}} {{< warning >}} Don't include quotes in the `request.auth.claims` field unless the claim itself has quotes in it. {{< /warning >}} 1. Get the JWT that sets the `groups` claim to a list of strings: `group1` and `group2`: {{< text syntax="bash" expandlinks="false" >}} $ TOKEN_GROUP=$(curl {{< github_file >}}/security/tools/jwt/samples/groups-scope.jwt -s) && echo "$TOKEN_GROUP" | cut -d '.' -f2 - | base64 --decode {"exp":3537391104,"groups":["group1","group2"],"iat":1537391104,"iss":"testing@secure.istio.io","scope":["scope1","scope2"],"sub":"testing@secure.istio.io"} {{< /text >}} 1. Verify that a request with a JWT that includes `group1` in the `groups` claim is allowed: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/headers" -sS -o /dev/null -H "Authorization: Bearer $TOKEN_GROUP" -w "%{http_code}\n" 200 {{< /text >}} 1. Verify that a request with a JWT that doesn't have the `groups` claim is rejected: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/headers" -sS -o /dev/null -H "Authorization: Bearer $TOKEN" -w "%{http_code}\n" 403 {{< /text >}} ## Clean up Remove the namespace `foo`: {{< text bash >}} $ kubectl delete namespace foo {{< /text >}} | https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/security/authorization/authz-jwt/index.md | master | istio | [
-0.063... | 0.073359 |
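The `require-jwt` policy described in the chunks above (its heredoc body is elided in this dump) would look roughly like the following sketch, based on the surrounding description; the `security.istio.io/v1` API version is an assumption.

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - from:
    - source:
        # requestPrincipal is the JWT's iss and sub joined with "/",
        # as described in the task text.
        requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"]
    when:
    # List-typed claim: the condition matches if the list contains "group1".
    - key: request.auth.claims[groups]
      values: ["group1"]
```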
This task shows you how to set up an Istio authorization policy with the `ALLOW` action for HTTP traffic in an Istio mesh. ## Before you begin Before you begin this task, do the following: * Read the [Istio authorization concepts](/docs/concepts/security/#authorization). * Follow the [Istio installation guide](/docs/setup/install/istioctl/) to install Istio with mutual TLS enabled. * Deploy the [Bookinfo](/docs/examples/bookinfo/#deploying-the-application) sample application. After deploying the Bookinfo application, go to the Bookinfo product page at `http://$GATEWAY_URL/productpage`. On the product page, you can see the following sections: * **Book Details** in the middle of the page, which includes: book type, number of pages, publisher, etc. * **Book Reviews** at the bottom of the page. When you refresh the page, the app shows different versions of reviews in the product page. The app presents the reviews in a round robin style: red stars, black stars, or no stars. {{< tip >}} If you don't see the expected output in the browser as you follow the task, retry in a few more seconds because some delay is possible due to caching and other propagation overhead. {{< /tip >}} {{< warning >}} This task requires mutual TLS enabled because the following examples use principal and namespace in the policies. {{< /warning >}} ## Configure access control for workloads using HTTP traffic Using Istio, you can easily set up access control for {{< gloss "workload" >}}workloads{{< /gloss >}} in your mesh. This task shows you how to set up access control using Istio authorization. First, you configure a simple `allow-nothing` policy that rejects all requests to the workload, and then grant more access to the workload gradually and incrementally. 1. Run the following command to create an `allow-nothing` policy in the `default` namespace. The policy doesn't have a `selector` field, so it applies to every workload in the `default` namespace. The `spec:` field of the policy has the empty value `{}`. That value means that no traffic is permitted, effectively denying all requests. {{< text bash >}} $ kubectl apply -f - <}} Point your browser at the Bookinfo `productpage` (`http://$GATEWAY_URL/productpage`). You should see `"RBAC: access denied"`. The error shows that the configured `allow-nothing` policy is working as intended, and Istio doesn't have any rules that allow any access to workloads in the mesh. 1. Run the following command to create a `productpage-viewer` policy to allow access with the `GET` method to the `productpage` workload. The policy does not set the `from` field in the `rules`, which means all sources are allowed, effectively allowing all users and workloads: {{< text bash >}} $ kubectl apply -f - <}} Point your browser at the Bookinfo `productpage` (`http://$GATEWAY_URL/productpage`). Now you should see the "Bookinfo Sample" page. However, you can see the following errors on the page: * `Error fetching product details` * `Error fetching product reviews` on the page. These errors are expected because we have not granted the `productpage` workload access to the `details` and `reviews` workloads. Next, you need to configure a policy to grant access to those workloads. 1. Run the following command to create the `details-viewer` policy to allow the `productpage` workload, which issues requests using the `cluster.local/ns/default/sa/bookinfo-productpage` service account, to access the `details` workload through `GET` methods: {{< text bash >}} $ kubectl apply -f - <}} 1. Run the following command to create a `reviews-viewer` policy to allow the `productpage` workload, which issues requests using the `cluster.local/ns/default/sa/bookinfo-productpage` service account, to access the `reviews` workload through `GET` methods: {{< text bash >}} $ kubectl apply -f - <}} Point your browser at the Bookinfo `productpage` (`http://$GATEWAY_URL/productpage`).
Now, you should see the "Bookinfo Sample" page with "Book Details" on the | https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/security/authorization/authz-http/index.md | master | istio | [
... | 0.415839 |
the `productpage` workload, which issues requests using the `cluster.local/ns/default/sa/bookinfo-productpage` service account, to access the `reviews` workload through `GET` methods: {{< text bash >}} $ kubectl apply -f - <}} Point your browser at the Bookinfo `productpage` (`http://$GATEWAY\_URL/productpage`). Now, you should see the "Bookinfo Sample" page with "Book Details" on the lower left part, and "Book Reviews" on the lower right part. However, in the "Book Reviews" section, there is an error `Ratings service currently unavailable`. This is because the `reviews` workload doesn't have permission to access the `ratings` workload. To fix this issue, you need to grant the `reviews` workload access to the `ratings` workload. Next, we configure a policy to grant the `reviews` workload that access. 1. Run the following command to create the `ratings-viewer` policy to allow the `reviews` workload, which issues requests using the `cluster.local/ns/default/sa/bookinfo-reviews` service account, to access the `ratings` workload through `GET` methods: {{< text bash >}} $ kubectl apply -f - <}} Point your browser at the Bookinfo `productpage` (`http://$GATEWAY\_URL/productpage`). You should see the "black" and "red" ratings in the "Book Reviews" section. \*\*Congratulations!\*\* You successfully applied authorization policy to enforce access control for workloads using HTTP traffic. ## Clean up Remove all authorization policies from your configuration: {{< text bash >}} $ kubectl delete authorizationpolicy.security.istio.io/allow-nothing $ kubectl delete authorizationpolicy.security.istio.io/productpage-viewer $ kubectl delete authorizationpolicy.security.istio.io/details-viewer $ kubectl delete authorizationpolicy.security.istio.io/reviews-viewer $ kubectl delete authorizationpolicy.security.istio.io/ratings-viewer {{< /text >}} | https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/security/authorization/authz-http/index.md | master | istio | [
-0.0409... | 0.042491 |
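The first two policies described in the ALLOW-action task above (heredoc bodies elided in this dump) can be sketched as follows, based on the surrounding text; the `security.istio.io/v1` API version is an assumption.

```yaml
# allow-nothing: no selector, empty spec -> denies all requests
# to every workload in the default namespace.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-nothing
  namespace: default
spec: {}
---
# productpage-viewer: no "from" field -> all sources may GET productpage.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: productpage-viewer
  namespace: default
spec:
  selector:
    matchLabels:
      app: productpage
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]
```

The later `details-viewer`, `reviews-viewer`, and `ratings-viewer` policies follow the same shape, adding a `from.source.principals` entry for the calling service account.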
This task shows you how to set up an Istio authorization policy with the `DENY` action to explicitly deny traffic in an Istio mesh. This is different from the `ALLOW` action because the `DENY` action has higher priority and will not be bypassed by any `ALLOW` actions. ## Before you begin Before you begin this task, do the following: * Read the [Istio authorization concepts](/docs/concepts/security/#authorization). * Follow the [Istio installation guide](/docs/setup/install/istioctl/) to install Istio. * Deploy workloads: This task uses two workloads, `httpbin` and `curl`, deployed in one namespace, `foo`. Both workloads run with an Envoy proxy in front of each. Deploy the example namespace and workloads with the following command: {{< text bash >}} $ kubectl create ns foo $ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n foo $ kubectl apply -f <(istioctl kube-inject -f @samples/curl/curl.yaml@) -n foo {{< /text >}} * Verify that `curl` talks to `httpbin` with the following command: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl http://httpbin.foo:8000/ip -sS -o /dev/null -w "%{http_code}\n" 200 {{< /text >}} {{< warning >}} If you don't see the expected output as you follow the task, retry after a few seconds. Caching and propagation overhead can cause some delay. {{< /warning >}} ## Explicitly deny a request 1. The following command creates the `deny-method-get` authorization policy for the `httpbin` workload in the `foo` namespace. The policy sets the `action` to `DENY` to deny requests that satisfy the conditions set in the `rules` section. This type of policy is better known as a deny policy. In this case, the policy denies requests if their method is `GET`. {{< text bash >}} $ kubectl apply -f - <}} 1.
Verify that `GET` requests are denied: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/get" -X GET -sS -o /dev/null -w "%{http\_code}\n" 403 {{< /text >}} 1. Verify that `POST` requests are allowed: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/post" -X POST -sS -o /dev/null -w "%{http\_code}\n" 200 {{< /text >}} 1. Update the `deny-method-get` authorization policy to deny `GET` requests only if the `x-token` value of the HTTP header is not `admin`. The following example policy sets the value of the `notValues` field to `["admin"]` to deny requests with a header value that is not `admin`: {{< text bash >}} $ kubectl apply -f - <}} 1. Verify that `GET` requests with the HTTP header `x-token: admin` are allowed: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/get" -X GET -H "x-token: admin" -sS -o /dev/null -w "%{http\_code}\n" 200 {{< /text >}} 1. Verify that GET requests with the HTTP header `x-token: guest` are denied: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/get" -X GET -H "x-token: guest" -sS -o /dev/null -w "%{http\_code}\n" 403 {{< /text >}} 1. The following command creates the `allow-path-ip` authorization policy to allow requests at the `/ip` path to the `httpbin` workload. This authorization policy sets the `action` field to `ALLOW`. This type of policy is better known as an allow policy. {{< text bash >}} $ kubectl apply -f - <}} 1. Verify that `GET` requests with the HTTP header `x-token: guest` at path `/ip` are denied by the `deny-method-get` policy. 
Deny policies takes | https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/security/authorization/authz-deny/index.md | master | istio | [
0.0589... | 0.496123 |
sets the `action` field to `ALLOW`. This type of policy is better known as an allow policy. {{< text bash >}} $ kubectl apply -f - <}} 1. Verify that `GET` requests with the HTTP header `x-token: guest` at path `/ip` are denied by the `deny-method-get` policy. Deny policies take precedence over allow policies: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/ip" -X GET -H "x-token: guest" -s -o /dev/null -w "%{http_code}\n" 403 {{< /text >}} 1. Verify that `GET` requests with the HTTP header `x-token: admin` at path `/ip` are allowed by the `allow-path-ip` policy: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/ip" -X GET -H "x-token: admin" -s -o /dev/null -w "%{http_code}\n" 200 {{< /text >}} 1. Verify that `GET` requests with the HTTP header `x-token: admin` at path `/get` are denied because they don't match the `allow-path-ip` policy: {{< text bash >}} $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/get" -X GET -H "x-token: admin" -s -o /dev/null -w "%{http_code}\n" 403 {{< /text >}} ## Clean up Remove the namespace `foo` from your configuration: {{< text bash >}} $ kubectl delete namespace foo {{< /text >}} | https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/security/authorization/authz-deny/index.md | master | istio | [
-0.04... | 0.102532 |
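The two policies exercised in the DENY-action task above (heredoc bodies elided in this dump) can be sketched as follows from the surrounding description; the `security.istio.io/v1` API version is an assumption.

```yaml
# deny-method-get: deny GET requests unless the x-token header is "admin".
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: deny-method-get
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  action: DENY
  rules:
  - to:
    - operation:
        methods: ["GET"]
    when:
    # notValues: ["admin"] -> the rule matches (and denies)
    # only when x-token is NOT "admin".
    - key: request.headers[x-token]
      notValues: ["admin"]
---
# allow-path-ip: allow requests to the /ip path only.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: allow-path-ip
  namespace: foo
spec:
  selector:
    matchLabels:
      app: httpbin
  action: ALLOW
  rules:
  - to:
    - operation:
        paths: ["/ip"]
```

Because deny policies are evaluated before allow policies, a `GET /ip` with `x-token: guest` is still rejected even though `allow-path-ip` matches it.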
{{< boilerplate alpha >}} This task shows you how to route requests based on JWT claims on an Istio ingress gateway using request authentication and a virtual service. Note: this feature only supports the Istio ingress gateway and requires the use of both request authentication and a virtual service to properly validate and route based on JWT claims. ## Before you begin * Understand Istio [authentication policy](/docs/concepts/security/#authentication-policies) and [virtual service](/docs/concepts/traffic-management/#virtual-services) concepts. * Install Istio using the [Istio installation guide](/docs/setup/install/istioctl/). * Deploy a workload, `httpbin`, in a namespace, for example `foo`, and expose it through the Istio ingress gateway with this command: {{< text bash >}} $ kubectl create ns foo $ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n foo $ kubectl apply -f @samples/httpbin/httpbin-gateway.yaml@ -n foo {{< /text >}} * Follow the instructions in [Determining the ingress IP and ports](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports) to define the `INGRESS_HOST` and `INGRESS_PORT` environment variables. * Verify that the `httpbin` workload and ingress gateway are working as expected using this command: {{< text bash >}} $ curl "$INGRESS_HOST:$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n" 200 {{< /text >}} {{< warning >}} If you don't see the expected output, retry after a few seconds. Caching and propagation overhead can cause a delay. {{< /warning >}} ## Configuring ingress routing based on JWT claims The Istio ingress gateway supports routing based on authenticated JWT, which is useful for routing based on end-user identity and is more secure than using unauthenticated HTTP attributes (e.g. path or header). 1.
In order to route based on JWT claims, first create the request authentication to enable JWT validation: {{< text bash >}} $ kubectl apply -f - <}}/security/tools/jwt/samples/jwks.json" EOF {{< /text >}} The request authentication enables JWT validation on the Istio ingress gateway so that the validated JWT claims can later be used in the virtual service for routing purposes. The request authentication is applied on the ingress gateway because JWT claim based routing is only supported on ingress gateways. Note: the request authentication will only check the JWT if it exists in the request. To make the JWT required and reject the request if it does not include one, apply the authorization policy as specified in the [task](/docs/tasks/security/authentication/authn-policy#require-a-valid-token). 1. Update the virtual service to route based on validated JWT claims: {{< text bash >}} $ kubectl apply -f - <}} The virtual service uses the reserved header `"@request.auth.claims.groups"` to match the JWT claim `groups`. The prefix `@` denotes that it matches the metadata derived from JWT validation, not HTTP headers. Claims of type string, list of string, and nested claims are supported. Use `.` or `[]` as a separator for nested claim names. For example, `"@request.auth.claims.name.givenName"` and `"@request.auth.claims[name][givenName]"` both match the nested claim `givenName` under `name`; the two forms are equivalent. When the claim name contains `.`, only `[]` can be used as a separator. ## Validating ingress routing based on JWT claims 1. Validate that the ingress gateway returns HTTP code 404 without a JWT: {{< text bash >}} $ curl -s -I "http://$INGRESS_HOST:$INGRESS_PORT/headers" HTTP/1.1 404 Not Found ... {{< /text >}} You can also create an authorization policy to explicitly reject the request with HTTP code 403 when the JWT is missing. 1.
Validate that the ingress gateway returns HTTP code 401 with an invalid JWT: {{< text bash >}} $ curl -s -I "http://$INGRESS_HOST:$INGRESS_PORT/headers" -H "Authorization: Bearer some.invalid.token" HTTP/1.1 401 Unauthorized ... {{< /text >}} The 401 is returned by the request authentication because the JWT failed validation. 1. Validate that the ingress gateway routes the request with a valid JWT that includes the claim `groups: group1`: | https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/security/authentication/jwt-route/index.md | master | istio | [
0.01... | 0.527521 |
>}} $ curl -s -I "http://$INGRESS_HOST:$INGRESS_PORT/headers" -H "Authorization: Bearer some.invalid.token" HTTP/1.1 401 Unauthorized ... {{< /text >}} The 401 is returned by the request authentication because the JWT failed validation. 1. Validate that the ingress gateway routes the request with a valid JWT that includes the claim `groups: group1`: {{< text syntax="bash" expandlinks="false" >}} $ TOKEN_GROUP=$(curl {{< github_file >}}/security/tools/jwt/samples/groups-scope.jwt -s) && echo "$TOKEN_GROUP" | cut -d '.' -f2 - | base64 --decode {"exp":3537391104,"groups":["group1","group2"],"iat":1537391104,"iss":"testing@secure.istio.io","scope":["scope1","scope2"],"sub":"testing@secure.istio.io"} {{< /text >}} {{< text bash >}} $ curl -s -I "http://$INGRESS_HOST:$INGRESS_PORT/headers" -H "Authorization: Bearer $TOKEN_GROUP" HTTP/1.1 200 OK ... {{< /text >}} 1. Validate that the ingress gateway returns HTTP code 404 with a valid JWT that does not include the claim `groups: group1`: {{< text syntax="bash" >}} $ TOKEN_NO_GROUP=$(curl {{< github_file >}}/security/tools/jwt/samples/demo.jwt -s) && echo "$TOKEN_NO_GROUP" | cut -d '.' -f2 - | base64 --decode {"exp":4685989700,"foo":"bar","iat":1532389700,"iss":"testing@secure.istio.io","sub":"testing@secure.istio.io"} {{< /text >}} {{< text bash >}} $ curl -s -I "http://$INGRESS_HOST:$INGRESS_PORT/headers" -H "Authorization: Bearer $TOKEN_NO_GROUP" HTTP/1.1 404 Not Found ... {{< /text >}} ## Cleanup * Remove the namespace `foo`: {{< text bash >}} $ kubectl delete namespace foo {{< /text >}} * Remove the request authentication: {{< text bash >}} $ kubectl delete requestauthentication ingress-jwt -n istio-system {{< /text >}} | https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/security/authentication/jwt-route/index.md | master | istio | [
-0.012... | 0.200042 |
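The request authentication and virtual service from the JWT-routing task above have their heredoc bodies elided in this dump. The following is a rough sketch built from the surrounding text: the `ingress-jwt` name and issuer come from the task, while the API versions, the ingress gateway selector, the `httpbin-gateway` reference, and the route details are assumptions.

```yaml
# Enable JWT validation on the ingress gateway (applied in istio-system,
# where the cleanup step deletes it from).
apiVersion: security.istio.io/v1
kind: RequestAuthentication
metadata:
  name: ingress-jwt
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  jwtRules:
  - issuer: "testing@secure.istio.io"
    jwksUri: "{{< github_file >}}/security/tools/jwt/samples/jwks.json"
---
# Route only requests whose validated JWT has groups containing "group1".
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: httpbin
  namespace: foo
spec:
  hosts: ["*"]
  gateways: ["httpbin-gateway"]
  http:
  - match:
    - uri:
        prefix: /headers
      headers:
        # "@" prefix: match JWT-derived metadata, not an HTTP header.
        "@request.auth.claims.groups":
          exact: group1
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
```

Requests without a matching `groups` claim fall through with no matching route, which is why the task sees a 404 rather than a 403.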
This task shows how to ensure your workloads only communicate using mutual TLS as they are migrated to Istio. Istio automatically configures workload sidecars to use [mutual TLS](/docs/tasks/security/authentication/authn-policy/#auto-mutual-tls) when calling other workloads. By default, Istio configures the destination workloads using `PERMISSIVE` mode. When `PERMISSIVE` mode is enabled, a service can accept both plaintext and mutual TLS traffic. In order to only allow mutual TLS traffic, the configuration needs to be changed to `STRICT` mode. You can use the [Grafana dashboard](/docs/tasks/observability/metrics/using-istio-dashboard/) to check which workloads are still sending plaintext traffic to the workloads in `PERMISSIVE` mode and choose to lock them down once the migration is done. ## Before you begin \* Understand Istio [authentication policy](/docs/concepts/security/#authentication-policies) and related [mutual TLS authentication](/docs/concepts/security/#mutual-tls-authentication) concepts. \* Read the [authentication policy task](/docs/tasks/security/authentication/authn-policy) to learn how to configure authentication policy. \* Have a Kubernetes cluster with Istio installed, without global mutual TLS enabled (for example, use the `default` configuration profile as described in [installation steps](/docs/setup/getting-started)). In this task, you can try out the migration process by creating sample workloads and modifying the policies to enforce STRICT mutual TLS between the workloads. 
## Set up the cluster \* Create two namespaces, `foo` and `bar`, and deploy [httpbin]({{< github\_tree >}}/samples/httpbin) and [curl]({{< github\_tree >}}/samples/curl) with sidecars on both of them: {{< text bash >}} $ kubectl create ns foo $ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n foo $ kubectl apply -f <(istioctl kube-inject -f @samples/curl/curl.yaml@) -n foo $ kubectl create ns bar $ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n bar $ kubectl apply -f <(istioctl kube-inject -f @samples/curl/curl.yaml@) -n bar {{< /text >}} \* Create another namespace, `legacy`, and deploy [curl]({{< github\_tree >}}/samples/curl) without a sidecar: {{< text bash >}} $ kubectl create ns legacy $ kubectl apply -f @samples/curl/curl.yaml@ -n legacy {{< /text >}} \* Verify the setup by sending http requests (using curl) from the curl pods, in namespaces `foo`, `bar` and `legacy`, to `httpbin.foo` and `httpbin.bar`. All requests should succeed with return code 200. {{< text bash >}} $ for from in "foo" "bar" "legacy"; do for to in "foo" "bar"; do kubectl exec "$(kubectl get pod -l app=curl -n ${from} -o jsonpath={.items..metadata.name})" -c curl -n ${from} -- curl http://httpbin.${to}:8000/ip -s -o /dev/null -w "curl.${from} to httpbin.${to}: %{http\_code}\n"; done; done curl.foo to httpbin.foo: 200 curl.foo to httpbin.bar: 200 curl.bar to httpbin.foo: 200 curl.bar to httpbin.bar: 200 curl.legacy to httpbin.foo: 200 curl.legacy to httpbin.bar: 200 {{< /text >}} {{< tip >}} If any of the curl commands fail, ensure that there are no existing authentication policies or destination rules that might interfere with requests to the httpbin service. 
{{< text bash >}} $ kubectl get peerauthentication --all-namespaces No resources found {{< /text >}} {{< text bash >}} $ kubectl get destinationrule --all-namespaces No resources found {{< /text >}} {{< /tip >}} ## Lock down to mutual TLS by namespace After migrating all clients to Istio and injecting the Envoy sidecar, you can lock down workloads in the `foo` namespace to only accept mutual TLS traffic. {{< text bash >}} $ kubectl apply -n foo -f - <}} Now, you should see the request from `curl.legacy` to `httpbin.foo` failing. {{< text bash >}} $ for from in "foo" "bar" "legacy"; do for to in "foo" "bar"; do kubectl exec "$(kubectl get pod -l app=curl -n ${from} -o jsonpath={.items..metadata.name})" -c curl -n ${from} -- curl http://httpbin.${to}:8000/ip -s -o /dev/null -w "curl.${from} to httpbin.${to}: %{http\_code}\n"; done; done curl.foo to httpbin.foo: 200 curl.foo to httpbin.bar: 200 curl.bar to httpbin.foo: 200 curl.bar to httpbin.bar: 200 curl.legacy to httpbin.foo: 000 command terminated with exit code 56 curl.legacy to | https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/security/authentication/mtls-migration/index.md | master | istio | [
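The nested loop used above can be read as a reachability matrix over (client namespace, server namespace) pairs. The sketch below factors it into a helper, with the `kubectl exec`/`curl` call replaced by a stub (`probe` is a made-up name, not an Istio or kubectl command) so the loop structure can be run and inspected on its own:

```shell
# Stand-in for the per-pair check: the real version runs curl inside the
# client pod via kubectl exec; this stub only prints the pair's label.
probe() {
    # $1 = client namespace, $2 = server namespace
    printf 'curl.%s to httpbin.%s\n' "$1" "$2"
}

# Same iteration order as the verification loop in the task.
for from in foo bar legacy; do
    for to in foo bar; do
        probe "$from" "$to"
    done
done
```

Swapping the stub body back for the `kubectl exec ... curl ... -w "%{http_code}"` call reproduces the task's one-liner.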
If you installed Istio with `values.global.proxy.privileged=true`, you can use `tcpdump` to verify whether traffic is encrypted.

{{< text bash >}}
$ kubectl exec -nfoo "$(kubectl get pod -nfoo -lapp=httpbin -ojsonpath={.items..metadata.name})" -c istio-proxy -- sudo tcpdump dst port 80 -A
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
{{< /text >}}

You will see plain text and encrypted text in the output when requests are sent from `curl.legacy` and `curl.foo` respectively.

If you can't migrate all your services to Istio (i.e., inject an Envoy sidecar in all of them), you will need to continue to use `PERMISSIVE` mode. However, when configured with `PERMISSIVE` mode, no authentication or authorization checks will be performed for plaintext traffic by default. We recommend you use [Istio Authorization](/docs/tasks/security/authorization/authz-http/) to configure different paths with different authorization policies.

## Lock down mutual TLS for the entire mesh

You can lock down workloads in all namespaces to only accept mutual TLS traffic by putting the policy in the system namespace of your Istio installation.

{{< text bash >}}
$ kubectl apply -n istio-system -f - <<EOF
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "default"
spec:
  mtls:
    mode: STRICT
EOF
{{< /text >}}

Now, both the `foo` and `bar` namespaces enforce mutual-TLS-only traffic, so you should see requests from `curl.legacy` failing for both.
{{< text bash >}}
$ for from in "foo" "bar" "legacy"; do for to in "foo" "bar"; do kubectl exec "$(kubectl get pod -l app=curl -n ${from} -o jsonpath={.items..metadata.name})" -c curl -n ${from} -- curl http://httpbin.${to}:8000/ip -s -o /dev/null -w "curl.${from} to httpbin.${to}: %{http_code}\n"; done; done
{{< /text >}}

## Clean up the example

1. Remove the mesh-wide and namespace-wide authentication policies.

    {{< text bash >}}
    $ kubectl delete peerauthentication -n foo default
    $ kubectl delete peerauthentication -n istio-system default
    {{< /text >}}

1. Remove the test namespaces.

    {{< text bash >}}
    $ kubectl delete ns foo bar legacy
    Namespaces foo bar legacy deleted.
    {{< /text >}}
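The `tcpdump` check above works because TLS traffic is easy to recognize at the byte level: every TLS record begins with a content-type byte and a protocol version, so a handshake starts `16 03 0x` and encrypted application data starts `17 03 03`, while plaintext HTTP starts with readable ASCII such as `GET / HTTP/1.1`. This offline sketch (not Istio-specific; the byte values are standard TLS framing) shows both fingerprints:

```shell
# Hex-dump the first bytes of a TLS handshake record vs. a plaintext request,
# the two patterns you would tell apart in the tcpdump output above.
tls_prefix=$(printf '\x16\x03\x01' | od -An -tx1 | tr -s ' ')
http_prefix=$(printf 'GET /ip HTTP/1.1' | head -c 3)

echo "tls record prefix:$tls_prefix"   # 16 03 01 -> TLS handshake
echo "plaintext prefix: $http_prefix"  # readable ASCII -> no TLS
```

In a capture from the `istio-proxy` container, the encrypted flows show the former and the `curl.legacy` traffic the latter.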
This task covers the primary activities you might need to perform when enabling, configuring, and using Istio authentication policies. Find out more about the underlying concepts in the [authentication overview](/docs/concepts/security/#authentication).

## Before you begin

* Understand Istio [authentication policy](/docs/concepts/security/#authentication-policies) and related [mutual TLS authentication](/docs/concepts/security/#mutual-tls-authentication) concepts.

* Install Istio on a Kubernetes cluster with the `default` configuration profile, as described in the [installation steps](/docs/setup/getting-started).

    {{< text bash >}}
    $ istioctl install --set profile=default
    {{< /text >}}

### Setup

Our examples use two namespaces, `foo` and `bar`, with two services, `httpbin` and `curl`, both running with an Envoy proxy. We also use second instances of `httpbin` and `curl` running without the sidecar in the `legacy` namespace. If you'd like to use the same examples when trying the tasks, run the following:

{{< text bash >}}
$ kubectl create ns foo
$ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n foo
$ kubectl apply -f <(istioctl kube-inject -f @samples/curl/curl.yaml@) -n foo
$ kubectl create ns bar
$ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n bar
$ kubectl apply -f <(istioctl kube-inject -f @samples/curl/curl.yaml@) -n bar
$ kubectl create ns legacy
$ kubectl apply -f @samples/httpbin/httpbin.yaml@ -n legacy
$ kubectl apply -f @samples/curl/curl.yaml@ -n legacy
{{< /text >}}

You can verify setup by sending an HTTP request with `curl` from any `curl` pod in the namespace `foo`, `bar` or `legacy` to either `httpbin.foo`, `httpbin.bar` or `httpbin.legacy`. All requests should succeed with HTTP code 200.
For example, here is a command to check `curl.bar` to `httpbin.foo` reachability:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n bar -o jsonpath={.items..metadata.name})" -c curl -n bar -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "%{http_code}\n"
200
{{< /text >}}

This one-liner conveniently iterates through all reachability combinations:

{{< text bash >}}
$ for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec "$(kubectl get pod -l app=curl -n ${from} -o jsonpath={.items..metadata.name})" -c curl -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "curl.${from} to httpbin.${to}: %{http_code}\n"; done; done
curl.foo to httpbin.foo: 200
curl.foo to httpbin.bar: 200
curl.foo to httpbin.legacy: 200
curl.bar to httpbin.foo: 200
curl.bar to httpbin.bar: 200
curl.bar to httpbin.legacy: 200
curl.legacy to httpbin.foo: 200
curl.legacy to httpbin.bar: 200
curl.legacy to httpbin.legacy: 200
{{< /text >}}

Verify there is no peer authentication policy in the system with the following command:

{{< text bash >}}
$ kubectl get peerauthentication --all-namespaces
No resources found
{{< /text >}}

Last but not least, verify that there are no destination rules that apply to the example services. You can do this by checking the `host:` value of existing destination rules and making sure they do not match. For example:

{{< text bash >}}
$ kubectl get destinationrules.networking.istio.io --all-namespaces -o yaml | grep "host:"
{{< /text >}}

{{< tip >}}
Depending on the version of Istio, you may see destination rules for hosts other than those shown. However, there should be none with hosts in the `foo`, `bar` or `legacy` namespaces, nor any with the match-all wildcard `*`.
{{< /tip >}}

Source: https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/security/authentication/authn-policy/index.md

## Auto mutual TLS

By default, Istio tracks the server workloads migrated to Istio proxies, and configures client proxies to send mutual TLS traffic to those workloads automatically, and to send plain text traffic to workloads without sidecars. Thus, all traffic between workloads with proxies uses mutual TLS, without you doing anything. For example, take the response from a request to `httpbin/headers`. When using mutual TLS, the proxy injects the `X-Forwarded-Client-Cert` header into the upstream request to the backend. That header's presence is evidence that mutual TLS is used. For example:
{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl -s http://httpbin.foo:8000/headers | jq '.headers["X-Forwarded-Client-Cert"][0]' | sed 's/Hash=[a-z0-9]*;/Hash=<redacted>;/'
"By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=<redacted>;Subject=\"\";URI=spiffe://cluster.local/ns/foo/sa/curl"
{{< /text >}}

When the server doesn't have a sidecar, the `X-Forwarded-Client-Cert` header is absent, which implies requests are in plain text.

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl http://httpbin.legacy:8000/headers -s | grep X-Forwarded-Client-Cert
{{< /text >}}

## Globally enabling Istio mutual TLS in STRICT mode

While Istio automatically upgrades all traffic between the proxies and the workloads to mutual TLS, workloads can still receive plain text traffic. To prevent non-mutual TLS traffic for the whole mesh, set a mesh-wide peer authentication policy with the mutual TLS mode set to `STRICT`. The mesh-wide peer authentication policy should not have a `selector` and must be applied in the **root namespace**, for example:

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
EOF
{{< /text >}}

{{< tip >}}
The example assumes `istio-system` is the root namespace. If you used a different value during installation, replace `istio-system` with the value you used.
{{< /tip >}}

This peer authentication policy configures workloads to only accept requests encrypted with TLS. Since it doesn't specify a value for the `selector` field, the policy applies to all workloads in the mesh.
Run the test command again:

{{< text bash >}}
$ for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec "$(kubectl get pod -l app=curl -n ${from} -o jsonpath={.items..metadata.name})" -c curl -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "curl.${from} to httpbin.${to}: %{http_code}\n"; done; done
curl.foo to httpbin.foo: 200
curl.foo to httpbin.bar: 200
curl.foo to httpbin.legacy: 200
curl.bar to httpbin.foo: 200
curl.bar to httpbin.bar: 200
curl.bar to httpbin.legacy: 200
curl.legacy to httpbin.foo: 000
command terminated with exit code 56
curl.legacy to httpbin.bar: 000
command terminated with exit code 56
curl.legacy to httpbin.legacy: 200
{{< /text >}}

You see requests still succeed, except for those from the client without a proxy, `curl.legacy`, to the server with a proxy, `httpbin.foo` or `httpbin.bar`. This is expected because mutual TLS is now strictly required, but the workload without a sidecar cannot comply.

### Cleanup part 1

Remove the global authentication policy added in this session:

{{< text bash >}}
$ kubectl delete peerauthentication -n istio-system default
{{< /text >}}

## Enable mutual TLS per namespace or workload

### Namespace-wide policy

To change mutual TLS for all workloads within a particular namespace, use a namespace-wide policy. The specification of the policy is the same as for a mesh-wide policy, but you specify the namespace it applies to under `metadata`. For example, the following peer authentication policy enables strict mutual TLS for the `foo` namespace:

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "default"
  namespace: "foo"
spec:
  mtls:
    mode: STRICT
EOF
{{< /text >}}

As this policy is applied on workloads in namespace `foo` only, you should see only requests from the client without a sidecar (`curl.legacy`) to `httpbin.foo` start to fail.
{{< text bash >}}
$ for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec "$(kubectl get pod -l app=curl -n ${from} -o jsonpath={.items..metadata.name})" -c curl -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "curl.${from} to httpbin.${to}: %{http_code}\n"; done; done
curl.foo to httpbin.foo: 200
curl.foo to httpbin.bar: 200
curl.foo to httpbin.legacy: 200
curl.bar to httpbin.foo: 200
curl.bar to httpbin.bar: 200
curl.bar to httpbin.legacy: 200
curl.legacy to httpbin.foo: 000
command terminated with exit code 56
curl.legacy to httpbin.bar: 200
curl.legacy to httpbin.legacy: 200
{{< /text >}}
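For reading these matrices: `curl` prints `000` for `%{http_code}` when no HTTP response arrives at all, and exit code 56 is `CURLE_RECV_ERROR`, a connection reset mid-read, which is how a sidecar enforcing `STRICT` mTLS rejects a plaintext client. A small, self-contained lookup (illustrative shell only, not part of the task):

```shell
# Map the codes seen in the reachability matrix to what they mean here.
explain() {
    case "$1" in
        200) echo "request succeeded" ;;
        000) echo "no HTTP response (connection reset: plaintext rejected under STRICT mTLS)" ;;
        *)   echo "unexpected code $1" ;;
    esac
}

explain 200
explain 000
```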
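As a quick offline sanity check of the policy shape described above, you can verify that a mesh-wide `PeerAuthentication` carries no `selector` and requests `STRICT` mTLS before applying it. This is an illustrative shell sketch, not an Istio tool; the file path is made up:

```shell
# Write a mesh-wide policy to a scratch file and check the two properties the
# task calls out: no selector (so it applies to all workloads) and STRICT mode.
cat <<'EOF' > /tmp/mesh-peer-auth.yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
EOF

if ! grep -q 'selector' /tmp/mesh-peer-auth.yaml && grep -q 'mode: STRICT' /tmp/mesh-peer-auth.yaml; then
    echo "policy is mesh-wide and STRICT"
fi
```

The same file can then be applied with `kubectl apply -f /tmp/mesh-peer-auth.yaml`.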
### Enable mutual TLS per workload

To set a peer authentication policy for a specific workload, you must configure the `selector` section and specify the labels that match the desired workload. For example, the following peer authentication policy enables strict mutual TLS for the `httpbin.bar` workload:

{{< text bash >}}
$ cat <<EOF | kubectl apply -n bar -f -
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "httpbin"
  namespace: "bar"
spec:
  selector:
    matchLabels:
      app: httpbin
  mtls:
    mode: STRICT
EOF
{{< /text >}}

Again, run the probing command. As expected, requests from `curl.legacy` to `httpbin.bar` start failing for the same reasons.

{{< text bash >}}
$ for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec "$(kubectl get pod -l app=curl -n ${from} -o jsonpath={.items..metadata.name})" -c curl -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "curl.${from} to httpbin.${to}: %{http_code}\n"; done; done
curl.foo to httpbin.foo: 200
curl.foo to httpbin.bar: 200
curl.foo to httpbin.legacy: 200
curl.bar to httpbin.foo: 200
curl.bar to httpbin.bar: 200
curl.bar to httpbin.legacy: 200
curl.legacy to httpbin.foo: 000
command terminated with exit code 56
curl.legacy to httpbin.bar: 000
command terminated with exit code 56
curl.legacy to httpbin.legacy: 200
{{< /text >}}

{{< text plain >}}
...
curl.legacy to httpbin.bar: 000
command terminated with exit code 56
{{< /text >}}

To refine the mutual TLS settings per port, you must configure the `portLevelMtls` section.
For example, the following peer authentication policy requires mutual TLS on all ports, except port `8080`:

{{< text bash >}}
$ cat <<EOF | kubectl apply -n bar -f -
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "httpbin"
  namespace: "bar"
spec:
  selector:
    matchLabels:
      app: httpbin
  mtls:
    mode: STRICT
  portLevelMtls:
    8080:
      mode: DISABLE
EOF
{{< /text >}}

1. The port value in the peer authentication policy is the container's port.
1. You can only use `portLevelMtls` if the port is bound to a service. Istio ignores it otherwise.

{{< text bash >}}
$ for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec "$(kubectl get pod -l app=curl -n ${from} -o jsonpath={.items..metadata.name})" -c curl -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "curl.${from} to httpbin.${to}: %{http_code}\n"; done; done
curl.foo to httpbin.foo: 200
curl.foo to httpbin.bar: 200
curl.foo to httpbin.legacy: 200
curl.bar to httpbin.foo: 200
curl.bar to httpbin.bar: 200
curl.bar to httpbin.legacy: 200
curl.legacy to httpbin.foo: 000
command terminated with exit code 56
curl.legacy to httpbin.bar: 200
curl.legacy to httpbin.legacy: 200
{{< /text >}}

### Policy precedence

A workload-specific peer authentication policy takes precedence over a namespace-wide policy. You can test this behavior by adding a policy to disable mutual TLS for the `httpbin.foo` workload, for example. Note that you've already created a namespace-wide policy that enables mutual TLS for all services in namespace `foo` and observed that requests from `curl.legacy` to `httpbin.foo` are failing (see above).

{{< text bash >}}
$ cat <<EOF | kubectl apply -n foo -f -
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: "overwrite-example"
  namespace: "foo"
spec:
  selector:
    matchLabels:
      app: httpbin
  mtls:
    mode: DISABLE
EOF
{{< /text >}}

Re-running the request from `curl.legacy`, you should see a success return code (200) again, confirming that the service-specific policy overrides the namespace-wide policy.
{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=curl -n legacy -o jsonpath={.items..metadata.name})" -c curl -n legacy -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "%{http_code}\n"
200
{{< /text >}}

### Cleanup part 2

Remove policies created in the above steps:

{{< text bash >}}
$ kubectl delete peerauthentication default overwrite-example -n foo
$ kubectl delete peerauthentication httpbin -n bar
{{< /text >}}

## End-user authentication
To experiment with this feature, you need a valid JWT. The JWT must correspond to the JWKS endpoint you want to use for the demo. This tutorial uses the test token [JWT test]({{< github_file >}}/security/tools/jwt/samples/demo.jwt) and [JWKS endpoint]({{< github_file >}}/security/tools/jwt/samples/jwks.json) from the Istio code base.

Also, for convenience, expose `httpbin.foo` via an ingress gateway (for more details, see the [ingress task](/docs/tasks/traffic-management/ingress/)).

{{< boilerplate gateway-api-support >}}

{{< tabset category-name="config-api" >}}

{{< tab name="Istio APIs" category-value="istio-apis" >}}

Configure the gateway:

{{< text bash >}}
$ kubectl apply -f @samples/httpbin/httpbin-gateway.yaml@ -n foo
{{< /text >}}

Follow the instructions in [Determining the ingress IP and ports](/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports) to define the `INGRESS_PORT` and `INGRESS_HOST` environment variables.
{{< /tab >}}

{{< tab name="Gateway API" category-value="gateway-api" >}}

Create the gateway:

{{< text bash >}}
$ kubectl apply -f @samples/httpbin/gateway-api/httpbin-gateway.yaml@ -n foo
$ kubectl wait --for=condition=programmed gtw -n foo httpbin-gateway
{{< /text >}}

Set the `INGRESS_PORT` and `INGRESS_HOST` environment variables:

{{< text bash >}}
$ export INGRESS_HOST=$(kubectl get gtw httpbin-gateway -n foo -o jsonpath='{.status.addresses[0].value}')
$ export INGRESS_PORT=$(kubectl get gtw httpbin-gateway -n foo -o jsonpath='{.spec.listeners[?(@.name=="http")].port}')
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

Run a test query through the gateway:

{{< text bash >}}
$ curl "$INGRESS_HOST:$INGRESS_PORT/headers" -s -o /dev/null -w "%{http_code}\n"
200
{{< /text >}}

Now, add a request authentication policy that requires end-user JWTs for the ingress gateway.

{{< tabset category-name="config-api" >}}

{{< tab name="Istio APIs" category-value="istio-apis" >}}

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: RequestAuthentication
metadata:
  name: "jwt-example"
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  jwtRules:
  - issuer: "testing@secure.istio.io"
    jwksUri: "{{< github_file >}}/security/tools/jwt/samples/jwks.json"
EOF
{{< /text >}}

{{< /tab >}}

{{< tab name="Gateway API" category-value="gateway-api" >}}

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: RequestAuthentication
metadata:
  name: "jwt-example"
  namespace: foo
spec:
  targetRefs:
  - kind: Gateway
    group: gateway.networking.k8s.io
    name: httpbin-gateway
  jwtRules:
  - issuer: "testing@secure.istio.io"
    jwksUri: "{{< github_file >}}/security/tools/jwt/samples/jwks.json"
EOF
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

Apply the policy in the namespace of the workload it selects, the ingress gateway in this case. If you provide a token in the authorization header, its implicit default location, Istio validates the token using the [public key set]({{< github_file >}}/security/tools/jwt/samples/jwks.json), and rejects requests if the bearer token is invalid. However, requests without tokens are accepted.
To observe this behavior, retry the request without a token, with a bad token, and with a valid token:

{{< text bash >}}
$ curl "$INGRESS_HOST:$INGRESS_PORT/headers" -s -o /dev/null -w "%{http_code}\n"
200
{{< /text >}}

{{< text bash >}}
$ curl --header "Authorization: Bearer deadbeef" "$INGRESS_HOST:$INGRESS_PORT/headers" -s -o /dev/null -w "%{http_code}\n"
401
{{< /text >}}

{{< text bash >}}
$ TOKEN=$(curl {{< github_file >}}/security/tools/jwt/samples/demo.jwt -s)
$ curl --header "Authorization: Bearer $TOKEN" "$INGRESS_HOST:$INGRESS_PORT/headers" -s -o /dev/null -w "%{http_code}\n"
200
{{< /text >}}

To observe other aspects of JWT validation, use the script [`gen-jwt.py`]({{< github_tree >}}/security/tools/jwt/samples/gen-jwt.py) to generate new tokens to test with different issuers, audiences, expiry dates, etc. The script can be downloaded from the Istio repository:

{{< text bash >}}
$ wget --no-verbose {{< github_file >}}/security/tools/jwt/samples/gen-jwt.py
{{< /text >}}

You also need the `key.pem` file:

{{< text bash >}}
$ wget --no-verbose {{< github_file >}}/security/tools/jwt/samples/key.pem
{{< /text >}}

{{< tip >}}
Install the [jwcrypto](https://pypi.org/project/jwcrypto) library, if you haven't already installed it on your system.
{{< /tip >}}

JWT authentication allows for 60 seconds of clock skew, meaning a JWT becomes valid up to 60 seconds earlier than its configured `nbf` and remains valid up to 60 seconds after its configured `exp`. For example, the command below creates a token that expires in 5 seconds. As you will see, Istio authenticates requests using that token successfully at first, but rejects them after 65 seconds:
{{< text bash >}}
$ TOKEN=$(python3 ./gen-jwt.py ./key.pem --expire 5)
$ for i in $(seq 1 10); do curl --header "Authorization: Bearer $TOKEN" "$INGRESS_HOST:$INGRESS_PORT/headers" -s -o /dev/null -w "%{http_code}\n"; sleep 10; done
200
200
200
200
200
200
200
401
401
401
{{< /text >}}

You can also add a JWT policy to an ingress gateway (e.g., service `istio-ingressgateway.istio-system.svc.cluster.local`). This is often used to define a JWT policy for all services bound to the gateway, instead of for individual services.

### Require a valid token

To reject requests without valid tokens, add an authorization policy with a rule specifying a `DENY` action for requests without request principals, shown as `notRequestPrincipals: ["*"]` in the following example. Request principals are available only when valid JWT tokens are provided, so the rule denies requests without valid tokens.

{{< tabset category-name="config-api" >}}

{{< tab name="Istio APIs" category-value="istio-apis" >}}

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: "frontend-ingress"
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: DENY
  rules:
  - from:
    - source:
        notRequestPrincipals: ["*"]
EOF
{{< /text >}}

{{< /tab >}}

{{< tab name="Gateway API" category-value="gateway-api" >}}

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: "frontend-ingress"
  namespace: foo
spec:
  targetRefs:
  - kind: Gateway
    group: gateway.networking.k8s.io
    name: httpbin-gateway
  action: DENY
  rules:
  - from:
    - source:
        notRequestPrincipals: ["*"]
EOF
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

Retry the request without a token. The request now fails with error code `403`:

{{< text bash >}}
$ curl "$INGRESS_HOST:$INGRESS_PORT/headers" -s -o /dev/null -w "%{http_code}\n"
403
{{< /text >}}

### Require valid tokens per-path

To refine authorization with a token requirement per host, path, or method, change the authorization policy to only require JWT on `/headers`.
When this authorization rule takes effect, requests to `$INGRESS_HOST:$INGRESS_PORT/headers` fail with the error code `403`. Requests to all other paths succeed, for example `$INGRESS_HOST:$INGRESS_PORT/ip`.

{{< tabset category-name="config-api" >}}

{{< tab name="Istio APIs" category-value="istio-apis" >}}

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: "frontend-ingress"
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: DENY
  rules:
  - from:
    - source:
        notRequestPrincipals: ["*"]
    to:
    - operation:
        paths: ["/headers"]
EOF
{{< /text >}}

{{< /tab >}}

{{< tab name="Gateway API" category-value="gateway-api" >}}

{{< text bash >}}
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: "frontend-ingress"
  namespace: foo
spec:
  targetRefs:
  - kind: Gateway
    group: gateway.networking.k8s.io
    name: httpbin-gateway
  action: DENY
  rules:
  - from:
    - source:
        notRequestPrincipals: ["*"]
    to:
    - operation:
        paths: ["/headers"]
EOF
{{< /text >}}

{{< /tab >}}

{{< /tabset >}}

{{< text bash >}}
$ curl "$INGRESS_HOST:$INGRESS_PORT/headers" -s -o /dev/null -w "%{http_code}\n"
403
{{< /text >}}

{{< text bash >}}
$ curl "$INGRESS_HOST:$INGRESS_PORT/ip" -s -o /dev/null -w "%{http_code}\n"
200
{{< /text >}}

### Cleanup part 3

1. Remove the request authentication policy:

    {{< text bash >}}
    $ kubectl -n istio-system delete requestauthentication jwt-example
    {{< /text >}}

1. Remove the authorization policy:

    {{< text bash >}}
    $ kubectl -n istio-system delete authorizationpolicy frontend-ingress
    {{< /text >}}

1. Remove the token generator script and key file:

    {{< text bash >}}
    $ rm -f ./gen-jwt.py ./key.pem
    {{< /text >}}

1. If you are not planning to explore any follow-on tasks, you can remove all resources simply by deleting the test namespaces.

    {{< text bash >}}
    $ kubectl delete ns foo bar legacy
    {{< /text >}}
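The `DENY` rule used in this task can be modeled as a simple predicate: a request principal (formed as `<issuer>/<subject>`) exists only when a valid JWT was presented, and `notRequestPrincipals: ["*"]` matches exactly the requests where it is absent. The toy shell model below (illustrative only, not the Envoy implementation) captures that logic:

```shell
# Toy evaluation of the frontend-ingress policy: deny (403) any request whose
# principal set is empty, i.e. one that arrived without a valid JWT.
authz_check() {
    principal="$1"   # e.g. "testing@secure.istio.io/testing@secure.istio.io"
    if [ -z "$principal" ]; then
        echo 403     # matched notRequestPrincipals: ["*"] under action: DENY
    else
        echo 200     # principal present, DENY rule does not match
    fi
}

authz_check ""                                                  # no/invalid token
authz_check "testing@secure.istio.io/testing@secure.istio.io"   # valid token
```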
{{< boilerplate experimental >}}

This task shows you how to copy valid JWT claims to HTTP headers after JWT authentication has successfully completed via an Istio request authentication policy.

{{< warning >}}
Only claims of type string, boolean, and integer are supported. Array-type claims are not supported at this time.
{{< /warning >}}

## Before you begin

Before you begin this task, do the following:

* Familiarize yourself with [Istio end user authentication](/docs/tasks/security/authentication/authn-policy/#end-user-authentication) support.

* Install Istio using the [Istio installation guide](/docs/setup/install/istioctl/).

* Deploy `httpbin` and `curl` workloads in namespace `foo` with sidecar injection enabled. Deploy the example namespace and workloads using these commands:

    {{< text bash >}}
    $ kubectl create ns foo
    $ kubectl label namespace foo istio-injection=enabled
    $ kubectl apply -f @samples/httpbin/httpbin.yaml@ -n foo
    $ kubectl apply -f @samples/curl/curl.yaml@ -n foo
    {{< /text >}}

* Verify that `curl` successfully communicates with `httpbin` using this command:

    {{< text bash >}}
    $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl http://httpbin.foo:8000/ip -sS -o /dev/null -w "%{http_code}\n"
    200
    {{< /text >}}

{{< warning >}}
If you don't see the expected output, retry after a few seconds. Caching and propagation can cause a delay.
{{< /warning >}}

## Allow requests with valid JWT and list-typed claims

1. The following command creates the `jwt-example` request authentication policy for the `httpbin` workload in the `foo` namespace. This policy accepts a JWT issued by `testing@secure.istio.io` and copies the value of claim `foo` to the HTTP header `X-Jwt-Claim-Foo`:

    {{< text bash >}}
    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1
    kind: RequestAuthentication
    metadata:
      name: "jwt-example"
      namespace: foo
    spec:
      selector:
        matchLabels:
          app: httpbin
      jwtRules:
      - issuer: "testing@secure.istio.io"
        jwksUri: "{{< github_file >}}/security/tools/jwt/samples/jwks.json"
        outputClaimToHeaders:
        - header: "x-jwt-claim-foo"
          claim: "foo"
    EOF
    {{< /text >}}
1. Verify that a request with an invalid JWT is denied:

    {{< text bash >}}
    $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/headers" -sS -o /dev/null -H "Authorization: Bearer invalidToken" -w "%{http_code}\n"
    401
    {{< /text >}}

1. Get the JWT, which is issued by `testing@secure.istio.io` and has a claim with key `foo`:

    {{< text syntax="bash" expandlinks="false" >}}
    $ TOKEN=$(curl {{< github_file >}}/security/tools/jwt/samples/demo.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode -
    {"exp":4685989700,"foo":"bar","iat":1532389700,"iss":"testing@secure.istio.io","sub":"testing@secure.istio.io"}
    {{< /text >}}

1. Verify that a request with a valid JWT is allowed:

    {{< text bash >}}
    $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/headers" -sS -o /dev/null -H "Authorization: Bearer $TOKEN" -w "%{http_code}\n"
    200
    {{< /text >}}

1. Verify that the request contains a valid HTTP header with the JWT claim value:

    {{< text bash >}}
    $ kubectl exec "$(kubectl get pod -l app=curl -n foo -o jsonpath={.items..metadata.name})" -c curl -n foo -- curl "http://httpbin.foo:8000/headers" -sS -H "Authorization: Bearer $TOKEN" | jq '.headers["X-Jwt-Claim-Foo"][0]'
    "bar"
    {{< /text >}}

## Clean up

Remove the namespace `foo`:

{{< text bash >}}
$ kubectl delete namespace foo
{{< /text >}}

Source: https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/security/authentication/claim-to-header/index.md
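The `cut`/`base64` inspection above works because a JWT's payload is just base64url-encoded JSON sitting in the token's middle dot-separated segment. This self-contained sketch builds a token-shaped string from a made-up payload (no real header or signature, for illustration only) and decodes it the same way:

```shell
# Encode an invented payload as the middle segment of a token-shaped string,
# then recover it with the same cut + base64 pipeline the task uses.
payload='{"iss":"testing@secure.istio.io","foo":"bar"}'
token="header.$(printf '%s' "$payload" | base64 | tr -d '\n').signature"

echo "$token" | cut -d '.' -f2 | base64 -d
```

Real tokens use base64url (with `-`/`_` instead of `+`/`/` and padding stripped), so decoding an arbitrary JWT segment may need character translation and re-padding first; the demo payload here happens to round-trip cleanly.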
The Telemetry API has been a first-class API in Istio for quite some time. Previously, users had to configure metrics in the `telemetry` section of the Istio configuration. This task shows you how to customize the metrics that Istio generates with the Telemetry API.

## Before you begin

[Install Istio](/docs/setup/) in your cluster and deploy an application.

The Telemetry API cannot work together with `EnvoyFilter`. For more details, please check out this [issue](https://github.com/istio/istio/issues/39772).

* Starting with Istio version `1.18`, the Prometheus `EnvoyFilter` is not installed by default; instead, `meshConfig.defaultProviders` is used to enable it. The Telemetry API should be used to further customize the telemetry pipeline.

* For versions of Istio before `1.18`, you should install with the following `IstioOperator` configuration:

    {{< text yaml >}}
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      values:
        telemetry:
          enabled: true
          v2:
            enabled: false
    {{< /text >}}

## Override metrics

The `metrics` section provides values for the metric dimensions as expressions, and allows you to remove or override the existing metric dimensions. You can modify the standard metric definitions using `tags_to_remove` or by re-defining a dimension.

1. Remove the `grpc_response_status` tag from the `REQUEST_COUNT` metric:

    {{< text yaml >}}
    apiVersion: telemetry.istio.io/v1
    kind: Telemetry
    metadata:
      name: remove-tags
      namespace: istio-system
    spec:
      metrics:
      - providers:
        - name: prometheus
        overrides:
        - match:
            mode: CLIENT_AND_SERVER
            metric: REQUEST_COUNT
          tagOverrides:
            grpc_response_status:
              operation: REMOVE
    {{< /text >}}
1. Add custom tags to the `REQUEST_COUNT` metric:

    {{< text yaml >}}
    apiVersion: telemetry.istio.io/v1
    kind: Telemetry
    metadata:
      name: custom-tags
      namespace: istio-system
    spec:
      metrics:
      - overrides:
        - match:
            metric: REQUEST_COUNT
            mode: CLIENT
          tagOverrides:
            destination_x:
              value: filter_state.upstream_peer.app
        - match:
            metric: REQUEST_COUNT
            mode: SERVER
          tagOverrides:
            source_x:
              value: filter_state.downstream_peer.app
        providers:
        - name: prometheus
    {{< /text >}}

## Disable metrics

1. Disable all metrics with the following configuration:

    {{< text yaml >}}
    apiVersion: telemetry.istio.io/v1
    kind: Telemetry
    metadata:
      name: remove-all-metrics
      namespace: istio-system
    spec:
      metrics:
      - providers:
        - name: prometheus
        overrides:
        - disabled: true
          match:
            mode: CLIENT_AND_SERVER
            metric: ALL_METRICS
    {{< /text >}}

1. Disable the `REQUEST_COUNT` metric with the following configuration:

    {{< text yaml >}}
    apiVersion: telemetry.istio.io/v1
    kind: Telemetry
    metadata:
      name: remove-request-count
      namespace: istio-system
    spec:
      metrics:
      - providers:
        - name: prometheus
        overrides:
        - disabled: true
          match:
            mode: CLIENT_AND_SERVER
            metric: REQUEST_COUNT
    {{< /text >}}

1. Disable the `REQUEST_COUNT` metric for the client with the following configuration:

    {{< text yaml >}}
    apiVersion: telemetry.istio.io/v1
    kind: Telemetry
    metadata:
      name: remove-client
      namespace: istio-system
    spec:
      metrics:
      - providers:
        - name: prometheus
        overrides:
        - disabled: true
          match:
            mode: CLIENT
            metric: REQUEST_COUNT
    {{< /text >}}

1. Disable the `REQUEST_COUNT` metric for the server with the following configuration:

    {{< text yaml >}}
    apiVersion: telemetry.istio.io/v1
    kind: Telemetry
    metadata:
      name: remove-server
      namespace: istio-system
    spec:
      metrics:
      - providers:
        - name: prometheus
        overrides:
        - disabled: true
          match:
            mode: SERVER
            metric: REQUEST_COUNT
    {{< /text >}}

## Verify the results

Send traffic to the mesh.
For the Bookinfo sample, visit `http://$GATEWAY\_URL/productpage` in your web browser or issue the following command: {{< text bash >}} $ curl "http://$GATEWAY\_URL/productpage" {{< /text >}} {{< tip >}} `$GATEWAY\_URL` is the value set in the [Bookinfo](/docs/examples/bookinfo/) example. {{< /tip >}} Use the following command to verify that Istio generates the data for your new or modified dimensions: {{< text bash >}} $ istioctl x es "$(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}')" -oprom | grep istio\_requests\_total | grep -v TYPE |grep -v 'reporter="destination"' {{< /text >}} {{< text bash >}} $ istioctl x es "$(kubectl get pod -l app=details -o jsonpath='{.items[0].metadata.name}')" -oprom | grep istio\_requests\_total {{< /text >}} For example, in the output, locate the metric `istio\_requests\_total` and verify it contains your new dimension. {{< tip >}} It might take a short period of time for the proxies to start applying the config. If the metric is not received, you may retry sending requests after a short wait, | https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/observability/metrics/telemetry-api/index.md | master | istio | [
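The resources above live in the root namespace (`istio-system`) and therefore apply mesh-wide. As a sketch (the `myapp` label and resource name are hypothetical), the same override pattern can be scoped to a single workload by creating the `Telemetry` resource in that workload's namespace with a selector:

{{< text yaml >}}
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: myapp-metrics
  namespace: default
spec:
  selector:
    matchLabels:
      app: myapp
  metrics:
  - providers:
    - name: prometheus
    overrides:
    - match:
        mode: CLIENT_AND_SERVER
        metric: REQUEST_COUNT
      tagOverrides:
        grpc_response_status:
          operation: REMOVE
{{< /text >}}

Workload-scoped resources take precedence over the mesh-wide configuration for the pods they select, which keeps experiments from affecting the whole mesh.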
This task shows you how to customize the metrics that Istio generates. Istio generates telemetry that various dashboards consume to help you visualize your mesh. For example, dashboards that support Istio include:

* [Grafana](/docs/tasks/observability/metrics/using-istio-dashboard/)
* [Kiali](/docs/tasks/observability/kiali/)
* [Prometheus](/docs/tasks/observability/metrics/querying-metrics/)

By default, Istio defines and generates a set of standard metrics (e.g. `requests_total`), but you can also customize them and create new metrics using the [Telemetry API](/docs/tasks/observability/telemetry/).

## Before you begin

[Install Istio](/docs/setup/) in your cluster and deploy an application. Alternatively, you can set up custom statistics as part of the Istio installation.

The [Bookinfo](/docs/examples/bookinfo/) sample application is used as the example application throughout this task. For installation instructions, see [deploying the Bookinfo application](/docs/examples/bookinfo/#deploying-the-application).

## Enable custom metrics

To customize telemetry metrics, for example, to add `request_host` and `destination_port` dimensions to the `requests_total` metric emitted by both gateways and sidecars in the inbound and outbound direction, use the following:

{{< text bash >}}
$ cat <<EOF > ./custom_metrics.yaml
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: namespace-metrics
spec:
  metrics:
  - providers:
    - name: prometheus
    overrides:
    - match:
        metric: REQUEST_COUNT
      tagOverrides:
        destination_port:
          value: "string(destination.port)"
        request_host:
          value: "request.host"
EOF
$ kubectl apply -f custom_metrics.yaml
{{< /text >}}

## Verify the results

Send traffic to the mesh. For the Bookinfo sample, visit `http://$GATEWAY_URL/productpage` in your web browser or issue the following command:

{{< text bash >}}
$ curl "http://$GATEWAY_URL/productpage"
{{< /text >}}

{{< tip >}}
`$GATEWAY_URL` is the value set in the [Bookinfo](/docs/examples/bookinfo/) example.
{{< /tip >}}

Use the following command to verify that Istio generates the data for your new or modified dimensions:

{{< text bash >}}
$ kubectl exec "$(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}')" -c istio-proxy -- curl -sS 'localhost:15000/stats/prometheus' | grep istio_requests_total
{{< /text >}}

For example, in the output, locate the metric `istio_requests_total` and verify it contains your new dimension.

{{< tip >}}
It might take a short period of time for the proxies to start applying the config. If the metric is not received, you may retry sending requests after a short wait, and look for the metric again.
{{< /tip >}}

## Use expressions for values

The values in the metric configuration are common expressions, which means you must double-quote strings in JSON, e.g. `"'string value'"`. Unlike the Mixer expression language, there is no support for the pipe (`|`) operator, but you can emulate it with the `has` or `in` operator, for example:

{{< text plain >}}
has(request.host) ? request.host : "unknown"
{{< /text >}}

For more information, see [Common Expression Language](https://opensource.google/projects/cel).

Istio exposes all standard [Envoy attributes](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/advanced/attributes). Peer metadata is available as the attributes `upstream_peer` for outbound and `downstream_peer` for inbound traffic, with the following fields:

| Field       | Type     | Value                                                      |
|-------------|----------|------------------------------------------------------------|
| `app`       | `string` | Application name.                                          |
| `version`   | `string` | Application version.                                       |
| `service`   | `string` | Service instance.                                          |
| `revision`  | `string` | Service version.                                           |
| `name`      | `string` | Name of the pod.                                           |
| `namespace` | `string` | Namespace that the pod runs in.                            |
| `type`      | `string` | Workload type.                                             |
| `workload`  | `string` | Workload name.                                             |
| `cluster`   | `string` | Identifier for the cluster to which this workload belongs. |

For example, the expression for the peer `app` label is `filter_state.upstream_peer.app` in an outbound configuration or `filter_state.downstream_peer.app` in an inbound configuration.

## Cleanup

To delete the `Bookinfo` sample application and its configuration, see [`Bookinfo` cleanup](/docs/examples/bookinfo/#cleanup).

Source: https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/observability/metrics/customize-metrics/index.md
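Putting the expression language together with the tag-override mechanism, a sketch (the resource name and the `request_host` tag name are illustrative) that emulates the old pipe-operator fallback for a possibly-missing attribute looks like:

{{< text yaml >}}
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: fallback-host
  namespace: istio-system
spec:
  metrics:
  - providers:
    - name: prometheus
    overrides:
    - match:
        metric: REQUEST_COUNT
        mode: CLIENT_AND_SERVER
      tagOverrides:
        request_host:
          value: 'has(request.host) ? request.host : "unknown"'
{{< /text >}}

The ternary guards against requests without a `Host` header, so the dimension always has a value instead of being dropped.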
This task shows you how to query for Istio metrics using Prometheus. As part of this task, you will use the web-based interface for querying metric values.

The [Bookinfo](/docs/examples/bookinfo/) sample application is used as the example application throughout this task.

## Before you begin

* [Install Istio](/docs/setup) in your cluster.
* Install the [Prometheus Addon](/docs/ops/integrations/prometheus/#option-1-quick-start).
* Deploy the [Bookinfo](/docs/examples/bookinfo/) application.

## Querying Istio metrics

1. Verify that the `prometheus` service is running in your cluster. In Kubernetes environments, execute the following command:

    {{< text bash >}}
    $ kubectl -n istio-system get svc prometheus
    NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    prometheus   ClusterIP   10.109.160.254                 9090/TCP   4m
    {{< /text >}}

1. Send traffic to the mesh. For the Bookinfo sample, visit `http://$GATEWAY_URL/productpage` in your web browser or issue the following command:

    {{< text bash >}}
    $ curl "http://$GATEWAY_URL/productpage"
    {{< /text >}}

    {{< tip >}}
    `$GATEWAY_URL` is the value set in the [Bookinfo](/docs/examples/bookinfo/) example.
    {{< /tip >}}

1. Open the Prometheus UI. In Kubernetes environments, execute the following command:

    {{< text bash >}}
    $ istioctl dashboard prometheus
    {{< /text >}}

    Click **Graph** to the right of Prometheus in the header.

1. Execute a Prometheus query. In the "Expression" input box at the top of the web page, enter the text:

    {{< text plain >}}
    istio_requests_total
    {{< /text >}}

    Then, click the **Execute** button. The results will be similar to:

    {{< image link="./prometheus_query_result.png" caption="Prometheus Query Result" >}}

    You can also see the query results graphically by selecting the Graph tab underneath the **Execute** button.

    {{< image link="./prometheus_query_result_graphical.png" caption="Prometheus Query Result - Graphical" >}}

Other queries to try:

* Total count of all requests to the `productpage` service:

    {{< text plain >}}
    istio_requests_total{destination_service="productpage.default.svc.cluster.local"}
    {{< /text >}}

* Total count of all requests to `v3` of the `reviews` service:

    {{< text plain >}}
    istio_requests_total{destination_service="reviews.default.svc.cluster.local", destination_version="v3"}
    {{< /text >}}

    This query returns the current total count of all requests to `v3` of the `reviews` service.

* Rate of requests over the past 5 minutes to all instances of the `productpage` service:

    {{< text plain >}}
    rate(istio_requests_total{destination_service=~"productpage.*", response_code="200"}[5m])
    {{< /text >}}

### About the Prometheus addon

The Prometheus addon is a Prometheus server that comes preconfigured to scrape Istio endpoints to collect metrics. It provides a mechanism for persistent storage and querying of Istio metrics.

For more on querying Prometheus, please read their [querying docs](https://prometheus.io/docs/querying/basics/).

## Cleanup

* Remove any `istioctl` processes that may still be running using control-C or:

    {{< text bash >}}
    $ killall istioctl
    {{< /text >}}

* If you are not planning to explore any follow-on tasks, refer to the [Bookinfo cleanup](/docs/examples/bookinfo/#cleanup) instructions to shut down the application.

Source: https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/observability/metrics/querying-metrics/index.md
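Building on the queries above, rates can also be combined into ratios. For example, a sketch of a query for the 5xx error ratio of the `reviews` service over the last 5 minutes, assuming the standard `destination_service` and `response_code` labels shown earlier:

{{< text plain >}}
sum(rate(istio_requests_total{destination_service=~"reviews.*", response_code=~"5.."}[5m]))
/
sum(rate(istio_requests_total{destination_service=~"reviews.*"}[5m]))
{{< /text >}}

Dividing two `sum(rate(...))` aggregates yields a single success/error ratio rather than one series per label combination, which is usually what you want for alerting.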
This task shows you how to set up and use the Istio Dashboard to monitor mesh traffic. As part of this task, you will use the Grafana Istio addon and the web-based interface for viewing service mesh traffic data.

The [Bookinfo](/docs/examples/bookinfo/) sample application is used as the example application throughout this task.

## Before you begin

* [Install Istio](/docs/setup) in your cluster.
* Install the [Grafana Addon](/docs/ops/integrations/grafana/#option-1-quick-start).
* Install the [Prometheus Addon](/docs/ops/integrations/prometheus/#option-1-quick-start).
* Deploy the [Bookinfo](/docs/examples/bookinfo/) application.

## Viewing the Istio dashboard

1. Verify that the `prometheus` service is running in your cluster. In Kubernetes environments, execute the following command:

    {{< text bash >}}
    $ kubectl -n istio-system get svc prometheus
    NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    prometheus   ClusterIP   10.100.250.202                 9090/TCP   103s
    {{< /text >}}

1. Verify that the Grafana service is running in your cluster. In Kubernetes environments, execute the following command:

    {{< text bash >}}
    $ kubectl -n istio-system get svc grafana
    NAME      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    grafana   ClusterIP   10.103.244.103                 3000/TCP   2m25s
    {{< /text >}}

1. Open the Istio Dashboard via the Grafana UI. In Kubernetes environments, execute the following command:

    {{< text bash >}}
    $ istioctl dashboard grafana
    {{< /text >}}

    Visit [http://localhost:3000/d/G8wLrJIZk/istio-mesh-dashboard](http://localhost:3000/d/G8wLrJIZk/istio-mesh-dashboard) in your web browser. The Istio Dashboard will look similar to:

    {{< image link="./grafana-istio-dashboard.png" caption="Istio Dashboard" >}}

1. Send traffic to the mesh. For the Bookinfo sample, visit `http://$GATEWAY_URL/productpage` in your web browser or issue the following command:

    {{< boilerplate trace-generation >}}

    {{< tip >}}
    `$GATEWAY_URL` is the value set in the [Bookinfo](/docs/examples/bookinfo/) example.
    {{< /tip >}}

    Refresh the page a few times (or send the command a few times) to generate a small amount of traffic.

    Look at the Istio Dashboard again. It should reflect the traffic that was generated. It will look similar to:

    {{< image link="./dashboard-with-traffic.png" caption="Istio Dashboard With Traffic" >}}

    This gives the global view of the mesh along with services and workloads in the mesh. You can get more details about services and workloads by navigating to their specific dashboards as explained below.

1. Visualize Service Dashboards. From the Grafana dashboard's left-hand navigation menu, you can navigate to the Istio Service Dashboard, or visit [http://localhost:3000/d/LJ_uJAvmk/istio-service-dashboard](http://localhost:3000/d/LJ_uJAvmk/istio-service-dashboard) in your web browser.

    {{< tip >}}
    You may need to select a service in the Service dropdown.
    {{< /tip >}}

    The Istio Service Dashboard will look similar to:

    {{< image link="./istio-service-dashboard.png" caption="Istio Service Dashboard" >}}

    This gives details about metrics for the service, and then client workloads (workloads that are calling this service) and service workloads (workloads that are providing this service) for that service.

1. Visualize Workload Dashboards. From the Grafana dashboard's left-hand navigation menu, you can navigate to the Istio Workload Dashboard, or visit [http://localhost:3000/d/UbsSZTDik/istio-workload-dashboard](http://localhost:3000/d/UbsSZTDik/istio-workload-dashboard) in your web browser.

    The Istio Workload Dashboard will look similar to:

    {{< image link="./istio-workload-dashboard.png" caption="Istio Workload Dashboard" >}}

    This gives details about metrics for each workload, and then inbound workloads (workloads that are sending requests to this workload) and outbound services (services to which this workload sends requests) for that workload.

### About the Grafana dashboards

The Istio Dashboard consists of three main sections:

1. A Mesh Summary View. This section provides a global summary view of the mesh and shows HTTP/gRPC and TCP workloads in the mesh.
1. Individual Services View. This section provides metrics about requests and responses for each individual service within the mesh (HTTP/gRPC and TCP). This also provides metrics about client and service workloads for this service.
1. Individual Workloads View. This section provides metrics about requests and responses for each individual workload within the mesh (HTTP/gRPC and TCP). This also provides metrics about inbound workloads and outbound services for this workload.

Source: https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/observability/metrics/using-istio-dashboard/index.md
For more on how to create, configure, and edit dashboards, please see the [Grafana documentation](https://docs.grafana.org/).

## Cleanup

* Remove any `kubectl port-forward` processes that may be running:

    {{< text bash >}}
    $ killall kubectl
    {{< /text >}}

* If you are not planning to explore any follow-on tasks, refer to the [Bookinfo cleanup](/docs/examples/bookinfo/#cleanup) instructions to shut down the application.
This task shows how to configure Istio to automatically gather telemetry for TCP services in a mesh. At the end of this task, you can query default TCP metrics for your mesh.

The [Bookinfo](/docs/examples/bookinfo/) sample application is used as the example throughout this task.

## Before you begin

* [Install Istio](/docs/setup) in your cluster and deploy an application. You must also install [Prometheus](/docs/ops/integrations/prometheus/).
* This task assumes that the Bookinfo sample will be deployed in the `default` namespace. If you use a different namespace, update the example configuration and commands.

## Collecting new telemetry data

1. Setup Bookinfo to use MongoDB.

    1. Install `v2` of the `ratings` service:

        {{< text bash >}}
        $ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml@
        serviceaccount/bookinfo-ratings-v2 created
        deployment.apps/ratings-v2 created
        {{< /text >}}

    1. Install the `mongodb` service:

        {{< text bash >}}
        $ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-db.yaml@
        service/mongodb created
        deployment.apps/mongodb-v1 created
        {{< /text >}}

    1. The Bookinfo sample deploys multiple versions of each microservice, so begin by creating destination rules that define the service subsets corresponding to each version, and the load balancing policy for each subset:

        {{< text bash >}}
        $ kubectl apply -f @samples/bookinfo/networking/destination-rule-all.yaml@
        {{< /text >}}

        If you enabled mutual TLS, run the following command instead:

        {{< text bash >}}
        $ kubectl apply -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@
        {{< /text >}}

        To display the destination rules, run the following command:

        {{< text bash >}}
        $ kubectl get destinationrules -o yaml
        {{< /text >}}

        Wait a few seconds for destination rules to propagate before adding virtual services that refer to these subsets, because the subset references in virtual services rely on the destination rules.

    1. Create `ratings` and `reviews` virtual services:

        {{< text bash >}}
        $ kubectl apply -f @samples/bookinfo/networking/virtual-service-ratings-db.yaml@
        virtualservice.networking.istio.io/reviews created
        virtualservice.networking.istio.io/ratings created
        {{< /text >}}

1. Send traffic to the sample application. For the Bookinfo sample, visit `http://$GATEWAY_URL/productpage` in your web browser or use the following command:

    {{< text bash >}}
    $ curl http://"$GATEWAY_URL/productpage"
    {{< /text >}}

    {{< tip >}}
    `$GATEWAY_URL` is the value set in the [Bookinfo](/docs/examples/bookinfo/) example.
    {{< /tip >}}

1. Verify that the TCP metric values are being generated and collected. In a Kubernetes environment, set up port-forwarding for Prometheus by using the following command:

    {{< text bash >}}
    $ istioctl dashboard prometheus
    {{< /text >}}

    View the values for the TCP metrics in the Prometheus browser window. Select **Graph**. Enter the `istio_tcp_connections_opened_total` or `istio_tcp_connections_closed_total` metric and select **Execute**. The table displayed in the **Console** tab includes entries similar to:

    {{< text plain >}}
    istio_tcp_connections_opened_total{
    destination_version="v1",
    instance="172.17.0.18:42422",
    job="istio-mesh",
    canonical_service_name="ratings-v2",
    canonical_service_revision="v2"}
    {{< /text >}}

    {{< text plain >}}
    istio_tcp_connections_closed_total{
    destination_version="v1",
    instance="172.17.0.18:42422",
    job="istio-mesh",
    canonical_service_name="ratings-v2",
    canonical_service_revision="v2"}
    {{< /text >}}

## Understanding TCP telemetry collection

In this task, you used Istio configuration to automatically generate and report metrics for all traffic to a TCP service within the mesh. TCP metrics for all active connections are recorded every `15s` by default, and this timer is configurable via `tcpReportingDuration`. Metrics for a connection are also recorded at the end of the connection.

### TCP attributes

Several TCP-specific attributes enable TCP policy and control within Istio. These attributes are generated by Envoy proxies and obtained from Istio using Envoy's node metadata. Envoy forwards node metadata to peer Envoys using ALPN-based tunneling and a prefix-based protocol. We define a new protocol, `istio-peer-exchange`, that is advertised and prioritized by the client and the server sidecars in the mesh. ALPN negotiation resolves the protocol to `istio-peer-exchange` for connections between Istio-enabled proxies, but not between an Istio-enabled proxy and any other proxy. This protocol extends TCP as follows:

1. The TCP client, as a first sequence of bytes, sends a magic byte string and a length prefixed payload.

Source: https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/observability/metrics/tcp-metrics/index.md
1. The TCP server, as a first sequence of bytes, sends a magic byte sequence and a length prefixed payload. These payloads are protobuf encoded serialized metadata.
1. Client and server can write simultaneously and out of order. The extension filter in Envoy then does the further processing in downstream and upstream until either the magic byte sequence is not matched or the entire payload is read.

{{< image link="./alpn-based-tunneling-protocol.svg" alt="Attribute Generation Flow for TCP Services in an Istio Mesh." caption="TCP Attribute Flow" >}}

## Cleanup

* Remove the `port-forward` process:

    {{< text bash >}}
    $ killall istioctl
    {{< /text >}}

* If you are not planning to explore any follow-on tasks, refer to the [Bookinfo cleanup](/docs/examples/bookinfo/#cleanup) instructions to shut down the application.
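The TCP connection counters shown in this task are monotonic totals, so they are usually graphed as rates. A sketch of a Prometheus query for the rate at which connections to `ratings-v2` are opened, using the labels from the sample output above:

{{< text plain >}}
rate(istio_tcp_connections_opened_total{canonical_service_name="ratings-v2"}[5m])
{{< /text >}}

Comparing this against `rate(istio_tcp_connections_closed_total{...}[5m])` gives a rough view of connection churn versus connection leakage for the service.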
It's useful to visualize telemetry based on the type of requests and responses handled by services in your mesh. For example, a bookseller tracks the number of times book reviews are requested. A book review request has this structure:

{{< text plain >}}
GET /reviews/{review_id}
{{< /text >}}

Counting the number of review requests must account for the unbounded element `review_id`. `GET /reviews/1` followed by `GET /reviews/2` should count as two requests to get reviews. Istio lets you create classification rules using the AttributeGen plugin that groups requests into a fixed number of logical operations. For example, you can create an operation named `GetReviews`, which is a common way to identify operations using the [`Open API Spec operationId`](https://swagger.io/docs/specification/paths-and-operations/). This information is injected into request processing as the `istio_operationId` attribute with a value equal to `GetReviews`. You can use the attribute as a dimension in Istio standard metrics. Similarly, you can track metrics based on other operations like `ListReviews` and `CreateReviews`.

## Classify metrics by request

You can classify requests based on their type, for example `ListReview`, `GetReview`, `CreateReview`.

1. Create a file, for example `attribute_gen_service.yaml`, and save it with the following contents. This adds the `istio.attributegen` plugin. It also creates an attribute, `istio_operationId`, and populates it with values for the categories to count as metrics. This configuration is service-specific since request paths are typically service-specific.

    {{< text yaml >}}
    apiVersion: extensions.istio.io/v1alpha1
    kind: WasmPlugin
    metadata:
      name: istio-attributegen-filter
    spec:
      selector:
        matchLabels:
          app: reviews
      url: https://storage.googleapis.com/istio-build/proxy/attributegen-359dcd3a19f109c50e97517fe6b1e2676e870c4d.wasm
      imagePullPolicy: Always
      phase: AUTHN
      pluginConfig:
        attributes:
        - output_attribute: "istio_operationId"
          match:
          - value: "ListReviews"
            condition: "request.url_path == '/reviews' && request.method == 'GET'"
          - value: "GetReview"
            condition: "request.url_path.matches('^/reviews/[[:alnum:]]*$') && request.method == 'GET'"
          - value: "CreateReview"
            condition: "request.url_path == '/reviews/' && request.method == 'POST'"
    ---
    apiVersion: telemetry.istio.io/v1
    kind: Telemetry
    metadata:
      name: custom-tags
    spec:
      metrics:
      - overrides:
        - match:
            metric: REQUEST_COUNT
            mode: CLIENT_AND_SERVER
          tagOverrides:
            request_operation:
              value: filter_state['wasm.istio_operationId']
        providers:
        - name: prometheus
    {{< /text >}}

1. Apply your changes using the following command:

    {{< text bash >}}
    $ kubectl -n istio-system apply -f attribute_gen_service.yaml
    {{< /text >}}

1. After the changes take effect, visit Prometheus and look for the new or changed dimensions, for example `istio_requests_total` in `reviews` pods.

## Classify metrics by response

You can classify responses using a similar process as requests. Do note that the `response_code` dimension already exists by default. The example below will change how it is populated.

1. Create a file, for example `attribute_gen_service.yaml`, and save it with the following contents. This adds the `istio.attributegen` plugin and generates the `istio_responseClass` attribute used by the stats plugin. This example classifies various responses, such as grouping all response codes in the `200` range as a `2xx` dimension.

    {{< text yaml >}}
    apiVersion: extensions.istio.io/v1alpha1
    kind: WasmPlugin
    metadata:
      name: istio-attributegen-filter
    spec:
      selector:
        matchLabels:
          app: productpage
      url: https://storage.googleapis.com/istio-build/proxy/attributegen-359dcd3a19f109c50e97517fe6b1e2676e870c4d.wasm
      imagePullPolicy: Always
      phase: AUTHN
      pluginConfig:
        attributes:
        - output_attribute: istio_responseClass
          match:
          - value: 2xx
            condition: response.code >= 200 && response.code <= 299
          - value: 3xx
            condition: response.code >= 300 && response.code <= 399
          - value: "404"
            condition: response.code == 404
          - value: "429"
            condition: response.code == 429
          - value: "503"
            condition: response.code == 503
          - value: 5xx
            condition: response.code >= 500 && response.code <= 599
          - value: 4xx
            condition: response.code >= 400 && response.code <= 499
    ---
    apiVersion: telemetry.istio.io/v1
    kind: Telemetry
    metadata:
      name: custom-tags
    spec:
      metrics:
      - overrides:
        - match:
            metric: REQUEST_COUNT
            mode: CLIENT_AND_SERVER
          tagOverrides:
            response_code:
              value: filter_state['wasm.istio_responseClass']
        providers:
        - name: prometheus
    {{< /text >}}

1. Apply your changes using the following command:

    {{< text bash >}}
    $ kubectl -n istio-system apply -f attribute_gen_service.yaml
    {{< /text >}}

## Verify the results

1. Generate metrics by sending traffic to your application.
1. Visit Prometheus and look for the new or changed dimensions, for example `2xx`.

Source: https://github.com/istio/istio.io/blob/master//content/en/docs/tasks/observability/metrics/classify-metrics/index.md
Alternatively, use the following command to verify that Istio generates the data for your new dimension:

{{< text bash >}}
$ kubectl exec pod-name -c istio-proxy -- curl -sS 'localhost:15000/stats/prometheus' | grep istio_
{{< /text >}}

In the output, locate the metric (e.g. `istio_requests_total`) and verify the presence of the new or changed dimension.

## Troubleshooting

If classification does not occur as expected, check the following potential causes and resolutions.

Review the Envoy proxy logs for the pod that has the service on which you applied the configuration change. Check that there are no errors reported by the service in the Envoy proxy logs on the pod (`pod-name`) where you configured classification, by using the following command:

{{< text bash >}}
$ kubectl logs pod-name -c istio-proxy | grep -e "Config Error" -e "envoy wasm"
{{< /text >}}

Additionally, ensure that there are no Envoy proxy crashes by looking for signs of restarts in the output of the following command:

{{< text bash >}}
$ kubectl get pods pod-name
{{< /text >}}

## Cleanup

Remove the yaml configuration file:

{{< text bash >}}
$ kubectl -n istio-system delete -f attribute_gen_service.yaml
{{< /text >}}
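Once the classification rules are in effect, the generated dimensions can be used in Prometheus queries like any other label. A sketch using the `request_operation` tag defined by the request-classification configuration earlier:

{{< text plain >}}
sum by (destination_service) (rate(istio_requests_total{request_operation="GetReview"}[5m]))
{{< /text >}}

Because the unbounded `review_id` has been collapsed into a fixed set of operations, this query stays at a constant cardinality no matter how many reviews exist.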
This task demonstrates how to \*\*securely scrape Istio sidecar and gateway metrics\*\* using Prometheus over \*\*Istio mTLS\*\*. By default, Prometheus scrapes metrics from Istio workloads and gateways over HTTP. In this task, you configure Istio and Prometheus so that metrics are scraped securely over encrypted connections. This document focuses specifically on Envoy and Istio-generated telemetry exposed by sidecars and gateways. It does not cover application-level metrics emitted by workloads themselves. For general Prometheus integration with Istio, including application metrics, see the [Prometheus integration](/docs/ops/integrations/prometheus/) documentation. ## Understand default metrics scraping By default, Istio exposes metrics on the `/stats/prometheus` endpoint: \* Workload metrics are served from the sidecar telemetry port (`15020`) or Envoy-only port (`15090`). \* Gateway metrics are served from the gateway pod telemetry port. \* These endpoints are \*\*not protected by mutual TLS\*\*, and scraping directly over HTTPS is discouraged. This task replaces the default scraping with a \*\*secure mTLS-enabled configuration\*\*. Prometheus will use a secure fronting port (`15091`) instead of hitting telemetry ports directly. ## Before you begin \* [Install Istio](/docs/setup) in your cluster using the \*\*default profile\*\*. ## Install Prometheus with secure scraping To enable secure metrics scraping, Prometheus requires an Istio sidecar to authenticate to workloads and gateways over mTLS. 1. Enable sidecar injection for Prometheus namespace {{< text bash >}} $ kubectl create namespace prometheus $ kubectl label namespace monitoring istio-injection=enabled --overwrite {{< /text >}} This ensures that any Prometheus pods created or restarted will automatically have an `istio-proxy` sidecar. {{< tip >}} The Istio sidecar injected into the Prometheus pod is used only to provision an Istio workload certificate for mTLS authentication. 
    Traffic interception is explicitly disabled, and Prometheus continues to operate as a standard Kubernetes workload.

    As an alternative, Istio can be integrated with [cert-manager](/docs/ops/integrations/certmanager/) to provision certificates for Prometheus. In that model, an Istio sidecar is not required.
    {{< /tip >}}

1. Update the Prometheus Deployment pod template

    Istio provides a sample Prometheus installation at `samples/addons/prometheus.yaml`. Modify `samples/addons/prometheus.yaml` to annotate the Prometheus deployment to enable sidecar injection, mount Istio certificates, and configure the proxy:

    {{< text yaml >}}
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: prometheus
      namespace: monitoring
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "true"
            sidecar.istio.io/userVolumeMount: |
              [{"name": "istio-certs", "mountPath": "/etc/istio-certs", "readOnly": true}]
            proxy.istio.io/config: |
              proxyMetadata:
                OUTPUT_CERTS: /etc/istio-certs
                INBOUND_CAPTURE_PORTS: ""
        spec:
          containers:
          - name: prometheus
            image: prom/prometheus:latest
          volumes:
          - name: istio-certs
            secret:
              secretName: istio.default
    {{< /text >}}

    **Notes:**

    * `OUTPUT_CERTS` points to where the Istio sidecar writes certificates for Prometheus to use.
    * `INBOUND_CAPTURE_PORTS: ""` prevents the sidecar from intercepting Prometheus traffic.
    * `userVolumeMount` mounts the certificates inside Prometheus.

1.
Modify the Prometheus scrape job configuration in `samples/addons/prometheus.yaml` to add an additional job for scraping secure metrics:

    {{< text yaml >}}
    - job_name: 'istio-secure-merged-metrics'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_istio_io_secure_port]
        action: keep
        regex: .+
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels:
        - __meta_kubernetes_pod_ip
        - __meta_kubernetes_pod_annotation_prometheus_istio_io_secure_port
        action: replace
        target_label: __address__
        regex: (.+);(.+)
        replacement: $1:$2
      scheme: https
      tls_config:
        ca_file: /etc/istio-certs/root-cert.pem
        cert_file: /etc/istio-certs/cert-chain.pem
        key_file: /etc/istio-certs/key.pem
        insecure_skip_verify: true
    {{< /text >}}

1. Verify the Prometheus pod has an Istio sidecar

    {{< text bash >}}
    $ kubectl get pod -n monitoring -o jsonpath='{.spec.containers[*].name}'
    {{< /text >}}

    You should see an `istio-proxy` container.

## Secure Metrics for Sidecars

This task uses `httpbin` as the example workload to generate traffic and metrics.

1. Enable sidecar injection
in the default namespace and deploy `httpbin`:

    {{< text bash >}}
    $ kubectl label namespace default istio-injection=enabled --overwrite
    $ kubectl apply -f @samples/httpbin/httpbin.yaml@
    {{< /text >}}

1. Annotate the `httpbin` pod for secure Prometheus scraping

    Ensure Prometheus scrapes metrics securely via the mTLS port (`15091`):

    {{< text bash >}}
    $ kubectl annotate pod -n default \
        -l app=httpbin \
        prometheus.io/scrape="true" \
        prometheus.io/path="/stats/prometheus" \
        prometheus.istio.io/secure-port="15091" \
        --overwrite
    {{< /text >}}

    These annotations allow Prometheus to discover the `httpbin` pod and scrape metrics over the secure listener.

1. Create a secure listener on port 15091

    Workload metrics can be exposed securely using a sidecar listener on port `15091`, which forwards requests from the secure listener to the sidecar telemetry port `15020`. For Envoy-only metrics, use port `15090`.

    {{< text bash >}}
    $ cat <
    {{< /text >}}

## Secure Metrics for Gateways

Istio gateways expose metrics that Prometheus can scrape. By default, these metrics are on port `15020` for merged telemetry and port `15090` for Envoy-only telemetry, and they are not mTLS-protected. The following steps configure secure scraping over port `15091` using Istio mTLS.

1. Create a `Gateway` with a secure listener on port `15091`

    We create a `Gateway` to expose both standard HTTP traffic and a dedicated secure HTTPS port for metrics. The HTTPS server uses `ISTIO_MUTUAL` TLS mode so that only clients with Istio-issued certificates (like the Prometheus sidecar) can scrape metrics.

    {{< text bash >}}
    $ cat <
    {{< /text >}}

1. Create a `ServiceEntry` for the `Gateway` telemetry port (15020 or 15090)

    Prometheus cannot directly access the gateway's internal ports unless they are exposed in the mesh. A `ServiceEntry` allows Prometheus to route requests inside the mesh to these ports. You can choose 15020 for merged telemetry or 15090 for Envoy-only telemetry.

    {{< text bash >}}
    $ cat <
    {{< /text >}}

1.
Create a `VirtualService` to route metrics

    The `VirtualService` maps requests from the secure listener (15091) to the `ServiceEntry` pointing to the telemetry port (15020 or 15090). This ensures that metrics requests sent to `https://:15091/stats/prometheus` are properly routed inside the mesh.

    {{< text bash >}}
    $ cat <
    {{< /text >}}

1. Annotate the `Gateway` pod

    {{< text bash >}}
    $ kubectl annotate pod -n istio-system prometheus.istio.io/secure-port=15091 --overwrite
    {{< /text >}}

## Verification

### Verify secure metrics scraping with Prometheus

After completing the configuration, verify that Prometheus is successfully scraping metrics from Istio workloads and gateways over **mutual TLS**.

1. Open the Prometheus dashboard

    {{< text bash >}}
    $ istioctl dashboard prometheus
    {{< /text >}}

    This command opens the Prometheus dashboard in your default browser.

1. Verify scrape targets

    1. In the Prometheus UI, navigate to **Status → Targets**.
    1. Locate the job named `istio-secure-merged-metrics`, which is the job name used while configuring the new Prometheus scrape job.

    Verify that the targets for the `httpbin` workload and the Istio ingress gateway are listed with endpoints similar to `https://:15091/stats/prometheus`, and that each target reports a status of **UP**. This confirms that Prometheus is scraping metrics using **HTTPS over Istio mTLS** via the secure fronting port (`15091`), rather than directly accessing the telemetry ports (`15020` or `15090`).

Source: https://github.com/istio/istio.io/blob/master/content/en/docs/tasks/observability/metrics/secure-metrics/index.md
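The per-pod configuration that drives discovery in the secure scrape job can also be set declaratively in a workload's pod template rather than with `kubectl annotate`. This is a sketch based only on the three annotations used above; the Deployment name is illustrative:

```yaml
# Sketch: pod-template annotations that make a workload discoverable by the
# 'istio-secure-merged-metrics' job. The Deployment name is illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin                                      # illustrative name
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"                 # opt the pod in to scraping
        prometheus.io/path: "/stats/prometheus"      # becomes __metrics_path__
        prometheus.istio.io/secure-port: "15091"     # selects the secure listener
```

Setting the annotations in the pod template ensures they survive pod restarts, which `kubectl annotate` on a live pod does not.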
This task shows you how to visualize different aspects of your Istio mesh. As part of this task, you install the [Kiali](https://www.kiali.io) addon and use the web-based graphical user interface to view service graphs of the mesh and your Istio configuration objects.

{{< idea >}}
This task does not cover all of the features provided by Kiali. To learn about the full set of features it supports, see the [Kiali website](https://kiali.io/docs/features/).
{{< /idea >}}

This task uses the [Bookinfo](/docs/examples/bookinfo/) sample application as the example throughout. This task assumes the Bookinfo application is installed in the `bookinfo` namespace.

## Before you begin

Follow the [Kiali installation](/docs/ops/integrations/kiali/#installation) documentation to deploy Kiali into your cluster.

## Generating a graph

1. To verify the service is running in your cluster, run the following command:

    {{< text bash >}}
    $ kubectl -n istio-system get svc kiali
    {{< /text >}}

1. To determine the Bookinfo URL, follow the instructions to determine the [Bookinfo ingress `GATEWAY_URL`](/docs/examples/bookinfo/#determine-the-ingress-ip-and-port).

1. To send traffic to the mesh, you have three options:

    * Visit `http://$GATEWAY_URL/productpage` in your web browser
    * Use the following command multiple times:

        {{< text bash >}}
        $ curl http://$GATEWAY_URL/productpage
        {{< /text >}}

    * If you installed the `watch` command in your system, send requests continually with:

        {{< text bash >}}
        $ watch -n 1 curl -o /dev/null -s -w %{http_code} $GATEWAY_URL/productpage
        {{< /text >}}

1. To open the Kiali UI, execute the following command in your Kubernetes environment:

    {{< text bash >}}
    $ istioctl dashboard kiali
    {{< /text >}}

1. View the overview of your mesh in the **Overview** page that appears immediately after you log in. The **Overview** page displays all the namespaces that have services in your mesh.
    The following screenshot shows a similar page:

    {{< image width="75%" link="./kiali-overview.png" caption="Example Overview" >}}

1. To view a namespace graph, select the **Graph** option in the kebab menu of the Bookinfo overview card. The kebab menu is at the top right of the card and looks like three vertical dots. Click it to see the available options. The page looks similar to:

    {{< image width="75%" link="./kiali-graph.png" caption="Example Graph" >}}

1. The graph represents traffic flowing through the service mesh for a period of time. It is generated using Istio telemetry.

1. To view a summary of metrics, select any node or edge in the graph to display its metric details in the summary details panel on the right.

1. To view your service mesh using different graph types, select a graph type from the **Graph Type** drop down menu. There are several graph types to choose from: **App**, **Versioned App**, **Workload**, **Service**.

    * The **App** graph type aggregates all versions of an app into a single graph node. The following example shows a single **reviews** node representing the three versions of the reviews app. Note that the `Show Service Nodes` Display option has been disabled.

        {{< image width="75%" link="./kiali-app.png" caption="Example App Graph" >}}

    * The **Versioned App** graph type shows a node for each version of an app, but all versions of a particular app are grouped together. The following example shows the **reviews** group box that contains the three nodes that represent the three versions of the reviews app.

        {{< image width="75%" link="./kiali-versionedapp.png" caption="Example Versioned App Graph" >}}

    * The **Workload** graph type shows a node for each workload in your service mesh. This graph type does not require you to use the `app` and `version` labels, so if you opt to not use those labels on your components, this may be your graph type of choice.
        {{< image width="70%" link="./kiali-workload.png" caption="Example Workload Graph" >}}

    * The **Service** graph type shows a high-level aggregation of service traffic in your mesh.

        {{< image width="70%" link="./kiali-service-graph.png" caption="Example Service Graph" >}}

## Examining Istio configuration

1. The left menu options lead to list views for **Applications**, **Workloads**, **Services** and **Istio Config**. The following screenshot shows **Services** information for the Bookinfo namespace:

    {{< image width="80%" link="./kiali-services.png" caption="Example Details" >}}

## Traffic Shifting

You can use the Kiali traffic shifting wizard to define the specific percentage of request traffic to route to two or more workloads.

1. View the **Versioned app graph** of the `bookinfo` graph.

    * Make sure you have enabled the **Traffic Distribution** Edge Label **Display** option to see the percentage of traffic routed to each workload.
    * Make sure you have enabled the Show **Service Nodes** **Display** option to view the service nodes in the graph.

    {{< image width="80%" link="./kiali-wiz0-graph-options.png" caption="Bookinfo Graph Options" >}}

1. Focus on the `ratings` service within the `bookinfo` graph by clicking on the `ratings` service (triangle) node. Notice the `ratings` service traffic is evenly distributed to the two `ratings` workloads `v1` and `v2` (50% of requests are routed to each workload).

    {{< image width="80%" link="./kiali-wiz1-graph-ratings-percent.png" caption="Graph Showing Percentage of Traffic" >}}

1. Click the **ratings** link found in the side panel to go to the detail view for the `ratings` service.
    This could also be done by right-clicking on the `ratings` service node and selecting `Details` from the context menu.

1. From the **Actions** drop down menu, select **Traffic Shifting** to access the traffic shifting wizard.

    {{< image width="80%" link="./kiali-wiz2-ratings-service-action-menu.png" caption="Service Actions Menu" >}}

1. Drag the sliders to specify the percentage of traffic to route to each workload. For `ratings-v1`, set it to 10%; for `ratings-v2`, set it to 90%.

    {{< image width="80%" link="./kiali-wiz3-traffic-shifting-wizard.png" caption="Weighted Routing Wizard" >}}

1. Click the **Preview** button to view the YAML that will be generated by the wizard.

    {{< image width="80%" link="./kiali-wiz3b-traffic-shifting-wizard-preview.png" caption="Routing Wizard Preview" >}}

1. Click the **Create** button and confirm to apply the new traffic settings.

1. Click **Graph** in the left hand navigation bar to return to the `bookinfo` graph. Notice that the `ratings` service node is now badged with the `virtual service` icon.

1. Send requests to the `bookinfo` application. For example, to send one request per second, you can execute this command if you have `watch` installed on your system:

    {{< text bash >}}
    $ watch -n 1 curl -o /dev/null -s -w %{http_code} $GATEWAY_URL/productpage
    {{< /text >}}

1. After a few minutes you will notice that the traffic percentage reflects the new traffic route, confirming that your new traffic route is successfully routing 90% of all traffic requests to `ratings-v2`.

    {{< image width="80%" link="./kiali-wiz4-traffic-shifting-90-10.png" caption="90% Ratings Traffic Routed to ratings-v2" >}}

## Validating Istio configuration

Kiali can validate your Istio resources to ensure they follow proper conventions and semantics. Any problems detected in the configuration of your Istio resources can be flagged as errors or warnings depending on the severity of the incorrect configuration.
See the [Kiali validations page](https://kiali.io/docs/features/validations/) for the list of all validation checks Kiali performs.

{{< idea >}}
Istio provides `istioctl analyze`, which provides analysis in a way that can be used in a CI pipeline. The two approaches can be complementary.
{{< /idea >}}

Force an invalid configuration of a service port name
to see how Kiali reports a validation error.

1. Change the port name of the `details` service from `http` to `foo`:

    {{< text bash >}}
    $ kubectl patch service details -n bookinfo --type json -p '[{"op":"replace","path":"/spec/ports/0/name", "value":"foo"}]'
    {{< /text >}}

1. Navigate to the **Services** list by clicking **Services** on the left hand navigation bar.

1. Select `bookinfo` from the **Namespace** drop down menu if it is not already selected.

1. Notice the error icon displayed in the **Configuration** column of the `details` row.

    {{< image width="80%" link="./kiali-validate1-list.png" caption="Services List Showing Invalid Configuration" >}}

1. Click the **details** link in the **Name** column to navigate to the service details view.

1. Hover over the error icon to display a tool tip describing the error.

    {{< image width="80%" link="./kiali-validate2-errormsg.png" caption="Service Details Describing the Invalid Configuration" >}}

1. Change the port name back to `http` to correct the configuration and return `bookinfo` back to its normal state.

    {{< text bash >}}
    $ kubectl patch service details -n bookinfo --type json -p '[{"op":"replace","path":"/spec/ports/0/name", "value":"http"}]'
    {{< /text >}}

    {{< image width="80%" link="./kiali-validate3-ok.png" caption="Service Details Showing Valid Configuration" >}}

## Viewing and editing Istio configuration YAML

Kiali provides a YAML editor for viewing and editing Istio configuration resources. The YAML editor also provides validation messages when it detects incorrect configurations.

1.
Introduce an error in the `bookinfo` VirtualService:

    {{< text bash >}}
    $ kubectl patch vs bookinfo -n bookinfo --type json -p '[{"op":"replace","path":"/spec/gateways/0", "value":"bookinfo-gateway-invalid"}]'
    {{< /text >}}

1. Click `Istio Config` on the left hand navigation bar to navigate to the Istio configuration list.

1. Select `bookinfo` from the **Namespace** drop down menu if it is not already selected.

1. Notice the error icon that alerts you to a configuration problem.

    {{< image width="80%" link="./kiali-istioconfig0-errormsgs.png" caption="Istio Config List Incorrect Configuration" >}}

1. Click the error icon in the **Configuration** column of the `bookinfo` row to navigate to the `bookinfo` virtual service view.

1. The **YAML** tab is preselected. Notice the color highlights and icons on the rows that have validation check notifications associated with them.

    {{< image width="80%" link="./kiali-istioconfig3-details-yaml1.png" caption="YAML Editor Showing Validation Notifications" >}}

1. Hover over the red icon to view the tool tip message that informs you of the validation check that triggered the error. For more details on the cause of the error and how to resolve it, look up the validation error message on the [Kiali Validations page](https://kiali.io/docs/features/validations/).

    {{< image width="80%" link="./kiali-istioconfig3-details-yaml3.png" caption="YAML Editor Showing Error Tool Tip" >}}

1. Reset the virtual service `bookinfo` back to its original state:

    {{< text bash >}}
    $ kubectl patch vs bookinfo -n bookinfo --type json -p '[{"op":"replace","path":"/spec/gateways/0", "value":"bookinfo-gateway"}]'
    {{< /text >}}

## Additional Features

Kiali has many more features than reviewed in this task, such as an [integration with Jaeger tracing](https://kiali.io/docs/features/tracing/). For more details on these additional features, see the [Kiali documentation](https://kiali.io/docs/features/).
For a deeper exploration of Kiali, it is recommended to run through the [Kiali Tutorial](https://kiali.io/docs/tutorials/).

## Cleanup

If you are not planning any follow-up tasks, remove the Bookinfo sample application and Kiali from your cluster.

1. To remove the Bookinfo application, refer to the [Bookinfo cleanup](/docs/examples/bookinfo/#cleanup) instructions.

1. To remove Kiali from a Kubernetes environment:

    {{< text bash >}}
    $ kubectl delete -f {{< github_file >}}/samples/addons/kiali.yaml
    {{< /text >}}

Source: https://github.com/istio/istio.io/blob/master/content/en/docs/tasks/observability/kiali/index.md
The simplest kind of Istio logging is [Envoy's access logging](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage). Envoy proxies print access information to their standard output. The standard output of Envoy's containers can then be printed by the `kubectl logs` command.

{{< boilerplate before-you-begin-egress >}}

{{< boilerplate start-httpbin-service >}}

## Enable Envoy's access logging

Istio offers a few ways to enable access logs. Use of the Telemetry API is recommended.

### Using Telemetry API

The Telemetry API can be used to enable or disable access logs:

{{< text yaml >}}
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
      - name: envoy
{{< /text >}}

The above example uses the default `envoy` access log provider, and we do not configure anything other than default settings. Similar configuration can also be applied on an individual namespace, or to an individual workload, to control logging at a fine-grained level. For more information about using the Telemetry API, see the [Telemetry API overview](/docs/tasks/observability/telemetry/).

### Using Mesh Config

If you used an `IstioOperator` configuration to install Istio, add the following field to your configuration:

{{< text yaml >}}
spec:
  meshConfig:
    accessLogFile: /dev/stdout
{{< /text >}}

Otherwise, add the equivalent setting to your original `istioctl install` command, for example:

{{< text syntax=bash snip_id=none >}}
$ istioctl install --set meshConfig.accessLogFile=/dev/stdout
{{< /text >}}

You can also choose between JSON and text by setting `accessLogEncoding` to `JSON` or `TEXT`. You may also want to customize the [format](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#format-rules) of the access log by editing `accessLogFormat`.
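The three mesh-config settings can be combined in a single overlay. This is a sketch: the JSON keys in the custom format are illustrative choices, not a fixed Istio default, and the operators used are taken from the default format documented below.

```yaml
# Sketch: JSON-encoded access logs with a custom format via mesh config.
# The format keys (start_time, method, ...) are illustrative.
spec:
  meshConfig:
    accessLogFile: /dev/stdout
    accessLogEncoding: JSON
    accessLogFormat: |
      {
        "start_time": "%START_TIME%",
        "method": "%REQ(:METHOD)%",
        "path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%",
        "response_code": "%RESPONSE_CODE%",
        "response_flags": "%RESPONSE_FLAGS%"
      }
```

With `accessLogEncoding: JSON`, each log entry is emitted as a single JSON object, which is easier for log collectors to parse than the default space-separated text.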
Refer to [global mesh options](/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig) for more information on all three of these settings:

* `meshConfig.accessLogFile`
* `meshConfig.accessLogEncoding`
* `meshConfig.accessLogFormat`

## Default access log format

Istio will use the following default access log format if `accessLogFormat` is not specified:

{{< text plain >}}
[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% %RESPONSE_CODE_DETAILS% %CONNECTION_TERMINATION_DETAILS% \"%UPSTREAM_TRANSPORT_FAILURE_REASON%\" %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\" %UPSTREAM_CLUSTER_RAW% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SERVER_NAME% %ROUTE_NAME%\n
{{< /text >}}

The following table shows an example using the default access log format for a request sent from `curl` to `httpbin`:

| Log operator | access log in curl | access log in httpbin |
|---|---|---|
| `[%START_TIME%]` | `[2020-11-25T21:26:18.409Z]` | `[2020-11-25T21:26:18.409Z]` |
| `\"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\"` | `"GET /status/418 HTTP/1.1"` | `"GET /status/418 HTTP/1.1"` |
| `%RESPONSE_CODE%` | `418` | `418` |
| `%RESPONSE_FLAGS%` | `-` | `-` |
| `%RESPONSE_CODE_DETAILS%` | `via_upstream` | `via_upstream` |
| `%CONNECTION_TERMINATION_DETAILS%` | `-` | `-` |
| `\"%UPSTREAM_TRANSPORT_FAILURE_REASON%\"` | `"-"` | `"-"` |
| `%BYTES_RECEIVED%` | `0` | `0` |
| `%BYTES_SENT%` | `135` | `135` |
| `%DURATION%` | `4` | `3` |
| `%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%` | `4` | `1` |
| `\"%REQ(X-FORWARDED-FOR)%\"` | `"-"` | `"-"` |
| `\"%REQ(USER-AGENT)%\"` | `"curl/7.73.0-DEV"` | `"curl/7.73.0-DEV"` |
| `\"%REQ(X-REQUEST-ID)%\"` | `"84961386-6d84-929d-98bd-c5aee93b5c88"` | `"84961386-6d84-929d-98bd-c5aee93b5c88"` |
| `\"%REQ(:AUTHORITY)%\"` | `"httpbin:8000"` | `"httpbin:8000"` |
| `\"%UPSTREAM_HOST%\"` | `"10.44.1.27:80"` | `"127.0.0.1:80"` |
| `%UPSTREAM_CLUSTER_RAW%` | `outbound\|8000\|\|httpbin.foo.svc.cluster.local` | `inbound\|8000\|\|` |
| `%UPSTREAM_LOCAL_ADDRESS%` | `10.44.1.23:37652` | `127.0.0.1:41854` |
| `%DOWNSTREAM_LOCAL_ADDRESS%` | `10.0.45.184:8000` | `10.44.1.27:80` |
| `%DOWNSTREAM_REMOTE_ADDRESS%` | `10.44.1.23:46520` | `10.44.1.23:37652` |
| `%REQUESTED_SERVER_NAME%` | `-` | `outbound_.8000_._.httpbin.foo.svc.cluster.local` |
| `%ROUTE_NAME%` | `default` | `default` |

## Test the access log

1. Send a request from `curl` to `httpbin`:

    {{< text bash >}}
    $ kubectl exec "$SOURCE_POD" -c curl -- curl -sS -v httpbin:8000/status/418
    ...
    < HTTP/1.1 418 Unknown
    ...
    < server: envoy
    ...
    I'm a teapot!
    ...
    {{< /text >}}

1. Check `curl`'s log:

    {{< text bash >}}
    $ kubectl logs -l app=curl -c istio-proxy
    [2020-11-25T21:26:18.409Z] "GET /status/418 HTTP/1.1" 418 - via_upstream - "-" 0 135 4 4 "-" "curl/7.73.0-DEV" "84961386-6d84-929d-98bd-c5aee93b5c88" "httpbin:8000" "10.44.1.27:80" outbound|8000||httpbin.foo.svc.cluster.local 10.44.1.23:37652 10.0.45.184:8000 10.44.1.23:46520 - default
    {{< /text >}}

1. Check `httpbin`'s log:
    {{< text bash >}}
    $ kubectl logs -l app=httpbin -c istio-proxy
    [2020-11-25T21:26:18.409Z] "GET /status/418 HTTP/1.1" 418 - via_upstream - "-" 0 135 3 1 "-" "curl/7.73.0-DEV" "84961386-6d84-929d-98bd-c5aee93b5c88" "httpbin:8000" "127.0.0.1:80" inbound|8000|| 127.0.0.1:41854 10.44.1.27:80 10.44.1.23:37652 outbound_.8000_._.httpbin.foo.svc.cluster.local default
    {{< /text >}}

Note that the messages corresponding to the request appear in the logs of the Istio proxies of both the source and the destination, `curl` and `httpbin`, respectively. You can see in the log the HTTP verb (`GET`), the HTTP path (`/status/418`), the response code (`418`) and other [request-related information](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#format-rules).

## Cleanup

Shut down the [curl]({{< github_tree >}}/samples/curl) and [httpbin]({{< github_tree >}}/samples/httpbin) services:

{{< text bash >}}
$ kubectl delete -f @samples/curl/curl.yaml@
$ kubectl delete -f @samples/httpbin/httpbin.yaml@
{{< /text >}}

### Disable Envoy's access logging

Remove, or set to `""`, the `meshConfig.accessLogFile` setting in your Istio install configuration.

{{< tip >}}
In the example below, replace `default` with the name of the profile you used when you installed Istio.
{{< /tip >}}

{{< text bash >}}
$ istioctl install --set profile=default
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Installation complete
{{< /text >}}

Source: https://github.com/istio/istio.io/blob/master/content/en/docs/tasks/observability/logs/access-log/index.md
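Because the default text format is whitespace-separated, individual fields can be pulled out of a log line with standard shell tools. A rough sketch, assuming the default format shown above: naive splitting works here because the only quoted field containing spaces is the request line, which `awk` splits into fields `$2`-`$4`, leaving `%RESPONSE_CODE%` at `$5` and `%RESPONSE_FLAGS%` at `$6`.

```shell
# Pull the response code and response flags out of a default-format access
# log line. The sample line is taken from the curl proxy log above.
line='[2020-11-25T21:26:18.409Z] "GET /status/418 HTTP/1.1" 418 - via_upstream - "-" 0 135 4 4 "-" "curl/7.73.0-DEV" "84961386-6d84-929d-98bd-c5aee93b5c88" "httpbin:8000" "10.44.1.27:80" outbound|8000||httpbin.foo.svc.cluster.local 10.44.1.23:37652 10.0.45.184:8000 10.44.1.23:46520 - default'

# $5 is %RESPONSE_CODE%, $6 is %RESPONSE_FLAGS% when split on whitespace.
code=$(printf '%s\n' "$line" | awk '{print $5}')
flags=$(printf '%s\n' "$line" | awk '{print $6}')
echo "response_code=$code response_flags=$flags"
```

The same extraction could be piped onto `kubectl logs -l app=curl -c istio-proxy` output in a live cluster; a JSON-encoded log (`accessLogEncoding: JSON`) would be more robust to parse.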
The Telemetry API has been a first-class API in Istio for quite some time now. Previously, users had to configure telemetry in the `MeshConfig` section of the Istio configuration.

{{< boilerplate before-you-begin-egress >}}

{{< boilerplate start-httpbin-service >}}

## Installation

In this example, we will send logs to [Grafana Loki](https://grafana.com/oss/loki/), so make sure it is installed:

{{< text syntax=bash snip_id=install_loki >}}
$ istioctl install -f @samples/open-telemetry/loki/iop.yaml@ --skip-confirmation
$ kubectl apply -f @samples/addons/loki.yaml@ -n istio-system
$ kubectl apply -f @samples/open-telemetry/loki/otel.yaml@ -n istio-system
{{< /text >}}

## Get started with Telemetry API

1. Enable access logging

    {{< text bash >}}
    $ cat <
    {{< /text >}}

    The above example uses the built-in `envoy` access log provider, and we do not configure anything other than default settings.

1. Disable access log for a specific workload

    You can disable the access log for the `curl` service with the following configuration:

    {{< text bash >}}
    $ cat <
    {{< /text >}}

1. Filter access log with workload mode

    You can disable the inbound access log for the `httpbin` service with the following configuration:

    {{< text bash >}}
    $ cat <
    {{< /text >}}

1. Filter access log with CEL expression

    The following configuration displays the access log only when the response code is greater than or equal to 500:

    {{< text bash >}}
    $ cat <= 500 EOF
    {{< /text >}}

    {{< tip >}}
    There's no `response.code` attribute when connections fail. In that case, you should use the CEL expression `!has(response.code) || response.code >= 500`.
    {{< /tip >}}

1.
Set default filter access log with CEL expression

    The following configuration displays access logs only when the response code is greater than or equal to 400, or when the request went to the BlackHoleCluster or the PassthroughCluster.

    Note: `xds.cluster_name` is only available with Istio release 1.16.2 and higher.

    {{< text bash >}}
    $ cat <= 400 || xds.cluster_name == 'BlackHoleCluster' || xds.cluster_name == 'PassthroughCluster' " EOF
    {{< /text >}}

1. Filter health check access logs with CEL expression

    The following configuration displays access logs only when the logs are not generated by the Amazon Route 53 Health Check Service.

    Note: `request.useragent` is specific to HTTP traffic, therefore to avoid breaking TCP traffic, we need to check for the existence of the field. For more information, see [CEL Type Checking](https://kubernetes.io/docs/reference/using-api/cel/#type-checking).

    {{< text bash >}}
    $ cat <
    {{< /text >}}

    For more information, see [Use expressions for values](/docs/tasks/observability/metrics/customize-metrics/#use-expressions-for-values).

## Work with OpenTelemetry provider

Istio supports sending access logs with the [OpenTelemetry](https://opentelemetry.io/) protocol, as explained [here](/docs/tasks/observability/logs/otel-provider).

## Cleanup

1. Remove all Telemetry API resources:

    {{< text bash >}}
    $ kubectl delete telemetry --all -A
    {{< /text >}}

1. Remove `loki`:

    {{< text bash >}}
    $ kubectl delete -f @samples/addons/loki.yaml@ -n istio-system
    $ kubectl delete -f @samples/open-telemetry/loki/otel.yaml@ -n istio-system
    {{< /text >}}

1. Uninstall Istio from the cluster:

    {{< text bash >}}
    $ istioctl uninstall --purge --skip-confirmation
    {{< /text >}}

Source: https://github.com/istio/istio.io/blob/master/content/en/docs/tasks/observability/logs/telemetry-api/index.md
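The filtering steps above all follow the same shape: a `Telemetry` resource with an `accessLogging` entry naming a provider and an optional CEL `filter`. A minimal sketch, in which the resource name and filter expression are illustrative:

```yaml
# Sketch: mesh-wide access logging with a CEL filter.
# The name and expression are illustrative.
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: filter-logs            # illustrative name
  namespace: istio-system      # istio-system scopes it mesh-wide
spec:
  accessLogging:
    - providers:
        - name: envoy
      filter:
        expression: response.code >= 500   # log only server errors
```

Applying the same resource in a workload namespace (or adding a `selector`) narrows its scope, which is how the per-workload examples above override the mesh-wide default.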