| Change | Description | Date |
| --- | --- | --- |
| New Region | CodePipeline is now available in Asia Pacific \(Tokyo\)\. The [Quotas in AWS CodePipeline](limits.md) topic and [Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#codepipeline_region) topic have been updated\. | December 14, 2016 |
| New Region | CodePipeline is now available in South America \(São Paulo\)\. The [Quotas in AWS CodePipeline](limits.md) topic and [Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#codepipeline_region) topic have been updated\. | December 7, 2016 |
| Updated topics | You can now add AWS CodeBuild as a build action to any stage of a pipeline\. AWS CodeBuild is a fully managed build service in the cloud that compiles your source code, runs unit tests, and produces artifacts that are ready to deploy\. You can use an existing build project or create one in the CodePipeline console\. The output of the build project can then be deployed as part of a pipeline\. The topics [Product and service integrations with CodePipeline](integrations.md), [Create a pipeline in CodePipeline](pipelines-create.md), Authentication and Access Control, and [CodePipeline pipeline structure reference](reference-pipeline-structure.md) have been updated to reflect this support for AWS CodeBuild\. You can now use CodePipeline with AWS CloudFormation and the AWS Serverless Application Model to continuously deliver your serverless applications\. The topic [Product and service integrations with CodePipeline](integrations.md) has been updated to reflect this support\. [Product and service integrations with CodePipeline](integrations.md) has been reorganized to group AWS and partner offerings by action type\. | December 1, 2016 |
| New Region | CodePipeline is now available in Europe \(Frankfurt\)\. The [Quotas in AWS CodePipeline](limits.md) topic and [Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#codepipeline_region) topic have been updated\. | November 16, 2016 |
| Updated topics | AWS CloudFormation can now be selected as a deployment provider in pipelines, enabling you to take action on AWS CloudFormation stacks and change sets as part of a pipeline execution\. The topics [Product and service integrations with CodePipeline](integrations.md), [Create a pipeline in CodePipeline](pipelines-create.md), Authentication and Access Control, and [CodePipeline pipeline structure reference](reference-pipeline-structure.md) have been updated to reflect this support for AWS CloudFormation\. | November 3, 2016 |
| New Region | CodePipeline is now available in the Asia Pacific \(Sydney\) Region\. The [Quotas in AWS CodePipeline](limits.md) topic and [Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#codepipeline_region) topic have been updated\. | October 26, 2016 |
| New Region | CodePipeline is now available in Asia Pacific \(Singapore\)\. The [Quotas in AWS CodePipeline](limits.md) topic and [Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#codepipeline_region) topic have been updated\. | October 20, 2016 |
| New Region | CodePipeline is now available in the US East \(Ohio\) Region\. The [Quotas in AWS CodePipeline](limits.md) topic and [Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#codepipeline_region) topic have been updated\. | October 17, 2016 |
| Updated topic | [Create a pipeline in CodePipeline](pipelines-create.md) has been updated to reflect support for displaying version identifiers of custom actions in the **Source provider** and **Build provider** lists\. | September 22, 2016 |
| Updated topic | The [Manage approval actions in CodePipeline](approvals.md) section has been updated to reflect an enhancement that lets approval action reviewers open the **Approve or reject the revision** form directly from an email notification\. | September 14, 2016 |
| New and updated topics | A new topic, [View pipeline execution source revisions \(console\)](pipelines-view-console.md#pipelines-source-revisions-console), describes how to view details about code changes currently flowing through your software release pipeline\. Quick access to this information can be useful when reviewing manual approval actions or troubleshooting failures in your pipeline\. A new section, [Monitoring pipelines with CodePipeline](monitoring.md), provides a central location for all topics related to monitoring the status and progress of your pipelines\. | September 08, 2016 |
| New and updated topics | A new section, [Manage approval actions in CodePipeline](approvals.md), provides information about configuring and using manual approval actions in pipelines\. Topics in this section provide conceptual information about the approval process; instructions for setting up required IAM permissions, creating approval actions, and approving or rejecting approval actions; and samples of the JSON data generated when an approval action is reached in a pipeline\. | July 06, 2016 |
| New Region | CodePipeline is now available in the Europe \(Ireland\) Region\. The [Quotas in AWS CodePipeline](limits.md) topic and [Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#codepipeline_region) topic have been updated\. | June 23, 2016 |
| New topic | A new topic, [Retry a failed action in CodePipeline](actions-retry.md), has been added to describe how to retry a failed action or a group of parallel failed actions in a stage\. | June 22, 2016 |
| Updated topics | A number of topics, including [Create a pipeline in CodePipeline](pipelines-create.md), Authentication and Access Control, [CodePipeline pipeline structure reference](reference-pipeline-structure.md), and [Product and service integrations with CodePipeline](integrations.md), have been updated to reflect support for configuring a pipeline to deploy code in conjunction with custom Chef cookbooks and applications created in AWS OpsWorks\. CodePipeline support for AWS OpsWorks is currently available in the US East \(N\. Virginia\) Region \(us\-east\-1\) only\. | June 2, 2016 |
| New and updated topics | A new topic, [Tutorial: Create a simple pipeline \(CodeCommit repository\)](tutorials-simple-codecommit.md), has been added\. This topic provides a sample walkthrough showing how to use a CodeCommit repository and branch as the source location for a source action in a pipeline\. Several other topics have been updated to reflect this integration with CodeCommit, including Authentication and Access Control, [Product and service integrations with CodePipeline](integrations.md), [Tutorial: Create a four\-stage pipeline](tutorials-four-stage-pipeline.md), and [Troubleshooting CodePipeline](troubleshooting.md)\. | April 18, 2016 |
| New topic | A new topic, [Invoke an AWS Lambda function in a pipeline in CodePipeline](actions-invoke-lambda-function.md), has been added\. This topic contains sample AWS Lambda functions and steps for adding Lambda functions to pipelines\. | January 27, 2016 |
| Updated topic | A new section, Resource\-based Policies, has been added to Authentication and Access Control\. | January 22, 2016 |
| New topic | A new topic, [Product and service integrations with CodePipeline](integrations.md), has been added\. Information about integrations with partners and other AWS services has been moved to this topic\. Links to blogs and videos have also been added\. | December 17, 2015 |
| Updated topic | Details of integration with Solano CI have been added to [Product and service integrations with CodePipeline](integrations.md)\. | November 17, 2015 |
| Updated topic | The CodePipeline Plugin for Jenkins is now available through the Jenkins Plugin Manager as part of the library of plugins for Jenkins\. The steps for installing the plugin have been updated in [Tutorial: Create a four\-stage pipeline](tutorials-four-stage-pipeline.md)\. | November 9, 2015 |
| New Region | CodePipeline is now available in the US West \(Oregon\) Region\. The [Quotas in AWS CodePipeline](limits.md) topic has been updated\. Links have been added to [Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#codepipeline_region)\. | October 22, 2015 |
| New topics | Two new topics, [Configure server\-side encryption for artifacts stored in Amazon S3 for CodePipeline](S3-artifact-encryption.md) and [Create a pipeline in CodePipeline that uses resources from another AWS account](pipelines-create-cross-account.md), have been added\. A new section has been added to Authentication and Access Control, [Example 8: Use AWS resources associated with another account in a pipeline](customer-managed-policies.md#identity-based-policies-example-8)\. | August 25, 2015 |
| Updated topic | The [Create and add a custom action in CodePipeline](actions-create-custom-action.md) topic has been updated to reflect changes in the structure, including `inputArtifactDetails` and `outputArtifactDetails`\. | August 17, 2015 |
| Updated topic | The [Troubleshooting CodePipeline](troubleshooting.md) topic has been updated with revised steps for troubleshooting problems with the service role and Elastic Beanstalk\. | August 11, 2015 |
| Updated topic | The Authentication and Access Control topic has been updated with the latest changes to the [service role for CodePipeline](security-iam.md#how-to-custom-role)\. | August 6, 2015 |
| New topic | A [Troubleshooting CodePipeline](troubleshooting.md) topic has been added\. Updated steps have been added for IAM roles and Jenkins in [Tutorial: Create a four\-stage pipeline](tutorials-four-stage-pipeline.md)\. | July 24, 2015 |
| Topic update | Updated steps have been added for downloading the sample files in [Tutorial: Create a simple pipeline \(S3 bucket\)](tutorials-simple-s3.md) and [Tutorial: Create a four\-stage pipeline](tutorials-four-stage-pipeline.md)\. | July 22, 2015 |
| Topic update | A temporary workaround for download issues with the sample files was added in [Tutorial: Create a simple pipeline \(S3 bucket\)](tutorials-simple-s3.md)\. | July 17, 2015 |
| Topic update | A link was added in [Quotas in AWS CodePipeline](limits.md) to point to information about which limits can be changed\. | July 15, 2015 |
| Topic update | The managed policies section in Authentication and Access Control was updated\. | July 10, 2015 |
| Initial Public Release | This is the initial public release of the CodePipeline User Guide\. | July 9, 2015 |
You can configure your pipeline to use a personal access token to connect to GitHub\. The advantage of using tokens instead of passwords in a script is that tokens can be revoked or rotated\. You can also grant specific privileges and permissions to a personal access token\. Each personal access token is associated at the pipeline, rather than account, level\.
**Note**
You might have to update other applications if they are using the same personal access token\. As a security best practice, do not share a single token across multiple applications\. Create a personal access token for each application\. For more information, see [Creating a personal access token for the command line](https://help.github.com/articles/creating-an-access-token-for-command-line-use/) on the GitHub website\.
**To create a GitHub personal access token and then update the pipeline structure with the new token**
1. In GitHub, from the drop\-down option on your profile photo, choose **Settings**\.
1. Choose **Developer settings**, and then choose **Personal access tokens**\.
1. Choose **Generate new token**\.
1. Under **Select scopes**, select **admin:repo\_hook** and **repo**\.
![\[Configure the personal access token scopes\.\]](http://docs.aws.amazon.com/codepipeline/latest/userguide/images/personal-token-checkboxes.png)
1. Choose **Generate token**\.
1. Next to the generated token, choose the copy icon\.
**Note**
Make sure you copy your generated token now\. You cannot view the token after you close this page\.
1. At a terminal \(Linux, macOS, or Unix\) or command prompt \(Windows\), run the get\-pipeline command on the pipeline where you want to change the OAuth token, and then copy the output of the command to a JSON file\. For example, for a pipeline named MyFirstPipeline, you would type something similar to the following:
```
aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json
```
The output of the command is sent to the `pipeline.json` file\.
1. Open the file in a plain\-text editor and edit the value in the `OAuthTokenField` of your GitHub action\.
When you use the AWS CLI to create the pipeline, you can pass your GitHub personal access token in this field\. Replace the asterisks \(\*\*\*\*\) with the token you copied from GitHub\. When you run `get-pipeline` to view the action configuration, the four\-asterisk mask is displayed for this value\. For example, for a personal access token with the value `111222333444555666777888EXAMPLE`:
```
"configuration": {
"Owner": "MyGitHubUserName",
"Repo": "test-repo",
"Branch": "master",
"OAuthToken": "111222333444555666777888EXAMPLE"
}
```
**Note**
When you use an AWS CloudFormation template to create the pipeline, you must first store the token as a secret in AWS Secrets Manager\. You include the value for this field as a dynamic reference to the stored secret in Secrets Manager\. For an example, see [GitHub](action-reference-GitHub.md)\.
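For illustration, a hypothetical fragment of such a template's action configuration follows\. The secret name \(`GitHubSecret`\) and JSON key \(`token`\) are placeholders; only the `{{resolve:secretsmanager:...}}` dynamic reference syntax comes from AWS CloudFormation\.
```
"Configuration": {
    "Owner": "MyGitHubUserName",
    "Repo": "test-repo",
    "Branch": "master",
    "OAuthToken": "{{resolve:secretsmanager:GitHubSecret:SecretString:token}}"
}
```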
1. If you are working with the pipeline structure retrieved using the `get-pipeline` command, you must modify the structure in the JSON file by removing the `metadata` lines from the file\. Otherwise, the `update-pipeline` command cannot use it\. Remove the section from the pipeline structure in the JSON file \(the `"metadata": { }` lines and the `"created"`, `"pipelineARN"`, and `"updated"` fields\)\.
For example, remove the following lines from the structure:
```
"metadata": {
"pipelineArn": "arn:aws:codepipeline:region:account-ID:pipeline-name",
"created": "date",
"updated": "date"
}
```
1. Save the file, and then run the update\-pipeline command with the `--cli-input-json` parameter to specify the JSON file you just edited\.
For example, to update a pipeline named MyFirstPipeline, you would type something similar to the following:
```
aws codepipeline update-pipeline --cli-input-json file://pipeline.json
```
**Important**
Be sure to include `file://` before the file name\. It is required in this command\.
1. Repeat steps 6 through 8 for every pipeline that contains a GitHub action\.
1. When you are finished, delete the JSON files used to update those pipelines\.
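If you have many pipelines, steps 6 through 8 can be scripted\. The following is a minimal sketch rather than an official procedure; it assumes the `jq` utility is installed, and the pipeline names and token value are placeholders\.
```
#!/bin/bash
# Hypothetical helper: rotate the GitHub OAuth token on several pipelines.
# Assumes jq is installed; pipeline names and NEW_TOKEN are placeholders.
NEW_TOKEN="111222333444555666777888EXAMPLE"

for p in MyFirstPipeline MySecondPipeline; do
  # Fetch the pipeline, drop the metadata section, and set the OAuthToken
  # field on every GitHub source action.
  aws codepipeline get-pipeline --name "$p" \
    | jq --arg tok "$NEW_TOKEN" 'del(.metadata)
        | (.pipeline.stages[].actions[]
           | select(.actionTypeId.provider == "GitHub")
           | .configuration.OAuthToken) = $tok' > "$p.json"

  aws codepipeline update-pipeline --cli-input-json "file://$p.json"
  rm -f "$p.json"   # delete the file so the token is not left on disk
done
```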
A *tag* is a custom attribute label that you or AWS assigns to an AWS resource\. Each AWS tag has two parts:
+ A *tag key* \(for example, `CostCenter`, `Environment`, `Project`, or `Secret`\)\. Tag keys are case sensitive\.
+ An optional field known as a *tag value* \(for example, `111122223333`, `Production`, or a team name\)\. Omitting the tag value is the same as using an empty string\. Like tag keys, tag values are case sensitive\.
Together these are known as key\-value pairs\.
Tags help you identify and organize your AWS resources\. Many AWS services support tagging, so you can assign the same tag to resources from different services to indicate that the resources are related\. For example, you can assign the same tag to a pipeline that you assign to an Amazon S3 source bucket\.
For tips on using tags, see the [AWS Tagging Strategies](https://aws.amazon.com/answers/account-management/aws-tagging-strategies/) post on the *AWS Answers* blog\.
You can tag the following resource types in CodePipeline:
+ [Tag a pipeline in CodePipeline](pipelines-tag.md)
+ [Tag a custom action in CodePipeline](customactions-tag.md)
+ [Tag a webhook in CodePipeline](tag-webhooks.md)
You can use the AWS CLI, CodePipeline APIs, or AWS SDKs to:
+ Add tags to a pipeline, custom action, or webhook when you create it\.
+ Add, manage, and remove tags for a pipeline, custom action, or webhook\.
You can also use the console to add, manage, and remove tags for a pipeline\.
In addition to identifying, organizing, and tracking your resources with tags, you can use tags in IAM policies to help control who can view and interact with your resources\. For examples of tag\-based access policies, see [Using tags to control access to CodePipeline resources](tag-based-access-control.md)\.
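As a quick illustration of the CLI route, the following is a minimal sketch; the pipeline ARN, tag key, and tag value are placeholders\.
```
# Add a tag to a pipeline (the ARN is a placeholder).
aws codepipeline tag-resource \
    --resource-arn arn:aws:codepipeline:us-west-2:111122223333:MyFirstPipeline \
    --tags key=Project,value=ProjectA

# List the tags on the pipeline.
aws codepipeline list-tags-for-resource \
    --resource-arn arn:aws:codepipeline:us-west-2:111122223333:MyFirstPipeline

# Remove the tag by key.
aws codepipeline untag-resource \
    --resource-arn arn:aws:codepipeline:us-west-2:111122223333:MyFirstPipeline \
    --tag-keys Project
```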
As an example of a DevOps pipeline, a two\-stage pipeline might have a source stage called **Source** and a second stage called **Prod**\. In this example, the pipeline is updating the application with the latest changes and continuously deploying the latest result\. Before it deploys the latest application, the pipeline builds and tests the web application\. In this example, a group of developers have set up an infrastructure template and the source code for a web application in a GitHub repository called MyRepository\.
![\[A pipeline with sample stages and actions\.\]](http://docs.aws.amazon.com/codepipeline/latest/userguide/images/pipeline-elements-workflow-application.png)
For example, a developer pushes a fix to the web application's index page, and the following occurs:
1. The application source code is maintained in a repository configured as a GitHub source action in the pipeline\. When developers push commits to the repository, CodePipeline detects the pushed change, and a pipeline execution starts from the **Source Stage**\.
1. The GitHub source action completes successfully \(that is, the latest changes have been downloaded and stored to the artifact bucket unique to that execution\)\. The *output artifacts* produced by the GitHub source action, which are the application files from the repository, are then used as the *input artifacts* to be worked on by the actions in the next stage\.
1. The pipeline execution transitions from the **Source Stage** to the **Prod Stage**\. The first action in the **Prod Stage** runs a build project created in CodeBuild and configured as a build action in the pipeline\. The build task pulls a build environment image and builds the web application in a virtual container\.
1. The next action in the **Prod Stage** is a unit test project created in CodeBuild and configured as a test action in the pipeline\.
1. The unit tested code is next worked on by a deploy action in the **Prod Stage** that deploys the application to a production environment\. After the deploy action completes successfully, the final action in the stage is an integration testing project created in CodeBuild and configured as a test action in the pipeline\. The test action calls shell scripts that install and run a test tool, such as a link checker, on the web application\. After successful completion, the output is a built web application and a set of test results\.
Developers can add actions to the pipeline that deploy or further test the application after it is built and tested for each change\.
For more information, see [How pipeline executions work](concepts-how-it-works.md)\.
Triggers the pipeline when a new object is uploaded to the configured bucket and object key\.
**Note**
When you create your source bucket, make sure you enable versioning on the bucket\. If you want to use an existing Amazon S3 bucket, see [Using versioning](http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html) to enable versioning on an existing bucket\.
If you use the console to create or edit your pipeline, CodePipeline creates a CloudWatch Events rule that starts your pipeline when a change occurs in the S3 source bucket\.
You must have already created an Amazon S3 source bucket and uploaded the source files as a single ZIP file before you connect the pipeline through an Amazon S3 action\.
**Note**
When Amazon S3 is the source provider for your pipeline, you may zip your source file or files into a single \.zip and upload the \.zip to your source bucket\. You may also upload a single unzipped file; however, downstream actions that expect a \.zip file will fail\.
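For example, a minimal sketch of preparing such a source bucket with the AWS CLI; the bucket name and object key are placeholders\.
```
# Enable versioning on the source bucket (required for S3 source actions).
aws s3api put-bucket-versioning \
    --bucket my-bucket \
    --versioning-configuration Status=Enabled

# Zip the source files and upload the archive as the tracked object key.
zip -r my-application.zip .
aws s3 cp my-application.zip s3://my-bucket/my-application.zip
```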
**Topics**
+ [Action type](#action-reference-S3-type)
+ [Configuration parameters](#action-reference-S3-config)
+ [Input artifacts](#action-reference-S3-input)
+ [Output artifacts](#action-reference-S3-output)
+ [Output variables](#action-reference-S3-variables)
+ [Action declaration \(S3 example\)](#action-reference-S3-example)
+ [See also](#action-reference-S3-links)
+ Category: `Source`
+ Owner: `AWS`
+ Provider: `S3`
+ Version: `1`
**S3Bucket**
Required: Yes
The name of the Amazon S3 bucket where source changes are to be detected\.
**S3ObjectKey**
Required: Yes
The name of the Amazon S3 object key where source changes are to be detected\.
**PollForSourceChanges**
Required: No
`PollForSourceChanges` controls whether CodePipeline polls the Amazon S3 source bucket for source changes\. We recommend that you use CloudWatch Events and CloudTrail to detect source changes instead\. For more information about configuring CloudWatch Events, see [Update pipelines for push events \(Amazon S3 source\) \(CLI\)](update-change-detection.md#update-change-detection-cli-S3) or [Update pipelines for push events \(Amazon S3 source\) \(AWS CloudFormation template\)](update-change-detection.md#update-change-detection-cfn-s3)\.
If you intend to configure CloudWatch Events, you must set `PollForSourceChanges` to `false` to avoid duplicate pipeline executions\.
Valid values for this parameter:
+ `True`: If set, CodePipeline polls your source location for source changes\.
**Note**
If you omit `PollForSourceChanges`, CodePipeline defaults to polling your source location for source changes\. This behavior is the same as if `PollForSourceChanges` is included and set to `true`\.
+ `False`: If set, CodePipeline does not poll your source location for source changes\. Use this setting if you intend to configure a CloudWatch Events rule to detect source changes\.
+ **Number of Artifacts:** `0`
+ **Description:** Input artifacts do not apply for this action type\.
+ **Number of Artifacts:** `1`
+ **Description:** Provides the artifacts that are available in the source bucket configured to connect to the pipeline\. The artifacts generated from the bucket are the output artifacts for the Amazon S3 action\. The Amazon S3 object metadata \(ETag and version ID\) is displayed in CodePipeline as the source revision for the triggered pipeline execution\.
When configured, this action produces variables that can be referenced by the action configuration of a downstream action in the pipeline\. The variables can be viewed as output variables even if the action doesn't have a namespace; you configure an action with a namespace to make those variables available to the configuration of downstream actions\.
For more information about variables in CodePipeline, see [Variables](reference-variables.md)\.
**ETag**
The entity tag for the object related to the source change that triggered the pipeline\. The ETag is an MD5 hash of the object\. ETag reflects only changes to the contents of an object, not its metadata\.
**VersionId**
The version ID for the version of the object related to the source change that triggered the pipeline\.
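As an illustration, a hypothetical source action declaration with a namespace follows; it reuses the names from the action declaration example later in this topic, and the namespace `SourceVariables` is a placeholder\. A downstream action could then reference the values as `#{SourceVariables.ETag}` and `#{SourceVariables.VersionId}` in its configuration\.
```
Name: Source
Actions:
  - Name: Source
    Namespace: SourceVariables  # placeholder; makes ETag and VersionId referenceable
    ActionTypeId:
      Provider: S3
      Owner: AWS
      Version: '1'
      Category: Source
    Configuration:
      S3Bucket: my-bucket-oregon
      S3ObjectKey: my-application.zip
    OutputArtifacts:
      - Name: SourceArtifact
```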
------
#### [ YAML ]
```
Name: Source
Actions:
- RunOrder: 1
OutputArtifacts:
- Name: SourceArtifact
ActionTypeId:
Provider: S3
Owner: AWS
Version: '1'
Category: Source
Region: us-west-2
Name: Source
Configuration:
S3Bucket: my-bucket-oregon
S3ObjectKey: my-application.zip
PollForSourceChanges: 'false'
InputArtifacts: []
```
------ | https://github.com/siagholami/aws-documentation/tree/main/documents/aws-codepipeline-user-guide/doc_source/action-reference-S3.md |
#### [ JSON ]
```
{
"Name": "Source",
"Actions": [
{
"RunOrder": 1,
"OutputArtifacts": [
{
"Name": "SourceArtifact"
}
],
"ActionTypeId": {
"Provider": "S3",
"Owner": "AWS",
"Version": "1",
"Category": "Source"
},
"Region": "us-west-2",
"Name": "Source",
"Configuration": {
"S3Bucket": "my-bucket-oregon",
"S3ObjectKey": "my-application.zip",
"PollForSourceChanges": "false"
},
"InputArtifacts": []
}
]
},
```
------ | https://github.com/siagholami/aws-documentation/tree/main/documents/aws-codepipeline-user-guide/doc_source/action-reference-S3.md |
The following related resources can help you as you work with this action\.
+ [Tutorial: Create a simple pipeline \(S3 bucket\)](tutorials-simple-s3.md) – This tutorial provides a sample app spec file and sample CodeDeploy application and deployment group\. Use this tutorial to create a pipeline with an Amazon S3 source that deploys to Amazon EC2 instances\.
You can use the AWS CodePipeline console or the AWS CLI to view details about pipelines associated with your AWS account\.
**Topics**
+ [View pipeline details and history \(console\)](pipelines-view-console.md)
+ [View pipeline details and history \(CLI\)](pipelines-view-cli.md)
Pipelines start automatically when repository changes are detected\. One change detection method is periodic checks\. Periodic checks can be enabled or disabled using the `PollForSourceChanges` flag\. If you use the CLI to create or edit a pipeline, this parameter defaults to `true`\. This is not the recommended configuration\. Instead, edit your pipeline to use the recommended change\-detection method and then set this parameter to `false`\.
**Note**
Most source actions in CodePipeline, such as GitHub, require either a configured change detection resource \(such as a webhook or CloudWatch Events rule\) or use the option to poll the repository for source changes\. For pipelines with a Bitbucket Cloud source action, you do not have to set up a webhook or default to polling\. The connections action manages your source change detection for you\.
For more information about creating a pipeline with the recommended configuration, see [Create a Pipeline \(Console\)](pipelines-create.md#pipelines-create-console) and [Create a Pipeline \(CLI\)](pipelines-create.md#pipelines-create-cli)\. For more information about updating an action or pipeline with the recommended configuration, see [Edit a Pipeline \(Console\)](pipelines-edit.md#pipelines-edit-console) and [Edit a Pipeline \(CLI\)](pipelines-edit.md#pipelines-edit-cli)\.
For more information, see [Change\-Detection Methods Used to Start Pipelines Automatically](pipelines-about-starting.md#change-detection-methods)\.
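A minimal sketch of that edit with the CLI follows; the pipeline name is a placeholder, and the `PollForSourceChanges` change is made by hand in the JSON file\.
```
# Retrieve the pipeline structure so it can be edited.
aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json

# In pipeline.json: delete the "metadata" section and set
# "PollForSourceChanges": "false" in the source action configuration.
# Then upload the edited structure.
aws codepipeline update-pipeline --cli-input-json file://pipeline.json
```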
AWS addresses many common use cases by providing standalone IAM policies that are created and administered by AWS\. Managed policies grant necessary permissions for common use cases so you can avoid having to investigate which permissions are needed\. For more information, see [AWS Managed Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) in the *IAM User Guide*\.
The following AWS managed policies, which you can attach to users in your account, are specific to CodePipeline:
+ `AWSCodePipelineFullAccess` – Grants full access to CodePipeline\.
+ `AWSCodePipelineCustomActionAccess` – Grants permission to an IAM user to create custom actions in CodePipeline or integrate Jenkins resources for build or test actions\.
+ `AWSCodePipelineReadOnlyAccess` – Grants read\-only access to CodePipeline\.
+ `AWSCodePipelineApproverAccess` – Grants permission to an IAM user to approve or reject a manual approval action\.
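For example, a minimal sketch of attaching one of these policies with the AWS CLI; the user name is a placeholder, and the ARN assumes the standard `arn:aws:iam::aws:policy/` prefix used by AWS managed policies\.
```
# Attach the read-only CodePipeline managed policy to an IAM user.
aws iam attach-user-policy \
    --user-name MyCodePipelineReviewer \
    --policy-arn arn:aws:iam::aws:policy/AWSCodePipelineReadOnlyAccess
```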
You can use AWS CodePipeline to continuously deliver your AWS SAM serverless application to the AWS Serverless Application Repository\.
This tutorial shows how to create and configure a pipeline to build your serverless application that is hosted in GitHub and publish it to the AWS Serverless Application Repository automatically\. The pipeline uses GitHub as the source provider and CodeBuild as the build provider\. To publish your serverless application to the AWS Serverless Application Repository, you deploy an [application](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:077246666028:applications~aws-serverless-codepipeline-serverlessrepo-publish ) \(from the AWS Serverless Application Repository\) and associate the Lambda function created by that application as an Invoke action provider in your pipeline\. Then you can continuously deliver application updates to the AWS Serverless Application Repository, without writing any code\.
**Important**
Many of the actions you add to your pipeline in this procedure involve AWS resources that you need to create before you create the pipeline\. AWS resources for your source actions must always be created in the same AWS Region where you create your pipeline\. For example, if you create your pipeline in the US East \(Ohio\) Region, your CodeCommit repository must be in the US East \(Ohio\) Region\.
You can add cross\-region actions when you create your pipeline\. AWS resources for cross\-region actions must be in the same AWS Region where you plan to execute the action\. For more information, see [Add a cross\-Region action in CodePipeline](actions-create-cross-region.md)\.
In this tutorial, we assume the following\.
+ You are familiar with [AWS Serverless Application Model \(AWS SAM\)](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/) and the [AWS Serverless Application Repository](https://docs.aws.amazon.com/serverlessrepo/latest/devguide/)\.
+ You have a serverless application hosted in GitHub that you have published to the AWS Serverless Application Repository using the AWS SAM CLI\. To publish an example application to the AWS Serverless Application Repository, see [Quick Start: Publishing Applications](https://docs.aws.amazon.com/serverlessrepo/latest/devguide/serverlessrepo-quick-start.html) in the *AWS Serverless Application Repository Developer Guide*\. To publish your own application to the AWS Serverless Application Repository, see [Publishing Applications Using the AWS SAM CLI](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-template-publishing-applications.html) in the *AWS Serverless Application Model Developer Guide*\.
Create a `buildspec.yml` file with the following contents, and add it to your serverless application's GitHub repository\. Replace *template\.yml* with your application's AWS SAM template and *bucketname* with the S3 bucket where your packaged application is stored\.
```
version: 0.2
phases:
install:
runtime-versions:
python: 3.8
build:
commands:
- pip install --upgrade pip
- pip install pipenv --user
- pipenv install awscli aws-sam-cli
- pipenv run sam package --template-file template.yml --s3-bucket bucketname --output-template-file packaged-template.yml
artifacts:
files:
- packaged-template.yml
```
Follow these steps to create your pipeline in the AWS Region where you want to publish your serverless application\.
1. Sign in to the AWS Management Console and open the CodePipeline console at [https://console\.aws\.amazon\.com/codepipeline/](https://console.aws.amazon.com/codepipeline/)\.
1. If necessary, switch to the AWS Region where you want to publish your serverless application\.
1. Choose **Create pipeline**\. On the **Choose pipeline settings** page, in **Pipeline name**, enter the name for your pipeline\.
1. In **Service role**, choose **New service role** to allow CodePipeline to create a service role in IAM\.
1. Leave the settings under **Advanced settings** at their defaults, and then choose **Next**\.
1. On the **Add source stage** page, in **Source provider**, choose **GitHub**, and then choose **Connect to GitHub**\.
1. In the browser window, choose **Authorize aws\-codesuite**\. This allows your pipeline to make your repository a source, and to use webhooks that detect when new code is pushed to the repository\.
1. In **Repository**, choose your GitHub source repository\.
1. In **Branch**, choose your GitHub branch\.
1. Choose **Next**\.
1. On the **Add build stage** page, add a build stage:
1. In **Build provider**, choose **AWS CodeBuild**\. For **Region**, use the pipeline Region\.
1. Choose **Create project**\.
1. In **Project name**, enter a name for this build project\.
1. In **Environment image**, choose **Managed image**\. For **Operating system**, choose **Ubuntu**\.
1. For **Runtime** and **Runtime version**, choose the runtime and version required for your serverless application\.
1. For **Service role**, choose **New service role**\.
1. For **Build specifications**, choose **Use a buildspec file**\.
1. Choose **Continue to CodePipeline**\. This opens the CodePipeline console and creates a CodeBuild project that uses the `buildspec.yml` in your repository for configuration\. The build project uses a service role to manage AWS service permissions\. This step might take a couple of minutes\.
1. Choose **Next**\.
1. On the **Add deploy stage** page, choose **Skip deploy stage**, and then accept the warning message by choosing **Skip** again\. Choose **Next**\.
1. Choose **Create pipeline**\. You should see a diagram that shows the source and build stages\.
1. Grant the CodeBuild service role permission to access the S3 bucket where your packaged application is stored\.
1. In the **Build** stage of your new pipeline, choose **CodeBuild**\.
1. Choose the **Build details** tab\.
1. In **Environment**, choose the CodeBuild service role to open the IAM console\.
1. Expand the selection for `CodeBuildBasePolicy`, and choose **Edit policy**\.
1. Choose **JSON**\.
1. Add a new policy statement with the following contents\. The statement allows CodeBuild to put objects into the S3 bucket where your packaged application is stored\. Replace *bucketname* with the name of your S3 bucket\.
```
{
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::bucketname/*"
],
"Action": [
"s3:PutObject"
]
}
```
1. Choose **Review policy**\.
1. Choose **Save changes**\.
Follow these steps to deploy the application that contains the Lambda function that performs the publish to the AWS Serverless Application Repository\. This application is **aws\-serverless\-codepipeline\-serverlessrepo\-publish**\.
**Note**
You must deploy the application to the same AWS Region as your pipeline\.
1. Go to the [application](https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:077246666028:applications~aws-serverless-codepipeline-serverlessrepo-publish ) page, and choose **Deploy**\.
1. Select **I acknowledge that this app creates custom IAM roles**\.
1. Choose **Deploy**\.
1. Choose **View AWS CloudFormation Stack** to open the AWS CloudFormation console\.
1. Expand the **Resources** section\. You see **ServerlessRepoPublish**, which is of the type **AWS::Lambda::Function**\. Make a note of the physical ID of this resource for the next step\. You use this physical ID when you create the new publish action in CodePipeline\.
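Alternatively, a minimal CLI sketch for looking up the same physical ID; the stack name is a placeholder for whatever name your deployment created\.
```
# Look up the physical ID of the publish Lambda function.
aws cloudformation describe-stack-resources \
    --stack-name serverlessrepo-aws-serverless-codepipeline-serverlessrepo-publish \
    --logical-resource-id ServerlessRepoPublish \
    --query 'StackResources[0].PhysicalResourceId'
```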
Follow these steps to create the publish action in your pipeline\.
1. Open the CodePipeline console at [https://console\.aws\.amazon\.com/codepipeline/](https://console.aws.amazon.com/codepipeline/)\.
1. In the left navigation section, choose the pipeline that you want to edit\.
1. Choose **Edit**\.
1. After the last stage of your current pipeline, choose **\+ Add stage**\. In **Stage name**, enter a name, such as **Publish**, and choose **Add stage**\.
1. In the new stage, choose **\+ Add action group**\.
1. Enter an action name\. From **Action provider**, in **Invoke**, choose **AWS Lambda**\.
1. From **Input artifacts**, choose **BuildArtifact**\.
1. From **Function name**, choose the physical ID of the Lambda function that you noted in the previous step\.
1. Choose **Save** for the action\.
1. Choose **Done** for the stage\.
1. In the upper right, choose **Save**\.
1. To verify your pipeline, make a change to your application in GitHub\. For example, change the application's description in the `Metadata` section of your AWS SAM template file\. Commit the change and push it to your GitHub branch\. This triggers your pipeline to run\. When the pipeline is complete, check that your application has been updated with your change in the [AWS Serverless Application Repository](https://console.aws.amazon.com/serverlessrepo/home)\.
When a pipeline execution starts to run through a pipeline, it enters one stage at a time and locks the stage while all action executions in the stage are running\. When the pipeline execution is stopped, these in\-progress actions must either be allowed to complete or be abandoned\.
There are two ways to stop a pipeline execution:
+ **Stop and wait**: AWS CodePipeline waits to stop the execution until all in\-progress actions are completed \(that is, the actions have a `Succeeded` or `Failed` status\)\. This option preserves in\-progress actions\. The execution is in a `Stopping` state until the in\-progress actions are complete\. Then the execution is in a `Stopped` state\. The stage unlocks after the actions are complete\.
If you choose to stop and wait, and you change your mind while your execution is still in a `Stopping` state, you can then choose to abandon\.
+ **Stop and abandon**: AWS CodePipeline stops the execution without waiting for in\-progress actions to complete\. The execution is in a `Stopping` state for a very short time while the in\-progress actions are abandoned\. After the execution is stopped, the action execution is in an `Abandoned` state while the pipeline execution is in a `Stopped` state\. The stage unlocks\.
For a pipeline execution in a `Stopped` state, the actions in the stage where the execution stopped can be retried\.
**Warning**
This option can lead to failed tasks or out of sequence tasks\.
**Topics**
+ [Stop a pipeline execution \(console\)](#pipelines-stop-console)
+ [Stop a pipeline execution \(CLI\)](#pipelines-stop-cli)
1. Sign in to the AWS Management Console and open the CodePipeline console at [http://console\.aws\.amazon\.com/codesuite/codepipeline/home](http://console.aws.amazon.com/codesuite/codepipeline/home)\.
1. Do one of the following:
**Note**
Before you stop an execution, we recommend that you disable the transition in front of the stage\. This way, when the stage unlocks due to the stopped execution, the stage does not accept a subsequent pipeline execution\.
+ In **Name**, choose the name of the pipeline with the execution you want to stop\. On the pipeline details page, choose **Stop execution**\.
+ Choose **View history**\. On the history page, choose **Stop execution**\.
1. On the **Stop execution** page, under **Select execution**, choose the execution you want to stop\.
**Note**
The execution is displayed only if it is still in progress\. Executions that are already complete are not displayed\.
![\[The Stop execution page\.\]](http://docs.aws.amazon.com/codepipeline/latest/userguide/images/stop-execution.png)
1. Under **Select an action to apply to execution**, choose one of the following:
+ To make sure the execution does not stop until all in\-progress actions are complete, choose **Stop and wait**\.
**Note**
You cannot choose to stop and wait if the execution is already in a **Stopping** state, but you can choose to stop and abandon\.
+ To stop without waiting for in\-progress actions to complete, choose **Stop and abandon**\.
**Warning**
This option can lead to failed tasks or out of sequence tasks\.
1. \(Optional\) Enter comments\. These comments, along with the execution status, are displayed on the history page for the execution\.
1. Choose **Stop**\.
**Important**
This action cannot be undone\.
1. View the execution status in the pipeline visualization as follows:
+ If you chose to stop and wait, the selected execution continues until in\-progress actions are completed\.
+ The success banner message is displayed at the top of the console\.
+ In the current stage, in\-progress actions continue in an `InProgress` state\. While the actions are in progress, the pipeline execution is in a `Stopping` state\.
After the actions complete \(that is, the action fails or succeeds\), the pipeline execution changes to a `Stopped` state and the action changes to a `Failed` or `Succeeded` state\. You can view the action state on the execution details page, and the execution status on the execution history page or the execution details page\.
+ The pipeline execution changes to a `Stopping` state briefly, and then it changes to a `Stopped` state\. You can view the execution status on the execution history page or the execution details page\.
+ If you chose to stop and abandon, the execution does not wait for in\-progress actions to complete\.
+ The success banner message is displayed at the top of the console\.
+ In the current stage, in\-progress actions change to a status of `Abandoned`\. You can also view the action status on the execution details page\.
+ The pipeline execution changes to a `Stopping` state briefly, and then it changes to a `Stopped` state\. You can view the execution status on the execution history page or the execution details page\.
You can view the pipeline execution status in the execution history view and the detailed history view\.
To use the AWS CLI to manually stop a pipeline, use the stop\-pipeline\-execution command with the following parameters:
+ Execution ID \(required\)
+ Comments \(optional\)
+ Pipeline name \(required\)
+ Abandon flag \(optional, the default is false\)
Command format:
```
aws codepipeline stop-pipeline-execution --pipeline-name Pipeline_Name --pipeline-execution-id Execution_ID [--abandon | --no-abandon] [--reason STOP_EXECUTION_REASON]
```
1. Open a terminal \(Linux, macOS, or Unix\) or command prompt \(Windows\)\.
1. To stop a pipeline execution, choose one of the following:
+ To make sure the execution does not stop until all in\-progress actions are complete, choose to stop and wait\. You can do this by including the no\-abandon parameter\. If you do not specify the parameter, the command defaults to stop and wait\. Use the AWS CLI to run the stop\-pipeline\-execution command, specifying the name of the pipeline and the execution ID\. For example, to stop a pipeline named *MyFirstPipeline* with the stop and wait option specified:
```
aws codepipeline stop-pipeline-execution --pipeline-name MyFirstPipeline --pipeline-execution-id d-EXAMPLE --no-abandon
```
For example, to stop a pipeline named *MyFirstPipeline*, defaulting to the stop and wait option, and choosing to include comments:
```
aws codepipeline stop-pipeline-execution --pipeline-name MyFirstPipeline --pipeline-execution-id d-EXAMPLE --reason "Stopping execution after the build action is done"
```
**Note**
You cannot choose to stop and wait if the execution is already in a **Stopping** state\. You can choose to stop and abandon an execution that is already in a **Stopping** state\.
+ To stop without waiting for in\-progress actions to complete, choose to stop and abandon\. Include the abandon parameter\. Use the AWS CLI to run the stop\-pipeline\-execution command, specifying the name of the pipeline and the execution ID\.
For example, to stop a pipeline named *MyFirstPipeline*, specifying the abandon option, and choosing to include comments:
```
aws codepipeline stop-pipeline-execution --pipeline-name MyFirstPipeline --pipeline-execution-id d-EXAMPLE --abandon --reason "Stopping execution for a bug fix"
```
When you create a pipeline, you create a service role or use an existing service role\.
You can use the CodePipeline console or the AWS CLI to create a CodePipeline service role\. A service role is required to create a pipeline, and the pipeline is always associated with that service role\.
The service role is not an AWS managed role; it is created initially for pipeline creation\. As new permissions are added to the service role policy, you may need to update the service role for your pipeline\. After your pipeline is created with a service role, you cannot apply a different service role to that pipeline\. Attach the recommended policy to the service role\.
For more information about the service role, see [Manage the CodePipeline service role](security-iam.md#how-to-custom-role)\.
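For reference, a minimal sketch of creating such a role with the AWS CLI; the role name is a placeholder, and you still attach the recommended policy afterward\.
```
# Create a service role that CodePipeline can assume (role name is a placeholder).
aws iam create-role \
    --role-name MyCodePipelineServiceRole \
    --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "codepipeline.amazonaws.com"},
        "Action": "sts:AssumeRole"
      }]
    }'
```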
Run a [COPY](r_COPY.md) command to connect to the Amazon EMR cluster and load the data into an Amazon Redshift table\. The Amazon EMR cluster must continue running until the COPY command completes\. For example, do not configure the cluster to auto\-terminate\.
**Important**
If any of the data files are changed or deleted before the COPY completes, you might have unexpected results, or the COPY operation might fail\.
In the COPY command, specify the Amazon EMR cluster ID and the HDFS file path and file name\.
```
copy sales
from 'emr://myemrclusterid/myoutput/part*' credentials
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```
You can use the wildcard characters asterisk \( `*` \) and question mark \( `?` \) as part of the file name argument\. For example, `part*` loads the files `part-0000`, `part-0001`, and so on\. If you specify only a folder name, COPY attempts to load all files in the folder\.
**Important**
If you use wildcard characters or use only the folder name, verify that no unwanted files will be loaded or the COPY command will fail\. For example, some processes might write a log file to the output folder\.
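For example, a sketch of the question mark wildcard, reusing the hypothetical cluster ID and role from the earlier command\. Because `?` matches exactly one character, `part-000?` loads `part-0000` through `part-0009` but not a stray log file in the same folder\.
```
copy sales
from 'emr://myemrclusterid/myoutput/part-000?' credentials
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';
```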
ST\_CoveredBy returns true if the first input geometry is covered by the second input geometry\. Geometry `A` is covered by geometry `B` if both are nonempty and every point in `A` is a point in `B`\.
ST\_CoveredBy\(`A`, `B`\) is equivalent to ST\_Covers\(`B`, `A`\)\.
```
ST_CoveredBy(geom1, geom2)
```
*geom1*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\. This value is compared with *geom2* to determine if it's covered by *geom2*\.
*geom2*
A value of data type `GEOMETRY` or an expression that evaluates to a `GEOMETRY` type\.
`BOOLEAN`
If *geom1* or *geom2* is null, then null is returned\.
If *geom1* and *geom2* don't have the same value for the spatial reference system identifier \(SRID\), then an error is returned\.
If *geom1* or *geom2* is a geometry collection, then an error is returned\.
The following SQL checks if the first polygon is covered by the second polygon\.
```
SELECT ST_CoveredBy(ST_GeomFromText('POLYGON((0 2,1 1,0 -1,0 2))'), ST_GeomFromText('POLYGON((-1 3,2 1,0 -3,-1 3))'));
```
```
st_coveredby
-----------
true
```
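Because ST\_CoveredBy\(`A`, `B`\) is equivalent to ST\_Covers\(`B`, `A`\), the same check can be written with the arguments swapped; this sketch should return the same result\.
```
SELECT ST_Covers(ST_GeomFromText('POLYGON((-1 3,2 1,0 -3,-1 3))'), ST_GeomFromText('POLYGON((0 2,1 1,0 -1,0 2))'));
```
```
st_covers
-----------
true
```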
Analyzes delete execution steps for queries\.
This view is visible to all users\. Superusers can see all rows; regular users can see only their own data\. For more information, see [Visibility of data in system tables and views](c_visibility-of-data.md)\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_STL_DELETE.html)
777d60eccd08-0 | To create rows in STL\_DELETE, the following example inserts a row into the EVENT table and then deletes it\.
First, insert a row into the EVENT table and verify that it was inserted\.
```
insert into event(eventid,venueid,catid,dateid,eventname)
values ((select max(eventid)+1 from event),95,9,1857,'Lollapalooza');
```
```
select * from event
where eventname='Lollapalooza'
order by eventid;
```
```
eventid | venueid | catid | dateid | eventname | starttime
---------+---------+-------+--------+--------------+---------------------
4274 | 102 | 9 | 1965 | Lollapalooza | 2008-05-01 19:00:00
4684 | 114 | 9 | 2105 | Lollapalooza | 2008-10-06 14:00:00
5673 | 128 | 9 | 1973 | Lollapalooza | 2008-05-01 15:00:00
5740 | 51 | 9 | 1933 | Lollapalooza | 2008-04-17 15:00:00 | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DELETE.md |
5856 | 119 | 9 | 1831 | Lollapalooza | 2008-01-05 14:00:00
6040 | 126 | 9 | 2145 | Lollapalooza | 2008-11-15 15:00:00
7972 | 92 | 9 | 2026 | Lollapalooza | 2008-07-19 19:30:00
8046 | 65 | 9 | 1840 | Lollapalooza | 2008-01-14 15:00:00
8518 | 48 | 9 | 1904 | Lollapalooza | 2008-03-19 15:00:00
8799 | 95 | 9 | 1857 | Lollapalooza |
(10 rows)
```
Now, delete the row that you added to the EVENT table and verify that it was deleted\.
```
delete from event
where eventname='Lollapalooza' and eventid=(select max(eventid) from event);
```
```
select * from event
where eventname='Lollapalooza'
order by eventid;
```
```
eventid | venueid | catid | dateid | eventname | starttime | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DELETE.md |
---------+---------+-------+--------+--------------+---------------------
4274 | 102 | 9 | 1965 | Lollapalooza | 2008-05-01 19:00:00
4684 | 114 | 9 | 2105 | Lollapalooza | 2008-10-06 14:00:00
5673 | 128 | 9 | 1973 | Lollapalooza | 2008-05-01 15:00:00
5740 | 51 | 9 | 1933 | Lollapalooza | 2008-04-17 15:00:00
5856 | 119 | 9 | 1831 | Lollapalooza | 2008-01-05 14:00:00
6040 | 126 | 9 | 2145 | Lollapalooza | 2008-11-15 15:00:00
7972 | 92 | 9 | 2026 | Lollapalooza | 2008-07-19 19:30:00
8046 | 65 | 9 | 1840 | Lollapalooza | 2008-01-14 15:00:00
8518 | 48 | 9 | 1904 | Lollapalooza | 2008-03-19 15:00:00
(9 rows)
``` | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DELETE.md |
Then query stl\_delete to see the execution steps for the deletion\. In this example, the query returned over 300 rows, so the output below is shortened for display purposes\.
```
select query, slice, segment, step, tasknum, rows, tbl from stl_delete order by query;
```
```
query | slice | segment | step | tasknum | rows | tbl
-------+-------+---------+------+---------+------+--------
7 | 0 | 0 | 1 | 0 | 0 | 100000
7 | 1 | 0 | 1 | 0 | 0 | 100000
8 | 0 | 0 | 1 | 2 | 0 | 100001
8 | 1 | 0 | 1 | 2 | 0 | 100001
9 | 0 | 0 | 1 | 4 | 0 | 100002
9 | 1 | 0 | 1 | 4 | 0 | 100002
10 | 0 | 0 | 1 | 6 | 0 | 100003
10 | 1 | 0 | 1 | 6 | 0 | 100003
11 | 0 | 0 | 1 | 8 | 0 | 100253
11 | 1 | 0 | 1 | 8 | 0 | 100253
12 | 0 | 0 | 1 | 0 | 0 | 100255 | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DELETE.md |
12 | 1 | 0 | 1 | 0 | 0 | 100255
13 | 0 | 0 | 1 | 2 | 0 | 100257
13 | 1 | 0 | 1 | 2 | 0 | 100257
14 | 0 | 0 | 1 | 4 | 0 | 100259
14 | 1 | 0 | 1 | 4 | 0 | 100259
...
``` | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_STL_DELETE.md |
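To map execution steps back to the SQL text that produced them, you can join STL\_DELETE to STL\_QUERY on the query ID\. The following query is a hedged sketch; the LIKE filter is an arbitrary way to find the example DELETE statement\.
```
select q.query, trim(q.querytxt) as querytxt, sum(d.rows) as deleted_rows
from stl_delete d
join stl_query q on q.query = d.query
where q.querytxt like 'delete from event%'
group by q.query, q.querytxt
order by q.query desc;
```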
9710ca2ce5ad-0 | **Topics**
+ [ABORT](r_ABORT.md)
+ [ALTER DATABASE](r_ALTER_DATABASE.md)
+ [ALTER DEFAULT PRIVILEGES](r_ALTER_DEFAULT_PRIVILEGES.md)
+ [ALTER GROUP](r_ALTER_GROUP.md)
+ [ALTER PROCEDURE](r_ALTER_PROCEDURE.md)
+ [ALTER SCHEMA](r_ALTER_SCHEMA.md)
+ [ALTER TABLE](r_ALTER_TABLE.md)
+ [ALTER TABLE APPEND](r_ALTER_TABLE_APPEND.md)
+ [ALTER USER](r_ALTER_USER.md)
+ [ANALYZE](r_ANALYZE.md)
+ [ANALYZE COMPRESSION](r_ANALYZE_COMPRESSION.md)
+ [BEGIN](r_BEGIN.md)
+ [CALL](r_CALL_procedure.md)
+ [CANCEL](r_CANCEL.md)
+ [CLOSE](close.md)
+ [COMMENT](r_COMMENT.md)
+ [COMMIT](r_COMMIT.md)
+ [COPY](r_COPY.md)
+ [CREATE DATABASE](r_CREATE_DATABASE.md)
+ [CREATE EXTERNAL SCHEMA](r_CREATE_EXTERNAL_SCHEMA.md)
+ [CREATE EXTERNAL TABLE](r_CREATE_EXTERNAL_TABLE.md)
+ [CREATE FUNCTION](r_CREATE_FUNCTION.md) | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_SQL_commands.md |
+ [CREATE GROUP](r_CREATE_GROUP.md)
+ [CREATE LIBRARY](r_CREATE_LIBRARY.md)
+ [CREATE MATERIALIZED VIEW](materialized-view-create-sql-command.md)
+ [CREATE PROCEDURE](r_CREATE_PROCEDURE.md)
+ [CREATE SCHEMA](r_CREATE_SCHEMA.md)
+ [CREATE TABLE](r_CREATE_TABLE_NEW.md)
+ [CREATE TABLE AS](r_CREATE_TABLE_AS.md)
+ [CREATE USER](r_CREATE_USER.md)
+ [CREATE VIEW](r_CREATE_VIEW.md)
+ [DEALLOCATE](r_DEALLOCATE.md)
+ [DECLARE](declare.md)
+ [DELETE](r_DELETE.md)
+ [DROP DATABASE](r_DROP_DATABASE.md)
+ [DROP FUNCTION](r_DROP_FUNCTION.md)
+ [DROP GROUP](r_DROP_GROUP.md)
+ [DROP LIBRARY](r_DROP_LIBRARY.md)
+ [DROP MATERIALIZED VIEW](materialized-view-drop-sql-command.md)
+ [DROP PROCEDURE](r_DROP_PROCEDURE.md)
+ [DROP SCHEMA](r_DROP_SCHEMA.md) | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_SQL_commands.md |
+ [DROP TABLE](r_DROP_TABLE.md)
+ [DROP USER](r_DROP_USER.md)
+ [DROP VIEW](r_DROP_VIEW.md)
+ [END](r_END.md)
+ [EXECUTE](r_EXECUTE.md)
+ [EXPLAIN](r_EXPLAIN.md)
+ [FETCH](fetch.md)
+ [GRANT](r_GRANT.md)
+ [INSERT](r_INSERT_30.md)
+ [INSERT \(external table\)](r_INSERT_external_table.md)
+ [LOCK](r_LOCK.md)
+ [PREPARE](r_PREPARE.md)
+ [REFRESH MATERIALIZED VIEW](materialized-view-refresh-sql-command.md)
+ [RESET](r_RESET.md)
+ [REVOKE](r_REVOKE.md)
+ [ROLLBACK](r_ROLLBACK.md)
+ [SELECT](r_SELECT_synopsis.md)
+ [SELECT INTO](r_SELECT_INTO.md)
+ [SET](r_SET.md)
+ [SET SESSION AUTHORIZATION](r_SET_SESSION_AUTHORIZATION.md) | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_SQL_commands.md |
+ [SET SESSION CHARACTERISTICS](r_SET_SESSION_CHARACTERISTICS.md)
+ [SHOW](r_SHOW.md)
+ [SHOW PROCEDURE](r_SHOW_PROCEDURE.md)
+ [START TRANSACTION](r_START_TRANSACTION.md)
+ [TRUNCATE](r_TRUNCATE.md)
+ [UNLOAD](r_UNLOAD.md)
+ [UPDATE](r_UPDATE.md)
+ [VACUUM](r_VACUUM_command.md)
The SQL language consists of commands that you use to create and manipulate database objects, run queries, load tables, and modify the data in tables\.
**Note**
Amazon Redshift is based on PostgreSQL 8\.0\.2\. Amazon Redshift and PostgreSQL have a number of very important differences that you must be aware of as you design and develop your data warehouse applications\. For more information about how Amazon Redshift SQL differs from PostgreSQL, see [Amazon Redshift and PostgreSQL](c_redshift-and-postgres-sql.md)\.
**Note**
The maximum size for a single SQL statement is 16 MB\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c_SQL_commands.md |
d78a3fd9b426-0 | If you are a first\-time user of Amazon Redshift, we recommend that you begin by reading the following sections\.
+ Service Highlights and Pricing – The [product detail page](https://aws.amazon.com/redshift/) provides the Amazon Redshift value proposition, service highlights, and pricing\.
+ Getting Started – [Amazon Redshift Getting Started](https://docs.aws.amazon.com/redshift/latest/gsg/) includes an example that walks you through the process of creating an Amazon Redshift data warehouse cluster, creating database tables, uploading data, and testing queries\.
After you complete the Getting Started guide, we recommend that you explore one of the following guides:
+ [Amazon Redshift Cluster Management Guide](https://docs.aws.amazon.com/redshift/latest/mgmt/) – The Cluster Management guide shows you how to create and manage Amazon Redshift clusters\.
If you are an application developer, you can use the Amazon Redshift Query API to manage clusters programmatically\. Additionally, the AWS SDK libraries that wrap the underlying Amazon Redshift API can help simplify your programming tasks\. If you prefer a more interactive way of managing clusters, you can use the Amazon Redshift console and the AWS command line interface \(AWS CLI\)\. For information about the API and CLI, go to the following manuals:
+ [API reference](https://docs.aws.amazon.com/redshift/latest/APIReference/) | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-first-time-user.md |
+ [CLI reference](https://docs.aws.amazon.com/cli/latest/reference/redshift/)
+ *Amazon Redshift Database Developer Guide* \(this document\) – If you are a database developer, the Database Developer Guide explains how to design, build, query, and maintain the databases that make up your data warehouse\.
If you are transitioning to Amazon Redshift from another relational database system or data warehouse application, you should be aware of important differences in how Amazon Redshift is implemented\. For a summary of the most important considerations for designing tables and loading data, see [Amazon Redshift best practices for designing tables](c_designing-tables-best-practices.md) and [Amazon Redshift best practices for loading data](c_loading-data-best-practices.md)\. Amazon Redshift is based on PostgreSQL 8\.0\.2\. For a detailed list of the differences between Amazon Redshift and PostgreSQL, see [Amazon Redshift and PostgreSQL](c_redshift-and-postgres-sql.md)\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-first-time-user.md |
338b5f12c505-0 | The COUNT function counts the rows defined by the expression\.
The COUNT function has three variations\. COUNT \( \* \) counts all the rows in the target table whether they include nulls or not\. COUNT \( *expression* \) computes the number of rows with non\-NULL values in a specific column or expression\. COUNT \( DISTINCT *expression* \) computes the number of distinct non\-NULL values in a column or expression\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COUNT.md |
5155c99a76f2-0 | ```
[ APPROXIMATE ] COUNT ( [ DISTINCT | ALL ] * | expression )
``` | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COUNT.md |
278aeaf4c0ab-0 | *expression*
The target column or expression that the function operates on\.
DISTINCT \| ALL
With the argument DISTINCT, the function eliminates all duplicate values from the specified expression before doing the count\. With the argument ALL, the function retains all duplicate values from the expression for counting\. ALL is the default\.
APPROXIMATE
When used with APPROXIMATE, a COUNT \( DISTINCT *expression* \) function uses a HyperLogLog algorithm to approximate the number of distinct non\-NULL values in a column or expression\. Queries that use the APPROXIMATE keyword execute much faster, with a low relative error of around 2%\. Approximation is warranted for queries that return a large number of distinct values, in the millions or more per query, or per group, if there is a group by clause\. For smaller sets of distinct values, in the thousands, approximation might be slower than a precise count\. APPROXIMATE can only be used with COUNT \( DISTINCT \)\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COUNT.md |
226b8c1dec8d-0 | The COUNT function supports all argument data types\.
The COUNT function returns BIGINT\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COUNT.md |
d5d433231ef6-0 | Count all of the users from the state of Florida:
```
select count (*) from users where state='FL';
count
-------
510
(1 row)
```
Count all of the unique venue IDs from the EVENT table:
```
select count (distinct venueid) as venues from event;
venues
--------
204
(1 row)
```
Count the number of times each seller listed batches of more than four tickets for sale\. Group the results by seller ID:
```
select count(*), sellerid from listing
where numtickets > 4
group by sellerid
order by 1 desc, 2;
count | sellerid
-------+----------
12 | 6386
11 | 17304
11 | 20123
11 | 25428
...
```
The following examples compare the return values and execution times for COUNT and APPROXIMATE COUNT\.
```
select count(distinct pricepaid) from sales;
count
-------
4528
(1 row)
Time: 48.048 ms | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COUNT.md |
select approximate count(distinct pricepaid) from sales;
count
-------
4541
(1 row)
Time: 21.728 ms
``` | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_COUNT.md |
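Because approximation pays off when each group contains many distinct values, APPROXIMATE COUNT \( DISTINCT \) is often combined with GROUP BY\. The following sketch against the TICKIT sample database counts approximate distinct buyers per event; exact results depend on your data\.
```
select eventid, approximate count(distinct buyerid) as buyers
from sales
group by eventid
order by buyers desc
limit 5;
```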
2d3b0efdcf44-0 | Returns the user name of the current "effective" user of the database, as applicable to checking permissions\. Usually, this user name will be the same as the session user; however, this can occasionally be changed by superusers\.
**Note**
Do not use trailing parentheses when calling CURRENT\_USER\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_USER.md |
e189909b2a83-0 | ```
current_user
``` | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_USER.md |
686a9b8f5073-0 | CURRENT\_USER returns a CHAR or VARCHAR string\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_USER.md |
410f95ed4716-0 | The following query returns the name of the current database user:
```
select current_user;
current_user
--------------
dwuser
(1 row)
``` | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_CURRENT_USER.md |
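Because CURRENT\_USER reflects the effective user for permission checks, you can pass it to permission functions such as HAS\_TABLE\_PRIVILEGE\. The following sketch assumes a table named SALES exists in your search path\.
```
select has_table_privilege(current_user, 'sales', 'select');
```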
0e3aac183c1c-0 | If a query is taking longer than expected, use the following steps to identify and correct issues that might be negatively affecting the query’s performance\. If you aren’t sure what queries in your system might benefit from performance tuning, start by running the diagnostic query in [Identifying queries that are top candidates for tuning](diagnostic-queries-for-query-tuning.md#identify-queries-that-are-top-candidates-for-tuning)\.
1. Make sure your tables are designed according to best practices\. For more information, see [Amazon Redshift best practices for designing tables](c_designing-tables-best-practices.md)\.
1. See if you can delete or archive any unneeded data in your tables\. For example, suppose your queries always target the last 6 months’ worth of data but you have the last 18 months’ worth in your tables\. In this case, you can delete or archive the older data to reduce the number of records that need to be scanned and distributed\.
1. Run the [VACUUM](r_VACUUM_command.md) command on the tables in the query to reclaim space and re\-sort rows\. Running VACUUM helps if the unsorted region is large and the query uses the sort key in a join or in the predicate\.
1. Run the [ANALYZE](r_ANALYZE.md) command on the tables in the query to make sure statistics are up to date\. Running ANALYZE helps if any of the tables in the query have recently changed a lot in size\. If running a full ANALYZE command will take too long, run ANALYZE on a single column to reduce processing time\. This approach still updates the table size statistics; table size is a significant factor in query planning\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-query-analysis-process.md |
0e3aac183c1c-1 | 1. Make sure that your query has been run once for each type of client \(based on what type of connection protocol the client uses\) so that the query is compiled and cached\. This approach speeds up subsequent runs of the query\. For more information, see [Factors affecting query performance](c-query-performance.md)\.
1. Check the [STL\_ALERT\_EVENT\_LOG](r_STL_ALERT_EVENT_LOG.md) table to identify and correct possible issues with your query\. For more information, see [Reviewing query alerts](c-reviewing-query-alerts.md)\.
1. Run the [EXPLAIN](r_EXPLAIN.md) command to get the query plan and use it to optimize the query\. For more information, see [Analyzing the query plan](c-analyzing-the-query-plan.md)\.
1. Use the [SVL\_QUERY\_SUMMARY](r_SVL_QUERY_SUMMARY.md) and [SVL\_QUERY\_REPORT](r_SVL_QUERY_REPORT.md) views to get summary information and use it to optimize the query\. For more information, see [Analyzing the query summary](c-analyzing-the-query-summary.md)\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-query-analysis-process.md |
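As a hedged starting point for the STL\_ALERT\_EVENT\_LOG check described above, the following query lists recent alert entries; the one\-day window and row limit are arbitrary choices\.
```
select query, trim(event) as event, trim(solution) as solution
from stl_alert_event_log
where event_time >= dateadd(day, -1, current_date)
order by query desc
limit 10;
```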
0e3aac183c1c-2 | Sometimes a query that should execute quickly is forced to wait until another, longer\-running query finishes\. In that case, you might have nothing to improve in the query itself, but you can improve overall system performance by creating and using query queues for different types of queries\. To get an idea of queue wait time for your queries, see [Reviewing queue wait times for queries](diagnostic-queries-for-query-tuning.md#review-queue-wait-times-for-queries)\. For more information about configuring query queues, see [Implementing workload management](cm-c-implementing-workload-management.md)\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/c-query-analysis-process.md |
ce7dc92a143f-0 | In this context, *computation* refers to binary mathematical operations: addition, subtraction, multiplication, and division\. This section describes the expected return types for these operations, as well as the specific formula that is applied to determine precision and scale when DECIMAL data types are involved\.
When numeric values are computed during query processing, you might encounter cases where the computation is impossible and the query returns a numeric overflow error\. You might also encounter cases where the scale of computed values varies or is unexpected\. For some operations, you can use explicit casting \(type promotion\) or Amazon Redshift configuration parameters to work around these problems\.
For information about the results of similar computations with SQL functions, see [Aggregate functions](c_Aggregate_Functions.md)\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_numeric_computations201.md |
93f3b77047bc-0 | Given the set of numeric data types supported in Amazon Redshift, the following table shows the expected return types for addition, subtraction, multiplication, and division operations\. The first column on the left side of the table represents the first operand in the calculation, and the top row represents the second operand\.
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_numeric_computations201.html) | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_numeric_computations201.md |
d19a165ab817-0 | The following table summarizes the rules for computing resulting precision and scale when mathematical operations return DECIMAL results\. In this table, `p1` and `s1` represent the precision and scale of the first operand in a calculation and `p2` and `s2` represent the precision and scale of the second operand\. \(Regardless of these calculations, the maximum result precision is 38, and the maximum result scale is 38\.\)
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/redshift/latest/dg/r_numeric_computations201.html)
For example, the PRICEPAID and COMMISSION columns in the SALES table are both DECIMAL\(8,2\) columns\. If you divide PRICEPAID by COMMISSION \(or vice versa\), the formula is applied as follows:
```
Precision = 8-2 + 2 + max(4,2+8-2+1)
= 6 + 2 + 9 = 17
Scale = max(4,2+8-2+1) = 9
Result = DECIMAL(17,9)
```
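You can verify the resulting type by materializing the expression and inspecting the catalog\. The following sketch assumes the TICKIT SALES table and that the new table is in your search path; based on the formula above, the column should come out as NUMERIC\(17,9\)\.
```
create table div_result as
select pricepaid / commission as ratio from sales;

select "column", type
from pg_table_def
where tablename = 'div_result';
```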
The following calculation is the general rule for computing the resulting precision and scale for operations performed on DECIMAL values with set operators such as UNION, INTERSECT, and EXCEPT or functions such as COALESCE and DECODE:
```
Scale = max(s1,s2)
Precision = min(max(p1-s1,p2-s2)+scale,19)
``` | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_numeric_computations201.md |
For example, suppose that a DEC1 table with one DECIMAL\(7,2\) column is combined with a DEC2 table with one DECIMAL\(15,3\) column by a UNION to create a DEC3 table\. The schema of DEC3 shows that the column becomes a NUMERIC\(15,3\) column\.
```
create table dec3 as select * from dec1 union select * from dec2;
```
Result
```
select "column", type, encoding, distkey, sortkey
from pg_table_def where tablename = 'dec3';
column | type | encoding | distkey | sortkey
-------+---------------+----------+---------+---------
c1 | numeric(15,3) | none | f | 0
```
In the above example, the formula is applied as follows:
```
Precision = min(max(7-2,15-3) + max(2,3), 19)
= 12 + 3 = 15
Scale = max(2,3) = 3
Result = DECIMAL(15,3)
``` | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_numeric_computations201.md |
d44e03d36207-0 | For division operations, divide\-by\-zero conditions return errors\.
The scale limit of 100 is applied after the precision and scale are calculated\. If the calculated result scale is greater than 100, division results are scaled as follows:
+ Precision = `precision - (scale - max_scale)`
+ Scale = `max_scale`
If the calculated precision is greater than the maximum precision \(38\), the precision is reduced to 38, and the scale becomes the result of: `max(38 + scale - precision, min(4, 100))` | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_numeric_computations201.md
2bc5de13adcc-0 | Overflow is checked for all numeric computations\. DECIMAL data with a precision of 19 or less is stored as 64\-bit integers\. DECIMAL data with a precision that is greater than 19 is stored as 128\-bit integers\. The maximum precision for all DECIMAL values is 38, and the maximum scale is 37\. Overflow errors occur when a value exceeds these limits, which apply to both intermediate and final result sets:
+ Explicit casting results in runtime overflow errors when specific data values do not fit the requested precision or scale specified by the cast function\. For example, you cannot cast all values from the PRICEPAID column in the SALES table \(a DECIMAL\(8,2\) column\) and return a DECIMAL\(7,3\) result:
```
select pricepaid::decimal(7,3) from sales;
ERROR: Numeric data overflow (result precision)
```
This error occurs because *some* of the larger values in the PRICEPAID column cannot be cast\.
+ Multiplication operations produce results in which the result scale is the sum of the scale of each operand\. If both operands have a scale of 4, for example, the result scale is 8, leaving only 10 digits for the left side of the decimal point\. Therefore, it is relatively easy to run into overflow conditions when multiplying two large numbers that both have significant scale\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_numeric_computations201.md |
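For example, multiplying two DECIMAL\(19,4\) values yields a result scale of 8\. The following sketch illustrates the scale addition with small values; large operands of the same types can trigger the overflow described above\.
```
select 2.0000::decimal(19,4) * 3.0000::decimal(19,4) as product;

product
------------
6.00000000
(1 row)
```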
9b422634a3d3-0 | When one of the operands in a calculation has an INTEGER data type and the other operand is DECIMAL, the INTEGER operand is implicitly cast as a DECIMAL:
+ INT2 \(SMALLINT\) is cast as DECIMAL\(5,0\)
+ INT4 \(INTEGER\) is cast as DECIMAL\(10,0\)
+ INT8 \(BIGINT\) is cast as DECIMAL\(19,0\)
For example, if you multiply SALES\.COMMISSION, a DECIMAL\(8,2\) column, and SALES\.QTYSOLD, a SMALLINT column, this calculation is cast as:
```
DECIMAL(8,2) * DECIMAL(5,0)
``` | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_numeric_computations201.md |
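To make the promotion explicit, you can write the cast yourself\. Both forms of the following sketch should behave identically, assuming the TICKIT SALES table\.
```
-- implicit: qtysold (SMALLINT) is promoted to DECIMAL(5,0)
select commission * qtysold from sales limit 1;

-- explicit equivalent of the implicit promotion
select commission * qtysold::decimal(5,0) from sales limit 1;
```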
f14992630507-0 | Returns the column metadata for a table or view definition\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_GET_COLS.md |
b16f9a195943-0 | ```
pg_get_cols('name')
``` | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_GET_COLS.md |
aa8322e42e92-0 | *name*
The name of an Amazon Redshift table or view\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_GET_COLS.md |
616faf50a816-0 | VARCHAR | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_GET_COLS.md |
e3c0eafad52d-0 | The PG\_GET\_COLS function returns one row for each column in the table or view definition\. The row contains a comma\-separated list with the schema name, relation name, column name, data type, and column number\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_GET_COLS.md |
c8140917e0b8-0 | The following example returns the column metadata for a view named SALES\_VW\.
```
select pg_get_cols('sales_vw');
pg_get_cols
-----------------------------------------------------------
(public,sales_vw,salesid,integer,1)
(public,sales_vw,listid,integer,2)
(public,sales_vw,sellerid,integer,3)
(public,sales_vw,buyerid,integer,4)
(public,sales_vw,eventid,integer,5)
(public,sales_vw,dateid,smallint,6)
(public,sales_vw,qtysold,smallint,7)
(public,sales_vw,pricepaid,"numeric(8,2)",8)
(public,sales_vw,commission,"numeric(8,2)",9)
(public,sales_vw,saletime,"timestamp without time zone",10)
```
The following example returns the column metadata for the SALES\_VW view in table format\.
```
select * from pg_get_cols('sales_vw') | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_GET_COLS.md |
cols(view_schema name, view_name name, col_name name, col_type varchar, col_num int);
view_schema | view_name | col_name | col_type | col_num
------------+-----------+------------+-----------------------------+--------
public | sales_vw | salesid | integer | 1
public | sales_vw | listid | integer | 2
public | sales_vw | sellerid | integer | 3
public | sales_vw | buyerid | integer | 4
public | sales_vw | eventid | integer | 5
public | sales_vw | dateid | smallint | 6
public | sales_vw | qtysold | smallint | 7
public | sales_vw | pricepaid | numeric(8,2) | 8
public | sales_vw | commission | numeric(8,2) | 9
public | sales_vw | saletime | timestamp without time zone | 10
``` | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/PG_GET_COLS.md |
8a88098b3ae4-0 | For Oracle compatibility, Amazon Redshift supports the Oracle outer\-join operator \(\+\) in WHERE clause join conditions\. This operator is intended for use only in defining outer\-join conditions; don't try to use it in other contexts\. Other uses of this operator are silently ignored in most cases\.
An outer join returns all of the rows that the equivalent inner join would return, plus non\-matching rows from one or both tables\. In the FROM clause, you can specify left, right, and full outer joins\. In the WHERE clause, you can specify left and right outer joins only\.
To outer join tables TABLE1 and TABLE2 and return non\-matching rows from TABLE1 \(a left outer join\), specify `TABLE1 LEFT OUTER JOIN TABLE2` in the FROM clause or apply the \(\+\) operator to all joining columns from TABLE2 in the WHERE clause\. For all rows in TABLE1 that have no matching rows in TABLE2, the result of the query contains nulls for any select list expressions that contain columns from TABLE2\.
To produce the same behavior for all rows in TABLE2 that have no matching rows in TABLE1, specify `TABLE1 RIGHT OUTER JOIN TABLE2` in the FROM clause or apply the \(\+\) operator to all joining columns from TABLE1 in the WHERE clause\. | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WHERE_oracle_outer.md |
a5dc810808d8-0 | ```
[ WHERE {
[ table1.column1 = table2.column1(+) ]
[ table1.column1(+) = table2.column1 ]
} ]
```
The first condition is equivalent to:
```
from table1 left outer join table2
on table1.column1=table2.column1
```
The second condition is equivalent to:
```
from table1 right outer join table2
on table1.column1=table2.column1
```
**Note**
The syntax shown here covers the simple case of an equijoin over one pair of joining columns\. However, other types of comparison conditions and multiple pairs of joining columns are also valid\.
For example, the following WHERE clause defines an outer join over two pairs of columns\. The \(\+\) operator must be attached to the same table in both conditions:
```
where table1.col1 > table2.col1(+)
and table1.col2 = table2.col2(+)
``` | https://github.com/siagholami/aws-documentation/tree/main/documents/amazon-redshift-developer-guide/doc_source/r_WHERE_oracle_outer.md |
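For example, using the TICKIT sample tables, the following sketch shows an Oracle\-style left outer join and its ANSI equivalent; both should return every event, with NULL venue names where no venue matches\.
```
-- Oracle-style: (+) on the VENUE side preserves all rows from EVENT
select e.eventname, v.venuename
from event e, venue v
where e.venueid = v.venueid(+);

-- equivalent ANSI syntax
select e.eventname, v.venuename
from event e left outer join venue v
on e.venueid = v.venueid;
```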