diff --git "a/src/processing/output.jsonl" "b/src/processing/output.jsonl" new file mode 100644--- /dev/null +++ "b/src/processing/output.jsonl" @@ -0,0 +1,708 @@ +{"global_id": 0, "doc_id": "wavelength", "chunk_id": "0", "question_id": 1, "question": "What does AWS Wavelength enable developers to do?", "answer_span": "AWS Wavelength enables developers to build applications that require edge computing infrastructure to deliver low latency to mobile devices and end users or increase the resiliency of their existing edge applications.", "chunk": "AWS Wavelength Developer Guide What is AWS Wavelength? AWS Wavelength enables developers to build applications that require edge computing infrastructure to deliver low latency to mobile devices and end users or increase the resiliency of their existing edge applications. Wavelength deploys standard AWS compute and storage services to the edge of communications service providers' (CSP) networks. You can extend a virtual private cloud (VPC) to one or more Wavelength Zones. You can then use AWS resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances to run the applications that require low latency or edge resiliency within the Wavelength Zone, while seamlessly communicating back to your existing AWS services deployed in the parent AWS Region. For more information, see AWS Wavelength. Wavelength concepts The following are the key concepts: • Wavelength — A new type of AWS infrastructure designed to run workloads that require low latency or edge resiliency. • Wavelength Zone — A zone in the carrier location where the Wavelength infrastructure is deployed. Wavelength Zones are associated with an AWS Region. A Wavelength Zone is a logical extension of the Region, and is managed by the control plane in the Region. • VPC — A customer virtual private cloud (VPC) that spans Availability Zones, Local Zones, and Wavelength Zones, and has deployed resources such as Amazon EC2 instances in the subnets that are associated with the zones. • Wavelength subnet — A subnet that you create in a Wavelength Zone. You can create one or more subnets, and then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group"} +{"global_id": 1, "doc_id": "wavelength", "chunk_id": "0", "question_id": 2, "question": "What is a Wavelength Zone?", "answer_span": "Wavelength Zone — A zone in the carrier location where the Wavelength infrastructure is deployed.", "chunk": "AWS Wavelength Developer Guide What is AWS Wavelength? AWS Wavelength enables developers to build applications that require edge computing infrastructure to deliver low latency to mobile devices and end users or increase the resiliency of their existing edge applications. Wavelength deploys standard AWS compute and storage services to the edge of communications service providers' (CSP) networks. You can extend a virtual private cloud (VPC) to one or more Wavelength Zones. You can then use AWS resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances to run the applications that require low latency or edge resiliency within the Wavelength Zone, while seamlessly communicating back to your existing AWS services deployed in the parent AWS Region. For more information, see AWS Wavelength. 
Wavelength concepts The following are the key concepts: • Wavelength — A new type of AWS infrastructure designed to run workloads that require low latency or edge resiliency. • Wavelength Zone — A zone in the carrier location where the Wavelength infrastructure is deployed. Wavelength Zones are associated with an AWS Region. A Wavelength Zone is a logical extension of the Region, and is managed by the control plane in the Region. • VPC — A customer virtual private cloud (VPC) that spans Availability Zones, Local Zones, and Wavelength Zones, and has deployed resources such as Amazon EC2 instances in the subnets that are associated with the zones. • Wavelength subnet — A subnet that you create in a Wavelength Zone. You can create one or more subnets, and then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group"} +{"global_id": 2, "doc_id": "wavelength", "chunk_id": "0", "question_id": 3, "question": "What is the purpose of a carrier gateway?", "answer_span": "A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet.", "chunk": "AWS Wavelength Developer Guide What is AWS Wavelength? AWS Wavelength enables developers to build applications that require edge computing infrastructure to deliver low latency to mobile devices and end users or increase the resiliency of their existing edge applications. Wavelength deploys standard AWS compute and storage services to the edge of communications service providers' (CSP) networks. You can extend a virtual private cloud (VPC) to one or more Wavelength Zones. You can then use AWS resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances to run the applications that require low latency or edge resiliency within the Wavelength Zone, while seamlessly communicating back to your existing AWS services deployed in the parent AWS Region. For more information, see AWS Wavelength. Wavelength concepts The following are the key concepts: • Wavelength — A new type of AWS infrastructure designed to run workloads that require low latency or edge resiliency. • Wavelength Zone — A zone in the carrier location where the Wavelength infrastructure is deployed. Wavelength Zones are associated with an AWS Region. A Wavelength Zone is a logical extension of the Region, and is managed by the control plane in the Region. • VPC — A customer virtual private cloud (VPC) that spans Availability Zones, Local Zones, and Wavelength Zones, and has deployed resources such as Amazon EC2 instances in the subnets that are associated with the zones. • Wavelength subnet — A subnet that you create in a Wavelength Zone. You can create one or more subnets, and then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. 
• Network Border Group"} +{"global_id": 3, "doc_id": "wavelength", "chunk_id": "0", "question_id": 4, "question": "What can you extend to one or more Wavelength Zones?", "answer_span": "You can extend a virtual private cloud (VPC) to one or more Wavelength Zones.", "chunk": "AWS Wavelength Developer Guide What is AWS Wavelength? AWS Wavelength enables developers to build applications that require edge computing infrastructure to deliver low latency to mobile devices and end users or increase the resiliency of their existing edge applications. Wavelength deploys standard AWS compute and storage services to the edge of communications service providers' (CSP) networks. You can extend a virtual private cloud (VPC) to one or more Wavelength Zones. You can then use AWS resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances to run the applications that require low latency or edge resiliency within the Wavelength Zone, while seamlessly communicating back to your existing AWS services deployed in the parent AWS Region. For more information, see AWS Wavelength. Wavelength concepts The following are the key concepts: • Wavelength — A new type of AWS infrastructure designed to run workloads that require low latency or edge resiliency. • Wavelength Zone — A zone in the carrier location where the Wavelength infrastructure is deployed. Wavelength Zones are associated with an AWS Region. A Wavelength Zone is a logical extension of the Region, and is managed by the control plane in the Region. • VPC — A customer virtual private cloud (VPC) that spans Availability Zones, Local Zones, and Wavelength Zones, and has deployed resources such as Amazon EC2 instances in the subnets that are associated with the zones. • Wavelength subnet — A subnet that you create in a Wavelength Zone. You can create one or more subnets, and then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group"} +{"global_id": 4, "doc_id": "wavelength", "chunk_id": "1", "question_id": 1, "question": "What is the purpose of a carrier gateway?", "answer_span": "It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet.", "chunk": "then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group — A unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses. • Wavelength application — An application that you run on an AWS resource in a Wavelength Zone. Wavelength concepts 1 AWS Wavelength Developer Guide AWS resources on Wavelength You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones. You can also use the following: • Amazon EC2 Auto Scaling • Amazon EKS clusters • Amazon ECS clusters • Amazon EC2 Systems Manager • Amazon CloudWatch • AWS CloudTrail • AWS CloudFormation • Application Load Balancer in select Wavelength Zones. For a list of these Zones, see Load balancing. 
The services in Wavelength are part of a VPC that is connected over a reliable connection to an AWS Region for easy access to services running in Regional subnets. Working with Wavelength You can create, access, and manage your EC2 resources, Wavelength Zones, and carrier gateways using any of the following interfaces: • AWS Management Console— Provides a web interface that you can use to access your Wavelength resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs"} +{"global_id": 5, "doc_id": "wavelength", "chunk_id": "1", "question_id": 2, "question": "What can you create in Wavelength Zones?", "answer_span": "You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones.", "chunk": "then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group — A unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses. • Wavelength application — An application that you run on an AWS resource in a Wavelength Zone. Wavelength concepts 1 AWS Wavelength Developer Guide AWS resources on Wavelength You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones. You can also use the following: • Amazon EC2 Auto Scaling • Amazon EKS clusters • Amazon ECS clusters • Amazon EC2 Systems Manager • Amazon CloudWatch • AWS CloudTrail • AWS CloudFormation • Application Load Balancer in select Wavelength Zones. For a list of these Zones, see Load balancing. The services in Wavelength are part of a VPC that is connected over a reliable connection to an AWS Region for easy access to services running in Regional subnets. Working with Wavelength You can create, access, and manage your EC2 resources, Wavelength Zones, and carrier gateways using any of the following interfaces: • AWS Management Console— Provides a web interface that you can use to access your Wavelength resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs"} +{"global_id": 6, "doc_id": "wavelength", "chunk_id": "1", "question_id": 3, "question": "What interface provides a web interface to access Wavelength resources?", "answer_span": "AWS Management Console— Provides a web interface that you can use to access your Wavelength resources.", "chunk": "then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. 
• Network Border Group — A unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses. • Wavelength application — An application that you run on an AWS resource in a Wavelength Zone. Wavelength concepts 1 AWS Wavelength Developer Guide AWS resources on Wavelength You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones. You can also use the following: • Amazon EC2 Auto Scaling • Amazon EKS clusters • Amazon ECS clusters • Amazon EC2 Systems Manager • Amazon CloudWatch • AWS CloudTrail • AWS CloudFormation • Application Load Balancer in select Wavelength Zones. For a list of these Zones, see Load balancing. The services in Wavelength are part of a VPC that is connected over a reliable connection to an AWS Region for easy access to services running in Regional subnets. Working with Wavelength You can create, access, and manage your EC2 resources, Wavelength Zones, and carrier gateways using any of the following interfaces: • AWS Management Console— Provides a web interface that you can use to access your Wavelength resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs"} +{"global_id": 7, "doc_id": "wavelength", "chunk_id": "1", "question_id": 4, "question": "Which operating systems support the AWS Command Line Interface?", "answer_span": "AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux.", "chunk": "then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group — A unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses. • Wavelength application — An application that you run on an AWS resource in a Wavelength Zone. Wavelength concepts 1 AWS Wavelength Developer Guide AWS resources on Wavelength You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones. You can also use the following: • Amazon EC2 Auto Scaling • Amazon EKS clusters • Amazon ECS clusters • Amazon EC2 Systems Manager • Amazon CloudWatch • AWS CloudTrail • AWS CloudFormation • Application Load Balancer in select Wavelength Zones. For a list of these Zones, see Load balancing. The services in Wavelength are part of a VPC that is connected over a reliable connection to an AWS Region for easy access to services running in Regional subnets. Working with Wavelength You can create, access, and manage your EC2 resources, Wavelength Zones, and carrier gateways using any of the following interfaces: • AWS Management Console— Provides a web interface that you can use to access your Wavelength resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux. 
The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs"} +{"global_id": 8, "doc_id": "wavelength", "chunk_id": "2", "question_id": 1, "question": "What operating systems are supported by the services mentioned in the text?", "answer_span": "is supported on Windows, macOS, and Linux.", "chunk": "services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and handling errors. For more information, see AWS SDKs. When you use any of the interfaces for your Wavelength Zones, use the parent Region. AWS resources on Wavelength 2 AWS Wavelength Developer Guide Pricing For more information, see AWS Wavelength Pricing. Use cases for AWS Wavelength Using AWS Wavelength Zones can help you accomplish a variety of goals. This section lists a few to give you an idea of the possibilities. Contents • Online betting and regulated industries • Media and entertainment • Healthcare • Augmented reality (AR) and virtual reality (VR) • Connected vehicles • Smart factories • Real-time gaming Online betting and regulated industries AWS Wavelength provides edge resiliency to help address data residency requirements for regulated industries, such as online sports betting. Using a combination of AWS Wavelength alongside existing AWS hybrid and edge services such as AWS Outposts or AWS Local Zones, you can create highly-available architectures within state or country borders. Media and entertainment Wavelength provides the low latency needed to live stream high-resolution video and high-fidelity audio, and to embed interactive experiences into live video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using"} +{"global_id": 9, "doc_id": "wavelength", "chunk_id": "2", "question_id": 2, "question": "What does AWS SDKs provide?", "answer_span": "Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and handling errors.", "chunk": "services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and handling errors. For more information, see AWS SDKs. When you use any of the interfaces for your Wavelength Zones, use the parent Region. AWS resources on Wavelength 2 AWS Wavelength Developer Guide Pricing For more information, see AWS Wavelength Pricing. 
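The guidance quoted above, that calls for Wavelength Zones go through the parent Region, can be made concrete with a short sketch. This is a minimal, non-authoritative example using the Python SDK (boto3); the us-east-1 Region is an assumed placeholder, and the zone-type filter belongs to the EC2 DescribeAvailabilityZones API.

# Minimal sketch (boto3): Wavelength Zones are managed through the parent
# Region, so the EC2 client targets us-east-1 (an assumed example Region),
# not the zone itself.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# List the Wavelength Zones visible to this account, including zones
# that have not been opted in to yet.
zones = ec2.describe_availability_zones(
    AllAvailabilityZones=True,
    Filters=[{"Name": "zone-type", "Values": ["wavelength-zone"]}],
)
for zone in zones["AvailabilityZones"]:
    print(zone["ZoneName"], zone["OptInStatus"])

Each zone reports an OptInStatus of opted-in or not-opted-in, which feeds the opt-in step shown later.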
Use cases for AWS Wavelength Using AWS Wavelength Zones can help you accomplish a variety of goals. This section lists a few to give you an idea of the possibilities. Contents • Online betting and regulated industries • Media and entertainment • Healthcare • Augmented reality (AR) and virtual reality (VR) • Connected vehicles • Smart factories • Real-time gaming Online betting and regulated industries AWS Wavelength provides edge resiliency to help address data residency requirements for regulated industries, such as online sports betting. Using a combination of AWS Wavelength alongside existing AWS hybrid and edge services such as AWS Outposts or AWS Local Zones, you can create highly-available architectures within state or country borders. Media and entertainment Wavelength provides the low latency needed to live stream high-resolution video and high-fidelity audio, and to embed interactive experiences into live video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using"} +{"global_id": 10, "doc_id": "wavelength", "chunk_id": "2", "question_id": 3, "question": "What are some use cases for AWS Wavelength mentioned in the text?", "answer_span": "Online betting and regulated industries, Media and entertainment, Healthcare, Augmented reality (AR) and virtual reality (VR), Connected vehicles, Smart factories, Real-time gaming.", "chunk": "services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and handling errors. For more information, see AWS SDKs. When you use any of the interfaces for your Wavelength Zones, use the parent Region. AWS resources on Wavelength 2 AWS Wavelength Developer Guide Pricing For more information, see AWS Wavelength Pricing. Use cases for AWS Wavelength Using AWS Wavelength Zones can help you accomplish a variety of goals. This section lists a few to give you an idea of the possibilities. Contents • Online betting and regulated industries • Media and entertainment • Healthcare • Augmented reality (AR) and virtual reality (VR) • Connected vehicles • Smart factories • Real-time gaming Online betting and regulated industries AWS Wavelength provides edge resiliency to help address data residency requirements for regulated industries, such as online sports betting. Using a combination of AWS Wavelength alongside existing AWS hybrid and edge services such as AWS Outposts or AWS Local Zones, you can create highly-available architectures within state or country borders. Media and entertainment Wavelength provides the low latency needed to live stream high-resolution video and high-fidelity audio, and to embed interactive experiences into live video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. 
Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using"} +{"global_id": 11, "doc_id": "wavelength", "chunk_id": "2", "question_id": 4, "question": "How does AWS Wavelength help online betting and regulated industries?", "answer_span": "AWS Wavelength provides edge resiliency to help address data residency requirements for regulated industries, such as online sports betting.", "chunk": "services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and handling errors. For more information, see AWS SDKs. When you use any of the interfaces for your Wavelength Zones, use the parent Region. AWS resources on Wavelength 2 AWS Wavelength Developer Guide Pricing For more information, see AWS Wavelength Pricing. Use cases for AWS Wavelength Using AWS Wavelength Zones can help you accomplish a variety of goals. This section lists a few to give you an idea of the possibilities. Contents • Online betting and regulated industries • Media and entertainment • Healthcare • Augmented reality (AR) and virtual reality (VR) • Connected vehicles • Smart factories • Real-time gaming Online betting and regulated industries AWS Wavelength provides edge resiliency to help address data residency requirements for regulated industries, such as online sports betting. Using a combination of AWS Wavelength alongside existing AWS hybrid and edge services such as AWS Outposts or AWS Local Zones, you can create highly-available architectures within state or country borders. Media and entertainment Wavelength provides the low latency needed to live stream high-resolution video and high-fidelity audio, and to embed interactive experiences into live video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using"} +{"global_id": 12, "doc_id": "wavelength", "chunk_id": "3", "question_id": 1, "question": "What do real-time video analytics provide for live events?", "answer_span": "Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience.", "chunk": "video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using AWS Wavelength to host the remote rendering engine, doctors can experience an immersive training experience without procuring the often-required expensive equipment to do so. 
Augmented reality (AR) and virtual reality (VR) By accessing compute resources on AWS Wavelength, AR/VR applications can reduce the Motion to Photon (MTP) latencies to the benchmark that is needed to offer a realistic customer experience. When you use AWS Wavelength, you can offer AR/VR in locations where it is not possible to run local system servers. Connected vehicles Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling functionality such as intelligent driving, real-time HD maps, and increased road safety. Low latency access to the compute infrastructure that's needed to run data processing and analytics on AWS Wavelength enables real-time monitoring of data from sensors on the vehicle. This allows for secure connectivity, in-car telematics, and autonomous driving. Smart factories Industrial automation applications use ML inference at the edge to analyze images and videos to detect quality issues on fast moving assembly lines and to trigger actions that address the issues. With AWS Wavelength, these applications can be deployed without having to use expensive, GPUbased servers on the factory floor. Real-time gaming Real-time game streaming depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength"} +{"global_id": 13, "doc_id": "wavelength", "chunk_id": "3", "question_id": 2, "question": "How does AWS Wavelength benefit medical training providers?", "answer_span": "Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more.", "chunk": "video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using AWS Wavelength to host the remote rendering engine, doctors can experience an immersive training experience without procuring the often-required expensive equipment to do so. Augmented reality (AR) and virtual reality (VR) By accessing compute resources on AWS Wavelength, AR/VR applications can reduce the Motion to Photon (MTP) latencies to the benchmark that is needed to offer a realistic customer experience. When you use AWS Wavelength, you can offer AR/VR in locations where it is not possible to run local system servers. Connected vehicles Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling functionality such as intelligent driving, real-time HD maps, and increased road safety. Low latency access to the compute infrastructure that's needed to run data processing and analytics on AWS Wavelength enables real-time monitoring of data from sensors on the vehicle. This allows for secure connectivity, in-car telematics, and autonomous driving. Smart factories Industrial automation applications use ML inference at the edge to analyze images and videos to detect quality issues on fast moving assembly lines and to trigger actions that address the issues. 
With AWS Wavelength, these applications can be deployed without having to use expensive, GPUbased servers on the factory floor. Real-time gaming Real-time game streaming depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength"} +{"global_id": 14, "doc_id": "wavelength", "chunk_id": "3", "question_id": 3, "question": "What is the significance of low latency in real-time gaming?", "answer_span": "Real-time game streaming depends on low latency to preserve the user experience.", "chunk": "video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using AWS Wavelength to host the remote rendering engine, doctors can experience an immersive training experience without procuring the often-required expensive equipment to do so. Augmented reality (AR) and virtual reality (VR) By accessing compute resources on AWS Wavelength, AR/VR applications can reduce the Motion to Photon (MTP) latencies to the benchmark that is needed to offer a realistic customer experience. When you use AWS Wavelength, you can offer AR/VR in locations where it is not possible to run local system servers. Connected vehicles Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling functionality such as intelligent driving, real-time HD maps, and increased road safety. Low latency access to the compute infrastructure that's needed to run data processing and analytics on AWS Wavelength enables real-time monitoring of data from sensors on the vehicle. This allows for secure connectivity, in-car telematics, and autonomous driving. Smart factories Industrial automation applications use ML inference at the edge to analyze images and videos to detect quality issues on fast moving assembly lines and to trigger actions that address the issues. With AWS Wavelength, these applications can be deployed without having to use expensive, GPUbased servers on the factory floor. Real-time gaming Real-time game streaming depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength"} +{"global_id": 15, "doc_id": "wavelength", "chunk_id": "3", "question_id": 4, "question": "What functionality does Cellular Vehicle-to-Everything (C-V2X) enable?", "answer_span": "Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling functionality such as intelligent driving, real-time HD maps, and increased road safety.", "chunk": "video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. 
Pricing 3 AWS Wavelength Developer Guide Using AWS Wavelength to host the remote rendering engine, doctors can experience an immersive training experience without procuring the often-required expensive equipment to do so. Augmented reality (AR) and virtual reality (VR) By accessing compute resources on AWS Wavelength, AR/VR applications can reduce the Motion to Photon (MTP) latencies to the benchmark that is needed to offer a realistic customer experience. When you use AWS Wavelength, you can offer AR/VR in locations where it is not possible to run local system servers. Connected vehicles Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling functionality such as intelligent driving, real-time HD maps, and increased road safety. Low latency access to the compute infrastructure that's needed to run data processing and analytics on AWS Wavelength enables real-time monitoring of data from sensors on the vehicle. This allows for secure connectivity, in-car telematics, and autonomous driving. Smart factories Industrial automation applications use ML inference at the edge to analyze images and videos to detect quality issues on fast moving assembly lines and to trigger actions that address the issues. With AWS Wavelength, these applications can be deployed without having to use expensive, GPUbased servers on the factory floor. Real-time gaming Real-time game streaming depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength"} +{"global_id": 16, "doc_id": "wavelength", "chunk_id": "4", "question_id": 1, "question": "What is the purpose of AWS Wavelength?", "answer_span": "depends on low latency to preserve the user experience.", "chunk": "depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength works The following diagram demonstrates how you can create a subnet that uses resources in a communications service provider (CSP) network at a specific location. For resources that must be deployed to the Wavelength Zone, first opt in to the Wavelength Zone, and then create resources in the Wavelength Zone. Contents • VPCs • Subnets • Carrier gateways • Carrier IP address • Routing • DNS • Maximum transmission unit VPCs After you create a VPC in a Region, create a subnet in a Wavelength Zone that is associated with the VPC. In addition to the Wavelength Zone, you can create resources in all of the Availability Zones and Local Zones that are associated with the VPC. VPCs 5 AWS Wavelength Developer Guide You have control over the VPC networking components, such as IP address assignment, subnets, and route table creation. VPCs that contain a subnet in a Wavelength Zone can connect to a carrier gateway. A carrier gateway allows you to connect to the following resources: • 4G/LTE and 5G devices on the telecommunication carrier network • Internet access including fixed wireless access for select Wavelength Zone partners. For more information, see Multi-access AWS Wavelength. 
• Outbound traffic to public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. AWS recommends that you configure custom"} +{"global_id": 17, "doc_id": "wavelength", "chunk_id": "4", "question_id": 2, "question": "What types of devices can connect to a carrier gateway?", "answer_span": "A carrier gateway allows you to connect to the following resources: • 4G/LTE and 5G devices on the telecommunication carrier network", "chunk": "depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength works The following diagram demonstrates how you can create a subnet that uses resources in a communications service provider (CSP) network at a specific location. For resources that must be deployed to the Wavelength Zone, first opt in to the Wavelength Zone, and then create resources in the Wavelength Zone. Contents • VPCs • Subnets • Carrier gateways • Carrier IP address • Routing • DNS • Maximum transmission unit VPCs After you create a VPC in a Region, create a subnet in a Wavelength Zone that is associated with the VPC. In addition to the Wavelength Zone, you can create resources in all of the Availability Zones and Local Zones that are associated with the VPC. VPCs 5 AWS Wavelength Developer Guide You have control over the VPC networking components, such as IP address assignment, subnets, and route table creation. VPCs that contain a subnet in a Wavelength Zone can connect to a carrier gateway. A carrier gateway allows you to connect to the following resources: • 4G/LTE and 5G devices on the telecommunication carrier network • Internet access including fixed wireless access for select Wavelength Zone partners. For more information, see Multi-access AWS Wavelength. • Outbound traffic to public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. AWS recommends that you configure custom"} +{"global_id": 18, "doc_id": "wavelength", "chunk_id": "4", "question_id": 3, "question": "What must you do before creating resources in the Wavelength Zone?", "answer_span": "first opt in to the Wavelength Zone, and then create resources in the Wavelength Zone.", "chunk": "depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength works The following diagram demonstrates how you can create a subnet that uses resources in a communications service provider (CSP) network at a specific location. For resources that must be deployed to the Wavelength Zone, first opt in to the Wavelength Zone, and then create resources in the Wavelength Zone. 
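The opt-in step just described can be sketched with boto3. The zone group name us-east-1-wl1 is an assumed example; substitute the GroupName that describe_availability_zones reports for your target Wavelength Zone, and make the call against the parent Region.

# Minimal sketch (boto3): opt in to a Wavelength Zone group before
# creating any resources in its zones. "us-east-1-wl1" is an assumed
# example group name.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.modify_availability_zone_group(
    GroupName="us-east-1-wl1",
    OptInStatus="opted-in",
)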
Contents • VPCs • Subnets • Carrier gateways • Carrier IP address • Routing • DNS • Maximum transmission unit VPCs After you create a VPC in a Region, create a subnet in a Wavelength Zone that is associated with the VPC. In addition to the Wavelength Zone, you can create resources in all of the Availability Zones and Local Zones that are associated with the VPC. VPCs 5 AWS Wavelength Developer Guide You have control over the VPC networking components, such as IP address assignment, subnets, and route table creation. VPCs that contain a subnet in a Wavelength Zone can connect to a carrier gateway. A carrier gateway allows you to connect to the following resources: • 4G/LTE and 5G devices on the telecommunication carrier network • Internet access including fixed wireless access for select Wavelength Zone partners. For more information, see Multi-access AWS Wavelength. • Outbound traffic to public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. AWS recommends that you configure custom"} +{"global_id": 19, "doc_id": "wavelength", "chunk_id": "4", "question_id": 4, "question": "What does any subnet created in a Wavelength Zone inherit?", "answer_span": "Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route.", "chunk": "depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength works The following diagram demonstrates how you can create a subnet that uses resources in a communications service provider (CSP) network at a specific location. For resources that must be deployed to the Wavelength Zone, first opt in to the Wavelength Zone, and then create resources in the Wavelength Zone. Contents • VPCs • Subnets • Carrier gateways • Carrier IP address • Routing • DNS • Maximum transmission unit VPCs After you create a VPC in a Region, create a subnet in a Wavelength Zone that is associated with the VPC. In addition to the Wavelength Zone, you can create resources in all of the Availability Zones and Local Zones that are associated with the VPC. VPCs 5 AWS Wavelength Developer Guide You have control over the VPC networking components, such as IP address assignment, subnets, and route table creation. VPCs that contain a subnet in a Wavelength Zone can connect to a carrier gateway. A carrier gateway allows you to connect to the following resources: • 4G/LTE and 5G devices on the telecommunication carrier network • Internet access including fixed wireless access for select Wavelength Zone partners. For more information, see Multi-access AWS Wavelength. • Outbound traffic to public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. 
AWS recommends that you configure custom"} +{"global_id": 20, "doc_id": "wavelength", "chunk_id": "5", "question_id": 1, "question": "What does any subnet created in a Wavelength Zone inherit?", "answer_span": "Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route.", "chunk": "public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. AWS recommends that you configure custom route tables for your subnets in Wavelength Zones. The destinations are the same destinations as a subnet in an Availability Zone or Local Zone, with the addition of a carrier gateway. For more information, see the section called “Routing”. Carrier gateways A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and internet. There is no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your Wavelength Zone and the telecommunication carrier, and devices on the telecommunication carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Subnets 6 AWS Wavelength Developer Guide Carrier IP address A Carrier IP address is the address that you assign to a network interface, which resides in a subnet in a Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. Traffic from the telecommunication carrier network routes through"} +{"global_id": 21, "doc_id": "wavelength", "chunk_id": "5", "question_id": 2, "question": "What does the local route enable?", "answer_span": "The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone.", "chunk": "public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. AWS recommends that you configure custom route tables for your subnets in Wavelength Zones. The destinations are the same destinations as a subnet in an Availability Zone or Local Zone, with the addition of a carrier gateway. For more information, see the section called “Routing”. Carrier gateways A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and internet. There is no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. 
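To illustrate the subnet step, here is a minimal boto3 sketch that extends an existing VPC into a Wavelength Zone. The VPC ID and CIDR block are placeholder assumptions; the zone name reuses the guide's us-east-1-wl1-bos-wlz-1 example.

# Minimal sketch (boto3): extend an existing VPC into a Wavelength Zone
# by creating a subnet there. The VPC ID and CIDR are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",               # placeholder VPC in the parent Region
    CidrBlock="10.0.2.0/24",                     # placeholder CIDR within the VPC range
    AvailabilityZone="us-east-1-wl1-bos-wlz-1",  # the Wavelength Zone
)
print(subnet["Subnet"]["SubnetId"])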
The carrier gateway provides connectivity between your Wavelength Zone and the telecommunication carrier, and devices on the telecommunication carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Subnets 6 AWS Wavelength Developer Guide Carrier IP address A Carrier IP address is the address that you assign to a network interface, which resides in a subnet in a Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. Traffic from the telecommunication carrier network routes through"} +{"global_id": 22, "doc_id": "wavelength", "chunk_id": "5", "question_id": 3, "question": "What are the two purposes of a carrier gateway?", "answer_span": "A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and internet.", "chunk": "public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. AWS recommends that you configure custom route tables for your subnets in Wavelength Zones. The destinations are the same destinations as a subnet in an Availability Zone or Local Zone, with the addition of a carrier gateway. For more information, see the section called “Routing”. Carrier gateways A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and internet. There is no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your Wavelength Zone and the telecommunication carrier, and devices on the telecommunication carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Subnets 6 AWS Wavelength Developer Guide Carrier IP address A Carrier IP address is the address that you assign to a network interface, which resides in a subnet in a Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. 
Traffic from the telecommunication carrier network routes through"} +{"global_id": 23, "doc_id": "wavelength", "chunk_id": "5", "question_id": 4, "question": "What does the carrier gateway perform for the Wavelength instances' IP addresses?", "answer_span": "The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group.", "chunk": "public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. AWS recommends that you configure custom route tables for your subnets in Wavelength Zones. The destinations are the same destinations as a subnet in an Availability Zone or Local Zone, with the addition of a carrier gateway. For more information, see the section called “Routing”. Carrier gateways A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and internet. There is no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your Wavelength Zone and the telecommunication carrier, and devices on the telecommunication carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Subnets 6 AWS Wavelength Developer Guide Carrier IP address A Carrier IP address is the address that you assign to a network interface, which resides in a subnet in a Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. Traffic from the telecommunication carrier network routes through"} +{"global_id": 24, "doc_id": "wavelength", "chunk_id": "6", "question_id": 1, "question": "What does the carrier gateway use to translate the address?", "answer_span": "The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination.", "chunk": "Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. Traffic from the telecommunication carrier network routes through the carrier gateway. You allocate a Carrier IP address from a network border group, which is a unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses, for example, us-east-1-wl1-bos-wlz-1. 
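The carrier gateway and Carrier IP address just described can also be sketched with boto3. The VPC ID is a placeholder assumption, and the network border group name reuses the example from the text; treat the CarrierIp response field as illustrative rather than authoritative.

# Minimal sketch (boto3): create a carrier gateway for a VPC that has a
# Wavelength subnet, then allocate a Carrier IP address from the zone's
# network border group. IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

cagw = ec2.create_carrier_gateway(VpcId="vpc-0123456789abcdef0")
carrier_gateway_id = cagw["CarrierGateway"]["CarrierGatewayId"]

# The Carrier IP comes from the network border group that advertises
# IP addresses for the Wavelength Zone.
addr = ec2.allocate_address(
    Domain="vpc",
    NetworkBorderGroup="us-east-1-wl1-bos-wlz-1",
)
print(carrier_gateway_id, addr["CarrierIp"])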
Routing You can set the carrier gateway as a destination in a route table for the following resources: • VPCs that contain subnets in a Wavelength Zone • Subnets in Wavelength Zones Create a custom route table for the subnets in the Wavelength Zones so that the default route goes to the carrier gateway, which then sends traffic to the internet and telecommunication carrier network. Example: Carrier gateway routing to the public internet Consider a scenario with the following configuration: • A VPC with Availability Zones and a Wavelength Zone • A subnet in the Wavelength Zone • An EC2 instance in the subnet in the Wavelength Zone • A Carrier IP address for the network interface associated with the EC2 instance • An IP address association that maps the private IP address of the EC2 instance to the Carrier IP address Carrier IP address 7 AWS Wavelength Developer Guide You need the following entries in the Wavelength subnet route table. Destination Target Notes VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about"} +{"global_id": 25, "doc_id": "wavelength", "chunk_id": "6", "question_id": 2, "question": "From where do you allocate a Carrier IP address?", "answer_span": "You allocate a Carrier IP address from a network border group, which is a unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses.", "chunk": "Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. Traffic from the telecommunication carrier network routes through the carrier gateway. You allocate a Carrier IP address from a network border group, which is a unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses, for example, us-east-1-wl1-bos-wlz-1. Routing You can set the carrier gateway as a destination in a route table for the following resources: • VPCs that contain subnets in a Wavelength Zone • Subnets in Wavelength Zones Create a custom route table for the subnets in the Wavelength Zones so that the default route goes to the carrier gateway, which then sends traffic to the internet and telecommunication carrier network. Example: Carrier gateway routing to the public internet Consider a scenario with the following configuration: • A VPC with Availability Zones and a Wavelength Zone • A subnet in the Wavelength Zone • An EC2 instance in the subnet in the Wavelength Zone • A Carrier IP address for the network interface associated with the EC2 instance • An IP address association that maps the private IP address of the EC2 instance to the Carrier IP address Carrier IP address 7 AWS Wavelength Developer Guide You need the following entries in the Wavelength subnet route table. Destination Target Notes VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. 
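A hedged boto3 sketch of the route table described above follows: the VPC CIDR local route is created automatically with the table, and the 0.0.0.0/0 default route targets the carrier gateway. All resource IDs are placeholders.

# Minimal sketch (boto3): custom route table for a Wavelength subnet.
# The VPC-CIDR "local" route is created automatically with the table;
# the 0.0.0.0/0 route sends internet-bound traffic to the carrier gateway.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

rtb = ec2.create_route_table(VpcId="vpc-0123456789abcdef0")  # placeholder VPC
route_table_id = rtb["RouteTable"]["RouteTableId"]

ec2.create_route(
    RouteTableId=route_table_id,
    DestinationCidrBlock="0.0.0.0/0",
    CarrierGatewayId="cagw-0123456789abcdef0",  # placeholder carrier gateway
)
ec2.associate_route_table(
    RouteTableId=route_table_id,
    SubnetId="subnet-0123456789abcdef0",        # placeholder Wavelength subnet
)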
Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about"} +{"global_id": 26, "doc_id": "wavelength", "chunk_id": "6", "question_id": 3, "question": "What must you create for the subnets in the Wavelength Zones?", "answer_span": "Create a custom route table for the subnets in the Wavelength Zones so that the default route goes to the carrier gateway.", "chunk": "Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. Traffic from the telecommunication carrier network routes through the carrier gateway. You allocate a Carrier IP address from a network border group, which is a unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses, for example, us-east-1-wl1-bos-wlz-1. Routing You can set the carrier gateway as a destination in a route table for the following resources: • VPCs that contain subnets in a Wavelength Zone • Subnets in Wavelength Zones Create a custom route table for the subnets in the Wavelength Zones so that the default route goes to the carrier gateway, which then sends traffic to the internet and telecommunication carrier network. Example: Carrier gateway routing to the public internet Consider a scenario with the following configuration: • A VPC with Availability Zones and a Wavelength Zone • A subnet in the Wavelength Zone • An EC2 instance in the subnet in the Wavelength Zone • A Carrier IP address for the network interface associated with the EC2 instance • An IP address association that maps the private IP address of the EC2 instance to the Carrier IP address Carrier IP address 7 AWS Wavelength Developer Guide You need the following entries in the Wavelength subnet route table. Destination Target Notes VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about"} +{"global_id": 27, "doc_id": "wavelength", "chunk_id": "6", "question_id": 4, "question": "What does the carrier gateway provide access to?", "answer_span": "The carrier gateway provides access to the internet from your Wavelength subnets.", "chunk": "Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. Traffic from the telecommunication carrier network routes through the carrier gateway. You allocate a Carrier IP address from a network border group, which is a unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses, for example, us-east-1-wl1-bos-wlz-1. 
Routing You can set the carrier gateway as a destination in a route table for the following resources: • VPCs that contain subnets in a Wavelength Zone • Subnets in Wavelength Zones Create a custom route table for the subnets in the Wavelength Zones so that the default route goes to the carrier gateway, which then sends traffic to the internet and telecommunication carrier network. Example: Carrier gateway routing to the public internet Consider a scenario with the following configuration: • A VPC with Availability Zones and a Wavelength Zone • A subnet in the Wavelength Zone • An EC2 instance in the subnet in the Wavelength Zone • A Carrier IP address for the network interface associated with the EC2 instance • An IP address association that maps the private IP address of the EC2 instance to the Carrier IP address Carrier IP address 7 AWS Wavelength Developer Guide You need the following entries in the Wavelength subnet route table. Destination Target Notes VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about"} +{"global_id": 28, "doc_id": "wavelength", "chunk_id": "7", "question_id": 1, "question": "What does the route 0.0.0.0/0 allow for?", "answer_span": "The Carrier IP address provides internet connectivity through the carrier gateway.", "chunk": "VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about protocol considerations, see the section called “Networking considerations”. Traffic initiated from the EC2 instance for the internet uses the 0.0.0.0/0 route to route traffic to the carrier gateway. The carrier gateway maps the EC2 instance IP address to the Carrier IP address, and then sends the traffic to the telecommunication carrier. Example: Carrier gateway routing to the public internet 8 AWS Wavelength Developer Guide DNS EC2 instances use EC2 DNS to resolve domain names to IP addresses. Route 53 supports DNS features, such as domain registration, and DNS routing. Both public and private hosted Wavelength Zones are supported for routing traffic to specific domains. Route 53 resolvers are hosted in the Region. You can also use your own DNS services to resolve domain names. Maximum transmission unit Generally, the maximum transmission unit (MTU) is as follows: • 9001 bytes between EC2 instances in the same Wavelength Zone. • 1500 bytes between carrier gateway and a Wavelength Zone. • 1500 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a public IP address. • 1300 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a private IP address. DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. 
• A VPC in your Region • A carrier gateway • A public"} +{"global_id": 29, "doc_id": "wavelength", "chunk_id": "7", "question_id": 2, "question": "What does the carrier gateway provide access to?", "answer_span": "The carrier gateway provides access to the internet from your Wavelength subnets.", "chunk": "VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about protocol considerations, see the section called “Networking considerations”. Traffic initiated from the EC2 instance for the internet uses the 0.0.0.0/0 route to route traffic to the carrier gateway. The carrier gateway maps the EC2 instance IP address to the Carrier IP address, and then sends the traffic to the telecommunication carrier. Example: Carrier gateway routing to the public internet 8 AWS Wavelength Developer Guide DNS EC2 instances use EC2 DNS to resolve domain names to IP addresses. Route 53 supports DNS features, such as domain registration, and DNS routing. Both public and private hosted Wavelength Zones are supported for routing traffic to specific domains. Route 53 resolvers are hosted in the Region. You can also use your own DNS services to resolve domain names. Maximum transmission unit Generally, the maximum transmission unit (MTU) is as follows: • 9001 bytes between EC2 instances in the same Wavelength Zone. • 1500 bytes between carrier gateway and a Wavelength Zone. • 1500 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a public IP address. • 1300 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a private IP address. DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public"} +{"global_id": 30, "doc_id": "wavelength", "chunk_id": "7", "question_id": 3, "question": "What is the maximum transmission unit (MTU) between EC2 instances in the same Wavelength Zone?", "answer_span": "Generally, the maximum transmission unit (MTU) is as follows: • 9001 bytes between EC2 instances in the same Wavelength Zone.", "chunk": "VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about protocol considerations, see the section called “Networking considerations”. Traffic initiated from the EC2 instance for the internet uses the 0.0.0.0/0 route to route traffic to the carrier gateway. The carrier gateway maps the EC2 instance IP address to the Carrier IP address, and then sends the traffic to the telecommunication carrier. Example: Carrier gateway routing to the public internet 8 AWS Wavelength Developer Guide DNS EC2 instances use EC2 DNS to resolve domain names to IP addresses. Route 53 supports DNS features, such as domain registration, and DNS routing. 
Both public and private hosted Wavelength Zones are supported for routing traffic to specific domains. Route 53 resolvers are hosted in the Region. You can also use your own DNS services to resolve domain names. Maximum transmission unit Generally, the maximum transmission unit (MTU) is as follows: • 9001 bytes between EC2 instances in the same Wavelength Zone. • 1500 bytes between carrier gateway and a Wavelength Zone. • 1500 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a public IP address. • 1300 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a private IP address. DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public"} +{"global_id": 31, "doc_id": "wavelength", "chunk_id": "7", "question_id": 4, "question": "What resources are needed to get started using AWS Wavelength?", "answer_span": "The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public", "chunk": "VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about protocol considerations, see the section called “Networking considerations”. Traffic initiated from the EC2 instance for the internet uses the 0.0.0.0/0 route to route traffic to the carrier gateway. The carrier gateway maps the EC2 instance IP address to the Carrier IP address, and then sends the traffic to the telecommunication carrier. Example: Carrier gateway routing to the public internet 8 AWS Wavelength Developer Guide DNS EC2 instances use EC2 DNS to resolve domain names to IP addresses. Route 53 supports DNS features, such as domain registration, and DNS routing. Both public and private hosted Wavelength Zones are supported for routing traffic to specific domains. Route 53 resolvers are hosted in the Region. You can also use your own DNS services to resolve domain names. Maximum transmission unit Generally, the maximum transmission unit (MTU) is as follows: • 9001 bytes between EC2 instances in the same Wavelength Zone. • 1500 bytes between carrier gateway and a Wavelength Zone. • 1500 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a public IP address. • 1300 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a private IP address. DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public"} +{"global_id": 32, "doc_id": "wavelength", "chunk_id": "8", "question_id": 1, "question": "What is the first step to get started with AWS Wavelength?", "answer_span": "Step 1: Opt in to Wavelength Zones", "chunk": "Region when the traffic uses a private IP address. 
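A hedged sketch for checking these MTU values from a Linux instance in the Wavelength Zone; the interface name and the Region-side address are assumptions:
ip link show eth0
sudo ip link set dev eth0 mtu 1300
tracepath 10.0.1.10
tracepath reports the discovered path MTU, which you can compare against the 1300-byte limit for private-IP traffic to the Region.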
DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public subnet in an Availability Zone in your Region • An instance in the public subnet • An instance in the Wavelength Zone subnet with a Carrier IP address Tasks • Step 1: Opt in to Wavelength Zones • Step 2: Configure your network • Step 3: Launch an instance in your Availability Zone public subnet 10 AWS Wavelength Developer Guide • Step 4: Launch an instance in the Wavelength zone • Step 5: Test the connectivity Step 1: Opt in to Wavelength Zones Before you specify a Wavelength Zone for a resource or service, you must opt in to the zone. Prerequisites • Some AWS resources are not available in all Regions. Make sure that you can create the resources that you need in the desired Region or Wavelength Zone before launching an instance in a specific Wavelength Zone. • Before you begin, review Quotas and considerations, which includes information about available Wavelength Zones, service differences, and Service Quotas. You should also speak with your mobile operator about mobile service plans and any additional requirements. To opt in to Wavelength Zone using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the Region selector in the navigation bar, select the Region for the Wavelength Zone. 3. On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. To enable Wavelength Zones using the AWS CLI"} +{"global_id": 33, "doc_id": "wavelength", "chunk_id": "8", "question_id": 2, "question": "What must you do before specifying a Wavelength Zone for a resource?", "answer_span": "Before you specify a Wavelength Zone for a resource or service, you must opt in to the zone.", "chunk": "Region when the traffic uses a private IP address. DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public subnet in an Availability Zone in your Region • An instance in the public subnet • An instance in the Wavelength Zone subnet with a Carrier IP address Tasks • Step 1: Opt in to Wavelength Zones • Step 2: Configure your network • Step 3: Launch an instance in your Availability Zone public subnet 10 AWS Wavelength Developer Guide • Step 4: Launch an instance in the Wavelength zone • Step 5: Test the connectivity Step 1: Opt in to Wavelength Zones Before you specify a Wavelength Zone for a resource or service, you must opt in to the zone. Prerequisites • Some AWS resources are not available in all Regions. Make sure that you can create the resources that you need in the desired Region or Wavelength Zone before launching an instance in a specific Wavelength Zone. • Before you begin, review Quotas and considerations, which includes information about available Wavelength Zones, service differences, and Service Quotas. You should also speak with your mobile operator about mobile service plans and any additional requirements. To opt in to Wavelength Zone using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the Region selector in the navigation bar, select the Region for the Wavelength Zone. 3. 
On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. To enable Wavelength Zones using the AWS CLI"} +{"global_id": 34, "doc_id": "wavelength", "chunk_id": "8", "question_id": 3, "question": "What should you review before launching an instance in a specific Wavelength Zone?", "answer_span": "Before you begin, review Quotas and considerations, which includes information about available Wavelength Zones, service differences, and Service Quotas.", "chunk": "Region when the traffic uses a private IP address. DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public subnet in an Availability Zone in your Region • An instance in the public subnet • An instance in the Wavelength Zone subnet with a Carrier IP address Tasks • Step 1: Opt in to Wavelength Zones • Step 2: Configure your network • Step 3: Launch an instance in your Availability Zone public subnet 10 AWS Wavelength Developer Guide • Step 4: Launch an instance in the Wavelength zone • Step 5: Test the connectivity Step 1: Opt in to Wavelength Zones Before you specify a Wavelength Zone for a resource or service, you must opt in to the zone. Prerequisites • Some AWS resources are not available in all Regions. Make sure that you can create the resources that you need in the desired Region or Wavelength Zone before launching an instance in a specific Wavelength Zone. • Before you begin, review Quotas and considerations, which includes information about available Wavelength Zones, service differences, and Service Quotas. You should also speak with your mobile operator about mobile service plans and any additional requirements. To opt in to Wavelength Zone using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the Region selector in the navigation bar, select the Region for the Wavelength Zone. 3. On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. To enable Wavelength Zones using the AWS CLI"} +{"global_id": 35, "doc_id": "wavelength", "chunk_id": "8", "question_id": 4, "question": "Where can you find the Amazon EC2 console?", "answer_span": "Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.", "chunk": "Region when the traffic uses a private IP address. DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public subnet in an Availability Zone in your Region • An instance in the public subnet • An instance in the Wavelength Zone subnet with a Carrier IP address Tasks • Step 1: Opt in to Wavelength Zones • Step 2: Configure your network • Step 3: Launch an instance in your Availability Zone public subnet 10 AWS Wavelength Developer Guide • Step 4: Launch an instance in the Wavelength zone • Step 5: Test the connectivity Step 1: Opt in to Wavelength Zones Before you specify a Wavelength Zone for a resource or service, you must opt in to the zone. Prerequisites • Some AWS resources are not available in all Regions. 
Make sure that you can create the resources that you need in the desired Region or Wavelength Zone before launching an instance in a specific Wavelength Zone. • Before you begin, review Quotas and considerations, which includes information about available Wavelength Zones, service differences, and Service Quotas. You should also speak with your mobile operator about mobile service plans and any additional requirements. To opt in to Wavelength Zone using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the Region selector in the navigation bar, select the Region for the Wavelength Zone. 3. On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. To enable Wavelength Zones using the AWS CLI"} +{"global_id": 36, "doc_id": "wavelength", "chunk_id": "9", "question_id": 1, "question": "What is the first step to enable Wavelength Zones using the AWS CLI?", "answer_span": "To do so, use the modify-availability-zone-group command.", "chunk": "bar, select the Region for the Wavelength Zone. 3. On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. 
To enable Wavelength Zones using the AWS CLI Alternatively, use the AWS CLI to enable Wavelength Zones. To do so, use the modify-availability-zone-group command. Step 2: Configure your network After you opt in to the Wavelength Zone, create a VPC, a carrier gateway, and a public subnet in the Availability Zone. Step 1: Opt in to Wavelength Zones 11 AWS Wavelength Developer Guide Tasks • Create a VPC • Create a carrier gateway and a subnet associated with the Wavelength Zone • Create a public subnet in an Availability Zone Create a VPC Create a VPC to extend to your Wavelength Zone. To create a VPC using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. Choose Create VPC. 3. For Resources to create, choose VPC only. 4. For Name tag, optionally provide a name for your VPC. Doing so creates the tag Name=value. 5. For IPv4 CIDR block, specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Note You can specify a range of publicly routable IPv4 addresses. However, we currently do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. Create a carrier gateway and a subnet associated with the Wavelength Zone After you create"} +{"global_id": 37, "doc_id": "wavelength", "chunk_id": "9", "question_id": 2, "question": "What should you create after opting in to the Wavelength Zone?", "answer_span": "create a VPC, a carrier gateway, and a public subnet in the Availability Zone.", "chunk": "bar, select the Region for the Wavelength Zone. 3. On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. To enable Wavelength Zones using the AWS CLI Alternatively, use the AWS CLI to enable Wavelength Zones. To do so, use the modify-availability-zone-group command. Step 2: Configure your network After you opt in to the Wavelength Zone, create a VPC, a carrier gateway, and a public subnet in the Availability Zone. Step 1: Opt in to Wavelength Zones 11 AWS Wavelength Developer Guide Tasks • Create a VPC • Create a carrier gateway and a subnet associated with the Wavelength Zone • Create a public subnet in an Availability Zone Create a VPC Create a VPC to extend to your Wavelength Zone. To create a VPC using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. Choose Create VPC. 3. For Resources to create, choose VPC only. 4. For Name tag, optionally provide a name for your VPC. Doing so creates the tag Name=value. 5. For IPv4 CIDR block, specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Note You can specify a range of publicly routable IPv4 addresses. However, we currently do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. Create a carrier gateway and a subnet associated with the Wavelength Zone After you create"} +{"global_id": 38, "doc_id": "wavelength", "chunk_id": "9", "question_id": 3, "question": "What is the URL to open the Amazon VPC console?", "answer_span": "Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.", "chunk": "bar, select the Region for the Wavelength Zone. 3. On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. To enable Wavelength Zones using the AWS CLI Alternatively, use the AWS CLI to enable Wavelength Zones. To do so, use the modify-availability-zone-group command. Step 2: Configure your network After you opt in to the Wavelength Zone, create a VPC, a carrier gateway, and a public subnet in the Availability Zone. Step 1: Opt in to Wavelength Zones 11 AWS Wavelength Developer Guide Tasks • Create a VPC • Create a carrier gateway and a subnet associated with the Wavelength Zone • Create a public subnet in an Availability Zone Create a VPC Create a VPC to extend to your Wavelength Zone. To create a VPC using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. Choose Create VPC. 3. For Resources to create, choose VPC only. 4. For Name tag, optionally provide a name for your VPC. Doing so creates the tag Name=value. 5. For IPv4 CIDR block, specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Note You can specify a range of publicly routable IPv4 addresses. However, we currently do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. Create a carrier gateway and a subnet associated with the Wavelength Zone After you create"} +{"global_id": 39, "doc_id": "wavelength", "chunk_id": "9", "question_id": 4, "question": "What is recommended for the IPv4 CIDR block when creating a VPC?", "answer_span": "We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918.", "chunk": "bar, select the Region for the Wavelength Zone. 3. On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. To enable Wavelength Zones using the AWS CLI Alternatively, use the AWS CLI to enable Wavelength Zones. To do so, use the modify-availability-zone-group command. Step 2: Configure your network After you opt in to the Wavelength Zone, create a VPC, a carrier gateway, and a public subnet in the Availability Zone. Step 1: Opt in to Wavelength Zones 11 AWS Wavelength Developer Guide Tasks • Create a VPC • Create a carrier gateway and a subnet associated with the Wavelength Zone • Create a public subnet in an Availability Zone Create a VPC Create a VPC to extend to your Wavelength Zone. To create a VPC using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. Choose Create VPC. 3. For Resources to create, choose VPC only. 4. For Name tag, optionally provide a name for your VPC. Doing so creates the tag Name=value. 5. For IPv4 CIDR block, specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Note You can specify a range of publicly routable IPv4 addresses. However, we currently do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. 
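A minimal sketch of the modify-availability-zone-group opt-in command referenced above, assuming the Boston Wavelength zone group name us-east-1-wl1:
aws ec2 modify-availability-zone-group --region us-east-1 --group-name us-east-1-wl1 --opt-in-status opted-in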
Create a carrier gateway and a subnet associated with the Wavelength Zone After you create"} +{"global_id": 39, "doc_id": "wavelength", "chunk_id": "9", "question_id": 4, "question": "What is recommended for the IPv4 CIDR block when creating a VPC?", "answer_span": "We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918.", "chunk": "bar, select the Region for the Wavelength Zone. 3. On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. To enable Wavelength Zones using the AWS CLI Alternatively, use the AWS CLI to enable Wavelength Zones. To do so, use the modify-availabilityzone-group command. Step 2: Configure your network After you opt in to the Wavelength Zone, create a VPC, a carrier gateway, and a public subnet in the Availability Zone. Step 1: Opt in to Wavelength Zones 11 AWS Wavelength Developer Guide Tasks • Create a VPC • Create a carrier gateway and a subnet associated with the Wavelength Zone • Create a public subnet in an Availability Zone Create a VPC Create a VPC to extend to your Wavelength Zone. To create a VPC using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. Choose Create VPC. 3. For Resources to create, choose VPC only. 4. For Name tag, optionally provide a name for your VPC. Doing so creates the tag Name=value. 5. For IPv4 CIDR block, specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Note You can specify a range of publicly routable IPv4 addresses. However, we currently do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. Create a carrier gateway and a subnet associated with the Wavelength Zone After you create"} +{"global_id": 40, "doc_id": "wavelength", "chunk_id": "10", "question_id": 1, "question": "What happens if Windows instances are launched into a VPC with certain IP address ranges?", "answer_span": "Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges).", "chunk": "publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. Create a carrier gateway and a subnet associated with the Wavelength Zone After you create a VPC, create a carrier gateway, and then select the subnets that route traffic to the carrier gateway. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: Create a VPC 12 AWS Wavelength Developer Guide • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags except the Name tag to the subnet. • A network ACL with the following resources: • A subnet association with the subnet in the Wavelength Zone • Default inbound and outbound rules for your traffic. 
• A route table with the following resources: • A route for local traffic • A route that routes non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier gateways, and then choose Create carrier gateway. 3. (Optional) For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following: a. Under Existing subnets in Wavelength Zone, select the box for each Wavelength subnet to route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. (Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value."} +{"global_id": 41, "doc_id": "wavelength", "chunk_id": "10", "question_id": 2, "question": "What resources are created when traffic is automatically routed from subnets to the carrier gateway?", "answer_span": "we create the following resources: Create a VPC 12 AWS Wavelength Developer Guide • A carrier gateway • A subnet.", "chunk": "publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. Create a carrier gateway and a subnet associated with the Wavelength Zone After you create a VPC, create a carrier gateway, and then select the subnets that route traffic to the carrier gateway. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: Create a VPC 12 AWS Wavelength Developer Guide • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags except the Name tag to the subnet. • A network ACL with the following resources: • A subnet association with the subnet in the Wavelength Zone • Default inbound and outbound rules for your traffic. • A route table with the following resources: • A route for local traffic • A route that routes non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier gateways, and then choose Create carrier gateway. 3. (Optional) For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following: a. Under Existing subnets in Wavelength Zone, select the box for each Wavelength subnet to route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. (Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value."} +{"global_id": 42, "doc_id": "wavelength", "chunk_id": "10", "question_id": 3, "question": "What is the first step to create a carrier gateway?", "answer_span": "Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.", "chunk": "publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. 
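A minimal CLI sketch of the VPC and carrier gateway creation described in these steps; the CIDR block and VPC ID are placeholders:
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-carrier-gateway --vpc-id vpc-0abc1234EXAMPLE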
Create a carrier gateway and a subnet associated with the Wavelength Zone After you create a VPC, create a carrier gateway, and then select the subnets that route traffic to the carrier gateway. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: Create a VPC 12 AWS Wavelength Developer Guide • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags except the Name tag to the subnet. • A network ACL with the following resources: • A subnet association with the subnet in the Wavelength Zone • Default inbound and outbound rules for your traffic. • A route table with the following resources: • A route for local traffic • A route that routes non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier gateways, and then choose Create carrier gateway. 3. (Optional) For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following: a. Under Existing subnets in Wavelength Zone, select the box for each Wavelength subnet to route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. (Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value."} +{"global_id": 43, "doc_id": "wavelength", "chunk_id": "10", "question_id": 4, "question": "What optional action can be taken when creating a carrier gateway?", "answer_span": "(Optional) For Name, enter a name for the carrier gateway.", "chunk": "publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. Create a carrier gateway and a subnet associated with the Wavelength Zone After you create a VPC, create a carrier gateway, and then select the subnets that route traffic to the carrier gateway. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: Create a VPC 12 AWS Wavelength Developer Guide • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags except the Name tag to the subnet. • A network ACL with the following resources: • A subnet association with the subnet in the Wavelength Zone • Default inbound and outbound rules for your traffic. • A route table with the following resources: • A route for local traffic • A route that routes non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier gateways, and then choose Create carrier gateway. 3. (Optional) For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following: a. Under Existing subnets in Wavelength Zone, select the box for each Wavelength subnet to route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. 
(Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value."} +{"global_id": 44, "doc_id": "wavelength", "chunk_id": "11", "question_id": 1, "question": "What is the first step to create a subnet in the Wavelength Zone?", "answer_span": "To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet.", "chunk": "route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. (Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value. 7. Choose Create carrier gateway. Create a public subnet in an Availability Zone Create a subnet in an Availability Zone in the Region. To add a subnet 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Subnets. Create a public subnet in an Availability Zone 13 AWS Wavelength Developer Guide 3. Choose Create subnet. 4. For VPC, choose the VPC. 5. For Subnet name, provide a name for the subnet. Doing so creates the tag Name=value. 6. For Availability Zone, choose an Availability Zone, or choose No Preference to have AWS choose one for you. 7. For IPv4 CIDR block, specify an IPv4 address range for your subnet, using CIDR notation. 8. Choose Create subnet. Step 3: Launch an instance in your Availability Zone public subnet Launch an EC2 instance in the subnet that you created in the Availability Zone. You will use this instance to test the connectivity from the Region to the Wavelength Zone. You can launch EC2 instances in the public subnet that you created. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the"} +{"global_id": 45, "doc_id": "wavelength", "chunk_id": "11", "question_id": 2, "question": "What should you do to add a tag to the carrier gateway?", "answer_span": "To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value.", "chunk": "route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. (Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value. 7. Choose Create carrier gateway. Create a public subnet in an Availability Zone Create a subnet in an Availability Zone in the Region. To add a subnet 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Subnets. Create a public subnet in an Availability Zone 13 AWS Wavelength Developer Guide 3. Choose Create subnet. 4. For VPC, choose the VPC. 5. For Subnet name, provide a name for the subnet. Doing so creates the tag Name=value. 6. For Availability Zone, choose an Availability Zone, or choose No Preference to have AWS choose one for you. 7. For IPv4 CIDR block, specify an IPv4 address range for your subnet, using CIDR notation. 8. Choose Create subnet. 
Step 3: Launch an instance in your Availability Zone public subnet Launch an EC2 instance in the subnet that you created in the Availability Zone. You will use this instance to test the connectivity from the Region to the Wavelength Zone. You can launch EC2 instances in the public subnet that you created. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the"} +{"global_id": 46, "doc_id": "wavelength", "chunk_id": "11", "question_id": 3, "question": "What is the URL for the Amazon VPC console?", "answer_span": "Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.", "chunk": "route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. (Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value. 7. Choose Create carrier gateway. Create a public subnet in an Availability Zone Create a subnet in an Availability Zone in the Region. To add a subnet 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. 
In the navigation pane, choose Subnets. Create a public subnet in an Availability Zone 13 AWS Wavelength Developer Guide 3. Choose Create subnet. 4. For VPC, choose the VPC. 5. For Subnet name, provide a name for the subnet. Doing so creates the tag Name=value. 6. For Availability Zone, choose an Availability Zone, or choose No Preference to have AWS choose one for you. 7. For IPv4 CIDR block, specify an IPv4 address range for your subnet, using CIDR notation. 8. Choose Create subnet. Step 3: Launch an instance in your Availability Zone public subnet Launch an EC2 instance in the subnet that you created in the Availability Zone. You will use this instance to test the connectivity from the Region to the Wavelength Zone. You can launch EC2 instances in the public subnet that you created. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the"} +{"global_id": 47, "doc_id": "wavelength", "chunk_id": "11", "question_id": 4, "question": "What is the purpose of launching an EC2 instance in the public subnet?", "answer_span": "You will use this instance to test the connectivity from the Region to the Wavelength Zone.", "chunk": "route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. (Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value. 7. Choose Create carrier gateway. Create a public subnet in an Availability Zone Create a subnet in an Availability Zone in the Region. To add a subnet 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Subnets. Create a public subnet in an Availability Zone 13 AWS Wavelength Developer Guide 3. Choose Create subnet. 4. For VPC, choose the VPC. 5. For Subnet name, provide a name for the subnet. Doing so creates the tag Name=value. 6. For Availability Zone, choose an Availability Zone, or choose No Preference to have AWS choose one for you. 7. For IPv4 CIDR block, specify an IPv4 address range for your subnet, using CIDR notation. 8. Choose Create subnet. Step 3: Launch an instance in your Availability Zone public subnet Launch an EC2 instance in the subnet that you created in the Availability Zone. You will use this instance to test the connectivity from the Region to the Wavelength Zone. You can launch EC2 instances in the public subnet that you created. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the"} +{"global_id": 48, "doc_id": "wavelength", "chunk_id": "12", "question_id": 1, "question": "What is the first step after completing the networking configuration?", "answer_span": "After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance.", "chunk": "Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the network border group Option 1: Auto assign a Carrier IP address AWS recommends that you use the AWS CLI because you can automatically allocate and associate the Carrier IP address with the network interface. Use the run-instances command as follows to launch an instance in the Wavelength Zone subnet. Step 3: Launch an instance in your Availability Zone public subnet 14 AWS Wavelength Developer Guide aws ec2 run-instances --region us-east-1 --network-interfaces \"DeviceIndex=0,AssociateCarrierIpAddress=true,SubnetId=subnet-036aa298f4EXAMPLE\" --image-id ami-04125ecea1EXAMPLE --instance-type t3.medium • DeviceIndex – Specify 0 to indicate the primary network interface (eth0). • SubnetId – Specify the ID of the subnet in the Wavelength Zone. • AssociateCarrierIpAddress – Set this value to true to assign a Carrier IP address to the network interface. Option 2: Allocate and associate a Carrier IP address from the network border group You can launch EC2 instances in the subnet that you created when you added the carrier gateway. For more information, see the section called “Create a carrier gateway and a subnet associated with the Wavelength Zone”. Security groups control inbound and outbound traffic for instances in a subnet, just as they do for instances in an Availability Zone subnet. To connect to an EC2 instance in a subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. 
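A minimal sketch of creating the Wavelength subnet from the CLI, assuming placeholder IDs and the Boston Wavelength Zone name:
aws ec2 create-subnet --vpc-id vpc-0abc1234EXAMPLE --cidr-block 10.0.3.0/24 --availability-zone us-east-1-wl1-bos-wlz-1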
To allocate"} +{"global_id": 49, "doc_id": "wavelength", "chunk_id": "12", "question_id": 2, "question": "What does AWS recommend for allocating and associating a Carrier IP address?", "answer_span": "AWS recommends that you use the AWS CLI because you can automatically allocate and associate the Carrier IP address with the network interface.", "chunk": "Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the network border group Option 1: Auto assign a Carrier IP address AWS recommends that you use the AWS CLI because you can automatically allocate and associate the Carrier IP address with the network interface. Use the run-instances command as follows to launch an instance in the Wavelength Zone subnet. Step 3: Launch an instance in your Availability Zone public subnet 14 AWS Wavelength Developer Guide aws ec2 run-instances --region us-east-1 --network-interfaces \"DeviceIndex=0,AssociateCarrierIpAddress=true,SubnetId=subnet-036aa298f4EXAMPLE\" -image-id ami-04125ecea1EXAMPLE --instance-type t3.medium • DeviceIndex – Specify 0 to indicate the primary network interface (eth0). • SubnetId – Specify the ID of the subnet in the Wavelength Zone. • AssociateCarrierIpAddress – Set this value to true to assign a Carrier IP address to the network interface. Option 2: Allocate and associate a Carrier IP address from the network border group You can launch EC2 instances in the subnet that you created when you added the carrier gateway. For more information, see the section called “Create a carrier gateway and a subnet associated with the Wavelength Zone”. Security groups control inbound and outbound traffic for instances in a subnet, just as they do for instances in an Availability Zone subnet. To connect to an EC2 instance in a subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. To allocate"} +{"global_id": 50, "doc_id": "wavelength", "chunk_id": "12", "question_id": 3, "question": "What command is used to launch an instance in the Wavelength Zone subnet?", "answer_span": "Use the run-instances command as follows to launch an instance in the Wavelength Zone subnet.", "chunk": "Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the network border group Option 1: Auto assign a Carrier IP address AWS recommends that you use the AWS CLI because you can automatically allocate and associate the Carrier IP address with the network interface. Use the run-instances command as follows to launch an instance in the Wavelength Zone subnet. Step 3: Launch an instance in your Availability Zone public subnet 14 AWS Wavelength Developer Guide aws ec2 run-instances --region us-east-1 --network-interfaces \"DeviceIndex=0,AssociateCarrierIpAddress=true,SubnetId=subnet-036aa298f4EXAMPLE\" -image-id ami-04125ecea1EXAMPLE --instance-type t3.medium • DeviceIndex – Specify 0 to indicate the primary network interface (eth0). 
• SubnetId – Specify the ID of the subnet in the Wavelength Zone. • AssociateCarrierIpAddress – Set this value to true to assign a Carrier IP address to the network interface. Option 2: Allocate and associate a Carrier IP address from the network border group You can launch EC2 instances in the subnet that you created when you added the carrier gateway. For more information, see the section called “Create a carrier gateway and a subnet associated with the Wavelength Zone”. Security groups control inbound and outbound traffic for instances in a subnet, just as they do for instances in an Availability Zone subnet. To connect to an EC2 instance in a subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. To allocate"} +{"global_id": 51, "doc_id": "wavelength", "chunk_id": "12", "question_id": 4, "question": "What must you specify to connect to an EC2 instance in a subnet?", "answer_span": "To connect to an EC2 instance in a subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet.", "chunk": "Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the network border group Option 1: Auto assign a Carrier IP address AWS recommends that you use the AWS CLI because you can automatically allocate and associate the Carrier IP address with the network interface. Use the run-instances command as follows to launch an instance in the Wavelength Zone subnet. Step 3: Launch an instance in your Availability Zone public subnet 14 AWS Wavelength Developer Guide aws ec2 run-instances --region us-east-1 --network-interfaces \"DeviceIndex=0,AssociateCarrierIpAddress=true,SubnetId=subnet-036aa298f4EXAMPLE\" -image-id ami-04125ecea1EXAMPLE --instance-type t3.medium • DeviceIndex – Specify 0 to indicate the primary network interface (eth0). • SubnetId – Specify the ID of the subnet in the Wavelength Zone. • AssociateCarrierIpAddress – Set this value to true to assign a Carrier IP address to the network interface. Option 2: Allocate and associate a Carrier IP address from the network border group You can launch EC2 instances in the subnet that you created when you added the carrier gateway. For more information, see the section called “Create a carrier gateway and a subnet associated with the Wavelength Zone”. Security groups control inbound and outbound traffic for instances in a subnet, just as they do for instances in an Availability Zone subnet. To connect to an EC2 instance in a subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. 
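A hedged sketch for confirming the Carrier IP assigned at launch; the instance ID is a placeholder:
aws ec2 describe-instances --instance-ids i-0123456789EXAMPLE --query 'Reservations[].Instances[].NetworkInterfaces[].Association.CarrierIp'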
To allocate"} +{"global_id": 52, "doc_id": "wavelength", "chunk_id": "13", "question_id": 1, "question": "What command is used to allocate a Carrier IP address?", "answer_span": "Use the allocate-address command as follows to allocate a Carrier IP address.", "chunk": "subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. To allocate and associate a Carrier IP address 1. Use the allocate-address command as follows to allocate a Carrier IP address. aws ec2 allocate-address --region us-east-1 --domain vpc --network-border-group useast-1-wl1-bos-wlz-1 The following is example output. { \"AllocationId\": \"eipalloc-05807b62acEXAMPLE\", \"PublicIpv4Pool\": \"amazon\", \"NetworkBorderGroup\": \"us-east-1-wl1-bos-wlz-1\", \"Domain\": \"vpc\", \"CarrierIp\": \"155.146.10.111\" } Option 2: Allocate and associate a Carrier IP address from the network border group 15 AWS Wavelength 2. Developer Guide Use the associate-address command as follows to associate the Carrier IP address with the EC2 instance. aws ec2 associate-address --allocation-id eipalloc-05807b62acEXAMPLE --networkinterface-id eni-1a2b3c4d The following is example output. { \"AssociationId\": \"eipassoc-02463d08ceEXAMPLE\", } Step 5: Test the connectivity Before you test the connectivity, do the following: • Review the section called “Networking considerations” • Configure the instance security group to allow ICMP traffic. Test the connectivity from the instance in the Region to the Wavelength Zone instance. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance. You can use a secure bastion host. Run the ping command to the Wavelength Zone instance. In the following example, the IP address of the subnet in the Wavelength Zone is 10.0.3.112. ping 10.0.3.112 Pinging 10.0.3.112 Reply from 10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test"} +{"global_id": 53, "doc_id": "wavelength", "chunk_id": "13", "question_id": 2, "question": "What is the example output for allocating a Carrier IP address?", "answer_span": "{ \"AllocationId\": \"eipalloc-05807b62acEXAMPLE\", \"PublicIpv4Pool\": \"amazon\", \"NetworkBorderGroup\": \"us-east-1-wl1-bos-wlz-1\", \"Domain\": \"vpc\", \"CarrierIp\": \"155.146.10.111\" }", "chunk": "subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. To allocate and associate a Carrier IP address 1. Use the allocate-address command as follows to allocate a Carrier IP address. aws ec2 allocate-address --region us-east-1 --domain vpc --network-border-group useast-1-wl1-bos-wlz-1 The following is example output. 
{ \"AllocationId\": \"eipalloc-05807b62acEXAMPLE\", \"PublicIpv4Pool\": \"amazon\", \"NetworkBorderGroup\": \"us-east-1-wl1-bos-wlz-1\", \"Domain\": \"vpc\", \"CarrierIp\": \"155.146.10.111\" } Option 2: Allocate and associate a Carrier IP address from the network border group 15 AWS Wavelength 2. Developer Guide Use the associate-address command as follows to associate the Carrier IP address with the EC2 instance. aws ec2 associate-address --allocation-id eipalloc-05807b62acEXAMPLE --networkinterface-id eni-1a2b3c4d The following is example output. { \"AssociationId\": \"eipassoc-02463d08ceEXAMPLE\", } Step 5: Test the connectivity Before you test the connectivity, do the following: • Review the section called “Networking considerations” • Configure the instance security group to allow ICMP traffic. Test the connectivity from the instance in the Region to the Wavelength Zone instance. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance. You can use a secure bastion host. Run the ping command to the Wavelength Zone instance. In the following example, the IP address of the subnet in the Wavelength Zone is 10.0.3.112. ping 10.0.3.112 Pinging 10.0.3.112 Reply from 10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test"} +{"global_id": 54, "doc_id": "wavelength", "chunk_id": "13", "question_id": 3, "question": "What command is used to associate a Carrier IP address with an EC2 instance?", "answer_span": "Use the associate-address command as follows to associate the Carrier IP address with the EC2 instance.", "chunk": "subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. To allocate and associate a Carrier IP address 1. Use the allocate-address command as follows to allocate a Carrier IP address. aws ec2 allocate-address --region us-east-1 --domain vpc --network-border-group useast-1-wl1-bos-wlz-1 The following is example output. { \"AllocationId\": \"eipalloc-05807b62acEXAMPLE\", \"PublicIpv4Pool\": \"amazon\", \"NetworkBorderGroup\": \"us-east-1-wl1-bos-wlz-1\", \"Domain\": \"vpc\", \"CarrierIp\": \"155.146.10.111\" } Option 2: Allocate and associate a Carrier IP address from the network border group 15 AWS Wavelength 2. Developer Guide Use the associate-address command as follows to associate the Carrier IP address with the EC2 instance. aws ec2 associate-address --allocation-id eipalloc-05807b62acEXAMPLE --networkinterface-id eni-1a2b3c4d The following is example output. { \"AssociationId\": \"eipassoc-02463d08ceEXAMPLE\", } Step 5: Test the connectivity Before you test the connectivity, do the following: • Review the section called “Networking considerations” • Configure the instance security group to allow ICMP traffic. Test the connectivity from the instance in the Region to the Wavelength Zone instance. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance. You can use a secure bastion host. Run the ping command to the Wavelength Zone instance. 
In the following example, the IP address of the subnet in the Wavelength Zone is 10.0.3.112. ping 10.0.3.112 Pinging 10.0.3.112 Reply from 10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test"} +{"global_id": 55, "doc_id": "wavelength", "chunk_id": "13", "question_id": 4, "question": "What should you do before testing the connectivity?", "answer_span": "Before you test the connectivity, do the following: • Review the section called “Networking considerations” • Configure the instance security group to allow ICMP traffic.", "chunk": "subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. To allocate and associate a Carrier IP address 1. Use the allocate-address command as follows to allocate a Carrier IP address. aws ec2 allocate-address --region us-east-1 --domain vpc --network-border-group useast-1-wl1-bos-wlz-1 The following is example output. { \"AllocationId\": \"eipalloc-05807b62acEXAMPLE\", \"PublicIpv4Pool\": \"amazon\", \"NetworkBorderGroup\": \"us-east-1-wl1-bos-wlz-1\", \"Domain\": \"vpc\", \"CarrierIp\": \"155.146.10.111\" } Option 2: Allocate and associate a Carrier IP address from the network border group 15 AWS Wavelength 2. Developer Guide Use the associate-address command as follows to associate the Carrier IP address with the EC2 instance. aws ec2 associate-address --allocation-id eipalloc-05807b62acEXAMPLE --networkinterface-id eni-1a2b3c4d The following is example output. { \"AssociationId\": \"eipassoc-02463d08ceEXAMPLE\", } Step 5: Test the connectivity Before you test the connectivity, do the following: • Review the section called “Networking considerations” • Configure the instance security group to allow ICMP traffic. Test the connectivity from the instance in the Region to the Wavelength Zone instance. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance. You can use a secure bastion host. Run the ping command to the Wavelength Zone instance. In the following example, the IP address of the subnet in the Wavelength Zone is 10.0.3.112. 
ping 10.0.3.112 Pinging 10.0.3.112 Reply from 10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test"} +{"global_id": 55, "doc_id": "wavelength", "chunk_id": "13", "question_id": 4, "question": "What should you do before testing the connectivity?", "answer_span": "Before you test the connectivity, do the following: • Review the section called “Networking considerations” • Configure the instance security group to allow ICMP traffic.", "chunk": "subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. To allocate and associate a Carrier IP address 1. Use the allocate-address command as follows to allocate a Carrier IP address. aws ec2 allocate-address --region us-east-1 --domain vpc --network-border-group us-east-1-wl1-bos-wlz-1 The following is example output. { \"AllocationId\": \"eipalloc-05807b62acEXAMPLE\", \"PublicIpv4Pool\": \"amazon\", \"NetworkBorderGroup\": \"us-east-1-wl1-bos-wlz-1\", \"Domain\": \"vpc\", \"CarrierIp\": \"155.146.10.111\" } Option 2: Allocate and associate a Carrier IP address from the network border group 15 AWS Wavelength 2. Developer Guide Use the associate-address command as follows to associate the Carrier IP address with the EC2 instance. aws ec2 associate-address --allocation-id eipalloc-05807b62acEXAMPLE --network-interface-id eni-1a2b3c4d The following is example output. { \"AssociationId\": \"eipassoc-02463d08ceEXAMPLE\" } Step 5: Test the connectivity Before you test the connectivity, do the following: • Review the section called “Networking considerations” • Configure the instance security group to allow ICMP traffic. Test the connectivity from the instance in the Region to the Wavelength Zone instance. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance. You can use a secure bastion host. Run the ping command to the Wavelength Zone instance. In the following example, the IP address of the subnet in the Wavelength Zone is 10.0.3.112. ping 10.0.3.112 Pinging 10.0.3.112 Reply from 10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test"} +{"global_id": 56, "doc_id": "wavelength", "chunk_id": "14", "question_id": 1, "question": "What is the IP address used to test connectivity from the Wavelength Zone instance to the carrier network?", "answer_span": "In the following example, the carrier network IP address is 198.51.100.130.", "chunk": "10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test the connectivity 16 AWS Wavelength Developer Guide Test the connectivity from the instance in the Wavelength Zone instance to the carrier network. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance.
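The step above says to connect over SSH or RDP to the Carrier IP address. As a minimal sketch (assuming an Amazon Linux instance, whose default login is ec2-user, and a placeholder key file my-key-pair.pem), the connection from a device on the carrier network would look like:

ssh -i /path/my-key-pair.pem ec2-user@155.146.10.111

Here 155.146.10.111 is the example Carrier IP allocated earlier; substitute the address returned by your own allocate-address call.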
You can use a secure bastion host. You need a device on the carrier network in order to test the connectivity from the Wavelength Zone to the carrier network. Run the ping command to an address in the carrier network. In the following example, the carrier network IP address is 198.51.100.130. ping 198.51.100.130 Pinging 198.51.100.130 Reply from 198.51.100.130: Reply from 198.51.100.130: Reply from 198.51.100.130: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 198.51.100.130 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test the connectivity 17 AWS Wavelength Developer Guide Carrier gateway for AWS Wavelength A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and the internet. There is generally no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway with the exception of select partners. For more information, see Multi-access AWS Wavelength. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your"} +{"global_id": 58, "doc_id": "wavelength", "chunk_id": "14", "question_id": 3, "question": "What is the round trip time in milliseconds for the ping to 10.0.3.112?", "answer_span": "Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms.", "chunk": "10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test the connectivity 16 AWS Wavelength Developer Guide Test the connectivity from the instance in the Wavelength Zone instance to the carrier network. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance. You can use a secure bastion host. You need a device on the carrier network in order to test the connectivity from the Wavelength Zone to the carrier network. Run the ping command to an address in the carrier network. In the following example, the carrier network IP address is 198.51.100.130. ping 198.51.100.130 Pinging 198.51.100.130 Reply from 198.51.100.130: Reply from 198.51.100.130: Reply from 198.51.100.130: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 198.51.100.130 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test the connectivity 17 AWS Wavelength Developer Guide Carrier gateway for AWS Wavelength A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and the internet. There is generally no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway with the exception of select partners. For more information, see Multi-access AWS Wavelength. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. 
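The connectivity tests above require the instance security group to allow ICMP. A minimal sketch of that rule with a placeholder group ID, permitting ping from the 10.0.0.0/16 VPC range used in this walkthrough:

aws ec2 authorize-security-group-ingress --group-id sg-0598c7d356EXAMPLE --protocol icmp --port -1 --cidr 10.0.0.0/16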
The carrier gateway provides connectivity between your"} +{"global_id": 59, "doc_id": "wavelength", "chunk_id": "14", "question_id": 4, "question": "What type of traffic does a carrier gateway support?", "answer_span": "A carrier gateway supports IPv4 traffic.", "chunk": "10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test the connectivity 16 AWS Wavelength Developer Guide Test the connectivity from the instance in the Wavelength Zone instance to the carrier network. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance. You can use a secure bastion host. You need a device on the carrier network in order to test the connectivity from the Wavelength Zone to the carrier network. Run the ping command to an address in the carrier network. In the following example, the carrier network IP address is 198.51.100.130. ping 198.51.100.130 Pinging 198.51.100.130 Reply from 198.51.100.130: Reply from 198.51.100.130: Reply from 198.51.100.130: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 198.51.100.130 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test the connectivity 17 AWS Wavelength Developer Guide Carrier gateway for AWS Wavelength A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and the internet. There is generally no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway with the exception of select partners. For more information, see Multi-access AWS Wavelength. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your"} +{"global_id": 60, "doc_id": "wavelength", "chunk_id": "15", "question_id": 1, "question": "What is the purpose of a carrier gateway?", "answer_span": "The carrier gateway provides connectivity between your Wavelength Zone and the carrier, and devices on the carrier network.", "chunk": "the internet to a Wavelength Zone through the carrier gateway with the exception of select partners. For more information, see Multi-access AWS Wavelength. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your Wavelength Zone and the carrier, and devices on the carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Enable access to the carrier network To enable access to or from the carrier network for instances in a Wavelength subnet, you must do the following: • Create a VPC. • Create a carrier gateway and attach the carrier gateway to your VPC. When you create the carrier gateway, you can optionally choose which subnets route to the carrier gateway. 
When you select this option, we automatically create the resources related to carrier gateways, such as route tables and network ACLs. If you do not choose this option, then you must perform the following tasks: • Select the subnets that route traffic to the carrier gateway. • Ensure that your subnet route tables have a route that directs traffic to the carrier gateway. • Ensure that instances in your subnet have a globally unique Carrier IP address. • Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance. Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier"} +{"global_id": 61, "doc_id": "wavelength", "chunk_id": "15", "question_id": 2, "question": "What must you do to enable access to the carrier network for instances in a Wavelength subnet?", "answer_span": "To enable access to or from the carrier network for instances in a Wavelength subnet, you must do the following: • Create a VPC. • Create a carrier gateway and attach the carrier gateway to your VPC.", "chunk": "the internet to a Wavelength Zone through the carrier gateway with the exception of select partners. For more information, see Multi-access AWS Wavelength. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your Wavelength Zone and the carrier, and devices on the carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Enable access to the carrier network To enable access to or from the carrier network for instances in a Wavelength subnet, you must do the following: • Create a VPC. • Create a carrier gateway and attach the carrier gateway to your VPC. When you create the carrier gateway, you can optionally choose which subnets route to the carrier gateway. When you select this option, we automatically create the resources related to carrier gateways, such as route tables and network ACLs. If you do not choose this option, then you must perform the following tasks: • Select the subnets that route traffic to the carrier gateway. • Ensure that your subnet route tables have a route that directs traffic to the carrier gateway. • Ensure that instances in your subnet have a globally unique Carrier IP address. • Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance. Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier"} +{"global_id": 62, "doc_id": "wavelength", "chunk_id": "15", "question_id": 3, "question": "What is similar to how a carrier gateway performs NAT?", "answer_span": "The carrier gateway NAT function is similar to how an internet gateway functions in a Region.", "chunk": "the internet to a Wavelength Zone through the carrier gateway with the exception of select partners. For more information, see Multi-access AWS Wavelength. A carrier gateway supports IPv4 traffic. 
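Wavelength Zones are opt-in, so the tasks above assume the zone group is already enabled for the account. A minimal sketch of listing the zones and opting in, where the group name us-east-1-wl1 matches the Boston example used throughout this guide:

aws ec2 describe-availability-zones --region us-east-1 --all-availability-zones --filters Name=zone-type,Values=wavelength-zone
aws ec2 modify-availability-zone-group --region us-east-1 --group-name us-east-1-wl1 --opt-in-status opted-in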
Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your Wavelength Zone and the carrier, and devices on the carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Enable access to the carrier network To enable access to or from the carrier network for instances in a Wavelength subnet, you must do the following: • Create a VPC. • Create a carrier gateway and attach the carrier gateway to your VPC. When you create the carrier gateway, you can optionally choose which subnets route to the carrier gateway. When you select this option, we automatically create the resources related to carrier gateways, such as route tables and network ACLs. If you do not choose this option, then you must perform the following tasks: • Select the subnets that route traffic to the carrier gateway. • Ensure that your subnet route tables have a route that directs traffic to the carrier gateway. • Ensure that instances in your subnet have a globally unique Carrier IP address. • Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance. Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier"} +{"global_id": 63, "doc_id": "wavelength", "chunk_id": "15", "question_id": 4, "question": "What happens if you do not choose the option to automatically create resources related to carrier gateways?", "answer_span": "If you do not choose this option, then you must perform the following tasks: • Select the subnets that route traffic to the carrier gateway.", "chunk": "the internet to a Wavelength Zone through the carrier gateway with the exception of select partners. For more information, see Multi-access AWS Wavelength. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your Wavelength Zone and the carrier, and devices on the carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Enable access to the carrier network To enable access to or from the carrier network for instances in a Wavelength subnet, you must do the following: • Create a VPC. • Create a carrier gateway and attach the carrier gateway to your VPC. When you create the carrier gateway, you can optionally choose which subnets route to the carrier gateway. When you select this option, we automatically create the resources related to carrier gateways, such as route tables and network ACLs. If you do not choose this option, then you must perform the following tasks: • Select the subnets that route traffic to the carrier gateway. • Ensure that your subnet route tables have a route that directs traffic to the carrier gateway. • Ensure that instances in your subnet have a globally unique Carrier IP address. 
• Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance. Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier"} +{"global_id": 64, "doc_id": "wavelength", "chunk_id": "16", "question_id": 1, "question": "What do security group rules allow?", "answer_span": "security group rules allow the relevant traffic to flow to and from your instance.", "chunk": "security group rules allow the relevant traffic to flow to and from your instance. Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier network (for example, mobile phones), and to support outbound traffic to the carrier network and the internet. Tasks • Create a VPC • Create a carrier gateway • Create a security group to access the carrier network • Allocate and associate a Carrier IP address with the instance in the Wavelength Zone subnet • Routing to a Wavelength Zone carrier gateway • View the carrier gateway details • Manage carrier gateway tags • Delete a carrier gateway Create a VPC You can create an empty Wavelength VPC as follows. Limitation You can specify a range of publicly routable IPv4 addresses. However, we do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Your VPCs, Create VPC. 3. Do the following and then choose Create. • Name tag: Optionally provide a name for your VPC. Doing so creates a tag with a key of Name and the value that you specify. • IPv4 CIDR block: Specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use"} +{"global_id": 65, "doc_id": "wavelength", "chunk_id": "16", "question_id": 2, "question": "What is the purpose of creating a carrier gateway?", "answer_span": "to support inbound traffic from the carrier network (for example, mobile phones), and to support outbound traffic to the carrier network and the internet.", "chunk": "security group rules allow the relevant traffic to flow to and from your instance. Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier network (for example, mobile phones), and to support outbound traffic to the carrier network and the internet. Tasks • Create a VPC • Create a carrier gateway • Create a security group to access the carrier network • Allocate and associate a Carrier IP address with the instance in the Wavelength Zone subnet • Routing to a Wavelength Zone carrier gateway • View the carrier gateway details • Manage carrier gateway tags • Delete a carrier gateway Create a VPC You can create an empty Wavelength VPC as follows. 
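For readers who prefer to skip the console steps that follow, here is a minimal one-line sketch of the create-vpc command referenced later in this section, using the RFC 1918 range the guide recommends:

aws ec2 create-vpc --region us-east-1 --cidr-block 10.0.0.0/16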
Limitation You can specify a range of publicly routable IPv4 addresses. However, we do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Your VPCs, Create VPC. 3. Do the following and then choose Create. • Name tag: Optionally provide a name for your VPC. Doing so creates a tag with a key of Name and the value that you specify. • IPv4 CIDR block: Specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use"} +{"global_id": 66, "doc_id": "wavelength", "chunk_id": "16", "question_id": 3, "question": "What limitation is mentioned regarding CIDR blocks in a VPC?", "answer_span": "we do not support direct access to the internet from publicly routable CIDR blocks in a VPC.", "chunk": "security group rules allow the relevant traffic to flow to and from your instance. Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier network (for example, mobile phones), and to support outbound traffic to the carrier network and the internet. Tasks • Create a VPC • Create a carrier gateway • Create a security group to access the carrier network • Allocate and associate a Carrier IP address with the instance in the Wavelength Zone subnet • Routing to a Wavelength Zone carrier gateway • View the carrier gateway details • Manage carrier gateway tags • Delete a carrier gateway Create a VPC You can create an empty Wavelength VPC as follows. Limitation You can specify a range of publicly routable IPv4 addresses. However, we do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Your VPCs, Create VPC. 3. Do the following and then choose Create. • Name tag: Optionally provide a name for your VPC. Doing so creates a tag with a key of Name and the value that you specify. • IPv4 CIDR block: Specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use"} +{"global_id": 67, "doc_id": "wavelength", "chunk_id": "16", "question_id": 4, "question": "What is recommended for specifying an IPv4 CIDR block for the VPC?", "answer_span": "We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918.", "chunk": "security group rules allow the relevant traffic to flow to and from your instance. 
Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier network (for example, mobile phones), and to support outbound traffic to the carrier network and the internet. Tasks • Create a VPC • Create a carrier gateway • Create a security group to access the carrier network • Allocate and associate a Carrier IP address with the instance in the Wavelength Zone subnet • Routing to a Wavelength Zone carrier gateway • View the carrier gateway details • Manage carrier gateway tags • Delete a carrier gateway Create a VPC You can create an empty Wavelength VPC as follows. Limitation You can specify a range of publicly routable IPv4 addresses. However, we do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Your VPCs, Create VPC. 3. Do the following and then choose Create. • Name tag: Optionally provide a name for your VPC. Doing so creates a tag with a key of Name and the value that you specify. • IPv4 CIDR block: Specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use"} +{"global_id": 68, "doc_id": "wavelength", "chunk_id": "17", "question_id": 1, "question": "What type of IP address ranges is recommended for the CIDR block in the VPC?", "answer_span": "We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918;", "chunk": "IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use the create-vpc command. Create a carrier gateway After you create a VPC, create a carrier gateway and then select the subnets that route traffic to the carrier gateway. If you have not opted in to a Wavelength Zone, the Amazon Virtual Private Cloud Console prompts you to opt in. For more information, see the section called “Manage Zones”. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags that do not have a Key value of Name to the subnet. • A network ACL with the following resources: • A subnet associated with the subnet in the Wavelength Zone • Default inbound and outbound rules for all of your traffic. • A route table with the following resources: • A route for all local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier Gateways, and then choose Create carrier gateway. 3. Optional: For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. 
Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20"} +{"global_id": 69, "doc_id": "wavelength", "chunk_id": "17", "question_id": 2, "question": "What command is used to create a VPC using the AWS CLI?", "answer_span": "Use the create-vpc command.", "chunk": "IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use the create-vpc command. Create a carrier gateway After you create a VPC, create a carrier gateway and then select the subnets that route traffic to the carrier gateway. If you have not opted in to a Wavelength Zone, the Amazon Virtual Private Cloud Console prompts you to opt in. For more information, see the section called “Manage Zones”. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags that do not have a Key value of Name to the subnet. • A network ACL with the following resources: • A subnet associated with the subnet in the Wavelength Zone • Default inbound and outbound rules for all of your traffic. • A route table with the following resources: • A route for all local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier Gateways, and then choose Create carrier gateway. 3. Optional: For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20"} +{"global_id": 70, "doc_id": "wavelength", "chunk_id": "17", "question_id": 3, "question": "What happens if you have not opted in to a Wavelength Zone?", "answer_span": "the Amazon Virtual Private Cloud Console prompts you to opt in.", "chunk": "IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use the create-vpc command. Create a carrier gateway After you create a VPC, create a carrier gateway and then select the subnets that route traffic to the carrier gateway. If you have not opted in to a Wavelength Zone, the Amazon Virtual Private Cloud Console prompts you to opt in. For more information, see the section called “Manage Zones”. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags that do not have a Key value of Name to the subnet. • A network ACL with the following resources: • A subnet associated with the subnet in the Wavelength Zone • Default inbound and outbound rules for all of your traffic. 
• A route table with the following resources: • A route for all local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier Gateways, and then choose Create carrier gateway. 3. Optional: For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20"} +{"global_id": 71, "doc_id": "wavelength", "chunk_id": "17", "question_id": 4, "question": "What resources are created when you choose to automatically route traffic to the carrier gateway?", "answer_span": "we create the following resources: • A carrier gateway • A subnet.", "chunk": "IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use the create-vpc command. Create a carrier gateway After you create a VPC, create a carrier gateway and then select the subnets that route traffic to the carrier gateway. If you have not opted in to a Wavelength Zone, the Amazon Virtual Private Cloud Console prompts you to opt in. For more information, see the section called “Manage Zones”. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags that do not have a Key value of Name to the subnet. • A network ACL with the following resources: • A subnet associated with the subnet in the Wavelength Zone • Default inbound and outbound rules for all of your traffic. • A route table with the following resources: • A route for all local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier Gateways, and then choose Create carrier gateway. 3. Optional: For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20"} +{"global_id": 72, "doc_id": "wavelength", "chunk_id": "18", "question_id": 1, "question": "What is the first step to create a carrier gateway?", "answer_span": "For VPC, choose the VPC.", "chunk": "for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20 AWS Wavelength b. Developer Guide To create a subnet in the Wavelength Zone, choose Add new subnet, specify the following information, and then choose Add new subnet: • Name tag: Optionally provide a name for your subnet. Doing so creates a tag with a key of Name and the value that you specify. • VPC: Choose the VPC. 
• Availability Zone: Choose the Wavelength Zone. • IPv4 CIDR block: Specify an IPv4 CIDR block for your subnet, for example, 10.0.1.0/24. • To apply the carrier gateway tags to the subnet, select Apply same tags from this carrier gateway. 6. 7. (Optional) To add a tag to the carrier gateway, choose Add tag, and then do the following: • For Key, enter the key name. • For Value, enter the key value. Choose Create carrier gateway. To create a carrier gateway using the AWS CLI 1. Use the create-carrier-gateway command. 2. Add a VPC route table with the following resources: • A route for all VPC local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnets in the Wavelength Zone For more information, see the section called “Routing to a Wavelength Zone carrier gateway”. Create a security group to access the carrier network By default, a VPC security group allows all outbound traffic. You can create a new security group and add rules that allow inbound traffic from the carrier. Then, you associate the security group with instances in the subnet."} +{"global_id": 73, "doc_id": "wavelength", "chunk_id": "18", "question_id": 2, "question": "What should you do to apply the carrier gateway tags to the subnet?", "answer_span": "To apply the carrier gateway tags to the subnet, select Apply same tags from this carrier gateway.", "chunk": "for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20 AWS Wavelength b. Developer Guide To create a subnet in the Wavelength Zone, choose Add new subnet, specify the following information, and then choose Add new subnet: • Name tag: Optionally provide a name for your subnet. Doing so creates a tag with a key of Name and the value that you specify. • VPC: Choose the VPC. • Availability Zone: Choose the Wavelength Zone. • IPv4 CIDR block: Specify an IPv4 CIDR block for your subnet, for example, 10.0.1.0/24. • To apply the carrier gateway tags to the subnet, select Apply same tags from this carrier gateway. 6. 7. (Optional) To add a tag to the carrier gateway, choose Add tag, and then do the following: • For Key, enter the key name. • For Value, enter the key value. Choose Create carrier gateway. To create a carrier gateway using the AWS CLI 1. Use the create-carrier-gateway command. 2. Add a VPC route table with the following resources: • A route for all VPC local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnets in the Wavelength Zone For more information, see the section called “Routing to a Wavelength Zone carrier gateway”. Create a security group to access the carrier network By default, a VPC security group allows all outbound traffic. You can create a new security group and add rules that allow inbound traffic from the carrier. Then, you associate the security group with instances in the subnet."} +{"global_id": 74, "doc_id": "wavelength", "chunk_id": "18", "question_id": 3, "question": "What command is used to create a carrier gateway using the AWS CLI?", "answer_span": "Use the create-carrier-gateway command.", "chunk": "for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. 
Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20 AWS Wavelength b. Developer Guide To create a subnet in the Wavelength Zone, choose Add new subnet, specify the following information, and then choose Add new subnet: • Name tag: Optionally provide a name for your subnet. Doing so creates a tag with a key of Name and the value that you specify. • VPC: Choose the VPC. • Availability Zone: Choose the Wavelength Zone. • IPv4 CIDR block: Specify an IPv4 CIDR block for your subnet, for example, 10.0.1.0/24. • To apply the carrier gateway tags to the subnet, select Apply same tags from this carrier gateway. 6. 7. (Optional) To add a tag to the carrier gateway, choose Add tag, and then do the following: • For Key, enter the key name. • For Value, enter the key value. Choose Create carrier gateway. To create a carrier gateway using the AWS CLI 1. Use the create-carrier-gateway command. 2. Add a VPC route table with the following resources: • A route for all VPC local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnets in the Wavelength Zone For more information, see the section called “Routing to a Wavelength Zone carrier gateway”. Create a security group to access the carrier network By default, a VPC security group allows all outbound traffic. You can create a new security group and add rules that allow inbound traffic from the carrier. Then, you associate the security group with instances in the subnet."} +{"global_id": 75, "doc_id": "wavelength", "chunk_id": "18", "question_id": 4, "question": "What does a VPC security group allow by default?", "answer_span": "By default, a VPC security group allows all outbound traffic.", "chunk": "for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20 AWS Wavelength b. Developer Guide To create a subnet in the Wavelength Zone, choose Add new subnet, specify the following information, and then choose Add new subnet: • Name tag: Optionally provide a name for your subnet. Doing so creates a tag with a key of Name and the value that you specify. • VPC: Choose the VPC. • Availability Zone: Choose the Wavelength Zone. • IPv4 CIDR block: Specify an IPv4 CIDR block for your subnet, for example, 10.0.1.0/24. • To apply the carrier gateway tags to the subnet, select Apply same tags from this carrier gateway. 6. 7. (Optional) To add a tag to the carrier gateway, choose Add tag, and then do the following: • For Key, enter the key name. • For Value, enter the key value. Choose Create carrier gateway. To create a carrier gateway using the AWS CLI 1. Use the create-carrier-gateway command. 2. Add a VPC route table with the following resources: • A route for all VPC local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnets in the Wavelength Zone For more information, see the section called “Routing to a Wavelength Zone carrier gateway”. Create a security group to access the carrier network By default, a VPC security group allows all outbound traffic. You can create a new security group and add rules that allow inbound traffic from the carrier. 
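A minimal sketch of that security group; the VPC ID, group ID, and the carrier-side CIDR 198.51.100.0/24 are illustrative placeholders, and the port should match the traffic you actually expect from the carrier network:

aws ec2 create-security-group --group-name carrier-access --description "Inbound from the carrier network" --vpc-id vpc-0123456789EXAMPLE
aws ec2 authorize-security-group-ingress --group-id sg-0598c7d356EXAMPLE --protocol tcp --port 22 --cidr 198.51.100.0/24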
Then, you associate the security group with instances in the subnet. Create a security group to access the carrier network 21"} +{"global_id": 77, "doc_id": "wavelength", "chunk_id": "19", "question_id": 2, "question": "What does a default VPC security group allow?", "answer_span": "By default, a VPC security group allows all outbound traffic.", "chunk": "Zone carrier gateway”. Create a security group to access the carrier network By default, a VPC security group allows all outbound traffic. You can create a new security group and add rules that allow inbound traffic from the carrier. Then, you associate the security group with instances in the subnet. Create a security group to access the carrier network 21"} +{"global_id": 78, "doc_id": "wavelength", "chunk_id": "19", "question_id": 3, "question": "What can you do to allow inbound traffic from the carrier?", "answer_span": "You can create a new security group and add rules that allow inbound traffic from the carrier.", "chunk": "Zone carrier gateway”. Create a security group to access the carrier network By default, a VPC security group allows all outbound traffic. You can create a new security group and add rules that allow inbound traffic from the carrier. Then, you associate the security group with instances in the subnet. Create a security group to access the carrier network 21"} +{"global_id": 79, "doc_id": "wavelength", "chunk_id": "19", "question_id": 4, "question": "What must be done after creating a security group?", "answer_span": "Then, you associate the security group with instances in the subnet.", "chunk": "Zone carrier gateway”. Create a security group to access the carrier network By default, a VPC security group allows all outbound traffic. You can create a new security group and add rules that allow inbound traffic from the carrier. Then, you associate the security group with instances in the subnet. Create a security group to access the carrier network 21"} +{"global_id": 80, "doc_id": "beanstalk", "chunk_id": "0", "question_id": 1, "question": "What can you deploy using AWS Elastic Beanstalk?", "answer_span": "With Elastic Beanstalk you can deploy web applications into the AWS Cloud on a variety of supported platforms.", "chunk": "AWS Elastic Beanstalk Developer Guide What is AWS Elastic Beanstalk? With Elastic Beanstalk you can deploy web applications into the AWS Cloud on a variety of supported platforms. You build and deploy your applications. Elastic Beanstalk provisions Amazon EC2 instances, configures load balancing, sets up health monitoring, and dynamically scales your environment. In addition to web server environments, Elastic Beanstalk also provides worker environments which you can use to process messages from an Amazon SQS queue, useful for asynchronous or long-running tasks. For more information, see Elastic Beanstalk worker environments.
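As a sketch of the worker-environment option just mentioned: a worker tier is requested with Name=Worker,Type=SQS/HTTP on create-environment. The application and environment names are placeholders, and the solution stack shown is illustrative (list valid ones with aws elasticbeanstalk list-available-solution-stacks):

aws elasticbeanstalk create-environment --application-name my-app --environment-name my-worker-env --tier Name=Worker,Type=SQS/HTTP --solution-stack-name "64bit Amazon Linux 2023 v4.3.2 running PHP 8.2"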
1 AWS Elastic Beanstalk Developer Guide Supported platforms Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. Elastic Beanstalk also supports Docker containers, where you can choose your own programming language and application dependencies. When you deploy your application, Elastic Supported platforms 2 AWS Elastic Beanstalk Developer Guide Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, in your AWS account to run your application. You can interact with Elastic Beanstalk through the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or the EB CLI, a high-level command line tool designed specifically for Elastic Beanstalk. You can perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console). To learn more about how to deploy a sample web application using Elastic Beanstalk, see Learn how to get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your"} +{"global_id": 81, "doc_id": "beanstalk", "chunk_id": "0", "question_id": 2, "question": "What types of environments does Elastic Beanstalk provide?", "answer_span": "In addition to web server environments, Elastic Beanstalk also provides worker environments which you can use to process messages from an Amazon SQS queue, useful for asynchronous or long-running tasks.", "chunk": "AWS Elastic Beanstalk Developer Guide What is AWS Elastic Beanstalk? With Elastic Beanstalk you can deploy web applications into the AWS Cloud on a variety of supported platforms. You build and deploy your applications. Elastic Beanstalk provisions Amazon EC2 instances, configures load balancing, sets up health monitoring, and dynamically scales your environment. In addition to web server environments, Elastic Beanstalk also provides worker environments which you can use to process messages from an Amazon SQS queue, useful for asynchronous or long-running tasks. For more information, see Elastic Beanstalk worker environments. 1 AWS Elastic Beanstalk Developer Guide Supported platforms Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. Elastic Beanstalk also supports Docker containers, where you can choose your own programming language and application dependencies. When you deploy your application, Elastic Supported platforms 2 AWS Elastic Beanstalk Developer Guide Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, in your AWS account to run your application. You can interact with Elastic Beanstalk through the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or the EB CLI, a high-level command line tool designed specifically for Elastic Beanstalk. You can perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console).
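A minimal sketch of the EB CLI flow mentioned above, assuming the EB CLI is installed and run from the project directory; my-app and my-env are placeholder names, and -p selects the platform:

eb init -p php my-app
eb create my-env
eb deploy
eb status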
To learn more about how to deploy a sample web application using Elastic Beanstalk, see Learn how to get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your"} +{"global_id": 82, "doc_id": "beanstalk", "chunk_id": "0", "question_id": 3, "question": "Which programming languages are supported by Elastic Beanstalk?", "answer_span": "Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby.", "chunk": "AWS Elastic Beanstalk Developer Guide What is AWS Elastic Beanstalk? With Elastic Beanstalk you can deploy web applications into the AWS Cloud on a variety of supported platforms. You build and deploy your applications. Elastic Beanstalk provisions Amazon EC2 instances, configures load balancing, sets up health monitoring, and dynamically scales your environment. In addition to web server environments, Elastic Beanstalk also provides worker environments which you can use to process messages from an Amazon SQS queue, useful for asynchronous or longrunning tasks. For more information, see Elastic Beanstalk worker environments. 1 AWS Elastic Beanstalk Developer Guide Supported platforms Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. Elastic Beanstalk also supports Docker containers, where you can choose your own programming language and application dependencies. When you deploy your application, Elastic Supported platforms 2 AWS Elastic Beanstalk Developer Guide Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, in your AWS account to run your application. You can interact with Elastic Beanstalk through the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or the EB CLI, a high-level command line tool designed specifically for Elastic Beanstalk. You can perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console). To learn more about how to deploy a sample web application using Elastic Beanstalk, see Learn how to get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your"} +{"global_id": 83, "doc_id": "beanstalk", "chunk_id": "0", "question_id": 4, "question": "How can you interact with Elastic Beanstalk?", "answer_span": "You can interact with Elastic Beanstalk through the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or the EB CLI, a high-level command line tool designed specifically for Elastic Beanstalk.", "chunk": "AWS Elastic Beanstalk Developer Guide What is AWS Elastic Beanstalk? With Elastic Beanstalk you can deploy web applications into the AWS Cloud on a variety of supported platforms. You build and deploy your applications. 
Elastic Beanstalk provisions Amazon EC2 instances, configures load balancing, sets up health monitoring, and dynamically scales your environment. In addition to web server environments, Elastic Beanstalk also provides worker environments which you can use to process messages from an Amazon SQS queue, useful for asynchronous or long-running tasks. For more information, see Elastic Beanstalk worker environments. 1 AWS Elastic Beanstalk Developer Guide Supported platforms Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. Elastic Beanstalk also supports Docker containers, where you can choose your own programming language and application dependencies. When you deploy your application, Elastic Supported platforms 2 AWS Elastic Beanstalk Developer Guide Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, in your AWS account to run your application. You can interact with Elastic Beanstalk through the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or the EB CLI, a high-level command line tool designed specifically for Elastic Beanstalk. You can perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console). To learn more about how to deploy a sample web application using Elastic Beanstalk, see Learn how to get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your"} {"global_id": 84, "doc_id": "beanstalk", "chunk_id": "1", "question_id": 1, "question": "What is the first step to use Elastic Beanstalk?", "answer_span": "To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk.", "chunk": "get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After you create and deploy your application and your environment is launched, you can manage your environment and deploy new application versions. Information about the application— including metrics, events, and environment status—is made available through the Elastic Beanstalk console, APIs, and Command Line Interfaces. The following diagram illustrates Elastic Beanstalk workflow: Pricing There is no additional charge for Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes. For details about pricing, see the Elastic Beanstalk service detail page. Application deploy workflow 3 AWS Elastic Beanstalk Developer Guide Next steps We recommend the tutorial, Getting started tutorial, to start using Elastic Beanstalk. The tutorial steps you through creating, viewing, and updating a sample Elastic Beanstalk application. 
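The create, upload, and launch workflow described above could look roughly like this with the AWS CLI; the bucket, key, version label, and solution stack name are placeholders, not values from this guide:
  aws elasticbeanstalk create-application --application-name getting-started-app
  aws elasticbeanstalk create-application-version --application-name getting-started-app --version-label v1 --source-bundle S3Bucket=amzn-s3-demo-bucket,S3Key=sample-app.zip   # bucket and key are hypothetical
  aws elasticbeanstalk create-environment --application-name getting-started-app --environment-name gs-app-web-env --version-label v1 --solution-stack-name "<a current PHP solution stack>"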
Next steps 4 AWS Elastic Beanstalk Developer Guide Learn how to get started with Elastic Beanstalk With Elastic Beanstalk you can deploy, monitor, and scale web applications and services. Typically, you will develop your code locally then deploy it to Amazon EC2 server instances. These instances, also called environments, run on platforms that can be upgraded through the AWS console or the command line. To get started, we recommend deploying a pre-built sample application directly from the console. Then, you can learn how to develop locally and deploy from the command line in the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them"} {"global_id": 85, "doc_id": "beanstalk", "chunk_id": "1", "question_id": 2, "question": "What information is made available through the Elastic Beanstalk console?", "answer_span": "Information about the application— including metrics, events, and environment status—is made available through the Elastic Beanstalk console, APIs, and Command Line Interfaces.", "chunk": "get started with Elastic Beanstalk. 
Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After you create and deploy your application and your environment is launched, you can manage your environment and deploy new application versions. Information about the application— including metrics, events, and environment status—is made available through the Elastic Beanstalk console, APIs, and Command Line Interfaces. The following diagram illustrates Elastic Beanstalk workflow: Pricing There is no additional charge for Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes. For details about pricing, see the Elastic Beanstalk service detail page. Application deploy workflow 3 AWS Elastic Beanstalk Developer Guide Next steps We recommend the tutorial, Getting started tutorial, to start using Elastic Beanstalk. The tutorial steps you through creating, viewing, and updating a sample Elastic Beanstalk application. Next steps 4 AWS Elastic Beanstalk Developer Guide Learn how to get started with Elastic Beanstalk With Elastic Beanstalk you can deploy, monitor, and scale web applications and services. Typically, you will develop your code locally then deploy it to Amazon EC2 server instances. These instances, also called environments, run on platforms that can be upgraded through the AWS console or the command line. To get started, we recommend deploying a pre-built sample application directly from the console. Then, you can learn how to develop locally and deploy from the command line in the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them"} {"global_id": 86, "doc_id": "beanstalk", "chunk_id": "1", "question_id": 3, "question": "Are there any additional charges for using Elastic Beanstalk?", "answer_span": "There is no additional charge for Elastic Beanstalk.", "chunk": "get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After you create and deploy your application and your environment is launched, you can manage your environment and deploy new application versions. Information about the application— including metrics, events, and environment status—is made available through the Elastic Beanstalk console, APIs, and Command Line Interfaces. The following diagram illustrates Elastic Beanstalk workflow: Pricing There is no additional charge for Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes. For details about pricing, see the Elastic Beanstalk service detail page. Application deploy workflow 3 AWS Elastic Beanstalk Developer Guide Next steps We recommend the tutorial, Getting started tutorial, to start using Elastic Beanstalk. The tutorial steps you through creating, viewing, and updating a sample Elastic Beanstalk application. 
Next steps 4 AWS Elastic Beanstalk Developer Guide Learn how to get started with Elastic Beanstalk With Elastic Beanstalk you can deploy, monitor, and scale web applications and services. Typically, you will develop your code locally then deploy it to Amazon EC2 server instances. These instances, also called environments, run on platforms that can be upgraded through the AWS console or the command line. To get started, we recommend deploying a pre-built sample application directly from the console. Then, you can learn how to develop locally and deploy from the command line in the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them"} {"global_id": 88, "doc_id": "beanstalk", "chunk_id": "1", "question_id": 4, "question": "What do you typically do before deploying your code to Elastic Beanstalk?", "answer_span": "Typically, you will develop your code locally then deploy it to Amazon EC2 server instances.", "chunk": "get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After you create and deploy your application and your environment is launched, you can manage your environment and deploy new application versions. Information about the application— including metrics, events, and environment status—is made available through the Elastic Beanstalk console, APIs, and Command Line Interfaces. The following diagram illustrates Elastic Beanstalk workflow: Pricing There is no additional charge for Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes. For details about pricing, see the Elastic Beanstalk service detail page. Application deploy workflow 3 AWS Elastic Beanstalk Developer Guide Next steps We recommend the tutorial, Getting started tutorial, to start using Elastic Beanstalk. The tutorial steps you through creating, viewing, and updating a sample Elastic Beanstalk application. Next steps 4 AWS Elastic Beanstalk Developer Guide Learn how to get started with Elastic Beanstalk With Elastic Beanstalk you can deploy, monitor, and scale web applications and services. Typically, you will develop your code locally then deploy it to Amazon EC2 server instances. These instances, also called environments, run on platforms that can be upgraded through the AWS console or the command line. To get started, we recommend deploying a pre-built sample application directly from the console. Then, you can learn how to develop locally and deploy from the command line in the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them"} {"global_id": 89, "doc_id": "beanstalk", "chunk_id": "2", "question_id": 1, "question": "What section should you refer to for learning how to develop locally and deploy from the command line?", "answer_span": "you can learn how to develop locally and deploy from the command line in the section called “QuickStart for PHP”.", "chunk": "you can learn how to develop locally and deploy from the command line in the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them at the end. The total charges are typically less than a dollar. For information about how to minimize charges, see AWS free tier. After completing this tutorial, you will understand the basics of creating, configuring, deploying, updating, and monitoring an Elastic Beanstalk application with environments running on Amazon EC2 instances. Estimated duration: 35-45 minutes 5 AWS Elastic Beanstalk Developer Guide 6 AWS Elastic Beanstalk Developer Guide What you will build Your first Elastic Beanstalk application will consist of a single Amazon EC2 environment running the PHP sample on a PHP managed platform. Elastic Beanstalk application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments where your application code runs on platforms provided and managed by Elastic Beanstalk, or in custom containers that you provide. Environment An Elastic Beanstalk environment is a collection of AWS resources running together including an Amazon EC2 instance. When you create an environment, Elastic Beanstalk provisions the necessary resources into your AWS account. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and additional Elastic Beanstalk components. Elastic Beanstalk provides managed platforms, or you can provide your own platform in a container. Elastic Beanstalk supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. 
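Because a platform cannot be changed in place, one plausible sketch of a language switch is to stand up a second environment on the new platform and then swap CNAMEs; the second environment's name and stack value are hypothetical:
  aws elasticbeanstalk create-environment --application-name getting-started-app --environment-name gs-app-web-env-2 --solution-stack-name "<stack for the new language>"
  aws elasticbeanstalk swap-environment-cnames --source-environment-name gs-app-web-env --destination-environment-name gs-app-web-env-2   # traffic now resolves to the new environment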
Switching platforms If you need to change programming languages, you must create and switch to"} {"global_id": 90, "doc_id": "beanstalk", "chunk_id": "2", "question_id": 2, "question": "Are there any costs associated with using Elastic Beanstalk?", "answer_span": "There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them at the end.", "chunk": "you can learn how to develop locally and deploy from the command line in the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them at the end. The total charges are typically less than a dollar. For information about how to minimize charges, see AWS free tier. After completing this tutorial, you will understand the basics of creating, configuring, deploying, updating, and monitoring an Elastic Beanstalk application with environments running on Amazon EC2 instances. Estimated duration: 35-45 minutes 5 AWS Elastic Beanstalk Developer Guide 6 AWS Elastic Beanstalk Developer Guide What you will build Your first Elastic Beanstalk application will consist of a single Amazon EC2 environment running the PHP sample on a PHP managed platform. Elastic Beanstalk application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments where your application code runs on platforms provided and managed by Elastic Beanstalk, or in custom containers that you provide. Environment An Elastic Beanstalk environment is a collection of AWS resources running together including an Amazon EC2 instance. When you create an environment, Elastic Beanstalk provisions the necessary resources into your AWS account. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and additional Elastic Beanstalk components. Elastic Beanstalk provides managed platforms, or you can provide your own platform in a container. Elastic Beanstalk supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to"} {"global_id": 91, "doc_id": "beanstalk", "chunk_id": "2", "question_id": 3, "question": "What will your first Elastic Beanstalk application consist of?", "answer_span": "Your first Elastic Beanstalk application will consist of a single Amazon EC2 environment running the PHP sample on a PHP managed platform.", "chunk": "you can learn how to develop locally and deploy from the command line in the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them at the end. The total charges are typically less than a dollar. For information about how to minimize charges, see AWS free tier. After completing this tutorial, you will understand the basics of creating, configuring, deploying, updating, and monitoring an Elastic Beanstalk application with environments running on Amazon EC2 instances. 
Estimated duration: 35-45 minutes 5 AWS Elastic Beanstalk Developer Guide 6 AWS Elastic Beanstalk Developer Guide What you will build Your first Elastic Beanstalk application will consist of a single Amazon EC2 environment running the PHP sample on a PHP managed platform. Elastic Beanstalk application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments where your application code runs on platforms provided and managed by Elastic Beanstalk, or in custom containers that you provide. Environment An Elastic Beanstalk environment is a collection of AWS resources running together including an Amazon EC2 instance. When you create an environment, Elastic Beanstalk provisions the necessary resources into your AWS account. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and additional Elastic Beanstalk components. Elastic Beanstalk provides managed platforms, or you can provide your own platform in a container. Elastic Beanstalk supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to"} {"global_id": 92, "doc_id": "beanstalk", "chunk_id": "2", "question_id": 4, "question": "What is an Elastic Beanstalk environment?", "answer_span": "An Elastic Beanstalk environment is a collection of AWS resources running together including an Amazon EC2 instance.", "chunk": "you can learn how to develop locally and deploy from the command line in the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them at the end. The total charges are typically less than a dollar. For information about how to minimize charges, see AWS free tier. After completing this tutorial, you will understand the basics of creating, configuring, deploying, updating, and monitoring an Elastic Beanstalk application with environments running on Amazon EC2 instances. Estimated duration: 35-45 minutes 5 AWS Elastic Beanstalk Developer Guide 6 AWS Elastic Beanstalk Developer Guide What you will build Your first Elastic Beanstalk application will consist of a single Amazon EC2 environment running the PHP sample on a PHP managed platform. Elastic Beanstalk application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments where your application code runs on platforms provided and managed by Elastic Beanstalk, or in custom containers that you provide. Environment An Elastic Beanstalk environment is a collection of AWS resources running together including an Amazon EC2 instance. When you create an environment, Elastic Beanstalk provisions the necessary resources into your AWS account. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and additional Elastic Beanstalk components. Elastic Beanstalk provides managed platforms, or you can provide your own platform in a container. Elastic Beanstalk supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. 
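Choosing a platform presumes you know what is available; a minimal way to list the current platform choices from the command line (the PHP filter is only an example):
  aws elasticbeanstalk list-available-solution-stacks --query "SolutionStacks[?contains(@, 'PHP')]"
  eb platform list   # EB CLI equivalent inside an initialized project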
You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to"} +{"global_id": 92, "doc_id": "beanstalk", "chunk_id": "3", "question_id": 1, "question": "What must you choose when creating an environment?", "answer_span": "When you create an environment, you must choose the platform.", "chunk": "supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different platform. Step 1 - Create an application To create your example application, you'll use the Create application console wizard. It creates an Elastic Beanstalk application and launches an environment within it. Reminder: an environment is a collection of AWS resources required to run your application code. What you will build 7 AWS Elastic Beanstalk Developer Guide To create an application 1. Open the Elastic Beanstalk console. 2. Choose Create application. 3. For Application name enter getting-started-app. The console provides a six step process for creating an application and configuring an environment. For this quick start, you'll only need to focus on the first two steps, then you can skip ahead to review and create your application and environment. To configure an environment 1. In Environment information, for Environment name enter: gs-app-web-env. 2. For Platform, choose the PHP platform. 3. For Application code and Presets, accept the defaults (Sample application and Single instance), then choose Next. To configure service access Next, you need two roles. A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade you environment’s platform. An EC2 instance profile role permits tasks such as writing logs and interacting with other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include"} +{"global_id": 93, "doc_id": "beanstalk", "chunk_id": "3", "question_id": 2, "question": "What happens if you need to change programming languages?", "answer_span": "If you need to change programming languages, you must create and switch to a new environment on a different platform.", "chunk": "supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different platform. Step 1 - Create an application To create your example application, you'll use the Create application console wizard. It creates an Elastic Beanstalk application and launches an environment within it. Reminder: an environment is a collection of AWS resources required to run your application code. What you will build 7 AWS Elastic Beanstalk Developer Guide To create an application 1. Open the Elastic Beanstalk console. 2. Choose Create application. 3. For Application name enter getting-started-app. 
The console provides a six-step process for creating an application and configuring an environment. For this quick start, you'll only need to focus on the first two steps, then you can skip ahead to review and create your application and environment. To configure an environment 1. In Environment information, for Environment name enter: gs-app-web-env. 2. For Platform, choose the PHP platform. 3. For Application code and Presets, accept the defaults (Sample application and Single instance), then choose Next. To configure service access Next, you need two roles. A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade your environment's platform. An EC2 instance profile role permits tasks such as writing logs and interacting with other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include"} {"global_id": 94, "doc_id": "beanstalk", "chunk_id": "3", "question_id": 2, "question": "What happens if you need to change programming languages?", "answer_span": "If you need to change programming languages, you must create and switch to a new environment on a different platform.", "chunk": "supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different platform. Step 1 - Create an application To create your example application, you'll use the Create application console wizard. It creates an Elastic Beanstalk application and launches an environment within it. Reminder: an environment is a collection of AWS resources required to run your application code. What you will build 7 AWS Elastic Beanstalk Developer Guide To create an application 1. Open the Elastic Beanstalk console. 2. Choose Create application. 3. For Application name enter getting-started-app. The console provides a six-step process for creating an application and configuring an environment. For this quick start, you'll only need to focus on the first two steps, then you can skip ahead to review and create your application and environment. To configure an environment 1. In Environment information, for Environment name enter: gs-app-web-env. 2. For Platform, choose the PHP platform. 3. For Application code and Presets, accept the defaults (Sample application and Single instance), then choose Next. To configure service access Next, you need two roles. A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade your environment's platform. An EC2 instance profile role permits tasks such as writing logs and interacting with other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include"} {"global_id": 95, "doc_id": "beanstalk", "chunk_id": "3", "question_id": 3, "question": "What is the first step to create an application?", "answer_span": "To create your example application, you'll use the Create application console wizard.", "chunk": "supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different platform. Step 1 - Create an application To create your example application, you'll use the Create application console wizard. It creates an Elastic Beanstalk application and launches an environment within it. Reminder: an environment is a collection of AWS resources required to run your application code. What you will build 7 AWS Elastic Beanstalk Developer Guide To create an application 1. Open the Elastic Beanstalk console. 2. Choose Create application. 3. For Application name enter getting-started-app. The console provides a six-step process for creating an application and configuring an environment. For this quick start, you'll only need to focus on the first two steps, then you can skip ahead to review and create your application and environment. To configure an environment 1. In Environment information, for Environment name enter: gs-app-web-env. 2. For Platform, choose the PHP platform. 3. For Application code and Presets, accept the defaults (Sample application and Single instance), then choose Next. To configure service access Next, you need two roles. A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade your environment's platform. An EC2 instance profile role permits tasks such as writing logs and interacting with other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. 
Verify that Permissions policies include"} {"global_id": 96, "doc_id": "beanstalk", "chunk_id": "3", "question_id": 4, "question": "What role allows Elastic Beanstalk to monitor your EC2 instances?", "answer_span": "A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade your environment's platform.", "chunk": "supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different platform. Step 1 - Create an application To create your example application, you'll use the Create application console wizard. It creates an Elastic Beanstalk application and launches an environment within it. Reminder: an environment is a collection of AWS resources required to run your application code. What you will build 7 AWS Elastic Beanstalk Developer Guide To create an application 1. Open the Elastic Beanstalk console. 2. Choose Create application. 3. For Application name enter getting-started-app. The console provides a six-step process for creating an application and configuring an environment. For this quick start, you'll only need to focus on the first two steps, then you can skip ahead to review and create your application and environment. To configure an environment 1. In Environment information, for Environment name enter: gs-app-web-env. 2. For Platform, choose the PHP platform. 3. For Application code and Presets, accept the defaults (Sample application and Single instance), then choose Next. To configure service access Next, you need two roles. A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade your environment's platform. An EC2 instance profile role permits tasks such as writing logs and interacting with other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include"} {"global_id": 97, "doc_id": "beanstalk", "chunk_id": "4", "question_id": 1, "question": "What is the first step to create the Service role?", "answer_span": "For Service role, choose Create role.", "chunk": "other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: Developer Guide • AWSElasticBeanstalkEnhancedHealth • AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created service role. To create the EC2 instance profile 1. Choose Create role. 2. For Trusted entity type, choose AWS service. 3. For Use case, choose Elastic Beanstalk – Compute. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: • AWSElasticBeanstalkWebTier • AWSElasticBeanstalkWorkerTier • AWSElasticBeanstalkMulticontainerDocker 6. Choose Create role. 7. 
Return to the Configure service access tab, refresh the list, then select the newly created EC2 instance profile. To finish configuring and creating your application 1. Skip over EC2 key pair. We'll show you other ways to connect to your Amazon EC2 instances through the Console. 2. Choose Skip to Review to move over several optional steps. Optional steps: networking, databases, scaling parameters, advanced configuration for updates, monitoring, and logging. 3. On the Review page which shows a summary of your choices, choose Submit. Step 1 - Create an application 9 AWS Elastic Beanstalk Developer Guide Congratulations! You have created an application and configured an environment! Now you need to wait for the resources to deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will"} +{"global_id": 97, "doc_id": "beanstalk", "chunk_id": "4", "question_id": 2, "question": "Which permissions policies need to be verified when creating the Service role?", "answer_span": "Verify that Permissions policies include the following, then choose Next: Developer Guide • AWSElasticBeanstalkEnhancedHealth • AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy", "chunk": "other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: Developer Guide • AWSElasticBeanstalkEnhancedHealth • AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created service role. To create the EC2 instance profile 1. Choose Create role. 2. For Trusted entity type, choose AWS service. 3. For Use case, choose Elastic Beanstalk – Compute. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: • AWSElasticBeanstalkWebTier • AWSElasticBeanstalkWorkerTier • AWSElasticBeanstalkMulticontainerDocker 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created EC2 instance profile. To finish configuring and creating your application 1. Skip over EC2 key pair. We'll show you other ways to connect to your Amazon EC2 instances through the Console. 2. Choose Skip to Review to move over several optional steps. Optional steps: networking, databases, scaling parameters, advanced configuration for updates, monitoring, and logging. 3. On the Review page which shows a summary of your choices, choose Submit. Step 1 - Create an application 9 AWS Elastic Beanstalk Developer Guide Congratulations! You have created an application and configured an environment! Now you need to wait for the resources to deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. 
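The two roles assembled in the console steps above can also be sketched with the AWS CLI; the trust-policy files are placeholders you would author yourself, and the managed policy ARNs shown are from memory and should be checked against the IAM console before use:
  aws iam create-role --role-name aws-elasticbeanstalk-service-role --assume-role-policy-document file://eb-trust.json
  aws iam attach-role-policy --role-name aws-elasticbeanstalk-service-role --policy-arn arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkEnhancedHealth
  aws iam create-role --role-name aws-elasticbeanstalk-ec2-role --assume-role-policy-document file://ec2-trust.json
  aws iam attach-role-policy --role-name aws-elasticbeanstalk-ec2-role --policy-arn arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier
  aws iam create-instance-profile --instance-profile-name aws-elasticbeanstalk-ec2-role   # profile wraps the role for EC2
  aws iam add-role-to-instance-profile --instance-profile-name aws-elasticbeanstalk-ec2-role --role-name aws-elasticbeanstalk-ec2-role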
Updates will take less time because only changes will"} +{"global_id": 98, "doc_id": "beanstalk", "chunk_id": "4", "question_id": 3, "question": "What should you choose to skip when finishing configuring and creating your application?", "answer_span": "Skip over EC2 key pair.", "chunk": "other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: Developer Guide • AWSElasticBeanstalkEnhancedHealth • AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created service role. To create the EC2 instance profile 1. Choose Create role. 2. For Trusted entity type, choose AWS service. 3. For Use case, choose Elastic Beanstalk – Compute. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: • AWSElasticBeanstalkWebTier • AWSElasticBeanstalkWorkerTier • AWSElasticBeanstalkMulticontainerDocker 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created EC2 instance profile. To finish configuring and creating your application 1. Skip over EC2 key pair. We'll show you other ways to connect to your Amazon EC2 instances through the Console. 2. Choose Skip to Review to move over several optional steps. Optional steps: networking, databases, scaling parameters, advanced configuration for updates, monitoring, and logging. 3. On the Review page which shows a summary of your choices, choose Submit. Step 1 - Create an application 9 AWS Elastic Beanstalk Developer Guide Congratulations! You have created an application and configured an environment! Now you need to wait for the resources to deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will"} +{"global_id": 99, "doc_id": "beanstalk", "chunk_id": "4", "question_id": 4, "question": "How long can the initial deploy take to create the resources?", "answer_span": "The initial deploy can take up to five minutes to create the resources.", "chunk": "other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: Developer Guide • AWSElasticBeanstalkEnhancedHealth • AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created service role. To create the EC2 instance profile 1. Choose Create role. 2. For Trusted entity type, choose AWS service. 3. For Use case, choose Elastic Beanstalk – Compute. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: • AWSElasticBeanstalkWebTier • AWSElasticBeanstalkWorkerTier • AWSElasticBeanstalkMulticontainerDocker 6. Choose Create role. 7. 
Return to the Configure service access tab, refresh the list, then select the newly created EC2 instance profile. To finish configuring and creating your application 1. Skip over EC2 key pair. We'll show you other ways to connect to your Amazon EC2 instances through the Console. 2. Choose Skip to Review to move over several optional steps. Optional steps: networking, databases, scaling parameters, advanced configuration for updates, monitoring, and logging. 3. On the Review page which shows a summary of your choices, choose Submit. Step 1 - Create an application 9 AWS Elastic Beanstalk Developer Guide Congratulations! You have created an application and configured an environment! Now you need to wait for the resources to deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will"} +{"global_id": 100, "doc_id": "beanstalk", "chunk_id": "5", "question_id": 1, "question": "What does Elastic Beanstalk do when you create an application?", "answer_span": "When you create an application, Elastic Beanstalk sets up the environments for you.", "chunk": "deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. When you create the example application, Elastic Beanstalk creates the following resources: • EC2 instance – An Amazon EC2 virtual machine configured to run web apps on the platform you selected. Every platform runs a different set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination thereof. Most platforms use either Apache or nginx as a reverse proxy to forward web traffic to your web app, serve static assets, and generate access and error logs. You can connect to your Amazon EC2 instances to view configuration and logs. Step 2 - Deploy your application 10 AWS Elastic Beanstalk Developer Guide • Instance security group – An Amazon EC2 security group will be created to allow incoming requests on port 80, so inbound traffic on a load balancer can reach your web app. • Amazon S3 bucket – A storage location for your source code, logs, and other artifacts. • Amazon CloudWatch alarms – Two CloudWatch alarms are created to monitor the load on your instances and scale them up or down as needed. • AWS CloudFormation stack – Elastic Beanstalk uses AWS CloudFormation to deploy the resources in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then"} +{"global_id": 101, "doc_id": "beanstalk", "chunk_id": "5", "question_id": 2, "question": "How long can the initial deploy take?", "answer_span": "The initial deploy can take up to five minutes to create the resources.", "chunk": "deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. 
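Rather than waiting blind, you can poll the launch from the command line; a small sketch using the environment name from this tutorial:
  aws elasticbeanstalk describe-environments --environment-names gs-app-web-env --query "Environments[0].[Status,Health]" --output text
  aws elasticbeanstalk describe-events --environment-name gs-app-web-env --max-items 5   # most recent launch events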
The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. When you create the example application, Elastic Beanstalk creates the following resources: • EC2 instance – An Amazon EC2 virtual machine configured to run web apps on the platform you selected. Every platform runs a different set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination thereof. Most platforms use either Apache or nginx as a reverse proxy to forward web traffic to your web app, serve static assets, and generate access and error logs. You can connect to your Amazon EC2 instances to view configuration and logs. Step 2 - Deploy your application 10 AWS Elastic Beanstalk Developer Guide • Instance security group – An Amazon EC2 security group will be created to allow incoming requests on port 80, so inbound traffic on a load balancer can reach your web app. • Amazon S3 bucket – A storage location for your source code, logs, and other artifacts. • Amazon CloudWatch alarms – Two CloudWatch alarms are created to monitor the load on your instances and scale them up or down as needed. • AWS CloudFormation stack – Elastic Beanstalk uses AWS CloudFormation to deploy the resources in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then"} +{"global_id": 102, "doc_id": "beanstalk", "chunk_id": "5", "question_id": 3, "question": "What type of virtual machine does Elastic Beanstalk create?", "answer_span": "An Amazon EC2 virtual machine configured to run web apps on the platform you selected.", "chunk": "deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. When you create the example application, Elastic Beanstalk creates the following resources: • EC2 instance – An Amazon EC2 virtual machine configured to run web apps on the platform you selected. Every platform runs a different set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination thereof. Most platforms use either Apache or nginx as a reverse proxy to forward web traffic to your web app, serve static assets, and generate access and error logs. You can connect to your Amazon EC2 instances to view configuration and logs. Step 2 - Deploy your application 10 AWS Elastic Beanstalk Developer Guide • Instance security group – An Amazon EC2 security group will be created to allow incoming requests on port 80, so inbound traffic on a load balancer can reach your web app. • Amazon S3 bucket – A storage location for your source code, logs, and other artifacts. • Amazon CloudWatch alarms – Two CloudWatch alarms are created to monitor the load on your instances and scale them up or down as needed. • AWS CloudFormation stack – Elastic Beanstalk uses AWS CloudFormation to deploy the resources in your environment and make configuration changes. 
You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then"} +{"global_id": 103, "doc_id": "beanstalk", "chunk_id": "5", "question_id": 4, "question": "What is the purpose of the Amazon S3 bucket created by Elastic Beanstalk?", "answer_span": "A storage location for your source code, logs, and other artifacts.", "chunk": "deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. When you create the example application, Elastic Beanstalk creates the following resources: • EC2 instance – An Amazon EC2 virtual machine configured to run web apps on the platform you selected. Every platform runs a different set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination thereof. Most platforms use either Apache or nginx as a reverse proxy to forward web traffic to your web app, serve static assets, and generate access and error logs. You can connect to your Amazon EC2 instances to view configuration and logs. Step 2 - Deploy your application 10 AWS Elastic Beanstalk Developer Guide • Instance security group – An Amazon EC2 security group will be created to allow incoming requests on port 80, so inbound traffic on a load balancer can reach your web app. • Amazon S3 bucket – A storage location for your source code, logs, and other artifacts. • Amazon CloudWatch alarms – Two CloudWatch alarms are created to monitor the load on your instances and scale them up or down as needed. • AWS CloudFormation stack – Elastic Beanstalk uses AWS CloudFormation to deploy the resources in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then"} +{"global_id": 104, "doc_id": "beanstalk", "chunk_id": "6", "question_id": 1, "question": "What is the format of the domain name that routes to your web app?", "answer_span": "A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com.", "chunk": "in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment. During the process, the console tracks progress and displays event status in the Events tab. Step 2 - Deploy your application 11 AWS Elastic Beanstalk Developer Guide After all of the resources are deployed, the environment's health should change to Ok. Step 2 - Deploy your application 12 AWS Elastic Beanstalk Developer Guide Your application is ready! After you see your application health change to Ok, you can browse to your web application's website. 
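One way to find and probe the environment's URL without the console, assuming only the tutorial's environment name:
  CNAME=$(aws elasticbeanstalk describe-environments --environment-names gs-app-web-env --query "Environments[0].CNAME" --output text)
  curl -I "http://$CNAME"   # plain HTTP, which is why a browser may warn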
Step 3 - Explore the Elastic Beanstalk environment You'll start exploring your deployed application environment from the Environment overview page in the console. To view the environment and your application 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. Choose Go to environment to browse your application! (You can also choose the URL link listed for Domain to browse your application.) The connection will be HTTP (not HTTPS), so you might see a warning in your browser. Step 3 - Explore the environment 13 AWS Elastic Beanstalk Developer Guide Back in the Elastic Beanstalk console, the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you"} +{"global_id": 105, "doc_id": "beanstalk", "chunk_id": "6", "question_id": 2, "question": "What happens after all of the resources are deployed?", "answer_span": "the environment's health should change to Ok.", "chunk": "in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment. During the process, the console tracks progress and displays event status in the Events tab. Step 2 - Deploy your application 11 AWS Elastic Beanstalk Developer Guide After all of the resources are deployed, the environment's health should change to Ok. Step 2 - Deploy your application 12 AWS Elastic Beanstalk Developer Guide Your application is ready! After you see your application health change to Ok, you can browse to your web application's website. Step 3 - Explore the Elastic Beanstalk environment You'll start exploring your deployed application environment from the Environment overview page in the console. To view the environment and your application 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. Choose Go to environment to browse your application! (You can also choose the URL link listed for Domain to browse your application.) The connection will be HTTP (not HTTPS), so you might see a warning in your browser. Step 3 - Explore the environment 13 AWS Elastic Beanstalk Developer Guide Back in the Elastic Beanstalk console, the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. 
After the overview pane, you"} +{"global_id": 106, "doc_id": "beanstalk", "chunk_id": "6", "question_id": 3, "question": "How can you start exploring your deployed application environment?", "answer_span": "You'll start exploring your deployed application environment from the Environment overview page in the console.", "chunk": "in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment. During the process, the console tracks progress and displays event status in the Events tab. Step 2 - Deploy your application 11 AWS Elastic Beanstalk Developer Guide After all of the resources are deployed, the environment's health should change to Ok. Step 2 - Deploy your application 12 AWS Elastic Beanstalk Developer Guide Your application is ready! After you see your application health change to Ok, you can browse to your web application's website. Step 3 - Explore the Elastic Beanstalk environment You'll start exploring your deployed application environment from the Environment overview page in the console. To view the environment and your application 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. Choose Go to environment to browse your application! (You can also choose the URL link listed for Domain to browse your application.) The connection will be HTTP (not HTTPS), so you might see a warning in your browser. Step 3 - Explore the environment 13 AWS Elastic Beanstalk Developer Guide Back in the Elastic Beanstalk console, the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you"} +{"global_id": 107, "doc_id": "beanstalk", "chunk_id": "6", "question_id": 4, "question": "What information is shown in the Environment overview of the Elastic Beanstalk console?", "answer_span": "the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on.", "chunk": "in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment. During the process, the console tracks progress and displays event status in the Events tab. Step 2 - Deploy your application 11 AWS Elastic Beanstalk Developer Guide After all of the resources are deployed, the environment's health should change to Ok. Step 2 - Deploy your application 12 AWS Elastic Beanstalk Developer Guide Your application is ready! After you see your application health change to Ok, you can browse to your web application's website. 
Step 3 - Explore the Elastic Beanstalk environment You'll start exploring your deployed application environment from the Environment overview page in the console. To view the environment and your application 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. Choose Go to environment to browse your application! (You can also choose the URL link listed for Domain to browse your application.) The connection will be HTTP (not HTTPS), so you might see a warning in your browser. Step 3 - Explore the environment 13 AWS Elastic Beanstalk Developer Guide Back in the Elastic Beanstalk console, the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you"} +{"global_id": 108, "doc_id": "beanstalk", "chunk_id": "7", "question_id": 1, "question": "What information is included in the Environment overview?", "answer_span": "the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on.", "chunk": "the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application, the environment is in a Pending state. Status messages about launch events are continuously added to the list of Events . The environment's Domain is the URL for your deployed web application. In the left navigation pane, Go to environment also takes you to your domain. Similarly, the left navigation pane has links that correspond to the various tabs. Take note of the Configuration link in the left navigation pane. which displays a summary of environment configuration option values, grouped by category. Environment configuration settings Take note of the Configuration link in the left navigation pane. You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more! The various tabs contain detailed information about your environment: Step 3 - Explore the environment 15 AWS Elastic Beanstalk Developer Guide • Events – View an updating list of information and error messages from the Elastic Beanstalk service and other services for resources in your environment. • Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. 
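The same launch information surfaced in the console's overview and Events list can be followed from an EB CLI project, as a sketch:
  eb status gs-app-web-env   # status, health, running version, and CNAME
  eb events -f               # stream events while the environment is Pending
  eb health gs-app-web-env   # per-instance health detail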
• Monitoring –"} +{"global_id": 109, "doc_id": "beanstalk", "chunk_id": "7", "question_id": 2, "question": "What is the status of the environment while Elastic Beanstalk is launching the application?", "answer_span": "the environment is in a Pending state.", "chunk": "the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application, the environment is in a Pending state. Status messages about launch events are continuously added to the list of Events . The environment's Domain is the URL for your deployed web application. In the left navigation pane, Go to environment also takes you to your domain. Similarly, the left navigation pane has links that correspond to the various tabs. Take note of the Configuration link in the left navigation pane. which displays a summary of environment configuration option values, grouped by category. Environment configuration settings Take note of the Configuration link in the left navigation pane. You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more! The various tabs contain detailed information about your environment: Step 3 - Explore the environment 15 AWS Elastic Beanstalk Developer Guide • Events – View an updating list of information and error messages from the Elastic Beanstalk service and other services for resources in your environment. • Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. • Monitoring –"} +{"global_id": 110, "doc_id": "beanstalk", "chunk_id": "7", "question_id": 3, "question": "What can you view and edit in the Configuration link?", "answer_span": "You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more!", "chunk": "the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application, the environment is in a Pending state. Status messages about launch events are continuously added to the list of Events . The environment's Domain is the URL for your deployed web application. In the left navigation pane, Go to environment also takes you to your domain. Similarly, the left navigation pane has links that correspond to the various tabs. 
Take note of the Configuration link in the left navigation pane, which displays a summary of environment configuration option values, grouped by category. Environment configuration settings Take note of the Configuration link in the left navigation pane. You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more! The various tabs contain detailed information about your environment: Step 3 - Explore the environment 15 AWS Elastic Beanstalk Developer Guide • Events – View an updating list of information and error messages from the Elastic Beanstalk service and other services for resources in your environment. • Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 instances in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. • Monitoring –"} +{"global_id": 110, "doc_id": "beanstalk", "chunk_id": "7", "question_id": 3, "question": "What can you view and edit in the Configuration link?", "answer_span": "You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more!", "chunk": "the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application, the environment is in a Pending state. Status messages about launch events are continuously added to the list of Events. The environment's Domain is the URL for your deployed web application. In the left navigation pane, Go to environment also takes you to your domain. Similarly, the left navigation pane has links that correspond to the various tabs. Take note of the Configuration link in the left navigation pane, which displays a summary of environment configuration option values, grouped by category. Environment configuration settings Take note of the Configuration link in the left navigation pane. You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more! The various tabs contain detailed information about your environment: Step 3 - Explore the environment 15 AWS Elastic Beanstalk Developer Guide • Events – View an updating list of information and error messages from the Elastic Beanstalk service and other services for resources in your environment. • Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 instances in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. 
• Monitoring –"} +{"global_id": 112, "doc_id": "beanstalk", "chunk_id": "8", "question_id": 1, "question": "What can you view regarding the health of Amazon EC2 instances?", "answer_span": "View status and detailed health information for the Amazon EC2 instances running your application.", "chunk": "• Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. • Monitoring – View statistics for the environment, such as average latency and CPU utilization. • Alarms – View and edit alarms that are configured for environment metrics. • Managed updates – View information about upcoming and completed managed platform updates and instance replacement. • Tags – View and edit key-value pairs that are applied to your environment. Note Links in the console navigation pane will display the corresponding tab. Troubleshooting with logs For troubleshooting unexpected behaviors or debugging deployments, you might want to check the logs in your environments. You can request 100 lines of all the log files under the Logs tab in the Elastic Beanstalk console. Alternatively, you can connect directly to the Amazon EC2 instance and tail the logs in realtime. To request the logs (Elastic Beanstalk console) 1. Navigate to your environment in the Elastic Beanstalk console. 2. Choose the Logs tab or left-nav, then choose Request logs. 3. Select Last 100 lines. 4. After the logs are created, choose the Download link to view the logs in the browser. In the logs, find the log and note the directory for the nginx access log. Troubleshooting with logs 16 AWS Elastic Beanstalk Developer Guide Add a policy to enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that"} +{"global_id": 113, "doc_id": "beanstalk", "chunk_id": "8", "question_id": 2, "question": "How long are the retrieved logs available for?", "answer_span": "The retrieved logs are available for 15 minutes.", "chunk": "• Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. • Monitoring – View statistics for the environment, such as average latency and CPU utilization. • Alarms – View and edit alarms that are configured for environment metrics. • Managed updates – View information about upcoming and completed managed platform updates and instance replacement. • Tags – View and edit key-value pairs that are applied to your environment. Note Links in the console navigation pane will display the corresponding tab. Troubleshooting with logs For troubleshooting unexpected behaviors or debugging deployments, you might want to check the logs in your environments. You can request 100 lines of all the log files under the Logs tab in the Elastic Beanstalk console. Alternatively, you can connect directly to the Amazon EC2 instance and tail the logs in realtime. To request the logs (Elastic Beanstalk console) 1. Navigate to your environment in the Elastic Beanstalk console. 2. 
Choose the Logs tab or left-nav, then choose Request logs. 3. Select Last 100 lines. 4. After the logs are created, choose the Download link to view the logs in the browser. In the logs, find the log and note the directory for the nginx access log. Troubleshooting with logs 16 AWS Elastic Beanstalk Developer Guide Add a policy to enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that"} +{"global_id": 114, "doc_id": "beanstalk", "chunk_id": "8", "question_id": 3, "question": "What is the first step to request logs in the Elastic Beanstalk console?", "answer_span": "Navigate to your environment in the Elastic Beanstalk console.", "chunk": "• Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. • Monitoring – View statistics for the environment, such as average latency and CPU utilization. • Alarms – View and edit alarms that are configured for environment metrics. • Managed updates – View information about upcoming and completed managed platform updates and instance replacement. • Tags – View and edit key-value pairs that are applied to your environment. Note Links in the console navigation pane will display the corresponding tab. Troubleshooting with logs For troubleshooting unexpected behaviors or debugging deployments, you might want to check the logs in your environments. You can request 100 lines of all the log files under the Logs tab in the Elastic Beanstalk console. Alternatively, you can connect directly to the Amazon EC2 instance and tail the logs in realtime. To request the logs (Elastic Beanstalk console) 1. Navigate to your environment in the Elastic Beanstalk console. 2. Choose the Logs tab or left-nav, then choose Request logs. 3. Select Last 100 lines. 4. After the logs are created, choose the Download link to view the logs in the browser. In the logs, find the log and note the directory for the nginx access log. Troubleshooting with logs 16 AWS Elastic Beanstalk Developer Guide Add a policy to enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that"} +{"global_id": 115, "doc_id": "beanstalk", "chunk_id": "8", "question_id": 4, "question": "What must you do before connecting to Amazon EC2 with Session Manager?", "answer_span": "Add a policy that enables connections to Amazon EC2 with Session Manager.", "chunk": "• Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. • Monitoring – View statistics for the environment, such as average latency and CPU utilization. • Alarms – View and edit alarms that are configured for environment metrics. 
• Managed updates – View information about upcoming and completed managed platform updates and instance replacement. • Tags – View and edit key-value pairs that are applied to your environment. Note: Links in the console navigation pane will display the corresponding tab. Troubleshooting with logs For troubleshooting unexpected behaviors or debugging deployments, you might want to check the logs in your environments. You can request 100 lines of all the log files under the Logs tab in the Elastic Beanstalk console. Alternatively, you can connect directly to the Amazon EC2 instance and tail the logs in real time. To request the logs (Elastic Beanstalk console) 1. Navigate to your environment in the Elastic Beanstalk console. 2. Choose the Logs tab (or Logs in the left navigation pane), then choose Request logs. 3. Select Last 100 lines. 4. After the logs are created, choose the Download link to view the logs in the browser. In the logs, find the nginx access log and note its directory. Troubleshooting with logs 16 AWS Elastic Beanstalk Developer Guide Add a policy to enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that"} +{"global_id": 116, "doc_id": "beanstalk", "chunk_id": "9", "question_id": 1, "question": "What must you add to enable connections to Amazon EC2 with Session Manager?", "answer_span": "you must add a policy that enables connections to Amazon EC2 with Session Manager.", "chunk": "enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that begins with the following text: AmazonSSMManagedEC2Instance, then add it to the role. To connect to your Amazon EC2 with Session Manager 1. Navigate to the Amazon EC2 console. 2. Choose Instances, then select your gs-app-web-env instance. 3. Choose Connect, then Session Manager. 4. Choose Connect. After connecting to the instance, start a bash shell and tail the logs: 1. Run the command bash. 2. Run the command cd /var/log/nginx. 3. Run the command tail -f access.log. 4. In your browser, go to the application domain URL. Refresh. Congratulations, you're connected! You should see log entries in your instance update every time you refresh the page. Connect button not working? If the connect button is not available, go back to IAM and verify that you added the necessary policy to the role. Troubleshooting with logs 17 AWS Elastic Beanstalk Developer Guide Step 4 - Update your application Eventually, you will want to update your application. You can deploy a new version at any time, as long as no other update operations are in progress on your environment. The application version that you started this tutorial with is called Sample Application. To update your application version 1. Download the following PHP sample application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. 
Select Choose"} +{"global_id": 117, "doc_id": "beanstalk", "chunk_id": "9", "question_id": 2, "question": "What is the first step to connect to your Amazon EC2 with Session Manager?", "answer_span": "Navigate to the Amazon EC2 console.", "chunk": "enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that begins with the following text: AmazonSSMManagedEC2Instance, then add it to the role. To connect to your Amazon EC2 with Session Manager 1. Navigate to the Amazon EC2 console. 2. Choose Instances, then select your gs-app-web-env instance. 3. Choose Connect, then Session Manager. 4. Choose Connect. After connecting to the instance, start a bash shell and tail the logs: 1. Run the command bash. 2. Run the command cd /var/log/nginx. 3. Run the command tail -f access.log. 4. In your browser, go to the application domain URL. Refresh. Congratulations, you're connected! You should see log entries in your instance update every time you refresh the page. Connect button not working? If the connect button is not available, go back to IAM and verify that you added the necessary policy to the role. Troubleshooting with logs 17 AWS Elastic Beanstalk Developer Guide Step 4 - Update your application Eventually, you will want to update your application. You can deploy a new version at any time, as long as no other update operations are in progress on your environment. The application version that you started this tutorial with is called Sample Application. To update your application version 1. Download the following PHP sample application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. Select Choose"} +{"global_id": 118, "doc_id": "beanstalk", "chunk_id": "9", "question_id": 3, "question": "What command should you run to start a bash shell after connecting to the instance?", "answer_span": "Run the command bash.", "chunk": "enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that begins with the following text: AmazonSSMManagedEC2Instance, then add it to the role. To connect to your Amazon EC2 with Session Manager 1. Navigate to the Amazon EC2 console. 2. Choose Instances, then select your gs-app-web-env instance. 3. Choose Connect, then Session Manager. 4. Choose Connect. After connecting to the instance, start a bash shell and tail the logs: 1. Run the command bash. 2. Run the command cd /var/log/nginx. 3. Run the command tail -f access.log. 4. In your browser, go to the application domain URL. Refresh. Congratulations, you're connected! You should see log entries in your instance update every time you refresh the page. Connect button not working? If the connect button is not available, go back to IAM and verify that you added the necessary policy to the role. 
Troubleshooting with logs 17 AWS Elastic Beanstalk Developer Guide Step 4 - Update your application Eventually, you will want to update your application. You can deploy a new version at any time, as long as no other update operations are in progress on your environment. The application version that you started this tutorial with is called Sample Application. To update your application version 1. Download the following PHP sample application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. Select Choose"} +{"global_id": 119, "doc_id": "beanstalk", "chunk_id": "9", "question_id": 4, "question": "What should you do if the connect button is not available?", "answer_span": "go back to IAM and verify that you added the necessary policy to the role.", "chunk": "enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that begins with the following text: AmazonSSMManagedEC2Instance, then add it to the role. To connect to your Amazon EC2 with Session Manager 1. Navigate to the Amazon EC2 console. 2. Choose Instances, then select your gs-app-web-env instance. 3. Choose Connect, then Session Manager. 4. Choose Connect. After connecting to the instance, start a bash shell and tail the logs: 1. Run the command bash. 2. Run the command cd /var/log/nginx. 3. Run the command tail -f access.log. 4. In your browser, go to the application domain URL. Refresh. Congratulations, you're connected! You should see log entries in your instance update every time you refresh the page. Connect button not working? If the connect button is not available, go back to IAM and verify that you added the necessary policy to the role. Troubleshooting with logs 17 AWS Elastic Beanstalk Developer Guide Step 4 - Update your application Eventually, you will want to update your application. You can deploy a new version at any time, as long as no other update operations are in progress on your environment. The application version that you started this tutorial with is called Sample Application. To update your application version 1. Download the following PHP sample application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. Select Choose"} +{"global_id": 120, "doc_id": "beanstalk", "chunk_id": "10", "question_id": 1, "question": "What is the first step to deploy an application using Elastic Beanstalk?", "answer_span": "Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region.", "chunk": "application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. Select Choose file, and then upload the sample application source bundle that you downloaded. 
The console automatically fills in the Version label with a new unique label, automatically incrementing a trailing integer. If you choose your own version label, ensure that it's unique. Step 4 - Update your application 18 AWS Elastic Beanstalk 6. Developer Guide Choose Deploy. While Elastic Beanstalk deploys your file to your Amazon EC2 instances, you can view the deployment status on the Environment overview page. While the application version is updated, the environment Health status is gray. When the deployment is complete, Elastic Beanstalk performs an application health check. When the application responds to the health check, it's considered healthy and the status returns to green. The environment overview shows the new Running Version—the name you provided as the Version label. Elastic Beanstalk also uploads your new application version and adds it to the table of application versions. To view the table, choose Application versions under getting-started-app on the navigation pane. Update success! You should see an updated \"v2\" message after refreshing your browser. If you want to edit the source yourself, unzip, edit, then re-zip the source bundle. On macOS, use the following command from inside your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud"} +{"global_id": 121, "doc_id": "beanstalk", "chunk_id": "10", "question_id": 2, "question": "What happens to the environment Health status while the application version is updated?", "answer_span": "While the application version is updated, the environment Health status is gray.", "chunk": "application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. Select Choose file, and then upload the sample application source bundle that you downloaded. The console automatically fills in the Version label with a new unique label, automatically incrementing a trailing integer. If you choose your own version label, ensure that it's unique. Step 4 - Update your application 18 AWS Elastic Beanstalk 6. Developer Guide Choose Deploy. While Elastic Beanstalk deploys your file to your Amazon EC2 instances, you can view the deployment status on the Environment overview page. While the application version is updated, the environment Health status is gray. When the deployment is complete, Elastic Beanstalk performs an application health check. When the application responds to the health check, it's considered healthy and the status returns to green. The environment overview shows the new Running Version—the name you provided as the Version label. Elastic Beanstalk also uploads your new application version and adds it to the table of application versions. To view the table, choose Application versions under getting-started-app on the navigation pane. Update success! You should see an updated \"v2\" message after refreshing your browser. If you want to edit the source yourself, unzip, edit, then re-zip the source bundle. On macOS, use the following command from inside your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . 
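For readers who prefer a terminal, deploying the re-zipped bundle has a CLI equivalent. A sketch assuming the application (getting-started-app) and environment (gs-app-web-env) names from this guide, with a placeholder S3 bucket:

```bash
# Upload the new source bundle; amzn-s3-demo-bucket is a placeholder bucket you own.
aws s3 cp php-v2.zip s3://amzn-s3-demo-bucket/php-v2.zip

# Register the bundle as application version v2.
aws elasticbeanstalk create-application-version \
  --application-name getting-started-app \
  --version-label v2 \
  --source-bundle S3Bucket=amzn-s3-demo-bucket,S3Key=php-v2.zip

# Deploy v2 to the running environment.
aws elasticbeanstalk update-environment \
  --environment-name gs-app-web-env \
  --version-label v2
```

As in the console flow, the version label must be unique within the application.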
Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud"} +{"global_id": 122, "doc_id": "beanstalk", "chunk_id": "10", "question_id": 3, "question": "What should you do if you want to edit the source of the application?", "answer_span": "If you want to edit the source yourself, unzip, edit, then re-zip the source bundle.", "chunk": "application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. Select Choose file, and then upload the sample application source bundle that you downloaded. The console automatically fills in the Version label with a new unique label, automatically incrementing a trailing integer. If you choose your own version label, ensure that it's unique. Step 4 - Update your application 18 AWS Elastic Beanstalk 6. Developer Guide Choose Deploy. While Elastic Beanstalk deploys your file to your Amazon EC2 instances, you can view the deployment status on the Environment overview page. While the application version is updated, the environment Health status is gray. When the deployment is complete, Elastic Beanstalk performs an application health check. When the application responds to the health check, it's considered healthy and the status returns to green. The environment overview shows the new Running Version—the name you provided as the Version label. Elastic Beanstalk also uploads your new application version and adds it to the table of application versions. To view the table, choose Application versions under getting-started-app on the navigation pane. Update success! You should see an updated \"v2\" message after refreshing your browser. If you want to edit the source yourself, unzip, edit, then re-zip the source bundle. On macOS, use the following command from inside your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud"} +{"global_id": 123, "doc_id": "beanstalk", "chunk_id": "10", "question_id": 4, "question": "What command should be used on macOS to re-zip the source bundle excluding extra file attributes?", "answer_span": "On macOS, use the following command from inside your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip .", "chunk": "application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. Select Choose file, and then upload the sample application source bundle that you downloaded. The console automatically fills in the Version label with a new unique label, automatically incrementing a trailing integer. If you choose your own version label, ensure that it's unique. Step 4 - Update your application 18 AWS Elastic Beanstalk 6. Developer Guide Choose Deploy. 
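If you would rather watch the rollout from a terminal than from the Environment overview page, a rough sketch that polls enhanced health (assuming enhanced health reporting is enabled for gs-app-web-env):

```bash
# Poll enhanced health while the deployment runs; stop with Ctrl-C.
while true; do
  aws elasticbeanstalk describe-environment-health \
    --environment-name gs-app-web-env \
    --attribute-names Status Color Causes
  sleep 10
done
```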
While Elastic Beanstalk deploys your file to your Amazon EC2 instances, you can view the deployment status on the Environment overview page. While the application version is updated, the environment Health status is gray. When the deployment is complete, Elastic Beanstalk performs an application health check. When the application responds to the health check, it's considered healthy and the status returns to green. The environment overview shows the new Running Version—the name you provided as the Version label. Elastic Beanstalk also uploads your new application version and adds it to the table of application versions. To view the table, choose Application versions under getting-started-app on the navigation pane. Update success! You should see an updated \"v2\" message after refreshing your browser. If you want to edit the source yourself, unzip, edit, then re-zip the source bundle. On macOS, use the following command from inside your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud"} +{"global_id": 124, "doc_id": "beanstalk", "chunk_id": "11", "question_id": 1, "question": "What command is used to zip the PHP directory while excluding extra file attributes?", "answer_span": "zip -X -r ../php-v2.zip .", "chunk": "your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud (Amazon EC2) instance that is running your application. To apply configuration changes, Elastic Beanstalk performs an environment update. Some configuration changes are simple and happen quickly. Some changes require deleting and recreating AWS resources, which can take several minutes. When you change configuration settings, Elastic Beanstalk warns you about potential application downtime. Step 5 - Scale your application 19 AWS Elastic Beanstalk Developer Guide Increase capacity settings In this example of a configuration change, you edit your environment's capacity settings. You configure a load-balanced, scalable environment that has between two and four Amazon EC2 instances in its Auto Scaling group, and then you verify that the change occurred. Elastic Beanstalk creates an additional Amazon EC2 instance, adding to the single instance that it created initially. Then, Elastic Beanstalk associates both instances with the environment's load balancer. As a result, your application's responsiveness is improved and its availability is increased. To change your environment's capacity 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. In the navigation pane, choose Configuration. 4. In the Instance traffic and scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. 
Increase capacity settings 20 AWS Elastic Beanstalk 7."} +{"global_id": 125, "doc_id": "beanstalk", "chunk_id": "11", "question_id": 2, "question": "What does Elastic Beanstalk do to apply configuration changes?", "answer_span": "Elastic Beanstalk performs an environment update.", "chunk": "your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud (Amazon EC2) instance that is running your application. To apply configuration changes, Elastic Beanstalk performs an environment update. Some configuration changes are simple and happen quickly. Some changes require deleting and recreating AWS resources, which can take several minutes. When you change configuration settings, Elastic Beanstalk warns you about potential application downtime. Step 5 - Scale your application 19 AWS Elastic Beanstalk Developer Guide Increase capacity settings In this example of a configuration change, you edit your environment's capacity settings. You configure a load-balanced, scalable environment that has between two and four Amazon EC2 instances in its Auto Scaling group, and then you verify that the change occurred. Elastic Beanstalk creates an additional Amazon EC2 instance, adding to the single instance that it created initially. Then, Elastic Beanstalk associates both instances with the environment's load balancer. As a result, your application's responsiveness is improved and its availability is increased. To change your environment's capacity 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. In the navigation pane, choose Configuration. 4. In the Instance traffic and scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. Increase capacity settings 20 AWS Elastic Beanstalk 7."} +{"global_id": 126, "doc_id": "beanstalk", "chunk_id": "11", "question_id": 3, "question": "How many Amazon EC2 instances can be configured in the Auto Scaling group?", "answer_span": "between two and four Amazon EC2 instances in its Auto Scaling group", "chunk": "your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud (Amazon EC2) instance that is running your application. To apply configuration changes, Elastic Beanstalk performs an environment update. Some configuration changes are simple and happen quickly. Some changes require deleting and recreating AWS resources, which can take several minutes. When you change configuration settings, Elastic Beanstalk warns you about potential application downtime. Step 5 - Scale your application 19 AWS Elastic Beanstalk Developer Guide Increase capacity settings In this example of a configuration change, you edit your environment's capacity settings. 
You configure a load-balanced, scalable environment that has between two and four Amazon EC2 instances in its Auto Scaling group, and then you verify that the change occurred. Elastic Beanstalk creates an additional Amazon EC2 instance, adding to the single instance that it created initially. Then, Elastic Beanstalk associates both instances with the environment's load balancer. As a result, your application's responsiveness is improved and its availability is increased. To change your environment's capacity 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. In the navigation pane, choose Configuration. 4. In the Instance traffic and scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. Increase capacity settings 20 AWS Elastic Beanstalk 7."} +{"global_id": 127, "doc_id": "beanstalk", "chunk_id": "11", "question_id": 4, "question": "What should you change the Environment type to in the Auto Scaling group?", "answer_span": "change Environment type to Load balanced.", "chunk": "your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud (Amazon EC2) instance that is running your application. To apply configuration changes, Elastic Beanstalk performs an environment update. Some configuration changes are simple and happen quickly. Some changes require deleting and recreating AWS resources, which can take several minutes. When you change configuration settings, Elastic Beanstalk warns you about potential application downtime. Step 5 - Scale your application 19 AWS Elastic Beanstalk Developer Guide Increase capacity settings In this example of a configuration change, you edit your environment's capacity settings. You configure a load-balanced, scalable environment that has between two and four Amazon EC2 instances in its Auto Scaling group, and then you verify that the change occurred. Elastic Beanstalk creates an additional Amazon EC2 instance, adding to the single instance that it created initially. Then, Elastic Beanstalk associates both instances with the environment's load balancer. As a result, your application's responsiveness is improved and its availability is increased. To change your environment's capacity 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. In the navigation pane, choose Configuration. 4. In the Instance traffic and scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. 
Increase capacity settings 20 AWS Elastic Beanstalk 7."} +{"global_id": 128, "doc_id": "beanstalk", "chunk_id": "12", "question_id": 1, "question": "What should you do to change the environment type to Load balanced?", "answer_span": "Under Auto Scaling group, change Environment type to Load balanced.", "chunk": "scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group, change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. Increase capacity settings 20 AWS Elastic Beanstalk 7. Developer Guide To save the changes, choose Apply at the bottom of the page. You might be warned that the update will replace all of your current instances. Choose Confirm. The environment update can take a few minutes. You should see several updates in the list of events. Watch for the event Successfully deployed new configuration to environment. Verify increased capacity After the environment update is complete and the environment is ready, Elastic Beanstalk automatically launches a second instance to meet your new minimum capacity setting. To verify the increased capacity 1. Choose Health from either the tab list or left navigation pane. 2. Review the Enhanced instance health section. You just scaled up! With two Amazon EC2 instances, your environment capacity has doubled, and it only took a few minutes. Cleaning up your Elastic Beanstalk environment To ensure that you're not charged for any services you aren't using, delete all application versions and terminate environments, which also deletes the AWS resources that the environment created for you. Verify increased capacity 21"} +{"global_id": 129, "doc_id": "beanstalk", "chunk_id": "12", "question_id": 2, "question": "What is the new minimum and maximum capacity setting after the changes?", "answer_span": "In the Instances row, change Min to 2 and Max to 4.", "chunk": "scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group, change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. Increase capacity settings 20 AWS Elastic Beanstalk 7. Developer Guide To save the changes, choose Apply at the bottom of the page. You might be warned that the update will replace all of your current instances. Choose Confirm. The environment update can take a few minutes. You should see several updates in the list of events. Watch for the event Successfully deployed new configuration to environment. Verify increased capacity After the environment update is complete and the environment is ready, Elastic Beanstalk automatically launches a second instance to meet your new minimum capacity setting. To verify the increased capacity 1. Choose Health from either the tab list or left navigation pane. 2. Review the Enhanced instance health section. You just scaled up! With two Amazon EC2 instances, your environment capacity has doubled, and it only took a few minutes. Cleaning up your Elastic Beanstalk environment To ensure that you're not charged for any services you aren't using, delete all application versions and terminate environments, which also deletes the AWS resources that the environment created for you. 
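The capacity change walked through above can also be expressed as option settings in a single AWS CLI call. A sketch for the gs-app-web-env environment from this guide:

```bash
# Switch to a load-balanced environment and set the Auto Scaling group
# to a minimum of 2 and a maximum of 4 instances.
aws elasticbeanstalk update-environment \
  --environment-name gs-app-web-env \
  --option-settings \
    Namespace=aws:elasticbeanstalk:environment,OptionName=EnvironmentType,Value=LoadBalanced \
    Namespace=aws:autoscaling:asg,OptionName=MinSize,Value=2 \
    Namespace=aws:autoscaling:asg,OptionName=MaxSize,Value=4
```

As with the console flow, switching the environment type replaces instances, so expect the environment to spend a few minutes updating.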
Verify increased capacity 21"} +{"global_id": 130, "doc_id": "beanstalk", "chunk_id": "12", "question_id": 3, "question": "What should you do if warned that the update will replace all current instances?", "answer_span": "Choose Confirm.", "chunk": "scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. Increase capacity settings 20 AWS Elastic Beanstalk 7. Developer Guide To save the changes choose Apply at the bottom of the page. If you are warned that the update will replace all of your current instances. Choose Confirm. The environment update can take a few minutes. You should see several updates in the list of events. Watch for the event Successfully deployed new configuration to environment. Verify increased capacity After the environment update is complete and the environment is ready, Elastic Beanstalk automatically launched a second instance to meet your new minimum capacity setting. To verify the increased capacity 1. Choose Health from either the tab list or left navigation pane. 2. Review the Enhanced instance health section. You just scaled up! With two Amazon EC2 instances, your environment capacity has doubled, and it only took a few minutes. Cleaning up your Elastic Beanstalk environment To ensure that you're not charged for any services you aren't using, delete all application versions and terminate environments, which also deletes the AWS resources that the environment created for you. Verify increased capacity 21"} +{"global_id": 131, "doc_id": "beanstalk", "chunk_id": "12", "question_id": 4, "question": "How can you verify that the capacity has increased after the environment update?", "answer_span": "To verify the increased capacity 1. Choose Health from either the tab list or left navigation pane.", "chunk": "scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. Increase capacity settings 20 AWS Elastic Beanstalk 7. Developer Guide To save the changes choose Apply at the bottom of the page. If you are warned that the update will replace all of your current instances. Choose Confirm. The environment update can take a few minutes. You should see several updates in the list of events. Watch for the event Successfully deployed new configuration to environment. Verify increased capacity After the environment update is complete and the environment is ready, Elastic Beanstalk automatically launched a second instance to meet your new minimum capacity setting. To verify the increased capacity 1. Choose Health from either the tab list or left navigation pane. 2. Review the Enhanced instance health section. You just scaled up! With two Amazon EC2 instances, your environment capacity has doubled, and it only took a few minutes. Cleaning up your Elastic Beanstalk environment To ensure that you're not charged for any services you aren't using, delete all application versions and terminate environments, which also deletes the AWS resources that the environment created for you. 
Verify increased capacity 21"} +{"global_id": 132, "doc_id": "fargate", "chunk_id": "0", "question_id": 1, "question": "What technology does AWS Fargate use with Amazon ECS?", "answer_span": "AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances.", "chunk": "Amazon Elastic Container Service Developer Guide AWS Fargate for Amazon ECS AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. When you run your tasks and services with the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task. You configure your task definitions for Fargate by setting the requiresCompatibilities task definition parameter to FARGATE. For more information, see Launch types. Fargate offers platform versions for Amazon Linux 2 (platform version 1.3.0), Bottlerocket operating system (platform version 1.4.0), and Microsoft Windows 2019 Server Full and Core editions.Unless otherwise specified, the information on this page applies to all Fargate platforms. This topic describes the different components of Fargate tasks and services, and calls out special considerations for using Fargate with Amazon ECS. For information about the Regions that support Linux containers on Fargate, see the section called “Linux containers on AWS Fargate”. For information about the Regions that support Windows containers on Fargate, see the section called “Windows containers on AWS Fargate”. Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about"} +{"global_id": 133, "doc_id": "fargate", "chunk_id": "0", "question_id": 2, "question": "What do you no longer have to do when using AWS Fargate?", "answer_span": "With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers.", "chunk": "Amazon Elastic Container Service Developer Guide AWS Fargate for Amazon ECS AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. When you run your tasks and services with the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task. 
You configure your task definitions for Fargate by setting the requiresCompatibilities task definition parameter to FARGATE. For more information, see Launch types. Fargate offers platform versions for Amazon Linux 2 (platform version 1.3.0), Bottlerocket operating system (platform version 1.4.0), and Microsoft Windows 2019 Server Full and Core editions. Unless otherwise specified, the information on this page applies to all Fargate platforms. This topic describes the different components of Fargate tasks and services, and calls out special considerations for using Fargate with Amazon ECS. For information about the Regions that support Linux containers on Fargate, see the section called “Linux containers on AWS Fargate”. For information about the Regions that support Windows containers on Fargate, see the section called “Windows containers on AWS Fargate”. Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about"} +{"global_id": 133, "doc_id": "fargate", "chunk_id": "0", "question_id": 2, "question": "What do you no longer have to do when using AWS Fargate?", "answer_span": "With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers.", "chunk": "Amazon Elastic Container Service Developer Guide AWS Fargate for Amazon ECS AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. When you run your tasks and services with the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task. You configure your task definitions for Fargate by setting the requiresCompatibilities task definition parameter to FARGATE. For more information, see Launch types. Fargate offers platform versions for Amazon Linux 2 (platform version 1.3.0), Bottlerocket operating system (platform version 1.4.0), and Microsoft Windows 2019 Server Full and Core editions. Unless otherwise specified, the information on this page applies to all Fargate platforms. This topic describes the different components of Fargate tasks and services, and calls out special considerations for using Fargate with Amazon ECS. For information about the Regions that support Linux containers on Fargate, see the section called “Linux containers on AWS Fargate”. For information about the Regions that support Windows containers on Fargate, see the section called “Windows containers on AWS Fargate”. 
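To make the requiresCompatibilities setting concrete, here is a minimal sketch of registering a Fargate-compatible task definition. The family name, container image, and CPU/memory values are illustrative rather than taken from this guide; Fargate requires the awsvpc network mode and task-level CPU and memory:

```bash
# Register a minimal Fargate-compatible task definition.
# The cpu/memory pair must be one of the combinations Fargate supports.
cat > fargate-task.json <<'EOF'
{
  "family": "sample-fargate-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "public.ecr.aws/docker/library/httpd:latest",
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://fargate-task.json
```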
Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about"} +{"global_id": 135, "doc_id": "fargate", "chunk_id": "0", "question_id": 4, "question": "What operating systems does Fargate offer platform versions for?", "answer_span": "Fargate offers platform versions for Amazon Linux 2 (platform version 1.3.0), Bottlerocket operating system (platform version 1.4.0), and Microsoft Windows 2019 Server Full and Core editions.", "chunk": "Amazon Elastic Container Service Developer Guide AWS Fargate for Amazon ECS AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. When you run your tasks and services with the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task. You configure your task definitions for Fargate by setting the requiresCompatibilities task definition parameter to FARGATE. For more information, see Launch types. Fargate offers platform versions for Amazon Linux 2 (platform version 1.3.0), Bottlerocket operating system (platform version 1.4.0), and Microsoft Windows 2019 Server Full and Core editions.Unless otherwise specified, the information on this page applies to all Fargate platforms. This topic describes the different components of Fargate tasks and services, and calls out special considerations for using Fargate with Amazon ECS. For information about the Regions that support Linux containers on Fargate, see the section called “Linux containers on AWS Fargate”. For information about the Regions that support Windows containers on Fargate, see the section called “Windows containers on AWS Fargate”. Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about"} +{"global_id": 136, "doc_id": "fargate", "chunk_id": "1", "question_id": 1, "question": "What are the two types of Amazon ECS tasks mentioned for the Fargate launch type?", "answer_span": "• Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type", "chunk": "“Windows containers on AWS Fargate”. 
Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Walkthroughs 167 Amazon Elastic Container Service Developer Guide • Creating an Amazon ECS Windows task for the Fargate launch type with the AWS CLI Capacity providers The following capacity providers are available: • Fargate • Fargate Spot - Run interruption tolerant Amazon ECS tasks at a discounted rate compared to the AWS Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks will be interrupted with a two-minute warning. For more information, see Amazon ECS clusters for Fargate. Task definitions Tasks that use the Fargate launch type don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently for Fargate tasks. For more information, see Task CPU and memory. Platform versions AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each"} +{"global_id": 137, "doc_id": "fargate", "chunk_id": "1", "question_id": 2, "question": "What is Fargate Spot?", "answer_span": "Fargate Spot - Run interruption tolerant Amazon ECS tasks at a discounted rate compared to the AWS Fargate price.", "chunk": "“Windows containers on AWS Fargate”. Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Walkthroughs 167 Amazon Elastic Container Service Developer Guide • Creating an Amazon ECS Windows task for the Fargate launch type with the AWS CLI Capacity providers The following capacity providers are available: • Fargate • Fargate Spot - Run interruption tolerant Amazon ECS tasks at a discounted rate compared to the AWS Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks will be interrupted with a two-minute warning. For more information, see Amazon ECS clusters for Fargate. Task definitions Tasks that use the Fargate launch type don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently for Fargate tasks. For more information, see Task CPU and memory. Platform versions AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. 
You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each"} +{"global_id": 138, "doc_id": "fargate", "chunk_id": "1", "question_id": 3, "question": "What do tasks that use the Fargate launch type not support?", "answer_span": "Tasks that use the Fargate launch type don't support all of the Amazon ECS task definition parameters that are available.", "chunk": "“Windows containers on AWS Fargate”. Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Walkthroughs 167 Amazon Elastic Container Service Developer Guide • Creating an Amazon ECS Windows task for the Fargate launch type with the AWS CLI Capacity providers The following capacity providers are available: • Fargate • Fargate Spot - Run interruption tolerant Amazon ECS tasks at a discounted rate compared to the AWS Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks will be interrupted with a two-minute warning. For more information, see Amazon ECS clusters for Fargate. Task definitions Tasks that use the Fargate launch type don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently for Fargate tasks. For more information, see Task CPU and memory. Platform versions AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each"} +{"global_id": 139, "doc_id": "fargate", "chunk_id": "1", "question_id": 4, "question": "What is a Fargate platform version?", "answer_span": "AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure.", "chunk": "“Windows containers on AWS Fargate”. 
Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Walkthroughs 167 Amazon Elastic Container Service Developer Guide • Creating an Amazon ECS Windows task for the Fargate launch type with the AWS CLI Capacity providers The following capacity providers are available: • Fargate • Fargate Spot - Run interruption tolerant Amazon ECS tasks at a discounted rate compared to the AWS Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks will be interrupted with a two-minute warning. For more information, see Amazon ECS clusters for Fargate. Task definitions Tasks that use the Fargate launch type don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently for Fargate tasks. For more information, see Task CPU and memory. Platform versions AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each"} +{"global_id": 140, "doc_id": "fargate", "chunk_id": "2", "question_id": 1, "question": "What happens when a security issue is found that affects an existing platform version?", "answer_span": "If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision.", "chunk": "to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS. Capacity providers 168 Amazon Elastic Container Service Developer Guide For more information, see Fargate platform versions for Amazon ECS. Service load balancing Your Amazon ECS service on AWS Fargate can optionally be configured to use Elastic Load Balancing to distribute traffic evenly across the tasks in your service. 
Amazon ECS services on AWS Fargate support the Application Load Balancer and Network Load Balancer load balancer types. Application Load Balancers are used to route HTTP/HTTPS (or layer 7) traffic. Network Load Balancers are used to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network"} +{"global_id": 141, "doc_id": "fargate", "chunk_id": "2", "question_id": 2, "question": "How can you ensure that tasks are always started on secure and patched infrastructure?", "answer_span": "A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure.", "chunk": "to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. 
A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS. Capacity providers 168 Amazon Elastic Container Service Developer Guide For more information, see Fargate platform versions for Amazon ECS. Service load balancing Your Amazon ECS service on AWS Fargate can optionally be configured to use Elastic Load Balancing to distribute traffic evenly across the tasks in your service. Amazon ECS services on AWS Fargate support the Application Load Balancer and Network Load Balancer load balancer types. Application Load Balancers are used to route HTTP/HTTPS (or layer 7) traffic. Network Load Balancers are used to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network"} +{"global_id": 142, "doc_id": "fargate", "chunk_id": "2", "question_id": 3, "question": "What types of load balancers are supported by Amazon ECS services on AWS Fargate?", "answer_span": "Amazon ECS services on AWS Fargate support the Application Load Balancer and Network Load Balancer load balancer types.", "chunk": "to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS. Capacity providers 168 Amazon Elastic Container Service Developer Guide For more information, see Fargate platform versions for Amazon ECS. Service load balancing Your Amazon ECS service on AWS Fargate can optionally be configured to use Elastic Load Balancing to distribute traffic evenly across the tasks in your service. Amazon ECS services on AWS Fargate support the Application Load Balancer and Network Load Balancer load balancer types. Application Load Balancers are used to route HTTP/HTTPS (or layer 7) traffic. Network Load Balancers are used to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. "} +{"global_id": 143, "doc_id": "fargate", "chunk_id": "2", "question_id": 4, "question": "What must you choose as the target type when creating a target group for these services?", "answer_span": "When you create a target group for these services, you must choose ip as the target type, not instance.", "chunk": "to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. 
This is because tasks that use the awsvpc network"} +{"global_id": 144, "doc_id": "fargate", "chunk_id": "3", "question_id": 1, "question": "What types of traffic can be routed using the described method?", "answer_span": "to route TCP or UDP (or layer 4) traffic.", "chunk": "to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information, see Use load balancing to distribute Amazon ECS service traffic. Using a Network Load Balancer to route UDP traffic to your Amazon ECS on AWS Fargate tasks is only supported when using platform version 1.4 or later. Usage metrics You can use CloudWatch usage metrics to provide visibility into your account's usage of resources. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards. AWS Fargate usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information about AWS Fargate service quotas, see Amazon ECS endpoints and quotas in the Amazon Web Services General Reference. For more information about AWS Fargate usage metrics, see AWS Fargate usage metrics. Amazon ECS security considerations for when to use the Fargate launch type We recommend that customers looking for strong isolation for their tasks use Fargate. Fargate runs each task in a hardware virtualization environment. This ensures that these containerized workloads do not share network interfaces, Fargate ephemeral storage, CPU, or memory with other tasks. For more information, see Security Overview of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt"} +{"global_id": 145, "doc_id": "fargate", "chunk_id": "3", "question_id": 2, "question": "What must you choose as the target type when creating a target group for certain services?", "answer_span": "you must choose ip as the target type, not instance.", "chunk": "to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information, see Use load balancing to distribute Amazon ECS service traffic. Using a Network Load Balancer to route UDP traffic to your Amazon ECS on AWS Fargate tasks is only supported when using platform version 1.4 or later. Usage metrics You can use CloudWatch usage metrics to provide visibility into your account's usage of resources. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards. AWS Fargate usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information about AWS Fargate service quotas, see Amazon ECS endpoints and quotas in the Amazon Web Services General Reference. 
For more information about AWS Fargate usage metrics, see AWS Fargate usage metrics. Amazon ECS security considerations for when to use the Fargate launch type We recommend that customers looking for strong isolation for their tasks use Fargate. Fargate runs each task in a hardware virtualization environment. This ensures that these containerized workloads do not share network interfaces, Fargate ephemeral storage, CPU, or memory with other tasks. For more information, see Security Overview of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt"} +{"global_id": 146, "doc_id": "fargate", "chunk_id": "3", "question_id": 3, "question": "When is using a Network Load Balancer to route UDP traffic supported?", "answer_span": "Using a Network Load Balancer to route UDP traffic to your Amazon ECS on AWS Fargate tasks is only supported when using platform version 1.4 or later.", "chunk": "to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information, see Use load balancing to distribute Amazon ECS service traffic. Using a Network Load Balancer to route UDP traffic to your Amazon ECS on AWS Fargate tasks is only supported when using platform version 1.4 or later. Usage metrics You can use CloudWatch usage metrics to provide visibility into your account's usage of resources. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards. AWS Fargate usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information about AWS Fargate service quotas, see Amazon ECS endpoints and quotas in the Amazon Web Services General Reference. For more information about AWS Fargate usage metrics, see AWS Fargate usage metrics. Amazon ECS security considerations for when to use the Fargate launch type We recommend that customers looking for strong isolation for their tasks use Fargate. Fargate runs each task in a hardware virtualization environment. This ensures that these containerized workloads do not share network interfaces, Fargate ephemeral storage, CPU, or memory with other tasks. For more information, see Security Overview of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt"} +{"global_id": 147, "doc_id": "fargate", "chunk_id": "3", "question_id": 4, "question": "What does AWS Fargate usage metrics correspond to?", "answer_span": "AWS Fargate usage metrics correspond to AWS service quotas.", "chunk": "to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. 
This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information, see Use load balancing to distribute Amazon ECS service traffic. Using a Network Load Balancer to route UDP traffic to your Amazon ECS on AWS Fargate tasks is only supported when using platform version 1.4 or later. Usage metrics You can use CloudWatch usage metrics to provide visibility into your account's usage of resources. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards. AWS Fargate usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information about AWS Fargate service quotas, see Amazon ECS endpoints and quotas in the Amazon Web Services General Reference. For more information about AWS Fargate usage metrics, see AWS Fargate usage metrics. Amazon ECS security considerations for when to use the Fargate launch type We recommend that customers looking for strong isolation for their tasks use Fargate. Fargate runs each task in a hardware virtualization environment. This ensures that these containerized workloads do not share network interfaces, Fargate ephemeral storage, CPU, or memory with other tasks. For more information, see Security Overview of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt"} +{"global_id": 148, "doc_id": "fargate", "chunk_id": "4", "question_id": 1, "question": "What is recommended for encrypting ephemeral storage for Fargate?", "answer_span": "You should have your ephemeral storage encrypted by either AWS KMS or your own customer managed keys.", "chunk": "of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or your own customer managed keys. For tasks that are hosted on Fargate using platform version 1.4.0 or later, each task receives 20 GiB of ephemeral storage. For more information, see customer managed key (CMK). You can increase the total amount of ephemeral storage, up to a maximum of 200 GiB, by specifying the ephemeralStorage parameter in your task definition. For such tasks that were launched on May 28, 2020 or later, the ephemeral storage is encrypted with an AES-256 encryption algorithm using an encryption key managed by Fargate. For more information, see Storage options for Amazon ECS tasks. Example: Launching a task on Fargate platform version 1.4.0 with ephemeral storage encryption The following command will launch a task on Fargate platform version 1.4. Because this task is launched as part of the cluster, it uses the 20 GiB of ephemeral storage that's automatically encrypted. 
aws ecs run-task --cluster clustername \\ --task-definition taskdefinition:version \\ --count 1 --launch-type \"FARGATE\" \\ --platform-version 1.4.0 \\ --network-configuration \"awsvpcConfiguration={subnets=[subnetid],securityGroups=[securitygroupid]}\" \\ --region region SYS_PTRACE capability for kernel syscall tracing with Fargate The default configuration of Linux capabilities that are added or removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your"} +{"global_id": 149, "doc_id": "fargate", "chunk_id": "4", "question_id": 2, "question": "What is the maximum amount of ephemeral storage that can be specified in a task definition?", "answer_span": "You can increase the total amount of ephemeral storage, up to a maximum of 200 GiB, by specifying the ephemeralStorage parameter in your task definition.", "chunk": "of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or your own customer managed keys. For tasks that are hosted on Fargate using platform version 1.4.0 or later, each task receives 20 GiB of ephemeral storage. For more information, see customer managed key (CMK). You can increase the total amount of ephemeral storage, up to a maximum of 200 GiB, by specifying the ephemeralStorage parameter in your task definition. For such tasks that were launched on May 28, 2020 or later, the ephemeral storage is encrypted with an AES-256 encryption algorithm using an encryption key managed by Fargate. For more information, see Storage options for Amazon ECS tasks. Example: Launching a task on Fargate platform version 1.4.0 with ephemeral storage encryption The following command will launch a task on Fargate platform version 1.4. Because this task is launched as part of the cluster, it uses the 20 GiB of ephemeral storage that's automatically encrypted. aws ecs run-task --cluster clustername \\ --task-definition taskdefinition:version \\ --count 1 --launch-type \"FARGATE\" \\ --platform-version 1.4.0 \\ --network-configuration \"awsvpcConfiguration={subnets=[subnetid],securityGroups=[securitygroupid]}\" \\ --region region SYS_PTRACE capability for kernel syscall tracing with Fargate The default configuration of Linux capabilities that are added or removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your"} +{"global_id": 150, "doc_id": "fargate", "chunk_id": "4", "question_id": 3, "question": "What encryption algorithm is used for ephemeral storage launched on Fargate after May 28, 2020?", "answer_span": "the ephemeral storage is encrypted with an AES-256 encryption algorithm using an encryption key managed by Fargate.", "chunk": "of AWS Fargate. 
Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or your own customer managed keys. For tasks that are hosted on Fargate using platform version 1.4.0 or later, each task receives 20 GiB of ephemeral storage. For more information, see customer managed key (CMK). You can increase the total amount of ephemeral storage, up to a maximum of 200 GiB, by specifying the ephemeralStorage parameter in your task definition. For such tasks that were launched on May 28, 2020 or later, the ephemeral storage is encrypted with an AES-256 encryption algorithm using an encryption key managed by Fargate. For more information, see Storage options for Amazon ECS tasks. Example: Launching a task on Fargate platform version 1.4.0 with ephemeral storage encryption The following command will launch a task on Fargate platform version 1.4. Because this task is launched as part of the cluster, it uses the 20 GiB of ephemeral storage that's automatically encrypted. aws ecs run-task --cluster clustername \\ --task-definition taskdefinition:version \\ --count 1 --launch-type \"FARGATE\" \\ --platform-version 1.4.0 \\ --network-configuration \"awsvpcConfiguration={subnets=[subnetid],securityGroups=[securitygroupid]}\" \\ --region region SYS_PTRACE capability for kernel syscall tracing with Fargate The default configuration of Linux capabilities that are added or removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your"} +{"global_id": 151, "doc_id": "fargate", "chunk_id": "4", "question_id": 4, "question": "Which kernel capability is supported for tasks launched on Fargate?", "answer_span": "Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability.", "chunk": "of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or your own customer managed keys. For tasks that are hosted on Fargate using platform version 1.4.0 or later, each task receives 20 GiB of ephemeral storage. For more information, see customer managed key (CMK). You can increase the total amount of ephemeral storage, up to a maximum of 200 GiB, by specifying the ephemeralStorage parameter in your task definition. For such tasks that were launched on May 28, 2020 or later, the ephemeral storage is encrypted with an AES-256 encryption algorithm using an encryption key managed by Fargate. For more information, see Storage options for Amazon ECS tasks. Example: Launching a task on Fargate platform version 1.4.0 with ephemeral storage encryption The following command will launch a task on Fargate platform version 1.4. 
Because this task is launched as part of the cluster, it uses the 20 GiB of ephemeral storage that's automatically encrypted. aws ecs run-task --cluster clustername \\ --task-definition taskdefinition:version \\ --count 1 --launch-type \"FARGATE\" \\ --platform-version 1.4.0 \\ --network-configuration \"awsvpcConfiguration={subnets=[subnetid],securityGroups=[securitygroupid]}\" \\ --region region SYS_PTRACE capability for kernel syscall tracing with Fargate The default configuration of Linux capabilities that are added or removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your"} +{"global_id": 152, "doc_id": "fargate", "chunk_id": "5", "question_id": 1, "question": "What kernel capability does Fargate support for tasks?", "answer_span": "Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability.", "chunk": "removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on GitHub here. Use Amazon GuardDuty with Fargate Runtime Monitoring Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment. Using machine learning (ML) models, and anomaly and threat detection capabilities, GuardDuty continuously monitors different log sources and runtime activity to identify and prioritize potential security risks and malicious activities in your environment. Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior. Runtime Monitoring uses a lightweight, fully managed GuardDuty security agent that analyzes on-host behavior, such as file access, process execution, and network connections. This covers issues including escalation of privileges, use of exposed credentials, or communication with malicious IP addresses, domains, and the presence of malware on your Amazon EC2 instances and container workloads. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. Fargate security considerations for Amazon ECS Each task has a dedicated infrastructure capacity because Fargate runs each workload on an isolated virtual environment. Workloads that run on Fargate do not share network interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. 
While the application container runs core application code,"} +{"global_id": 153, "doc_id": "fargate", "chunk_id": "5", "question_id": 2, "question": "What service does Amazon GuardDuty provide?", "answer_span": "Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment.", "chunk": "removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on GitHub here. Use Amazon GuardDuty with Fargate Runtime Monitoring Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment. Using machine learning (ML) models, and anomaly and threat detection capabilities, GuardDuty continuously monitors different log sources and runtime activity to identify and prioritize potential security risks and malicious activities in your environment. Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior. Runtime Monitoring uses a lightweight, fully managed GuardDuty security agent that analyzes onhost behavior, such as file access, process execution, and network connections. This covers issues including escalation of privileges, use of exposed credentials, or communication with malicious IP addresses, domains, and the presence of malware on your Amazon EC2 instances and container workloads. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. Fargate security considerations for Amazon ECS Each task has a dedicated infrastructure capacity because Fargate runs each workload on an isolated virtual environment. Workloads that run on Fargate do not share network interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code,"} +{"global_id": 154, "doc_id": "fargate", "chunk_id": "5", "question_id": 3, "question": "What does Runtime Monitoring in GuardDuty do?", "answer_span": "Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior.", "chunk": "removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on GitHub here. Use Amazon GuardDuty with Fargate Runtime Monitoring Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment. 
Using machine learning (ML) models, and anomaly and threat detection capabilities, GuardDuty continuously monitors different log sources and runtime activity to identify and prioritize potential security risks and malicious activities in your environment. Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior. Runtime Monitoring uses a lightweight, fully managed GuardDuty security agent that analyzes on-host behavior, such as file access, process execution, and network connections. This covers issues including escalation of privileges, use of exposed credentials, or communication with malicious IP addresses, domains, and the presence of malware on your Amazon EC2 instances and container workloads. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. Fargate security considerations for Amazon ECS Each task has a dedicated infrastructure capacity because Fargate runs each workload on an isolated virtual environment. Workloads that run on Fargate do not share network interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code,"} +{"global_id": 154, "doc_id": "fargate", "chunk_id": "5", "question_id": 3, "question": "What does Runtime Monitoring in GuardDuty do?", "answer_span": "Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior.", "chunk": "removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on GitHub here. Use Amazon GuardDuty with Fargate Runtime Monitoring Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment. Using machine learning (ML) models, and anomaly and threat detection capabilities, GuardDuty continuously monitors different log sources and runtime activity to identify and prioritize potential security risks and malicious activities in your environment. Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior. Runtime Monitoring uses a lightweight, fully managed GuardDuty security agent that analyzes on-host behavior, such as file access, process execution, and network connections. This covers issues including escalation of privileges, use of exposed credentials, or communication with malicious IP addresses, domains, and the presence of malware on your Amazon EC2 instances and container workloads. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. Fargate security considerations for Amazon ECS Each task has a dedicated infrastructure capacity because Fargate runs each workload on an isolated virtual environment. Workloads that run on Fargate do not share network interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code,"} +{"global_id": 155, "doc_id": "fargate", "chunk_id": "5", "question_id": 4, "question": "How does Fargate ensure isolation for workloads?", "answer_span": "Each task has a dedicated infrastructure capacity because Fargate runs each workload on an isolated virtual environment.", "chunk": "removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on GitHub here. Use Amazon GuardDuty with Fargate Runtime Monitoring Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment. Using machine learning (ML) models, and anomaly and threat detection capabilities, GuardDuty continuously monitors different log sources and runtime activity to identify and prioritize potential security risks and malicious activities in your environment. Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior. Runtime Monitoring uses a lightweight, fully managed GuardDuty security agent that analyzes on-host behavior, such as file access, process execution, and network connections. This covers issues including escalation of privileges, use of exposed credentials, or communication with malicious IP addresses, domains, and the presence of malware on your Amazon EC2 instances and container workloads. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. Fargate security considerations for Amazon ECS Each task has a dedicated infrastructure capacity because Fargate runs each workload on an isolated virtual environment. 
Workloads that run on Fargate do not share network interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code,"} +{"global_id": 156, "doc_id": "fargate", "chunk_id": "6", "question_id": 1, "question": "What is a sidecar in the context of Amazon ECS tasks?", "answer_span": "A sidecar is a container that runs alongside an application container in an Amazon ECS task.", "chunk": "interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated containers, making it easier for you to update parts of your application. Containers that are part of the same task share resources for the Fargate launch type because these containers will always run on the same host and share compute resources. These containers also share the ephemeral storage provided by Fargate. Linux containers in a task share network namespaces, including the IP address and network ports. Inside a task, containers that belong to the task can inter-communicate over localhost. The runtime environment in Fargate prevents you from using certain controller features that are supported on EC2 instances. Consider the following when you architect workloads that run on Fargate: Use Amazon GuardDuty with Fargate Runtime Monitoring 171 Amazon Elastic Container Service Developer Guide • No privileged containers or access - Features such as privileged containers or access are currently unavailable on Fargate. This will affect use cases such as running Docker in Docker. • Limited access to Linux capabilities - The environment in which containers run on Fargate is locked down. Additional Linux capabilities, such as CAP_SYS_ADMIN and CAP_NET_ADMIN, are restricted to prevent a privilege escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS"} +{"global_id": 157, "doc_id": "fargate", "chunk_id": "6", "question_id": 2, "question": "How do sidecars benefit application functions?", "answer_span": "Sidecars help you segregate application functions into dedicated containers, making it easier for you to update parts of your application.", "chunk": "interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated containers, making it easier for you to update parts of your application. 
Containers that are part of the same task share resources for the Fargate launch type because these containers will always run on the same host and share compute resources. These containers also share the ephemeral storage provided by Fargate. Linux containers in a task share network namespaces, including the IP address and network ports. Inside a task, containers that belong to the task can inter-communicate over localhost. The runtime environment in Fargate prevents you from using certain controller features that are supported on EC2 instances. Consider the following when you architect workloads that run on Fargate: Use Amazon GuardDuty with Fargate Runtime Monitoring 171 Amazon Elastic Container Service Developer Guide • No privileged containers or access - Features such as privileged containers or access are currently unavailable on Fargate. This will affect use cases such as running Docker in Docker. • Limited access to Linux capabilities - The environment in which containers run on Fargate is locked down. Additional Linux capabilities, such as CAP_SYS_ADMIN and CAP_NET_ADMIN, are restricted to prevent a privilege escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS"} +{"global_id": 158, "doc_id": "fargate", "chunk_id": "6", "question_id": 3, "question": "What resources do containers in the same task share when using the Fargate launch type?", "answer_span": "These containers will always run on the same host and share compute resources.", "chunk": "interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated containers, making it easier for you to update parts of your application. Containers that are part of the same task share resources for the Fargate launch type because these containers will always run on the same host and share compute resources. These containers also share the ephemeral storage provided by Fargate. Linux containers in a task share network namespaces, including the IP address and network ports. Inside a task, containers that belong to the task can inter-communicate over localhost. The runtime environment in Fargate prevents you from using certain controller features that are supported on EC2 instances. Consider the following when you architect workloads that run on Fargate: Use Amazon GuardDuty with Fargate Runtime Monitoring 171 Amazon Elastic Container Service Developer Guide • No privileged containers or access - Features such as privileged containers or access are currently unavailable on Fargate. This will affect use cases such as running Docker in Docker. • Limited access to Linux capabilities - The environment in which containers run on Fargate is locked down. Additional Linux capabilities, such as CAP_SYS_ADMIN and CAP_NET_ADMIN, are restricted to prevent a privilege escalation. 
Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS"} +{"global_id": 159, "doc_id": "fargate", "chunk_id": "6", "question_id": 4, "question": "What is a limitation of the Fargate runtime environment regarding Linux capabilities?", "answer_span": "The environment in which containers run on Fargate is locked down.", "chunk": "interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated containers, making it easier for you to update parts of your application. Containers that are part of the same task share resources for the Fargate launch type because these containers will always run on the same host and share compute resources. These containers also share the ephemeral storage provided by Fargate. Linux containers in a task share network namespaces, including the IP address and network ports. Inside a task, containers that belong to the task can inter-communicate over localhost. The runtime environment in Fargate prevents you from using certain controller features that are supported on EC2 instances. Consider the following when you architect workloads that run on Fargate: Use Amazon GuardDuty with Fargate Runtime Monitoring 171 Amazon Elastic Container Service Developer Guide • No privileged containers or access - Features such as privileged containers or access are currently unavailable on Fargate. This will affect use cases such as running Docker in Docker. • Limited access to Linux capabilities - The environment in which containers run on Fargate is locked down. Additional Linux capabilities, such as CAP_SYS_ADMIN and CAP_NET_ADMIN, are restricted to prevent a privilege escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS"} +{"global_id": 160, "doc_id": "fargate", "chunk_id": "7", "question_id": 1, "question": "What capability does Fargate support adding to tasks for observability and security tools?", "answer_span": "Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application.", "chunk": "escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. You can use ECS exec to help collect diagnostic information for debugging. 
Fargate also prevents containers from accessing the underlying host’s resources, such as the file system, devices, networking, and container runtime. • Networking - You can use security groups and network ACLs to control inbound and outbound traffic. Fargate tasks receive an IP address from the configured subnet in your VPC. Fargate platform versions for Amazon ECS AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision"} +{"global_id": 161, "doc_id": "fargate", "chunk_id": "7", "question_id": 2, "question": "Can customers connect to the underlying host running their workloads on Fargate?", "answer_span": "Neither customers nor AWS operators can connect to a host running customer workloads.", "chunk": "escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. You can use ECS exec to help collect diagnostic information for debugging. Fargate also prevents containers from accessing the underlying host’s resources, such as the file system, devices, networking, and container runtime. • Networking - You can use security groups and network ACLs to control inbound and outbound traffic. Fargate tasks receive an IP address from the configured subnet in your VPC. Fargate platform versions for Amazon ECS AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. 
If a security issue is found that affects an existing platform version, AWS creates a new patched revision"} +{"global_id": 162, "doc_id": "fargate", "chunk_id": "7", "question_id": 3, "question": "What do Fargate tasks receive from the configured subnet in your VPC?", "answer_span": "Fargate tasks receive an IP address from the configured subnet in your VPC.", "chunk": "escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. You can use ECS exec to help collect diagnostic information for debugging. Fargate also prevents containers from accessing the underlying host’s resources, such as the file system, devices, networking, and container runtime. • Networking - You can use security groups and network ACLs to control inbound and outbound traffic. Fargate tasks receive an IP address from the configured subnet in your VPC. Fargate platform versions for Amazon ECS AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision"} +{"global_id": 163, "doc_id": "fargate", "chunk_id": "7", "question_id": 4, "question": "What happens if a security issue is found that affects an existing platform version?", "answer_span": "If a security issue is found that affects an existing platform version, AWS creates a new patched revision.", "chunk": "escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. You can use ECS exec to help collect diagnostic information for debugging. Fargate also prevents containers from accessing the underlying host’s resources, such as the file system, devices, networking, and container runtime. • Networking - You can use security groups and network ACLs to control inbound and outbound traffic. Fargate tasks receive an IP address from the configured subnet in your VPC. Fargate platform versions for Amazon ECS AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. 
It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision"} +{"global_id": 164, "doc_id": "fargate", "chunk_id": "8", "question_id": 1, "question": "What ensures that tasks on Fargate are started on secure infrastructure?", "answer_span": "A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure.", "chunk": "start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . You specify the platform version when you run a task, or deploy a service. Fargate platform versions 172 Amazon Elastic Container Service Developer Guide Consider the following when specifying a platform version: • You can specify a a specific version number, for example 1.4.0, or LATEST. The LATEST Linux platform version is 1.4.0. The LATEST Windows platform version is 1.0.0. • If you want to update the platform version for a service, create a deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. To change the service to run tasks on the Linux platform version 1.4.0, you update your service and specify a new platform version. Your tasks are redeployed with the latest platform version and the latest platform version revision. For more information about deployments, see Amazon ECS services. • If your service is scaled up without updating the platform version, those tasks receive the platform version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform"} +{"global_id": 165, "doc_id": "fargate", "chunk_id": "8", "question_id": 2, "question": "What happens if a security issue is found in an existing platform version?", "answer_span": "If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision.", "chunk": "start a new task. 
A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS. You specify the platform version when you run a task, or deploy a service. Consider the following when specifying a platform version: • You can specify a specific version number, for example 1.4.0, or LATEST. The LATEST Linux platform version is 1.4.0. The LATEST Windows platform version is 1.0.0. • If you want to update the platform version for a service, create a deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. To change the service to run tasks on the Linux platform version 1.4.0, you update your service and specify a new platform version. Your tasks are redeployed with the latest platform version and the latest platform version revision. For more information about deployments, see Amazon ECS services. • If your service is scaled up without updating the platform version, those tasks receive the platform version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform"} +{"global_id": 166, "doc_id": "fargate", "chunk_id": "8", "question_id": 3, "question": "How can you specify the platform version when running a task?", "answer_span": "You specify the platform version when you run a task, or deploy a service.", "chunk": "start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS. You specify the platform version when you run a task, or deploy a service. Consider the following when specifying a platform version: • You can specify a specific version number, for example 1.4.0, or LATEST. The LATEST Linux platform version is 1.4.0. The LATEST Windows platform version is 1.0.0. • If you want to update the platform version for a service, create a deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. To change the service to run tasks on the Linux platform version 1.4.0, you update your service and specify a new platform version. Your tasks are redeployed with the latest platform version and the latest platform version revision. For more information about deployments, see Amazon ECS services.
• If your service is scaled up without updating the platform version, those tasks receive the platform version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform"} +{"global_id": 167, "doc_id": "fargate", "chunk_id": "8", "question_id": 4, "question": "What should you do to update the platform version for a service?", "answer_span": "If you want to update the platform version for a service, create a deployment.", "chunk": "start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS. You specify the platform version when you run a task, or deploy a service. Consider the following when specifying a platform version: • You can specify a specific version number, for example 1.4.0, or LATEST. The LATEST Linux platform version is 1.4.0. The LATEST Windows platform version is 1.0.0. • If you want to update the platform version for a service, create a deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. To change the service to run tasks on the Linux platform version 1.4.0, you update your service and specify a new platform version. Your tasks are redeployed with the latest platform version and the latest platform version revision. For more information about deployments, see Amazon ECS services. • If your service is scaled up without updating the platform version, those tasks receive the platform version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform"} +{"global_id": 168, "doc_id": "fargate", "chunk_id": "9", "question_id": 1, "question": "What happens when you increase the desired count of a service on the Linux platform version 1.3.0?", "answer_span": "If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform version 1.3.0.", "chunk": "version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform version 1.3.0. • New tasks always run on the latest revision of a platform version. This ensures tasks are always on secured and patched infrastructure. • The platform version numbers for Linux containers and Windows containers on Fargate are independent.
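These chunks describe passing a platform version when running a task or deploying a service. A minimal boto3 sketch of both paths, with illustrative cluster, service, task definition, and subnet identifiers:

```python
import boto3

ecs = boto3.client("ecs")

# Pin a standalone Fargate task to platform version 1.4.0 (or pass "LATEST").
ecs.run_task(
    cluster="my-cluster",            # hypothetical names/IDs throughout
    launchType="FARGATE",
    platformVersion="1.4.0",
    taskDefinition="my-task:1",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
)

# Move an existing service to 1.4.0 by creating a new deployment.
ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    platformVersion="1.4.0",
    forceNewDeployment=True,
)
```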
For example, the behavior, features, and software used in platform version 1.0.0 for Windows containers on Fargate aren't comparable to those of platform version 1.0.0 for Linux containers on Fargate. • The following applies to Fargate Windows platform versions. Microsoft Windows Server container images must be created from a specific version of Windows Server. You must select the same version of Windows Server in the platformFamily when you run a task or create a service that matches the Windows Server container image. Additionally, you can provide a matching operatingSystemFamily in the task definition to prevent tasks from being run on the wrong Windows version. For more information, see Matching container host version with container image versions on the Microsoft Learn website. Migrating to Linux platform version 1.4.0 on Amazon ECS Consider the following when migrating your Amazon ECS on Fargate tasks from platform version 1.0.0, 1.1.0, 1.2.0, or 1.3.0 to platform version 1.4.0. It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Amazon ECS on Fargate tasks receive a single elastic network"} +{"global_id": 169, "doc_id": "fargate", "chunk_id": "9", "question_id": 2, "question": "How do new tasks operate in relation to platform versions?", "answer_span": "New tasks always run on the latest revision of a platform version.", "chunk": "version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform version 1.3.0. • New tasks always run on the latest revision of a platform version. This ensures tasks are always on secured and patched infrastructure. • The platform version numbers for Linux containers and Windows containers on Fargate are independent. For example, the behavior, features, and software used in platform version 1.0.0 for Windows containers on Fargate aren't comparable to those of platform version 1.0.0 for Linux containers on Fargate. • The following applies to Fargate Windows platform versions. Microsoft Windows Server container images must be created from a specific version of Windows Server. You must select the same version of Windows Server in the platformFamily when you run a task or create a service that matches the Windows Server container image. Additionally, you can provide a matching operatingSystemFamily in the task definition to prevent tasks from being run on the wrong Windows version. For more information, see Matching container host version with container image versions on the Microsoft Learn website. Migrating to Linux platform version 1.4.0 on Amazon ECS Consider the following when migrating your Amazon ECS on Fargate tasks from platform version 1.0.0, 1.1.0, 1.2.0, or 1.3.0 to platform version 1.4.0. It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks. • The network traffic behavior to and from tasks has been updated.
Starting with platform version 1.4.0, all Amazon ECS on Fargate tasks receive a single elastic network"} +{"global_id": 170, "doc_id": "fargate", "chunk_id": "9", "question_id": 3, "question": "What must be selected when running a task or creating a service for Windows containers?", "answer_span": "You must select the same version of Windows Server in the platformFamily when you run a task or create a service that matches the Windows Server container image.", "chunk": "version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform version 1.3.0. • New tasks always run on the latest revision of a platform version. This ensures tasks are always on secured and patched infrastructure. • The platform version numbers for Linux containers and Windows containers on Fargate are independent. For example, the behavior, features, and software used in platform version 1.0.0 for Windows containers on Fargate aren't comparable to those of platform version 1.0.0 for Linux containers on Fargate. • The following applies to Fargate Windows platform versions. Microsoft Windows Server container images must be created from a specific version of Windows Server. You must select the same version of Windows Server in the platformFamily when you run a task or create a service that matches the Windows Server container image. Additionally, you can provide a matching operatingSystemFamily in the task definition to prevent tasks from being run on the wrong Windows version. For more information, see Matching container host version with container image versions on the Microsoft Learn website. Migrating to Linux platform version 1.4.0 on Amazon ECS Consider the following when migrating your Amazon ECS on Fargate tasks from platform version 1.0.0, 1.1.0, 1.2.0, or 1.3.0 to platform version 1.4.0. It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Amazon ECS on Fargate tasks receive a single elastic network"} +{"global_id": 171, "doc_id": "fargate", "chunk_id": "9", "question_id": 4, "question": "What is recommended before migrating tasks to platform version 1.4.0?", "answer_span": "It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks.", "chunk": "version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform version 1.3.0. • New tasks always run on the latest revision of a platform version. This ensures tasks are always on secured and patched infrastructure. • The platform version numbers for Linux containers and Windows containers on Fargate are independent. For example, the behavior, features, and software used in platform version 1.0.0 for Windows containers on Fargate aren't comparable to those of platform version 1.0.0 for Linux containers on Fargate. • The following applies to Fargate Windows platform versions.
Microsoft Windows Server container images must be created from a specific version of Windows Server. You must select the same version of Windows Server in the platformFamily when you run a task or create a service that matches the Windows Server container image. Additionally, you can provide a matching operatingSystemFamily in the task definition to prevent tasks from being run on the wrong Windows version. For more information, see Matching container host version with container image versions on the Microsoft Learn website. Migrating to Linux platform version 1.4.0 on Amazon ECS Consider the following when migrating your Amazon ECS on Fargate tasks from platform version 1.0.0, 1.1.0, 1.2.0, or 1.3.0 to platform version 1.4.0. It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Amazon ECS on Fargate tasks receive a single elastic network"} +{"global_id": 172, "doc_id": "fargate", "chunk_id": "10", "question_id": 1, "question": "What is the best practice before migrating tasks?", "answer_span": "It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks.", "chunk": "platform version 1.4.0. It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Amazon ECS on Fargate tasks receive a single elastic network interface (referred to as the task ENI) and all network traffic flows through that ENI within your VPC. The traffic is visible to you through your VPC flow logs. For more information see Amazon ECS task networking options for the Fargate launch type. • If you use interface VPC endpoints, consider the following. • For container images hosted with Amazon ECR, you need the following endpoints. For more information, see Amazon ECR interface VPC endpoints (AWS PrivateLink) in the Amazon Elastic Container Registry User Guide. • com.amazonaws.region.ecr.dkr Amazon ECR VPC endpoint • com.amazonaws.region.ecr.api Amazon ECR VPC endpoint • Amazon S3 gateway endpoint • When your task definition references Secrets Manager secrets to retrieve sensitive data for your containers, you must create the interface VPC endpoints for Secrets Manager. For more information, see Using Secrets Manager with VPC Endpoints in the AWS Secrets Manager User Guide. • When your task definition references Systems Manager Parameter Store parameters to retrieve sensitive data for your containers, you must create the interface VPC endpoints for Systems Manager. For more information, see Improve the security of EC2 instances by using VPC endpoints for Systems Manager in the AWS Systems Manager User Guide. • The security group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints. Fargate Linux platform version change log The following are the available Linux platform versions.
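The Windows-version matching these chunks describe is expressed at task definition registration. A minimal sketch, assuming the runtimePlatform parameter of register_task_definition is the mechanism meant by the operatingSystemFamily guidance, with an illustrative family name and image:

```python
import boto3

ecs = boto3.client("ecs")

# Pin the task to the Windows Server version the container image was built
# from, so the scheduler never places it on a mismatched host OS.
ecs.register_task_definition(
    family="windows-app",  # hypothetical family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="2048",
    runtimePlatform={
        "operatingSystemFamily": "WINDOWS_SERVER_2019_CORE",
        "cpuArchitecture": "X86_64",
    },
    containerDefinitions=[
        {
            "name": "app",
            "image": "mcr.microsoft.com/windows/servercore/iis",
            "essential": True,
        }
    ],
)
```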
For information about platform version deprecation, see AWS Fargate Linux platform version deprecation. 1.4.0 The following is the changelog for platform version 1.4.0. • Beginning on November 5, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to use the following features: • When using Secrets Manager to store sensitive data, you can inject a specific JSON key or a specific version of a secret as an environment variable or in a log configuration. For more information, see Pass sensitive data to an Amazon ECS container. • Specify environment variables in bulk using the environmentFiles container definition parameter. For more information, see Pass an individual environment variable to an Amazon ECS container. • Tasks run in a VPC and subnet enabled for IPv6 will be assigned both a private IPv4 address and an IPv6 address. For more information, see Amazon ECS task networking options for the Fargate launch type. • The task metadata endpoint version 4 provides additional metadata about your task and container including the task launch type, the Amazon Resource Name (ARN) of the container, and the log driver and log driver options used. When querying the /stats endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to"} +{"global_id": 173, "doc_id": "fargate", "chunk_id": "10", "question_id": 2, "question": "What has been updated regarding network traffic behavior in platform version 1.4.0?", "answer_span": "The network traffic behavior to and from tasks has been updated.", "chunk": "platform version 1.4.0. It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Amazon ECS on Fargate tasks receive a single elastic network interface (referred to as the task ENI) and all network traffic flows through that ENI within your VPC. The traffic is visible to you through your VPC flow logs. For more information see Amazon ECS task networking options for the Fargate launch type. • If you use interface VPC endpoints, consider the following. • For container images hosted with Amazon ECR, you need the following endpoints. For more information, see Amazon ECR interface VPC endpoints (AWS PrivateLink) in the Amazon Elastic Container Registry User Guide.
• com.amazonaws.region.ecr.dkr Amazon ECR VPC endpoint • com.amazonaws.region.ecr.api Amazon ECR VPC endpoint • Amazon S3 gateway endpoint • When your task definition references Secrets Manager secrets to retrieve sensitive data for your containers, you must create the interface VPC endpoints for Secrets Manager. For more information, see Using Secrets Manager with VPC Endpoints in the AWS Secrets Manager User Guide. • When your task definition references Systems Manager Parameter Store parameters to retrieve sensitive data for your containers, you must create the interface VPC endpoints for Systems Manager. For more information, see Improve the security of EC2 instances by using VPC endpoints for Systems Manager in the AWS Systems Manager User Guide. • The security group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints. Fargate Linux platform version change log The following are the available Linux platform versions. For information about platform version deprecation, see AWS Fargate Linux"} +{"global_id": 175, "doc_id": "fargate", "chunk_id": "10", "question_id": 4, "question": "What is required for the security group associated with the Elastic Network Interface?", "answer_span": "The security group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints.", "chunk": "platform version 1.4.0. It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Amazon ECS on Fargate tasks receive a single elastic network interface (referred to as the task ENI) and all network traffic flows through that ENI within your VPC. The traffic is visible to you through your VPC flow logs. For more information see Amazon ECS task networking options for the Fargate launch type. • If you use interface VPC endpoints, consider the following. • For container images hosted with Amazon ECR, you need the following endpoints. For more information, see Amazon ECR interface VPC endpoints (AWS PrivateLink) in the Amazon Elastic Container Registry User Guide. • com.amazonaws.region.ecr.dkr Amazon ECR VPC endpoint • com.amazonaws.region.ecr.api Amazon ECR VPC endpoint • Amazon S3 gateway endpoint • When your task definition references Secrets Manager secrets to retrieve sensitive data for your containers, you must create the interface VPC endpoints for Secrets Manager. For more information, see Using Secrets Manager with VPC Endpoints in the AWS Secrets Manager User Guide. • When your task definition references Systems Manager Parameter Store parameters to retrieve sensitive data for your containers, you must create the interface VPC endpoints for Systems Manager. For more information, see Improve the security of EC2 instances by using VPC endpoints for Systems Manager in the AWS Systems Manager User Guide. • The security group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints. Fargate Linux platform version change log The following are the available Linux platform versions. 
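The endpoint list in the chunks above maps directly to EC2 API calls. A minimal boto3 sketch that creates the two ECR interface endpoints and the S3 gateway endpoint, with illustrative VPC, subnet, security group, and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoints for the two Amazon ECR services named above.
for service in ("com.amazonaws.us-east-1.ecr.dkr",
                "com.amazonaws.us-east-1.ecr.api"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName=service,
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,
    )

# Gateway endpoint for Amazon S3, attached to the VPC route table.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```

The security group passed to the interface endpoints must allow the traffic between the task ENI and the endpoints, as the last bullet notes.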
For information about platform version deprecation, see AWS Fargate Linux"} +{"global_id": 176, "doc_id": "fargate", "chunk_id": "11", "question_id": 1, "question": "What does the security group rules need to allow?", "answer_span": "the security group rules to allow traffic between the task and the VPC endpoints.", "chunk": "group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints. Fargate Linux platform version change log The following are the available Linux platform versions. For information about platform version deprecation, see AWS Fargate Linux platform version deprecation. Migrating to Linux platform version 1.4.0 174 Amazon Elastic Container Service Developer Guide 1.4.0 The following is the changelog for platform version 1.4.0. • Beginning on November 5, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to use the following features: • When using Secrets Manager to store sensitive data, you can inject a specific JSON key or a specific version of a secret as an environment variable or in a log configuration. For more information, see Pass sensitive data to an Amazon ECS container. • Specify environment variables in bulk using the environmentFiles container definition parameter. For more information, see Pass an individual environment variable to an Amazon ECS container. • Tasks run in a VPC and subnet enabled for IPv6 will be assigned both a private IPv4 address and an IPv6 address. For more information, see Amazon ECS task networking options for the Fargate launch type. • The task metadata endpoint version 4 provides additional metadata about your task and container including the task launch type, the Amazon Resource Name (ARN) of the container, and the log driver and log driver options used. When querying the /stats endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to"} +{"global_id": 177, "doc_id": "fargate", "chunk_id": "11", "question_id": 2, "question": "What is the changelog for platform version 1.4.0 related to?", "answer_span": "The following is the changelog for platform version 1.4.0.", "chunk": "group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints. Fargate Linux platform version change log The following are the available Linux platform versions. For information about platform version deprecation, see AWS Fargate Linux platform version deprecation. Migrating to Linux platform version 1.4.0 174 Amazon Elastic Container Service Developer Guide 1.4.0 The following is the changelog for platform version 1.4.0. • Beginning on November 5, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to use the following features: • When using Secrets Manager to store sensitive data, you can inject a specific JSON key or a specific version of a secret as an environment variable or in a log configuration. For more information, see Pass sensitive data to an Amazon ECS container. • Specify environment variables in bulk using the environmentFiles container definition parameter. 
For more information, see Pass an individual environment variable to an Amazon ECS container. • Tasks run in a VPC and subnet enabled for IPv6 will be assigned both a private IPv4 address and an IPv6 address. For more information, see Amazon ECS task networking options for the Fargate launch type. • The task metadata endpoint version 4 provides additional metadata about your task and container including the task launch type, the Amazon Resource Name (ARN) of the container, and the log driver and log driver options used. When querying the /stats endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to"} +{"global_id": 178, "doc_id": "fargate", "chunk_id": "11", "question_id": 3, "question": "What can you do when using Secrets Manager with platform version 1.4.0?", "answer_span": "you can inject a specific JSON key or a specific version of a secret as an environment variable or in a log configuration.", "chunk": "group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints. Fargate Linux platform version change log The following are the available Linux platform versions. For information about platform version deprecation, see AWS Fargate Linux platform version deprecation. 1.4.0 The following is the changelog for platform version 1.4.0. • Beginning on November 5, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to use the following features: • When using Secrets Manager to store sensitive data, you can inject a specific JSON key or a specific version of a secret as an environment variable or in a log configuration. For more information, see Pass sensitive data to an Amazon ECS container. • Specify environment variables in bulk using the environmentFiles container definition parameter. For more information, see Pass an individual environment variable to an Amazon ECS container. • Tasks run in a VPC and subnet enabled for IPv6 will be assigned both a private IPv4 address and an IPv6 address. For more information, see Amazon ECS task networking options for the Fargate launch type. • The task metadata endpoint version 4 provides additional metadata about your task and container including the task launch type, the Amazon Resource Name (ARN) of the container, and the log driver and log driver options used. When querying the /stats endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to"} +{"global_id": 179, "doc_id": "fargate", "chunk_id": "11", "question_id": 4, "question": "What will tasks run in a VPC and subnet enabled for IPv6 be assigned?", "answer_span": "will be assigned both a private IPv4 address and an IPv6 address.", "chunk": "group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints. Fargate Linux platform version change log The following are the available Linux platform versions.
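The two 1.4.0 features in the chunks above, secret JSON-key injection and bulk environment files, show up as container definition fields. A minimal sketch of such a fragment; all ARNs and names are illustrative:

```python
# Container definition fragment combining both features: a single JSON key
# from a Secrets Manager secret injected as an env var, plus bulk variables
# loaded from an environment file stored in S3.
container_definition = {
    "name": "app",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
    "secrets": [
        {
            # The trailing ":password::" selects the JSON key "password"
            # from the current version of the secret.
            "name": "DB_PASSWORD",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:"
                         "secret:prod/db-AbCdEf:password::",
        }
    ],
    "environmentFiles": [
        {"value": "arn:aws:s3:::my-config-bucket/app.env", "type": "s3"},
    ],
}
```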
For information about platform version deprecation, see AWS Fargate Linux platform version deprecation. 1.4.0 The following is the changelog for platform version 1.4.0. • Beginning on November 5, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to use the following features: • When using Secrets Manager to store sensitive data, you can inject a specific JSON key or a specific version of a secret as an environment variable or in a log configuration. For more information, see Pass sensitive data to an Amazon ECS container. • Specify environment variables in bulk using the environmentFiles container definition parameter. For more information, see Pass an individual environment variable to an Amazon ECS container. • Tasks run in a VPC and subnet enabled for IPv6 will be assigned both a private IPv4 address and an IPv6 address. For more information, see Amazon ECS task networking options for the Fargate launch type. • The task metadata endpoint version 4 provides additional metadata about your task and container including the task launch type, the Amazon Resource Name (ARN) of the container, and the log driver and log driver options used. When querying the /stats endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to"} +{"global_id": 180, "doc_id": "fargate", "chunk_id": "12", "question_id": 1, "question": "What new feature was introduced for Amazon ECS tasks launched on Fargate using platform version 1.4.0 starting July 30, 2020?", "answer_span": "any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to their Amazon ECS on Fargate tasks.", "chunk": "endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to their Amazon ECS on Fargate tasks. For more information, see Use load balancing to distribute Amazon ECS service traffic. • Beginning on May 28, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will have its ephemeral storage encrypted with an AES-256 encryption algorithm using an AWS owned encryption key. For more information, see Fargate task ephemeral storage for Amazon ECS and Storage options for Amazon ECS tasks. • Added support for using Amazon EFS file system volumes for persistent task storage. For more information, see Use Amazon EFS volumes with Amazon ECS. • The ephemeral task storage has been increased to a minimum of 20 GB for each task. For more information, see Fargate task ephemeral storage for Amazon ECS. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Fargate tasks receive a single elastic network interface (referred to as the task ENI) and all network traffic flows through that ENI within your VPC and will be visible to you through your VPC flow logs.
For more information about networking for the Amazon EC2 launch type, see Amazon ECS task networking options for the EC2 launch type. For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type. • Task ENIs add support for jumbo frames. Network interfaces are configured with a maximum transmission"} +{"global_id": 181, "doc_id": "fargate", "chunk_id": "12", "question_id": 2, "question": "What encryption method is used for ephemeral storage in Amazon ECS tasks launched on Fargate starting May 28, 2020?", "answer_span": "any new Amazon ECS task launched on Fargate using platform version 1.4.0 will have its ephemeral storage encrypted with an AES-256 encryption algorithm using an AWS owned encryption key.", "chunk": "endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to their Amazon ECS on Fargate tasks. For more information, see Use load balancing to distribute Amazon ECS service traffic. • Beginning on May 28, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will have its ephemeral storage encrypted with an AES-256 encryption algorithm using an AWS owned encryption key. For more information, see Fargate task ephemeral storage for Amazon ECS and Storage options for Amazon ECS tasks. • Added support for using Amazon EFS file system volumes for persistent task storage. For more information, see Use Amazon EFS volumes with Amazon ECS. • The ephemeral task storage has been increased to a minimum of 20 GB for each task. For more information, see Fargate task ephemeral storage for Amazon ECS. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Fargate tasks receive a single elastic network interface (referred to as the task ENI) and all network traffic flows through that ENI within your VPC and will be visible to you through Linux Platform version change log 175 Amazon Elastic Container Service Developer Guide your VPC flow logs. For more information about networking for the Amazon EC2 launch type, see Amazon ECS task networking options for the EC2 launch type. For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type. • Task ENIs add support for jumbo frames. Network interfaces are configured with a maximum transmission"} +{"global_id": 182, "doc_id": "fargate", "chunk_id": "12", "question_id": 3, "question": "What is the minimum size of ephemeral task storage for Amazon ECS tasks?", "answer_span": "The ephemeral task storage has been increased to a minimum of 20 GB for each task.", "chunk": "endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to their Amazon ECS on Fargate tasks. For more information, see Use load balancing to distribute Amazon ECS service traffic. • Beginning on May 28, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will have its ephemeral storage encrypted with an AES-256 encryption algorithm using an AWS owned encryption key. 
For more information, see Fargate task ephemeral storage for Amazon ECS and Storage options for Amazon ECS tasks. • Added support for using Amazon EFS file system volumes for persistent task storage. For more information, see Use Amazon EFS volumes with Amazon ECS. • The ephemeral task storage has been increased to a minimum of 20 GB for each task. For more information, see Fargate task ephemeral storage for Amazon ECS. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Fargate tasks receive a single elastic network interface (referred to as the task ENI) and all network traffic flows through that ENI within your VPC and will be visible to you through your VPC flow logs. For more information about networking for the Amazon EC2 launch type, see Amazon ECS task networking options for the EC2 launch type. For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type. • Task ENIs add support for jumbo frames.
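The EFS and ephemeral-storage items in the chunks above are both task definition settings. A minimal registration sketch; the file system ID, family name, and image are illustrative:

```python
import boto3

ecs = boto3.client("ecs")

# Persistent storage on an EFS file system plus expanded ephemeral storage
# (the Fargate default/minimum is 20 GiB per the changelog above).
ecs.register_task_definition(
    family="efs-app",  # hypothetical family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    volumes=[
        {
            "name": "data",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-0123456789abcdef0",
                "transitEncryption": "ENABLED",
            },
        }
    ],
    ephemeralStorage={"sizeInGiB": 50},
    containerDefinitions=[
        {
            "name": "app",
            "image": "public.ecr.aws/nginx/nginx:latest",
            "essential": True,
            "mountPoints": [{"sourceVolume": "data", "containerPath": "/data"}],
        }
    ],
)
```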
Network interfaces are configured with a maximum transmission"} +{"global_id": 184, "doc_id": "fargate", "chunk_id": "13", "question_id": 1, "question": "What does the Fargate launch type support for networking?", "answer_span": "For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type.", "chunk": "launch type, see Amazon ECS task networking options for the EC2 launch type. For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type. • Task ENIs add support for jumbo frames. Network interfaces are configured with a maximum transmission unit (MTU), which is the size of the largest payload that fits within a single frame. The larger the MTU, the more application payload can fit within a single frame, which reduces per-frame overhead and increases efficiency. Supporting jumbo frames will reduce overhead when the network path between your task and the destination supports jumbo frames, such as all traffic that remains within your VPC. • CloudWatch Container Insights will include network performance metrics for Fargate tasks. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. • Added support for the task metadata endpoint version 4 which provides additional information for your Fargate tasks, including network stats for the task and which Availability Zone the task is running in. For more information, see Amazon ECS task metadata endpoint version 4 and Amazon ECS task metadata endpoint version 4 for tasks on Fargate. • Added support for the SYS_PTRACE Linux parameter in container definitions. For more information, see Linux parameters. • The Fargate container agent replaces the use of the Amazon ECS container agent for all Fargate tasks. Usually, this change does not have an effect on how your tasks run. • The container runtime is now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime changes from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error"} +{"global_id": 185, "doc_id": "fargate", "chunk_id": "13", "question_id": 2, "question": "What is the benefit of supporting jumbo frames?", "answer_span": "Supporting jumbo frames will reduce overhead when the network path between your task and the destination supports jumbo frames, such as all traffic that remains within your VPC.", "chunk": "launch type, see Amazon ECS task networking options for the EC2 launch type. For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type. • Task ENIs add support for jumbo frames. Network interfaces are configured with a maximum transmission unit (MTU), which is the size of the largest payload that fits within a single frame. The larger the MTU, the more application payload can fit within a single frame, which reduces per-frame overhead and increases efficiency. Supporting jumbo frames will reduce overhead when the network path between your task and the destination supports jumbo frames, such as all traffic that remains within your VPC. • CloudWatch Container Insights will include network performance metrics for Fargate tasks. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. 
• Added support for the task metadata endpoint version 4 which provides additional information for your Fargate tasks, including network stats for the task and which Availability Zone the task is running in. For more information, see Amazon ECS task metadata endpoint version 4 and Amazon ECS task metadata endpoint version 4 for tasks on Fargate. • Added support for the SYS_PTRACE Linux parameter in container definitions. For more information, see Linux parameters. • The Fargate container agent replaces the use of the Amazon ECS container agent for all Fargate tasks. Usually, this change does not have an effect on how your tasks run. • The container runtime is now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime changes from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error"} +{"global_id": 186, "doc_id": "fargate", "chunk_id": "13", "question_id": 3, "question": "What does CloudWatch Container Insights provide for Fargate tasks?", "answer_span": "CloudWatch Container Insights will include network performance metrics for Fargate tasks.", "chunk": "launch type, see Amazon ECS task networking options for the EC2 launch type. For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type. • Task ENIs add support for jumbo frames. Network interfaces are configured with a maximum transmission unit (MTU), which is the size of the largest payload that fits within a single frame. The larger the MTU, the more application payload can fit within a single frame, which reduces per-frame overhead and increases efficiency. Supporting jumbo frames will reduce overhead when the network path between your task and the destination supports jumbo frames, such as all traffic that remains within your VPC. • CloudWatch Container Insights will include network performance metrics for Fargate tasks. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. • Added support for the task metadata endpoint version 4 which provides additional information for your Fargate tasks, including network stats for the task and which Availability Zone the task is running in. For more information, see Amazon ECS task metadata endpoint version 4 and Amazon ECS task metadata endpoint version 4 for tasks on Fargate. • Added support for the SYS_PTRACE Linux parameter in container definitions. For more information, see Linux parameters. • The Fargate container agent replaces the use of the Amazon ECS container agent for all Fargate tasks. Usually, this change does not have an effect on how your tasks run. • The container runtime is now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime changes from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error"} +{"global_id": 187, "doc_id": "fargate", "chunk_id": "13", "question_id": 4, "question": "What has replaced the Amazon ECS container agent for Fargate tasks?", "answer_span": "The Fargate container agent replaces the use of the Amazon ECS container agent for all Fargate tasks.", "chunk": "launch type, see Amazon ECS task networking options for the EC2 launch type. 
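The version 4 metadata endpoint described in this chunk is reachable from inside the container through an environment variable the agent injects. A minimal sketch, assuming the LaunchType and AvailabilityZone response keys mentioned in the chunk:

```python
import json
import os
import urllib.request

# Inside a Fargate container, ECS_CONTAINER_METADATA_URI_V4 points at the
# version 4 task metadata endpoint.
base = os.environ["ECS_CONTAINER_METADATA_URI_V4"]

# Task-level metadata: launch type, Availability Zone, container ARNs, etc.
with urllib.request.urlopen(f"{base}/task") as resp:
    task_meta = json.load(resp)
print(task_meta.get("LaunchType"), task_meta.get("AvailabilityZone"))

# Container stats; on platform version 1.4.0 these include network rates.
with urllib.request.urlopen(f"{base}/stats") as resp:
    stats = json.load(resp)
```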
For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type. • Task ENIs add support for jumbo frames. Network interfaces are configured with a maximum transmission unit (MTU), which is the size of the largest payload that fits within a single frame. The larger the MTU, the more application payload can fit within a single frame, which reduces per-frame overhead and increases efficiency. Supporting jumbo frames will reduce overhead when the network path between your task and the destination supports jumbo frames, such as all traffic that remains within your VPC. • CloudWatch Container Insights will include network performance metrics for Fargate tasks. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. • Added support for the task metadata endpoint version 4 which provides additional information for your Fargate tasks, including network stats for the task and which Availability Zone the task is running in. For more information, see Amazon ECS task metadata endpoint version 4 and Amazon ECS task metadata endpoint version 4 for tasks on Fargate. • Added support for the SYS_PTRACE Linux parameter in container definitions. For more information, see Linux parameters. • The Fargate container agent replaces the use of the Amazon ECS container agent for all Fargate tasks. Usually, this change does not have an effect on how your tasks run. • The container runtime is now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime change from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error"} +{"global_id": 188, "doc_id": "fargate", "chunk_id": "14", "question_id": 1, "question": "What change is mentioned regarding the container runtime?", "answer_span": "now using Containerd instead of Docker.", "chunk": "now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime change from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error messages. • Based on Amazon Linux 2. 1.3.0 The following is the changelog for platform version 1.3.0. • Beginning on Sept 30, 2019, any new Fargate task that is launched supports the awsfirelens log driver. Configure the FireLens for Amazon ECS to use task definition parameters to route logs to an AWS service or AWS Partner Network (APN) destination for log storage and analytics. For more information, see Send Amazon ECS logs to an AWS service or AWS Partner. • Added task recycling for Fargate tasks, which is the process of refreshing tasks that are a part of an Amazon ECS service. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS. • Beginning on March 27, 2019, any new Fargate task that is launched can use additional task definition parameters that you use to define a proxy configuration, dependencies for container startup and shutdown as well as a per-container start and stop timeout value. For more information, see Proxy configuration, Container dependency, and Container timeouts.
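The SYS_PTRACE item in the chunk above is the one extra Linux capability Fargate permits, matching the CAP_SYS_PTRACE discussion earlier in the guide. A minimal container definition fragment; names and image are illustrative:

```python
# Grant only SYS_PTRACE so a tracing/observability sidecar can attach to
# and monitor processes in the task; other capability additions such as
# SYS_ADMIN or NET_ADMIN remain restricted on Fargate.
container_definition = {
    "name": "tracer",  # hypothetical sidecar name
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/tracer:latest",
    "essential": False,
    "linuxParameters": {
        "capabilities": {
            "add": ["SYS_PTRACE"],
        }
    },
}
```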
• Beginning on April 2, 2019, any new Fargate task that is launched supports injecting sensitive data into your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that"} +{"global_id": 189, "doc_id": "fargate", "chunk_id": "14", "question_id": 2, "question": "What does the changelog for platform version 1.3.0 include?", "answer_span": "The following is the changelog for platform version 1.3.0.", "chunk": "now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime change from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error messages. • Based on Amazon Linux 2. 1.3.0 The following is the changelog for platform version 1.3.0. • Beginning on Sept 30, 2019, any new Fargate task that is launched supports the awsfirelens log driver. Configure the FireLens for Amazon ECS to use task definition parameters to route logs to an AWS service or AWS Partner Network (APN) destination for log storage and analytics. For more information, see Send Amazon ECS logs to an AWS service or AWS Partner. Linux Platform version change log 176 Amazon Elastic Container Service Developer Guide • Added task recycling for Fargate tasks, which is the process of refreshing tasks that are a part of an Amazon ECS service. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS. • Beginning on March 27, 2019, any new Fargate task that is launched can use additional task definition parameters that you use to define a proxy configuration, dependencies for container startup and shutdown as well as a per-container start and stop timeout value. For more information, see Proxy configuration, Container dependency, and Container timeouts. • Beginning on April 2, 2019, any new Fargate task that is launched supports injecting sensitive data into your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that"} +{"global_id": 190, "doc_id": "fargate", "chunk_id": "14", "question_id": 3, "question": "What feature supports the awsfirelens log driver?", "answer_span": "any new Fargate task that is launched supports the awsfirelens log driver.", "chunk": "now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime change from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error messages. • Based on Amazon Linux 2. 1.3.0 The following is the changelog for platform version 1.3.0. • Beginning on Sept 30, 2019, any new Fargate task that is launched supports the awsfirelens log driver. Configure the FireLens for Amazon ECS to use task definition parameters to route logs to an AWS service or AWS Partner Network (APN) destination for log storage and analytics. For more information, see Send Amazon ECS logs to an AWS service or AWS Partner.
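The awsfirelens log driver described in this changelog is configured entirely in the task definition: one container acts as the Fluent Bit log router, and the application container points its logConfiguration at it. A sketch, assuming an Amazon Data Firehose delivery stream as the destination; the stream name and region are placeholders:

```python
# Two containers: a Fluent Bit router flagged with firelensConfiguration,
# and an application container using the awsfirelens log driver.
container_definitions = [
    {
        "name": "log-router",
        "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:stable",
        "essential": True,
        "firelensConfiguration": {"type": "fluentbit"},
    },
    {
        "name": "app",
        "image": "nginx:alpine",
        "essential": True,
        "logConfiguration": {
            "logDriver": "awsfirelens",
            # Options are handed to the Fluent Bit output plugin.
            "options": {
                "Name": "firehose",
                "region": "us-east-1",
                "delivery_stream": "my-log-stream",  # placeholder
            },
        },
    },
]
```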
Linux Platform version change log 176 Amazon Elastic Container Service Developer Guide • Added task recycling for Fargate tasks, which is the process of refreshing tasks that are a part of an Amazon ECS service. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS. • Beginning on March 27, 2019, any new Fargate task that is launched can use additional task definition parameters that you use to define a proxy configuration, dependencies for container startup and shutdown as well as a per-container start and stop timeout value. For more information, see Proxy configuration, Container dependency, and Container timeouts. • Beginning on April 2, 2019, any new Fargate task that is launched supports injecting sensitive data into your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that"} +{"global_id": 191, "doc_id": "fargate", "chunk_id": "14", "question_id": 4, "question": "What can new Fargate tasks launched after March 27, 2019, use?", "answer_span": "any new Fargate task that is launched can use additional task definition parameters that you use to define a proxy configuration, dependencies for container startup and shutdown as well as a per-container start and stop timeout value.", "chunk": "now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime change from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error messages. • Based on Amazon Linux 2. 1.3.0 The following is the changelog for platform version 1.3.0. • Beginning on Sept 30, 2019, any new Fargate task that is launched supports the awsfirelens log driver. Configure the FireLens for Amazon ECS to use task definition parameters to route logs to an AWS service or AWS Partner Network (APN) destination for log storage and analytics. For more information, see Send Amazon ECS logs to an AWS service or AWS Partner. Linux Platform version change log 176 Amazon Elastic Container Service Developer Guide • Added task recycling for Fargate tasks, which is the process of refreshing tasks that are a part of an Amazon ECS service. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS. • Beginning on March 27, 2019, any new Fargate task that is launched can use additional task definition parameters that you use to define a proxy configuration, dependencies for container startup and shutdown as well as a per-container start and stop timeout value. For more information, see Proxy configuration, Container dependency, and Container timeouts. • Beginning on April 2, 2019, any new Fargate task that is launched supports injecting sensitive data into your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container.
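The sensitive-data bullet above maps to the secrets field of a container definition. A minimal fragment, with a placeholder Secrets Manager ARN that the task execution role would need permission to read:

```python
# Container definition fragment injecting a Secrets Manager secret as
# the DB_PASSWORD environment variable; name, image, and ARN are
# placeholders.
container_definition = {
    "name": "app",
    "image": "my-app:latest",
    "essential": True,
    "secrets": [
        {
            "name": "DB_PASSWORD",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/db-abc123",
        }
    ],
}
```

A Systems Manager Parameter Store parameter ARN works the same way in valueFrom.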
• Beginning on May 1, 2019, any new Fargate task that"} +{"global_id": 192, "doc_id": "fargate", "chunk_id": "15", "question_id": 1, "question": "What can sensitive data be stored in for containers?", "answer_span": "containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition.", "chunk": "containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports referencing sensitive data in the log configuration of a container using the secretOptions container definition parameter. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports the splunk log driver in addition to the awslogs log driver. For more information, see Storage and logging. • Beginning on July 9, 2019, any new Fargate tasks that is launched supports CloudWatch Container Insights. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. • Beginning on December 3, 2019, the Fargate Spot capacity provider is supported. For more information, see Amazon ECS clusters for Fargate. • Based on Amazon Linux 2. AWS Fargate Linux platform version deprecation This page lists Linux platform versions that AWS Fargate has deprecated or have been scheduled for deprecation. These platform versions remain available until the published deprecation date. A force update date is provided for each platform version scheduled for deprecation. On the force update date, any service using the LATEST platform version that is pointed to a platform version that is scheduled for deprecation will be updated using the force new deployment option. When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. Standalone tasks or services with an explicit platform"} +{"global_id": 193, "doc_id": "fargate", "chunk_id": "15", "question_id": 2, "question": "When did support for referencing sensitive data in the log configuration of a container begin?", "answer_span": "Beginning on May 1, 2019, any new Fargate task that is launched supports referencing sensitive data in the log configuration of a container using the secretOptions container definition parameter.", "chunk": "containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports referencing sensitive data in the log configuration of a container using the secretOptions container definition parameter. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports the splunk log driver in addition to the awslogs log driver. For more information, see Storage and logging. 
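The secretOptions parameter and the splunk driver from this changelog combine naturally: the driver's token can be pulled from Secrets Manager instead of being written into the task definition. A sketch assuming the driver's documented splunk-url and splunk-token options; the URL and ARN are placeholders:

```python
# logConfiguration fragment for the splunk log driver, with the token
# resolved from Secrets Manager at task start via secretOptions.
log_configuration = {
    "logDriver": "splunk",
    "options": {
        "splunk-url": "https://http-inputs.example.splunkcloud.com",  # placeholder
    },
    "secretOptions": [
        {
            "name": "splunk-token",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:splunk-token-abc123",
        }
    ],
}
```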
• Beginning on July 9, 2019, any new Fargate tasks that is launched supports CloudWatch Container Insights. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. • Beginning on December 3, 2019, the Fargate Spot capacity provider is supported. For more information, see Amazon ECS clusters for Fargate. • Based on Amazon Linux 2. AWS Fargate Linux platform version deprecation This page lists Linux platform versions that AWS Fargate has deprecated or have been scheduled for deprecation. These platform versions remain available until the published deprecation date. A force update date is provided for each platform version scheduled for deprecation. On the force update date, any service using the LATEST platform version that is pointed to a platform version that is scheduled for deprecation will be updated using the force new deployment option. When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. Standalone tasks or services with an explicit platform"} +{"global_id": 194, "doc_id": "fargate", "chunk_id": "15", "question_id": 3, "question": "What log driver was added to Fargate tasks on May 1, 2019?", "answer_span": "Beginning on May 1, 2019, any new Fargate task that is launched supports the splunk log driver in addition to the awslogs log driver.", "chunk": "containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports referencing sensitive data in the log configuration of a container using the secretOptions container definition parameter. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports the splunk log driver in addition to the awslogs log driver. For more information, see Storage and logging. • Beginning on July 9, 2019, any new Fargate tasks that is launched supports CloudWatch Container Insights. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. • Beginning on December 3, 2019, the Fargate Spot capacity provider is supported. For more information, see Amazon ECS clusters for Fargate. • Based on Amazon Linux 2. AWS Fargate Linux platform version deprecation This page lists Linux platform versions that AWS Fargate has deprecated or have been scheduled for deprecation. These platform versions remain available until the published deprecation date. A force update date is provided for each platform version scheduled for deprecation. On the force update date, any service using the LATEST platform version that is pointed to a platform version that is scheduled for deprecation will be updated using the force new deployment option. When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. 
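The Fargate Spot capacity provider noted above is attached at the cluster level. A hedged boto3 sketch, with an illustrative cluster name and weighting:

```python
import boto3

ecs = boto3.client("ecs")

# With base=1 and a 1:3 weighting, the first task runs on on-demand
# Fargate and later tasks are split roughly 3:1 toward Spot.
ecs.create_cluster(
    clusterName="demo",  # placeholder
    capacityProviders=["FARGATE", "FARGATE_SPOT"],
    defaultCapacityProviderStrategy=[
        {"capacityProvider": "FARGATE", "base": 1, "weight": 1},
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
)
```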
Standalone tasks or services with an explicit platform"} +{"global_id": 195, "doc_id": "fargate", "chunk_id": "15", "question_id": 4, "question": "What is the significance of December 3, 2019, for Fargate?", "answer_span": "Beginning on December 3, 2019, the Fargate Spot capacity provider is supported.", "chunk": "containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports referencing sensitive data in the log configuration of a container using the secretOptions container definition parameter. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports the splunk log driver in addition to the awslogs log driver. For more information, see Storage and logging. • Beginning on July 9, 2019, any new Fargate tasks that is launched supports CloudWatch Container Insights. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. • Beginning on December 3, 2019, the Fargate Spot capacity provider is supported. For more information, see Amazon ECS clusters for Fargate. • Based on Amazon Linux 2. AWS Fargate Linux platform version deprecation This page lists Linux platform versions that AWS Fargate has deprecated or have been scheduled for deprecation. These platform versions remain available until the published deprecation date. A force update date is provided for each platform version scheduled for deprecation. On the force update date, any service using the LATEST platform version that is pointed to a platform version that is scheduled for deprecation will be updated using the force new deployment option. When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. Standalone tasks or services with an explicit platform"} +{"global_id": 196, "doc_id": "fargate", "chunk_id": "16", "question_id": 1, "question": "What happens to tasks running on a platform version scheduled for deprecation when the service is updated using the force new deployment option?", "answer_span": "all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time.", "chunk": "option. When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. Standalone tasks or services with an explicit platform version set are not affected by the force update date. Linux platform version deprecation 177"} +{"global_id": 197, "doc_id": "fargate", "chunk_id": "16", "question_id": 2, "question": "Are standalone tasks or services with an explicit platform version affected by the force update date?", "answer_span": "Standalone tasks or services with an explicit platform version set are not affected by the force update date.", "chunk": "option. 
When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. Standalone tasks or services with an explicit platform version set are not affected by the force update date. Linux platform version deprecation 177"} +{"global_id": 198, "doc_id": "fargate", "chunk_id": "16", "question_id": 3, "question": "What does the force new deployment option do?", "answer_span": "When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time.", "chunk": "option. When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. Standalone tasks or services with an explicit platform version set are not affected by the force update date. Linux platform version deprecation 177"} +{"global_id": 199, "doc_id": "fargate", "chunk_id": "16", "question_id": 4, "question": "What is the context of the text regarding Linux platform version?", "answer_span": "Linux platform version deprecation 177", "chunk": "option. When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. Standalone tasks or services with an explicit platform version set are not affected by the force update date. Linux platform version deprecation 177"} +{"global_id": 200, "doc_id": "ecs", "chunk_id": "0", "question_id": 1, "question": "What is Amazon Elastic Container Service?", "answer_span": "Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications.", "chunk": "Amazon Elastic Container Service Developer Guide What is Amazon Elastic Container Service? Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. As a fully managed service, Amazon ECS comes with AWS configuration and operational best practices built-in. It's integrated with both AWS tools, such as Amazon Elastic Container Registry, and third-party tools, such as Docker. This integration makes it easier for teams to focus on building the applications, not the environment. You can run and scale your container workloads across AWS Regions in the cloud, and on-premises, without the complexity of managing a control plane. Terminology and components There are three layers in Amazon ECS: • Capacity - The infrastructure where your containers run • Controller - Deploy and manage your applications that run on the containers • Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers The following diagram shows the Amazon ECS layers. Terminology and components 1 Amazon Elastic Container Service Developer Guide The capacity is the infrastructure where your containers run. 
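Both behaviors described here are visible in the UpdateService API: pinning an explicit platform version exempts a service from the force update date, and a forced new deployment is what the LATEST retargeting performs. A sketch with placeholder names:

```python
import boto3

ecs = boto3.client("ecs")

# Pin an explicit platform version (opting out of LATEST retargeting)
# and force a new deployment, which stops all running tasks and
# launches replacements.
ecs.update_service(
    cluster="demo",     # placeholder
    service="web",      # placeholder
    platformVersion="1.4.0",
    forceNewDeployment=True,
)
```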
The following is an overview of the capacity options: • Amazon EC2 instances in the AWS cloud You choose the instance type, the number of instances, and manage the capacity. • Serverless (AWS Fargate) in the AWS cloud Fargate is a serverless, pay-as-you-go compute engine. With Fargate you don't need to manage servers, handle capacity planning, or isolate container workloads for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic"} +{"global_id": 201, "doc_id": "ecs", "chunk_id": "0", "question_id": 2, "question": "What are the three layers in Amazon ECS?", "answer_span": "There are three layers in Amazon ECS: • Capacity - The infrastructure where your containers run • Controller - Deploy and manage your applications that run on the containers • Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers", "chunk": "Amazon Elastic Container Service Developer Guide What is Amazon Elastic Container Service? Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. As a fully managed service, Amazon ECS comes with AWS configuration and operational best practices built-in. It's integrated with both AWS tools, such as Amazon Elastic Container Registry, and third-party tools, such as Docker. This integration makes it easier for teams to focus on building the applications, not the environment. You can run and scale your container workloads across AWS Regions in the cloud, and on-premises, without the complexity of managing a control plane. Terminology and components There are three layers in Amazon ECS: • Capacity - The infrastructure where your containers run • Controller - Deploy and manage your applications that run on the containers • Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers The following diagram shows the Amazon ECS layers. Terminology and components 1 Amazon Elastic Container Service Developer Guide The capacity is the infrastructure where your containers run. The following is an overview of the capacity options: • Amazon EC2 instances in the AWS cloud You choose the instance type, the number of instances, and manage the capacity. • Serverless (AWS Fargate) in the AWS cloud Fargate is a serverless, pay-as-you-go compute engine. With Fargate you don't need to manage servers, handle capacity planning, or isolate container workloads for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic"} +{"global_id": 202, "doc_id": "ecs", "chunk_id": "0", "question_id": 3, "question": "What is AWS Fargate?", "answer_span": "Fargate is a serverless, pay-as-you-go compute engine.", "chunk": "Amazon Elastic Container Service Developer Guide What is Amazon Elastic Container Service? 
Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. As a fully managed service, Amazon ECS comes with AWS configuration and operational best practices built-in. It's integrated with both AWS tools, such as Amazon Elastic Container Registry, and third-party tools, such as Docker. This integration makes it easier for teams to focus on building the applications, not the environment. You can run and scale your container workloads across AWS Regions in the cloud, and on-premises, without the complexity of managing a control plane. Terminology and components There are three layers in Amazon ECS: • Capacity - The infrastructure where your containers run • Controller - Deploy and manage your applications that run on the containers • Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers The following diagram shows the Amazon ECS layers. Terminology and components 1 Amazon Elastic Container Service Developer Guide The capacity is the infrastructure where your containers run. The following is an overview of the capacity options: • Amazon EC2 instances in the AWS cloud You choose the instance type, the number of instances, and manage the capacity. • Serverless (AWS Fargate) in the AWS cloud Fargate is a serverless, pay-as-you-go compute engine. With Fargate you don't need to manage servers, handle capacity planning, or isolate container workloads for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic"} +{"global_id": 203, "doc_id": "ecs", "chunk_id": "0", "question_id": 4, "question": "What does Amazon ECS Anywhere provide support for?", "answer_span": "Amazon ECS Anywhere provides support for registering an external instance such as an on-premises server or virtual machine (VM), to your Amazon ECS cluster.", "chunk": "Amazon Elastic Container Service Developer Guide What is Amazon Elastic Container Service? Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. As a fully managed service, Amazon ECS comes with AWS configuration and operational best practices built-in. It's integrated with both AWS tools, such as Amazon Elastic Container Registry, and third-party tools, such as Docker. This integration makes it easier for teams to focus on building the applications, not the environment. You can run and scale your container workloads across AWS Regions in the cloud, and on-premises, without the complexity of managing a control plane. Terminology and components There are three layers in Amazon ECS: • Capacity - The infrastructure where your containers run • Controller - Deploy and manage your applications that run on the containers • Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers The following diagram shows the Amazon ECS layers. Terminology and components 1 Amazon Elastic Container Service Developer Guide The capacity is the infrastructure where your containers run. 
The following is an overview of the capacity options: • Amazon EC2 instances in the AWS cloud You choose the instance type, the number of instances, and manage the capacity. • Serverless (AWS Fargate) in the AWS cloud Fargate is a serverless, pay-as-you-go compute engine. With Fargate you don't need to manage servers, handle capacity planning, or isolate container workloads for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an on-premises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic"} +{"global_id": 204, "doc_id": "ecs", "chunk_id": "1", "question_id": 1, "question": "What does Amazon ECS Anywhere provide support for?", "answer_span": "Amazon ECS Anywhere provides support for registering an external instance such as an on-premises server or virtual machine (VM), to your Amazon ECS cluster.", "chunk": "for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an on-premises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications.
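The capacity choice surfaces as the launch type when you run a task. A boto3 sketch using Fargate capacity, with placeholder network IDs; "EC2" and "EXTERNAL" (ECS Anywhere) select the other two options:

```python
import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="demo",            # placeholder
    launchType="FARGATE",      # or "EC2", or "EXTERNAL" for ECS Anywhere
    taskDefinition="debug-demo",  # placeholder family name
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],        # placeholder
            "securityGroups": ["sg-0123456789abcdef0"],     # placeholder
            "assignPublicIp": "ENABLED",
        }
    },
)
```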
Terminology and components 2 Amazon Elastic Container Service Developer Guide Features Amazon ECS provides the following high-level features: Task definition The blueprint for the application. Cluster The infrastructure your application runs on. Task An application such as a batch job that performs work, and then stops. Service A long running stateless application. Account Setting Allows access to features. Cluster Auto Scaling Amazon ECS manages the scaling of Amazon EC2 instances that are registered to your cluster. Service Auto Scaling Amazon ECS increases or decreases the desired number of tasks in your service automatically. Provisioning There are multiple options for provisioning Amazon ECS: • AWS Management Console — Provides a web interface that you can use to access your Amazon ECS resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon ECS. It's supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details. These include calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. Features 3 Amazon Elastic Container Service Developer Guide • AWS CDK — Provides an open-source software development framework that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing"} +{"global_id": 206, "doc_id": "ecs", "chunk_id": "1", "question_id": 3, "question": "What is a Task in Amazon ECS terminology?", "answer_span": "Task An application such as a batch job that performs work, and then stops.", "chunk": "for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic Container Service Developer Guide Features Amazon ECS provides the following high-level features: Task definition The blueprint for the application. Cluster The infrastructure your application runs on. Task An application such as a batch job that performs work, and then stops. Service A long running stateless application. Account Setting Allows access to features. Cluster Auto Scaling Amazon ECS manages the scaling of Amazon EC2 instances that are registered to your cluster. Service Auto Scaling Amazon ECS increases or decreases the desired number of tasks in your service automatically. Provisioning There are multiple options for provisioning Amazon ECS: • AWS Management Console — Provides a web interface that you can use to access your Amazon ECS resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon ECS. It's supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details. These include calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. 
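Several of the features listed here, service, desired count, and Service Auto Scaling, can be seen together in one short SDK sketch (all resource names are placeholders):

```python
import boto3

ecs = boto3.client("ecs")
aas = boto3.client("application-autoscaling")

# A service keeps a long-running application at the desired task count.
ecs.create_service(
    cluster="demo",
    serviceName="web",
    taskDefinition="debug-demo",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
        }
    },
)

# Service Auto Scaling: let ECS adjust the desired count between 2 and 10.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/demo/web",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)
```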
Features 3 Amazon Elastic Container Service Developer Guide • AWS CDK — Provides an open-source software development framework that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing"} +{"global_id": 207, "doc_id": "ecs", "chunk_id": "1", "question_id": 4, "question": "What are the options for provisioning Amazon ECS?", "answer_span": "There are multiple options for provisioning Amazon ECS: • AWS Management Console — Provides a web interface that you can use to access your Amazon ECS resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon ECS. It's supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details.", "chunk": "for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic Container Service Developer Guide Features Amazon ECS provides the following high-level features: Task definition The blueprint for the application. Cluster The infrastructure your application runs on. Task An application such as a batch job that performs work, and then stops. Service A long running stateless application. Account Setting Allows access to features. Cluster Auto Scaling Amazon ECS manages the scaling of Amazon EC2 instances that are registered to your cluster. Service Auto Scaling Amazon ECS increases or decreases the desired number of tasks in your service automatically. Provisioning There are multiple options for provisioning Amazon ECS: • AWS Management Console — Provides a web interface that you can use to access your Amazon ECS resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon ECS. It's supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details. These include calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. Features 3 Amazon Elastic Container Service Developer Guide • AWS CDK — Provides an open-source software development framework that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing"} +{"global_id": 208, "doc_id": "ecs", "chunk_id": "2", "question_id": 1, "question": "What does the AWS CDK do?", "answer_span": "The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation.", "chunk": "that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. 
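As a small illustration of the SDK option in this list, the same control-plane calls the console and CLI make are available directly. A minimal boto3 sketch:

```python
import boto3

ecs = boto3.client("ecs")

# List clusters in the account, then describe each one.
cluster_arns = ecs.list_clusters()["clusterArns"]
if cluster_arns:
    for c in ecs.describe_clusters(clusters=cluster_arns)["clusters"]:
        print(c["clusterName"], c["status"], c["runningTasksCount"])
```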
Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing information for Amazon ECS. • AWS Fargate pricing – Pricing information for Fargate. Related services Services to use with Amazon ECS You can use other AWS services to help you deploy your tasks and services on Amazon ECS. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. Amazon CloudWatch Monitor your services and tasks. Amazon Elastic Container Registry Store and manage container images. Elastic Load Balancing Automatically distribute incoming service traffic. Amazon GuardDuty Detect potentially unauthorized or malicious use of your container instances and workloads. Pricing 4 Amazon Elastic Container Service Developer Guide Learn how to create and use Amazon ECS resources The following guides provide an introduction to the tools available to access Amazon ECS and introductory procedures to run containers. Docker basics takes you through the basic steps to create a Docker container image and upload it to an Amazon ECR private repository. The getting started guides walk you through using the AWS Copilot command line interface and the AWS Management Console to complete the common tasks to run your containers on Amazon ECS and AWS Fargate."} +{"global_id": 209, "doc_id": "ecs", "chunk_id": "2", "question_id": 2, "question": "What does Amazon ECS pricing depend on?", "answer_span": "Amazon ECS pricing depends on the capacity option you choose for your containers.", "chunk": "that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing information for Amazon ECS. • AWS Fargate pricing – Pricing information for Fargate. Related services Services to use with Amazon ECS You can use other AWS services to help you deploy your tasks and services on Amazon ECS. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. Amazon CloudWatch Monitor your services and tasks. Amazon Elastic Container Registry Store and manage container images. Elastic Load Balancing Automatically distribute incoming service traffic. Amazon GuardDuty Detect potentially unauthorized or malicious use of your container instances and workloads. Pricing 4 Amazon Elastic Container Service Developer Guide Learn how to create and use Amazon ECS resources The following guides provide an introduction to the tools available to access Amazon ECS and introductory procedures to run containers. Docker basics takes you through the basic steps to create a Docker container image and upload it to an Amazon ECR private repository. The getting started guides walk you through using the AWS Copilot command line interface and the AWS Management Console to complete the common tasks to run your containers on Amazon ECS and AWS Fargate.
Contents • Set up to use Amazon ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows"} +{"global_id": 210, "doc_id": "ecs", "chunk_id": "2", "question_id": 3, "question": "What service helps ensure the correct number of Amazon EC2 instances?", "answer_span": "Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application.", "chunk": "that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing information for Amazon ECS. • AWS Fargate pricing – Pricing information for Fargate. Related services Services to use with Amazon ECS You can use other AWS services to help you deploy your tasks and services on Amazon ECS. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. Amazon CloudWatch Monitor your services and tasks.
Amazon Elastic Container Registry Store and manage container images. Elastic Load Balancing Automatically distribute incoming service traffic. Amazon GuardDuty Detect potentially unauthorized or malicious use of your container instances and workloads. Pricing 4 Amazon Elastic Container Service Developer Guide Learn how to create and use Amazon ECS resources The following guides provide an introduction to the tools available to access Amazon ECS and introductory procedures to run containers. Docker basics takes you through the basic steps to create a Docker container image and upload it to an Amazon ECR private repository. The getting started guides walk you through using the AWS Copilot command line interface and the AWS Management Console to complete the common tasks to run your containers on Amazon ECS and AWS Fargate. Contents • Set up to use Amazon ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows"} +{"global_id": 212, "doc_id": "ecs", "chunk_id": "3", "question_id": 1, "question": "What is the purpose of the AWS Management Console in relation to Amazon ECS?", "answer_span": "The AWS Management Console is a browser-based interface for managing Amazon ECS resources.", "chunk": "ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the EC2 launch type • Creating Amazon ECS resources using the AWS CDK • Creating Amazon ECS resources using the AWS Copilot command line interface Set up to use Amazon ECS If you've already signed up for Amazon Web Services (AWS) and have been using Amazon Elastic Compute Cloud (Amazon EC2), you are close to being able to use Amazon ECS. The set-up process for the two services is similar. The following guide prepares you for launching your first Amazon ECS cluster. Complete the following tasks to get set up for Amazon ECS. AWS Management Console The AWS Management Console is a browser-based interface for managing Amazon ECS resources. The console provides a visual overview of the service, making it easy to explore Amazon ECS features and functions without needing to use additional tools. Many related tutorials and walkthroughs are available that can guide you through use of the console. For a tutorial that guides you through the console, see Learn how to create and use Amazon ECS resources. Set up 5 Amazon Elastic Container Service Developer Guide When starting out, many customers prefer using the console because it provides instant visual feedback on whether the actions they take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. 
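The Docker basics flow described here starts with a private repository. A hedged sketch of creating one with boto3; the repository name is a placeholder, and the build and push steps themselves happen in the Docker CLI:

```python
import boto3

ecr = boto3.client("ecr")

# Create a private repository and print its URI.
repo = ecr.create_repository(repositoryName="hello-world")["repository"]
uri = repo["repositoryUri"]
print(uri)

# The image is then built and pushed with the Docker CLI, roughly:
#   docker build -t hello-world .
#   docker tag hello-world:latest <uri>:latest
#   docker push <uri>:latest
```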
Sign up for an AWS account If you do not have an AWS account, complete the following steps to"} +{"global_id": 213, "doc_id": "ecs", "chunk_id": "3", "question_id": 2, "question": "What do many customers prefer when starting out with Amazon ECS?", "answer_span": "When starting out, many customers prefer using the console because it provides instant visual feedback on whether the actions they take succeed.", "chunk": "ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the EC2 launch type • Creating Amazon ECS resources using the AWS CDK • Creating Amazon ECS resources using the AWS Copilot command line interface Set up to use Amazon ECS If you've already signed up for Amazon Web Services (AWS) and have been using Amazon Elastic Compute Cloud (Amazon EC2), you are close to being able to use Amazon ECS. The set-up process for the two services is similar. The following guide prepares you for launching your first Amazon ECS cluster. Complete the following tasks to get set up for Amazon ECS. AWS Management Console The AWS Management Console is a browser-based interface for managing Amazon ECS resources. The console provides a visual overview of the service, making it easy to explore Amazon ECS features and functions without needing to use additional tools. Many related tutorials and walkthroughs are available that can guide you through use of the console. For a tutorial that guides you through the console, see Learn how to create and use Amazon ECS resources. Set up 5 Amazon Elastic Container Service Developer Guide When starting out, many customers prefer using the console because it provides instant visual feedback on whether the actions they take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. Sign up for an AWS account If you do not have an AWS account, complete the following steps to"} +{"global_id": 214, "doc_id": "ecs", "chunk_id": "3", "question_id": 3, "question": "What is one of the tasks mentioned for getting set up for Amazon ECS?", "answer_span": "Complete the following tasks to get set up for Amazon ECS.", "chunk": "ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the EC2 launch type • Creating Amazon ECS resources using the AWS CDK • Creating Amazon ECS resources using the AWS Copilot command line interface Set up to use Amazon ECS If you've already signed up for Amazon Web Services (AWS) and have been using Amazon Elastic Compute Cloud (Amazon EC2), you are close to being able to use Amazon ECS. The set-up process for the two services is similar. The following guide prepares you for launching your first Amazon ECS cluster. Complete the following tasks to get set up for Amazon ECS. AWS Management Console The AWS Management Console is a browser-based interface for managing Amazon ECS resources. The console provides a visual overview of the service, making it easy to explore Amazon ECS features and functions without needing to use additional tools. 
Many related tutorials and walkthroughs are available that can guide you through use of the console. For a tutorial that guides you through the console, see Learn how to create and use Amazon ECS resources. Set up 5 Amazon Elastic Container Service Developer Guide When starting out, many customers prefer using the console because it provides instant visual feedback on whether the actions they take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. Sign up for an AWS account If you do not have an AWS account, complete the following steps to"} +{"global_id": 215, "doc_id": "ecs", "chunk_id": "3", "question_id": 4, "question": "What can AWS customers manage easily if they are familiar with the AWS Management Console?", "answer_span": "AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances.", "chunk": "ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the EC2 launch type • Creating Amazon ECS resources using the AWS CDK • Creating Amazon ECS resources using the AWS Copilot command line interface Set up to use Amazon ECS If you've already signed up for Amazon Web Services (AWS) and have been using Amazon Elastic Compute Cloud (Amazon EC2), you are close to being able to use Amazon ECS. The set-up process for the two services is similar. The following guide prepares you for launching your first Amazon ECS cluster. Complete the following tasks to get set up for Amazon ECS. AWS Management Console The AWS Management Console is a browser-based interface for managing Amazon ECS resources. The console provides a visual overview of the service, making it easy to explore Amazon ECS features and functions without needing to use additional tools. Many related tutorials and walkthroughs are available that can guide you through use of the console. For a tutorial that guides you through the console, see Learn how to create and use Amazon ECS resources. Set up 5 Amazon Elastic Container Service Developer Guide When starting out, many customers prefer using the console because it provides instant visual feedback on whether the actions they take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. Sign up for an AWS account If you do not have an AWS account, complete the following steps to"} +{"global_id": 216, "doc_id": "ecs", "chunk_id": "4", "question_id": 1, "question": "What can AWS customers manage using the AWS Management Console?", "answer_span": "AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances.", "chunk": "take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. 
Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. Sign up for an AWS account 6 Amazon Elastic Container Service 2. Developer Guide Turn on multi-factor authentication (MFA) for your root user. For instructions,"} +{"global_id": 217, "doc_id": "ecs", "chunk_id": "4", "question_id": 2, "question": "What is created when you sign up for an AWS account?", "answer_span": "When you sign up for an AWS account, an AWS account root user is created.", "chunk": "take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. Sign up for an AWS account 6 Amazon Elastic Container Service 2. Developer Guide Turn on multi-factor authentication (MFA) for your root user. 
For instructions,"} +{"global_id": 218, "doc_id": "ecs", "chunk_id": "4", "question_id": 3, "question": "What is a security best practice mentioned in the text?", "answer_span": "As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.", "chunk": "take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. Sign up for an AWS account 6 Amazon Elastic Container Service 2. Developer Guide Turn on multi-factor authentication (MFA) for your root user. For instructions,"} +{"global_id": 219, "doc_id": "ecs", "chunk_id": "4", "question_id": 4, "question": "What should you do after signing up for an AWS account?", "answer_span": "After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.", "chunk": "take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. 
Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. Sign up for an AWS account 6 Amazon Elastic Container Service 2. Developer Guide Turn on multi-factor authentication (MFA) for your root user. For instructions,"} +{"global_id": 220, "doc_id": "ecs", "chunk_id": "5", "question_id": 1, "question": "What should you do to turn on multi-factor authentication for your root user?", "answer_span": "Turn on multi-factor authentication (MFA) for your root user.", "chunk": "page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. Sign up for an AWS account 6 Amazon Elastic Container Service 2. Developer Guide Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying leastprivilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a user with administrative access 7 Amazon Elastic Container Service Developer Guide Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual"} +{"global_id": 221, "doc_id": "ecs", "chunk_id": "5", "question_id": 2, "question": "Where can you find instructions for enabling IAM Identity Center?", "answer_span": "For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide.", "chunk": "page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. Sign up for an AWS account 6 Amazon Elastic Container Service 2. Developer Guide Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. 
+{"global_id": 221, "doc_id": "ecs", "chunk_id": "5", "question_id": 2, "question": "Where can you find instructions for enabling IAM Identity Center?", "answer_span": "For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide.", "chunk": "page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual"}
+{"global_id": 222, "doc_id": "ecs", "chunk_id": "5", "question_id": 3, "question": "How can you sign in as a user with administrative access?", "answer_span": "To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user.", "chunk": "page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual"}
+{"global_id": 223, "doc_id": "ecs", "chunk_id": "5", "question_id": 4, "question": "What is the first step to assign access to additional users in IAM Identity Center?", "answer_span": "In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions.", "chunk": "page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual"}
+{"global_id": 224, "doc_id": "ecs", "chunk_id": "6", "question_id": 1, "question": "What is the purpose of Amazon Virtual Private Cloud (Amazon VPC)?", "answer_span": "You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual network that you've defined.", "chunk": "to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual network that you've defined. We strongly suggest that you launch your container instances in a VPC. If you have a default VPC, you can skip this section and move to the next task, Create a security group. To determine whether you have a default VPC, see Work with your default VPC and default subnets in the Amazon VPC User Guide. Otherwise, you can create a nondefault VPC in your account using the steps below. For information about how to create a VPC, see Create a VPC in the Amazon VPC User Guide, and use the following table to determine what options to select. Option Value Resources to create VPC only Name Optionally provide a name for your VPC. IPv4 CIDR block IPv4 CIDR manual input The CIDR block must have a size between /16 and /28. IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. Create a security group Security groups act as a firewall for associated container instances, controlling both inbound and outbound traffic at the container instance level. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your"}
+{"global_id": 225, "doc_id": "ecs", "chunk_id": "6", "question_id": 2, "question": "What should you do if you have a default VPC?", "answer_span": "If you have a default VPC, you can skip this section and move to the next task, Create a security group.", "chunk": "to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual network that you've defined. We strongly suggest that you launch your container instances in a VPC. If you have a default VPC, you can skip this section and move to the next task, Create a security group. To determine whether you have a default VPC, see Work with your default VPC and default subnets in the Amazon VPC User Guide. Otherwise, you can create a nondefault VPC in your account using the steps below. For information about how to create a VPC, see Create a VPC in the Amazon VPC User Guide, and use the following table to determine what options to select. Option Value Resources to create VPC only Name Optionally provide a name for your VPC. IPv4 CIDR block IPv4 CIDR manual input The CIDR block must have a size between /16 and /28. IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. Create a security group Security groups act as a firewall for associated container instances, controlling both inbound and outbound traffic at the container instance level. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your"}
+{"global_id": 226, "doc_id": "ecs", "chunk_id": "6", "question_id": 3, "question": "What is the CIDR block size requirement for creating a VPC?", "answer_span": "The CIDR block must have a size between /16 and /28.", "chunk": "to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual network that you've defined. We strongly suggest that you launch your container instances in a VPC. If you have a default VPC, you can skip this section and move to the next task, Create a security group. To determine whether you have a default VPC, see Work with your default VPC and default subnets in the Amazon VPC User Guide. Otherwise, you can create a nondefault VPC in your account using the steps below. For information about how to create a VPC, see Create a VPC in the Amazon VPC User Guide, and use the following table to determine what options to select. Option Value Resources to create VPC only Name Optionally provide a name for your VPC. IPv4 CIDR block IPv4 CIDR manual input The CIDR block must have a size between /16 and /28. IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. Create a security group Security groups act as a firewall for associated container instances, controlling both inbound and outbound traffic at the container instance level. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your"}
+{"global_id": 227, "doc_id": "ecs", "chunk_id": "6", "question_id": 4, "question": "What do security groups control?", "answer_span": "Security groups act as a firewall for associated container instances, controlling both inbound and outbound traffic at the container instance level.", "chunk": "to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual network that you've defined. We strongly suggest that you launch your container instances in a VPC. If you have a default VPC, you can skip this section and move to the next task, Create a security group. To determine whether you have a default VPC, see Work with your default VPC and default subnets in the Amazon VPC User Guide. Otherwise, you can create a nondefault VPC in your account using the steps below. For information about how to create a VPC, see Create a VPC in the Amazon VPC User Guide, and use the following table to determine what options to select. Option Value Resources to create VPC only Name Optionally provide a name for your VPC. IPv4 CIDR block IPv4 CIDR manual input The CIDR block must have a size between /16 and /28. IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. Create a security group Security groups act as a firewall for associated container instances, controlling both inbound and outbound traffic at the container instance level. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your"}
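The VPC options described in the chunks above translate into two EC2 API calls. The following is a minimal, illustrative boto3 sketch (not part of the extracted guide); the region, CIDR block, and Name tag are assumptions you would replace with your own values:

```python
import boto3

# Region, CIDR block, and Name tag below are illustrative assumptions.
ec2 = boto3.client("ec2", region_name="us-east-1")

# "VPC only", a manual IPv4 CIDR between /16 and /28, no IPv6 block, default tenancy.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16", InstanceTenancy="default")
vpc_id = vpc["Vpc"]["VpcId"]

# Optionally provide a name for your VPC (the "Name" option in the table).
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "ecs-vpc"}])
print(vpc_id)
```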
+{"global_id": 228, "doc_id": "ecs", "chunk_id": "7", "question_id": 1, "question": "How can you connect to your container instance?", "answer_span": "connect to your container instance from your IP address using SSH.", "chunk": "connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. Container instances require external network access to communicate with the Amazon ECS service endpoint. If you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Tip You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https://checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you are connecting through an internet service provider (ISP) or from behind a firewall without a static IP address, you must find out the range of IP addresses used by client computers. For information about how to create a security group, see Create a security group for your Amazon EC2 instance in the Amazon EC2 User Guide and use the following table to determine what options to select. Option Value Region The same Region in which you created your key pair. Name A name that is easy for you to remember, such as ecs-instances-default-cluster. VPC The default VPC (marked with \"(default)\"). Note If your account supports Amazon EC2 Classic, select the VPC that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases"}
+{"global_id": 229, "doc_id": "ecs", "chunk_id": "7", "question_id": 2, "question": "What do container instances require to communicate with the Amazon ECS service endpoint?", "answer_span": "Container instances require external network access to communicate with the Amazon ECS service endpoint.", "chunk": "connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. Container instances require external network access to communicate with the Amazon ECS service endpoint. If you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Tip You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https://checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you are connecting through an internet service provider (ISP) or from behind a firewall without a static IP address, you must find out the range of IP addresses used by client computers. For information about how to create a security group, see Create a security group for your Amazon EC2 instance in the Amazon EC2 User Guide and use the following table to determine what options to select. Option Value Region The same Region in which you created your key pair. Name A name that is easy for you to remember, such as ecs-instances-default-cluster. VPC The default VPC (marked with \"(default)\"). Note If your account supports Amazon EC2 Classic, select the VPC that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases"}
+{"global_id": 230, "doc_id": "ecs", "chunk_id": "7", "question_id": 3, "question": "What should you do if you plan to launch container instances in multiple Regions?", "answer_span": "If you plan to launch container instances in multiple Regions, you need to create a security group in each Region.", "chunk": "connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. Container instances require external network access to communicate with the Amazon ECS service endpoint. If you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Tip You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https://checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you are connecting through an internet service provider (ISP) or from behind a firewall without a static IP address, you must find out the range of IP addresses used by client computers. For information about how to create a security group, see Create a security group for your Amazon EC2 instance in the Amazon EC2 User Guide and use the following table to determine what options to select. Option Value Region The same Region in which you created your key pair. Name A name that is easy for you to remember, such as ecs-instances-default-cluster. VPC The default VPC (marked with \"(default)\"). Note If your account supports Amazon EC2 Classic, select the VPC that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases"}
+{"global_id": 231, "doc_id": "ecs", "chunk_id": "7", "question_id": 4, "question": "What is a recommended service to find your public IP address?", "answer_span": "For example, we provide the following service: http://checkip.amazonaws.com/ or https://checkip.amazonaws.com/.", "chunk": "connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. Container instances require external network access to communicate with the Amazon ECS service endpoint. If you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Tip You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https://checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you are connecting through an internet service provider (ISP) or from behind a firewall without a static IP address, you must find out the range of IP addresses used by client computers. For information about how to create a security group, see Create a security group for your Amazon EC2 instance in the Amazon EC2 User Guide and use the following table to determine what options to select. Option Value Region The same Region in which you created your key pair. Name A name that is easy for you to remember, such as ecs-instances-default-cluster. VPC The default VPC (marked with \"(default)\"). Note If your account supports Amazon EC2 Classic, select the VPC that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases"}
+{"global_id": 232, "doc_id": "ecs", "chunk_id": "8", "question_id": 1, "question": "What should you select if your account supports Amazon EC2 Classic?", "answer_span": "If your account supports Amazon EC2 Classic, select the VPC that you created in the previous task.", "chunk": "If your account supports Amazon EC2 Classic, select the VPC that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases in the Amazon EC2 User Guide. Amazon ECS container instances do not require any inbound ports to be open. However, you might want to add an SSH rule so you can log into the container instance and examine the tasks with Docker commands. You can also add rules for HTTP and HTTPS if you want your container instance to host a task that runs a web server. Container instances do require external network access to communicate with the Amazon ECS service endpoint. Complete the following steps to add these optional security group rules. Add the following three inbound rules to your security group. For information about how to create a security group, see Configure security group rules in the Amazon EC2 User Guide. Option Value HTTP rule Type: HTTP Source: Anywhere (0.0.0.0/0) This option automatically adds the 0.0.0.0/0 IPv4 CIDR block as the source. This is acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. HTTPS rule Type: HTTPS Source: Anywhere (0.0.0.0/0) This is acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. SSH rule Type: SSH"}
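The tip in the chunks above names checkip.amazonaws.com as a way to find your public IP address. As a small illustrative sketch, assuming outbound internet access, you can fetch that address and turn it into the /32 CIDR form that the SSH rule below expects:

```python
import urllib.request

# checkip.amazonaws.com returns your public IPv4 address as plain text.
ip = urllib.request.urlopen("https://checkip.amazonaws.com/").read().decode().strip()

# Append the /32 routing prefix to authorize only this one address.
my_cidr = f"{ip}/32"
print(my_cidr)  # e.g. 203.0.113.25/32
```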
+{"global_id": 233, "doc_id": "ecs", "chunk_id": "8", "question_id": 2, "question": "What do Amazon ECS container instances not require?", "answer_span": "Amazon ECS container instances do not require any inbound ports to be open.", "chunk": "If your account supports Amazon EC2 Classic, select the VPC that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases in the Amazon EC2 User Guide. Amazon ECS container instances do not require any inbound ports to be open. However, you might want to add an SSH rule so you can log into the container instance and examine the tasks with Docker commands. You can also add rules for HTTP and HTTPS if you want your container instance to host a task that runs a web server. Container instances do require external network access to communicate with the Amazon ECS service endpoint. Complete the following steps to add these optional security group rules. Add the following three inbound rules to your security group. For information about how to create a security group, see Configure security group rules in the Amazon EC2 User Guide. Option Value HTTP rule Type: HTTP Source: Anywhere (0.0.0.0/0) This option automatically adds the 0.0.0.0/0 IPv4 CIDR block as the source. This is acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. HTTPS rule Type: HTTPS Source: Anywhere (0.0.0.0/0) This is acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. SSH rule Type: SSH"}
+{"global_id": 234, "doc_id": "ecs", "chunk_id": "8", "question_id": 3, "question": "What is acceptable for a short time in a test environment regarding the HTTP rule?", "answer_span": "This option automatically adds the 0.0.0.0/0 IPv4 CIDR block as the source. This is acceptable for a short time in a test environment, but it's unsafe in production environments.", "chunk": "If your account supports Amazon EC2 Classic, select the VPC that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases in the Amazon EC2 User Guide. Amazon ECS container instances do not require any inbound ports to be open. However, you might want to add an SSH rule so you can log into the container instance and examine the tasks with Docker commands. You can also add rules for HTTP and HTTPS if you want your container instance to host a task that runs a web server. Container instances do require external network access to communicate with the Amazon ECS service endpoint. Complete the following steps to add these optional security group rules. Add the following three inbound rules to your security group. For information about how to create a security group, see Configure security group rules in the Amazon EC2 User Guide. Option Value HTTP rule Type: HTTP Source: Anywhere (0.0.0.0/0) This option automatically adds the 0.0.0.0/0 IPv4 CIDR block as the source. This is acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. HTTPS rule Type: HTTPS Source: Anywhere (0.0.0.0/0) This is acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. SSH rule Type: SSH"}
+{"global_id": 235, "doc_id": "ecs", "chunk_id": "8", "question_id": 4, "question": "What should you do in production regarding access to your instance?", "answer_span": "In production, authorize only a specific IP address or range of addresses to access your instance.", "chunk": "If your account supports Amazon EC2 Classic, select the VPC that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases in the Amazon EC2 User Guide. Amazon ECS container instances do not require any inbound ports to be open. However, you might want to add an SSH rule so you can log into the container instance and examine the tasks with Docker commands. You can also add rules for HTTP and HTTPS if you want your container instance to host a task that runs a web server. Container instances do require external network access to communicate with the Amazon ECS service endpoint. Complete the following steps to add these optional security group rules. Add the following three inbound rules to your security group. For information about how to create a security group, see Configure security group rules in the Amazon EC2 User Guide. Option Value HTTP rule Type: HTTP Source: Anywhere (0.0.0.0/0) This option automatically adds the 0.0.0.0/0 IPv4 CIDR block as the source. This is acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. HTTPS rule Type: HTTPS Source: Anywhere (0.0.0.0/0) This is acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. SSH rule Type: SSH"}
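A hedged boto3 sketch of the three inbound rules described in the chunks above (HTTP and HTTPS open to anywhere for a short-lived test, SSH restricted to a single address); the group name, VPC ID, region, and the example 203.0.113.25 address are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Group name and VPC ID are placeholders; use your own values.
sg = ec2.create_security_group(
    GroupName="ecs-instances-default-cluster",
    Description="Inbound rules for ECS container instances",
    VpcId="vpc-0123456789abcdef0",  # hypothetical ID
)
sg_id = sg["GroupId"]

def rule(port, cidr):
    return {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
            "IpRanges": [{"CidrIp": cidr}]}

# HTTP/HTTPS from anywhere (test environments only); SSH limited to one /32.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[rule(80, "0.0.0.0/0"), rule(443, "0.0.0.0/0"),
                   rule(22, "203.0.113.25/32")],
)
```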
+{"global_id": 236, "doc_id": "ecs", "chunk_id": "9", "question_id": 1, "question": "What is recommended for SSH access in production environments?", "answer_span": "In production, authorize only a specific IP address or range of addresses to access your instance.", "chunk": "acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. SSH rule Type: SSH Source: Custom, specify the public IP address of your computer or network in CIDR notation. To specify an individual IP address in CIDR notation, add the routing prefix /32. For example, if your IP address is 203.0.113.25, specify 203.0.113.25/32. If your company allocates addresses from a range, specify the entire range, such as 203.0.113.0/24. Important For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0) to your instance, except for testing purposes and only for a short time. Create the credentials to connect to your EC2 instance For Amazon ECS, a key pair is only needed if you intend to use the EC2 launch type. AWS uses public-key cryptography to secure the login information for your instance. A Linux instance, such as an Amazon ECS container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you launch your container instance, then provide the private key when you log in using SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more information about regions, see Regions and Availability Zones in the Amazon"}
+{"global_id": 237, "doc_id": "ecs", "chunk_id": "9", "question_id": 2, "question": "How do you specify an individual IP address in CIDR notation?", "answer_span": "To specify an individual IP address in CIDR notation, add the routing prefix /32.", "chunk": "acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. SSH rule Type: SSH Source: Custom, specify the public IP address of your computer or network in CIDR notation. To specify an individual IP address in CIDR notation, add the routing prefix /32. For example, if your IP address is 203.0.113.25, specify 203.0.113.25/32. If your company allocates addresses from a range, specify the entire range, such as 203.0.113.0/24. Important For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0) to your instance, except for testing purposes and only for a short time. Create the credentials to connect to your EC2 instance For Amazon ECS, a key pair is only needed if you intend to use the EC2 launch type. AWS uses public-key cryptography to secure the login information for your instance. A Linux instance, such as an Amazon ECS container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you launch your container instance, then provide the private key when you log in using SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more information about regions, see Regions and Availability Zones in the Amazon"}
+{"global_id": 238, "doc_id": "ecs", "chunk_id": "9", "question_id": 3, "question": "What is required to connect to an EC2 instance using SSH?", "answer_span": "You use a key pair to log in to your instance securely.", "chunk": "acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. SSH rule Type: SSH Source: Custom, specify the public IP address of your computer or network in CIDR notation. To specify an individual IP address in CIDR notation, add the routing prefix /32. For example, if your IP address is 203.0.113.25, specify 203.0.113.25/32. If your company allocates addresses from a range, specify the entire range, such as 203.0.113.0/24. Important For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0) to your instance, except for testing purposes and only for a short time. Create the credentials to connect to your EC2 instance For Amazon ECS, a key pair is only needed if you intend to use the EC2 launch type. AWS uses public-key cryptography to secure the login information for your instance. A Linux instance, such as an Amazon ECS container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you launch your container instance, then provide the private key when you log in using SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more information about regions, see Regions and Availability Zones in the Amazon"}
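If you prefer scripting over the Amazon EC2 console, a minimal boto3 sketch of key pair creation follows; the key name, region, and file path are assumptions, and the private key material is returned only once, at creation time:

```python
import os
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # create one key pair per Region used

# Key pair name is an assumption; AWS returns the private key exactly once.
resp = ec2.create_key_pair(KeyName="ecs-key-pair")
pem_path = os.path.expanduser("~/ecs-key-pair.pem")
with open(pem_path, "w") as f:
    f.write(resp["KeyMaterial"])
os.chmod(pem_path, 0o400)  # SSH refuses private keys with loose permissions
```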
+{"global_id": 239, "doc_id": "ecs", "chunk_id": "9", "question_id": 4, "question": "What should you do if you plan to launch instances in multiple regions?", "answer_span": "If you plan to launch instances in multiple regions, you'll need to create a key pair in each region.", "chunk": "acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. SSH rule Type: SSH Source: Custom, specify the public IP address of your computer or network in CIDR notation. To specify an individual IP address in CIDR notation, add the routing prefix /32. For example, if your IP address is 203.0.113.25, specify 203.0.113.25/32. If your company allocates addresses from a range, specify the entire range, such as 203.0.113.0/24. Important For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0) to your instance, except for testing purposes and only for a short time. Create the credentials to connect to your EC2 instance For Amazon ECS, a key pair is only needed if you intend to use the EC2 launch type. AWS uses public-key cryptography to secure the login information for your instance. A Linux instance, such as an Amazon ECS container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you launch your container instance, then provide the private key when you log in using SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more information about regions, see Regions and Availability Zones in the Amazon"}
+{"global_id": 240, "doc_id": "ecs", "chunk_id": "10", "question_id": 1, "question": "What should you do if you plan to launch instances in multiple regions?", "answer_span": "If you plan to launch instances in multiple regions, you'll need to create a key pair in each region.", "chunk": "SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more information about regions, see Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair • Use the Amazon EC2 console to create a key pair. For more information about creating a key pair, see Create a key pair in the Amazon EC2 User Guide. For information about how to connect to your instance, see Connect to your Linux instance in the Amazon EC2 User Guide. Install the AWS CLI The AWS Management Console can be used to manage all operations manually with Amazon ECS. However, you can install the AWS CLI on your local desktop or a developer box so that you can build scripts that can automate common management tasks in Amazon ECS. To use the AWS CLI with Amazon ECS, install the latest AWS CLI version. For information about installing the AWS CLI or upgrading it to the latest version, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. The AWS Command Line Interface (AWS CLI) is a unified tool that you can use to manage your AWS services. With this one tool alone, you can both control multiple AWS services and automate these services through scripts. The Amazon ECS commands in the AWS CLI are a reflection of the Amazon ECS API. The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources. The AWS CLI is also"}
+{"global_id": 241, "doc_id": "ecs", "chunk_id": "10", "question_id": 2, "question": "Where can you find more information about creating a key pair?", "answer_span": "For more information about creating a key pair, see Create a key pair in the Amazon EC2 User Guide.", "chunk": "SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more information about regions, see Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair • Use the Amazon EC2 console to create a key pair. For more information about creating a key pair, see Create a key pair in the Amazon EC2 User Guide. For information about how to connect to your instance, see Connect to your Linux instance in the Amazon EC2 User Guide. Install the AWS CLI The AWS Management Console can be used to manage all operations manually with Amazon ECS. However, you can install the AWS CLI on your local desktop or a developer box so that you can build scripts that can automate common management tasks in Amazon ECS. To use the AWS CLI with Amazon ECS, install the latest AWS CLI version. For information about installing the AWS CLI or upgrading it to the latest version, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. The AWS Command Line Interface (AWS CLI) is a unified tool that you can use to manage your AWS services. With this one tool alone, you can both control multiple AWS services and automate these services through scripts. The Amazon ECS commands in the AWS CLI are a reflection of the Amazon ECS API. The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources. The AWS CLI is also"}
+{"global_id": 242, "doc_id": "ecs", "chunk_id": "10", "question_id": 3, "question": "What can the AWS Management Console be used for?", "answer_span": "The AWS Management Console can be used to manage all operations manually with Amazon ECS.", "chunk": "SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more information about regions, see Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair • Use the Amazon EC2 console to create a key pair. For more information about creating a key pair, see Create a key pair in the Amazon EC2 User Guide. For information about how to connect to your instance, see Connect to your Linux instance in the Amazon EC2 User Guide. Install the AWS CLI The AWS Management Console can be used to manage all operations manually with Amazon ECS. However, you can install the AWS CLI on your local desktop or a developer box so that you can build scripts that can automate common management tasks in Amazon ECS. To use the AWS CLI with Amazon ECS, install the latest AWS CLI version. For information about installing the AWS CLI or upgrading it to the latest version, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. The AWS Command Line Interface (AWS CLI) is a unified tool that you can use to manage your AWS services. With this one tool alone, you can both control multiple AWS services and automate these services through scripts. The Amazon ECS commands in the AWS CLI are a reflection of the Amazon ECS API. The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources. The AWS CLI is also"}
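Once the AWS CLI or an SDK is configured, a one-call smoke test confirms that credentials and the region resolve correctly. A minimal sketch, assuming boto3 is installed and the region below matches your setup:

```python
import boto3

# List the Amazon ECS clusters visible to the configured account and region.
ecs = boto3.client("ecs", region_name="us-east-1")  # region is an assumption
for arn in ecs.list_clusters()["clusterArns"]:
    print(arn)
```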
+{"global_id": 243, "doc_id": "ecs", "chunk_id": "10", "question_id": 4, "question": "What is the AWS CLI suitable for?", "answer_span": "The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources.", "chunk": "SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more information about regions, see Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair • Use the Amazon EC2 console to create a key pair. For more information about creating a key pair, see Create a key pair in the Amazon EC2 User Guide. For information about how to connect to your instance, see Connect to your Linux instance in the Amazon EC2 User Guide. Install the AWS CLI The AWS Management Console can be used to manage all operations manually with Amazon ECS. However, you can install the AWS CLI on your local desktop or a developer box so that you can build scripts that can automate common management tasks in Amazon ECS. To use the AWS CLI with Amazon ECS, install the latest AWS CLI version. For information about installing the AWS CLI or upgrading it to the latest version, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. The AWS Command Line Interface (AWS CLI) is a unified tool that you can use to manage your AWS services. With this one tool alone, you can both control multiple AWS services and automate these services through scripts. The Amazon ECS commands in the AWS CLI are a reflection of the Amazon ECS API. The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources. The AWS CLI is also"}
+{"global_id": 244, "doc_id": "ecs", "chunk_id": "11", "question_id": 1, "question": "What does the AWS CLI reflect?", "answer_span": "AWS CLI are a reflection of the Amazon ECS API.", "chunk": "AWS CLI are a reflection of the Amazon ECS API. The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources. The AWS CLI is also helpful to customers who want to familiarize themselves with the Amazon ECS APIs. Customers can use the AWS CLI to perform a number of operations on Amazon ECS resources, including Create, Read, Update, and Delete operations, directly from the command line interface. Use the AWS CLI if you are or want to become familiar with the Amazon ECS APIs and corresponding CLI commands and want to write automated scripts and perform specific actions on Amazon ECS resources. AWS also provides the command line tools AWS Tools for Windows PowerShell. For more information, see the AWS Tools for Windows PowerShell User Guide. Next steps for using Amazon ECS After installing the AWS CLI, there are many different tools you can utilize as you continue to use Amazon ECS. The following links explain what some of those tools are and give examples of how to use them with Amazon ECS. • Create your first container image with Docker and push it to Amazon ECR for use in your Amazon ECS task definitions. • Learn how to create an Amazon ECS Linux task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. • Define"}
+{"global_id": 245, "doc_id": "ecs", "chunk_id": "11", "question_id": 2, "question": "Who is the AWS CLI suitable for?", "answer_span": "The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources.", "chunk": "AWS CLI are a reflection of the Amazon ECS API. The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources. The AWS CLI is also helpful to customers who want to familiarize themselves with the Amazon ECS APIs. Customers can use the AWS CLI to perform a number of operations on Amazon ECS resources, including Create, Read, Update, and Delete operations, directly from the command line interface. Use the AWS CLI if you are or want to become familiar with the Amazon ECS APIs and corresponding CLI commands and want to write automated scripts and perform specific actions on Amazon ECS resources. AWS also provides the command line tools AWS Tools for Windows PowerShell. For more information, see the AWS Tools for Windows PowerShell User Guide. Next steps for using Amazon ECS After installing the AWS CLI, there are many different tools you can utilize as you continue to use Amazon ECS. The following links explain what some of those tools are and give examples of how to use them with Amazon ECS. • Create your first container image with Docker and push it to Amazon ECR for use in your Amazon ECS task definitions. • Learn how to create an Amazon ECS Linux task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. • Define"}
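To illustrate the Create, Read, Update, and Delete operations mentioned in the chunks above, here is a minimal boto3 sketch against a throwaway cluster; the cluster name and region are assumptions:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # region is an assumption

# Create
ecs.create_cluster(clusterName="demo-cluster")

# Read
desc = ecs.describe_clusters(clusters=["demo-cluster"])
print(desc["clusters"][0]["status"])  # e.g. ACTIVE

# Delete (an empty cluster can be removed directly)
ecs.delete_cluster(cluster="demo-cluster")
```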
Next steps for using Amazon ECS After installing the AWS CLI, there are many different tools you can utilize as you continue to use Amazon ECS. The following links explain what some of those tools are and give examples of how to use them with Amazon ECS. • Create your first container image with Docker and push it to Amazon ECR for use in your Amazon ECS task definitions. • Learn how to create an Amazon ECS Linux task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. • Define"} +{"global_id": 247, "doc_id": "ecs", "chunk_id": "11", "question_id": 4, "question": "What is one of the next steps after installing the AWS CLI?", "answer_span": "After installing the AWS CLI, there are many different tools you can utilize as you continue to use Amazon ECS.", "chunk": "AWS CLI are a reflection of the Amazon ECS API. The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources. The AWS CLI is also helpful to customers who want to familiarize themselves with the Amazon ECS APIs. Customers can use the AWS CLI to perform a number of operations on Amazon ECS resources, including Create, Read, Update, and Delete operations, directly from the command line interface. Install the AWS CLI 13 Amazon Elastic Container Service Developer Guide Use the AWS CLI if you are or want to become familiar with the Amazon ECS APIs and corresponding CLI commands and want to write automated scripts and perform specific actions on Amazon ECS resources. AWS also provides the command line tools AWS Tools for Windows PowerShell. For more information, see the AWS Tools for Windows PowerShell User Guide. Next steps for using Amazon ECS After installing the AWS CLI, there are many different tools you can utilize as you continue to use Amazon ECS. The following links explain what some of those tools are and give examples of how to use them with Amazon ECS. • Create your first container image with Docker and push it to Amazon ECR for use in your Amazon ECS task definitions. • Learn how to create an Amazon ECS Linux task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. • Define"} +{"global_id": 248, "doc_id": "ecs", "chunk_id": "12", "question_id": 1, "question": "What is the purpose of creating an Amazon ECS Windows task?", "answer_span": "create an Amazon ECS Windows task for the Fargate launch type.", "chunk": "create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. • Define and manage all AWS resources in your environment with automated deployment using Using Amazon ECS with AWS CloudFormation. 
• Use the complete Creating Amazon ECS resources using the AWS Copilot command line interface end-to-end developer workflow to create, release, and operate container applications that comply with AWS best practices for infrastructure. Creating a container image for use on Amazon ECS Amazon ECS uses Docker images in task definitions to launch containers. Docker is a technology that provides the tools for you to build, run, test, and deploy distributed applications in containers. Amazon ECS schedules containerized applications on to container instances or on to AWS Fargate. Containerized applications are packaged as container images. This example creates a container image for a web server. You can create your first Docker image, and then push that image to Amazon ECR, which is a container registry, for use in your Amazon ECS task definitions. This walkthrough assumes that you Next steps for using Amazon ECS 14 Amazon Elastic Container Service Developer Guide possess a basic understanding of what Docker is and how it works. For more information about Docker, see What is Docker? and the Docker documentation. Prerequisites Before you begin, ensure the following prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. • Your user has the required IAM permissions to access and use the Amazon ECR service."} +{"global_id": 249, "doc_id": "ecs", "chunk_id": "12", "question_id": 2, "question": "What technology does Amazon ECS use to launch containers?", "answer_span": "Amazon ECS uses Docker images in task definitions to launch containers.", "chunk": "create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. • Define and manage all AWS resources in your environment with automated deployment using Using Amazon ECS with AWS CloudFormation. • Use the complete Creating Amazon ECS resources using the AWS Copilot command line interface end-to-end developer workflow to create, release, and operate container applications that comply with AWS best practices for infrastructure. Creating a container image for use on Amazon ECS Amazon ECS uses Docker images in task definitions to launch containers. Docker is a technology that provides the tools for you to build, run, test, and deploy distributed applications in containers. Amazon ECS schedules containerized applications on to container instances or on to AWS Fargate. Containerized applications are packaged as container images. This example creates a container image for a web server. You can create your first Docker image, and then push that image to Amazon ECR, which is a container registry, for use in your Amazon ECS task definitions. This walkthrough assumes that you Next steps for using Amazon ECS 14 Amazon Elastic Container Service Developer Guide possess a basic understanding of what Docker is and how it works. For more information about Docker, see What is Docker? and the Docker documentation. Prerequisites Before you begin, ensure the following prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. 
• Your user has the required IAM permissions to access and use the Amazon ECR service."} +{"global_id": 250, "doc_id": "ecs", "chunk_id": "12", "question_id": 3, "question": "What must you ensure before beginning to use Amazon ECS?", "answer_span": "ensure the following prerequisites are met.", "chunk": "create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. • Define and manage all AWS resources in your environment with automated deployment using Using Amazon ECS with AWS CloudFormation. • Use the complete Creating Amazon ECS resources using the AWS Copilot command line interface end-to-end developer workflow to create, release, and operate container applications that comply with AWS best practices for infrastructure. Creating a container image for use on Amazon ECS Amazon ECS uses Docker images in task definitions to launch containers. Docker is a technology that provides the tools for you to build, run, test, and deploy distributed applications in containers. Amazon ECS schedules containerized applications on to container instances or on to AWS Fargate. Containerized applications are packaged as container images. This example creates a container image for a web server. You can create your first Docker image, and then push that image to Amazon ECR, which is a container registry, for use in your Amazon ECS task definitions. This walkthrough assumes that you Next steps for using Amazon ECS 14 Amazon Elastic Container Service Developer Guide possess a basic understanding of what Docker is and how it works. For more information about Docker, see What is Docker? and the Docker documentation. Prerequisites Before you begin, ensure the following prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. • Your user has the required IAM permissions to access and use the Amazon ECR service."} +{"global_id": 251, "doc_id": "ecs", "chunk_id": "12", "question_id": 4, "question": "Where can you push your Docker image for use in Amazon ECS task definitions?", "answer_span": "push that image to Amazon ECR, which is a container registry, for use in your Amazon ECS task definitions.", "chunk": "create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. • Define and manage all AWS resources in your environment with automated deployment using Using Amazon ECS with AWS CloudFormation. • Use the complete Creating Amazon ECS resources using the AWS Copilot command line interface end-to-end developer workflow to create, release, and operate container applications that comply with AWS best practices for infrastructure. Creating a container image for use on Amazon ECS Amazon ECS uses Docker images in task definitions to launch containers. Docker is a technology that provides the tools for you to build, run, test, and deploy distributed applications in containers. Amazon ECS schedules containerized applications on to container instances or on to AWS Fargate. 
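One way to satisfy the IAM prerequisite above is to attach the AWS managed policy for Amazon ECR to your user; this is a hedged sketch, the user name my-dev-user is a placeholder, and a narrower custom policy may be more appropriate in practice.

    # Grant full Amazon ECR access via the AWS managed policy (placeholder user name).
    aws iam attach-user-policy \
        --user-name my-dev-user \
        --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess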
Containerized applications are packaged as container images. This example creates a container image for a web server. You can create your first Docker image, and then push that image to Amazon ECR, which is a container registry, for use in your Amazon ECS task definitions. This walkthrough assumes that you Next steps for using Amazon ECS 14 Amazon Elastic Container Service Developer Guide possess a basic understanding of what Docker is and how it works. For more information about Docker, see What is Docker? and the Docker documentation. Prerequisites Before you begin, ensure the following prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. • Your user has the required IAM permissions to access and use the Amazon ECR service."} +{"global_id": 252, "doc_id": "ecs", "chunk_id": "13", "question_id": 1, "question": "What must be completed before using Amazon ECR?", "answer_span": "Ensure you have completed the Amazon ECR setup steps.", "chunk": "prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. • Your user has the required IAM permissions to access and use the Amazon ECR service. For more information, see Amazon ECR managed policies. • You have Docker installed. For Docker installation steps for Amazon Linux 2023, see Installing Docker on AL2023. For all other operating systems, see the Docker documentation at Docker Desktop overview. • You have the AWS CLI installed and configured. For more information, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. If you don't have or need a local development environment and you prefer to use an Amazon EC2 instance to use Docker, we provide the following steps to launch an Amazon EC2 instance using Amazon Linux 2023 and install Docker Engine and the Docker CLI. Installing Docker on AL2023 Docker is available on many different operating systems, including most modern Linux distributions, like Ubuntu, and even macOS and Windows. For more information about how to install Docker on your particular operating system, go to the Docker installation guide. You do not need a local development system to use Docker. If you are using Amazon EC2 already, you can launch an Amazon Linux 2023 instance and install Docker to get started. If you already have Docker installed, skip to Create a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide."} +{"global_id": 253, "doc_id": "ecs", "chunk_id": "13", "question_id": 2, "question": "What permissions are required to access the Amazon ECR service?", "answer_span": "Your user has the required IAM permissions to access and use the Amazon ECR service.", "chunk": "prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. • Your user has the required IAM permissions to access and use the Amazon ECR service. For more information, see Amazon ECR managed policies. • You have Docker installed. 
For Docker installation steps for Amazon Linux 2023, see Installing Docker on AL2023. For all other operating systems, see the Docker documentation at Docker Desktop overview. • You have the AWS CLI installed and configured. For more information, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. If you don't have or need a local development environment and you prefer to use an Amazon EC2 instance to use Docker, we provide the following steps to launch an Amazon EC2 instance using Amazon Linux 2023 and install Docker Engine and the Docker CLI. Installing Docker on AL2023 Docker is available on many different operating systems, including most modern Linux distributions, like Ubuntu, and even macOS and Windows. For more information about how to install Docker on your particular operating system, go to the Docker installation guide. You do not need a local development system to use Docker. If you are using Amazon EC2 already, you can launch an Amazon Linux 2023 instance and install Docker to get started. If you already have Docker installed, skip to Create a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide."} +{"global_id": 254, "doc_id": "ecs", "chunk_id": "13", "question_id": 3, "question": "What should you do if you prefer to use an Amazon EC2 instance for Docker?", "answer_span": "If you don't have or need a local development environment and you prefer to use an Amazon EC2 instance to use Docker, we provide the following steps to launch an Amazon EC2 instance using Amazon Linux 2023 and install Docker Engine and the Docker CLI.", "chunk": "prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. • Your user has the required IAM permissions to access and use the Amazon ECR service. For more information, see Amazon ECR managed policies. • You have Docker installed. For Docker installation steps for Amazon Linux 2023, see Installing Docker on AL2023. For all other operating systems, see the Docker documentation at Docker Desktop overview. • You have the AWS CLI installed and configured. For more information, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. If you don't have or need a local development environment and you prefer to use an Amazon EC2 instance to use Docker, we provide the following steps to launch an Amazon EC2 instance using Amazon Linux 2023 and install Docker Engine and the Docker CLI. Installing Docker on AL2023 Docker is available on many different operating systems, including most modern Linux distributions, like Ubuntu, and even macOS and Windows. For more information about how to install Docker on your particular operating system, go to the Docker installation guide. You do not need a local development system to use Docker. If you are using Amazon EC2 already, you can launch an Amazon Linux 2023 instance and install Docker to get started. If you already have Docker installed, skip to Create a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. 
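Before launching anything, it is worth confirming that the prerequisites listed above are actually in place; a quick sketch of the checks follows (exact version output will vary).

    # Confirm the AWS CLI and Docker are installed and on the PATH.
    aws --version
    docker --version
    # Confirm AWS credentials are configured; prints the caller's account and ARN.
    aws sts get-caller-identity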
For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide."} +{"global_id": 255, "doc_id": "ecs", "chunk_id": "13", "question_id": 4, "question": "Where can you find the installation steps for Docker on Amazon Linux 2023?", "answer_span": "For Docker installation steps for Amazon Linux 2023, see Installing Docker on AL2023.", "chunk": "prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. • Your user has the required IAM permissions to access and use the Amazon ECR service. For more information, see Amazon ECR managed policies. • You have Docker installed. For Docker installation steps for Amazon Linux 2023, see Installing Docker on AL2023. For all other operating systems, see the Docker documentation at Docker Desktop overview. • You have the AWS CLI installed and configured. For more information, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. If you don't have or need a local development environment and you prefer to use an Amazon EC2 instance to use Docker, we provide the following steps to launch an Amazon EC2 instance using Amazon Linux 2023 and install Docker Engine and the Docker CLI. Installing Docker on AL2023 Docker is available on many different operating systems, including most modern Linux distributions, like Ubuntu, and even macOS and Windows. For more information about how to install Docker on your particular operating system, go to the Docker installation guide. You do not need a local development system to use Docker. If you are using Amazon EC2 already, you can launch an Amazon Linux 2023 instance and install Docker to get started. If you already have Docker installed, skip to Create a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide."} +{"global_id": 256, "doc_id": "ecs", "chunk_id": "14", "question_id": 1, "question": "What is the first step to install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI?", "answer_span": "1. Launch an instance with the latest Amazon Linux 2023 AMI.", "chunk": "a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide. 2. Connect to your instance. For more information, see Connect to your EC2 instance in the Amazon EC2 User Guide. 3. Update the installed packages and package cache on your instance. Prerequisites 15 Amazon Elastic Container Service Developer Guide sudo yum update -y 4. Install the most recent Docker Community Edition package. sudo yum install docker 5. Start the Docker service. sudo service docker start 6. Add the ec2-user to the docker group so you can execute Docker commands without using sudo. sudo usermod -a -G docker ec2-user 7. Log out and log back in again to pick up the new docker group permissions. You can accomplish this by closing your current SSH terminal window and reconnecting to your instance in a new one. 
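For readers who prefer the CLI to the launch instance wizard referenced above, a minimal sketch; the AMI ID, instance type, key pair, and security group below are placeholders you must replace, since Amazon Linux 2023 AMI IDs differ by Region.

    # Launch one Amazon Linux 2023 instance (all IDs are placeholders).
    aws ec2 run-instances \
        --image-id ami-0123456789abcdef0 \
        --instance-type t3.micro \
        --key-name MyKeyPair \
        --security-group-ids sg-0123456789abcdef0 \
        --count 1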
Your new SSH session will have the appropriate docker group permissions. 8. Verify that the ec2-user can run Docker commands without sudo. docker info Note In some cases, you may need to reboot your instance to provide permissions for the ec2-user to access the Docker daemon. Try rebooting your instance if you see the following error: Cannot connect to the Docker daemon. Is the docker daemon running on this host? Create a Docker image Amazon ECS task definitions use container images to launch containers on the container instances in your clusters. In this section, you create a Docker image of a simple web application, and test Create a Docker image 16 Amazon Elastic Container Service Developer Guide it on your local system or Amazon EC2 instance,"} +{"global_id": 257, "doc_id": "ecs", "chunk_id": "14", "question_id": 2, "question": "What command is used to install the most recent Docker Community Edition package?", "answer_span": "sudo yum install docker", "chunk": "a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide. 2. Connect to your instance. For more information, see Connect to your EC2 instance in the Amazon EC2 User Guide. 3. Update the installed packages and package cache on your instance. Prerequisites 15 Amazon Elastic Container Service Developer Guide sudo yum update -y 4. Install the most recent Docker Community Edition package. sudo yum install docker 5. Start the Docker service. sudo service docker start 6. Add the ec2-user to the docker group so you can execute Docker commands without using sudo. sudo usermod -a -G docker ec2-user 7. Log out and log back in again to pick up the new docker group permissions. You can accomplish this by closing your current SSH terminal window and reconnecting to your instance in a new one. Your new SSH session will have the appropriate docker group permissions. 8. Verify that the ec2-user can run Docker commands without sudo. docker info Note In some cases, you may need to reboot your instance to provide permissions for the ec2-user to access the Docker daemon. Try rebooting your instance if you see the following error: Cannot connect to the Docker daemon. Is the docker daemon running on this host? Create a Docker image Amazon ECS task definitions use container images to launch containers on the container instances in your clusters. In this section, you create a Docker image of a simple web application, and test Create a Docker image 16 Amazon Elastic Container Service Developer Guide it on your local system or Amazon EC2 instance,"} +{"global_id": 258, "doc_id": "ecs", "chunk_id": "14", "question_id": 3, "question": "What should you do after adding the ec2-user to the docker group?", "answer_span": "Log out and log back in again to pick up the new docker group permissions.", "chunk": "a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide. 2. Connect to your instance. For more information, see Connect to your EC2 instance in the Amazon EC2 User Guide. 3. Update the installed packages and package cache on your instance. 
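As a small aside to steps 7 and 8 above: on many Linux systems you can pick up the new docker group membership in the current session without logging out, though a fresh login (or, as the note says, a reboot) remains the more reliable path.

    # Start a subshell with the docker group applied, then re-verify.
    newgrp docker
    docker info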
Prerequisites 15 Amazon Elastic Container Service Developer Guide sudo yum update -y 4. Install the most recent Docker Community Edition package. sudo yum install docker 5. Start the Docker service. sudo service docker start 6. Add the ec2-user to the docker group so you can execute Docker commands without using sudo. sudo usermod -a -G docker ec2-user 7. Log out and log back in again to pick up the new docker group permissions. You can accomplish this by closing your current SSH terminal window and reconnecting to your instance in a new one. Your new SSH session will have the appropriate docker group permissions. 8. Verify that the ec2-user can run Docker commands without sudo. docker info Note In some cases, you may need to reboot your instance to provide permissions for the ec2-user to access the Docker daemon. Try rebooting your instance if you see the following error: Cannot connect to the Docker daemon. Is the docker daemon running on this host? Create a Docker image Amazon ECS task definitions use container images to launch containers on the container instances in your clusters. In this section, you create a Docker image of a simple web application, and test Create a Docker image 16 Amazon Elastic Container Service Developer Guide it on your local system or Amazon EC2 instance,"} +{"global_id": 259, "doc_id": "ecs", "chunk_id": "14", "question_id": 4, "question": "What error might indicate that you need to reboot your instance?", "answer_span": "Cannot connect to the Docker daemon. Is the docker daemon running on this host?", "chunk": "a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide. 2. Connect to your instance. For more information, see Connect to your EC2 instance in the Amazon EC2 User Guide. 3. Update the installed packages and package cache on your instance. Prerequisites 15 Amazon Elastic Container Service Developer Guide sudo yum update -y 4. Install the most recent Docker Community Edition package. sudo yum install docker 5. Start the Docker service. sudo service docker start 6. Add the ec2-user to the docker group so you can execute Docker commands without using sudo. sudo usermod -a -G docker ec2-user 7. Log out and log back in again to pick up the new docker group permissions. You can accomplish this by closing your current SSH terminal window and reconnecting to your instance in a new one. Your new SSH session will have the appropriate docker group permissions. 8. Verify that the ec2-user can run Docker commands without sudo. docker info Note In some cases, you may need to reboot your instance to provide permissions for the ec2-user to access the Docker daemon. Try rebooting your instance if you see the following error: Cannot connect to the Docker daemon. Is the docker daemon running on this host? Create a Docker image Amazon ECS task definitions use container images to launch containers on the container instances in your clusters. 
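The numbered installation steps above condense into a short script you can paste into an SSH session on the instance; this is the guide's own command sequence, with -y added to the install so it runs non-interactively.

    #!/bin/bash
    # Install and start Docker on Amazon Linux 2023 (same steps as above).
    sudo yum update -y
    sudo yum install -y docker
    sudo service docker start
    # Allow ec2-user to run docker without sudo; takes effect on the next login.
    sudo usermod -a -G docker ec2-user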
In this section, you create a Docker image of a simple web application, and test Create a Docker image 16 Amazon Elastic Container Service Developer Guide it on your local system or Amazon EC2 instance,"} +{"global_id": 260, "doc_id": "ecs", "chunk_id": "15", "question_id": 1, "question": "What is the purpose of task definitions in Amazon ECS?", "answer_span": "task definitions use container images to launch containers on the container instances in your clusters.", "chunk": "task definitions use container images to launch containers on the container instances in your clusters. In this section, you create a Docker image of a simple web application, and test Create a Docker image 16 Amazon Elastic Container Service Developer Guide it on your local system or Amazon EC2 instance, and then push the image to the Amazon ECR container registry so you can use it in an Amazon ECS task definition. To create a Docker image of a simple web application 1. Create a file called Dockerfile. A Dockerfile is a manifest that describes the base image to use for your Docker image and what you want installed and running on it. For more information about Dockerfiles, go to the Dockerfile Reference. touch Dockerfile 2. Edit the Dockerfile you just created and add the following content. FROM public.ecr.aws/amazonlinux/amazonlinux:latest # Update installed packages and install Apache RUN yum update -y && \\ yum install -y httpd # Write hello world message RUN echo 'Hello World!' > /var/www/html/index.html # Configure Apache RUN echo 'mkdir -p /var/run/httpd' >> /root/run_apache.sh && \\ echo 'mkdir -p /var/lock/httpd' >> /root/run_apache.sh && \\ echo '/usr/sbin/httpd -D FOREGROUND' >> /root/run_apache.sh && \\ chmod 755 /root/run_apache.sh EXPOSE 80 CMD /root/run_apache.sh This Dockerfile uses the public Amazon Linux 2023 image hosted on Amazon ECR Public. The RUN instructions update the package caches, installs some software packages for the web server, and then write the \"Hello World!\" content to the web servers document root. The EXPOSE instruction means that port 80 on the container is the one that is listening, and the CMD instruction starts the web server. 3. Build the Docker image from your Dockerfile. Create a Docker image 17 Amazon Elastic Container Service Developer Guide Note Some versions of Docker may require the full path to your Dockerfile"} +{"global_id": 261, "doc_id": "ecs", "chunk_id": "15", "question_id": 2, "question": "What is the first step to create a Docker image of a simple web application?", "answer_span": "Create a file called Dockerfile.", "chunk": "task definitions use container images to launch containers on the container instances in your clusters. In this section, you create a Docker image of a simple web application, and test Create a Docker image 16 Amazon Elastic Container Service Developer Guide it on your local system or Amazon EC2 instance, and then push the image to the Amazon ECR container registry so you can use it in an Amazon ECS task definition. To create a Docker image of a simple web application 1. Create a file called Dockerfile. A Dockerfile is a manifest that describes the base image to use for your Docker image and what you want installed and running on it. For more information about Dockerfiles, go to the Dockerfile Reference. touch Dockerfile 2. Edit the Dockerfile you just created and add the following content. 
FROM public.ecr.aws/amazonlinux/amazonlinux:latest # Update installed packages and install Apache RUN yum update -y && \\ yum install -y httpd # Write hello world message RUN echo 'Hello World!' > /var/www/html/index.html # Configure Apache RUN echo 'mkdir -p /var/run/httpd' >> /root/run_apache.sh && \\ echo 'mkdir -p /var/lock/httpd' >> /root/run_apache.sh && \\ echo '/usr/sbin/httpd -D FOREGROUND' >> /root/run_apache.sh && \\ chmod 755 /root/run_apache.sh EXPOSE 80 CMD /root/run_apache.sh This Dockerfile uses the public Amazon Linux 2023 image hosted on Amazon ECR Public. The RUN instructions update the package caches, installs some software packages for the web server, and then write the \"Hello World!\" content to the web servers document root. The EXPOSE instruction means that port 80 on the container is the one that is listening, and the CMD instruction starts the web server. 3. Build the Docker image from your Dockerfile. Create a Docker image 17 Amazon Elastic Container Service Developer Guide Note Some versions of Docker may require the full path to your Dockerfile"} +{"global_id": 262, "doc_id": "ecs", "chunk_id": "15", "question_id": 3, "question": "What does the RUN instruction in the Dockerfile do?", "answer_span": "The RUN instructions update the package caches, installs some software packages for the web server, and then write the 'Hello World!' content to the web servers document root.", "chunk": "task definitions use container images to launch containers on the container instances in your clusters. In this section, you create a Docker image of a simple web application, and test Create a Docker image 16 Amazon Elastic Container Service Developer Guide it on your local system or Amazon EC2 instance, and then push the image to the Amazon ECR container registry so you can use it in an Amazon ECS task definition. To create a Docker image of a simple web application 1. Create a file called Dockerfile. A Dockerfile is a manifest that describes the base image to use for your Docker image and what you want installed and running on it. For more information about Dockerfiles, go to the Dockerfile Reference. touch Dockerfile 2. Edit the Dockerfile you just created and add the following content. FROM public.ecr.aws/amazonlinux/amazonlinux:latest # Update installed packages and install Apache RUN yum update -y && \\ yum install -y httpd # Write hello world message RUN echo 'Hello World!' > /var/www/html/index.html # Configure Apache RUN echo 'mkdir -p /var/run/httpd' >> /root/run_apache.sh && \\ echo 'mkdir -p /var/lock/httpd' >> /root/run_apache.sh && \\ echo '/usr/sbin/httpd -D FOREGROUND' >> /root/run_apache.sh && \\ chmod 755 /root/run_apache.sh EXPOSE 80 CMD /root/run_apache.sh This Dockerfile uses the public Amazon Linux 2023 image hosted on Amazon ECR Public. The RUN instructions update the package caches, installs some software packages for the web server, and then write the \"Hello World!\" content to the web servers document root. The EXPOSE instruction means that port 80 on the container is the one that is listening, and the CMD instruction starts the web server. 3. Build the Docker image from your Dockerfile. 
Create a Docker image 17 Amazon Elastic Container Service Developer Guide Note Some versions of Docker may require the full path to your Dockerfile"} +{"global_id": 264, "doc_id": "ecs", "chunk_id": "16", "question_id": 1, "question": "What command is used to build the Docker image from your Dockerfile?", "answer_span": "3. Build the Docker image from your Dockerfile.", "chunk": "80 on the container is the one that is listening, and the CMD instruction starts the web server. 3. Build the Docker image from your Dockerfile. Create a Docker image 17 Amazon Elastic Container Service Developer Guide Note Some versions of Docker may require the full path to your Dockerfile in the following command, instead of the relative path shown below. If you run the command on an ARM-based system, such as Apple Silicon, use the --platform option \"--platform linux/amd64\". docker build -t hello-world . 4. List your container image. docker images --filter reference=hello-world Output: REPOSITORY TAG IMAGE ID CREATED SIZE hello-world latest e9ffedc8c286 4 minutes ago 194MB 5. Run the newly built image. The -p 80:80 option maps the exposed port 80 on the container to port 80 on the host system. docker run -t -i -p 80:80 hello-world Note Output from the Apache web server is displayed in the terminal window. 
You can ignore the \"Could not reliably determine the fully qualified domain name\" message. 6. Open a browser and point to the server that is running Docker and hosting your container. • If you are using an EC2 instance, this is the Public DNS value for the server, which is the same address you use to connect to the instance with SSH. Make sure that the security group for your instance allows inbound traffic on port 80. Create a Docker image 18 Amazon Elastic Container Service Developer Guide • If you are running Docker locally, point your browser to http://localhost/. You should see a web page with your \"Hello World!\" statement. 7. Stop the Docker container by typing Ctrl + c. Push your image to Amazon Elastic Container Registry Amazon ECR is a managed AWS image registry service. You can use the Docker CLI to push,"} +{"global_id": 265, "doc_id": "ecs", "chunk_id": "16", "question_id": 2, "question": "What is the size of the hello-world image?", "answer_span": "hello-world latest e9ffedc8c286 4 minutes ago 194MB", "chunk": "80 on the container is the one that is listening, and the CMD instruction starts the web server. 3. Build the Docker image from your Dockerfile. Create a Docker image 17 Amazon Elastic Container Service Developer Guide Note Some versions of Docker may require the full path to your Dockerfile in the following command, instead of the relative path shown below. If you run the command on an ARM-based system, such as Apple Silicon, use the --platform option \"--platform linux/amd64\". docker build -t hello-world . 4. 
List your container image. docker images --filter reference=hello-world Output: REPOSITORY TAG IMAGE ID CREATED SIZE hello-world latest e9ffedc8c286 4 minutes ago 194MB 5. Run the newly built image. The -p 80:80 option maps the exposed port 80 on the container to port 80 on the host system. docker run -t -i -p 80:80 hello-world Note Output from the Apache web server is displayed in the terminal window. You can ignore the \"Could not reliably determine the fully qualified domain name\" message. 6. Open a browser and point to the server that is running Docker and hosting your container. • If you are using an EC2 instance, this is the Public DNS value for the server, which is the same address you use to connect to the instance with SSH. Make sure that the security group for your instance allows inbound traffic on port 80. Create a Docker image 18 Amazon Elastic Container Service Developer Guide • If you are running Docker locally, point your browser to http://localhost/. You should see a web page with your \"Hello World!\" statement. 7. Stop the Docker container by typing Ctrl + c. Push your image to Amazon Elastic Container Registry Amazon ECR is a managed AWS image registry service. You can use the Docker CLI to push,"} +{"global_id": 266, "doc_id": "ecs", "chunk_id": "16", "question_id": 3, "question": "What command should you use to run the newly built image?", "answer_span": "docker run -t -i -p 80:80 hello-world", "chunk": "80 on the container is the one that is listening, and the CMD instruction starts the web server. 3. Build the Docker image from your Dockerfile. Create a Docker image 17 Amazon Elastic Container Service Developer Guide Note Some versions of Docker may require the full path to your Dockerfile in the following command, instead of the relative path shown below. If you run the command on an ARM-based system, such as Apple Silicon, use the --platform option \"--platform linux/amd64\". docker build -t hello-world . 4. List your container image. docker images --filter reference=hello-world Output: REPOSITORY TAG IMAGE ID CREATED SIZE hello-world latest e9ffedc8c286 4 minutes ago 194MB 5. Run the newly built image. The -p 80:80 option maps the exposed port 80 on the container to port 80 on the host system. docker run -t -i -p 80:80 hello-world Note Output from the Apache web server is displayed in the terminal window. You can ignore the \"Could not reliably determine the fully qualified domain name\" message. 6. Open a browser and point to the server that is running Docker and hosting your container. • If you are using an EC2 instance, this is the Public DNS value for the server, which is the same address you use to connect to the instance with SSH. Make sure that the security group for your instance allows inbound traffic on port 80. Create a Docker image 18 Amazon Elastic Container Service Developer Guide • If you are running Docker locally, point your browser to http://localhost/. You should see a web page with your \"Hello World!\" statement. 7. Stop the Docker container by typing Ctrl + c. Push your image to Amazon Elastic Container Registry Amazon ECR is a managed AWS image registry service. You can use the Docker CLI to push,"} +{"global_id": 267, "doc_id": "ecs", "chunk_id": "16", "question_id": 4, "question": "What should you do if you are running Docker locally?", "answer_span": "If you are running Docker locally, point your browser to http://localhost/.", "chunk": "80 on the container is the one that is listening, and the CMD instruction starts the web server. 3. Build the Docker image from your Dockerfile. Create a Docker image 17 Amazon Elastic Container Service Developer Guide Note Some versions of Docker may require the full path to your Dockerfile in the following command, instead of the relative path shown below. If you run the command on an ARM-based system, such as Apple Silicon, use the --platform option \"--platform linux/amd64\". docker build -t hello-world . 4. List your container image. docker images --filter reference=hello-world Output: REPOSITORY TAG IMAGE ID CREATED SIZE hello-world latest e9ffedc8c286 4 minutes ago 194MB 5. Run the newly built image. The -p 80:80 option maps the exposed port 80 on the container to port 80 on the host system. docker run -t -i -p 80:80 hello-world Note Output from the Apache web server is displayed in the terminal window. You can ignore the \"Could not reliably determine the fully qualified domain name\" message. 6. Open a browser and point to the server that is running Docker and hosting your container. • If you are using an EC2 instance, this is the Public DNS value for the server, which is the same address you use to connect to the instance with SSH. Make sure that the security group for your instance allows inbound traffic on port 80. Create a Docker image 18 Amazon Elastic Container Service Developer Guide • If you are running Docker locally, point your browser to http://localhost/."} +{"global_id": 268, "doc_id": "ecs", "chunk_id": "17", "question_id": 1, "question": "What should you see when you navigate to http://localhost/?", "answer_span": "You should see a web page with your \"Hello World!\" statement.", "chunk": "browser to http://localhost/. 
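Before the push steps that follow, a quick non-browser check of the running container; this sketch assumes you are testing on the Docker host itself, so localhost resolves to the mapped port 80.

    # With the container from the run step still up, fetch the page from the host.
    curl -s http://localhost/
    # Expected output: Hello World!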
You should see a web page with your \"Hello World!\" statement. 7. Stop the Docker container by typing Ctrl + c. Push your image to Amazon Elastic Container Registry Amazon ECR is a managed AWS image registry service. You can use the Docker CLI to push, pull, and manage images in your Amazon ECR repositories. For Amazon ECR product details, featured customer case studies, and FAQs, see the Amazon Elastic Container Registry product detail pages. To tag your image and push it to Amazon ECR 1. Create an Amazon ECR repository to store your hello-world image. Note the repositoryUri in the output. Substitute region with your AWS Region, for example, us-east-1."} +{"global_id": 269, "doc_id": "ecs", "chunk_id": "17", "question_id": 2, "question": "What command is used to create an Amazon ECR repository?", "answer_span": "aws ecr create-repository --repository-name hello-repository --region region", "chunk": "browser to http://localhost/. You should see a web page with your \"Hello World!\" statement. 7. Stop the Docker container by typing Ctrl + c. Push your image to Amazon Elastic Container Registry Amazon ECR is a managed AWS image registry service. You can use the Docker CLI to push, pull, and manage images in your Amazon ECR repositories. For Amazon ECR product details, featured customer case studies, and FAQs, see the Amazon Elastic Container Registry product detail pages. To tag your image and push it to Amazon ECR 1. Create an Amazon ECR repository to store your hello-world image. Note the repositoryUri in the output. Substitute region with your AWS Region, for example, us-east-1. 
aws ecr create-repository --repository-name hello-repository --region region Output: { \"repository\": { \"registryId\": \"aws_account_id\", \"repositoryName\": \"hello-repository\", \"repositoryArn\": \"arn:aws:ecr:region:aws_account_id:repository/hellorepository\", \"createdAt\": 1505337806.0, \"repositoryUri\": \"aws_account_id.dkr.ecr.region.amazonaws.com/hellorepository\" } } 2. Tag the hello-world image with the repositoryUri value from the previous step. docker tag hello-world aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Push your image to Amazon Elastic Container Registry 19 Amazon Elastic Container Service Developer Guide 3. Run the aws ecr get-login-password command. Specify the registry URI you want to authenticate to. For more information, see Registry Authentication in the Amazon Elastic Container Registry User Guide. "} +{"global_id": 270, "doc_id": "ecs", "chunk_id": "17", "question_id": 3, "question": "What command do you run to authenticate to the registry?", "answer_span": "aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com", "chunk": "browser to http://localhost/. You should see a web page with your \"Hello World!\" statement. 7. Stop the Docker container by typing Ctrl + c. Push your image to Amazon Elastic Container Registry Amazon ECR is a managed AWS image registry service. You can use the Docker CLI to push, pull, and manage images in your Amazon ECR repositories. For Amazon ECR product details, featured customer case studies, and FAQs, see the Amazon Elastic Container Registry product detail pages. To tag your image and push it to Amazon ECR 1. Create an Amazon ECR repository to store your hello-world image. Note the repositoryUri in the output. Substitute region with your AWS Region, for example, us-east-1. aws ecr create-repository --repository-name hello-repository --region region Output: { \"repository\": { \"registryId\": \"aws_account_id\", \"repositoryName\": \"hello-repository\", \"repositoryArn\": \"arn:aws:ecr:region:aws_account_id:repository/hellorepository\", \"createdAt\": 1505337806.0, \"repositoryUri\": \"aws_account_id.dkr.ecr.region.amazonaws.com/hellorepository\" } } 2. Tag the hello-world image with the repositoryUri value from the previous step. docker tag hello-world aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Push your image to Amazon Elastic Container Registry 19 Amazon Elastic Container Service Developer Guide 3. Run the aws ecr get-login-password command. Specify the registry URI you want to authenticate to. For more information, see Registry Authentication in the Amazon Elastic Container Registry User Guide. 
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com Output: Login Succeeded Important If you receive an error, install or upgrade to the latest version of the AWS CLI. For more information, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. 4. Push the image to Amazon ECR with the repositoryUri value from the earlier step. docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Clean up To continue on with creating an Amazon ECS task definition and launching a task with your container image, skip to the Next steps. When you"} +{"global_id": 271, "doc_id": "ecs", "chunk_id": "17", "question_id": 4, "question": "What should you do if you receive an error related to the AWS CLI?", "answer_span": "If you receive an error, install or upgrade to the latest version of the AWS CLI.", "chunk": "browser to http://localhost/. You should see a web page with your \"Hello World!\" statement. 7. Stop the Docker container by typing Ctrl + c. Push your image to Amazon Elastic Container Registry Amazon ECR is a managed AWS image registry service. You can use the Docker CLI to push, pull, and manage images in your Amazon ECR repositories. For Amazon ECR product details, featured customer case studies, and FAQs, see the Amazon Elastic Container Registry product detail pages. To tag your image and push it to Amazon ECR 1. Create an Amazon ECR repository to store your hello-world image. Note the repositoryUri in the output. Substitute region with your AWS Region, for example, us-east-1. aws ecr create-repository --repository-name hello-repository --region region Output: { \"repository\": { \"registryId\": \"aws_account_id\", \"repositoryName\": \"hello-repository\", \"repositoryArn\": \"arn:aws:ecr:region:aws_account_id:repository/hellorepository\", \"createdAt\": 1505337806.0, \"repositoryUri\": \"aws_account_id.dkr.ecr.region.amazonaws.com/hellorepository\" } } 2. Tag the hello-world image with the repositoryUri value from the previous step. docker tag hello-world aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Push your image to Amazon Elastic Container Registry 19 Amazon Elastic Container Service Developer Guide 3. Run the aws ecr get-login-password command. Specify the registry URI you want to authenticate to. For more information, see Registry Authentication in the Amazon Elastic Container Registry User Guide. aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com Output: Login Succeeded Important If you receive an error, install or upgrade to the latest version of the AWS CLI. For more information, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. 4. Push the image to Amazon ECR with the repositoryUri value from the earlier step. docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Clean up To continue on with creating an Amazon ECS task definition and launching a task with your container image, skip to the Next steps. When you"} +{"global_id": 272, "doc_id": "ecs", "chunk_id": "18", "question_id": 1, "question": "What command is used to push the image to Amazon ECR?", "answer_span": "docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository", "chunk": "Command Line Interface User Guide. 4. 
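The authenticate, tag, and push steps above chain together as shown below; aws_account_id and region are the same placeholders used throughout the walkthrough, and note that the login option is --password-stdin, with two dashes.

    # Authenticate Docker to the registry, then tag and push (placeholder account/Region).
    aws ecr get-login-password --region region | \
        docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
    docker tag hello-world aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository
    docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository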
Push the image to Amazon ECR with the repositoryUri value from the earlier step. docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Clean up To continue on with creating an Amazon ECS task definition and launching a task with your container image, skip to the Next steps. When you are done experimenting with your Amazon ECR image, you can delete the repository so you are not charged for image storage. aws ecr delete-repository --repository-name hello-repository --region region --force Next steps Your task definitions require a task execution role. For more information, see Amazon ECS task execution IAM role. After you have created and pushed your container image to Amazon ECR, you can use that image in a task definition. For more information, see one of the following: • the section called “Learn how to create a Linux task for the Fargate launch type” Clean up 20 Amazon Elastic Container Service Developer Guide • the section called “Learn how to create a Windows task for the Fargate launch type” • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Learn how to create an Amazon ECS Linux task for the Fargate launch type Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage your containers. You can host your containers on a serverless infrastructure that is managed by Amazon ECS by launching your services or tasks on AWS Fargate. For more information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before"} +{"global_id": 273, "doc_id": "ecs", "chunk_id": "18", "question_id": 2, "question": "What should you do when you are done experimenting with your Amazon ECR image?", "answer_span": "you can delete the repository so you are not charged for image storage.", "chunk": "Command Line Interface User Guide. 4. Push the image to Amazon ECR with the repositoryUri value from the earlier step. docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Clean up To continue on with creating an Amazon ECS task definition and launching a task with your container image, skip to the Next steps. When you are done experimenting with your Amazon ECR image, you can delete the repository so you are not charged for image storage. aws ecr delete-repository --repository-name hello-repository --region region --force Next steps Your task definitions require a task execution role. For more information, see Amazon ECS task execution IAM role. After you have created and pushed your container image to Amazon ECR, you can use that image in a task definition. For more information, see one of the following: • the section called “Learn how to create a Linux task for the Fargate launch type” Clean up 20 Amazon Elastic Container Service Developer Guide • the section called “Learn how to create a Windows task for the Fargate launch type” • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Learn how to create an Amazon ECS Linux task for the Fargate launch type Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage your containers. 
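As a hedged sketch of the "use that image in a task definition" step mentioned above, a minimal Fargate task definition that references the pushed image; the family name, CPU and memory values, and role ARN are illustrative and assume the task execution role discussed above already exists.

    # Register a minimal Fargate task definition (names and ARNs are placeholders).
    aws ecs register-task-definition --cli-input-json '{
      "family": "hello-world-task",
      "networkMode": "awsvpc",
      "requiresCompatibilities": ["FARGATE"],
      "cpu": "256",
      "memory": "512",
      "executionRoleArn": "arn:aws:iam::aws_account_id:role/ecsTaskExecutionRole",
      "containerDefinitions": [{
        "name": "hello-world",
        "image": "aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository",
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        "essential": true
      }]
    }'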
You can host your containers on a serverless infrastructure that is managed by Amazon ECS by launching your services or tasks on AWS Fargate. For more information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before"} +{"global_id": 274, "doc_id": "ecs", "chunk_id": "18", "question_id": 3, "question": "What is required for your task definitions after creating and pushing your container image?", "answer_span": "Your task definitions require a task execution role.", "chunk": "Command Line Interface User Guide. 4. Push the image to Amazon ECR with the repositoryUri value from the earlier step. docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Clean up To continue on with creating an Amazon ECS task definition and launching a task with your container image, skip to the Next steps. When you are done experimenting with your Amazon ECR image, you can delete the repository so you are not charged for image storage. aws ecr delete-repository --repository-name hello-repository --region region --force Next steps Your task definitions require a task execution role. For more information, see Amazon ECS task execution IAM role. After you have created and pushed your container image to Amazon ECR, you can use that image in a task definition. For more information, see one of the following: • the section called “Learn how to create a Linux task for the Fargate launch type” Clean up 20 Amazon Elastic Container Service Developer Guide • the section called “Learn how to create a Windows task for the Fargate launch type” • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Learn how to create an Amazon ECS Linux task for the Fargate launch type Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage your containers. You can host your containers on a serverless infrastructure that is managed by Amazon ECS by launching your services or tasks on AWS Fargate. For more information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before"} +{"global_id": 275, "doc_id": "ecs", "chunk_id": "18", "question_id": 4, "question": "What service does Amazon ECS provide for managing containers?", "answer_span": "Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage your containers.", "chunk": "Command Line Interface User Guide. 4. Push the image to Amazon ECR with the repositoryUri value from the earlier step. docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Clean up To continue on with creating an Amazon ECS task definition and launching a task with your container image, skip to the Next steps. When you are done experimenting with your Amazon ECR image, you can delete the repository so you are not charged for image storage. aws ecr delete-repository --repository-name hello-repository --region region --force Next steps Your task definitions require a task execution role. 
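The task execution role mentioned here can also be created ahead of time so the console does not have to. A hedged sketch using the AWS managed policy; ecsTaskExecutionRole is the conventional role name and ecs-trust.json is an illustrative file name:

# Trust policy letting ECS tasks assume the role
cat > ecs-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name ecsTaskExecutionRole --assume-role-policy-document file://ecs-trust.json
aws iam attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy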
For more information, see Amazon ECS task execution IAM role. After you have created and pushed your container image to Amazon ECR, you can use that image in a task definition. For more information, see one of the following: • the section called “Learn how to create a Linux task for the Fargate launch type” Clean up 20 Amazon Elastic Container Service Developer Guide • the section called “Learn how to create a Windows task for the Fargate launch type” • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Learn how to create an Amazon ECS Linux task for the Fargate launch type Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service that makes it easy to run, stop, and manage your containers. You can host your containers on a serverless infrastructure that is managed by Amazon ECS by launching your services or tasks on AWS Fargate. For more information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before"} {"global_id": 275, "doc_id": "ecs", "chunk_id": "18", "question_id": 4, "question": "What service does Amazon ECS provide for managing containers?", "answer_span": "Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service that makes it easy to run, stop, and manage your containers.", "chunk": "Command Line Interface User Guide. 4. Push the image to Amazon ECR with the repositoryUri value from the earlier step. docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Clean up To continue on with creating an Amazon ECS task definition and launching a task with your container image, skip to the Next steps. When you are done experimenting with your Amazon ECR image, you can delete the repository so you are not charged for image storage. aws ecr delete-repository --repository-name hello-repository --region region --force Next steps Your task definitions require a task execution role. For more information, see Amazon ECS task execution IAM role. After you have created and pushed your container image to Amazon ECR, you can use that image in a task definition. For more information, see one of the following: • the section called “Learn how to create a Linux task for the Fargate launch type” Clean up 20 Amazon Elastic Container Service Developer Guide • the section called “Learn how to create a Windows task for the Fargate launch type” • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Learn how to create an Amazon ECS Linux task for the Fargate launch type Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast container management service that makes it easy to run, stop, and manage your containers. You can host your containers on a serverless infrastructure that is managed by Amazon ECS by launching your services or tasks on AWS Fargate. For more information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before"} {"global_id": 276, "doc_id": "ecs", "chunk_id": "19", "question_id": 1, "question": "What is required for Fargate tasks in Amazon ECS?", "answer_span": "the console attempts to automatically create the task execution IAM role, which is required for Fargate tasks.", "chunk": "information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before you begin, complete the steps in Set up to use Amazon ECS and ensure that your IAM user has the permissions specified in the AdministratorAccess IAM policy example.
The console attempts to automatically create the task execution IAM role, which is required for Fargate tasks. To ensure that the console is able to create this IAM role, one of the following must be true: • Your user has administrator access. For more information, see Set up to use Amazon ECS. • Your user has the IAM permissions to create a service role. For more information, see Creating a Role to Delegate Permissions to an AWS Service. • A user with administrator access has manually created the task execution role so that it is available on the account to be used. For more information, see Amazon ECS task execution IAM role. Important The security group you select when creating a service with your task definition must have port 80 open for inbound traffic. Add the following inbound rule to your security group. Learn how to create a Linux task for the Fargate launch type 21"} {"global_id": 277, "doc_id": "ecs", "chunk_id": "19", "question_id": 2, "question": "What must be true for the console to create the IAM role?", "answer_span": "one of the following must be true: • Your user has administrator access.", "chunk": "information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before you begin, complete the steps in Set up to use Amazon ECS and ensure that your IAM user has the permissions specified in the AdministratorAccess IAM policy example. The console attempts to automatically create the task execution IAM role, which is required for Fargate tasks. To ensure that the console is able to create this IAM role, one of the following must be true: • Your user has administrator access. For more information, see Set up to use Amazon ECS. • Your user has the IAM permissions to create a service role. For more information, see Creating a Role to Delegate Permissions to an AWS Service. • A user with administrator access has manually created the task execution role so that it is available on the account to be used. For more information, see Amazon ECS task execution IAM role. Important The security group you select when creating a service with your task definition must have port 80 open for inbound traffic. Add the following inbound rule to your security group. Learn how to create a Linux task for the Fargate launch type 21"} {"global_id": 278, "doc_id": "ecs", "chunk_id": "19", "question_id": 3, "question": "What must the security group have open for inbound traffic?", "answer_span": "the security group you select when creating a service with your task definition must have port 80 open for inbound traffic.", "chunk": "information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before you begin, complete the steps in Set up to use Amazon ECS and ensure that your IAM user has the permissions specified in the AdministratorAccess IAM policy example. The console attempts to automatically create the task execution IAM role, which is required for Fargate tasks. To ensure that the console is able to create this IAM role, one of the following must be true: • Your user has administrator access. For more information, see Set up to use Amazon ECS. • Your user has the IAM permissions to create a service role. For more information, see Creating a Role to Delegate Permissions to an AWS Service. • A user with administrator access has manually created the task execution role so that it is available on the account to be used. For more information, see Amazon ECS task execution IAM role. Important The security group you select when creating a service with your task definition must have port 80 open for inbound traffic. Add the following inbound rule to your security group. Learn how to create a Linux task for the Fargate launch type 21"} {"global_id": 279, "doc_id": "ecs", "chunk_id": "19", "question_id": 4, "question": "Where can you find more information about setting up Amazon ECS?", "answer_span": "For more information, see Set up to use Amazon ECS.", "chunk": "information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before you begin, complete the steps in Set up to use Amazon ECS and ensure that your IAM user has the permissions specified in the AdministratorAccess IAM policy example. The console attempts to automatically create the task execution IAM role, which is required for Fargate tasks. To ensure that the console is able to create this IAM role, one of the following must be true: • Your user has administrator access. For more information, see Set up to use Amazon ECS.
• Your user has the IAM permissions to create a service role. For more information, see Creating a Role to Delegate Permissions to an AWS Service. • A user with administrator access has manually created the task execution role so that it is available on the account to be used. For more information, see Amazon ECS task execution IAM role. Important The security group you select when creating a service with your task definition must have port 80 open for inbound traffic. Add the following inbound rule to your security group. Learn how to create a Linux task for the Fargate launch type 21"} +{"global_id": 280, "doc_id": "ec2", "chunk_id": "0", "question_id": 1, "question": "What does Amazon EC2 provide?", "answer_span": "Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud.", "chunk": "Amazon Elastic Compute Cloud User Guide What is Amazon EC2? Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. You can add capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website traffic. When usage decreases, you can reduce capacity (scale down) again. An EC2 instance is a virtual server in the AWS Cloud. When you launch an EC2 instance, the instance type that you specify determines the hardware available to your instance. Each instance type offers a different balance of compute, memory, network, and storage resources. For more information, see the Amazon EC2 Instance Types Guide. Features of Amazon EC2 Amazon EC2 provides the following high-level features: Instances Virtual servers. Amazon Machine Images (AMIs) Preconfigured templates for your instances that package the components you need for your server (including the operating system and additional software). Instance types Various configurations of CPU, memory, storage, networking capacity, and graphics hardware for your instances. Features 1 Amazon Elastic Compute Cloud User Guide Amazon EBS volumes Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS). Instance store volumes Storage volumes for temporary data that is deleted when you stop, hibernate, or terminate your instance. Key pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to"} +{"global_id": 281, "doc_id": "ec2", "chunk_id": "0", "question_id": 2, "question": "What is an EC2 instance?", "answer_span": "An EC2 instance is a virtual server in the AWS Cloud.", "chunk": "Amazon Elastic Compute Cloud User Guide What is Amazon EC2? Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. 
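The chunk above describes launching as many or as few virtual servers as you need; as one concrete illustration, a single instance can be launched from the CLI. A hedged sketch; the AMI ID, key pair, and security group below are placeholders, not values from the source:

aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro --count 1 --key-name my-key --security-group-ids sg-0123456789abcdef0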
You can add capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website traffic. When usage decreases, you can reduce capacity (scale down) again. An EC2 instance is a virtual server in the AWS Cloud. When you launch an EC2 instance, the instance type that you specify determines the hardware available to your instance. Each instance type offers a different balance of compute, memory, network, and storage resources. For more information, see the Amazon EC2 Instance Types Guide. Features of Amazon EC2 Amazon EC2 provides the following high-level features: Instances Virtual servers. Amazon Machine Images (AMIs) Preconfigured templates for your instances that package the components you need for your server (including the operating system and additional software). Instance types Various configurations of CPU, memory, storage, networking capacity, and graphics hardware for your instances. Features 1 Amazon Elastic Compute Cloud User Guide Amazon EBS volumes Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS). Instance store volumes Storage volumes for temporary data that is deleted when you stop, hibernate, or terminate your instance. Key pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to"} +{"global_id": 282, "doc_id": "ec2", "chunk_id": "0", "question_id": 3, "question": "What are Amazon Machine Images (AMIs)?", "answer_span": "Amazon Machine Images (AMIs) Preconfigured templates for your instances that package the components you need for your server (including the operating system and additional software).", "chunk": "Amazon Elastic Compute Cloud User Guide What is Amazon EC2? Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. You can add capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website traffic. When usage decreases, you can reduce capacity (scale down) again. An EC2 instance is a virtual server in the AWS Cloud. When you launch an EC2 instance, the instance type that you specify determines the hardware available to your instance. Each instance type offers a different balance of compute, memory, network, and storage resources. For more information, see the Amazon EC2 Instance Types Guide. Features of Amazon EC2 Amazon EC2 provides the following high-level features: Instances Virtual servers. Amazon Machine Images (AMIs) Preconfigured templates for your instances that package the components you need for your server (including the operating system and additional software). Instance types Various configurations of CPU, memory, storage, networking capacity, and graphics hardware for your instances. Features 1 Amazon Elastic Compute Cloud User Guide Amazon EBS volumes Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS). Instance store volumes Storage volumes for temporary data that is deleted when you stop, hibernate, or terminate your instance. 
Key pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to"} +{"global_id": 283, "doc_id": "ec2", "chunk_id": "0", "question_id": 4, "question": "What is the purpose of security groups in Amazon EC2?", "answer_span": "Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to.", "chunk": "Amazon Elastic Compute Cloud User Guide What is Amazon EC2? Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. You can add capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website traffic. When usage decreases, you can reduce capacity (scale down) again. An EC2 instance is a virtual server in the AWS Cloud. When you launch an EC2 instance, the instance type that you specify determines the hardware available to your instance. Each instance type offers a different balance of compute, memory, network, and storage resources. For more information, see the Amazon EC2 Instance Types Guide. Features of Amazon EC2 Amazon EC2 provides the following high-level features: Instances Virtual servers. Amazon Machine Images (AMIs) Preconfigured templates for your instances that package the components you need for your server (including the operating system and additional software). Instance types Various configurations of CPU, memory, storage, networking capacity, and graphics hardware for your instances. Features 1 Amazon Elastic Compute Cloud User Guide Amazon EBS volumes Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS). Instance store volumes Storage volumes for temporary data that is deleted when you stop, hibernate, or terminate your instance. Key pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to"} +{"global_id": 284, "doc_id": "ec2", "chunk_id": "1", "question_id": 1, "question": "What does AWS store for secure login information?", "answer_span": "AWS stores the public key and you store the private key in a secure place.", "chunk": "pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to which your instances can connect. Amazon EC2 supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. 
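Both concepts described above, key pairs and security group rules, map to single CLI calls. A hedged sketch; my-key and the group ID are illustrative, and the port 80 rule mirrors the inbound HTTP requirement mentioned earlier in the ECS chunks:

# Create a key pair and keep the private half locally
aws ec2 create-key-pair --key-name my-key --query 'KeyMaterial' --output text > my-key.pem
chmod 400 my-key.pem
# Open port 80 for inbound traffic on a security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0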
Related services Services to use with Amazon EC2 You can use other AWS services with the instances that you deploy using Amazon EC2. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. AWS Backup Automate backing up your Amazon EC2 instances and the Amazon EBS volumes attached to them. Amazon CloudWatch Monitor your instances and Amazon EBS volumes. Related services 2 Amazon Elastic Compute Cloud User Guide Elastic Load Balancing Automatically distribute incoming application traffic across multiple instances. Amazon GuardDuty Detect potentially unauthorized or malicious use of your EC2 instances. EC2 Image Builder Automate the creation, management, and deployment of customized, secure, and up-to-date server images. AWS Launch Wizard Size, configure, and deploy AWS resources for third-party applications without having to manually identify and provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform"} +{"global_id": 285, "doc_id": "ec2", "chunk_id": "1", "question_id": 2, "question": "What is a security group in the context of Amazon EC2?", "answer_span": "A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to which your instances can connect.", "chunk": "pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to which your instances can connect. Amazon EC2 supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. Related services Services to use with Amazon EC2 You can use other AWS services with the instances that you deploy using Amazon EC2. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. AWS Backup Automate backing up your Amazon EC2 instances and the Amazon EBS volumes attached to them. Amazon CloudWatch Monitor your instances and Amazon EBS volumes. Related services 2 Amazon Elastic Compute Cloud User Guide Elastic Load Balancing Automatically distribute incoming application traffic across multiple instances. Amazon GuardDuty Detect potentially unauthorized or malicious use of your EC2 instances. EC2 Image Builder Automate the creation, management, and deployment of customized, secure, and up-to-date server images. AWS Launch Wizard Size, configure, and deploy AWS resources for third-party applications without having to manually identify and provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. 
Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform"} +{"global_id": 286, "doc_id": "ec2", "chunk_id": "1", "question_id": 3, "question": "What standard has Amazon EC2 been validated as compliant with?", "answer_span": "Amazon EC2 has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS).", "chunk": "pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to which your instances can connect. Amazon EC2 supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. Related services Services to use with Amazon EC2 You can use other AWS services with the instances that you deploy using Amazon EC2. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. AWS Backup Automate backing up your Amazon EC2 instances and the Amazon EBS volumes attached to them. Amazon CloudWatch Monitor your instances and Amazon EBS volumes. Related services 2 Amazon Elastic Compute Cloud User Guide Elastic Load Balancing Automatically distribute incoming application traffic across multiple instances. Amazon GuardDuty Detect potentially unauthorized or malicious use of your EC2 instances. EC2 Image Builder Automate the creation, management, and deployment of customized, secure, and up-to-date server images. AWS Launch Wizard Size, configure, and deploy AWS resources for third-party applications without having to manually identify and provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform"} +{"global_id": 287, "doc_id": "ec2", "chunk_id": "1", "question_id": 4, "question": "What service helps ensure the correct number of Amazon EC2 instances are available?", "answer_span": "Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application.", "chunk": "pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to which your instances can connect. Amazon EC2 supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. Related services Services to use with Amazon EC2 You can use other AWS services with the instances that you deploy using Amazon EC2. 
Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. AWS Backup Automate backing up your Amazon EC2 instances and the Amazon EBS volumes attached to them. Amazon CloudWatch Monitor your instances and Amazon EBS volumes. Related services 2 Amazon Elastic Compute Cloud User Guide Elastic Load Balancing Automatically distribute incoming application traffic across multiple instances. Amazon GuardDuty Detect potentially unauthorized or malicious use of your EC2 instances. EC2 Image Builder Automate the creation, management, and deployment of customized, secure, and up-to-date server images. AWS Launch Wizard Size, configure, and deploy AWS resources for third-party applications without having to manually identify and provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform"} +{"global_id": 288, "doc_id": "ec2", "chunk_id": "2", "question_id": 1, "question": "What is AWS Systems Manager used for?", "answer_span": "AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution.", "chunk": "provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform that provides the resources that you need to deploy your project quickly, for a low, predictable monthly price. To compare Amazon EC2 and Lightsail, see Amazon Lightsail or Amazon EC2. Amazon Elastic Container Service (Amazon ECS) Deploy, manage, and scale containerized applications on a cluster of EC2 instances. For more information, see Choosing an AWS container service. Amazon Elastic Kubernetes Service (Amazon EKS) Run your Kubernetes applications on AWS. For more information, see Choosing an AWS container service. Related services 3 Amazon Elastic Compute Cloud User Guide Access Amazon EC2 You can create and manage your Amazon EC2 instances using the following interfaces: Amazon EC2 console A simple web interface to create and manage Amazon EC2 instances and resources. If you've signed up for an AWS account, you can access the Amazon EC2 console by signing into the AWS Management Console and selecting EC2 from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon EC2 commands in the AWS CLI Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. 
You can reuse your CloudFormation templates to provision the same resources multiple times, whether"} +{"global_id": 289, "doc_id": "ec2", "chunk_id": "2", "question_id": 2, "question": "What is Amazon Lightsail used for?", "answer_span": "Build websites or web applications using Amazon Lightsail, a cloud platform that provides the resources that you need to deploy your project quickly, for a low, predictable monthly price.", "chunk": "provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform that provides the resources that you need to deploy your project quickly, for a low, predictable monthly price. To compare Amazon EC2 and Lightsail, see Amazon Lightsail or Amazon EC2. Amazon Elastic Container Service (Amazon ECS) Deploy, manage, and scale containerized applications on a cluster of EC2 instances. For more information, see Choosing an AWS container service. Amazon Elastic Kubernetes Service (Amazon EKS) Run your Kubernetes applications on AWS. For more information, see Choosing an AWS container service. Related services 3 Amazon Elastic Compute Cloud User Guide Access Amazon EC2 You can create and manage your Amazon EC2 instances using the following interfaces: Amazon EC2 console A simple web interface to create and manage Amazon EC2 instances and resources. If you've signed up for an AWS account, you can access the Amazon EC2 console by signing into the AWS Management Console and selecting EC2 from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon EC2 commands in the AWS CLI Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. You can reuse your CloudFormation templates to provision the same resources multiple times, whether"} +{"global_id": 290, "doc_id": "ec2", "chunk_id": "2", "question_id": 3, "question": "How can you access Amazon EC2?", "answer_span": "You can create and manage your Amazon EC2 instances using the following interfaces: Amazon EC2 console A simple web interface to create and manage Amazon EC2 instances and resources.", "chunk": "provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform that provides the resources that you need to deploy your project quickly, for a low, predictable monthly price. To compare Amazon EC2 and Lightsail, see Amazon Lightsail or Amazon EC2. Amazon Elastic Container Service (Amazon ECS) Deploy, manage, and scale containerized applications on a cluster of EC2 instances. For more information, see Choosing an AWS container service. 
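Where the chunks mention deploying containerized applications on a cluster of EC2 instances, the cluster itself is one call away. A minimal sketch; my-cluster is an illustrative name:

aws ecs create-cluster --cluster-name my-cluster
aws ecs list-clusters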
Amazon Elastic Kubernetes Service (Amazon EKS) Run your Kubernetes applications on AWS. For more information, see Choosing an AWS container service. Related services 3 Amazon Elastic Compute Cloud User Guide Access Amazon EC2 You can create and manage your Amazon EC2 instances using the following interfaces: Amazon EC2 console A simple web interface to create and manage Amazon EC2 instances and resources. If you've signed up for an AWS account, you can access the Amazon EC2 console by signing into the AWS Management Console and selecting EC2 from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon EC2 commands in the AWS CLI Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. You can reuse your CloudFormation templates to provision the same resources multiple times, whether"} +{"global_id": 291, "doc_id": "ec2", "chunk_id": "2", "question_id": 4, "question": "What format can AWS CloudFormation templates be in?", "answer_span": "You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you.", "chunk": "provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform that provides the resources that you need to deploy your project quickly, for a low, predictable monthly price. To compare Amazon EC2 and Lightsail, see Amazon Lightsail or Amazon EC2. Amazon Elastic Container Service (Amazon ECS) Deploy, manage, and scale containerized applications on a cluster of EC2 instances. For more information, see Choosing an AWS container service. Amazon Elastic Kubernetes Service (Amazon EKS) Run your Kubernetes applications on AWS. For more information, see Choosing an AWS container service. Related services 3 Amazon Elastic Compute Cloud User Guide Access Amazon EC2 You can create and manage your Amazon EC2 instances using the following interfaces: Amazon EC2 console A simple web interface to create and manage Amazon EC2 instances and resources. If you've signed up for an AWS account, you can access the Amazon EC2 console by signing into the AWS Management Console and selecting EC2 from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon EC2 commands in the AWS CLI Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. 
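The template-driven flow just described can be sketched end to end. A hedged example; the template body, stack name, and instance values are illustrative assumptions, not from the source:

# Write a minimal template describing one EC2 instance
cat > template.yaml <<'EOF'
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0abcdef1234567890
      InstanceType: t2.micro
EOF
# Provision the stack; the same template can be reused for more stacks or Regions
aws cloudformation deploy --template-file template.yaml --stack-name my-ec2-stack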
You can reuse your CloudFormation templates to provision the same resources multiple times, whether"} +{"global_id": 292, "doc_id": "ec2", "chunk_id": "3", "question_id": 1, "question": "What format can the template for AWS CloudFormation be in?", "answer_span": "You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you.", "chunk": "Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. You can reuse your CloudFormation templates to provision the same resources multiple times, whether in the same Region and account or in multiple Regions and accounts. For more information about supported resource types and properties for Amazon EC2, see EC2 resource type reference in the AWS CloudFormation User Guide. AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, AWS provides libraries, sample code, tutorials, and other resources for software developers. These libraries provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses, making it easier for you to get started. For more information, see Tools to Build on AWS. AWS Tools for PowerShell A set of PowerShell modules that are built on the functionality exposed by the SDK for .NET. The Tools for PowerShell enable you to script operations on your AWS resources from the PowerShell command line. To get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. Access EC2 4 Amazon Elastic Compute Cloud User Guide Query API Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore"} +{"global_id": 293, "doc_id": "ec2", "chunk_id": "3", "question_id": 2, "question": "What do AWS SDKs provide for software developers?", "answer_span": "AWS provides libraries, sample code, tutorials, and other resources for software developers.", "chunk": "Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. You can reuse your CloudFormation templates to provision the same resources multiple times, whether in the same Region and account or in multiple Regions and accounts. For more information about supported resource types and properties for Amazon EC2, see EC2 resource type reference in the AWS CloudFormation User Guide. AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, AWS provides libraries, sample code, tutorials, and other resources for software developers. 
These libraries provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses, making it easier for you to get started. For more information, see Tools to Build on AWS. AWS Tools for PowerShell A set of PowerShell modules that are built on the functionality exposed by the SDK for .NET. The Tools for PowerShell enable you to script operations on your AWS resources from the PowerShell command line. To get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. Access EC2 4 Amazon Elastic Compute Cloud User Guide Query API Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore"} +{"global_id": 294, "doc_id": "ec2", "chunk_id": "3", "question_id": 3, "question": "What can you use the Tools for PowerShell for?", "answer_span": "The Tools for PowerShell enable you to script operations on your AWS resources from the PowerShell command line.", "chunk": "Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. You can reuse your CloudFormation templates to provision the same resources multiple times, whether in the same Region and account or in multiple Regions and accounts. For more information about supported resource types and properties for Amazon EC2, see EC2 resource type reference in the AWS CloudFormation User Guide. AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, AWS provides libraries, sample code, tutorials, and other resources for software developers. These libraries provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses, making it easier for you to get started. For more information, see Tools to Build on AWS. AWS Tools for PowerShell A set of PowerShell modules that are built on the functionality exposed by the SDK for .NET. The Tools for PowerShell enable you to script operations on your AWS resources from the PowerShell command line. To get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. Access EC2 4 Amazon Elastic Compute Cloud User Guide Query API Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore"} +{"global_id": 295, "doc_id": "ec2", "chunk_id": "3", "question_id": 4, "question": "What type of requests does the Amazon EC2 Query API use?", "answer_span": "These requests are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action.", "chunk": "Command Reference. 
AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. You can reuse your CloudFormation templates to provision the same resources multiple times, whether in the same Region and account or in multiple Regions and accounts. For more information about supported resource types and properties for Amazon EC2, see EC2 resource type reference in the AWS CloudFormation User Guide. AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, AWS provides libraries, sample code, tutorials, and other resources for software developers. These libraries provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses, making it easier for you to get started. For more information, see Tools to Build on AWS. AWS Tools for PowerShell A set of PowerShell modules that are built on the functionality exposed by the SDK for .NET. The Tools for PowerShell enable you to script operations on your AWS resources from the PowerShell command line. To get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. Access EC2 4 Amazon Elastic Compute Cloud User Guide Query API Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore"} {"global_id": 296, "doc_id": "ec2", "chunk_id": "4", "question_id": 1, "question": "What is the Free Tier option for Amazon EC2?", "answer_span": "You can get started with Amazon EC2 for free.", "chunk": "or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore the Free Tier options, see AWS Free Tier. On-Demand Instances Pay for the instances that you use by the second, with a minimum of 60 seconds, with no long-term commitments or upfront payments. Savings Plans You can reduce your Amazon EC2 costs by making a commitment to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years. Reserved Instances You can reduce your Amazon EC2 costs by making a commitment to a specific instance configuration, including instance type and Region, for a term of 1 or 3 years. Spot Instances Request unused EC2 instances, which can reduce your Amazon EC2 costs significantly. Dedicated Hosts Reduce costs by using a physical EC2 server that is fully dedicated for your use, either On-Demand or as part of a Savings Plan. You can use your existing server-bound software licenses and get help meeting compliance requirements. On-Demand Capacity Reservations Reserve compute capacity for your EC2 instances in a specific Availability Zone for any duration of time. Pricing 5 Amazon Elastic Compute Cloud User Guide Per-second billing Removes the cost of unused minutes and seconds from your bill.
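To make the per-second billing point above concrete: at an illustrative On-Demand rate of $0.10 per hour, an instance stopped after 45 minutes (2,700 seconds, above the 60-second minimum) bills 0.10 * 2700 / 3600 = $0.075 rather than a full hour. The same arithmetic in shell:

echo 'scale=4; 0.10 * 2700 / 3600' | bc
# .0750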
For a complete list of charges and prices for Amazon EC2 and more information about the purchase models, see Amazon EC2 pricing. Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed"} {"global_id": 297, "doc_id": "ec2", "chunk_id": "4", "question_id": 2, "question": "How does On-Demand Instances pricing work?", "answer_span": "Pay for the instances that you use by the second, with a minimum of 60 seconds, with no long-term commitments or upfront payments.", "chunk": "or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore the Free Tier options, see AWS Free Tier. On-Demand Instances Pay for the instances that you use by the second, with a minimum of 60 seconds, with no long-term commitments or upfront payments. Savings Plans You can reduce your Amazon EC2 costs by making a commitment to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years. Reserved Instances You can reduce your Amazon EC2 costs by making a commitment to a specific instance configuration, including instance type and Region, for a term of 1 or 3 years. Spot Instances Request unused EC2 instances, which can reduce your Amazon EC2 costs significantly. Dedicated Hosts Reduce costs by using a physical EC2 server that is fully dedicated for your use, either On-Demand or as part of a Savings Plan. You can use your existing server-bound software licenses and get help meeting compliance requirements. On-Demand Capacity Reservations Reserve compute capacity for your EC2 instances in a specific Availability Zone for any duration of time. Pricing 5 Amazon Elastic Compute Cloud User Guide Per-second billing Removes the cost of unused minutes and seconds from your bill. For a complete list of charges and prices for Amazon EC2 and more information about the purchase models, see Amazon EC2 pricing. Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed"} {"global_id": 298, "doc_id": "ec2", "chunk_id": "4", "question_id": 3, "question": "What is the benefit of Savings Plans?", "answer_span": "You can reduce your Amazon EC2 costs by making a commitment to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years.", "chunk": "or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore the Free Tier options, see AWS Free Tier. On-Demand Instances Pay for the instances that you use by the second, with a minimum of 60 seconds, with no long-term commitments or upfront payments. Savings Plans You can reduce your Amazon EC2 costs by making a commitment to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years.
Reserved Instances You can reduce your Amazon EC2 costs by making a commitment to a specific instance configuration, including instance type and Region, for a term of 1 or 3 years. Spot Instances Request unused EC2 instances, which can reduce your Amazon EC2 costs significantly. Dedicated Hosts Reduce costs by using a physical EC2 server that is fully dedicated for your use, either On-Demand or as part of a Savings Plan. You can use your existing server-bound software licenses and get help meeting compliance requirements. On-Demand Capacity Reservations Reserve compute capacity for your EC2 instances in a specific Availability Zone for any duration of time. Pricing 5 Amazon Elastic Compute Cloud User Guide Per-second billing Removes the cost of unused minutes and seconds from your bill. For a complete list of charges and prices for Amazon EC2 and more information about the purchase models, see Amazon EC2 pricing. Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed"} {"global_id": 299, "doc_id": "ec2", "chunk_id": "4", "question_id": 4, "question": "What does per-second billing do?", "answer_span": "Removes the cost of unused minutes and seconds from your bill.", "chunk": "or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore the Free Tier options, see AWS Free Tier. On-Demand Instances Pay for the instances that you use by the second, with a minimum of 60 seconds, with no long-term commitments or upfront payments. Savings Plans You can reduce your Amazon EC2 costs by making a commitment to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years. Reserved Instances You can reduce your Amazon EC2 costs by making a commitment to a specific instance configuration, including instance type and Region, for a term of 1 or 3 years. Spot Instances Request unused EC2 instances, which can reduce your Amazon EC2 costs significantly. Dedicated Hosts Reduce costs by using a physical EC2 server that is fully dedicated for your use, either On-Demand or as part of a Savings Plan. You can use your existing server-bound software licenses and get help meeting compliance requirements. On-Demand Capacity Reservations Reserve compute capacity for your EC2 instances in a specific Availability Zone for any duration of time. Pricing 5 Amazon Elastic Compute Cloud User Guide Per-second billing Removes the cost of unused minutes and seconds from your bill. For a complete list of charges and prices for Amazon EC2 and more information about the purchase models, see Amazon EC2 pricing. Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed"} {"global_id": 300, "doc_id": "ec2", "chunk_id": "5", "question_id": 1, "question": "What tool can be used to create estimates for AWS use cases?", "answer_span": "To create estimates for your AWS use cases, use the AWS Pricing Calculator.", "chunk": "EC2 and more information about the purchase models, see Amazon EC2 pricing.
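Spot Instance pricing mentioned in the chunks above can be inspected before requesting capacity. A hedged sketch; the instance type and start time are placeholders:

aws ec2 describe-spot-price-history --instance-types t3.micro --product-descriptions "Linux/UNIX" --start-time 2025-01-01T00:00:00 --max-items 5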
Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed on AWS, use the AWS Modernization Calculator for Microsoft Workloads. To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console. Your bill contains links to usage reports that provide details about your bill. To learn more about AWS account billing, see AWS Billing and Cost Management User Guide. If you have questions concerning AWS billing, accounts, and events, contact AWS Support. To calculate the cost of a sample provisioned environment, see Cloud Economics Center. When calculating the cost of a provisioned environment, remember to include incidental costs such as snapshot storage for EBS volumes. You can optimize the cost, security, and performance of your AWS environment using AWS Trusted Advisor. You can use AWS Cost Explorer to analyze the cost and usage of your EC2 instances. You can view data up to the last 13 months, and forecast how much you are likely to spend for the next 12 months. For more information, see Analyzing your costs and usage with AWS Cost Explorer in the AWS Cost Management User Guide. Resources • Amazon EC2 features • AWS re:Post • AWS Skill Builder • AWS Support Estimates, billing, and cost optimization 6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect"} +{"global_id": 301, "doc_id": "ec2", "chunk_id": "5", "question_id": 2, "question": "Where can you see your AWS bill?", "answer_span": "To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console.", "chunk": "EC2 and more information about the purchase models, see Amazon EC2 pricing. Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed on AWS, use the AWS Modernization Calculator for Microsoft Workloads. To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console. Your bill contains links to usage reports that provide details about your bill. To learn more about AWS account billing, see AWS Billing and Cost Management User Guide. If you have questions concerning AWS billing, accounts, and events, contact AWS Support. To calculate the cost of a sample provisioned environment, see Cloud Economics Center. When calculating the cost of a provisioned environment, remember to include incidental costs such as snapshot storage for EBS volumes. You can optimize the cost, security, and performance of your AWS environment using AWS Trusted Advisor. You can use AWS Cost Explorer to analyze the cost and usage of your EC2 instances. You can view data up to the last 13 months, and forecast how much you are likely to spend for the next 12 months. For more information, see Analyzing your costs and usage with AWS Cost Explorer in the AWS Cost Management User Guide. 
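The Cost Explorer analysis described above is also scriptable. A hedged sketch; the dates are placeholders, and Cost Explorer must be enabled on the account before the API returns data:

aws ce get-cost-and-usage --time-period Start=2025-01-01,End=2025-02-01 --granularity MONTHLY --metrics UnblendedCost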
Resources • Amazon EC2 features • AWS re:Post • AWS Skill Builder • AWS Support Estimates, billing, and cost optimization 6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect"} +{"global_id": 302, "doc_id": "ec2", "chunk_id": "5", "question_id": 3, "question": "What should you include when calculating the cost of a provisioned environment?", "answer_span": "When calculating the cost of a provisioned environment, remember to include incidental costs such as snapshot storage for EBS volumes.", "chunk": "EC2 and more information about the purchase models, see Amazon EC2 pricing. Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed on AWS, use the AWS Modernization Calculator for Microsoft Workloads. To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console. Your bill contains links to usage reports that provide details about your bill. To learn more about AWS account billing, see AWS Billing and Cost Management User Guide. If you have questions concerning AWS billing, accounts, and events, contact AWS Support. To calculate the cost of a sample provisioned environment, see Cloud Economics Center. When calculating the cost of a provisioned environment, remember to include incidental costs such as snapshot storage for EBS volumes. You can optimize the cost, security, and performance of your AWS environment using AWS Trusted Advisor. You can use AWS Cost Explorer to analyze the cost and usage of your EC2 instances. You can view data up to the last 13 months, and forecast how much you are likely to spend for the next 12 months. For more information, see Analyzing your costs and usage with AWS Cost Explorer in the AWS Cost Management User Guide. Resources • Amazon EC2 features • AWS re:Post • AWS Skill Builder • AWS Support Estimates, billing, and cost optimization 6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect"} +{"global_id": 303, "doc_id": "ec2", "chunk_id": "5", "question_id": 4, "question": "How far back can you view data using AWS Cost Explorer?", "answer_span": "You can view data up to the last 13 months, and forecast how much you are likely to spend for the next 12 months.", "chunk": "EC2 and more information about the purchase models, see Amazon EC2 pricing. Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed on AWS, use the AWS Modernization Calculator for Microsoft Workloads. To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console. Your bill contains links to usage reports that provide details about your bill. To learn more about AWS account billing, see AWS Billing and Cost Management User Guide. 
If you have questions concerning AWS billing, accounts, and events, contact AWS Support. To calculate the cost of a sample provisioned environment, see Cloud Economics Center. When calculating the cost of a provisioned environment, remember to include incidental costs such as snapshot storage for EBS volumes. You can optimize the cost, security, and performance of your AWS environment using AWS Trusted Advisor. You can use AWS Cost Explorer to analyze the cost and usage of your EC2 instances. You can view data up to the last 13 months, and forecast how much you are likely to spend for the next 12 months. For more information, see Analyzing your costs and usage with AWS Cost Explorer in the AWS Cost Management User Guide. Resources • Amazon EC2 features • AWS re:Post • AWS Skill Builder • AWS Support Estimates, billing, and cost optimization 6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect"} +{"global_id": 304, "doc_id": "ec2", "chunk_id": "6", "question_id": 1, "question": "What is Amazon EC2?", "answer_span": "You'll learn how to launch and connect to an EC2 instance.", "chunk": "6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect to an EC2 instance. An instance is a virtual server in the AWS Cloud. With Amazon EC2, you can set up and configure the operating system and applications that run on your instance. Overview The following diagram shows the key components that you'll use in this tutorial: • An image – A template that contains the software to run on your instance, such as the operating system. • A key pair – A set of security credentials that you use to prove your identity when connecting to your instance. The public key is on your instance and the private key is on your computer. • A network – A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. To help you get started quickly, your account comes with a default VPC in each AWS Region, and each default VPC has a default subnet in each Availability Zone. • A security group – Acts as a virtual firewall to control inbound and outbound traffic. • An EBS volume – We require a root volume for the image. You can optionally add data volumes. 8 Amazon Elastic Compute Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything"} +{"global_id": 305, "doc_id": "ec2", "chunk_id": "6", "question_id": 2, "question": "What is an instance in the context of Amazon EC2?", "answer_span": "An instance is a virtual server in the AWS Cloud.", "chunk": "6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect to an EC2 instance. 
An instance is a virtual server in the AWS Cloud. With Amazon EC2, you can set up and configure the operating system and applications that run on your instance. Overview The following diagram shows the key components that you'll use in this tutorial: • An image – A template that contains the software to run on your instance, such as the operating system. • A key pair – A set of security credentials that you use to prove your identity when connecting to your instance. The public key is on your instance and the private key is on your computer. • A network – A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. To help you get started quickly, your account comes with a default VPC in each AWS Region, and each default VPC has a default subnet in each Availability Zone. • A security group – Acts as a virtual firewall to control inbound and outbound traffic. • An EBS volume – We require a root volume for the image. You can optionally add data volumes. 8 Amazon Elastic Compute Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything"} +{"global_id": 306, "doc_id": "ec2", "chunk_id": "6", "question_id": 3, "question": "What does a key pair consist of?", "answer_span": "A key pair – A set of security credentials that you use to prove your identity when connecting to your instance.", "chunk": "6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect to an EC2 instance. An instance is a virtual server in the AWS Cloud. With Amazon EC2, you can set up and configure the operating system and applications that run on your instance. Overview The following diagram shows the key components that you'll use in this tutorial: • An image – A template that contains the software to run on your instance, such as the operating system. • A key pair – A set of security credentials that you use to prove your identity when connecting to your instance. The public key is on your instance and the private key is on your computer. • A network – A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. To help you get started quickly, your account comes with a default VPC in each AWS Region, and each default VPC has a default subnet in each Availability Zone. • A security group – Acts as a virtual firewall to control inbound and outbound traffic. • An EBS volume – We require a root volume for the image. You can optionally add data volumes. 8 Amazon Elastic Compute Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. 
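Because the tutorial relies on the default VPC and its default subnets described above, a short boto3 sketch can show how to locate them programmatically. This is an illustrative aside, not part of the original tutorial text.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")
# Each Region comes with one default VPC; find it by its isDefault flag.
vpcs = ec2.describe_vpcs(Filters=[{"Name": "isDefault", "Values": ["true"]}])
vpc_id = vpcs["Vpcs"][0]["VpcId"]
# Each Availability Zone in the default VPC has a default subnet.
subnets = ec2.describe_subnets(
    Filters=[{"Name": "vpc-id", "Values": [vpc_id]},
             {"Name": "default-for-az", "Values": ["true"]}])
for subnet in subnets["Subnets"]:
    print(subnet["AvailabilityZone"], subnet["SubnetId"])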
If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything"} +{"global_id": 307, "doc_id": "ec2", "chunk_id": "6", "question_id": 4, "question": "What is the cost to get started with Amazon EC2?", "answer_span": "When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier.", "chunk": "6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect to an EC2 instance. An instance is a virtual server in the AWS Cloud. With Amazon EC2, you can set up and configure the operating system and applications that run on your instance. Overview The following diagram shows the key components that you'll use in this tutorial: • An image – A template that contains the software to run on your instance, such as the operating system. • A key pair – A set of security credentials that you use to prove your identity when connecting to your instance. The public key is on your instance and the private key is on your computer. • A network – A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. To help you get started quickly, your account comes with a default VPC in each AWS Region, and each default VPC has a default subnet in each Availability Zone. • A security group – Acts as a virtual firewall to control inbound and outbound traffic. • An EBS volume – We require a root volume for the image. You can optionally add data volumes. 8 Amazon Elastic Compute Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything"} +{"global_id": 308, "doc_id": "ec2", "chunk_id": "7", "question_id": 1, "question": "What is the AWS Free Tier?", "answer_span": "you can get started with Amazon EC2 for free using the AWS Free Tier.", "chunk": "your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. Otherwise, you'll incur the standard Amazon EC2 usage fees from the time that you launch the instance (even if it remains idle) until you terminate it. If you created your AWS account on or after July 15, 2025, it's less than 6 months old, and you haven't used up all your credits, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. For information on how to determine whether you are eligible for the Free Tier, see the section called “Track your Free Tier usage”. Tasks • Step 1: Launch an instance • Step 2: Connect to your instance • Step 3: Clean up your instance 9 Amazon Elastic Compute Cloud User Guide • Next steps Step 1: Launch an instance You can launch an EC2 instance using the AWS Management Console as described in the following procedure. 
This tutorial is intended to help you quickly launch your first instance, so it doesn't cover all possible options. To launch an instance 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4."} +{"global_id": 309, "doc_id": "ec2", "chunk_id": "7", "question_id": 2, "question": "What happens if you exceed the Free Tier benefits for Amazon EC2?", "answer_span": "you'll incur the standard Amazon EC2 usage fees from the time that you launch the instance (even if it remains idle) until you terminate it.", "chunk": "your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. Otherwise, you'll incur the standard Amazon EC2 usage fees from the time that you launch the instance (even if it remains idle) until you terminate it. If you created your AWS account on or after July 15, 2025, it's less than 6 months old, and you haven't used up all your credits, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. For information on how to determine whether you are eligible for the Free Tier, see the section called “Track your Free Tier usage”. Tasks • Step 1: Launch an instance • Step 2: Connect to your instance • Step 3: Clean up your instance 9 Amazon Elastic Compute Cloud User Guide • Next steps Step 1: Launch an instance You can launch an EC2 instance using the AWS Management Console as described in the following procedure. This tutorial is intended to help you quickly launch your first instance, so it doesn't cover all possible options. To launch an instance 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4."} +{"global_id": 310, "doc_id": "ec2", "chunk_id": "7", "question_id": 3, "question": "What should you do to determine your eligibility for the Free Tier?", "answer_span": "For information on how to determine whether you are eligible for the Free Tier, see the section called “Track your Free Tier usage”.", "chunk": "your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. Otherwise, you'll incur the standard Amazon EC2 usage fees from the time that you launch the instance (even if it remains idle) until you terminate it. 
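Before the console walkthrough, it may help to see how the same choices (a current Amazon Linux AMI and a Free Tier eligible instance type) can be discovered programmatically. This boto3 sketch is illustrative; the SSM parameter path is the well-known public alias for the latest Amazon Linux 2023 AMI, and the free-tier-eligible filter is applied on the instance-type side.

import boto3

region = "us-east-2"
ssm = boto3.client("ssm", region_name=region)
# Public SSM parameter that resolves to the latest Amazon Linux 2023 AMI (x86_64).
ami_id = ssm.get_parameter(
    Name="/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64"
)["Parameter"]["Value"]

ec2 = boto3.client("ec2", region_name=region)
types = ec2.describe_instance_types(
    Filters=[{"Name": "free-tier-eligible", "Values": ["true"]}])
print(ami_id, [t["InstanceType"] for t in types["InstanceTypes"]])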
If you created your AWS account on or after July 15, 2025, it's less than 6 months old, and you haven't used up all your credits, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. For information on how to determine whether you are eligible for the Free Tier, see the section called “Track your Free Tier usage”. Tasks • Step 1: Launch an instance • Step 2: Connect to your instance • Step 3: Clean up your instance • Next steps Step 1: Launch an instance You can launch an EC2 instance using the AWS Management Console as described in the following procedure. This tutorial is intended to help you quickly launch your first instance, so it doesn't cover all possible options. To launch an instance 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4."} +{"global_id": 312, "doc_id": "ec2", "chunk_id": "8", "question_id": 1, "question": "What is displayed at the top of the screen?", "answer_span": "the current AWS Region — for example, Ohio.", "chunk": "the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3.
From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4. Under Name and tags, for Name, enter a descriptive name for your instance. 5. Under Application and OS Images (Amazon Machine Image), do the following: a. Choose Quick Start, and then choose the operating system (OS) for your instance. For your first Linux instance, we recommend that you choose Amazon Linux. b. From Amazon Machine Image (AMI), select an AMI that is marked Free Tier eligible. 6. Under Instance type, for Instance type, select an instance type that is marked Free Tier eligible. 7. Under Key pair (login), for Key pair name, choose an existing key pair or choose Create new key pair to create your first key pair. Warning If you choose Proceed without a key pair (Not recommended), you won't be able to connect to your instance using the methods described in this tutorial. 8. Under Network settings, notice that we selected your default VPC, selected the option to use the default subnet in an Availability Zone that we choose for you, and configured a security group with a rule that allows connections to your instance from anywhere (0.0.0.0/0). Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize access only from the appropriate individual IP address or range"} +{"global_id": 313, "doc_id": "ec2", "chunk_id": "8", "question_id": 2, "question": "What should you enter for the instance name?", "answer_span": "Under Name and tags, for Name, enter a descriptive name for your instance.", "chunk": "the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4. Under Name and tags, for Name, enter a descriptive name for your instance. 5. Under Application and OS Images (Amazon Machine Image), do the following: a. Choose Quick Start, and then choose the operating system (OS) for your instance. For your first Linux instance, we recommend that you choose Amazon Linux. b. From Amazon Machine Image (AMI), select an AMI that is marked Free Tier eligible. 6. Under Instance type, for Instance type, select an instance type that is marked Free Tier eligible. 7. Under Key pair (login), for Key pair name, choose an existing key pair or choose Create new key pair to create your first key pair. Warning If you choose Proceed without a key pair (Not recommended), you won't be able to connect to your instance using the methods described in this tutorial. 8. Under Network settings, notice that we selected your default VPC, selected the option to use the default subnet in an Availability Zone that we choose for you, and configured a security group with a rule that allows connections to your instance from anywhere (0.0.0.0/0). Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments.
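The warning above is easier to act on with a concrete rule. The following boto3 sketch is illustrative (the security group ID and CIDR are placeholders): it authorizes SSH only from a single address instead of 0.0.0.0/0.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,              # SSH; use 3389 for RDP on Windows instances
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.25/32",  # your own address, not 0.0.0.0/0
                      "Description": "SSH from my workstation"}],
    }],
)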
In production, be sure to authorize access only from the appropriate individual IP address or range"} +{"global_id": 314, "doc_id": "ec2", "chunk_id": "8", "question_id": 3, "question": "What is recommended for your first Linux instance?", "answer_span": "we recommend that you choose Amazon Linux.", "chunk": "the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4. Under Name and tags, for Name, enter a descriptive name for your instance. 5. Under Application and OS Images (Amazon Machine Image), do the following: a. Choose Quick Start, and then choose the operating system (OS) for your instance. For your first Linux instance, we recommend that you choose Amazon Linux. b. From Amazon Machine Image (AMI), select an AMI that is marked Free Tier eligible. 6. Under Instance type, for Instance type, select an instance type that is marked Free Tier eligible. 7. Under Key pair (login), for Key pair name, choose an existing key pair or choose Create new key pair to create your first key pair. Warning If you choose Proceed without a key pair (Not recommended), you won't be able to connect to your instance using the methods described in this tutorial. 8. Under Network settings, notice that we selected your default VPC, selected the option to use the default subnet in an Availability Zone that we choose for you, and configured a security group with a rule that allows connections to your instance from anywhere (0.0.0.0/0). Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize access only from the appropriate individual IP address or range"} +{"global_id": 315, "doc_id": "ec2", "chunk_id": "8", "question_id": 4, "question": "What happens if you choose to proceed without a key pair?", "answer_span": "you won't be able to connect to your instance using the methods described in this tutorial.", "chunk": "the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4. Under Name and tags, for Name, enter a descriptive name for your instance. 5. Under Application and OS Images (Amazon Machine Image), do the following: a. Choose Quick Start, and then choose the operating system (OS) for your instance. For your first Linux instance, we recommend that you choose Amazon Linux. b. From Amazon Machine Image (AMI), select an AMI that is marked Free Tier eligible. 6. Under Instance type, for Instance type, select an instance type that is marked Free Tier eligible. 7. Under Key pair (login), for Key pair name, choose an existing key pair or choose Create new key pair to create your first key pair. Warning If you choose Proceed without a key pair (Not recommended), you won't be able to connect to your instance using the methods described in this tutorial. 8.
Under Network settings, notice that we selected your default VPC, selected the option to use the default subnet in an Availability Zone that we choose for you, and configured a security group with a rule that allows connections to your instance from anywhere (0.0.0.0/0). Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize access only from the appropriate individual IP address or range"} +{"global_id": 316, "doc_id": "ec2", "chunk_id": "9", "question_id": 1, "question": "What is recommended for access in production environments?", "answer_span": "In production, be sure to authorize access only from the appropriate individual IP address or range of addresses.", "chunk": "enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize access only from the appropriate individual IP address or range of addresses. For your first instance, we recommend that you use the default settings. Otherwise, you can update your network settings as follows: 9. • (Optional) To use a specific default subnet, choose Edit and then choose a subnet.
• (Optional) To use a different VPC, choose Edit and then choose an existing VPC. If the VPC isn't configured for public internet access, you won't be able to connect to your instance. • (Optional) To restrict inbound connection traffic to a specific network, choose Custom instead of Anywhere, and enter the CIDR block for your network. • (Optional) To use a different security group, choose Select existing security group and choose an existing security group. If the security group does not have a rule that allows connection traffic from your network, you won't be able to connect to your instance. For a Linux instance, you must allow SSH traffic. For a Windows instance, you must allow RDP traffic. Under Configure storage, notice that we configured a root volume but no data volumes. This is sufficient for test purposes. 10. Review a summary of your instance configuration in the Summary panel, and when you're ready, choose Launch instance. 11. If the launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. Choose the Status"} +{"global_id": 318, "doc_id": "ec2", "chunk_id": "9", "question_id": 3, "question": "What should you do if the VPC isn't configured for public internet access?", "answer_span": "If the VPC isn't configured for public internet access, you won't be able to connect to your instance.", "chunk": "enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize access only from the appropriate individual IP address or range of addresses. For your first instance, we recommend that you use the default settings. Otherwise, you can update your network settings as follows: 9. • (Optional) To use a specific default subnet, choose Edit and then choose a subnet. • (Optional) To use a different VPC, choose Edit and then choose an existing VPC. If the VPC isn't configured for public internet access, you won't be able to connect to your instance. • (Optional) To restrict inbound connection traffic to a specific network, choose Custom instead of Anywhere, and enter the CIDR block for your network. • (Optional) To use a different security group, choose Select existing security group and choose an existing security group. If the security group does not have a rule that allows connection traffic from your network, you won't be able to connect to your instance. For a Linux instance, you must allow SSH traffic. For a Windows instance, you must allow RDP traffic. Under Configure storage, notice that we configured a root volume but no data volumes. This is sufficient for test purposes. 10. Review a summary of your instance configuration in the Summary panel, and when you're ready, choose Launch instance. 11. If the launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. 
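The console launch procedure described in these chunks can be reproduced with the EC2 API. This boto3 sketch is a hedged illustration: the AMI ID and key pair name are placeholders, and the waiter mirrors waiting for the status checks to pass before connecting.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical Free Tier eligible AMI
    InstanceType="t3.micro",
    KeyName="key-pair-name",          # the key pair created at launch time
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{"ResourceType": "instance",
                        "Tags": [{"Key": "Name", "Value": "my-first-instance"}]}],
)
instance_id = response["Instances"][0]["InstanceId"]
# Block until the instance passes both status checks, mirroring the console flow.
ec2.get_waiter("instance_status_ok").wait(InstanceIds=[instance_id])
print(instance_id, "is ready to receive connection requests")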
Choose the Status"} +{"global_id": 320, "doc_id": "ec2", "chunk_id": "10", "question_id": 1, "question": "What should you do after the launch is successful?", "answer_span": "choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch.", "chunk": "launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. Choose the Status and alarms tab. After your instance passes its status checks, it is ready to receive connection requests. Step 2: Connect to your instance The procedure that you use depends on the operating system of the instance. If you can't connect to your instance, see Troubleshoot issues connecting to your Amazon EC2 Linux instance for assistance. Linux instances You can connect to your Linux instance using any SSH client. If you are running Windows on your computer, open a terminal and run the ssh command to verify that you have an SSH client installed. If the command is not found, install OpenSSH for Windows. To connect to your instance using SSH 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4.
On the Connect to instance page, choose the SSH client tab. 5. (Optional) If you created a key pair when you launched the instance and downloaded the private key (.pem file) to a computer running Linux or macOS, run the example chmod command to set the permissions for your private key. 6. Copy the example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window"} +{"global_id": 321, "doc_id": "ec2", "chunk_id": "10", "question_id": 2, "question": "What is the initial state of the instance after launching?", "answer_span": "The initial instance state is pending.", "chunk": "launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. Choose the Status and alarms tab. After your instance passes its status checks, it is ready to receive connection requests. Step 2: Connect to your instance The procedure that you use depends on the operating system of the instance.
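If you prefer scripting the SSH step, the third-party paramiko library can open the same connection that the ssh command does. This is an illustrative sketch; note that AutoAddPolicy skips the host key fingerprint check described in the next step, so it is only appropriate once you trust the host.

import paramiko  # third-party SSH client library: pip install paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # bypasses fingerprint verification
client.connect(
    "ec2-198-51-100-1.us-east-2.compute.amazonaws.com",  # the instance's public DNS name
    username="ec2-user",
    key_filename="key-pair-name.pem",
)
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()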
If you can't connect to your instance, see Troubleshoot issues connecting to your Amazon EC2 Linux instance for assistance. Linux instances You can connect to your Linux instance using any SSH client. If you are running Windows on your computer, open a terminal and run the ssh command to verify that you have an SSH client installed. If the command is not found, install OpenSSH for Windows. To connect to your instance using SSH 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the SSH client tab. 5. (Optional) If you created a key pair when you launched the instance and downloaded the private key (.pem file) to a computer running Linux or macOS, run the example chmod command to set the permissions for your private key. 6. Copy the example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window"} +{"global_id": 323, "doc_id": "ec2", "chunk_id": "10", "question_id": 4, "question": "What command should you run to verify that you have an SSH client installed on Windows?", "answer_span": "run the ssh command to verify that you have an SSH client installed.", "chunk": "launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. Choose the Status and alarms tab. After your instance passes its status checks, it is ready to receive connection requests. Step 1: Launch an instance 11 Amazon Elastic Compute Cloud User Guide Step 2: Connect to your instance The procedure that you use depends on the operating system of the instance. If you can't connect to your instance, see Troubleshoot issues connecting to your Amazon EC2 Linux instance for assistance. Linux instances You can connect to your Linux instance using any SSH client. If you are running Windows on your computer, open a terminal and run the ssh command to verify that you have an SSH client installed. If the command is not found, install OpenSSH for Windows. To connect to your instance using SSH 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the SSH client tab. 5. (Optional) If you created a key pair when you launched the instance and downloaded the private key (.pem file) to a computer running Linux or macOS, run the example chmod command to set the permissions for your private key. 6. Copy the example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. 
In a terminal window"} +{"global_id": 325, "doc_id": "ec2", "chunk_id": "11", "question_id": 2, "question": "What should you do if the private key file is not in the current directory?", "answer_span": "you must specify the fully-qualified path to the key file in this command", "chunk": "example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window on your computer, run the ssh command that you saved in the previous step. If the private key file is not in the current directory, you must specify the fully-qualified path to the key file in this command. The following is an example response: The authenticity of host 'ec2-198-51-100-1.us-east-2.compute.amazonaws.com (198.51.100.1)' can't be established. ECDSA key fingerprint is l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY. Are you sure you want to continue connecting (yes/no)? 8.
(Optional) Verify that the fingerprint in the security alert matches the instance fingerprint contained in the console output when you first start an instance. To get the console output, choose Actions, Monitor and troubleshoot, Get system log. If the fingerprints don't match, someone might be attempting a man-in-the-middle attack. If they match, continue to the next step. 9. Enter yes. The following is an example response: Warning: Permanently added 'ec2-198-51-100-1.us-east-2.compute.amazonaws.com' (ECDSA) to the list of known hosts. Windows instances To connect to a Windows instance using RDP, you must retrieve the initial administrator password and then enter this password when you connect to your instance. It takes a few minutes after instance launch before this password is available. Your account must have permission to call the GetPasswordData action. For more information, see Example policies to control access to the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language"} +{"global_id": 326, "doc_id": "ec2", "chunk_id": "11", "question_id": 3, "question": "What should you verify to ensure security when connecting to your instance?", "answer_span": "Verify that the fingerprint in the security alert matches the instance fingerprint contained in the console output", "chunk": "example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window on your computer, run the ssh command that you saved in the previous step. If the private key file is not in the current directory, you must specify the fully-qualified path to the key file in this command. The following is an example response: The authenticity of host 'ec2-198-51-100-1.us-east-2.compute.amazonaws.com (198.51.100.1)' can't be established. ECDSA key fingerprint is l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY. Are you sure you want to continue connecting (yes/no)? 8. (Optional) Verify that the fingerprint in the security alert matches the instance fingerprint contained in the console output when you first start an instance. To get the console output, choose Actions, Monitor and troubleshoot, Get system log. If the fingerprints don't match, someone might be attempting a man-in-the-middle attack. If they match, continue to the next step. 9. Enter yes. The following is an example response: Warning: Permanently added 'ec2-198-51-100-1.us-east-2.compute.amazonaws.com' (ECDSA) to the list of known hosts. Windows instances To connect to a Windows instance using RDP, you must retrieve the initial administrator password and then enter this password when you connect to your instance. It takes a few minutes after instance launch before this password is available. Your account must have permission to call the GetPasswordData action. For more information, see Example policies to control access to the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI.
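The fingerprint verification described above can also be done programmatically: GetConsoleOutput returns the system log in which the host key fingerprints appear on first boot. A hedged boto3 sketch, with a placeholder instance ID:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")
output = ec2.get_console_output(InstanceId="i-0123456789abcdef0")  # hypothetical ID
# The SSH host key fingerprints are printed in the system log on first boot;
# compare them with the fingerprint shown by your SSH client.
for line in output.get("Output", "").splitlines():
    if "SHA256" in line or "fingerprint" in line.lower():
        print(line)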
To determine the correct username, identify the language"} +{"global_id": 327, "doc_id": "ec2", "chunk_id": "11", "question_id": 4, "question": "What must you do to connect to a Windows instance using RDP?", "answer_span": "you must retrieve the initial administrator password and then enter this password when you connect to your instance", "chunk": "example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window on your computer, run the ssh command that you saved in the previous step. If the private key file is not in the current directory, you must specify the fully-qualified path to the key file in this command. The following is an example response: The authenticity of host 'ec2-198-51-100-1.us-east-2.compute.amazonaws.com (198.51.100.1)' can't be established. ECDSA key fingerprint is l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY. Are you sure you want to continue connecting (yes/no)? 8. (Optional) Verify that the fingerprint in the security alert matches the instance fingerprint contained in the console output when you first start an instance. To get the console output, choose Actions, Monitor and troubleshoot, Get system log. If the fingerprints don't match, someone might be attempting a man-in-the-middle attack. If they match, continue to the next step. 9. Enter yes. The following is an example response: Warning: Permanently added 'ec2-198-51-100-1.us-east-2.compute.amazonaws.com' (ECDSA) to the list of known hosts. Windows instances To connect to a Windows instance using RDP, you must retrieve the initial administrator password and then enter this password when you connect to your instance. It takes a few minutes after instance launch before this password is available. Your account must have permission to call the GetPasswordData action. For more information, see Example policies to control access to the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language"} +{"global_id": 328, "doc_id": "ec2", "chunk_id": "12", "question_id": 1, "question": "What action must the account have permission to call?", "answer_span": "account must have permission to call the GetPasswordData action.", "chunk": "account must have permission to call the GetPasswordData action. For more information, see Example policies to control access to the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language of the OS, and then choose the corresponding username. For example, for an English OS, the username is Administrator, for a French OS it's Administrateur, and for a Portuguese OS it's Administrador. If a language version of the OS does not have a username in the same language, choose the username Administrator (Other). For more information, see Localized Names for Administrator Account in Windows in the Microsoft website. To retrieve the initial administrator password 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3.
Select the instance and then choose Connect. 4. On the Connect to instance page, choose the RDP client tab. 5. For Username, choose the default username for the Administrator account. The username you choose must match the language of the operating system (OS) contained in the AMI that you used to launch your instance. If there is no username in the same language as your OS, choose Administrator (Other). 6. Choose Get password. 7. On the Get Windows password page, do the following: a. Choose Upload private key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password"} +{"global_id": 329, "doc_id": "ec2", "chunk_id": "12", "question_id": 2, "question": "What is the default username for an English OS?", "answer_span": "for an English OS, the username is Administrator.", "chunk": "account must have permission to call the GetPasswordData action. For more information, see Example policies to control access to the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language of the OS, and then choose the corresponding username. For example, for an English OS, the username is Administrator, for a French OS it's Administrateur, and for a Portuguese OS it's Administrador. If a language version of the OS does not have a username in the same language, choose the username Administrator (Other). For more information, see Localized Names for Administrator Account in Windows in the Microsoft website. To retrieve the initial administrator password 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the RDP client tab. 5. For Username, choose the default username for the Administrator account. The username you choose must match the language of the operating system (OS) contained in the AMI that you used to launch your instance. If there is no username in the same language as your OS, choose Administrator (Other). 6. Choose Get password. 7. On the Get Windows password page, do the following: a. Choose Upload private key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password"} +{"global_id": 330, "doc_id": "ec2", "chunk_id": "12", "question_id": 3, "question": "What should you do to retrieve the initial administrator password?", "answer_span": "To retrieve the initial administrator password 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.", "chunk": "account must have permission to call the GetPasswordData action. For more information, see Example policies to control access to the Amazon EC2 API.
The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language of the OS, and then choose the corresponding username. For example, for an English OS, the username is Administrator, for a French OS it's Administrateur, and for a Portuguese OS it's Administrador. If a language version of the OS does not have a username in the same language, choose the username Administrator (Other). For more information, see Localized Names for Administrator Account in Windows in the Microsoft website. To retrieve the initial administrator password 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the RDP client tab. 5. For Username, choose the default username for the Administrator account. The username you choose must match the language of the operating system (OS) contained in the AMI that you Step 2: Connect to your instance 13 Amazon Elastic Compute Cloud User Guide used to launch your instance. If there is no username in the same language as your OS, choose Administrator (Other). 6. Choose Get password. 7. On the Get Windows password page, do the following: a. Choose Upload private key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password"} +{"global_id": 331, "doc_id": "ec2", "chunk_id": "12", "question_id": 4, "question": "What must the username you choose match?", "answer_span": "The username you choose must match the language of the operating system (OS) contained in the AMI that you used to launch your instance.", "chunk": "account must have permission to call the GetPasswordData action. For more information, see Example policies to control access to the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language of the OS, and then choose the corresponding username. For example, for an English OS, the username is Administrator, for a French OS it's Administrateur, and for a Portuguese OS it's Administrador. If a language version of the OS does not have a username in the same language, choose the username Administrator (Other). For more information, see Localized Names for Administrator Account in Windows in the Microsoft website. To retrieve the initial administrator password 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the RDP client tab. 5. For Username, choose the default username for the Administrator account. The username you choose must match the language of the operating system (OS) contained in the AMI that you Step 2: Connect to your instance 13 Amazon Elastic Compute Cloud User Guide used to launch your instance. If there is no username in the same language as your OS, choose Administrator (Other). 6. Choose Get password. 7. On the Get Windows password page, do the following: a.
Choose Upload private key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password"} +{"global_id": 332, "doc_id": "ec2", "chunk_id": "13", "question_id": 1, "question": "What should you do after selecting the file to copy its contents?", "answer_span": "Select the file and choose Open to copy the entire contents of the file to this window.", "chunk": "that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password link shown previously. c. Copy the password and save it in a safe place. This password is required to connect to the instance. The following procedure uses the Remote Desktop Connection client for Windows (MSTSC). If you're using a different RDP client, download the RDP file and then see the documentation for the RDP client for the steps to establish the RDP connection. To connect to a Windows instance using an RDP client 1. On the Connect to instance page, choose Download remote desktop file. When the file download is finished, choose Cancel to return to the Instances page. The RDP file is downloaded to your Downloads folder. 2. Run mstsc.exe to open the RDP client. 3. Expand Show options, choose Open, and select the .rdp file from your Downloads folder. 4. By default, Computer is the public IPv4 DNS name of the instance and User name is the administrator account. To connect to the instance using IPv6 instead, replace the public IPv4 DNS name of the instance with its IPv6 address. Review the default settings and change them as needed. 5. Choose Connect. If you receive a warning that the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect"} +{"global_id": 333, "doc_id": "ec2", "chunk_id": "13", "question_id": 2, "question": "What appears under Password after choosing Decrypt password?", "answer_span": "the default administrator password for the instance appears under Password, replacing the Get password link shown previously.", "chunk": "that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password link shown previously. c. Copy the password and save it in a safe place. This password is required to connect to the instance. The following procedure uses the Remote Desktop Connection client for Windows (MSTSC). If you're using a different RDP client, download the RDP file and then see the documentation for the RDP client for the steps to establish the RDP connection. To connect to a Windows instance using an RDP client 1. On the Connect to instance page, choose Download remote desktop file. 
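The "Download remote desktop file" step above produces a plain-text .rdp file. A minimal sketch of writing an equivalent file by hand, assuming a hypothetical public DNS name; mstsc.exe (or another RDP client) can open the result directly.

# Hypothetical hostname; the settings below are standard .rdp directives.
rdp_settings = [
    "full address:s:ec2-198-51-100-1.us-east-2.compute.amazonaws.com",
    "username:s:Administrator",
    "prompt for credentials:i:1",
]

with open("my-instance.rdp", "w") as f:
    f.write("\n".join(rdp_settings) + "\n")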
When the file download is finished, choose Cancel to return to the Instances page. The RDP file is downloaded to your Downloads folder. 2. Run mstsc.exe to open the RDP client. 3. Expand Show options, choose Open, and select the .rdp file from your Downloads folder. 4. By default, Computer is the public IPv4 DNS name of the instance and User name is the administrator account. To connect to the instance using IPv6 instead, replace the public IPv4 DNS name of the instance with its IPv6 address. Review the default settings and change them as needed. 5. Choose Connect. If you receive a warning that the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect"} +{"global_id": 334, "doc_id": "ec2", "chunk_id": "13", "question_id": 3, "question": "What is required to connect to the instance?", "answer_span": "This password is required to connect to the instance.", "chunk": "that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password link shown previously. c. Copy the password and save it in a safe place. This password is required to connect to the instance. The following procedure uses the Remote Desktop Connection client for Windows (MSTSC). If you're using a different RDP client, download the RDP file and then see the documentation for the RDP client for the steps to establish the RDP connection. To connect to a Windows instance using an RDP client 1. On the Connect to instance page, choose Download remote desktop file. When the file download is finished, choose Cancel to return to the Instances page. The RDP file is downloaded to your Downloads folder. 2. Run mstsc.exe to open the RDP client. 3. Expand Show options, choose Open, and select the .rdp file from your Downloads folder. 4. By default, Computer is the public IPv4 DNS name of the instance and User name is the administrator account. To connect to the instance using IPv6 instead, replace the public IPv4 DNS name of the instance with its IPv6 address. Review the default settings and change them as needed. 5. Choose Connect. If you receive a warning that the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect"} +{"global_id": 335, "doc_id": "ec2", "chunk_id": "13", "question_id": 4, "question": "What should you do if you receive a warning about the publisher of the remote connection?", "answer_span": "If you receive a warning that the publisher of the remote connection is unknown, choose Connect to continue.", "chunk": "that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. 
The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password link shown previously. c. Copy the password and save it in a safe place. This password is required to connect to the instance. The following procedure uses the Remote Desktop Connection client for Windows (MSTSC). If you're using a different RDP client, download the RDP file and then see the documentation for the RDP client for the steps to establish the RDP connection. To connect to a Windows instance using an RDP client 1. On the Connect to instance page, choose Download remote desktop file. When the file download is finished, choose Cancel to return to the Instances page. The RDP file is downloaded to your Downloads folder. 2. Run mstsc.exe to open the RDP client. 3. Expand Show options, choose Open, and select the .rdp file from your Downloads folder. 4. By default, Computer is the public IPv4 DNS name of the instance and User name is the administrator account. To connect to the instance using IPv6 instead, replace the public IPv4 DNS name of the instance with its IPv6 address. Review the default settings and change them as needed. 5. Choose Connect. If you receive a warning that the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect"} +{"global_id": 336, "doc_id": "ec2", "chunk_id": "14", "question_id": 1, "question": "What should you do if you trust the self-signed certificate?", "answer_span": "If you trust the certificate, choose Yes to connect to your instance.", "chunk": "to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect to your instance. Step 2: Connect to your instance 14 Amazon Elastic Compute Cloud • User Guide [Windows] Before you proceed, compare the thumbprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose View certificate and then choose Thumbprint from the Details tab. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. • [Mac OS X] Before you proceed, compare the fingerprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose Show Certificate, expand Details, and choose SHA1 Fingerprints. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. 8. If the RDP connection is successful, the RDP client displays the Windows login screen and then the Windows desktop. If you receive an error message instead, see the section called “Remote Desktop can't connect to the remote computer”. When you are finished with the RDP connection, you can close the RDP client. Step 3: Clean up your instance After you've finished with the instance that you created for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps.
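The Get password / Decrypt password console steps above can also be done programmatically: GetPasswordData returns the initial administrator password base64-encoded and RSA-encrypted with the key pair's public key. A minimal sketch with boto3 and the cryptography package, assuming a hypothetical instance ID and the launch key pair's .pem file.

import base64
import boto3
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding

ec2 = boto3.client("ec2")
resp = ec2.get_password_data(InstanceId="i-0123456789abcdef0")  # hypothetical ID

with open("key-pair-name.pem", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)

# EC2 encrypts the Windows password blob with RSA PKCS#1 v1.5.
password = key.decrypt(
    base64.b64decode(resp["PasswordData"]),
    padding.PKCS1v15(),
).decode()
print(password)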
Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that instance or usage that counts against your Free Tier limits"} +{"global_id": 337, "doc_id": "ec2", "chunk_id": "14", "question_id": 2, "question": "How can you confirm the identity of the remote computer on Windows?", "answer_span": "Before you proceed, compare the thumbprint of the certificate with the value in the system log to confirm the identity of the remote computer.", "chunk": "to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect to your instance. Step 2: Connect to your instance 14 Amazon Elastic Compute Cloud • User Guide [Windows] Before you proceed, compare the thumbprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose View certificate and then choose Thumbprint from the Details tab. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. • [Mac OS X] Before you proceed, compare the fingerprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose Show Certificate, expand Details, and choose SHA1 Fingerprints. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. 8. If the RDP connection is successful, the RDP client displays the Windows login screen and then the Windows desktop. If you receive an error message instead, see the section called “Remote Desktop can't connect to the remote computer”. When you are finished with the RDP connection, you can close the RDP client. Step 3: Clean up your instance After you've finished with the instance that you created for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that instance or usage that counts against your Free Tier limits"} +{"global_id": 338, "doc_id": "ec2", "chunk_id": "14", "question_id": 3, "question": "What happens if the RDP connection is successful?", "answer_span": "If the RDP connection is successful, the RDP client displays the Windows login screen and then the Windows desktop.", "chunk": "to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect to your instance. Step 2: Connect to your instance 14 Amazon Elastic Compute Cloud • User Guide [Windows] Before you proceed, compare the thumbprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose View certificate and then choose Thumbprint from the Details tab. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. 
• [Mac OS X] Before you proceed, compare the fingerprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose Show Certificate, expand Details, and choose SHA1 Fingerprints. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. 8. If the RDP connection is successful, the RDP client displays the Windows login screen and then the Windows desktop. If you receive an error message instead, see the section called “Remote Desktop can't connect to the remote computer”. When you are finished with the RDP connection, you can close the RDP client. Step 3: Clean up your instance After you've finished with the instance that you created for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that instance or usage that counts against your Free Tier limits"} +{"global_id": 339, "doc_id": "ec2", "chunk_id": "14", "question_id": 4, "question": "What should you do after finishing with the instance created for the tutorial?", "answer_span": "After you've finished with the instance that you created for this tutorial, you should clean up by terminating the instance.", "chunk": "to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect to your instance. Step 2: Connect to your instance 14 Amazon Elastic Compute Cloud • User Guide [Windows] Before you proceed, compare the thumbprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose View certificate and then choose Thumbprint from the Details tab. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. • [Mac OS X] Before you proceed, compare the fingerprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose Show Certificate, expand Details, and choose SHA1 Fingerprints. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. 8. If the RDP connection is successful, the RDP client displays the Windows login screen and then the Windows desktop. If you receive an error message instead, see the section called “Remote Desktop can't connect to the remote computer”. When you are finished with the RDP connection, you can close the RDP client. Step 3: Clean up your instance After you've finished with the instance that you created for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. 
You'll stop incurring charges for that instance or usage that counts against your Free Tier limits"} +{"global_id": 340, "doc_id": "ec2", "chunk_id": "15", "question_id": 1, "question": "What happens when you terminate an instance?", "answer_span": "Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it.", "chunk": "instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that instance or usage that counts against your Free Tier limits as soon as the instance status changes to shutting down or terminated. To keep your instance for later, but not incur charges or usage that counts against your Free Tier limits, you can stop the instance now and then start it again later. For more information, see Stop and start Amazon EC2 instances. To terminate your instance 1. In the navigation pane, choose Instances. In the list of instances, select the instance. 2. Choose Instance state, Terminate (delete) instance. Step 3: Clean up your instance 15 Amazon Elastic Compute Cloud 3. User Guide Choose Terminate (delete) when prompted for confirmation. Amazon EC2 shuts down and terminates your instance. After your instance is terminated, it remains visible on the console for a short while, and then the entry is automatically deleted. You cannot remove the terminated instance from the console display yourself. Next steps After you start your instance, you might want to explore the following next steps: • Explore the Amazon EC2 core concepts with the introductory tutorials. For more information, see Tutorials for launching EC2 instances. • Learn how to track your Amazon EC2 Free Tier usage using the console. For more information, see the section called “Track your Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see"} +{"global_id": 341, "doc_id": "ec2", "chunk_id": "15", "question_id": 2, "question": "How can you avoid incurring charges while keeping your instance?", "answer_span": "To keep your instance for later, but not incur charges or usage that counts against your Free Tier limits, you can stop the instance now and then start it again later.", "chunk": "instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that instance or usage that counts against your Free Tier limits as soon as the instance status changes to shutting down or terminated. To keep your instance for later, but not incur charges or usage that counts against your Free Tier limits, you can stop the instance now and then start it again later. For more information, see Stop and start Amazon EC2 instances. To terminate your instance 1. In the navigation pane, choose Instances. In the list of instances, select the instance. 2. Choose Instance state, Terminate (delete) instance. Step 3: Clean up your instance 15 Amazon Elastic Compute Cloud 3. User Guide Choose Terminate (delete) when prompted for confirmation. Amazon EC2 shuts down and terminates your instance. 
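The console termination steps above map to a single API call. A minimal boto3 sketch with a hypothetical instance ID; the waiter blocks until the instance reaches the terminated state the chunk describes.

import boto3

ec2 = boto3.client("ec2")
ids = ["i-0123456789abcdef0"]  # hypothetical instance ID

# Equivalent of Instance state > Terminate (delete) instance.
ec2.terminate_instances(InstanceIds=ids)
ec2.get_waiter("instance_terminated").wait(InstanceIds=ids)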
After your instance is terminated, it remains visible on the console for a short while, and then the entry is automatically deleted. You cannot remove the terminated instance from the console display yourself. Next steps After you start your instance, you might want to explore the following next steps: • Explore the Amazon EC2 core concepts with the introductory tutorials. For more information, see Tutorials for launching EC2 instances. • Learn how to track your Amazon EC2 Free Tier usage using the console. For more information, see the section called “Track your Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see"} +{"global_id": 342, "doc_id": "ec2", "chunk_id": "15", "question_id": 3, "question": "What is the first step to terminate your instance?", "answer_span": "In the navigation pane, choose Instances.", "chunk": "instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that instance or usage that counts against your Free Tier limits as soon as the instance status changes to shutting down or terminated. To keep your instance for later, but not incur charges or usage that counts against your Free Tier limits, you can stop the instance now and then start it again later. For more information, see Stop and start Amazon EC2 instances. To terminate your instance 1. In the navigation pane, choose Instances. In the list of instances, select the instance. 2. Choose Instance state, Terminate (delete) instance. Step 3: Clean up your instance 15 Amazon Elastic Compute Cloud 3. User Guide Choose Terminate (delete) when prompted for confirmation. Amazon EC2 shuts down and terminates your instance. After your instance is terminated, it remains visible on the console for a short while, and then the entry is automatically deleted. You cannot remove the terminated instance from the console display yourself. Next steps After you start your instance, you might want to explore the following next steps: • Explore the Amazon EC2 core concepts with the introductory tutorials. For more information, see Tutorials for launching EC2 instances. • Learn how to track your Amazon EC2 Free Tier usage using the console. For more information, see the section called “Track your Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see"} +{"global_id": 343, "doc_id": "ec2", "chunk_id": "15", "question_id": 4, "question": "What should you do after your instance is terminated?", "answer_span": "After your instance is terminated, it remains visible on the console for a short while, and then the entry is automatically deleted.", "chunk": "instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. 
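For the CloudWatch alarm suggested in the next steps above, a minimal sketch against the AWS/Billing EstimatedCharges metric, which is published in us-east-1. The dollar threshold and SNS topic ARN are hypothetical, and the sketch assumes billing alerts are enabled for the account.

import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
cw.put_metric_alarm(
    AlarmName="free-tier-spend",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                # evaluate over a 6-hour window
    EvaluationPeriods=1,
    Threshold=1.0,               # hypothetical dollar threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # hypothetical topic
)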
You'll stop incurring charges for that instance or usage that counts against your Free Tier limits as soon as the instance status changes to shutting down or terminated. To keep your instance for later, but not incur charges or usage that counts against your Free Tier limits, you can stop the instance now and then start it again later. For more information, see Stop and start Amazon EC2 instances. To terminate your instance 1. In the navigation pane, choose Instances. In the list of instances, select the instance. 2. Choose Instance state, Terminate (delete) instance. Step 3: Clean up your instance 15 Amazon Elastic Compute Cloud 3. User Guide Choose Terminate (delete) when prompted for confirmation. Amazon EC2 shuts down and terminates your instance. After your instance is terminated, it remains visible on the console for a short while, and then the entry is automatically deleted. You cannot remove the terminated instance from the console display yourself. Next steps After you start your instance, you might want to explore the following next steps: • Explore the Amazon EC2 core concepts with the introductory tutorials. For more information, see Tutorials for launching EC2 instances. • Learn how to track your Amazon EC2 Free Tier usage using the console. For more information, see the section called “Track your Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see"} +{"global_id": 344, "doc_id": "ec2", "chunk_id": "16", "question_id": 1, "question": "What should you configure to notify you if your usage exceeds the Free Tier?", "answer_span": "Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025).", "chunk": "Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see Create an Amazon EBS volume in the Amazon EBS User Guide. • Learn how to remotely manage your EC2 instance using the Run command. For more information, see AWS Systems Manager Run Command in the AWS Systems Manager User Guide. • Learn about instance purchasing options. For more information, see Amazon EC2 billing and purchasing options. • Get advice about instance types. For more information, see Get recommendations from EC2 instance type finder. Next steps 16 Amazon Elastic Compute Cloud User Guide Best practices for Amazon EC2 To ensure the maximum benefit from Amazon EC2, we recommend that you perform the following best practices. Security • Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible. For more information, see Creating IAM policies in the IAM User Guide. • Implement the least permissive rules for your security group. • Regularly patch, update, and secure the operating system and applications on your instance. For more information, see Update management. For guidelines specific to Windows operating systems, see Security best practices for Windows instances. • Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure. 
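The stop-and-start alternative described above (keep the instance and its EBS root volume, pause instance-hour charges) maps to two API calls. A minimal boto3 sketch with a hypothetical instance ID.

import boto3

ec2 = boto3.client("ec2")
ids = ["i-0123456789abcdef0"]  # hypothetical instance ID

ec2.stop_instances(InstanceIds=ids)
ec2.get_waiter("instance_stopped").wait(InstanceIds=ids)

# ...later, bring the same instance back.
ec2.start_instances(InstanceIds=ids)
ec2.get_waiter("instance_running").wait(InstanceIds=ids)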
For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage •"} +{"global_id": 345, "doc_id": "ec2", "chunk_id": "16", "question_id": 2, "question": "Where can you find information about creating an Amazon EBS volume?", "answer_span": "For more information, see Create an Amazon EBS volume in the Amazon EBS User Guide.", "chunk": "Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see Create an Amazon EBS volume in the Amazon EBS User Guide. • Learn how to remotely manage your EC2 instance using the Run command. For more information, see AWS Systems Manager Run Command in the AWS Systems Manager User Guide. • Learn about instance purchasing options. For more information, see Amazon EC2 billing and purchasing options. • Get advice about instance types. For more information, see Get recommendations from EC2 instance type finder. Next steps 16 Amazon Elastic Compute Cloud User Guide Best practices for Amazon EC2 To ensure the maximum benefit from Amazon EC2, we recommend that you perform the following best practices. Security • Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible. For more information, see Creating IAM policies in the IAM User Guide. • Implement the least permissive rules for your security group. • Regularly patch, update, and secure the operating system and applications on your instance. For more information, see Update management. For guidelines specific to Windows operating systems, see Security best practices for Windows instances. • Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure. For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage •"} +{"global_id": 346, "doc_id": "ec2", "chunk_id": "16", "question_id": 3, "question": "What is recommended to manage access to AWS resources and APIs?", "answer_span": "Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible.", "chunk": "Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see Create an Amazon EBS volume in the Amazon EBS User Guide. • Learn how to remotely manage your EC2 instance using the Run command. For more information, see AWS Systems Manager Run Command in the AWS Systems Manager User Guide. • Learn about instance purchasing options. For more information, see Amazon EC2 billing and purchasing options. • Get advice about instance types. For more information, see Get recommendations from EC2 instance type finder. 
Next steps 16 Amazon Elastic Compute Cloud User Guide Best practices for Amazon EC2 To ensure the maximum benefit from Amazon EC2, we recommend that you perform the following best practices. Security • Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible. For more information, see Creating IAM policies in the IAM User Guide. • Implement the least permissive rules for your security group. • Regularly patch, update, and secure the operating system and applications on your instance. For more information, see Update management. For guidelines specific to Windows operating systems, see Security best practices for Windows instances. • Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure. For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage •"} +{"global_id": 347, "doc_id": "ec2", "chunk_id": "16", "question_id": 4, "question": "What tool can be used to automatically discover and scan Amazon EC2 instances for vulnerabilities?", "answer_span": "Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure.", "chunk": "Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see Create an Amazon EBS volume in the Amazon EBS User Guide. • Learn how to remotely manage your EC2 instance using the Run command. For more information, see AWS Systems Manager Run Command in the AWS Systems Manager User Guide. • Learn about instance purchasing options. For more information, see Amazon EC2 billing and purchasing options. • Get advice about instance types. For more information, see Get recommendations from EC2 instance type finder. Next steps 16 Amazon Elastic Compute Cloud User Guide Best practices for Amazon EC2 To ensure the maximum benefit from Amazon EC2, we recommend that you perform the following best practices. Security • Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible. For more information, see Creating IAM policies in the IAM User Guide. • Implement the least permissive rules for your security group. • Regularly patch, update, and secure the operating system and applications on your instance. For more information, see Update management. For guidelines specific to Windows operating systems, see Security best practices for Windows instances. • Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure. For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. 
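For the least-permissive security group guidance above, a minimal sketch that opens RDP to a single admin CIDR rather than 0.0.0.0/0; the security group ID and CIDR are hypothetical.

import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # hypothetical security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3389,
        "ToPort": 3389,
        "IpRanges": [{"CidrIp": "203.0.113.25/32",
                      "Description": "admin workstation only"}],
    }],
)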
Storage •"} +{"global_id": 348, "doc_id": "ec2", "chunk_id": "17", "question_id": 1, "question": "What should you use to monitor your Amazon EC2 resources against security best practices?", "answer_span": "Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards.", "chunk": "For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage • Understand the implications of the root device type for data persistence, backup, and recovery. For more information, see Root device type. • Use separate Amazon EBS volumes for the operating system versus your data. Ensure that the volume with your data persists after instance termination. For more information, see Preserve data when an instance is terminated. • Use the instance store available for your instance to store temporary data. Remember that the data stored in instance store is deleted when you stop, hibernate, or terminate your instance. If you use instance store for database storage, ensure that you have a cluster with a replication factor that ensures fault tolerance. • Encrypt EBS volumes and snapshots. For more information, see Amazon EBS encryption in the Amazon EBS User Guide. 17 Amazon Elastic Compute Cloud User Guide Resource management • Use instance metadata and custom resource tags to track and identify your AWS resources. For more information, see Use instance metadata to manage your EC2 instance and Tag your Amazon EC2 resources. • View your current limits for Amazon EC2. Plan to request any limit increases in advance of the time that you'll need them. For more information, see Amazon EC2 service quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back"} +{"global_id": 349, "doc_id": "ec2", "chunk_id": "17", "question_id": 2, "question": "What is recommended for data persistence after instance termination?", "answer_span": "Ensure that the volume with your data persists after instance termination.", "chunk": "For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage • Understand the implications of the root device type for data persistence, backup, and recovery. For more information, see Root device type. • Use separate Amazon EBS volumes for the operating system versus your data. Ensure that the volume with your data persists after instance termination. For more information, see Preserve data when an instance is terminated. • Use the instance store available for your instance to store temporary data. Remember that the data stored in instance store is deleted when you stop, hibernate, or terminate your instance. If you use instance store for database storage, ensure that you have a cluster with a replication factor that ensures fault tolerance. • Encrypt EBS volumes and snapshots. 
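For the snapshot and encryption guidance above, a minimal sketch that snapshots a data volume and then restores the snapshot as an encrypted volume; the volume ID and Availability Zone are hypothetical.

import boto3

ec2 = boto3.client("ec2")

snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",   # hypothetical data volume
    Description="nightly data backup",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Restore the snapshot as an encrypted volume.
ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="us-east-2a",      # hypothetical AZ
    Encrypted=True,
)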
For more information, see Amazon EBS encryption in the Amazon EBS User Guide. 17 Amazon Elastic Compute Cloud User Guide Resource management • Use instance metadata and custom resource tags to track and identify your AWS resources. For more information, see Use instance metadata to manage your EC2 instance and Tag your Amazon EC2 resources. • View your current limits for Amazon EC2. Plan to request any limit increases in advance of the time that you'll need them. For more information, see Amazon EC2 service quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back"} +{"global_id": 350, "doc_id": "ec2", "chunk_id": "17", "question_id": 3, "question": "What should you do to store temporary data in your instance?", "answer_span": "Use the instance store available for your instance to store temporary data.", "chunk": "For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage • Understand the implications of the root device type for data persistence, backup, and recovery. For more information, see Root device type. • Use separate Amazon EBS volumes for the operating system versus your data. Ensure that the volume with your data persists after instance termination. For more information, see Preserve data when an instance is terminated. • Use the instance store available for your instance to store temporary data. Remember that the data stored in instance store is deleted when you stop, hibernate, or terminate your instance. If you use instance store for database storage, ensure that you have a cluster with a replication factor that ensures fault tolerance. • Encrypt EBS volumes and snapshots. For more information, see Amazon EBS encryption in the Amazon EBS User Guide. 17 Amazon Elastic Compute Cloud User Guide Resource management • Use instance metadata and custom resource tags to track and identify your AWS resources. For more information, see Use instance metadata to manage your EC2 instance and Tag your Amazon EC2 resources. • View your current limits for Amazon EC2. Plan to request any limit increases in advance of the time that you'll need them. For more information, see Amazon EC2 service quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back"} +{"global_id": 351, "doc_id": "ec2", "chunk_id": "17", "question_id": 4, "question": "What tool can you use to inspect your AWS environment for recommendations?", "answer_span": "Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps.", "chunk": "For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. 
For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage • Understand the implications of the root device type for data persistence, backup, and recovery. For more information, see Root device type. • Use separate Amazon EBS volumes for the operating system versus your data. Ensure that the volume with your data persists after instance termination. For more information, see Preserve data when an instance is terminated. • Use the instance store available for your instance to store temporary data. Remember that the data stored in instance store is deleted when you stop, hibernate, or terminate your instance. If you use instance store for database storage, ensure that you have a cluster with a replication factor that ensures fault tolerance. • Encrypt EBS volumes and snapshots. For more information, see Amazon EBS encryption in the Amazon EBS User Guide. 17 Amazon Elastic Compute Cloud User Guide Resource management • Use instance metadata and custom resource tags to track and identify your AWS resources. For more information, see Use instance metadata to manage your EC2 instance and Tag your Amazon EC2 resources. • View your current limits for Amazon EC2. Plan to request any limit increases in advance of the time that you'll need them. For more information, see Amazon EC2 service quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back"} +{"global_id": 352, "doc_id": "ec2", "chunk_id": "18", "question_id": 1, "question": "What tool can be used to inspect your AWS environment and make recommendations?", "answer_span": "Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps.", "chunk": "quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back up your EBS volumes using Amazon EBS snapshots, and create an Amazon Machine Image (AMI) from your instance to save the configuration as a template for launching future instances. For more information about AWS services that help achieve this use case, see AWS Backup and Amazon Data Lifecycle Manager. • Deploy critical components of your application across multiple Availability Zones, and replicate your data appropriately. • Design your applications to handle dynamic IP addressing when your instance restarts. For more information, see Amazon EC2 instance IP addressing. • Monitor and respond to events. For more information, see Monitor Amazon EC2 resources. • Ensure that you are prepared to handle failover. For a basic solution, you can manually attach a network interface or Elastic IP address to a replacement instance. For more information, see Elastic network interfaces. For an automated solution, you can use Amazon EC2 Auto Scaling. For more information, see the Amazon EC2 Auto Scaling User Guide. 
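For the basic failover approach described above (manually attaching a network interface to a replacement instance), a minimal boto3 sketch; the interface and instance IDs are hypothetical.

import boto3

ec2 = boto3.client("ec2")

# Move the workload's network identity to a replacement instance.
ec2.attach_network_interface(
    NetworkInterfaceId="eni-0123456789abcdef0",  # hypothetical ENI
    InstanceId="i-0fedcba9876543210",            # hypothetical replacement instance
    DeviceIndex=1,
)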
• Regularly test the process of recovering your instances and Amazon EBS volumes to ensure data and services are restored successfully. Networking • Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6. If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required"} +{"global_id": 353, "doc_id": "ec2", "chunk_id": "18", "question_id": 2, "question": "What should you regularly back up using Amazon EBS snapshots?", "answer_span": "Regularly back up your EBS volumes using Amazon EBS snapshots, and create an Amazon Machine Image (AMI) from your instance to save the configuration as a template for launching future instances.", "chunk": "quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back up your EBS volumes using Amazon EBS snapshots, and create an Amazon Machine Image (AMI) from your instance to save the configuration as a template for launching future instances. For more information about AWS services that help achieve this use case, see AWS Backup and Amazon Data Lifecycle Manager. • Deploy critical components of your application across multiple Availability Zones, and replicate your data appropriately. • Design your applications to handle dynamic IP addressing when your instance restarts. For more information, see Amazon EC2 instance IP addressing. • Monitor and respond to events. For more information, see Monitor Amazon EC2 resources. • Ensure that you are prepared to handle failover. For a basic solution, you can manually attach a network interface or Elastic IP address to a replacement instance. For more information, see Elastic network interfaces. For an automated solution, you can use Amazon EC2 Auto Scaling. For more information, see the Amazon EC2 Auto Scaling User Guide. • Regularly test the process of recovering your instances and Amazon EBS volumes to ensure data and services are restored successfully. Networking • Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6. If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required"} +{"global_id": 354, "doc_id": "ec2", "chunk_id": "18", "question_id": 3, "question": "What is the recommended time-to-live (TTL) value for applications?", "answer_span": "Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6.", "chunk": "quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. 
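For the instance-metadata-and-tags guidance in the resource-management bullets above, a minimal sketch that applies the same tags to an instance and its data volume; the IDs and tag values are hypothetical.

import boto3

ec2 = boto3.client("ec2")
ec2.create_tags(
    Resources=["i-0123456789abcdef0", "vol-0123456789abcdef0"],
    Tags=[
        {"Key": "project", "Value": "demo"},
        {"Key": "environment", "Value": "test"},
    ],
)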
Backup and recovery • Regularly back up your EBS volumes using Amazon EBS snapshots, and create an Amazon Machine Image (AMI) from your instance to save the configuration as a template for launching future instances. For more information about AWS services that help achieve this use case, see AWS Backup and Amazon Data Lifecycle Manager. • Deploy critical components of your application across multiple Availability Zones, and replicate your data appropriately. • Design your applications to handle dynamic IP addressing when your instance restarts. For more information, see Amazon EC2 instance IP addressing. • Monitor and respond to events. For more information, see Monitor Amazon EC2 resources. • Ensure that you are prepared to handle failover. For a basic solution, you can manually attach a network interface or Elastic IP address to a replacement instance. For more information, see Elastic network interfaces. For an automated solution, you can use Amazon EC2 Auto Scaling. For more information, see the Amazon EC2 Auto Scaling User Guide. • Regularly test the process of recovering your instances and Amazon EBS volumes to ensure data and services are restored successfully. Networking • Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6. If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required"} +{"global_id": 355, "doc_id": "ec2", "chunk_id": "18", "question_id": 4, "question": "What should you do to ensure data and services are restored successfully?", "answer_span": "Regularly test the process of recovering your instances and Amazon EBS volumes to ensure data and services are restored successfully.", "chunk": "quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back up your EBS volumes using Amazon EBS snapshots, and create an Amazon Machine Image (AMI) from your instance to save the configuration as a template for launching future instances. For more information about AWS services that help achieve this use case, see AWS Backup and Amazon Data Lifecycle Manager. • Deploy critical components of your application across multiple Availability Zones, and replicate your data appropriately. • Design your applications to handle dynamic IP addressing when your instance restarts. For more information, see Amazon EC2 instance IP addressing. • Monitor and respond to events. For more information, see Monitor Amazon EC2 resources. • Ensure that you are prepared to handle failover. For a basic solution, you can manually attach a network interface or Elastic IP address to a replacement instance. For more information, see Elastic network interfaces. For an automated solution, you can use Amazon EC2 Auto Scaling. For more information, see the Amazon EC2 Auto Scaling User Guide. • Regularly test the process of recovering your instances and Amazon EBS volumes to ensure data and services are restored successfully. Networking • Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6. 
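For the TTL guidance above, a minimal sketch of setting the IPv4 TTL and the IPv6 hop limit to 255 on application sockets.

import socket

s4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s4.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 255)

s6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_UNICAST_HOPS, 255)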
If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required"} +{"global_id": 356, "doc_id": "ec2", "chunk_id": "19", "question_id": 1, "question": "What is an Amazon Machine Image (AMI)?", "answer_span": "An Amazon Machine Image (AMI) is an image that provides the software that is required to set up and boot an Amazon EC2 instance.", "chunk": "value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required to set up and boot an Amazon EC2 instance. Each AMI also contains a block device mapping that specifies the block devices to attach to the instances that you launch. You must specify an AMI when you launch an instance. The AMI must be compatible with the instance type that you chose for your instance. You can use an AMI provided by AWS, a public AMI, an AMI that someone else shared with you, or an AMI that you purchased from the AWS Marketplace. An AMI is specific to the following: • Region • Operating system • Processor architecture • Root device type • Virtualization type You can launch multiple instances from a single AMI when you require multiple instances with the same configuration. You can use different AMIs to launch instances when you require instances with different configurations, as shown in the following diagram. 19 Amazon Elastic Compute Cloud User Guide You can create an AMI from your Amazon EC2 instances and then use it to launch instances with the same configuration. You can copy an AMI to another AWS Region, and then use it to launch instances in that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS"} +{"global_id": 357, "doc_id": "ec2", "chunk_id": "19", "question_id": 2, "question": "What must you specify when launching an instance?", "answer_span": "You must specify an AMI when you launch an instance.", "chunk": "value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required to set up and boot an Amazon EC2 instance. Each AMI also contains a block device mapping that specifies the block devices to attach to the instances that you launch. You must specify an AMI when you launch an instance. The AMI must be compatible with the instance type that you chose for your instance. You can use an AMI provided by AWS, a public AMI, an AMI that someone else shared with you, or an AMI that you purchased from the AWS Marketplace. An AMI is specific to the following: • Region • Operating system • Processor architecture • Root device type • Virtualization type You can launch multiple instances from a single AMI when you require multiple instances with the same configuration. 
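The chunk above notes that one AMI can launch several identically configured instances. A minimal boto3 sketch, with hypothetical AMI and key pair names.

import boto3

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="t3.micro",
    MinCount=3,                        # three identical instances
    MaxCount=3,
    KeyName="key-pair-name",           # hypothetical key pair
)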
You can use different AMIs to launch instances when you require instances with different configurations, as shown in the following diagram. 19 Amazon Elastic Compute Cloud User Guide You can create an AMI from your Amazon EC2 instances and then use it to launch instances with the same configuration. You can copy an AMI to another AWS Region, and then use it to launch instances in that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS"} +{"global_id": 358, "doc_id": "ec2", "chunk_id": "19", "question_id": 3, "question": "What can you do with an AMI that you created?", "answer_span": "You can create an AMI from your Amazon EC2 instances and then use it to launch instances with the same configuration.", "chunk": "value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required to set up and boot an Amazon EC2 instance. Each AMI also contains a block device mapping that specifies the block devices to attach to the instances that you launch. You must specify an AMI when you launch an instance. The AMI must be compatible with the instance type that you chose for your instance. You can use an AMI provided by AWS, a public AMI, an AMI that someone else shared with you, or an AMI that you purchased from the AWS Marketplace. An AMI is specific to the following: • Region • Operating system • Processor architecture • Root device type • Virtualization type You can launch multiple instances from a single AMI when you require multiple instances with the same configuration. You can use different AMIs to launch instances when you require instances with different configurations, as shown in the following diagram. 19 Amazon Elastic Compute Cloud User Guide You can create an AMI from your Amazon EC2 instances and then use it to launch instances with the same configuration. You can copy an AMI to another AWS Region, and then use it to launch instances in that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS"} +{"global_id": 359, "doc_id": "ec2", "chunk_id": "19", "question_id": 4, "question": "What types of AMIs can you use to launch instances?", "answer_span": "You can use an AMI provided by AWS, a public AMI, an AMI that someone else shared with you, or an AMI that you purchased from the AWS Marketplace.", "chunk": "value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required to set up and boot an Amazon EC2 instance. Each AMI also contains a block device mapping that specifies the block devices to attach to the instances that you launch. You must specify an AMI when you launch an instance. 
The AMI must be compatible with the instance type that you chose for your instance. You can use an AMI provided by AWS, a public AMI, an AMI that someone else shared with you, or an AMI that you purchased from the AWS Marketplace. An AMI is specific to the following: • Region • Operating system • Processor architecture • Root device type • Virtualization type You can launch multiple instances from a single AMI when you require multiple instances with the same configuration. You can use different AMIs to launch instances when you require instances with different configurations, as shown in the following diagram. 19 Amazon Elastic Compute Cloud User Guide You can create an AMI from your Amazon EC2 instances and then use it to launch instances with the same configuration. You can copy an AMI to another AWS Region, and then use it to launch instances in that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS"} +{"global_id": 360, "doc_id": "ec2", "chunk_id": "20", "question_id": 1, "question": "What can you do with your AMI using the AWS Marketplace?", "answer_span": "You can sell your AMI using the AWS Marketplace.", "chunk": "created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS Marketplace for Amazon EC2 instances • Amazon EC2 AMI lifecycle • Instance launch behavior with Amazon EC2 boot modes • Use encryption with EBS-backed AMIs • Understand shared AMI usage in Amazon EC2 • Monitor AMI events using Amazon EventBridge • Understand AMI billing information • AMI quotas in Amazon EC2 20"} +{"global_id": 361, "doc_id": "ec2", "chunk_id": "20", "question_id": 2, "question": "What is one of the topics covered in the contents related to AMIs?", "answer_span": "AMI types and characteristics in Amazon EC2", "chunk": "created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS Marketplace for Amazon EC2 instances • Amazon EC2 AMI lifecycle • Instance launch behavior with Amazon EC2 boot modes • Use encryption with EBS-backed AMIs • Understand shared AMI usage in Amazon EC2 • Monitor AMI events using Amazon EventBridge • Understand AMI billing information • AMI quotas in Amazon EC2 20"} +{"global_id": 362, "doc_id": "ec2", "chunk_id": "20", "question_id": 3, "question": "What does the document mention about AMI lifecycle?", "answer_span": "Amazon EC2 AMI lifecycle", "chunk": "created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. 
Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS Marketplace for Amazon EC2 instances • Amazon EC2 AMI lifecycle • Instance launch behavior with Amazon EC2 boot modes • Use encryption with EBS-backed AMIs • Understand shared AMI usage in Amazon EC2 • Monitor AMI events using Amazon EventBridge • Understand AMI billing information • AMI quotas in Amazon EC2 20"} +{"global_id": 363, "doc_id": "ec2", "chunk_id": "20", "question_id": 4, "question": "What is one of the behaviors associated with instance launch in Amazon EC2?", "answer_span": "Instance launch behavior with Amazon EC2 boot modes", "chunk": "created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS Marketplace for Amazon EC2 instances • Amazon EC2 AMI lifecycle • Instance launch behavior with Amazon EC2 boot modes • Use encryption with EBS-backed AMIs • Understand shared AMI usage in Amazon EC2 • Monitor AMI events using Amazon EventBridge • Understand AMI billing information • AMI quotas in Amazon EC2 20"} +{"global_id": 364, "doc_id": "batch", "chunk_id": "0", "question_id": 1, "question": "What does AWS Batch help you to do?", "answer_span": "AWS Batch helps you to run batch computing workloads on the AWS Cloud.", "chunk": "AWS Batch User Guide What is AWS Batch? AWS Batch helps you to run batch computing workloads on the AWS Cloud. Batch computing is a common way for developers, scientists, and engineers to access large amounts of compute resources. AWS Batch removes the undifferentiated heavy lifting of configuring and managing the required infrastructure, similar to traditional batch computing software. This service can efficiently provision resources in response to jobs submitted in order to eliminate capacity constraints, reduce compute costs, and deliver results quickly. As a fully managed service, AWS Batch helps you to run batch computing workloads of any scale. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads. With AWS Batch, there's no need to install or manage batch computing software, so you can focus your time on analyzing results and solving problems. 1 AWS Batch User Guide AWS Batch provides all of the necessary functionality to run high-scale, compute-intensive workloads on top of AWS managed container orchestration services, Amazon ECS and Amazon EKS. AWS Batch is able to scale compute capacity on Amazon EC2 instances and Fargate resources. AWS Batch provides a fully managed service for batch workloads, and delivers the operational capabilities to optimize these types of workloads for throughput, speed, resource efficiency, and cost. AWS Batch also enables SageMaker Training job queuing, allowing data scientists and ML engineers to submit Training jobs with priorities to configurable queues. This capability ensures that ML workloads run automatically as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. 
This provides"} +{"global_id": 365, "doc_id": "batch", "chunk_id": "0", "question_id": 2, "question": "What type of workloads can AWS Batch efficiently provision resources for?", "answer_span": "AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads.", "chunk": "AWS Batch User Guide What is AWS Batch? AWS Batch helps you to run batch computing workloads on the AWS Cloud. Batch computing is a common way for developers, scientists, and engineers to access large amounts of compute resources. AWS Batch removes the undifferentiated heavy lifting of configuring and managing the required infrastructure, similar to traditional batch computing software. This service can efficiently provision resources in response to jobs submitted in order to eliminate capacity constraints, reduce compute costs, and deliver results quickly. As a fully managed service, AWS Batch helps you to run batch computing workloads of any scale. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads. With AWS Batch, there's no need to install or manage batch computing software, so you can focus your time on analyzing results and solving problems. 1 AWS Batch User Guide AWS Batch provides all of the necessary functionality to run high-scale, compute-intensive workloads on top of AWS managed container orchestration services, Amazon ECS and Amazon EKS. AWS Batch is able to scale compute capacity on Amazon EC2 instances and Fargate resources. AWS Batch provides a fully managed service for batch workloads, and delivers the operational capabilities to optimize these types of workloads for throughput, speed, resource efficiency, and cost. AWS Batch also enables SageMaker Training job queuing, allowing data scientists and ML engineers to submit Training jobs with priorities to configurable queues. This capability ensures that ML workloads run automatically as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides"} +{"global_id": 366, "doc_id": "batch", "chunk_id": "0", "question_id": 3, "question": "What does AWS Batch eliminate the need for in terms of software management?", "answer_span": "With AWS Batch, there's no need to install or manage batch computing software, so you can focus your time on analyzing results and solving problems.", "chunk": "AWS Batch User Guide What is AWS Batch? AWS Batch helps you to run batch computing workloads on the AWS Cloud. Batch computing is a common way for developers, scientists, and engineers to access large amounts of compute resources. AWS Batch removes the undifferentiated heavy lifting of configuring and managing the required infrastructure, similar to traditional batch computing software. This service can efficiently provision resources in response to jobs submitted in order to eliminate capacity constraints, reduce compute costs, and deliver results quickly. As a fully managed service, AWS Batch helps you to run batch computing workloads of any scale. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads. 
With AWS Batch, there's no need to install or manage batch computing software, so you can focus your time on analyzing results and solving problems. 1 AWS Batch User Guide AWS Batch provides all of the necessary functionality to run high-scale, compute-intensive workloads on top of AWS managed container orchestration services, Amazon ECS and Amazon EKS. AWS Batch is able to scale compute capacity on Amazon EC2 instances and Fargate resources. AWS Batch provides a fully managed service for batch workloads, and delivers the operational capabilities to optimize these types of workloads for throughput, speed, resource efficiency, and cost. AWS Batch also enables SageMaker Training job queuing, allowing data scientists and ML engineers to submit Training jobs with priorities to configurable queues. This capability ensures that ML workloads run automatically as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides"} +{"global_id": 367, "doc_id": "batch", "chunk_id": "0", "question_id": 4, "question": "How does AWS Batch support machine learning workloads?", "answer_span": "For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs.", "chunk": "AWS Batch User Guide What is AWS Batch? AWS Batch helps you to run batch computing workloads on the AWS Cloud. Batch computing is a common way for developers, scientists, and engineers to access large amounts of compute resources. AWS Batch removes the undifferentiated heavy lifting of configuring and managing the required infrastructure, similar to traditional batch computing software. This service can efficiently provision resources in response to jobs submitted in order to eliminate capacity constraints, reduce compute costs, and deliver results quickly. As a fully managed service, AWS Batch helps you to run batch computing workloads of any scale. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads. With AWS Batch, there's no need to install or manage batch computing software, so you can focus your time on analyzing results and solving problems. 1 AWS Batch User Guide AWS Batch provides all of the necessary functionality to run high-scale, compute-intensive workloads on top of AWS managed container orchestration services, Amazon ECS and Amazon EKS. AWS Batch is able to scale compute capacity on Amazon EC2 instances and Fargate resources. AWS Batch provides a fully managed service for batch workloads, and delivers the operational capabilities to optimize these types of workloads for throughput, speed, resource efficiency, and cost. AWS Batch also enables SageMaker Training job queuing, allowing data scientists and ML engineers to submit Training jobs with priorities to configurable queues. This capability ensures that ML workloads run automatically as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. 
This provides"} +{"global_id": 368, "doc_id": "batch", "chunk_id": "1", "question_id": 1, "question": "What capabilities does AWS Batch provide for SageMaker Training jobs?", "answer_span": "For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs.", "chunk": "as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides a shared responsibility model where administrators set up the infrastructure and permissions, while data scientists can focus on submitting and monitoring their ML training workloads. Jobs are automatically queued and executed based on configured priorities and resource availability. 2 AWS Batch User Guide Are you a first-time AWS Batch user? If you are a first-time user of AWS Batch, we recommend that you begin by reading the following sections: • Components of AWS Batch • Create IAM account and administrative user • Setting up AWS Batch • Getting started with AWS Batch tutorials • Getting started with AWS Batch on SageMaker AI Related services AWS Batch is a fully managed batch computing service that plans, schedules, and runs your containerized batch ML, simulation, and analytics workloads across the full range of AWS compute offerings, such as Amazon ECS, Amazon EKS, AWS Fargate, and Spot or On-Demand Instances. For more information about each managed compute service, see: • Amazon EC2 User Guide • AWS Fargate Developer Guide • Amazon EKS User Guide • Amazon SageMaker AI Developer Guide Accessing AWS Batch You can access AWS Batch using the following: AWS Batch console The web interface where you create and manage resources. AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the"} +{"global_id": 369, "doc_id": "batch", "chunk_id": "1", "question_id": 2, "question": "What is the shared responsibility model in AWS Batch?", "answer_span": "This provides a shared responsibility model where administrators set up the infrastructure and permissions, while data scientists can focus on submitting and monitoring their ML training workloads.", "chunk": "as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides a shared responsibility model where administrators set up the infrastructure and permissions, while data scientists can focus on submitting and monitoring their ML training workloads. Jobs are automatically queued and executed based on configured priorities and resource availability. 2 AWS Batch User Guide Are you a first-time AWS Batch user? 
If you are a first-time user of AWS Batch, we recommend that you begin by reading the following sections: • Components of AWS Batch • Create IAM account and administrative user • Setting up AWS Batch • Getting started with AWS Batch tutorials • Getting started with AWS Batch on SageMaker AI Related services AWS Batch is a fully managed batch computing service that plans, schedules, and runs your containerized batch ML, simulation, and analytics workloads across the full range of AWS compute offerings, such as Amazon ECS, Amazon EKS, AWS Fargate, and Spot or On-Demand Instances. For more information about each managed compute service, see: • Amazon EC2 User Guide • AWS Fargate Developer Guide • Amazon EKS User Guide • Amazon SageMaker AI Developer Guide Accessing AWS Batch You can access AWS Batch using the following: AWS Batch console The web interface where you create and manage resources. AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the"} +{"global_id": 370, "doc_id": "batch", "chunk_id": "1", "question_id": 3, "question": "What should first-time AWS Batch users read?", "answer_span": "If you are a first-time user of AWS Batch, we recommend that you begin by reading the following sections: • Components of AWS Batch • Create IAM account and administrative user • Setting up AWS Batch • Getting started with AWS Batch tutorials • Getting started with AWS Batch on SageMaker", "chunk": "as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides a shared responsibility model where administrators set up the infrastructure and permissions, while data scientists can focus on submitting and monitoring their ML training workloads. Jobs are automatically queued and executed based on configured priorities and resource availability. 2 AWS Batch User Guide Are you a first-time AWS Batch user? If you are a first-time user of AWS Batch, we recommend that you begin by reading the following sections: • Components of AWS Batch • Create IAM account and administrative user • Setting up AWS Batch • Getting started with AWS Batch tutorials • Getting started with AWS Batch on SageMaker AI Related services AWS Batch is a fully managed batch computing service that plans, schedules, and runs your containerized batch ML, simulation, and analytics workloads across the full range of AWS compute offerings, such as Amazon ECS, Amazon EKS, AWS Fargate, and Spot or On-Demand Instances. For more information about each managed compute service, see: • Amazon EC2 User Guide • AWS Fargate Developer Guide • Amazon EKS User Guide • Amazon SageMaker AI Developer Guide Accessing AWS Batch You can access AWS Batch using the following: AWS Batch console The web interface where you create and manage resources. AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. 
You can find the AWS Batch commands in the"} +{"global_id": 371, "doc_id": "batch", "chunk_id": "1", "question_id": 4, "question": "How can you access AWS Batch?", "answer_span": "You can access AWS Batch using the following: AWS Batch console The web interface where you create and manage resources.", "chunk": "as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides a shared responsibility model where administrators set up the infrastructure and permissions, while data scientists can focus on submitting and monitoring their ML training workloads. Jobs are automatically queued and executed based on configured priorities and resource availability. 2 AWS Batch User Guide Are you a first-time AWS Batch user? If you are a first-time user of AWS Batch, we recommend that you begin by reading the following sections: • Components of AWS Batch • Create IAM account and administrative user • Setting up AWS Batch • Getting started with AWS Batch tutorials • Getting started with AWS Batch on SageMaker AI Related services AWS Batch is a fully managed batch computing service that plans, schedules, and runs your containerized batch ML, simulation, and analytics workloads across the full range of AWS compute offerings, such as Amazon ECS, Amazon EKS, AWS Fargate, and Spot or On-Demand Instances. For more information about each managed compute service, see: • Amazon EC2 User Guide • AWS Fargate Developer Guide • Amazon EKS User Guide • Amazon SageMaker AI Developer Guide Accessing AWS Batch You can access AWS Batch using the following: AWS Batch console The web interface where you create and manage resources. AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the"} +{"global_id": 372, "doc_id": "batch", "chunk_id": "2", "question_id": 1, "question": "What operating systems support the AWS Command Line Interface?", "answer_span": "The AWS Command Line Interface is supported on Windows, macOS, and Linux.", "chunk": "AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the AWS CLI Command Reference. Are you a first-time AWS Batch user? 3 AWS Batch User Guide AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, use the libraries, sample code, tutorials, and other resources provided by AWS. These libraries provide basic functions that automate tasks, such as cryptographically signing your requests, retrying requests, and handling error responses. These functions make it more efficient for you to get started. For more information, see Tools to Build on AWS. Components of AWS Batch AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region. You can create AWS Batch compute environments within a new or existing VPC. 
After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to run your jobs. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure. Compute environment A compute environment is a set of managed or unmanaged compute resources that are used to run jobs. With managed compute environments, you can specify desired compute type (Fargate or EC2) at several levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum"} +{"global_id": 373, "doc_id": "batch", "chunk_id": "2", "question_id": 2, "question": "What does AWS Batch simplify?", "answer_span": "AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region.", "chunk": "AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the AWS CLI Command Reference. Are you a first-time AWS Batch user? 3 AWS Batch User Guide AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, use the libraries, sample code, tutorials, and other resources provided by AWS. These libraries provide basic functions that automate tasks, such as cryptographically signing your requests, retrying requests, and handling error responses. These functions make it more efficient for you to get started. For more information, see Tools to Build on AWS. Components of AWS Batch AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region. You can create AWS Batch compute environments within a new or existing VPC. After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to run your jobs. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure. Compute environment A compute environment is a set of managed or unmanaged compute resources that are used to run jobs. With managed compute environments, you can specify desired compute type (Fargate or EC2) at several levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum"} +{"global_id": 374, "doc_id": "batch", "chunk_id": "2", "question_id": 3, "question": "What can you define after a compute environment is associated with a job queue?", "answer_span": "you can define job definitions that specify which Docker container images to run your jobs.", "chunk": "AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the AWS CLI Command Reference. Are you a first-time AWS Batch user? 
3 AWS Batch User Guide AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, use the libraries, sample code, tutorials, and other resources provided by AWS. These libraries provide basic functions that automate tasks, such as cryptographically signing your requests, retrying requests, and handling error responses. These functions make it more efficient for you to get started. For more information, see Tools to Build on AWS. Components of AWS Batch AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region. You can create AWS Batch compute environments within a new or existing VPC. After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to run your jobs. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure. Compute environment A compute environment is a set of managed or unmanaged compute resources that are used to run jobs. With managed compute environments, you can specify desired compute type (Fargate or EC2) at several levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum"} +{"global_id": 375, "doc_id": "batch", "chunk_id": "2", "question_id": 4, "question": "What are managed compute environments used for?", "answer_span": "A compute environment is a set of managed or unmanaged compute resources that are used to run jobs.", "chunk": "AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the AWS CLI Command Reference. Are you a first-time AWS Batch user? 3 AWS Batch User Guide AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, use the libraries, sample code, tutorials, and other resources provided by AWS. These libraries provide basic functions that automate tasks, such as cryptographically signing your requests, retrying requests, and handling error responses. These functions make it more efficient for you to get started. For more information, see Tools to Build on AWS. Components of AWS Batch AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region. You can create AWS Batch compute environments within a new or existing VPC. After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to run your jobs. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure. Compute environment A compute environment is a set of managed or unmanaged compute resources that are used to run jobs. With managed compute environments, you can specify desired compute type (Fargate or EC2) at several levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. 
Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum"} +{"global_id": 376, "doc_id": "batch", "chunk_id": "3", "question_id": 1, "question": "What types of EC2 instances can you set up in compute environments?", "answer_span": "You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge.", "chunk": "levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum number of vCPUs for the environment, along with the amount that you're Components of AWS Batch 4 AWS Batch User Guide willing to pay for a Spot Instance as a percentage of the On-Demand Instance price and a target set of VPC subnets. AWS Batch efficiently launches, manages, and terminates compute types as needed. You can also manage your own compute environments. As such, you're responsible for setting up and scaling the instances in an Amazon ECS cluster that AWS Batch creates for you. For more information, see Compute environments for AWS Batch. Job queues When you submit an AWS Batch job, you submit it to a particular job queue, where the job resides until it's scheduled onto a compute environment. You associate one or more compute environments with a job queue. You can also assign priority values for these compute environments and even across job queues themselves. For example, you can have a high priority queue that you submit time-sensitive jobs to, and a low priority queue for jobs that can run anytime when compute resources are cheaper. For more information, see Job queues. Job definitions A job definition specifies how jobs are to be run. You can think of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points"} +{"global_id": 377, "doc_id": "batch", "chunk_id": "3", "question_id": 2, "question": "What are the components you can specify for the compute environment?", "answer_span": "You can also specify the minimum, desired, and maximum number of vCPUs for the environment", "chunk": "levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum number of vCPUs for the environment, along with the amount that you're Components of AWS Batch 4 AWS Batch User Guide willing to pay for a Spot Instance as a percentage of the On-Demand Instance price and a target set of VPC subnets. AWS Batch efficiently launches, manages, and terminates compute types as needed. You can also manage your own compute environments. As such, you're responsible for setting up and scaling the instances in an Amazon ECS cluster that AWS Batch creates for you. For more information, see Compute environments for AWS Batch. 
Job queues When you submit an AWS Batch job, you submit it to a particular job queue, where the job resides until it's scheduled onto a compute environment. You associate one or more compute environments with a job queue. You can also assign priority values for these compute environments and even across job queues themselves. For example, you can have a high priority queue that you submit time-sensitive jobs to, and a low priority queue for jobs that can run anytime when compute resources are cheaper. For more information, see Job queues. Job definitions A job definition specifies how jobs are to be run. You can think of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points"} +{"global_id": 378, "doc_id": "batch", "chunk_id": "3", "question_id": 3, "question": "What happens when you submit an AWS Batch job?", "answer_span": "When you submit an AWS Batch job, you submit it to a particular job queue, where the job resides until it's scheduled onto a compute environment.", "chunk": "levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum number of vCPUs for the environment, along with the amount that you're Components of AWS Batch 4 AWS Batch User Guide willing to pay for a Spot Instance as a percentage of the On-Demand Instance price and a target set of VPC subnets. AWS Batch efficiently launches, manages, and terminates compute types as needed. You can also manage your own compute environments. As such, you're responsible for setting up and scaling the instances in an Amazon ECS cluster that AWS Batch creates for you. For more information, see Compute environments for AWS Batch. Job queues When you submit an AWS Batch job, you submit it to a particular job queue, where the job resides until it's scheduled onto a compute environment. You associate one or more compute environments with a job queue. You can also assign priority values for these compute environments and even across job queues themselves. For example, you can have a high priority queue that you submit time-sensitive jobs to, and a low priority queue for jobs that can run anytime when compute resources are cheaper. For more information, see Job queues. Job definitions A job definition specifies how jobs are to be run. You can think of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points"} +{"global_id": 379, "doc_id": "batch", "chunk_id": "3", "question_id": 4, "question": "What does a job definition specify?", "answer_span": "A job definition specifies how jobs are to be run.", "chunk": "levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. 
You can also specify the minimum, desired, and maximum number of vCPUs for the environment, along with the amount that you're Components of AWS Batch 4 AWS Batch User Guide willing to pay for a Spot Instance as a percentage of the On-Demand Instance price and a target set of VPC subnets. AWS Batch efficiently launches, manages, and terminates compute types as needed. You can also manage your own compute environments. As such, you're responsible for setting up and scaling the instances in an Amazon ECS cluster that AWS Batch creates for you. For more information, see Compute environments for AWS Batch. Job queues When you submit an AWS Batch job, you submit it to a particular job queue, where the job resides until it's scheduled onto a compute environment. You associate one or more compute environments with a job queue. You can also assign priority values for these compute environments and even across job queues themselves. For example, you can have a high priority queue that you submit time-sensitive jobs to, and a low priority queue for jobs that can run anytime when compute resources are cheaper. For more information, see Job queues. Job definitions A job definition specifies how jobs are to be run. You can think of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points"} +{"global_id": 380, "doc_id": "batch", "chunk_id": "4", "question_id": 1, "question": "What is a job definition described as in the text?", "answer_span": "of a job definition as a blueprint for the resources in your job.", "chunk": "of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points for persistent storage. Many of the specifications in a job definition can be overridden by specifying new values when submitting individual Jobs. For more information, see Job definitions Jobs A unit of work (such as a shell script, a Linux executable, or a Docker container image) that you submit to AWS Batch. It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment, using parameters that you specify in a job definition. Jobs can reference other jobs by name or by ID, and can be dependent on the successful completion of other jobs or the availability of resources you specify. For more information, see Jobs. Scheduling policy You can use scheduling policies to configure how compute resources in a job queue are allocated between users or workloads. Using fair-share scheduling policies, you can assign different share identifiers to workloads or users. The AWS Batch job scheduler defaults to a first-in, first-out (FIFO) strategy. For more information, see Fair-share scheduling policies. Job queues 5 AWS Batch User Guide Consumable resources A consumable resource is a resource that is needed to run your jobs, such as a 3rd party license token, database access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. 
You can reduce the under-utilization of compute resources by allocating"} +{"global_id": 381, "doc_id": "batch", "chunk_id": "4", "question_id": 2, "question": "What can you supply your job with to provide access to other AWS resources?", "answer_span": "You can supply your job with an IAM role to provide access to other AWS resources.", "chunk": "of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points for persistent storage. Many of the specifications in a job definition can be overridden by specifying new values when submitting individual Jobs. For more information, see Job definitions Jobs A unit of work (such as a shell script, a Linux executable, or a Docker container image) that you submit to AWS Batch. It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment, using parameters that you specify in a job definition. Jobs can reference other jobs by name or by ID, and can be dependent on the successful completion of other jobs or the availability of resources you specify. For more information, see Jobs. Scheduling policy You can use scheduling policies to configure how compute resources in a job queue are allocated between users or workloads. Using fair-share scheduling policies, you can assign different share identifiers to workloads or users. The AWS Batch job scheduler defaults to a first-in, first-out (FIFO) strategy. For more information, see Fair-share scheduling policies. Job queues 5 AWS Batch User Guide Consumable resources A consumable resource is a resource that is needed to run your jobs, such as a 3rd party license token, database access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating"} +{"global_id": 382, "doc_id": "batch", "chunk_id": "4", "question_id": 3, "question": "What does a job in AWS Batch run as?", "answer_span": "It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment.", "chunk": "of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points for persistent storage. Many of the specifications in a job definition can be overridden by specifying new values when submitting individual Jobs. For more information, see Job definitions Jobs A unit of work (such as a shell script, a Linux executable, or a Docker container image) that you submit to AWS Batch. It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment, using parameters that you specify in a job definition. Jobs can reference other jobs by name or by ID, and can be dependent on the successful completion of other jobs or the availability of resources you specify. For more information, see Jobs. 
Scheduling policy You can use scheduling policies to configure how compute resources in a job queue are allocated between users or workloads. Using fair-share scheduling policies, you can assign different share identifiers to workloads or users. The AWS Batch job scheduler defaults to a first-in, first-out (FIFO) strategy. For more information, see Fair-share scheduling policies. Job queues 5 AWS Batch User Guide Consumable resources A consumable resource is a resource that is needed to run your jobs, such as a 3rd party license token, database access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating"} +{"global_id": 383, "doc_id": "batch", "chunk_id": "4", "question_id": 4, "question": "What is a consumable resource according to the text?", "answer_span": "A consumable resource is a resource that is needed to run your jobs, such as a 3rd party license token, database access bandwidth, the need to throttle calls to a third-party API, and so on.", "chunk": "of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points for persistent storage. Many of the specifications in a job definition can be overridden by specifying new values when submitting individual Jobs. For more information, see Job definitions Jobs A unit of work (such as a shell script, a Linux executable, or a Docker container image) that you submit to AWS Batch. It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment, using parameters that you specify in a job definition. Jobs can reference other jobs by name or by ID, and can be dependent on the successful completion of other jobs or the availability of resources you specify. For more information, see Jobs. Scheduling policy You can use scheduling policies to configure how compute resources in a job queue are allocated between users or workloads. Using fair-share scheduling policies, you can assign different share identifiers to workloads or users. The AWS Batch job scheduler defaults to a first-in, first-out (FIFO) strategy. For more information, see Fair-share scheduling policies. Job queues 5 AWS Batch User Guide Consumable resources A consumable resource is a resource that is needed to run your jobs, such as a 3rd party license token, database access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating"} +{"global_id": 384, "doc_id": "batch", "chunk_id": "5", "question_id": 1, "question": "What does AWS Batch take into account when scheduling a job?", "answer_span": "You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job.", "chunk": "access bandwidth, the need to throttle calls to a third-party API, and so on. 
You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating"} +{"global_id": 385, "doc_id": "batch", "chunk_id": "5", "question_id": 1, "question": "What does AWS Batch take into account when scheduling a job?", "answer_span": "You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job.", "chunk": "access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating only the jobs that have all the required resources available. For more information, see Resource-aware scheduling . Service Environment A Service Environment defines how AWS Batch integrates with SageMaker for job execution. Service Environments enable AWS Batch to submit and manage jobs on SageMaker while providing the queuing, scheduling, and priority management capabilities of AWS Batch. Service Environments define capacity limits for specific service types such as SageMaker Training jobs. The capacity limits control the maximum resources that can be used by service jobs in the environment. For more information, see Service environments for AWS Batch. Service job A service job is a unit of work that you submit to AWS Batch to run on a service environment. Service jobs leverage AWS Batch's queuing and scheduling capabilities while delegating actual execution to the external service. For example, SageMaker Training jobs submitted as service jobs are queued and prioritized by AWS Batch, but the SageMaker Training job execution occurs within SageMaker AI infrastructure. 
This integration enables data scientists and ML engineers to benefit from AWS Batch's automated workload management, and priority queuing, for their SageMaker AI Training workloads. Service jobs can reference other jobs by name or ID and support job dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon"} +{"global_id": 386, "doc_id": "batch", "chunk_id": "5", "question_id": 2, "question": "What is a Service Environment in AWS Batch?", "answer_span": "A Service Environment defines how AWS Batch integrates with SageMaker for job execution.", "chunk": "access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating only the jobs that have all the required resources available. For more information, see Resource-aware scheduling . 
Service Environment A Service Environment defines how AWS Batch integrates with SageMaker for job execution. Service Environments enable AWS Batch to submit and manage jobs on SageMaker while providing the queuing, scheduling, and priority management capabilities of AWS Batch. Service Environments define capacity limits for specific service types such as SageMaker Training jobs. The capacity limits control the maximum resources that can be used by service jobs in the environment. For more information, see Service environments for AWS Batch. Service job A service job is a unit of work that you submit to AWS Batch to run on a service environment. Service jobs leverage AWS Batch's queuing and scheduling capabilities while delegating actual execution to the external service. For example, SageMaker Training jobs submitted as service jobs are queued and prioritized by AWS Batch, but the SageMaker Training job execution occurs within SageMaker AI infrastructure. This integration enables data scientists and ML engineers to benefit from AWS Batch's automated workload management and priority queuing for their SageMaker AI Training workloads. Service jobs can reference other jobs by name or ID and support job dependencies. For more information, see Service jobs in AWS Batch. Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon"} +{"global_id": 388, "doc_id": "batch", "chunk_id": "6", "question_id": 1, "question": "What services must you be using to soon use AWS Batch?", "answer_span": "If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon use AWS Batch.", "chunk": "dependencies. For more information, see Service jobs in AWS Batch. Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon use AWS Batch. The setup process for these services is similar. This is because AWS Batch uses Amazon ECS container instances in its compute environments. To use the AWS CLI with AWS Batch, you must use a version of the AWS CLI that supports the latest AWS Batch features. If you don't see support for an AWS Batch feature in the AWS CLI, upgrade to the latest version. For more information, see http://aws.amazon.com/cli/. Note Because AWS Batch uses components of Amazon EC2, you use the Amazon EC2 console for many of these steps. Complete the following tasks to get set up for AWS Batch. Topics • Create IAM account and administrative user • Create IAM roles for your compute environments and container instances • Create a key pair for your instances • Create a VPC • Create a security group • Install the AWS CLI Create IAM account and administrative user To get started, you need to create an AWS account and a single user that is typically granted administrative rights. To accomplish this, complete the following tutorials: Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions.
Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign"} +{"global_id": 389, "doc_id": "batch", "chunk_id": "6", "question_id": 2, "question": "What must you do if you don't see support for an AWS Batch feature in the AWS CLI?", "answer_span": "If you don't see support for an AWS Batch feature in the AWS CLI, upgrade to the latest version.", "chunk": "dependencies. For more information, see Service jobs in AWS Batch. Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon use AWS Batch. The setup process for these services is similar. This is because AWS Batch uses Amazon ECS container instances in its compute environments. To use the AWS CLI with AWS Batch, you must use a version of the AWS CLI that supports the latest AWS Batch features. If you don't see support for an AWS Batch feature in the AWS CLI, upgrade to the latest version. For more information, see http://aws.amazon.com/cli/. Note Because AWS Batch uses components of Amazon EC2, you use the Amazon EC2 console for many of these steps. Complete the following tasks to get set up for AWS Batch.
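To check which AWS CLI version is installed before relying on a newer AWS Batch feature, run:
$ aws --version
If the reported version predates the feature you need, install the latest release as described at http://aws.amazon.com/cli/.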
Topics • Create IAM account and administrative user • Create IAM roles for your compute environments and container instances • Create a key pair for your instances • Create a VPC • Create a security group • Install the AWS CLI Create IAM account and administrative user To get started, you need to create an AWS account and a single user that is typically granted administrative rights. To accomplish this, complete the following tutorials: Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. Create IAM account and administrative user 7 AWS Batch User Guide To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign"} +{"global_id": 391, "doc_id": "batch", "chunk_id": "6", "question_id": 4, "question": "What is one of the tasks you need to complete to set up AWS Batch?", "answer_span": "Complete the following tasks to get set up for AWS Batch.", "chunk": "dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon use AWS Batch. The setup process for these services is similar. This is because AWS Batch uses Amazon ECS container instances in its compute environments. To use the AWS CLI with AWS Batch, you must use a version of the AWS CLI that supports the latest AWS Batch features. If you don't see support for an AWS Batch feature in the AWS CLI, upgrade to the latest version. For more information, see http://aws.amazon.com/cli/. Note Because AWS Batch uses components of Amazon EC2, you use the Amazon EC2 console for many of these steps. Complete the following tasks to get set up for AWS Batch. Topics • Create IAM account and administrative user • Create IAM roles for your compute environments and container instances • Create a key pair for your instances • Create a VPC • Create a security group • Install the AWS CLI Create IAM account and administrative user To get started, you need to create an AWS account and a single user that is typically granted administrative rights. To accomplish this, complete the following tutorials: Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. Create IAM account and administrative user 7 AWS Batch User Guide To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign"} +{"global_id": 392, "doc_id": "batch", "chunk_id": "7", "question_id": 1, "question": "What is the first step to sign up for an AWS account?", "answer_span": "1. Open https://portal.aws.amazon.com/billing/signup.", "chunk": "IAM account and administrative user 7 AWS Batch User Guide To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. 
The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User"} +{"global_id": 393, "doc_id": "batch", "chunk_id": "7", "question_id": 2, "question": "What is created when you sign up for an AWS account?", "answer_span": "When you sign up for an AWS account, an AWS account root user is created.", "chunk": "IAM account and administrative user 7 AWS Batch User Guide To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. 
Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User"} +{"global_id": 394, "doc_id": "batch", "chunk_id": "7", "question_id": 3, "question": "What should you do to secure your AWS account root user?", "answer_span": "Turn on multi-factor authentication (MFA) for your root user.", "chunk": "IAM account and administrative user 7 AWS Batch User Guide To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User"} +{"global_id": 395, "doc_id": "batch", "chunk_id": "7", "question_id": 4, "question": "Where can you manage your account after signing up?", "answer_span": "At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account.", "chunk": "IAM account and administrative user 7 AWS Batch User Guide To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. 
Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User"} +{"global_id": 396, "doc_id": "batch", "chunk_id": "8", "question_id": 1, "question": "What is the first step to create a user with administrative access?", "answer_span": "1. Enable IAM Identity Center.", "chunk": "device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying leastprivilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create IAM roles for your compute environments and container instances Your AWS Batch compute environments and container instances require AWS account credentials to make calls to other AWS APIs on your behalf. Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions,"} +{"global_id": 397, "doc_id": "batch", "chunk_id": "8", "question_id": 2, "question": "Where can you find instructions for enabling AWS IAM Identity Center?", "answer_span": "For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide.", "chunk": "device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. 
For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying leastprivilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create IAM roles for your compute environments and container instances Your AWS Batch compute environments and container instances require AWS account credentials to make calls to other AWS APIs on your behalf. Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions,"} +{"global_id": 398, "doc_id": "batch", "chunk_id": "8", "question_id": 3, "question": "What should you do to sign in as a user with administrative access?", "answer_span": "To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user.", "chunk": "device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying leastprivilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create IAM roles for your compute environments and container instances Your AWS Batch compute environments and container instances require AWS account credentials to make calls to other AWS APIs on your behalf. Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. 
Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions,"} +{"global_id": 399, "doc_id": "batch", "chunk_id": "8", "question_id": 4, "question": "What is required for your AWS Batch compute environments and container instances?", "answer_span": "Your AWS Batch compute environments and container instances require AWS account credentials to make calls to other AWS APIs on your behalf.", "chunk": "device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying leastprivilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create IAM roles for your compute environments and container instances Your AWS Batch compute environments and container instances require AWS account credentials to make calls to other AWS APIs on your behalf. Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions,"} +{"global_id": 400, "doc_id": "batch", "chunk_id": "9", "question_id": 1, "question": "What role must be created to provide credentials to compute environments and container instances?", "answer_span": "Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments.", "chunk": "APIs on your behalf. Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions, see Initial IAM service set up for your account. The AWS Batch compute environment and container instance roles are automatically created for you in the console first-run experience. So, if you intend to use the AWS Batch console, you can move ahead to the next section. If you plan to use the AWS CLI instead, complete the procedures in Using service-linked roles for AWS Batch, Amazon ECS instance role, and Tutorial: Create the IAM execution role before creating your first compute environment. 
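As a rough AWS CLI sketch of the Amazon ECS instance role setup (ec2-trust-policy.json stands for a local trust policy document that allows ec2.amazonaws.com to assume the role):
$ aws iam create-role --role-name ecsInstanceRole \
    --assume-role-policy-document file://ec2-trust-policy.json
$ aws iam attach-role-policy --role-name ecsInstanceRole \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
$ aws iam create-instance-profile --instance-profile-name ecsInstanceRole
$ aws iam add-role-to-instance-profile \
    --instance-profile-name ecsInstanceRole --role-name ecsInstanceRole
The instance profile is what the container instances in your compute environment use at launch.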
Create a key pair for your instances AWS uses public-key cryptography to secure the login information for your instance. A Linux instance, such as an AWS Batch compute environment container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you create your compute environment, then provide the private key when you log in using SSH. If you didn't create a key pair already, you can create one using the Amazon EC2 console. Note that, if you plan to launch instances in multiple AWS Regions, create a key pair in each Region. For more information about Regions, see Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select an AWS Region for the key pair. You can select any Region that's available to you, regardless of your location: however,"} +{"global_id": 401, "doc_id": "batch", "chunk_id": "9", "question_id": 2, "question": "What should you do if you plan to use the AWS CLI instead of the AWS Batch console?", "answer_span": "If you plan to use the AWS CLI instead, complete the procedures in Using service-linked roles for AWS Batch, Amazon ECS instance role, and Tutorial: Create the IAM execution role before creating your first compute environment.", "chunk": "APIs on your behalf. Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions, see Initial IAM service set up for your account. The AWS Batch compute environment and container instance roles are automatically created for you in the console first-run experience. So, if you intend to use the AWS Batch console, you can move ahead to the next section. If you plan to use the AWS CLI instead, complete the procedures in Using service-linked roles for AWS Batch, Amazon ECS instance role, and Tutorial: Create the IAM execution role before creating your first compute environment. Create a key pair for your instances AWS uses public-key cryptography to secure the login information for your instance. A Linux instance, such as an AWS Batch compute environment container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you create your compute environment, then provide the private key when you log in using SSH. If you didn't create a key pair already, you can create one using the Amazon EC2 console. Note that, if you plan to launch instances in multiple AWS Regions, create a key pair in each Region. For more information about Regions, see Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select an AWS Region for the key pair. You can select any Region that's available to you, regardless of your location: however,"} +{"global_id": 402, "doc_id": "batch", "chunk_id": "9", "question_id": 3, "question": "How does AWS secure the login information for your instance?", "answer_span": "AWS uses public-key cryptography to secure the login information for your instance.", "chunk": "APIs on your behalf. 
Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions, see Initial IAM service set up for your account. The AWS Batch compute environment and container instance roles are automatically created for you in the console first-run experience. So, if you intend to use the AWS Batch console, you can move ahead to the next section. If you plan to use the AWS CLI instead, complete the procedures in Using service-linked roles for AWS Batch, Amazon ECS instance role, and Tutorial: Create the IAM execution role before creating your first compute environment. Create a key pair for your instances AWS uses public-key cryptography to secure the login information for your instance. A Linux instance, such as an AWS Batch compute environment container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you create your compute environment, then provide the private key when you log in using SSH. If you didn't create a key pair already, you can create one using the Amazon EC2 console. Note that, if you plan to launch instances in multiple AWS Regions, create a key pair in each Region. For more information about Regions, see Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select an AWS Region for the key pair. You can select any Region that's available to you, regardless of your location: however,"} +{"global_id": 403, "doc_id": "batch", "chunk_id": "9", "question_id": 4, "question": "What should you do if you plan to launch instances in multiple AWS Regions?", "answer_span": "Note that, if you plan to launch instances in multiple AWS Regions, create a key pair in each Region.", "chunk": "APIs on your behalf. Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions, see Initial IAM service set up for your account. The AWS Batch compute environment and container instance roles are automatically created for you in the console first-run experience. So, if you intend to use the AWS Batch console, you can move ahead to the next section. If you plan to use the AWS CLI instead, complete the procedures in Using service-linked roles for AWS Batch, Amazon ECS instance role, and Tutorial: Create the IAM execution role before creating your first compute environment. Create a key pair for your instances AWS uses public-key cryptography to secure the login information for your instance. A Linux instance, such as an AWS Batch compute environment container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you create your compute environment, then provide the private key when you log in using SSH. If you didn't create a key pair already, you can create one using the Amazon EC2 console. Note that, if you plan to launch instances in multiple AWS Regions, create a key pair in each Region. 
For more information about Regions, see Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select an AWS Region for the key pair. You can select any Region that's available to you, regardless of your location: however,"} +{"global_id": 405, "doc_id": "batch", "chunk_id": "10", "question_id": 1, "question": "What is the first step to create a key pair in Amazon EC2?", "answer_span": "Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.", "chunk": "Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select an AWS Region for the key pair. You can select any Region that's available to you, regardless of your location: however, key pairs are specific to a Region. For example, if you plan to launch an instance in the US West (Oregon) Region, create a key pair for the instance in the same Region. 3. In the navigation pane, choose Key Pairs, Create Key Pair. 4. In the Create Key Pair dialog box, for Key pair name, enter a name for the new key pair, and choose Create. Choose a name that you can remember, such as your user name, followed by key-pair, plus the Region name. For example, me-key-pair-uswest2. 5. The private key file is automatically downloaded by your browser. The base file name is the name that you specified as the name of your key pair, and the file name extension is .pem. Save the private key file in a safe place.
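The same key pair can also be created from the AWS CLI rather than the console; this example reuses the name from the procedure above and writes the private key to a local .pem file:
$ aws ec2 create-key-pair --key-name me-key-pair-uswest2 \
    --region us-west-2 --query 'KeyMaterial' --output text > me-key-pair-uswest2.pem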
Important This is the only chance for you to save the private key file. You need to provide the name of your key pair when you launch an instance and the corresponding private key each time that you connect to the instance. 6. If you use an SSH client on a Mac or Linux computer to connect to your Linux instance, use the following command to set the permissions of your private key file. That way, only you can read it. $ chmod 400 your_user_name-key-pair-region_name.pem For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide. To connect to your instance using your key"} +{"global_id": 406, "doc_id": "batch", "chunk_id": "10", "question_id": 3, "question": "What file extension is used for the private key file?", "answer_span": "the file name extension is .pem.", "chunk": "Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select an AWS Region for the key pair. You can select any Region that's available to you, regardless of your location: however, key pairs are specific to a Region. For example, if you plan to launch an instance in the US West (Oregon) Region, create a key pair for the instance in the same Region. 3. In the navigation pane, choose Key Pairs, Create Key Pair. 4. In the Create Key Pair dialog box, for Key pair name, enter a name for the new key pair , and choose Create. Choose a name that you can remember, such as your user name, followed by key-pair, plus the Region name. For example, me-key-pair-uswest2. Create a key pair 10 AWS Batch 5. User Guide The private key file is automatically downloaded by your browser. The base file name is the name that you specified as the name of your key pair, and the file name extension is .pem. Save the private key file in a safe place. Important This is the only chance for you to save the private key file. You need to provide the name of your key pair when you launch an instance and the corresponding private key each time that you connect to the instance. 6. If you use an SSH client on a Mac or Linux computer to connect to your Linux instance, use the following command to set the permissions of your private key file. That way, only you can read it. $ chmod 400 your_user_name-key-pair-region_name.pem For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide. To connect to your instance using your key"} +{"global_id": 407, "doc_id": "batch", "chunk_id": "10", "question_id": 4, "question": "What command should you use to set the permissions of your private key file on a Mac or Linux computer?", "answer_span": "$ chmod 400 your_user_name-key-pair-region_name.pem", "chunk": "Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select an AWS Region for the key pair. You can select any Region that's available to you, regardless of your location: however, key pairs are specific to a Region. For example, if you plan to launch an instance in the US West (Oregon) Region, create a key pair for the instance in the same Region. 3. In the navigation pane, choose Key Pairs, Create Key Pair. 4. In the Create Key Pair dialog box, for Key pair name, enter a name for the new key pair , and choose Create. Choose a name that you can remember, such as your user name, followed by key-pair, plus the Region name. For example, me-key-pair-uswest2. 
5. The private key file is automatically downloaded by your browser. The base file name is the name that you specified as the name of your key pair, and the file name extension is .pem. Save the private key file in a safe place. Important This is the only chance for you to save the private key file. You need to provide the name of your key pair when you launch an instance and the corresponding private key each time that you connect to the instance. 6. If you use an SSH client on a Mac or Linux computer to connect to your Linux instance, use the following command to set the permissions of your private key file. That way, only you can read it. $ chmod 400 your_user_name-key-pair-region_name.pem For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide. To connect to your instance using your key"} +{"global_id": 409, "doc_id": "batch", "chunk_id": "11", "question_id": 1, "question": "What command is used to set the permissions of your private key file?", "answer_span": "$ chmod 400 your_user_name-key-pair-region_name.pem", "chunk": "to your Linux instance, use the following command to set the permissions of your private key file. That way, only you can read it. $ chmod 400 your_user_name-key-pair-region_name.pem For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide.
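With the permissions set, the connection itself looks like the following; the user name ec2-user is the default on Amazon Linux instances, and the public DNS name is a placeholder:
$ ssh -i me-key-pair-uswest2.pem ec2-user@ec2-198-51-100-1.us-west-2.compute.amazonaws.com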
To connect to your instance using your key pair To connect to your Linux instance from a computer running Mac or Linux, specify the .pem file to your SSH client with the -i option and the path to your private key. To connect to your Linux instance from a computer running Windows, use either MindTerm or PuTTY. If you plan to use PuTTY, install it and use the following procedure to convert the .pem file to a .ppk file. (Optional) To prepare to connect to a Linux instance from Windows using PuTTY 1. Download and install PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/. Be sure to install the entire suite. 2. Start PuTTYgen (for example, from the Start menu, choose All Programs, PuTTY, and PuTTYgen). 3. Under Type of key to generate, choose RSA. If you're using an earlier version of PuTTYgen, choose SSH-2 RSA. Create a key pair 11 AWS Batch 4. User Guide Choose Load. By default, PuTTYgen displays only files with the extension .ppk. To locate your .pem file, choose the option to display files of all types. 5. Select the private key file that you created in the previous procedure and choose Open. Choose OK to dismiss the confirmation dialog box. 6. Choose Save private key. PuTTYgen displays a warning about saving the key without a passphrase. Choose Yes. 7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the .ppk file extension. Create a VPC With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources into a"} +{"global_id": 410, "doc_id": "batch", "chunk_id": "11", "question_id": 3, "question": "What is the first step to prepare to connect to a Linux instance from Windows using PuTTY?", "answer_span": "Download and install PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/. Be sure to install the entire suite.", "chunk": "to your Linux instance, use the following command to set the permissions of your private key file. That way, only you can read it. $ chmod 400 your_user_name-key-pair-region_name.pem For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide. To connect to your instance using your key pair To connect to your Linux instance from a computer running Mac or Linux, specify the .pem file to your SSH client with the -i option and the path to your private key. To connect to your Linux instance from a computer running Windows, use either MindTerm or PuTTY. If you plan to use PuTTY, install it and use the following procedure to convert the .pem file to a .ppk file. (Optional) To prepare to connect to a Linux instance from Windows using PuTTY 1. Download and install PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/. Be sure to install the entire suite. 2. Start PuTTYgen (for example, from the Start menu, choose All Programs, PuTTY, and PuTTYgen). 3. Under Type of key to generate, choose RSA. If you're using an earlier version of PuTTYgen, choose SSH-2 RSA. Create a key pair 11 AWS Batch 4. User Guide Choose Load. By default, PuTTYgen displays only files with the extension .ppk. To locate your .pem file, choose the option to display files of all types. 5. Select the private key file that you created in the previous procedure and choose Open. Choose OK to dismiss the confirmation dialog box. 6. Choose Save private key. PuTTYgen displays a warning about saving the key without a passphrase. Choose Yes. 7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the .ppk file extension. 
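On Linux and macOS, the puttygen command-line tool that ships with the PuTTY package can perform the same conversion without the graphical steps above, for example:
$ puttygen me-key-pair-uswest2.pem -O private -o me-key-pair-uswest2.ppk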
Create a VPC With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources into a"} +{"global_id": 411, "doc_id": "batch", "chunk_id": "11", "question_id": 4, "question": "What type of key should you choose to generate in PuTTYgen?", "answer_span": "Under Type of key to generate, choose RSA.", "chunk": "to your Linux instance, use the following command to set the permissions of your private key file. That way, only you can read it. $ chmod 400 your_user_name-key-pair-region_name.pem For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide. To connect to your instance using your key pair To connect to your Linux instance from a computer running Mac or Linux, specify the .pem file to your SSH client with the -i option and the path to your private key. To connect to your Linux instance from a computer running Windows, use either MindTerm or PuTTY. If you plan to use PuTTY, install it and use the following procedure to convert the .pem file to a .ppk file. (Optional) To prepare to connect to a Linux instance from Windows using PuTTY 1. Download and install PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/. Be sure to install the entire suite. 2. Start PuTTYgen (for example, from the Start menu, choose All Programs, PuTTY, and PuTTYgen). 3. Under Type of key to generate, choose RSA. If you're using an earlier version of PuTTYgen, choose SSH-2 RSA. Create a key pair 11 AWS Batch 4. User Guide Choose Load. By default, PuTTYgen displays only files with the extension .ppk. To locate your .pem file, choose the option to display files of all types. 5. Select the private key file that you created in the previous procedure and choose Open. Choose OK to dismiss the confirmation dialog box. 6. Choose Save private key. PuTTYgen displays a warning about saving the key without a passphrase. Choose Yes. 7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the .ppk file extension. Create a VPC With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources into a"} +{"global_id": 412, "doc_id": "batch", "chunk_id": "12", "question_id": 1, "question": "What should you do when saving the key?", "answer_span": "a warning about saving the key without a passphrase. Choose Yes.", "chunk": "a warning about saving the key without a passphrase. Choose Yes. 7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the .ppk file extension. Create a VPC With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources into a virtual network that you've defined. We strongly recommend that you launch your container instances in a VPC. If you have a default VPC, you also can skip this section and move to the next task Create a security group. To determine whether you have a default VPC, see Supported Platforms in the Amazon EC2 Console in the Amazon EC2 User Guide For information about how to create an Amazon VPC, see Create a VPC only in the Amazon VPC User Guide. Refer to the following table to determine what options to select. Option Value Resources to create VPC only Name Optionally provide a name for your VPC. IPv4 CIDR block IPv4 CIDR manual input The CIDR block size must have a size between /16 and /28. Create a VPC 12 AWS Batch User Guide Option Value IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. 
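The choices in the table map to a single AWS CLI call; the CIDR block and Name tag below are examples, and the VPC ID in the follow-up subnet command is a placeholder for the ID that create-vpc returns:
$ aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=my-batch-vpc}]'
$ aws ec2 create-subnet --vpc-id vpc-0abcd1234example --cidr-block 10.0.0.0/24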
Create a security group Security groups act as a firewall for associated compute environment container instances, controlling both inbound and outbound traffic at the container instance level. A security group can be used only in the VPC for which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that"} +{"global_id": 413, "doc_id": "batch", "chunk_id": "12", "question_id": 2, "question": "What does PuTTY automatically add to the key file?", "answer_span": "PuTTY automatically adds the .ppk file extension.", "chunk": "a warning about saving the key without a passphrase. Choose Yes. 7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the .ppk file extension. Create a VPC With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources into a virtual network that you've defined. We strongly recommend that you launch your container instances in a VPC. If you have a default VPC, you also can skip this section and move to the next task Create a security group. To determine whether you have a default VPC, see Supported Platforms in the Amazon EC2 Console in the Amazon EC2 User Guide For information about how to create an Amazon VPC, see Create a VPC only in the Amazon VPC User Guide. Refer to the following table to determine what options to select. Option Value Resources to create VPC only Name Optionally provide a name for your VPC. IPv4 CIDR block IPv4 CIDR manual input The CIDR block size must have a size between /16 and /28. Create a VPC 12 AWS Batch User Guide Option Value IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. Create a security group Security groups act as a firewall for associated compute environment container instances, controlling both inbound and outbound traffic at the container instance level. A security group can be used only in the VPC for which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that"} +{"global_id": 414, "doc_id": "batch", "chunk_id": "12", "question_id": 3, "question": "What is recommended for launching container instances?", "answer_span": "We strongly recommend that you launch your container instances in a VPC.", "chunk": "a warning about saving the key without a passphrase. Choose Yes. 7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the .ppk file extension. Create a VPC With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources into a virtual network that you've defined. We strongly recommend that you launch your container instances in a VPC. If you have a default VPC, you also can skip this section and move to the next task Create a security group. To determine whether you have a default VPC, see Supported Platforms in the Amazon EC2 Console in the Amazon EC2 User Guide For information about how to create an Amazon VPC, see Create a VPC only in the Amazon VPC User Guide. Refer to the following table to determine what options to select. Option Value Resources to create VPC only Name Optionally provide a name for your VPC. 
IPv4 CIDR block IPv4 CIDR manual input The CIDR block size must have a size between /16 and /28. Create a VPC 12 AWS Batch User Guide Option Value IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. Create a security group Security groups act as a firewall for associated compute environment container instances, controlling both inbound and outbound traffic at the container instance level. A security group can be used only in the VPC for which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that"} +{"global_id": 415, "doc_id": "batch", "chunk_id": "12", "question_id": 4, "question": "What do security groups control?", "answer_span": "Security groups act as a firewall for associated compute environment container instances, controlling both inbound and outbound traffic at the container instance level.", "chunk": "a warning about saving the key without a passphrase. Choose Yes. 7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the .ppk file extension. Create a VPC With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources into a virtual network that you've defined. We strongly recommend that you launch your container instances in a VPC. If you have a default VPC, you also can skip this section and move to the next task Create a security group. To determine whether you have a default VPC, see Supported Platforms in the Amazon EC2 Console in the Amazon EC2 User Guide For information about how to create an Amazon VPC, see Create a VPC only in the Amazon VPC User Guide. Refer to the following table to determine what options to select. Option Value Resources to create VPC only Name Optionally provide a name for your VPC. IPv4 CIDR block IPv4 CIDR manual input The CIDR block size must have a size between /16 and /28. Create a VPC 12 AWS Batch User Guide Option Value IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. Create a security group Security groups act as a firewall for associated compute environment container instances, controlling both inbound and outbound traffic at the container instance level. A security group can be used only in the VPC for which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that"} +{"global_id": 416, "doc_id": "batch", "chunk_id": "13", "question_id": 1, "question": "What can you add to a security group to connect to your container instance from your IP address?", "answer_span": "You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH.", "chunk": "which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. 
Note that if you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Note You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https://checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you're connecting through an Internet service provider (ISP) or from behind a firewall without a static IP address, find out the range of IP addresses that are used by client computers. To create a security group using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Security Groups. 3. Choose Create security group. Create a security group 13 AWS Batch 4. User Guide Enter a name and description for the security group. You cannot change the name and description of a security group after it is created. 5. From VPC, choose the VPC. 6. (Optional) By default, new security groups start with only an outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. AWS Batch container instances don't require any inbound ports to be open. However, you might want to add an SSH rule. That way, you can"} +{"global_id": 417, "doc_id": "batch", "chunk_id": "13", "question_id": 2, "question": "What should you do if you plan to launch container instances in multiple Regions?", "answer_span": "Note that if you plan to launch container instances in multiple Regions, you need to create a security group in each Region.", "chunk": "which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. Note that if you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Note You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https://checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you're connecting through an Internet service provider (ISP) or from behind a firewall without a static IP address, find out the range of IP addresses that are used by client computers. To create a security group using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Security Groups. 3. Choose Create security group. Create a security group 13 AWS Batch 4. User Guide Enter a name and description for the security group. You cannot change the name and description of a security group after it is created. 5. From VPC, choose the VPC. 6. (Optional) By default, new security groups start with only an outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. AWS Batch container instances don't require any inbound ports to be open.
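The checkip service mentioned in the note above returns your public IPv4 address as plain text, which makes the lookup easy to script. A minimal sketch:

```python
import urllib.request

# Ask the checkip service for this machine's public IPv4 address
with urllib.request.urlopen("https://checkip.amazonaws.com/") as resp:
    my_ip = resp.read().decode().strip()

# Appending /32 limits an SSH rule to exactly this address
print(f"{my_ip}/32")
```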
However, you might want to add an SSH rule. That way, you can"} +{"global_id": 418, "doc_id": "batch", "chunk_id": "13", "question_id": 3, "question": "How can you find your public IP address?", "answer_span": "Note You need the public IP address of your local computer, which you can get using a service.", "chunk": "which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. Note that if you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Note You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https://checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you're connecting through an Internet service provider (ISP) or from behind a firewall without a static IP address, find out the range of IP addresses that are used by client computers. To create a security group using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Security Groups. 3.
Choose Create security group. Create a security group 13 AWS Batch 4. User Guide Enter a name and description for the security group. You cannot change the name and description of a security group after it is created. 5. From VPC, choose the VPC. 6. (Optional) By default, new security groups start with only an outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. AWS Batch container instances don't require any inbound ports to be open. However, you might want to add an SSH rule. That way, you can"} +{"global_id": 419, "doc_id": "batch", "chunk_id": "13", "question_id": 4, "question": "What is the first step to create a security group using the console?", "answer_span": "Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.", "chunk": "which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. Note that if you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Note You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https://checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you're connecting through an Internet service provider (ISP) or from behind a firewall without a static IP address, find out the range of IP addresses that are used by client computers. To create a security group using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Security Groups. 3. Choose Create security group. Create a security group 13 AWS Batch 4. User Guide Enter a name and description for the security group. You cannot change the name and description of a security group after it is created. 5. From VPC, choose the VPC. 6. (Optional) By default, new security groups start with only an outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. AWS Batch container instances don't require any inbound ports to be open. However, you might want to add an SSH rule. That way, you can"} +{"global_id": 420, "doc_id": "batch", "chunk_id": "14", "question_id": 1, "question": "What must you do to enable any inbound traffic or restrict outbound traffic?", "answer_span": "You must add rules to enable any inbound traffic or to restrict the outbound traffic.", "chunk": "outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. AWS Batch container instances don't require any inbound ports to be open. However, you might want to add an SSH rule. That way, you can log into the container instance and examine the containers in jobs with Docker commands. If you want your container instance to host a job that runs a web server, you can also add rules for HTTP. Complete the following steps to add these optional security group rules.
On the Inbound tab, create the following rules and choose Create: • Choose Add Rule. For Type, choose HTTP. For Source, choose Anywhere (0.0.0.0/0). • Choose Add Rule. For Type, choose SSH. For Source, choose Custom IP, and specify the public IP address of your computer or network in Classless Inter-Domain Routing (CIDR) notation. If your company allocates addresses from a range, specify the entire range, such as 203.0.113.0/24. To specify an individual IP address in CIDR notation, choose My IP. This adds the routing prefix /32 to the public IP address. Note For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0) to your instance but only for testing purposes and only for a short time. 7. You can add tags now, or you can add them later. To add a tag, choose Add new tag and enter the tag key and value. 8. Choose Create security group. To create a security group using the command line, see create-security-group (AWS CLI) For more information about security groups, see Work with security groups. Create a security group 14 AWS Batch User Guide Install the AWS CLI To use the AWS CLI with AWS Batch, install the latest"} +{"global_id": 422, "doc_id": "batch", "chunk_id": "14", "question_id": 3, "question": "What steps should you follow to add optional security group rules?", "answer_span": "Complete the following steps to add these optional security group rules.", "chunk": "outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. AWS Batch container instances don't require any inbound ports to be open. However, you might want to add an SSH rule. That way, you can log into the container instance and examine the containers in jobs with Docker commands. If you want your container instance to host a job that runs a web server, you can also add rules for HTTP. Complete the following steps to add these optional security group rules. On the Inbound tab, create the following rules and choose Create: • Choose Add Rule. For Type, choose HTTP. For Source, choose Anywhere (0.0.0.0/0). • Choose Add Rule. For Type, choose SSH. For Source, choose Custom IP, and specify the public IP address of your computer or network in Classless Inter-Domain Routing (CIDR) notation. If your company allocates addresses from a range, specify the entire range, such as 203.0.113.0/24. To specify an individual IP address in CIDR notation, choose My IP. This adds the routing prefix /32 to the public IP address. Note For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0) to your instance but only for testing purposes and only for a short time. 7. You can add tags now, or you can add them later. To add a tag, choose Add new tag and enter the tag key and value. 8. Choose Create security group. To create a security group using the command line, see create-security-group (AWS CLI) For more information about security groups, see Work with security groups. 
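For readers who prefer the API to the console, the same group and the two optional inbound rules can be created with boto3. This is a sketch, not the guide's own procedure; the VPC ID and the SSH source range are placeholders you would substitute:

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder: your VPC's ID
my_cidr = "203.0.113.0/24"        # placeholder: your IP or your network's range

sg = ec2.create_security_group(
    GroupName="batch-tutorial-sg",
    Description="AWS Batch container instances",  # cannot be changed later
    VpcId=vpc_id,
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        # HTTP from anywhere
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        # SSH only from your own address range, never 0.0.0.0/0 outside testing
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": my_cidr}]},
    ],
)
```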
Create a security group 14 AWS Batch User Guide Install the AWS CLI To use the AWS CLI with AWS Batch, install the latest"} +{"global_id": 423, "doc_id": "batch", "chunk_id": "14", "question_id": 4, "question": "What is the recommendation regarding SSH access from all IP addresses?", "answer_span": "Note For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0) to your instance but only for testing purposes and only for a short time.", "chunk": "outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. AWS Batch container instances don't require any inbound ports to be open. However, you might want to add an SSH rule. That way, you can log into the container instance and examine the containers in jobs with Docker commands. If you want your container instance to host a job that runs a web server, you can also add rules for HTTP. Complete the following steps to add these optional security group rules. On the Inbound tab, create the following rules and choose Create: • Choose Add Rule. For Type, choose HTTP. For Source, choose Anywhere (0.0.0.0/0). • Choose Add Rule. For Type, choose SSH. For Source, choose Custom IP, and specify the public IP address of your computer or network in Classless Inter-Domain Routing (CIDR) notation. If your company allocates addresses from a range, specify the entire range, such as 203.0.113.0/24. To specify an individual IP address in CIDR notation, choose My IP. This adds the routing prefix /32 to the public IP address. Note For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0) to your instance but only for testing purposes and only for a short time. 7. You can add tags now, or you can add them later. To add a tag, choose Add new tag and enter the tag key and value. 8. Choose Create security group. To create a security group using the command line, see create-security-group (AWS CLI) For more information about security groups, see Work with security groups. Create a security group 14 AWS Batch User Guide Install the AWS CLI To use the AWS CLI with AWS Batch, install the latest"} +{"global_id": 424, "doc_id": "batch", "chunk_id": "15", "question_id": 1, "question": "What is the purpose of the AWS Batch first-run wizard?", "answer_span": "You can use the AWS Batch first-run wizard to get started quickly with AWS Batch.", "chunk": "security group. To create a security group using the command line, see create-security-group (AWS CLI) For more information about security groups, see Work with security groups. Create a security group 14 AWS Batch User Guide Install the AWS CLI To use the AWS CLI with AWS Batch, install the latest AWS CLI version. For information about installing the AWS CLI or upgrading it to the latest version, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. Install the AWS CLI 15 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition, and a job queue. You can also submit a sample \"Hello World\" job using the AWS Batch first-run wizard to test your configuration. If you already have a Docker image that you want to launch in AWS Batch, you can use that image to create a job definition. 
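If you already have a Docker image, the job definition for it can likewise be registered programmatically instead of through the wizard. A hedged sketch using boto3; the job definition name, image URI, and resource sizes below are hypothetical:

```python
import boto3

batch = boto3.client("batch")

# Register a job definition that points at an image you already have.
# The ECR image URI is a placeholder.
batch.register_job_definition(
    jobDefinitionName="my-app-job-def",
    type="container",
    containerProperties={
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
        "command": ["echo", "hello world"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "1"},
            {"type": "MEMORY", "value": "2048"},
        ],
    },
)
```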
Afterward, you can use the AWS Batch first-run wizard to create a compute environment, job queue, and submit a sample Hello World job. Getting started with Amazon EC2 orchestration using the Wizard Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup"} +{"global_id": 425, "doc_id": "batch", "chunk_id": "15", "question_id": 2, "question": "What should you do after completing the prerequisites for AWS Batch?", "answer_span": "After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition, and a job queue.", "chunk": "security group. To create a security group using the command line, see create-security-group (AWS CLI) For more information about security groups, see Work with security groups. Create a security group 14 AWS Batch User Guide Install the AWS CLI To use the AWS CLI with AWS Batch, install the latest AWS CLI version. For information about installing the AWS CLI or upgrading it to the latest version, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. Install the AWS CLI 15 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition, and a job queue. You can also submit a sample \"Hello World\" job using the AWS Batch first-run wizard to test your configuration. If you already have a Docker image that you want to launch in AWS Batch, you can use that image to create a job definition. Afterward, you can use the AWS Batch first-run wizard to create a compute environment, job queue, and submit a sample Hello World job. Getting started with Amazon EC2 orchestration using the Wizard Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup"} +{"global_id": 426, "doc_id": "batch", "chunk_id": "15", "question_id": 3, "question": "What does Amazon EC2 provide?", "answer_span": "Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud.", "chunk": "security group. To create a security group using the command line, see create-security-group (AWS CLI) For more information about security groups, see Work with security groups. Create a security group 14 AWS Batch User Guide Install the AWS CLI To use the AWS CLI with AWS Batch, install the latest AWS CLI version. 
For information about installing the AWS CLI or upgrading it to the latest version, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. Install the AWS CLI 15 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition, and a job queue. You can also submit a sample \"Hello World\" job using the AWS Batch first-run wizard to test your configuration. If you already have a Docker image that you want to launch in AWS Batch, you can use that image to create a job definition. Afterward, you can use the AWS Batch first-run wizard to create a compute environment, job queue, and submit a sample Hello World job. Getting started with Amazon EC2 orchestration using the Wizard Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup"} +{"global_id": 427, "doc_id": "batch", "chunk_id": "15", "question_id": 4, "question": "How does using Amazon EC2 benefit application development and deployment?", "answer_span": "Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster.", "chunk": "security group. To create a security group using the command line, see create-security-group (AWS CLI) For more information about security groups, see Work with security groups. Create a security group 14 AWS Batch User Guide Install the AWS CLI To use the AWS CLI with AWS Batch, install the latest AWS CLI version. For information about installing the AWS CLI or upgrading it to the latest version, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. Install the AWS CLI 15 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition, and a job queue. You can also submit a sample \"Hello World\" job using the AWS Batch first-run wizard to test your configuration. If you already have a Docker image that you want to launch in AWS Batch, you can use that image to create a job definition. Afterward, you can use the AWS Batch first-run wizard to create a compute environment, job queue, and submit a sample Hello World job. Getting started with Amazon EC2 orchestration using the Wizard Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. 
Overview This tutorial demonstrates how to setup"} +{"global_id": 428, "doc_id": "batch", "chunk_id": "16", "question_id": 1, "question": "What does Amazon EC2 enable you to do?", "answer_span": "Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.", "chunk": "to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup AWS Batch with the Wizard to configure Amazon EC2 and run Hello World. Intended Audience This tutorial is designed for system administrators and developers responsible for setting up, testing, and deploying AWS Batch. Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition • Create and submit a job to run • View the output of the job in CloudWatch Time Required It should take about 10–15 minutes to complete this tutorial. Regional Restrictions There are no country or regional restrictions associated with using this solution. Resource Usage Costs There's no charge for creating an AWS account. However, by implementing this solution, you might incur some or all of the costs that are listed in the following table. Description Cost (US dollars) Amazon EC2 instance You pay for each Amazon EC2 instance that is created. For more information about pricing, see Amazon EC2 Pricing. Prerequisites Before you begin: • Create an AWS account if you don't have one. • Create the ecsInstanceRole Instance role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a"} +{"global_id": 429, "doc_id": "batch", "chunk_id": "16", "question_id": 2, "question": "Who is the intended audience for this tutorial?", "answer_span": "This tutorial is designed for system administrators and developers responsible for setting up, testing, and deploying AWS Batch.", "chunk": "to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup AWS Batch with the Wizard to configure Amazon EC2 and run Hello World. Intended Audience This tutorial is designed for system administrators and developers responsible for setting up, testing, and deploying AWS Batch. Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition • Create and submit a job to run • View the output of the job in CloudWatch Time Required It should take about 10–15 minutes to complete this tutorial. Regional Restrictions There are no country or regional restrictions associated with using this solution. 
Resource Usage Costs There's no charge for creating an AWS account. However, by implementing this solution, you might incur some or all of the costs that are listed in the following table. Description Cost (US dollars) Amazon EC2 instance You pay for each Amazon EC2 instance that is created. For more information about pricing, see Amazon EC2 Pricing. Prerequisites Before you begin: • Create an AWS account if you don't have one. • Create the ecsInstanceRole Instance role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a"} +{"global_id": 430, "doc_id": "batch", "chunk_id": "16", "question_id": 3, "question": "How long is it expected to take to complete this tutorial?", "answer_span": "It should take about 10–15 minutes to complete this tutorial.", "chunk": "to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup AWS Batch with the Wizard to configure Amazon EC2 and run Hello World. Intended Audience This tutorial is designed for system administrators and developers responsible for setting up, testing, and deploying AWS Batch. Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition • Create and submit a job to run • View the output of the job in CloudWatch Time Required It should take about 10–15 minutes to complete this tutorial. Regional Restrictions There are no country or regional restrictions associated with using this solution. Resource Usage Costs There's no charge for creating an AWS account. However, by implementing this solution, you might incur some or all of the costs that are listed in the following table. Description Cost (US dollars) Amazon EC2 instance You pay for each Amazon EC2 instance that is created. For more information about pricing, see Amazon EC2 Pricing. Prerequisites Before you begin: • Create an AWS account if you don't have one. • Create the ecsInstanceRole Instance role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a"} +{"global_id": 431, "doc_id": "batch", "chunk_id": "16", "question_id": 4, "question": "What is a prerequisite before starting the tutorial?", "answer_span": "Create an AWS account if you don't have one.", "chunk": "to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup AWS Batch with the Wizard to configure Amazon EC2 and run Hello World. 
Intended Audience This tutorial is designed for system administrators and developers responsible for setting up, testing, and deploying AWS Batch. Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition • Create and submit a job to run • View the output of the job in CloudWatch Time Required It should take about 10–15 minutes to complete this tutorial. Regional Restrictions There are no country or regional restrictions associated with using this solution. Resource Usage Costs There's no charge for creating an AWS account. However, by implementing this solution, you might incur some or all of the costs that are listed in the following table. Description Cost (US dollars) Amazon EC2 instance You pay for each Amazon EC2 instance that is created. For more information about pricing, see Amazon EC2 Pricing. Prerequisites Before you begin: • Create an AWS account if you don't have one. • Create the ecsInstanceRole Instance role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a"} +{"global_id": 432, "doc_id": "batch", "chunk_id": "17", "question_id": 1, "question": "What is the first step to create a compute environment?", "answer_span": "Step 1: Create a compute environment", "chunk": "role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a compute environment for an Amazon EC2 orchestration, do the following: Prerequisites 17 AWS Batch User Guide 1. Open the AWS Batch console first-run wizard. 2. For Configure job and orchestration type, choose Amazon Elastic Compute Cloud(Amazon EC2). 3. Choose Next. 4. In the Compute environment configuration section for Name, specify a unique name for your compute environment. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 5. For Instance role, choose an existing instance role that has the required IAM permissions attached. This instance role allows the Amazon ECS container instances in your compute environment to make calls to the required AWS API operations. For more information, see Amazon ECS instance role. The default name of the Instance role is ecsInstanceRole. 6. For Instance configuration you can leave the default settings. 7. For Network configuration use your default VPC for the AWS Region. 8. Choose Next. Step 2: Create a job queue A job queue stores your submitted jobs until the AWS Batch Scheduler runs the job on a resource in your compute environment. For more information, see Job queues To create a job queue for an Amazon EC2 orchestration, do the following: 1. For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. 
For all other configuration"} +{"global_id": 433, "doc_id": "batch", "chunk_id": "17", "question_id": 2, "question": "What should you do before creating for production use?", "answer_span": "we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements.", "chunk": "role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a compute environment for an Amazon EC2 orchestration, do the following: Prerequisites 17 AWS Batch User Guide 1. Open the AWS Batch console first-run wizard. 2. For Configure job and orchestration type, choose Amazon Elastic Compute Cloud(Amazon EC2). 3. Choose Next. 4. In the Compute environment configuration section for Name, specify a unique name for your compute environment. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 5. For Instance role, choose an existing instance role that has the required IAM permissions attached. This instance role allows the Amazon ECS container instances in your compute environment to make calls to the required AWS API operations. For more information, see Amazon ECS instance role. The default name of the Instance role is ecsInstanceRole. 6. For Instance configuration you can leave the default settings. 7. For Network configuration use your default VPC for the AWS Region. 8. Choose Next. Step 2: Create a job queue A job queue stores your submitted jobs until the AWS Batch Scheduler runs the job on a resource in your compute environment. For more information, see Job queues To create a job queue for an Amazon EC2 orchestration, do the following: 1. For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration"} +{"global_id": 434, "doc_id": "batch", "chunk_id": "17", "question_id": 3, "question": "What is the default name of the Instance role?", "answer_span": "The default name of the Instance role is ecsInstanceRole.", "chunk": "role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a compute environment for an Amazon EC2 orchestration, do the following: Prerequisites 17 AWS Batch User Guide 1. Open the AWS Batch console first-run wizard. 2. For Configure job and orchestration type, choose Amazon Elastic Compute Cloud(Amazon EC2). 3. Choose Next. 4. In the Compute environment configuration section for Name, specify a unique name for your compute environment. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 5. For Instance role, choose an existing instance role that has the required IAM permissions attached. This instance role allows the Amazon ECS container instances in your compute environment to make calls to the required AWS API operations. For more information, see Amazon ECS instance role. 
The default name of the Instance role is ecsInstanceRole. 6. For Instance configuration you can leave the default settings. 7. For Network configuration use your default VPC for the AWS Region. 8. Choose Next. Step 2: Create a job queue A job queue stores your submitted jobs until the AWS Batch Scheduler runs the job on a resource in your compute environment. For more information, see Job queues To create a job queue for an Amazon EC2 orchestration, do the following: 1. For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration"} +{"global_id": 435, "doc_id": "batch", "chunk_id": "17", "question_id": 4, "question": "What does a job queue do in AWS Batch?", "answer_span": "A job queue stores your submitted jobs until the AWS Batch Scheduler runs the job on a resource in your compute environment.", "chunk": "role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a compute environment for an Amazon EC2 orchestration, do the following: Prerequisites 17 AWS Batch User Guide 1. Open the AWS Batch console first-run wizard. 2. For Configure job and orchestration type, choose Amazon Elastic Compute Cloud(Amazon EC2). 3. Choose Next. 4. In the Compute environment configuration section for Name, specify a unique name for your compute environment. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 5. For Instance role, choose an existing instance role that has the required IAM permissions attached. This instance role allows the Amazon ECS container instances in your compute environment to make calls to the required AWS API operations. For more information, see Amazon ECS instance role. The default name of the Instance role is ecsInstanceRole. 6. For Instance configuration you can leave the default settings. 7. For Network configuration use your default VPC for the AWS Region. 8. Choose Next. Step 2: Create a job queue A job queue stores your submitted jobs until the AWS Batch Scheduler runs the job on a resource in your compute environment. For more information, see Job queues To create a job queue for an Amazon EC2 orchestration, do the following: 1. For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration"} +{"global_id": 436, "doc_id": "batch", "chunk_id": "18", "question_id": 1, "question": "What is the maximum length for the job queue name?", "answer_span": "The name can be up to 128 characters in length.", "chunk": "an Amazon EC2 orchestration, do the following: 1. For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 3: Create a job definition AWS Batch job definitions specify how jobs are to be run. 
Even though each job must reference a job definition, many of the parameters that are specified in the job definition can be overridden at runtime. Step 2: Create a job queue 18 AWS Batch User Guide To create the job definition: 1. For Create a job definition a. for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). b. For Command - optional you can change hello world to a custom message or leave it as is. 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 4: Create a job To create a job, do the following: 1. In the Job configuration section for Name, specify a unique name for the job. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. A window opens as AWS"} +{"global_id": 437, "doc_id": "batch", "chunk_id": "18", "question_id": 2, "question": "What can the job queue name contain?", "answer_span": "It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).", "chunk": "an Amazon EC2 orchestration, do the following: 1. For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 3: Create a job definition AWS Batch job definitions specify how jobs are to be run. Even though each job must reference a job definition, many of the parameters that are specified in the job definition can be overridden at runtime. Step 2: Create a job queue 18 AWS Batch User Guide To create the job definition: 1. For Create a job definition a. for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). b. For Command - optional you can change hello world to a custom message or leave it as is. 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 4: Create a job To create a job, do the following: 1. In the Job configuration section for Name, specify a unique name for the job. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. A window opens as AWS"} +{"global_id": 438, "doc_id": "batch", "chunk_id": "18", "question_id": 3, "question": "What should you do if you need to make changes on the Review and create page?", "answer_span": "If you need to make changes, choose Edit.", "chunk": "an Amazon EC2 orchestration, do the following: 1. 
For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 3: Create a job definition AWS Batch job definitions specify how jobs are to be run. Even though each job must reference a job definition, many of the parameters that are specified in the job definition can be overridden at runtime. Step 2: Create a job queue 18 AWS Batch User Guide To create the job definition: 1. For Create a job definition a. for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). b. For Command - optional you can change hello world to a custom message or leave it as is. 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 4: Create a job To create a job, do the following: 1. In the Job configuration section for Name, specify a unique name for the job. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. A window opens as AWS"} +{"global_id": 439, "doc_id": "batch", "chunk_id": "18", "question_id": 4, "question": "What is specified in AWS Batch job definitions?", "answer_span": "AWS Batch job definitions specify how jobs are to be run.", "chunk": "an Amazon EC2 orchestration, do the following: 1. For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 3: Create a job definition AWS Batch job definitions specify how jobs are to be run. Even though each job must reference a job definition, many of the parameters that are specified in the job definition can be overridden at runtime. Step 2: Create a job queue 18 AWS Batch User Guide To create the job definition: 1. For Create a job definition a. for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). b. For Command - optional you can change hello world to a custom message or leave it as is. 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 4: Create a job To create a job, do the following: 1. In the Job configuration section for Name, specify a unique name for the job. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. 
A window opens as AWS"} +{"global_id": 440, "doc_id": "batch", "chunk_id": "19", "question_id": 1, "question": "What should you do if you need to make changes on the Review and create page?", "answer_span": "If you need to make changes, choose Edit.", "chunk": "leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. A window opens as AWS Batch starts to allocate your resources. Once complete choose Go to dashboard. On the dashboard you should see all of your allocated resources and that the job is in the Runnable state. Your job is scheduled to run and should complete in 2–3 minutes. Step 6: View the Job's output To view the Job's output, do the following: Step 4: Create a job 19 AWS Batch User Guide 1. In the navigation pane choose Jobs. 2. In the Job queue drop down choose the Job queue you created for the tutorial. 3. The Jobs table lists all of your Jobs and what their current status is. Once the Job's Status is Succeeded choose the Name of the Job to view the Job's details. 4. In the Details pane choose Log stream name. The CloudWatch console for the Job will open and there should be one event with the Message of hello world or your custom message. Step 7: Clean up your tutorial resources You are charged for the Amazon EC2 instance while it is enabled. You can delete the instance to stop incurring charges. To delete the resources you created, do the following: 1. In the navigation pane choose Job queue. 2. In the Job queue table choose the Job queue you created for the tutorial. 3. Choose Disable. Once the Job queue State is Disabled you can choose Delete. 4. Once the Job queue is deleted, in the navigation pane choose Compute environments. 5. Choose the compute environment you"} +{"global_id": 441, "doc_id": "batch", "chunk_id": "19", "question_id": 2, "question": "What happens after you choose Create resources?", "answer_span": "A window opens as AWS Batch starts to allocate your resources.", "chunk": "leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. A window opens as AWS Batch starts to allocate your resources. Once complete choose Go to dashboard. On the dashboard you should see all of your allocated resources and that the job is in the Runnable state. Your job is scheduled to run and should complete in 2–3 minutes. Step 6: View the Job's output To view the Job's output, do the following: Step 4: Create a job 19 AWS Batch User Guide 1. In the navigation pane choose Jobs. 2. In the Job queue drop down choose the Job queue you created for the tutorial. 3. The Jobs table lists all of your Jobs and what their current status is. Once the Job's Status is Succeeded choose the Name of the Job to view the Job's details. 4. In the Details pane choose Log stream name. The CloudWatch console for the Job will open and there should be one event with the Message of hello world or your custom message. Step 7: Clean up your tutorial resources You are charged for the Amazon EC2 instance while it is enabled. You can delete the instance to stop incurring charges. To delete the resources you created, do the following: 1. In the navigation pane choose Job queue. 2.
In the Job queue table choose the Job queue you created for the tutorial. 3. Choose Disable. Once the Job queue State is Disabled you can choose Delete. 4. Once the Job queue is deleted, in the navigation pane choose Compute environments. 5. Choose the compute environment you"} +{"global_id": 442, "doc_id": "batch", "chunk_id": "19", "question_id": 3, "question": "How can you view the Job's output?", "answer_span": "To view the Job's output, do the following: Step 4: Create a job.", "chunk": "leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. A window opens as AWS Batch starts to allocate your resources. Once complete choose Go to dashboard. On the dashboard you should see all of your allocated resources and that the job is in the Runnable state. Your job is scheduled to run and should complete in 2–3 minutes. Step 6: View the Job's output To view the Job's output, do the following: Step 4: Create a job 19 AWS Batch User Guide 1. In the navigation pane choose Jobs. 2. In the Job queue drop down choose the Job queue you created for the tutorial. 3. The Jobs table lists all of your Jobs and what their current status is. Once the Job's Status is Succeeded choose the Name of the Job to view the Job's details. 4. In the Details pane choose Log stream name. The CloudWatch console for the Job will open and there should be one event with the Message of hello world or your custom message. Step 7: Clean up your tutorial resources You are charged for the Amazon EC2 instance while it is enabled. You can delete the instance to stop incurring charges. To delete the resources you created, do the following: 1. In the navigation pane choose Job queue. 2. In the Job queue table choose the Job queue you created for the tutorial. 3. Choose Disable. Once the Job queue State is Disabled you can choose Delete. 4. Once the Job queue is deleted, in the navigation pane choose Compute environments. 5. Choose the compute environment you"} +{"global_id": 443, "doc_id": "batch", "chunk_id": "19", "question_id": 4, "question": "What should you do to stop incurring charges for the Amazon EC2 instance?", "answer_span": "You can delete the instance to stop incurring charges.", "chunk": "leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. A window opens as AWS Batch starts to allocate your resources. Once complete choose Go to dashboard. On the dashboard you should see all of your allocated resources and that the job is in the Runnable state. Your job is scheduled to run and should complete in 2–3 minutes. Step 6: View the Job's output To view the Job's output, do the following: Step 4: Create a job 19 AWS Batch User Guide 1. In the navigation pane choose Jobs. 2. In the Job queue drop down choose the Job queue you created for the tutorial. 3. The Jobs table lists all of your Jobs and what their current status is. Once the Job's Status is Succeeded choose the Name of the Job to view the Job's details. 4. In the Details pane choose Log stream name. The CloudWatch console for the Job will open and there should be one event with the Message of hello world or your custom message.
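The submit-and-inspect flow that the wizard walks through can also be reproduced with boto3. By default, Batch jobs write to the /aws/batch/job CloudWatch log group, and DescribeJobs reports the log stream name once the job has run. A sketch, assuming the hypothetical queue and job definition names used in the earlier examples:

```python
import time
import boto3

batch = boto3.client("batch")
logs = boto3.client("logs")

job = batch.submit_job(
    jobName="hello-world",
    jobQueue="batch-tutorial-queue",  # placeholder queue name
    jobDefinition="my-app-job-def",   # placeholder job definition
)

# Wait for a terminal state; a Runnable job usually finishes in a few minutes
while True:
    detail = batch.describe_jobs(jobs=[job["jobId"]])["jobs"][0]
    if detail["status"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(15)

# Batch jobs log to the /aws/batch/job log group by default
stream = detail["container"]["logStreamName"]
for event in logs.get_log_events(
    logGroupName="/aws/batch/job", logStreamName=stream
)["events"]:
    print(event["message"])  # "hello world" or your custom message
```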
Step 7: Clean up your tutorial resources You are charged for the Amazon EC2 instance while it is enabled. You can delete the instance to stop incurring charges. To delete the resources you created, do the following: 1. In the navigation pane choose Job queue. 2. In the Job queue table choose the Job queue you created for the tutorial. 3. Choose Disable. Once the Job queue State is Disabled you can choose Delete. 4. Once the Job queue is deleted, in the navigation pane choose Compute environments. 5. Choose the compute environment you"} +{"global_id": 444, "doc_id": "batch", "chunk_id": "20", "question_id": 1, "question": "What should you choose after selecting the Job queue you created for the tutorial?", "answer_span": "Choose Disable.", "chunk": "queue. 2. In the Job queue table choose the Job queue you created for the tutorial. 3. Choose Disable. Once the Job queue State is Disabled you can choose Delete. 4. Once the Job queue is deleted, in the navigation pane choose Compute environments. 5. Choose the compute environment you created for this tutorial and then choose Disable. It may take 1–2 minutes for the compute environment to complete being disabled. 6. Once the compute environment’s State is Disabled, choose Delete. It may take 1–2 minutes for the compute environment to be deleted. Additional resources After you complete the tutorial, you might want to explore the following topics: • Explore the AWS Batch core components. For more information, see Components of AWS Batch. • Learn more about the different Compute Environments available in AWS Batch. • Learn more about Job queues and their different scheduling options. • Learn more about Job definitions and the different configuration options. • Learn more about the different types of Jobs. Step 7: Clean up your tutorial resources 20"} +{"global_id": 445, "doc_id": "batch", "chunk_id": "20", "question_id": 2, "question": "What do you do once the Job queue State is Disabled?", "answer_span": "you can choose Delete.", "chunk": "queue. 2. In the Job queue table choose the Job queue you created for the tutorial. 3. Choose Disable. Once the Job queue State is Disabled you can choose Delete. 4.
Once the Job queue is deleted, in the navigation pane choose Compute environments. 5. Choose the compute environment you created for this tutorial and then choose Disable. It may take 1–2 minutes for the compute environment to complete being disabled. 6. Once the compute environment’s State is Disabled, choose Delete. It may take 1–2 minutes for the compute environment to be deleted. Additional resources After you complete the tutorial, you might want to explore the following topics: • Explore the AWS Batch core components. For more information, see Components of AWS Batch. • Learn more about the different Compute Environments available in AWS Batch. • Learn more about Job queues and their different scheduling options. • Learn more about Job definitions and the different configuration options. • Learn more about the different types of Jobs. Step 7: Clean up your tutorial resources 20"} +{"global_id": 447, "doc_id": "batch", "chunk_id": "20", "question_id": 4, "question": "What topics might you want to explore after completing the tutorial?", "answer_span": "you might want to explore the following topics: • Explore the AWS Batch core components.", "chunk": "queue. 2. In the Job queue table choose the Job queue you created for the tutorial. 3. Choose Disable. Once the Job queue State is Disabled you can choose Delete. 4. Once the Job queue is deleted, in the navigation pane choose Compute environments. 5. Choose the compute environment you created for this tutorial and then choose Disable. It may take 1–2 minutes for the compute environment to complete being disabled. 6. Once the compute environment’s State is Disabled, choose Delete. It may take 1–2 minutes for the compute environment to be deleted. Additional resources After you complete the tutorial, you might want to explore the following topics: • Explore the AWS Batch core components. For more information, see Components of AWS Batch. • Learn more about the different Compute Environments available in AWS Batch. • Learn more about Job queues and their different scheduling options. • Learn more about Job definitions and the different configuration options. • Learn more about the different types of Jobs. Step 7: Clean up your tutorial resources 20"} +{"global_id": 448, "doc_id": "eks", "chunk_id": "0", "question_id": 1, "question": "What does Amazon EKS provide?", "answer_span": "Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service that eliminates the complexity of operating Kubernetes clusters.", "chunk": "Amazon EKS User Guide What is Amazon EKS? Amazon EKS: Simplified Kubernetes Management Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service that eliminates the complexity of operating Kubernetes clusters. With EKS, you can: • Deploy applications faster with less operational overhead • Scale seamlessly to meet changing workload demands • Improve security through AWS integration and automated updates • Choose between standard EKS or fully automated EKS Auto Mode Amazon Elastic Kubernetes Service (Amazon EKS) is the premier platform for running Kubernetes clusters, both in the Amazon Web Services (AWS) cloud and in your own data centers (EKS Anywhere and Amazon EKS Hybrid Nodes). Amazon EKS simplifies building, securing, and maintaining Kubernetes clusters. It can be more cost effective at providing enough resources to meet peak demand than maintaining your own data centers. 
Two of the main approaches to using Amazon EKS are as follows: • EKS standard: AWS manages the Kubernetes control plane when you create a cluster with EKS. Components that manage nodes, schedule workloads, integrate with the AWS cloud, and store and scale control plane information to keep your clusters up and running, are handled for you automatically. • EKS Auto Mode: Using the EKS Auto Mode feature, EKS extends its control to manage Nodes (Kubernetes data plane) as well. It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, patching operating systems, and integrating with AWS security services. The following diagram illustrates how Amazon EKS integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic"} +{"global_id": 449, "doc_id": "eks", "chunk_id": "0", "question_id": 2, "question": "What are two main approaches to using Amazon EKS?", "answer_span": "Two of the main approaches to using Amazon EKS are as follows: • EKS standard: AWS manages the Kubernetes control plane when you create a cluster with EKS. • EKS Auto Mode: Using the EKS Auto Mode feature, EKS extends its control to manage Nodes (Kubernetes data plane) as well.", "chunk": "Amazon EKS User Guide What is Amazon EKS? Amazon EKS: Simplified Kubernetes Management Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service that eliminates the complexity of operating Kubernetes clusters. With EKS, you can: • Deploy applications faster with less operational overhead • Scale seamlessly to meet changing workload demands • Improve security through AWS integration and automated updates • Choose between standard EKS or fully automated EKS Auto Mode Amazon Elastic Kubernetes Service (Amazon EKS) is the premier platform for running Kubernetes clusters, both in the Amazon Web Services (AWS) cloud and in your own data centers (EKS Anywhere and Amazon EKS Hybrid Nodes). Amazon EKS simplifies building, securing, and maintaining Kubernetes clusters. It can be more cost effective at providing enough resources to meet peak demand than maintaining your own data centers. Two of the main approaches to using Amazon EKS are as follows: • EKS standard: AWS manages the Kubernetes control plane when you create a cluster with EKS. Components that manage nodes, schedule workloads, integrate with the AWS cloud, and store and scale control plane information to keep your clusters up and running, are handled for you automatically. • EKS Auto Mode: Using the EKS Auto Mode feature, EKS extends its control to manage Nodes (Kubernetes data plane) as well. It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, patching operating systems, and integrating with AWS security services. 
The following diagram illustrates how Amazon EKS integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic"} +{"global_id": 450, "doc_id": "eks", "chunk_id": "0", "question_id": 3, "question": "How does Amazon EKS help with application deployment?", "answer_span": "With EKS, you can: • Deploy applications faster with less operational overhead.", "chunk": "Amazon EKS User Guide What is Amazon EKS? Amazon EKS: Simplified Kubernetes Management Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service that eliminates the complexity of operating Kubernetes clusters. With EKS, you can: • Deploy applications faster with less operational overhead • Scale seamlessly to meet changing workload demands • Improve security through AWS integration and automated updates • Choose between standard EKS or fully automated EKS Auto Mode Amazon Elastic Kubernetes Service (Amazon EKS) is the premier platform for running Kubernetes clusters, both in the Amazon Web Services (AWS) cloud and in your own data centers (EKS Anywhere and Amazon EKS Hybrid Nodes). Amazon EKS simplifies building, securing, and maintaining Kubernetes clusters. It can be more cost effective at providing enough resources to meet peak demand than maintaining your own data centers. Two of the main approaches to using Amazon EKS are as follows: • EKS standard: AWS manages the Kubernetes control plane when you create a cluster with EKS. Components that manage nodes, schedule workloads, integrate with the AWS cloud, and store and scale control plane information to keep your clusters up and running, are handled for you automatically. • EKS Auto Mode: Using the EKS Auto Mode feature, EKS extends its control to manage Nodes (Kubernetes data plane) as well. It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, patching operating systems, and integrating with AWS security services. The following diagram illustrates how Amazon EKS integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic"} +{"global_id": 451, "doc_id": "eks", "chunk_id": "0", "question_id": 4, "question": "What is the benefit of using EKS Auto Mode?", "answer_span": "It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, patching operating systems, and integrating with AWS security services.", "chunk": "Amazon EKS User Guide What is Amazon EKS? Amazon EKS: Simplified Kubernetes Management Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service that eliminates the complexity of operating Kubernetes clusters. 
With EKS, you can: • Deploy applications faster with less operational overhead • Scale seamlessly to meet changing workload demands • Improve security through AWS integration and automated updates • Choose between standard EKS or fully automated EKS Auto Mode Amazon Elastic Kubernetes Service (Amazon EKS) is the premier platform for running Kubernetes clusters, both in the Amazon Web Services (AWS) cloud and in your own data centers (EKS Anywhere and Amazon EKS Hybrid Nodes). Amazon EKS simplifies building, securing, and maintaining Kubernetes clusters. It can be more cost effective at providing enough resources to meet peak demand than maintaining your own data centers. Two of the main approaches to using Amazon EKS are as follows: • EKS standard: AWS manages the Kubernetes control plane when you create a cluster with EKS. Components that manage nodes, schedule workloads, integrate with the AWS cloud, and store and scale control plane information to keep your clusters up and running, are handled for you automatically. • EKS Auto Mode: Using the EKS Auto Mode feature, EKS extends its control to manage Nodes (Kubernetes data plane) as well. It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, patching operating systems, and integrating with AWS security services. The following diagram illustrates how Amazon EKS integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic"} +{"global_id": 452, "doc_id": "eks", "chunk_id": "1", "question_id": 1, "question": "What does Amazon EKS help you accelerate?", "answer_span": "Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security.", "chunk": "integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic Kubernetes Service. Features of Amazon EKS Amazon EKS provides the following high-level features: Management interfaces EKS offers multiple interfaces to provision, manage, and maintain clusters, including AWS Management Console, Amazon EKS API/SDKs, CDK, AWS CLI, eksctl CLI, AWS CloudFormation, and Terraform. For more information, see Get started and Configure clusters. Features of Amazon EKS 2 Amazon EKS User Guide Access control tools EKS relies on both Kubernetes and AWS Identity and Access Management (AWS IAM) features to manage access from users and workloads. For more information, see the section called “Kubernetes API access” and the section called “Workload access to AWS”. Compute resources For compute resources, EKS allows the full range of Amazon EC2 instance types and AWS innovations such as Nitro and Graviton with Amazon EKS for you to optimize the compute for your workloads. For more information, see Manage compute. Storage EKS Auto Mode automatically creates storage classes using EBS volumes. 
Using Container Storage Interface (CSI) drivers, you can also use Amazon S3, Amazon EFS, Amazon FSx, and Amazon File Cache for your application storage needs. For more information, see App data storage. Security The shared responsibility model is employed as it relates to Security in Amazon EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, CloudTrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics"} +{"global_id": 453, "doc_id": "eks", "chunk_id": "1", "question_id": 2, "question": "What management interfaces does EKS offer?", "answer_span": "EKS offers multiple interfaces to provision, manage, and maintain clusters, including AWS Management Console, Amazon EKS API/SDKs, CDK, AWS CLI, eksctl CLI, AWS CloudFormation, and Terraform.", "chunk": "integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic Kubernetes Service. Features of Amazon EKS Amazon EKS provides the following high-level features: Management interfaces EKS offers multiple interfaces to provision, manage, and maintain clusters, including AWS Management Console, Amazon EKS API/SDKs, CDK, AWS CLI, eksctl CLI, AWS CloudFormation, and Terraform. For more information, see Get started and Configure clusters. Features of Amazon EKS 2 Amazon EKS User Guide Access control tools EKS relies on both Kubernetes and AWS Identity and Access Management (AWS IAM) features to manage access from users and workloads. For more information, see the section called “Kubernetes API access” and the section called “Workload access to AWS”. Compute resources For compute resources, EKS allows the full range of Amazon EC2 instance types and AWS innovations such as Nitro and Graviton with Amazon EKS for you to optimize the compute for your workloads. For more information, see Manage compute. Storage EKS Auto Mode automatically creates storage classes using EBS volumes. Using Container Storage Interface (CSI) drivers, you can also use Amazon S3, Amazon EFS, Amazon FSx, and Amazon File Cache for your application storage needs. For more information, see App data storage. Security The shared responsibility model is employed as it relates to Security in Amazon EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, CloudTrail, and ADOT Operator. 
For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics"} +{"global_id": 454, "doc_id": "eks", "chunk_id": "1", "question_id": 3, "question": "What compute resources does EKS allow?", "answer_span": "For compute resources, EKS allows the full range of Amazon EC2 instance types and AWS innovations such as Nitro and Graviton with Amazon EKS for you to optimize the compute for your workloads.", "chunk": "integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic Kubernetes Service. Features of Amazon EKS Amazon EKS provides the following high-level features: Management interfaces EKS offers multiple interfaces to provision, manage, and maintain clusters, including AWS Management Console, Amazon EKS API/SDKs, CDK, AWS CLI, eksctl CLI, AWS CloudFormation, and Terraform. For more information, see Get started and Configure clusters. Features of Amazon EKS 2 Amazon EKS User Guide Access control tools EKS relies on both Kubernetes and AWS Identity and Access Management (AWS IAM) features to manage access from users and workloads. For more information, see the section called “Kubernetes API access” and the section called “Workload access to AWS”. Compute resources For compute resources, EKS allows the full range of Amazon EC2 instance types and AWS innovations such as Nitro and Graviton with Amazon EKS for you to optimize the compute for your workloads. For more information, see Manage compute. Storage EKS Auto Mode automatically creates storage classes using EBS volumes. Using Container Storage Interface (CSI) drivers, you can also use Amazon S3, Amazon EFS, Amazon FSx, and Amazon File Cache for your application storage needs. For more information, see App data storage. Security The shared responsibility model is employed as it relates to Security in Amazon EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, CloudTrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics"} +{"global_id": 455, "doc_id": "eks", "chunk_id": "1", "question_id": 4, "question": "What monitoring tools are mentioned for Amazon EKS?", "answer_span": "Monitoring tools include Prometheus, CloudWatch, CloudTrail, and ADOT Operator.", "chunk": "integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic Kubernetes Service. Features of Amazon EKS Amazon EKS provides the following high-level features: Management interfaces EKS offers multiple interfaces to provision, manage, and maintain clusters, including AWS Management Console, Amazon EKS API/SDKs, CDK, AWS CLI, eksctl CLI, AWS CloudFormation, and Terraform. For more information, see Get started and Configure clusters. 
Features of Amazon EKS 2 Amazon EKS User Guide Access control tools EKS relies on both Kubernetes and AWS Identity and Access Management (AWS IAM) features to manage access from users and workloads. For more information, see the section called “Kubernetes API access” and the section called “Workload access to AWS”. Compute resources For compute resources, EKS allows the full range of Amazon EC2 instance types and AWS innovations such as Nitro and Graviton with Amazon EKS for you to optimize the compute for your workloads. For more information, see Manage compute. Storage EKS Auto Mode automatically creates storage classes using EBS volumes. Using Container Storage Interface (CSI) drivers, you can also use Amazon S3, Amazon EFS, Amazon FSx, and Amazon File Cache for your application storage needs. For more information, see App data storage. Security The shared responsibility model is employed as it relates to Security in Amazon EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, CloudTrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics"} +{"global_id": 456, "doc_id": "eks", "chunk_id": "2", "question_id": 1, "question": "What tools are included for monitoring Amazon EKS clusters?", "answer_span": "Monitoring tools include Prometheus, CloudWatch, CloudTrail, and ADOT Operator.", "chunk": "EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, CloudTrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics Server. Kubernetes compatibility and support Amazon EKS is certified Kubernetes-conformant, so you can deploy Kubernetes-compatible applications without refactoring and use Kubernetes community tooling and plugins. EKS offers both standard support and extended support for Kubernetes. For more information, see Understand the Kubernetes version lifecycle on EKS. Related services Services to use with Amazon EKS Related services 3 Amazon EKS User Guide You can use other AWS services with the clusters that you deploy using Amazon EKS: Amazon EC2 Obtain on-demand, scalable compute capacity with Amazon EC2. Amazon EBS Attach scalable, high-performance block storage resources with Amazon EBS. Amazon ECR Store container images securely with Amazon ECR. Amazon CloudWatch Monitor AWS resources and applications in real time with Amazon CloudWatch. Amazon Prometheus Track metrics for containerized applications with Amazon Managed Service for Prometheus. Elastic Load Balancing Distribute incoming traffic across multiple targets with Elastic Load Balancing. Amazon GuardDuty Detect threats to EKS clusters with Amazon GuardDuty. AWS Resilience Hub Assess EKS cluster resiliency with AWS Resilience Hub. Amazon EKS Pricing Amazon EKS has per cluster pricing based on Kubernetes cluster version support, pricing for Amazon EKS Auto Mode, and per vCPU pricing for Amazon EKS Hybrid Nodes. 
When using Amazon EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the"} +{"global_id": 457, "doc_id": "eks", "chunk_id": "2", "question_id": 2, "question": "What type of support does Amazon EKS offer for Kubernetes?", "answer_span": "EKS offers both standard support and extended support for Kubernetes.", "chunk": "EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, CloudTrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics Server. Kubernetes compatibility and support Amazon EKS is certified Kubernetes-conformant, so you can deploy Kubernetes-compatible applications without refactoring and use Kubernetes community tooling and plugins. EKS offers both standard support and extended support for Kubernetes. For more information, see Understand the Kubernetes version lifecycle on EKS. Related services Services to use with Amazon EKS Related services 3 Amazon EKS User Guide You can use other AWS services with the clusters that you deploy using Amazon EKS: Amazon EC2 Obtain on-demand, scalable compute capacity with Amazon EC2. Amazon EBS Attach scalable, high-performance block storage resources with Amazon EBS. Amazon ECR Store container images securely with Amazon ECR. Amazon CloudWatch Monitor AWS resources and applications in real time with Amazon CloudWatch. Amazon Prometheus Track metrics for containerized applications with Amazon Managed Service for Prometheus. Elastic Load Balancing Distribute incoming traffic across multiple targets with Elastic Load Balancing. Amazon GuardDuty Detect threats to EKS clusters with Amazon GuardDuty. AWS Resilience Hub Assess EKS cluster resiliency with AWS Resilience Hub. Amazon EKS Pricing Amazon EKS has per cluster pricing based on Kubernetes cluster version support, pricing for Amazon EKS Auto Mode, and per vCPU pricing for Amazon EKS Hybrid Nodes. When using Amazon EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the"} +{"global_id": 458, "doc_id": "eks", "chunk_id": "2", "question_id": 3, "question": "Which AWS service can be used to store container images securely?", "answer_span": "Amazon ECR Store container images securely with Amazon ECR.", "chunk": "EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, CloudTrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics Server. 
Kubernetes compatibility and support Amazon EKS is certified Kubernetes-conformant, so you can deploy Kubernetes-compatible applications without refactoring and use Kubernetes community tooling and plugins. EKS offers both standard support and extended support for Kubernetes. For more information, see Understand the Kubernetes version lifecycle on EKS. Related services Services to use with Amazon EKS Related services 3 Amazon EKS User Guide You can use other AWS services with the clusters that you deploy using Amazon EKS: Amazon EC2 Obtain on-demand, scalable compute capacity with Amazon EC2. Amazon EBS Attach scalable, high-performance block storage resources with Amazon EBS. Amazon ECR Store container images securely with Amazon ECR. Amazon CloudWatch Monitor AWS resources and applications in real time with Amazon CloudWatch. Amazon Prometheus Track metrics for containerized applications with Amazon Managed Service for Prometheus. Elastic Load Balancing Distribute incoming traffic across multiple targets with Elastic Load Balancing. Amazon GuardDuty Detect threats to EKS clusters with Amazon GuardDuty. AWS Resilience Hub Assess EKS cluster resiliency with AWS Resilience Hub. Amazon EKS Pricing Amazon EKS has per cluster pricing based on Kubernetes cluster version support, pricing for Amazon EKS Auto Mode, and per vCPU pricing for Amazon EKS Hybrid Nodes. When using Amazon EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the"} +{"global_id": 459, "doc_id": "eks", "chunk_id": "2", "question_id": 4, "question": "What is the basis for Amazon EKS pricing?", "answer_span": "Amazon EKS has per cluster pricing based on Kubernetes cluster version support, pricing for Amazon EKS Auto Mode, and per vCPU pricing for Amazon EKS Hybrid Nodes.", "chunk": "EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, CloudTrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics Server. Kubernetes compatibility and support Amazon EKS is certified Kubernetes-conformant, so you can deploy Kubernetes-compatible applications without refactoring and use Kubernetes community tooling and plugins. EKS offers both standard support and extended support for Kubernetes. For more information, see Understand the Kubernetes version lifecycle on EKS. Related services Services to use with Amazon EKS Related services 3 Amazon EKS User Guide You can use other AWS services with the clusters that you deploy using Amazon EKS: Amazon EC2 Obtain on-demand, scalable compute capacity with Amazon EC2. Amazon EBS Attach scalable, high-performance block storage resources with Amazon EBS. Amazon ECR Store container images securely with Amazon ECR. Amazon CloudWatch Monitor AWS resources and applications in real time with Amazon CloudWatch. 
Amazon Prometheus Track metrics for containerized applications with Amazon Managed Service for Prometheus. Elastic Load Balancing Distribute incoming traffic across multiple targets with Elastic Load Balancing. Amazon GuardDuty Detect threats to EKS clusters with Amazon GuardDuty. AWS Resilience Hub Assess EKS cluster resiliency with AWS Resilience Hub. Amazon EKS Pricing Amazon EKS has per cluster pricing based on Kubernetes cluster version support, pricing for Amazon EKS Auto Mode, and per vCPU pricing for Amazon EKS Hybrid Nodes. When using Amazon EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the"} +{"global_id": 460, "doc_id": "eks", "chunk_id": "3", "question_id": 1, "question": "What do you pay for when using EKS?", "answer_span": "EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes.", "chunk": "EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the volume capacity through Amazon EBS, and the IPv4 address through Amazon VPC. Amazon EKS Pricing 4 Amazon EKS User Guide Visit the respective pricing pages of the AWS services you are using with your Kubernetes applications for detailed pricing information. • For Amazon EKS cluster, Amazon EKS Auto Mode, and Amazon EKS Hybrid Nodes pricing, see Amazon EKS Pricing. • For Amazon EC2 pricing, see Amazon EC2 On-Demand Pricing and Amazon EC2 Spot Pricing. • For AWS Fargate pricing, see AWS Fargate Pricing. • You can use your savings plans for compute used in Amazon EKS clusters. For more information, see Pricing with Savings Plans. Common use cases in Amazon EKS Amazon EKS offers robust managed Kubernetes services on AWS, designed to optimize containerized applications. The following are a few of the most common use cases of Amazon EKS, helping you leverage its strengths for your specific needs. Deploying high-availability applications Using Elastic Load Balancing, you can make sure that your applications are highly available across multiple Availability Zones. Building microservices architectures Use Kubernetes service discovery features with AWS Cloud Map or Amazon VPC Lattice to build resilient systems. Automating software release process Manage continuous integration and continuous deployment (CICD) pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. 
Executing machine learning workloads Amazon EKS"} +{"global_id": 461, "doc_id": "eks", "chunk_id": "3", "question_id": 2, "question": "What is one of the common use cases of Amazon EKS?", "answer_span": "Deploying high-availability applications Using Elastic Load Balancing, you can make sure that your applications are highly available across multiple Availability Zones.", "chunk": "EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the volume capacity through Amazon EBS, and the IPv4 address through Amazon VPC. Amazon EKS Pricing 4 Amazon EKS User Guide Visit the respective pricing pages of the AWS services you are using with your Kubernetes applications for detailed pricing information. • For Amazon EKS cluster, Amazon EKS Auto Mode, and Amazon EKS Hybrid Nodes pricing, see Amazon EKS Pricing. • For Amazon EC2 pricing, see Amazon EC2 On-Demand Pricing and Amazon EC2 Spot Pricing. • For AWS Fargate pricing, see AWS Fargate Pricing. • You can use your savings plans for compute used in Amazon EKS clusters. For more information, see Pricing with Savings Plans. Common use cases in Amazon EKS Amazon EKS offers robust managed Kubernetes services on AWS, designed to optimize containerized applications. The following are a few of the most common use cases of Amazon EKS, helping you leverage its strengths for your specific needs. Deploying high-availability applications Using Elastic Load Balancing, you can make sure that your applications are highly available across multiple Availability Zones. Building microservices architectures Use Kubernetes service discovery features with AWS Cloud Map or Amazon VPC Lattice to build resilient systems. Automating software release process Manage continuous integration and continuous deployment (CICD) pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS"} +{"global_id": 462, "doc_id": "eks", "chunk_id": "3", "question_id": 3, "question": "How can you run serverless applications with Amazon EKS?", "answer_span": "Use AWS Fargate with Amazon EKS to run serverless applications.", "chunk": "EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the volume capacity through Amazon EBS, and the IPv4 address through Amazon VPC. Amazon EKS Pricing 4 Amazon EKS User Guide Visit the respective pricing pages of the AWS services you are using with your Kubernetes applications for detailed pricing information. • For Amazon EKS cluster, Amazon EKS Auto Mode, and Amazon EKS Hybrid Nodes pricing, see Amazon EKS Pricing. • For Amazon EC2 pricing, see Amazon EC2 On-Demand Pricing and Amazon EC2 Spot Pricing. • For AWS Fargate pricing, see AWS Fargate Pricing. • You can use your savings plans for compute used in Amazon EKS clusters. For more information, see Pricing with Savings Plans. 
Common use cases in Amazon EKS Amazon EKS offers robust managed Kubernetes services on AWS, designed to optimize containerized applications. The following are a few of the most common use cases of Amazon EKS, helping you leverage its strengths for your specific needs. Deploying high-availability applications Using Elastic Load Balancing, you can make sure that your applications are highly available across multiple Availability Zones. Building microservices architectures Use Kubernetes service discovery features with AWS Cloud Map or Amazon VPC Lattice to build resilient systems. Automating software release process Manage continuous integration and continuous deployment (CICD) pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS"} +{"global_id": 463, "doc_id": "eks", "chunk_id": "3", "question_id": 4, "question": "What should you visit for detailed pricing information on AWS services used with Kubernetes applications?", "answer_span": "Visit the respective pricing pages of the AWS services you are using with your Kubernetes applications for detailed pricing information.", "chunk": "EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the volume capacity through Amazon EBS, and the IPv4 address through Amazon VPC. Amazon EKS Pricing 4 Amazon EKS User Guide Visit the respective pricing pages of the AWS services you are using with your Kubernetes applications for detailed pricing information. • For Amazon EKS cluster, Amazon EKS Auto Mode, and Amazon EKS Hybrid Nodes pricing, see Amazon EKS Pricing. • For Amazon EC2 pricing, see Amazon EC2 On-Demand Pricing and Amazon EC2 Spot Pricing. • For AWS Fargate pricing, see AWS Fargate Pricing. • You can use your savings plans for compute used in Amazon EKS clusters. For more information, see Pricing with Savings Plans. Common use cases in Amazon EKS Amazon EKS offers robust managed Kubernetes services on AWS, designed to optimize containerized applications. The following are a few of the most common use cases of Amazon EKS, helping you leverage its strengths for your specific needs. Deploying high-availability applications Using Elastic Load Balancing, you can make sure that your applications are highly available across multiple Availability Zones. Building microservices architectures Use Kubernetes service discovery features with AWS Cloud Map or Amazon VPC Lattice to build resilient systems. Automating software release process Manage continuous integration and continuous deployment (CICD) pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. 
Executing machine learning workloads Amazon EKS"} +{"global_id": 464, "doc_id": "eks", "chunk_id": "4", "question_id": 1, "question": "What does AWS Fargate allow you to focus on when running serverless applications?", "answer_span": "This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure.", "chunk": "pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS is compatible with popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch. With GPU support, you can handle even complex machine learning tasks effectively. Common use cases 5 Amazon EKS User Guide Deploying consistently on premises and in the cloud To simplify running Kubernetes in on-premises environments, you can use the same Amazon EKS clusters, features, and tools to run self-managed nodes on AWS Outposts or can use Amazon EKS Hybrid Nodes with your own infrastructure. For self-contained, air-gapped environments, you can use Amazon EKS Anywhere to automate Kubernetes cluster lifecycle management on your own infrastructure. Running cost-effective batch processing and big data workloads Utilize Spot Instances to run your batch processing and big data workloads such as Apache Hadoop and Spark, at a fraction of the cost. This lets you take advantage of unused Amazon EC2 capacity at discounted prices. Securing application and ensuring compliance Implement strong security practices and maintain compliance with Amazon EKS, which integrates with AWS security services such as AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (Amazon VPC), and AWS Key Management Service (AWS KMS). This ensures data privacy and protection as per industry standards. Amazon EKS architecture Amazon EKS aligns with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with"} +{"global_id": 465, "doc_id": "eks", "chunk_id": "4", "question_id": 2, "question": "Which machine learning frameworks is Amazon EKS compatible with?", "answer_span": "Amazon EKS is compatible with popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch.", "chunk": "pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS is compatible with popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch. With GPU support, you can handle even complex machine learning tasks effectively. 
Common use cases 5 Amazon EKS User Guide Deploying consistently on premises and in the cloud To simplify running Kubernetes in on-premises environments, you can use the same Amazon EKS clusters, features, and tools to run self-managed nodes on AWS Outposts or can use Amazon EKS Hybrid Nodes with your own infrastructure. For self-contained, air-gapped environments, you can use Amazon EKS Anywhere to automate Kubernetes cluster lifecycle management on your own infrastructure. Running cost-effective batch processing and big data workloads Utilize Spot Instances to run your batch processing and big data workloads such as Apache Hadoop and Spark, at a fraction of the cost. This lets you take advantage of unused Amazon EC2 capacity at discounted prices. Securing application and ensuring compliance Implement strong security practices and maintain compliance with Amazon EKS, which integrates with AWS security services such as AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (Amazon VPC), and AWS Key Management Service (AWS KMS). This ensures data privacy and protection as per industry standards. Amazon EKS architecture Amazon EKS aligns with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with"} +{"global_id": 466, "doc_id": "eks", "chunk_id": "4", "question_id": 3, "question": "What can you use to automate Kubernetes cluster lifecycle management in self-contained environments?", "answer_span": "For self-contained, air-gapped environments, you can use Amazon EKS Anywhere to automate Kubernetes cluster lifecycle management on your own infrastructure.", "chunk": "pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS is compatible with popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch. With GPU support, you can handle even complex machine learning tasks effectively. Common use cases 5 Amazon EKS User Guide Deploying consistently on premises and in the cloud To simplify running Kubernetes in on-premises environments, you can use the same Amazon EKS clusters, features, and tools to run self-managed nodes on AWS Outposts or can use Amazon EKS Hybrid Nodes with your own infrastructure. For self-contained, air-gapped environments, you can use Amazon EKS Anywhere to automate Kubernetes cluster lifecycle management on your own infrastructure. Running cost-effective batch processing and big data workloads Utilize Spot Instances to run your batch processing and big data workloads such as Apache Hadoop and Spark, at a fraction of the cost. This lets you take advantage of unused Amazon EC2 capacity at discounted prices. Securing application and ensuring compliance Implement strong security practices and maintain compliance with Amazon EKS, which integrates with AWS security services such as AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (Amazon VPC), and AWS Key Management Service (AWS KMS). 
This ensures data privacy and protection as per industry standards. Amazon EKS architecture Amazon EKS aligns with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with"} +{"global_id": 467, "doc_id": "eks", "chunk_id": "4", "question_id": 4, "question": "How does Amazon EKS ensure data privacy and protection?", "answer_span": "Implement strong security practices and maintain compliance with Amazon EKS, which integrates with AWS security services such as AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (Amazon VPC), and AWS Key Management Service (AWS KMS).", "chunk": "pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS is compatible with popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch. With GPU support, you can handle even complex machine learning tasks effectively. Common use cases 5 Amazon EKS User Guide Deploying consistently on premises and in the cloud To simplify running Kubernetes in on-premises environments, you can use the same Amazon EKS clusters, features, and tools to run self-managed nodes on AWS Outposts or can use Amazon EKS Hybrid Nodes with your own infrastructure. For self-contained, air-gapped environments, you can use Amazon EKS Anywhere to automate Kubernetes cluster lifecycle management on your own infrastructure. Running cost-effective batch processing and big data workloads Utilize Spot Instances to run your batch processing and big data workloads such as Apache Hadoop and Spark, at a fraction of the cost. This lets you take advantage of unused Amazon EC2 capacity at discounted prices. Securing application and ensuring compliance Implement strong security practices and maintain compliance with Amazon EKS, which integrates with AWS security services such as AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (Amazon VPC), and AWS Key Management Service (AWS KMS). This ensures data privacy and protection as per industry standards. Amazon EKS architecture Amazon EKS aligns with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with"} +{"global_id": 468, "doc_id": "eks", "chunk_id": "5", "question_id": 1, "question": "What does Amazon EKS ensure for every cluster?", "answer_span": "Amazon EKS ensures every cluster has its own unique Kubernetes control plane.", "chunk": "with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. 
This design keeps each cluster’s infrastructure separate, with no overlaps between clusters or AWS accounts. The setup includes: Distributed components The control plane positions at least two API server instances and three etcd instances across three AWS Availability Zones within an AWS Region. Optimal performance Amazon EKS actively monitors and adjusts control plane instances to maintain peak performance. Architecture 6 Amazon EKS User Guide Resilience If a control plane instance falters, Amazon EKS quickly replaces it, using a different Availability Zone if needed. Consistent uptime By running clusters across multiple Availability Zones, a reliable API server endpoint availability Service Level Agreement (SLA) is achieved. Amazon EKS uses Amazon Virtual Private Cloud (Amazon VPC) to limit traffic between control plane components within a single cluster. Cluster components can’t view or receive communication from other clusters or AWS accounts, except when authorized by Kubernetes role-based access control (RBAC) policies. Compute In addition to the control plane, an Amazon EKS cluster has a set of worker machines called nodes. Selecting the appropriate Amazon EKS cluster node type is crucial for meeting your specific requirements and optimizing resource utilization. Amazon EKS offers the following primary node types: EKS Auto Mode EKS Auto Mode extends AWS management beyond the control plane to include the data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod"} +{"global_id": 469, "doc_id": "eks", "chunk_id": "5", "question_id": 2, "question": "How many API server instances are positioned in the control plane?", "answer_span": "The control plane positions at least two API server instances and three etcd instances across three AWS Availability Zones within an AWS Region.", "chunk": "with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with no overlaps between clusters or AWS accounts. The setup includes: Distributed components The control plane positions at least two API server instances and three etcd instances across three AWS Availability Zones within an AWS Region. Optimal performance Amazon EKS actively monitors and adjusts control plane instances to maintain peak performance. Architecture 6 Amazon EKS User Guide Resilience If a control plane instance falters, Amazon EKS quickly replaces it, using a different Availability Zone if needed. Consistent uptime By running clusters across multiple Availability Zones, a reliable API server endpoint availability Service Level Agreement (SLA) is achieved. Amazon EKS uses Amazon Virtual Private Cloud (Amazon VPC) to limit traffic between control plane components within a single cluster. Cluster components can’t view or receive communication from other clusters or AWS accounts, except when authorized by Kubernetes role-based access control (RBAC) policies. 
Compute In addition to the control plane, an Amazon EKS cluster has a set of worker machines called nodes. Selecting the appropriate Amazon EKS cluster node type is crucial for meeting your specific requirements and optimizing resource utilization. Amazon EKS offers the following primary node types: EKS Auto Mode EKS Auto Mode extends AWS management beyond the control plane to include the data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod"} +{"global_id": 470, "doc_id": "eks", "chunk_id": "5", "question_id": 3, "question": "What does Amazon EKS use to limit traffic between control plane components?", "answer_span": "Amazon EKS uses Amazon Virtual Private Cloud (Amazon VPC) to limit traffic between control plane components within a single cluster.", "chunk": "with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with no overlaps between clusters or AWS accounts. The setup includes: Distributed components The control plane positions at least two API server instances and three etcd instances across three AWS Availability Zones within an AWS Region. Optimal performance Amazon EKS actively monitors and adjusts control plane instances to maintain peak performance. Architecture 6 Amazon EKS User Guide Resilience If a control plane instance falters, Amazon EKS quickly replaces it, using a different Availability Zone if needed. Consistent uptime By running clusters across multiple Availability Zones, a reliable API server endpoint availability Service Level Agreement (SLA) is achieved. Amazon EKS uses Amazon Virtual Private Cloud (Amazon VPC) to limit traffic between control plane components within a single cluster. Cluster components can’t view or receive communication from other clusters or AWS accounts, except when authorized by Kubernetes role-based access control (RBAC) policies. Compute In addition to the control plane, an Amazon EKS cluster has a set of worker machines called nodes. Selecting the appropriate Amazon EKS cluster node type is crucial for meeting your specific requirements and optimizing resource utilization. Amazon EKS offers the following primary node types: EKS Auto Mode EKS Auto Mode extends AWS management beyond the control plane to include the data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. 
It automates updates and upgrades while respecting Pod"}
{"global_id": 471, "doc_id": "eks", "chunk_id": "5", "question_id": 4, "question": "What is the purpose of EKS Auto Mode?", "answer_span": "EKS Auto Mode extends AWS management beyond the control plane to include the data plane, automating cluster infrastructure management.", "chunk": "with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with no overlaps between clusters or AWS accounts. The setup includes: Distributed components The control plane positions at least two API server instances and three etcd instances across three AWS Availability Zones within an AWS Region. Optimal performance Amazon EKS actively monitors and adjusts control plane instances to maintain peak performance. Resilience If a control plane instance falters, Amazon EKS quickly replaces it, using a different Availability Zone if needed. Consistent uptime By running clusters across multiple Availability Zones, a reliable API server endpoint availability Service Level Agreement (SLA) is achieved. Amazon EKS uses Amazon Virtual Private Cloud (Amazon VPC) to limit traffic between control plane components within a single cluster. Cluster components can’t view or receive communication from other clusters or AWS accounts, except when authorized by Kubernetes role-based access control (RBAC) policies. Compute In addition to the control plane, an Amazon EKS cluster has a set of worker machines called nodes. Selecting the appropriate Amazon EKS cluster node type is crucial for meeting your specific requirements and optimizing resource utilization. Amazon EKS offers the following primary node types: EKS Auto Mode EKS Auto Mode extends AWS management beyond the control plane to include the data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod"}
{"global_id": 472, "doc_id": "eks", "chunk_id": "6", "question_id": 1, "question": "What does EKS Auto Mode do?", "answer_span": "EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features.", "chunk": "data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod Disruption Budgets, and includes managed components that would otherwise require add-on management. This option is ideal for users who want to leverage AWS expertise for day-to-day operations, minimize operational overhead, and focus on application development rather than infrastructure management. 
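Since EKS Auto Mode honors Pod Disruption Budgets while it replaces and upgrades nodes, a workload can declare how much disruption it tolerates. A minimal sketch (the name web-pdb and the label app: web are hypothetical, not taken from this guide):

    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: web-pdb            # hypothetical name
    spec:
      minAvailable: 2          # keep at least two matching Pods running during node updates
      selector:
        matchLabels:
          app: web             # hypothetical workload label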
AWS Fargate Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying instances. With Fargate, you specify your application’s resource needs, and AWS automatically provisions, scales, and maintains the infrastructure. This option is ideal for users who prioritize ease-of-use and want to concentrate on application development and deployment rather than managing infrastructure. Karpenter Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and cluster efficiency. Karpenter launches right-sized compute resources in response to changing application load. This option can provision just-in-time compute resources that meet the requirements of your workload. Managed node groups Managed node groups are a blend of automation and customization for managing a collection of Amazon EC2 instances within an Amazon EKS cluster. AWS takes care of tasks like patching, updating, and scaling nodes, easing operational aspects. In parallel, custom kubelet arguments are supported, opening up possibilities for advanced CPU and memory management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling,"}
{"global_id": 473, "doc_id": "eks", "chunk_id": "6", "question_id": 2, "question": "What is AWS Fargate?", "answer_span": "AWS Fargate Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying instances.", "chunk": "data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod Disruption Budgets, and includes managed components that would otherwise require add-on management. This option is ideal for users who want to leverage AWS expertise for day-to-day operations, minimize operational overhead, and focus on application development rather than infrastructure management. AWS Fargate Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying instances. With Fargate, you specify your application’s resource needs, and AWS automatically provisions, scales, and maintains the infrastructure. This option is ideal for users who prioritize ease-of-use and want to concentrate on application development and deployment rather than managing infrastructure. Karpenter Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and cluster efficiency. Karpenter launches right-sized compute resources in response to changing application load. This option can provision just-in-time compute resources that meet the requirements of your workload. Managed node groups Managed node groups are a blend of automation and customization for managing a collection of Amazon EC2 instances within an Amazon EKS cluster. AWS takes care of tasks like patching, updating, and scaling nodes, easing operational aspects. 
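To make the node-type choices above concrete, here is a hedged sketch of an eksctl ClusterConfig that declares one managed node group and one Fargate profile; the cluster name, region, sizes, and instance type are hypothetical placeholders, not values from this guide:

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: demo-cluster       # hypothetical
      region: us-west-2        # hypothetical
    managedNodeGroups:
      - name: mng-general      # AWS handles patching, updating, and scaling these EC2 nodes
        instanceType: m5.large
        minSize: 1
        maxSize: 4
        desiredCapacity: 2
    fargateProfiles:
      - name: fp-default       # Pods selected here run on serverless Fargate capacity
        selectors:
          - namespace: default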
In parallel, custom kubelet arguments are supported, opening up possibilities for advanced CPU and memory management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling,"}
{"global_id": 474, "doc_id": "eks", "chunk_id": "6", "question_id": 3, "question": "What is the purpose of Karpenter?", "answer_span": "Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and cluster efficiency.", "chunk": "data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod Disruption Budgets, and includes managed components that would otherwise require add-on management. This option is ideal for users who want to leverage AWS expertise for day-to-day operations, minimize operational overhead, and focus on application development rather than infrastructure management. AWS Fargate Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying instances. With Fargate, you specify your application’s resource needs, and AWS automatically provisions, scales, and maintains the infrastructure. This option is ideal for users who prioritize ease-of-use and want to concentrate on application development and deployment rather than managing infrastructure. Karpenter Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and cluster efficiency. Karpenter launches right-sized compute resources in response to changing application load. This option can provision just-in-time compute resources that meet the requirements of your workload. Managed node groups Managed node groups are a blend of automation and customization for managing a collection of Amazon EC2 instances within an Amazon EKS cluster. AWS takes care of tasks like patching, updating, and scaling nodes, easing operational aspects. In parallel, custom kubelet arguments are supported, opening up possibilities for advanced CPU and memory management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling,"}
{"global_id": 475, "doc_id": "eks", "chunk_id": "6", "question_id": 4, "question": "What do managed node groups provide?", "answer_span": "Managed node groups are a blend of automation and customization for managing a collection of Amazon EC2 instances within an Amazon EKS cluster.", "chunk": "data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. 
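For the Karpenter option described above, just-in-time capacity is typically declared as a NodePool. This sketch assumes the karpenter.sh/v1 API and a pre-existing EC2NodeClass named default; both are assumptions, not details from this guide:

    apiVersion: karpenter.sh/v1
    kind: NodePool
    metadata:
      name: default
    spec:
      template:
        spec:
          requirements:                        # constraints; Karpenter picks right-sized instances within them
            - key: karpenter.sh/capacity-type
              operator: In
              values: ["on-demand"]
          nodeClassRef:
            group: karpenter.k8s.aws
            kind: EC2NodeClass
            name: default                      # hypothetical EC2NodeClass
      limits:
        cpu: "100"                             # hypothetical ceiling on total provisioned vCPU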
EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod Disruption Budgets, and includes managed components that would otherwise require add-on management. This option is ideal for users who want to leverage AWS expertise for day-to-day operations, minimize operational overhead, and focus on application development rather than infrastructure management. AWS Fargate Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying instances. With Fargate, you specify your application’s resource needs, and AWS automatically provisions, scales, and maintains the infrastructure. This option is ideal for users who prioritize ease-of-use and want to concentrate on application development and deployment rather than managing infrastructure. Karpenter Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and cluster efficiency. Karpenter launches right-sized compute resources in response to changing application load. This option can provision just-in-time compute resources that meet the requirements of your workload. Managed node groups Managed node groups are a blend of automation and customization for managing a collection of Amazon EC2 instances within an Amazon EKS cluster. AWS takes care of tasks like patching, updating, and scaling nodes, easing operational aspects. In parallel, custom kubelet arguments are supported, opening up possibilities for advanced CPU and memory management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling,"}
{"global_id": 476, "doc_id": "eks", "chunk_id": "7", "question_id": 1, "question": "What do AWS Identity and Access Management (IAM) roles enhance?", "answer_span": "Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster.", "chunk": "management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling, and maintaining the nodes, giving you total control over the underlying infrastructure. This option is suitable for users who need granular control and customization of their nodes and are ready to invest time in managing and maintaining their infrastructure. Amazon EKS Hybrid Nodes With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your on-premises and edge applications. Kubernetes concepts Amazon Elastic Kubernetes Service (Amazon EKS) is an AWS managed service based on the open source Kubernetes project. 
While there are things you need to know about how the Amazon EKS service integrates with AWS Cloud (particularly when you first create an Amazon EKS cluster), once it’s up and running, you use your Amazon EKS cluster in much the same way as you would any other Kubernetes cluster. So to begin managing Kubernetes clusters and deploying workloads, you need at least a basic understanding of Kubernetes concepts. This page divides Kubernetes concepts into three sections: the section called “Why Kubernetes?”, the section called “Clusters”, and the section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and"}
{"global_id": 477, "doc_id": "eks", "chunk_id": "7", "question_id": 2, "question": "What do self-managed nodes offer users?", "answer_span": "Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster.", "chunk": "management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling, and maintaining the nodes, giving you total control over the underlying infrastructure. This option is suitable for users who need granular control and customization of their nodes and are ready to invest time in managing and maintaining their infrastructure. Amazon EKS Hybrid Nodes With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your on-premises and edge applications. Kubernetes concepts Amazon Elastic Kubernetes Service (Amazon EKS) is an AWS managed service based on the open source Kubernetes project. While there are things you need to know about how the Amazon EKS service integrates with AWS Cloud (particularly when you first create an Amazon EKS cluster), once it’s up and running, you use your Amazon EKS cluster in much the same way as you would any other Kubernetes cluster. So to begin managing Kubernetes clusters and deploying workloads, you need at least a basic understanding of Kubernetes concepts. This page divides Kubernetes concepts into three sections: the section called “Why Kubernetes?”, the section called “Clusters”, and the section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and"}
{"global_id": 478, "doc_id": "eks", "chunk_id": "7", "question_id": 3, "question": "What is the purpose of Amazon EKS Hybrid Nodes?", "answer_span": "With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters.", "chunk": "management policies. 
Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling, and maintaining the nodes, giving you total control over the underlying infrastructure. This option is suitable for users who need granular control and customization of their nodes and are ready to invest time in managing and maintaining their infrastructure. Amazon EKS Hybrid Nodes With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your on-premises and edge applications. Kubernetes concepts Amazon Elastic Kubernetes Service (Amazon EKS) is an AWS managed service based on the open source Kubernetes project. While there are things you need to know about how the Amazon EKS service integrates with AWS Cloud (particularly when you first create an Amazon EKS cluster), once it’s up and running, you use your Amazon EKS cluster in much the same way as you would any other Kubernetes cluster. So to begin managing Kubernetes clusters and deploying workloads, you need at least a basic understanding of Kubernetes concepts. This page divides Kubernetes concepts into three sections: the section called “Why Kubernetes?”, the section called “Clusters”, and the section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and"}
{"global_id": 479, "doc_id": "eks", "chunk_id": "7", "question_id": 4, "question": "What does the first section of Kubernetes concepts describe?", "answer_span": "The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS.", "chunk": "management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling, and maintaining the nodes, giving you total control over the underlying infrastructure. This option is suitable for users who need granular control and customization of their nodes and are ready to invest time in managing and maintaining their infrastructure. Amazon EKS Hybrid Nodes With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your on-premises and edge applications. Kubernetes concepts Amazon Elastic Kubernetes Service (Amazon EKS) is an AWS managed service based on the open source Kubernetes project. 
While there are things you need to know about how the Amazon EKS service integrates with AWS Cloud (particularly when you first create an Amazon EKS cluster), once it’s up and running, you use your Amazon EKS cluster in much the same way as you would any other Kubernetes cluster. So to begin managing Kubernetes clusters and deploying workloads, you need at least a basic understanding of Kubernetes concepts. This page divides Kubernetes concepts into three sections: the section called “Why Kubernetes?”, the section called “Clusters”, and the section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and"}
{"global_id": 480, "doc_id": "eks", "chunk_id": "8", "question_id": 1, "question": "What is the main focus of the Workloads section?", "answer_span": "The Workloads section covers how Kubernetes applications are built, stored, run, and managed.", "chunk": "section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and what your responsibilities are for creating and maintaining Kubernetes clusters. Topics • Why Kubernetes? • Clusters • Workloads • Next steps As you go through this content, links will lead you to further descriptions of Kubernetes concepts in both Amazon EKS and Kubernetes documentation, in case you want to take deep dives into any of the topics we cover here. For details about how Amazon EKS implements Kubernetes control plane and compute features, see the section called “Architecture”. Why Kubernetes? Kubernetes was designed to improve availability and scalability when running mission-critical, production-quality containerized applications. Rather than just running Kubernetes on a single machine (although that is possible), Kubernetes achieves those goals by allowing you to run applications across sets of computers that can expand or contract to meet demand. Kubernetes includes features that make it easier for you to: • Deploy applications on multiple machines (using containers deployed in Pods) • Monitor container health and restart failed containers • Scale containers up and down based on load • Update containers with new versions • Allocate resources between containers • Balance traffic across machines Having Kubernetes automate these types of complex tasks allows an application developer to focus on building and improving their application workloads, rather than worrying about infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. 
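As an example of such a desired-state file, a minimal Deployment for a hypothetical application called web might declare the container, replica count, and CPU/memory like this (all names and values are illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                    # hypothetical application
    spec:
      replicas: 3                  # number of Pod replicas
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25    # which container to run
              resources:
                requests:
                  cpu: 250m        # CPU allocation
                  memory: 256Mi    # memory allocation
                limits:
                  memory: 512Mi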
This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity"} +{"global_id": 481, "doc_id": "eks", "chunk_id": "8", "question_id": 2, "question": "What are some features of Kubernetes that help with application management?", "answer_span": "Kubernetes includes features that make it easier for you to: • Deploy applications on multiple machines (using containers deployed in Pods) • Monitor container health and restart failed containers • Scale containers up and down based on load • Update containers with new versions • Allocate resources between containers • Balance traffic across machines", "chunk": "section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and what your responsibilities are for creating and maintaining Kubernetes clusters. Topics • Why Kubernetes? • Clusters • Workloads • Next steps As you go through this content, links will lead you to further descriptions of Kubernetes concepts in both Amazon EKS and Kubernetes documentation, in case you want to take deep dives into any of the topics we cover here. For details about how Amazon EKS implements Kubernetes control plane and compute features, see the section called “Architecture”. Why Kubernetes? Kubernetes was designed to improve availability and scalability when running mission-critical, production-quality containerized applications. Rather than just running Kubernetes on a single machine (although that is possible), Kubernetes achieves those goals by allowing you to run applications across sets of computers that can expand or contract to meet demand. Kubernetes includes features that make it easier for you to: • Deploy applications on multiple machines (using containers deployed in Pods) • Monitor container health and restart failed containers • Scale containers up and down based on load • Update containers with new versions • Allocate resources between containers • Balance traffic across machines Having Kubernetes automate these types of complex tasks allows an application developer to focus on building and improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity"} +{"global_id": 482, "doc_id": "eks", "chunk_id": "8", "question_id": 3, "question": "What is the purpose of creating configuration files in Kubernetes?", "answer_span": "The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application.", "chunk": "section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and what your responsibilities are for creating and maintaining Kubernetes clusters. Topics • Why Kubernetes? 
• Clusters • Workloads • Next steps As you go through this content, links will lead you to further descriptions of Kubernetes concepts in both Amazon EKS and Kubernetes documentation, in case you want to take deep dives into any of the topics we cover here. For details about how Amazon EKS implements Kubernetes control plane and compute features, see the section called “Architecture”. Why Kubernetes? Kubernetes was designed to improve availability and scalability when running mission-critical, production-quality containerized applications. Rather than just running Kubernetes on a single machine (although that is possible), Kubernetes achieves those goals by allowing you to run applications across sets of computers that can expand or contract to meet demand. Kubernetes includes features that make it easier for you to: • Deploy applications on multiple machines (using containers deployed in Pods) • Monitor container health and restart failed containers • Scale containers up and down based on load • Update containers with new versions • Allocate resources between containers • Balance traffic across machines Having Kubernetes automate these types of complex tasks allows an application developer to focus on building and improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity"} +{"global_id": 483, "doc_id": "eks", "chunk_id": "8", "question_id": 4, "question": "Why was Kubernetes designed?", "answer_span": "Kubernetes was designed to improve availability and scalability when running mission-critical, production-quality containerized applications.", "chunk": "section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and what your responsibilities are for creating and maintaining Kubernetes clusters. Topics • Why Kubernetes? • Clusters • Workloads • Next steps As you go through this content, links will lead you to further descriptions of Kubernetes concepts in both Amazon EKS and Kubernetes documentation, in case you want to take deep dives into any of the topics we cover here. For details about how Amazon EKS implements Kubernetes control plane and compute features, see the section called “Architecture”. Why Kubernetes? Kubernetes was designed to improve availability and scalability when running mission-critical, production-quality containerized applications. Rather than just running Kubernetes on a single machine (although that is possible), Kubernetes achieves those goals by allowing you to run applications across sets of computers that can expand or contract to meet demand. 
Kubernetes includes features that make it easier for you to: • Deploy applications on multiple machines (using containers deployed in Pods) • Monitor container health and restart failed containers • Scale containers up and down based on load • Update containers with new versions • Allocate resources between containers • Balance traffic across machines Having Kubernetes automate these types of complex tasks allows an application developer to focus on building and improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity"} +{"global_id": 484, "doc_id": "eks", "chunk_id": "9", "question_id": 1, "question": "What format do developers typically use to create configuration files for Kubernetes?", "answer_span": "The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application.", "chunk": "improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity rules, and more. Attributes of Kubernetes To achieve its goals, Kubernetes has the following attributes: • Containerized — Kubernetes is a container orchestration tool. To use Kubernetes, you must first have your applications containerized. Depending on the type of application, this could be as a set of microservices, as batch jobs or in other forms. Then, your applications can take advantage of a Kubernetes workflow that encompasses a huge ecosystem of tools, where containers can be stored as images in a container registry, deployed to a Kubernetes cluster, and run on an available node. You can build and test individual containers on your local computer with Docker or another container runtime, before deploying them to your Kubernetes cluster. • Scalable — If the demand for your applications exceeds the capacity of the running instances of those applications, Kubernetes is able to scale up. As needed, Kubernetes can tell if applications require more CPU or memory and respond by either automatically expanding available capacity or using more of existing capacity. Scaling can be done at the Pod level, if there is enough compute available to just run more instances of the application (horizontal Pod autoscaling), or at the node level, if more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads"} +{"global_id": 485, "doc_id": "eks", "chunk_id": "9", "question_id": 2, "question": "What is a key requirement for using Kubernetes?", "answer_span": "To use Kubernetes, you must first have your applications containerized.", "chunk": "improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. 
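To illustrate the last feature in that list, balancing traffic across machines is usually declared with a Service; a minimal sketch targeting the hypothetical app: web Pods (type and ports are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: web                # hypothetical
    spec:
      type: LoadBalancer       # spreads incoming traffic across all matching Pods
      selector:
        app: web
      ports:
        - port: 80             # port exposed by the Service
          targetPort: 8080     # port the container listens on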
The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity rules, and more. Attributes of Kubernetes To achieve its goals, Kubernetes has the following attributes: • Containerized — Kubernetes is a container orchestration tool. To use Kubernetes, you must first have your applications containerized. Depending on the type of application, this could be as a set of microservices, as batch jobs or in other forms. Then, your applications can take advantage of a Kubernetes workflow that encompasses a huge ecosystem of tools, where containers can be stored as images in a container registry, deployed to a Kubernetes cluster, and run on an available node. You can build and test individual containers on your local computer with Docker or another container runtime, before deploying them to your Kubernetes cluster. • Scalable — If the demand for your applications exceeds the capacity of the running instances of those applications, Kubernetes is able to scale up. As needed, Kubernetes can tell if applications require more CPU or memory and respond by either automatically expanding available capacity or using more of existing capacity. Scaling can be done at the Pod level, if there is enough compute available to just run more instances of the application (horizontal Pod autoscaling), or at the node level, if more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads"} +{"global_id": 486, "doc_id": "eks", "chunk_id": "9", "question_id": 3, "question": "How does Kubernetes respond if the demand for applications exceeds capacity?", "answer_span": "Kubernetes is able to scale up.", "chunk": "improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity rules, and more. Attributes of Kubernetes To achieve its goals, Kubernetes has the following attributes: • Containerized — Kubernetes is a container orchestration tool. To use Kubernetes, you must first have your applications containerized. Depending on the type of application, this could be as a set of microservices, as batch jobs or in other forms. Then, your applications can take advantage of a Kubernetes workflow that encompasses a huge ecosystem of tools, where containers can be stored as images in a container registry, deployed to a Kubernetes cluster, and run on an available node. You can build and test individual containers on your local computer with Docker or another container runtime, before deploying them to your Kubernetes cluster. • Scalable — If the demand for your applications exceeds the capacity of the running instances of those applications, Kubernetes is able to scale up. As needed, Kubernetes can tell if applications require more CPU or memory and respond by either automatically expanding available capacity or using more of existing capacity. 
Scaling can be done at the Pod level, if there is enough compute available to just run more instances of the application (horizontal Pod autoscaling), or at the node level, if more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads"} +{"global_id": 487, "doc_id": "eks", "chunk_id": "9", "question_id": 4, "question": "What happens if an application or node becomes unhealthy or unavailable?", "answer_span": "Kubernetes can move running workloads.", "chunk": "improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity rules, and more. Attributes of Kubernetes To achieve its goals, Kubernetes has the following attributes: • Containerized — Kubernetes is a container orchestration tool. To use Kubernetes, you must first have your applications containerized. Depending on the type of application, this could be as a set of microservices, as batch jobs or in other forms. Then, your applications can take advantage of a Kubernetes workflow that encompasses a huge ecosystem of tools, where containers can be stored as images in a container registry, deployed to a Kubernetes cluster, and run on an available node. You can build and test individual containers on your local computer with Docker or another container runtime, before deploying them to your Kubernetes cluster. • Scalable — If the demand for your applications exceeds the capacity of the running instances of those applications, Kubernetes is able to scale up. As needed, Kubernetes can tell if applications require more CPU or memory and respond by either automatically expanding available capacity or using more of existing capacity. Scaling can be done at the Pod level, if there is enough compute available to just run more instances of the application (horizontal Pod autoscaling), or at the node level, if more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads"} +{"global_id": 488, "doc_id": "eks", "chunk_id": "10", "question_id": 1, "question": "What services can delete unnecessary Pods and shut down unneeded nodes?", "answer_span": "Cluster Autoscaler or Karpenter", "chunk": "more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads to another available node. You can force the issue by simply deleting a running instance of a workload or node that’s running your workloads. The bottom line here is that workloads can be brought up in other locations if they can no longer run where they are. 
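The horizontal Pod autoscaling mentioned just above is itself declared as an object; a sketch for the hypothetical web Deployment, with an arbitrarily chosen CPU threshold:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web                      # hypothetical target workload
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70   # add Pods when average CPU use passes 70%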
• Declarative — Kubernetes uses active reconciliation to constantly check that the state that you declare for your cluster matches the actual state. By applying Kubernetes objects to a cluster, typically through YAML-formatted configuration files, you can, for example, ask to start up the workloads you want to run on your cluster. You can later change the configurations to do something like use a later version of a container or allocate more memory. Kubernetes will do what it needs to do to establish the desired state. This can include bringing nodes up or down, stopping and restarting workloads, or pulling updated containers. • Composable — Because an application typically consists of multiple components, you want to be able to manage a set of these components (often represented by multiple containers) together. While Docker Compose offers a way to do this directly with Docker, the Kubernetes Kompose command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. 
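Returning to the Declarative point, the kind of change described there (a later container version, more memory) is just an edit to the declared state. A hypothetical fragment of the earlier web Deployment showing only the fields that change; re-applying the full manifest lets the reconciliation loop converge the cluster on it:

    # fragment of the web Deployment spec; not a complete manifest
    spec:
      template:
        spec:
          containers:
            - name: web
              image: nginx:1.27     # use a later version of the container
              resources:
                requests:
                  memory: 512Mi     # allocate more memory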
• Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like"} +{"global_id": 490, "doc_id": "eks", "chunk_id": "10", "question_id": 3, "question": "How does Kubernetes ensure that the declared state matches the actual state?", "answer_span": "Kubernetes uses active reconciliation to constantly check that the state that you declare for your cluster matches the actual state.", "chunk": "more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads to another available node. You can force the issue by simply deleting a running instance of a workload or node that’s running your workloads. The bottom line here is that workloads can be brought up in other locations if they can no longer run where they are. • Declarative — Kubernetes uses active reconciliation to constantly check that the state that you declare for your cluster matches the actual state. By applying Kubernetes objects to a cluster, typically through YAML-formatted configuration files, you can, for example, ask to start up the workloads you want to run on your cluster. You can later change the configurations to do something like use a later version of a container or allocate more memory. Kubernetes will do what it needs to do to establish the desired state. This can include bringing nodes up or down, stopping and restarting workloads, or pulling updated containers. • Composable — Because an application typically consists of multiple components, you want to be able to manage a set of these components (often represented by multiple containers) together. While Docker Compose offers a way to do this directly with Docker, the Kubernetes Why Kubernetes? 10 Amazon EKS User Guide Kompose command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like"} +{"global_id": 491, "doc_id": "eks", "chunk_id": "10", "question_id": 4, "question": "What command can help manage multiple components in Kubernetes?", "answer_span": "the Kubernetes Kompose command can help you do that with Kubernetes.", "chunk": "more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads to another available node. You can force the issue by simply deleting a running instance of a workload or node that’s running your workloads. The bottom line here is that workloads can be brought up in other locations if they can no longer run where they are. • Declarative — Kubernetes uses active reconciliation to constantly check that the state that you declare for your cluster matches the actual state. By applying Kubernetes objects to a cluster, typically through YAML-formatted configuration files, you can, for example, ask to start up the workloads you want to run on your cluster. 
You can later change the configurations to do something like use a later version of a container or allocate more memory. Kubernetes will do what it needs to do to establish the desired state. This can include bringing nodes up or down, stopping and restarting workloads, or pulling updated containers. • Composable — Because an application typically consists of multiple components, you want to be able to manage a set of these components (often represented by multiple containers) together. While Docker Compose offers a way to do this directly with Docker, the Kubernetes Why Kubernetes? 10 Amazon EKS User Guide Kompose command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like"} +{"global_id": 492, "doc_id": "eks", "chunk_id": "11", "question_id": 1, "question": "What can command help you do with Kubernetes?", "answer_span": "command can help you do that with Kubernetes.", "chunk": "command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like to meet your needs. APIs and configuration files are open to direct modifications. Third-parties are encouraged to write their own Controllers, to extend both infrastructure and end-user Kubernetes features. Webhooks let you set up cluster rules to enforce policies and adapt to changing conditions. For more ideas on how to extend Kubernetes clusters, see Extending Kubernetes. • Portable — Many organizations have standardized their operations on Kubernetes because it allows them to manage all of their application needs in the same way. Developers can use the same pipelines to build and store containerized applications. Those applications can then be deployed to Kubernetes clusters running on-premises, in clouds, on point-of-sales terminals in restaurants, or on IOT devices dispersed across company’s remote sites. Its open source nature makes it possible for people to develop these special Kubernetes distributions, along will tools needed to manage them. Managing Kubernetes Kubernetes source code is freely available, so with your own equipment you could install and manage Kubernetes yourself. However, self-managing Kubernetes requires deep operational expertise and takes time and effort to maintain. For those reasons, most people deploying production workloads choose a cloud provider (such as Amazon EKS) or on-premises provider (such as Amazon EKS Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS"} +{"global_id": 493, "doc_id": "eks", "chunk_id": "11", "question_id": 2, "question": "What is the nature of the Kubernetes project?", "answer_span": "the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like to meet your needs.", "chunk": "command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. 
• Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like to meet your needs. APIs and configuration files are open to direct modifications. Third-parties are encouraged to write their own Controllers, to extend both infrastructure and end-user Kubernetes features. Webhooks let you set up cluster rules to enforce policies and adapt to changing conditions. For more ideas on how to extend Kubernetes clusters, see Extending Kubernetes. • Portable — Many organizations have standardized their operations on Kubernetes because it allows them to manage all of their application needs in the same way. Developers can use the same pipelines to build and store containerized applications. Those applications can then be deployed to Kubernetes clusters running on-premises, in clouds, on point-of-sales terminals in restaurants, or on IOT devices dispersed across company’s remote sites. Its open source nature makes it possible for people to develop these special Kubernetes distributions, along will tools needed to manage them. Managing Kubernetes Kubernetes source code is freely available, so with your own equipment you could install and manage Kubernetes yourself. However, self-managing Kubernetes requires deep operational expertise and takes time and effort to maintain. For those reasons, most people deploying production workloads choose a cloud provider (such as Amazon EKS) or on-premises provider (such as Amazon EKS Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS"} +{"global_id": 494, "doc_id": "eks", "chunk_id": "11", "question_id": 3, "question": "Why do many organizations standardize their operations on Kubernetes?", "answer_span": "Many organizations have standardized their operations on Kubernetes because it allows them to manage all of their application needs in the same way.", "chunk": "command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like to meet your needs. APIs and configuration files are open to direct modifications. Third-parties are encouraged to write their own Controllers, to extend both infrastructure and end-user Kubernetes features. Webhooks let you set up cluster rules to enforce policies and adapt to changing conditions. For more ideas on how to extend Kubernetes clusters, see Extending Kubernetes. • Portable — Many organizations have standardized their operations on Kubernetes because it allows them to manage all of their application needs in the same way. Developers can use the same pipelines to build and store containerized applications. Those applications can then be deployed to Kubernetes clusters running on-premises, in clouds, on point-of-sales terminals in restaurants, or on IOT devices dispersed across company’s remote sites. Its open source nature makes it possible for people to develop these special Kubernetes distributions, along will tools needed to manage them. 
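The extension mechanisms listed above usually begin with a CustomResourceDefinition that a third-party Controller then acts on. A minimal hypothetical sketch (the Backup resource and its schedule field are invented for illustration):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: backups.example.com          # hypothetical resource
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: backups
        singular: backup
        kind: Backup
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    schedule:
                      type: string       # e.g. a cron expression a custom controller would act on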
Managing Kubernetes Kubernetes source code is freely available, so with your own equipment you could install and manage Kubernetes yourself. However, self-managing Kubernetes requires deep operational expertise and takes time and effort to maintain. For those reasons, most people deploying production workloads choose a cloud provider (such as Amazon EKS) or on-premises provider (such as Amazon EKS Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS"} +{"global_id": 495, "doc_id": "eks", "chunk_id": "11", "question_id": 4, "question": "What do most people deploying production workloads choose for managing Kubernetes?", "answer_span": "most people deploying production workloads choose a cloud provider (such as Amazon EKS) or on-premises provider (such as Amazon EKS Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts.", "chunk": "command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like to meet your needs. APIs and configuration files are open to direct modifications. Third-parties are encouraged to write their own Controllers, to extend both infrastructure and end-user Kubernetes features. Webhooks let you set up cluster rules to enforce policies and adapt to changing conditions. For more ideas on how to extend Kubernetes clusters, see Extending Kubernetes. • Portable — Many organizations have standardized their operations on Kubernetes because it allows them to manage all of their application needs in the same way. Developers can use the same pipelines to build and store containerized applications. Those applications can then be deployed to Kubernetes clusters running on-premises, in clouds, on point-of-sales terminals in restaurants, or on IOT devices dispersed across company’s remote sites. Its open source nature makes it possible for people to develop these special Kubernetes distributions, along will tools needed to manage them. Managing Kubernetes Kubernetes source code is freely available, so with your own equipment you could install and manage Kubernetes yourself. However, self-managing Kubernetes requires deep operational expertise and takes time and effort to maintain. For those reasons, most people deploying production workloads choose a cloud provider (such as Amazon EKS) or on-premises provider (such as Amazon EKS Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. 
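With a managed distribution like Amazon EKS, standing up a cluster reduces to a short declaration while the provider runs the control plane. A hypothetical eksctl sketch; the name, region, and pinned version are placeholders:

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: prod-cluster       # hypothetical
      region: us-east-1        # hypothetical
      version: "1.31"          # a tested Kubernetes version distributed by Amazon EKS
    managedNodeGroups:
      - name: mng-default
        instanceType: m5.large
        desiredCapacity: 2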
This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS"} +{"global_id": 496, "doc_id": "eks", "chunk_id": "12", "question_id": 1, "question": "What does Amazon EKS allow you to do regarding hardware?", "answer_span": "With Amazon EKS, this means that you can consume the best cloud resources offered by AWS, including compute instances (Amazon Elastic Compute Cloud), your own private environment (Amazon VPC), central identity and permissions management (IAM), and storage (Amazon EBS).", "chunk": "Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS (Amazon EKS) can save you on upfront costs. With Amazon EKS, this means that you can consume the best cloud resources offered by AWS, including compute instances (Amazon Elastic Compute Cloud), your own private environment (Amazon VPC), central identity and permissions management (IAM), and storage (Amazon EBS).
AWS manages the computers, networks, data centers, and all the other physical components needed to run Kubernetes. Likewise, you don’t have to plan your data center to handle the maximum capacity on your highest-demand days. For Amazon EKS Anywhere, or other on-premises Kubernetes clusters, you are responsible for managing the infrastructure used in your Kubernetes deployments, but you can still rely on AWS to help you keep Kubernetes up to date. • Control plane management — Amazon EKS manages the security and availability of the AWS-hosted Kubernetes control plane, which is responsible for scheduling containers, managing the availability of applications, and other key tasks, so you can focus on your application workloads. If your cluster breaks, AWS should have the means to restore your cluster to a running state. For Amazon EKS Anywhere, you would manage the control plane yourself. • Tested upgrades — When you upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running"} +{"global_id": 497, "doc_id": "eks", "chunk_id": "12", "question_id": 2, "question": "What is the responsibility of users for Amazon EKS Anywhere?", "answer_span": "For Amazon EKS Anywhere, or other on-premises Kubernetes clusters, you are responsible for managing the infrastructure used in your Kubernetes deployments.", "chunk": "Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS (Amazon EKS) can save you on upfront costs. With Amazon EKS, this means that you can consume the best cloud resources offered by AWS, including compute instances (Amazon Elastic Compute Cloud), your own private environment (Amazon VPC), central identity and permissions management (IAM), and storage (Amazon EBS). AWS manages the computers, networks, data centers, and all the other physical components needed to run Kubernetes. Likewise, you don’t have to plan your data center to handle the maximum capacity on your highest-demand days. For Amazon EKS Anywhere, or other on-premises Kubernetes clusters, you are responsible for managing the infrastructure used in your Kubernetes deployments, but you can still rely on AWS to help you keep Kubernetes up to date. • Control plane management — Amazon EKS manages the security and availability of the AWS-hosted Kubernetes control plane, which is responsible for scheduling containers, managing the availability of applications, and other key tasks, so you can focus on your application workloads. If your cluster breaks, AWS should have the means to restore your cluster to a running state. For Amazon EKS Anywhere, you would manage the control plane yourself. • Tested upgrades — When you upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions.
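The tested-upgrades bullet above can be exercised from the command line. A minimal sketch assuming an existing cluster named my-cluster in us-west-2 (both hypothetical names) and the eksctl CLI:

    # Upgrade the EKS control plane one minor version; nodes are upgraded separately.
    eksctl upgrade cluster --name my-cluster --region us-west-2 --approve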
• Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running"} +{"global_id": 498, "doc_id": "eks", "chunk_id": "12", "question_id": 3, "question": "What does Amazon EKS manage regarding the control plane?", "answer_span": "Amazon EKS manages the security and availability of the AWS-hosted Kubernetes control plane, which is responsible for scheduling containers, managing the availability of applications, and other key tasks.", "chunk": "Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS (Amazon EKS) can save you on upfront costs. With Amazon EKS, this means that you can consume the best cloud resources offered by AWS, including compute instances (Amazon Elastic Compute Cloud), your own private environment (Amazon VPC), central identity and permissions management (IAM), and storage (Amazon EBS). AWS manages the computers, networks, data centers, and all the other physical components needed to run Kubernetes. Likewise, you don’t have to plan your data center to handle the maximum capacity on your highest-demand days. For Amazon EKS Anywhere, or other on-premises Kubernetes clusters, you are responsible for managing the infrastructure used in your Kubernetes deployments, but you can still rely on AWS to help you keep Kubernetes up to date. • Control plane management — Amazon EKS manages the security and availability of the AWS-hosted Kubernetes control plane, which is responsible for scheduling containers, managing the availability of applications, and other key tasks, so you can focus on your application workloads. If your cluster breaks, AWS should have the means to restore your cluster to a running state. For Amazon EKS Anywhere, you would manage the control plane yourself. • Tested upgrades — When you upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions."} +{"global_id": 499, "doc_id": "eks", "chunk_id": "12", "question_id": 4, "question": "What can users rely on when upgrading their clusters?", "answer_span": "When you upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions.", "chunk": "Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS"} +{"global_id": 500, "doc_id": "eks", "chunk_id": "13", "question_id": 1, "question": "What services can you rely on to upgrade your clusters?", "answer_span": "you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions.", "chunk": "upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running of your workloads. Instead of building and managing those add-ons yourself, AWS provides the section called “Amazon EKS add-ons” that you can use with your clusters. Amazon EKS Anywhere provides Curated Packages that include builds of many popular open source projects. So you don’t have to build the software yourself or manage critical security patches, bug fixes, or upgrades. Likewise, if the defaults meet your needs, it’s typical for very little configuration of those add-ons to be needed. See the section called “Extend Clusters” for details on extending your cluster with add-ons.
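The Amazon EKS add-ons workflow described above maps onto a few AWS CLI v2 calls. A sketch assuming a cluster named my-cluster (a hypothetical name):

    # List add-ons already installed on the cluster.
    aws eks list-addons --cluster-name my-cluster
    # Install a managed add-on; AWS supplies and patches the build.
    aws eks create-addon --cluster-name my-cluster --addon-name vpc-cni
    # Show which add-on versions are compatible with a given Kubernetes version.
    aws eks describe-addon-versions --addon-name coredns --kubernetes-version 1.30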
Kubernetes in action The following diagram shows key activities you would do as a Kubernetes Admin or Application Developer to create and use a Kubernetes cluster. In the process, it illustrates how Kubernetes components interact with each other, using the AWS cloud as the example of the underlying cloud provider. A Kubernetes Admin creates the Kubernetes cluster using a tool specific to the type of provider on which the cluster will be built. This example uses the AWS cloud as the provider, which offers the managed Kubernetes service called Amazon EKS. The managed service automatically allocates the resources needed to create the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon"} +{"global_id": 501, "doc_id": "eks", "chunk_id": "13", "question_id": 2, "question": "What does AWS provide to help with add-ons for Kubernetes?", "answer_span": "AWS provides the section called “Amazon EKS add-ons” that you can use with your clusters.", "chunk": "upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running of your workloads. Instead of building and managing those add-ons yourself, AWS provides the section called “Amazon EKS add-ons” that you can use with your clusters. Amazon EKS Anywhere provides Curated Packages that include builds of many popular open source projects. So you don’t have to build the software yourself or manage critical security patches, bug fixes, or upgrades. Likewise, if the defaults meet your needs, it’s typical for very little configuration of those add-ons to be needed. See the section called “Extend Clusters” for details on extending your cluster with add-ons. Kubernetes in action The following diagram shows key activities you would do as a Kubernetes Admin or Application Developer to create and use a Kubernetes cluster. In the process, it illustrates how Kubernetes components interact with each other, using the AWS cloud as the example of the underlying cloud provider. A Kubernetes Admin creates the Kubernetes cluster using a tool specific to the type of provider on which the cluster will be built. This example uses the AWS cloud as the provider, which offers the managed Kubernetes service called Amazon EKS. The managed service automatically allocates the resources needed to create the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management.
The managed service also sees that the control plane services have places to run and allocates zero or more Amazon"} +{"global_id": 502, "doc_id": "eks", "chunk_id": "13", "question_id": 3, "question": "What does Amazon EKS Anywhere provide for managing software?", "answer_span": "Amazon EKS Anywhere provides Curated Packages that include builds of many popular open source projects.", "chunk": "upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running of your workloads. Instead of building and managing those add-ons yourself, AWS provides the section called “Amazon EKS add-ons” that you can use with your clusters. Amazon EKS Anywhere provides Curated Packages that include builds of many popular open source projects. So you don’t have to build the software yourself or manage critical security patches, bug fixes, or upgrades. Likewise, if the defaults meet your needs, it’s typical for very little configuration of those add-ons to be needed. See the section called “Extend Clusters” for details on extending your cluster with add-ons. Kubernetes in action The following diagram shows key activities you would do as a Kubernetes Admin or Application Developer to create and use a Kubernetes cluster. In the process, it illustrates how Kubernetes components interact with each other, using the AWS cloud as the example of the underlying cloud provider. A Kubernetes Admin creates the Kubernetes cluster using a tool specific to the type of provider on which the cluster will be built. This example uses the AWS cloud as the provider, which offers the managed Kubernetes service called Amazon EKS. The managed service automatically allocates the resources needed to create the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon"} +{"global_id": 503, "doc_id": "eks", "chunk_id": "13", "question_id": 4, "question": "What does the managed service Amazon EKS automatically allocate?", "answer_span": "The managed service automatically allocates the resources needed to create the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management.", "chunk": "upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running of your workloads. Instead of building and managing those add-ons yourself, AWS provides the section called “Amazon EKS add-ons” that you can use with your clusters. Amazon EKS Anywhere provides Curated Packages that include builds of many popular open source projects. So you don’t have to build the software yourself or manage critical security patches, bug fixes, or upgrades. Likewise, if the defaults meet your needs, it’s typical for very little configuration of those add-ons to be needed.
See the section called “Extend Clusters” for details on extending your cluster with add-ons. Kubernetes in action The following diagram shows key activities you would do as a Kubernetes Admin or Application Developer to create and use a Kubernetes cluster. In the process, it illustrates how Kubernetes components interact with each other, using the AWS cloud as the example of the underlying cloud provider. A Kubernetes Admin creates the Kubernetes cluster using a tool specific to the type of provider on which the cluster will be built. This example uses the AWS cloud as the provider, which offers the managed Kubernetes service called Amazon EKS. The managed service automatically allocates the resources needed to create the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon"} +{"global_id": 504, "doc_id": "eks", "chunk_id": "14", "question_id": 1, "question": "What does the managed service allocate for running workloads?", "answer_span": "allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads.", "chunk": "the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads. AWS manages one Amazon VPC itself for the control plane, while the other Amazon VPC contains the customer nodes that run workloads. Many of the Kubernetes Admin’s tasks going forward are done using Kubernetes tools such as kubectl. That tool makes requests for services directly to the cluster’s control plane. The ways that queries and changes are made to the cluster are then very similar to the ways you would do them on any Kubernetes cluster. An application developer wanting to deploy workloads to this cluster can perform several tasks. The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster. AWS offers the Amazon Elastic Container Registry (Amazon ECR) for that purpose. To run the application, the developer can create YAML-formatted configuration files that tell the cluster how to run the application, including which containers to pull from the registry and how to wrap those containers in Pods. The control plane (scheduler) schedules the containers to one or more nodes and the container runtime on each node actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world.
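The build-and-push step described above looks roughly like the following with Docker and Amazon ECR. The repository name my-app, region us-west-2, and account ID 111122223333 are placeholders, not values from the source:

    # Create a repository and authenticate Docker to ECR.
    aws ecr create-repository --repository-name my-app
    aws ecr get-login-password --region us-west-2 | \
      docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-west-2.amazonaws.com
    # Build, tag, and push the image the cluster will pull.
    docker build -t my-app:v1 .
    docker tag my-app:v1 111122223333.dkr.ecr.us-west-2.amazonaws.com/my-app:v1
    docker push 111122223333.dkr.ecr.us-west-2.amazonaws.com/my-app:v1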
With that all done, someone wanting to"} +{"global_id": 505, "doc_id": "eks", "chunk_id": "14", "question_id": 2, "question": "What tool does the Kubernetes Admin use to make requests for services?", "answer_span": "That tool makes requests for services directly to the cluster’s control plane.", "chunk": "the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads. AWS manages one Amazon VPC itself for the control plane, while the other Amazon VPC contains the customer nodes that run workloads. Many of the Kubernetes Admin’s tasks going forward are done using Kubernetes tools such as kubectl. That tool makes requests for services directly to the cluster’s control plane. The ways that queries and changes are made to the cluster are then very similar to the ways you would do them on any Kubernetes cluster. An application developer wanting to deploy workloads to this cluster can perform several tasks. The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster. AWS offers the Amazon Elastic Container Registry (Amazon ECR) for that purpose. Why Kubernetes? 13 Amazon EKS User Guide To run the application, the developer can create YAML-formatted configuration files that tell the cluster how to run the application, including which containers to pull from the registry and how to wrap those containers in Pods. The control plane (scheduler) schedules the containers to one or more nodes and the container runtime on each node actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to"} +{"global_id": 506, "doc_id": "eks", "chunk_id": "14", "question_id": 3, "question": "What must an application developer do to deploy workloads to the cluster?", "answer_span": "The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster.", "chunk": "the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads. AWS manages one Amazon VPC itself for the control plane, while the other Amazon VPC contains the customer nodes that run workloads. Many of the Kubernetes Admin’s tasks going forward are done using Kubernetes tools such as kubectl. That tool makes requests for services directly to the cluster’s control plane. The ways that queries and changes are made to the cluster are then very similar to the ways you would do them on any Kubernetes cluster. An application developer wanting to deploy workloads to this cluster can perform several tasks. 
The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster. AWS offers the Amazon Elastic Container Registry (Amazon ECR) for that purpose. To run the application, the developer can create YAML-formatted configuration files that tell the cluster how to run the application, including which containers to pull from the registry and how to wrap those containers in Pods. The control plane (scheduler) schedules the containers to one or more nodes and the container runtime on each node actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to"} +{"global_id": 506, "doc_id": "eks", "chunk_id": "14", "question_id": 3, "question": "What must an application developer do to deploy workloads to the cluster?", "answer_span": "The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster.", "chunk": "the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads. AWS manages one Amazon VPC itself for the control plane, while the other Amazon VPC contains the customer nodes that run workloads. Many of the Kubernetes Admin’s tasks going forward are done using Kubernetes tools such as kubectl. That tool makes requests for services directly to the cluster’s control plane. The ways that queries and changes are made to the cluster are then very similar to the ways you would do them on any Kubernetes cluster. An application developer wanting to deploy workloads to this cluster can perform several tasks. The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster. AWS offers the Amazon Elastic Container Registry (Amazon ECR) for that purpose. To run the application, the developer can create YAML-formatted configuration files that tell the cluster how to run the application, including which containers to pull from the registry and how to wrap those containers in Pods. The control plane (scheduler) schedules the containers to one or more nodes and the container runtime on each node actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to"} +{"global_id": 507, "doc_id": "eks", "chunk_id": "14", "question_id": 4, "question": "What does the control plane do with the containers?", "answer_span": "The control plane (scheduler) schedules the containers to one or more nodes and the container runtime on each node actually pulls and runs the needed containers.", "chunk": "the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads. AWS manages one Amazon VPC itself for the control plane, while the other Amazon VPC contains the customer nodes that run workloads. Many of the Kubernetes Admin’s tasks going forward are done using Kubernetes tools such as kubectl. That tool makes requests for services directly to the cluster’s control plane. The ways that queries and changes are made to the cluster are then very similar to the ways you would do them on any Kubernetes cluster. An application developer wanting to deploy workloads to this cluster can perform several tasks. The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster. AWS offers the Amazon Elastic Container Registry (Amazon ECR) for that purpose. To run the application, the developer can create YAML-formatted configuration files that tell the cluster how to run the application, including which containers to pull from the registry and how to wrap those containers in Pods. The control plane (scheduler) schedules the containers to one or more nodes and the container runtime on each node actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world.
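A minimal sketch of the YAML-formatted configuration described above: a Deployment that wraps the container in replicated Pods, plus a Service that exposes them through a cloud load balancer. The names my-app and the image URI are illustrative placeholders:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: 111122223333.dkr.ecr.us-west-2.amazonaws.com/my-app:v1
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: LoadBalancer
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080
    EOF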
With that all done, someone wanting to"} +{"global_id": 508, "doc_id": "eks", "chunk_id": "15", "question_id": 1, "question": "What can a developer set up to balance traffic to available containers?", "answer_span": "The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world.", "chunk": "actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to use the application can connect to the application endpoint to access it. The following sections go through details of each of these features, from the perspective of Kubernetes Clusters and Workloads. Clusters If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted. You should also know what the components are that make up a cluster and what you need to do to maintain those components. Tools for managing clusters handle the overlap between the Kubernetes services and the underlying hardware provider. For that reason, automation of these tasks tend to be done by the Kubernetes provider (such as Amazon EKS or Amazon EKS Anywhere) using tools that are specific to the provider. For example, to start an Amazon EKS cluster you can use eksctl create cluster, while for Amazon EKS Anywhere you can use eksctl anywhere create cluster. Note that while these commands create a Kubernetes cluster, they are specific to the provider and are not part of the Kubernetes project itself. Cluster creation and management tools The Kubernetes project offers tools for creating a Kubernetes cluster manually. So if you want to install Kubernetes on a single machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported"} +{"global_id": 509, "doc_id": "eks", "chunk_id": "15", "question_id": 2, "question": "What should someone managing Kubernetes clusters know?", "answer_span": "If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted.", "chunk": "actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to use the application can connect to the application endpoint to access it. The following sections go through details of each of these features, from the perspective of Kubernetes Clusters and Workloads. Clusters If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted. You should also know what the components are that make up a cluster and what you need to do to maintain those components. Tools for managing clusters handle the overlap between the Kubernetes services and the underlying hardware provider. 
For that reason, automation of these tasks tends to be done by the Kubernetes provider (such as Amazon EKS or Amazon EKS Anywhere) using tools that are specific to the provider. For example, to start an Amazon EKS cluster you can use eksctl create cluster, while for Amazon EKS Anywhere you can use eksctl anywhere create cluster. Note that while these commands create a Kubernetes cluster, they are specific to the provider and are not part of the Kubernetes project itself. Cluster creation and management tools The Kubernetes project offers tools for creating a Kubernetes cluster manually. So if you want to install Kubernetes on a single machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported"} +{"global_id": 509, "doc_id": "eks", "chunk_id": "15", "question_id": 2, "question": "What should someone managing Kubernetes clusters know?", "answer_span": "If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted.", "chunk": "actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to use the application can connect to the application endpoint to access it. The following sections go through details of each of these features, from the perspective of Kubernetes Clusters and Workloads. Clusters If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted. You should also know what the components are that make up a cluster and what you need to do to maintain those components. Tools for managing clusters handle the overlap between the Kubernetes services and the underlying hardware provider. For that reason, automation of these tasks tends to be done by the Kubernetes provider (such as Amazon EKS or Amazon EKS Anywhere) using tools that are specific to the provider. For example, to start an Amazon EKS cluster you can use eksctl create cluster, while for Amazon EKS Anywhere you can use eksctl anywhere create cluster. Note that while these commands create a Kubernetes cluster, they are specific to the provider and are not part of the Kubernetes project itself. Cluster creation and management tools The Kubernetes project offers tools for creating a Kubernetes cluster manually. So if you want to install Kubernetes on a single machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported"} +{"global_id": 510, "doc_id": "eks", "chunk_id": "15", "question_id": 3, "question": "What tools can be used to create a Kubernetes cluster manually?", "answer_span": "So if you want to install Kubernetes on a single machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools.", "chunk": "actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to use the application can connect to the application endpoint to access it. The following sections go through details of each of these features, from the perspective of Kubernetes Clusters and Workloads. Clusters If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted. You should also know what the components are that make up a cluster and what you need to do to maintain those components. Tools for managing clusters handle the overlap between the Kubernetes services and the underlying hardware provider. For that reason, automation of these tasks tends to be done by the Kubernetes provider (such as Amazon EKS or Amazon EKS Anywhere) using tools that are specific to the provider. For example, to start an Amazon EKS cluster you can use eksctl create cluster, while for Amazon EKS Anywhere you can use eksctl anywhere create cluster. Note that while these commands create a Kubernetes cluster, they are specific to the provider and are not part of the Kubernetes project itself. Cluster creation and management tools The Kubernetes project offers tools for creating a Kubernetes cluster manually. So if you want to install Kubernetes on a single machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools.
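The two provider-specific commands named above, spelled out. The cluster name, region, node count, and spec file are hypothetical values for illustration:

    # Managed cluster in AWS Cloud.
    eksctl create cluster --name my-cluster --region us-west-2 --nodes 2
    # On-premises cluster from a declarative spec file.
    eksctl anywhere create cluster -f eksa-cluster.yaml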
To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported"} +{"global_id": 511, "doc_id": "eks", "chunk_id": "15", "question_id": 4, "question": "What is the purpose of automation in managing Kubernetes clusters?", "answer_span": "For that reason, automation of these tasks tends to be done by the Kubernetes provider (such as Amazon EKS or Amazon EKS Anywhere) using tools that are specific to the provider.", "chunk": "actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to use the application can connect to the application endpoint to access it. The following sections go through details of each of these features, from the perspective of Kubernetes Clusters and Workloads. Clusters If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted. You should also know what the components are that make up a cluster and what you need to do to maintain those components. Tools for managing clusters handle the overlap between the Kubernetes services and the underlying hardware provider. For that reason, automation of these tasks tends to be done by the Kubernetes provider (such as Amazon EKS or Amazon EKS Anywhere) using tools that are specific to the provider. For example, to start an Amazon EKS cluster you can use eksctl create cluster, while for Amazon EKS Anywhere you can use eksctl anywhere create cluster. Note that while these commands create a Kubernetes cluster, they are specific to the provider and are not part of the Kubernetes project itself. Cluster creation and management tools The Kubernetes project offers tools for creating a Kubernetes cluster manually. So if you want to install Kubernetes on a single machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported"} +{"global_id": 512, "doc_id": "eks", "chunk_id": "16", "question_id": 1, "question": "What tools can be used to create and manage Kubernetes clusters?", "answer_span": "you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools.", "chunk": "machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported by an established Kubernetes provider, such as Amazon EKS or Amazon EKS Anywhere. In AWS Cloud, you can create Amazon EKS clusters using CLI tools, such as eksctl, or more declarative tools, such as Terraform (see Amazon EKS Blueprints for Terraform). You can also create a cluster from the AWS Management Console. See Amazon EKS features for a list of what you get with Amazon EKS.
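For reference, the three manual-install tools mentioned in the chunk above each bootstrap a cluster with a single command; the cluster name dev is a placeholder:

    kind create cluster --name dev   # cluster running inside Docker containers
    minikube start                   # single-node local cluster
    sudo kubeadm init                # bootstrap a control plane on this machine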
Kubernetes responsibilities that Amazon EKS takes on for you include: • Managed control plane — AWS makes sure that the Amazon EKS cluster is available and scalable because it manages the control plane for you and makes it available across AWS Availability Zones. • Node management — Instead of manually adding nodes, you can have Amazon EKS create nodes automatically as needed, using Managed Node Groups (see the section called “Managed node groups”) or Karpenter. Managed Node Groups have integrations with Kubernetes Cluster Autoscaling. Using node management tools, you can take advantage of cost savings, with things like Spot Instances and node consolidation, and availability, using Scheduling features to set how workloads are deployed and nodes are selected. • Cluster networking — Using CloudFormation templates, eksctl sets up networking between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communication between Pods in Amazon EKS is done using Amazon EKS Pod Identities"} +{"global_id": 513, "doc_id": "eks", "chunk_id": "16", "question_id": 2, "question": "What is the responsibility of users for Amazon EKS Anywhere?", "answer_span": "For Amazon EKS Anywhere, or other on-premises Kubernetes clusters, you are responsible for managing the infrastructure used in your Kubernetes deployments.", "chunk": "machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported by an established Kubernetes provider, such as Amazon EKS or Amazon EKS Anywhere. In AWS Cloud, you can create Amazon EKS clusters using CLI tools, such as eksctl, or more declarative tools, such as Terraform (see Amazon EKS Blueprints for Terraform). You can also create a cluster from the AWS Management Console. See Amazon EKS features for a list of what you get with Amazon EKS. Kubernetes responsibilities that Amazon EKS takes on for you include: • Managed control plane — AWS makes sure that the Amazon EKS cluster is available and scalable because it manages the control plane for you and makes it available across AWS Availability Zones. • Node management — Instead of manually adding nodes, you can have Amazon EKS create nodes automatically as needed, using Managed Node Groups (see the section called “Managed node groups”) or Karpenter. Managed Node Groups have integrations with Kubernetes Cluster Autoscaling. Using node management tools, you can take advantage of cost savings, with things like Spot Instances and node consolidation, and availability, using Scheduling features to set how workloads are deployed and nodes are selected. • Cluster networking — Using CloudFormation templates, eksctl sets up networking between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details.
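A sketch of the node-management bullet above: a managed node group using Spot capacity across two instance types. All names and sizes are hypothetical:

    eksctl create nodegroup --cluster my-cluster --name ng-spot \
      --managed --spot --instance-types t3.large,t3a.large \
      --nodes 2 --nodes-min 1 --nodes-max 5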
Communication between Pods in Amazon EKS is done using Amazon EKS Pod Identities"} +{"global_id": 514, "doc_id": "eks", "chunk_id": "16", "question_id": 3, "question": "How can you create Amazon EKS clusters in AWS Cloud?", "answer_span": "In AWS Cloud, you can create Amazon EKS clusters using CLI tools, such as eksctl, or more declarative tools, such as Terraform.", "chunk": "machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported by an established Kubernetes provider, such as Amazon EKS or Amazon EKS Anywhere. In AWS Cloud, you can create Amazon EKS clusters using CLI tools, such as eksctl, or more declarative tools, such as Terraform (see Amazon EKS Blueprints for Terraform). You can also create a cluster from the AWS Management Console. See Amazon EKS features for a list of what you get with Amazon EKS. Kubernetes responsibilities that Amazon EKS takes on for you include: • Managed control plane — AWS makes sure that the Amazon EKS cluster is available and scalable because it manages the control plane for you and makes it available across AWS Availability Zones. • Node management — Instead of manually adding nodes, you can have Amazon EKS create nodes automatically as needed, using Managed Node Groups (see the section called “Managed node groups”) or Karpenter. Managed Node Groups have integrations with Kubernetes Cluster Autoscaling. Using node management tools, you can take advantage of cost savings, with things like Spot Instances and node consolidation, and availability, using Scheduling features to set how workloads are deployed and nodes are selected. • Cluster networking — Using CloudFormation templates, eksctl sets up networking between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communication between Pods in Amazon EKS is done using Amazon EKS Pod Identities"} +{"global_id": 515, "doc_id": "eks", "chunk_id": "16", "question_id": 4, "question": "What is the purpose of Managed Node Groups in Amazon EKS?", "answer_span": "Instead of manually adding nodes, you can have Amazon EKS create nodes automatically as needed, using Managed Node Groups.", "chunk": "machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported by an established Kubernetes provider, such as Amazon EKS or Amazon EKS Anywhere. In AWS Cloud, you can create Amazon EKS clusters using CLI tools, such as eksctl, or more declarative tools, such as Terraform (see Amazon EKS Blueprints for Terraform). You can also create a cluster from the AWS Management Console. See Amazon EKS features for a list of what you get with Amazon EKS. Kubernetes responsibilities that Amazon EKS takes on for you include: • Managed control plane — AWS makes sure that the Amazon EKS cluster is available and scalable because it manages the control plane for you and makes it available across AWS Availability Zones.
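The Pod Identity mechanism referenced above associates a Kubernetes service account with an IAM role. A sketch with the AWS CLI; the cluster, namespace, service account, and role ARN are placeholder values:

    aws eks create-pod-identity-association \
      --cluster-name my-cluster \
      --namespace default \
      --service-account my-app-sa \
      --role-arn arn:aws:iam::111122223333:role/my-app-role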
• Node management — Instead of manually adding nodes, you can have Amazon EKS create nodes automatically as needed, using Managed Node Groups (see the section called “Managed node groups”) or Karpenter. Managed Node Groups have integrations with Kubernetes Cluster Autoscaling. Using node management tools, you can take advantage of cost savings, with things like Spot Instances and node consolidation, and availability, using Scheduling features to set how workloads are deployed and nodes are selected. • Cluster networking — Using CloudFormation templates, eksctl sets up networking between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communication between Pods in Amazon EKS is done using Amazon EKS Pod Identities"} +{"global_id": 516, "doc_id": "eks", "chunk_id": "17", "question_id": 1, "question": "What does Amazon EKS save you from having to build?", "answer_span": "Amazon EKS saves you from having to build and add software components that are commonly used to support Kubernetes clusters.", "chunk": "between control plane and data plane (node) components in the Kubernetes cluster.
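The default add-ons discussed in this region of the guide are visible directly in a running cluster; kube-proxy and the VPC CNI (the aws-node DaemonSet) run on every node, while CoreDNS runs as a Deployment:

    kubectl get daemonsets -n kube-system          # shows aws-node and kube-proxy
    kubectl get deployment coredns -n kube-system  # shows the CoreDNS add-on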
It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communication between Pods in Amazon EKS is done using Amazon EKS Pod Identities (see the section called “Pod Identity”), which provides a means of letting Pods tap into AWS cloud methods of managing credentials and permissions. • Add-Ons — Amazon EKS saves you from having to build and add software components that are commonly used to support Kubernetes clusters. For example, when you create an Amazon EKS cluster from the AWS Management Console, it automatically adds the Amazon EKS kube-proxy (the section called “kube-proxy”), Amazon VPC CNI plugin for Kubernetes (the section called “Amazon VPC CNI”), and CoreDNS (the section called “CoreDNS”) add-ons. See the section called “Amazon EKS add-ons” for more on these add-ons, including a list of which are available. To run your clusters on your own on-premises computers and networks, Amazon offers Amazon EKS Anywhere. Instead of the AWS Cloud being the provider, you have the choice of running Amazon EKS Anywhere on VMware vSphere, bare metal (Tinkerbell provider), Snow, CloudStack, or Nutanix platforms using your own equipment. Amazon EKS Anywhere is based on the same Amazon EKS Distro software that is used by Amazon EKS. However, Amazon EKS Anywhere relies on different implementations of the Kubernetes Cluster API (CAPI) interface to manage the full lifecycle of the machines in an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document)."} +{"global_id": 517, "doc_id": "eks", "chunk_id": "17", "question_id": 2, "question": "What is the purpose of Amazon EKS Pod Identities?", "answer_span": "Communication between Pods in Amazon EKS is done using Amazon EKS Pod Identities, which provides a means of letting Pods tap into AWS cloud methods of managing credentials and permissions.", "chunk": "between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communication between Pods in Amazon EKS is done using Amazon EKS Pod Identities (see the section called “Pod Identity”), which provides a means of letting Pods tap into AWS cloud methods of managing credentials and permissions. • Add-Ons — Amazon EKS saves you from having to build and add software components that are commonly used to support Kubernetes clusters. For example, when you create an Amazon EKS cluster from the AWS Management Console, it automatically adds the Amazon EKS kube-proxy (the section called “kube-proxy”), Amazon VPC CNI plugin for Kubernetes (the section called “Amazon VPC CNI”), and CoreDNS (the section called “CoreDNS”) add-ons. See the section called “Amazon EKS add-ons” for more on these add-ons, including a list of which are available. To run your clusters on your own on-premises computers and networks, Amazon offers Amazon EKS Anywhere. Instead of the AWS Cloud being the provider, you have the choice of running Amazon EKS Anywhere on VMware vSphere, bare metal (Tinkerbell provider), Snow, CloudStack, or Nutanix platforms using your own equipment."} +{"global_id": 518, "doc_id": "eks", "chunk_id": "17", "question_id": 3, "question": "What platforms can Amazon EKS Anywhere run on?", "answer_span": "You have the choice of running Amazon EKS Anywhere on VMware vSphere, bare metal (Tinkerbell provider), Snow, CloudStack, or Nutanix platforms using your own equipment.", "chunk": "between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communication between Pods in Amazon EKS is done using Amazon EKS Pod Identities (see the section called “Pod Identity”), which provides a means of letting Pods tap into AWS cloud methods of managing credentials and permissions. • Add-Ons — Amazon EKS saves you from having to build and add software components that are commonly used to support Kubernetes clusters. For example, when you create an Amazon EKS cluster from the AWS Management Console, it automatically adds the Amazon EKS kube-proxy (the section called “kube-proxy”), Amazon VPC CNI plugin for Kubernetes (the section called “Amazon VPC CNI”), and CoreDNS (the section called “CoreDNS”) add-ons. See the section called “Amazon EKS add-ons” for more on these add-ons, including a list of which are available. To run your clusters on your own on-premises computers and networks, Amazon offers Amazon EKS Anywhere. Instead of the AWS Cloud being the provider, you have the choice of running Amazon EKS Anywhere on VMware vSphere, bare metal (Tinkerbell provider), Snow, CloudStack, or Nutanix platforms using your own equipment.
Amazon EKS Anywhere is based on the same Amazon EKS Distro software that is used by Amazon EKS. However, Amazon EKS Anywhere relies on different implementations of the Kubernetes Cluster API (CAPI) interface to manage the full lifecycle of the machines in an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document)."} +{"global_id": 519, "doc_id": "eks", "chunk_id": "17", "question_id": 4, "question": "What added responsibility comes with running Amazon EKS Anywhere?", "answer_span": "Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data.", "chunk": "between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communication between Pods in Amazon EKS is done using Amazon EKS Pod Identities (see the section called “Pod Identity”), which provides a means of letting Pods tap into AWS cloud methods of managing credentials and permissions. • Add-Ons — Amazon EKS saves you from having to build and add software components that are commonly used to support Kubernetes clusters. For example, when you create an Amazon EKS cluster from the AWS Management Console, it automatically adds the Amazon EKS kube-proxy (the section called “kube-proxy”), Amazon VPC CNI plugin for Kubernetes (the section called “Amazon VPC CNI”), and CoreDNS (the section called “CoreDNS”) add-ons. See the section called “Amazon EKS add-ons” for more on these add-ons, including a list of which are available. To run your clusters on your own on-premises computers and networks, Amazon offers Amazon EKS Anywhere. Instead of the AWS Cloud being the provider, you have the choice of running Amazon EKS Anywhere on VMware vSphere, bare metal (Tinkerbell provider), Snow, CloudStack, or Nutanix platforms using your own equipment. Amazon EKS Anywhere is based on the same Amazon EKS Distro software that is used by Amazon EKS. However, Amazon EKS Anywhere relies on different implementations of the Kubernetes Cluster API (CAPI) interface to manage the full lifecycle of the machines in an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document)."} +{"global_id": 520, "doc_id": "eks", "chunk_id": "18", "question_id": 1, "question": "What is the responsibility of managing the control plane in an Amazon EKS Anywhere cluster?", "answer_span": "Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data.", "chunk": "an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Cluster components Kubernetes cluster components are divided into two major areas: control plane and worker nodes.
Control Plane Components manage the cluster and provide access to its APIs. Worker nodes (sometimes just referred to as Nodes) provide the places where the actual workloads are run. Node Components consist of services that run on each node to communicate with the control plane and run containers. The set of worker nodes for your cluster is referred to as the Data Plane. Control plane The control plane consists of a set of services that manage the cluster. These services may all be running on a single computer or may be spread across multiple computers. Internally, these are referred to as Control Plane Instances (CPIs). How CPIs are run depends on the size of the cluster and requirements for high availability. As demand increases in the cluster, a control plane service can scale to provide more instances of that service, with requests being load balanced between the instances. Tasks that components of the Kubernetes control plane perform include: • Communicating with cluster components (API server) — The API server (kube-apiserver) exposes the Kubernetes API so requests to the cluster can be made from both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within"} +{"global_id": 521, "doc_id": "eks", "chunk_id": "18", "question_id": 2, "question": "What are the two major areas into which Kubernetes cluster components are divided?", "answer_span": "Kubernetes cluster components are divided into two major areas: control plane and worker nodes.", "chunk": "an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Cluster components Kubernetes cluster components are divided into two major areas: control plane and worker nodes. Control Plane Components manage the cluster and provide access to its APIs. Worker nodes (sometimes just referred to as Nodes) provide the places where the actual workloads are run. Node Components consist of services that run on each node to communicate with the control plane and run containers. The set of worker nodes for your cluster is referred to as the Data Plane. Control plane The control plane consists of a set of services that manage the cluster. These services may all be running on a single computer or may be spread across multiple computers. Internally, these are referred to as Control Plane Instances (CPIs). How CPIs are run depends on the size of the cluster and requirements for high availability. As demand increases in the cluster, a control plane service can scale to provide more instances of that service, with requests being load balanced between the instances. Tasks that components of the Kubernetes control plane perform include: • Communicating with cluster components (API server) — The API server (kube-apiserver) exposes the Kubernetes API so requests to the cluster can be made from both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod.
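Each kubectl command described above is just an HTTP request to the API server. A small illustration; the Pod name nginx is an arbitrary example:

    kubectl run nginx --image=nginx                      # ask the API server to schedule a Pod
    kubectl get pod nginx -o wide                        # query state back through the API server
    kubectl get --raw /api/v1/namespaces/default/pods    # same API, addressed by raw HTTP path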
Likewise, requests can be made from the API server to components within"} +{"global_id": 522, "doc_id": "eks", "chunk_id": "18", "question_id": 3, "question": "What is referred to as the Data Plane in a Kubernetes cluster?", "answer_span": "The set of worker nodes for your cluster is referred to as the Data Plane.", "chunk": "an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User Guide Cluster components Kubernetes cluster components are divided into two major areas: control plane and worker nodes. Control Plane Components manage the cluster and provide access to its APIs. Worker nodes (sometimes just referred to as Nodes) provide the places where the actual workloads are run. Node Components consist of services that run on each node to communicate with the control plane and run containers. The set of worker nodes for your cluster is referred to as the Data Plane. Control plane The control plane consists of a set of services that manage the cluster. These services may all be running on a single computer or may be spread across multiple computers. Internally, these are referred to as Control Plane Instances (CPIs). How CPIs are run depends on the size of the cluster and requirements for high availability. As demand increase in the cluster, a control plane service can scale to provide more instances of that service, with requests being load balanced between the instances. Tasks that components of the Kubernetes control plane performs include: • Communicating with cluster components (API server) — The API server (kube-apiserver) exposes the Kubernetes API so requests to the cluster can be made from both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within"} +{"global_id": 523, "doc_id": "eks", "chunk_id": "18", "question_id": 4, "question": "What does the API server (kube-apiserver) expose?", "answer_span": "The API server (kube-apiserver) exposes the Kubernetes API so requests to the cluster can be made from both inside and outside of the cluster.", "chunk": "an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User Guide Cluster components Kubernetes cluster components are divided into two major areas: control plane and worker nodes. Control Plane Components manage the cluster and provide access to its APIs. Worker nodes (sometimes just referred to as Nodes) provide the places where the actual workloads are run. Node Components consist of services that run on each node to communicate with the control plane and run containers. The set of worker nodes for your cluster is referred to as the Data Plane. Control plane The control plane consists of a set of services that manage the cluster. These services may all be running on a single computer or may be spread across multiple computers. Internally, these are referred to as Control Plane Instances (CPIs). 
How CPIs are run depends on the size of the cluster and requirements for high availability. As demand increase in the cluster, a control plane service can scale to provide more instances of that service, with requests being load balanced between the instances. Tasks that components of the Kubernetes control plane performs include: • Communicating with cluster components (API server) — The API server (kube-apiserver) exposes the Kubernetes API so requests to the cluster can be made from both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within"} +{"global_id": 524, "doc_id": "eks", "chunk_id": "19", "question_id": 1, "question": "What types of requests can come from outside commands regarding a cluster's objects?", "answer_span": "requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod.", "chunk": "both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within the cluster, such as a query to the kubelet service for the status of a Pod. • Store data about the cluster (etcd key value store) — The etcd service provides the critical role of keeping track of the current state of the cluster. If the etcd service became inaccessible, you would be unable to update or query the status of the cluster, though workloads would continue to run for a while. For that reason, critical clusters typically have multiple, loadbalanced instances of the etcd service running at a time and do periodic backups of the etcd key value store in case of data loss or corruption. Keep in mind that, in Amazon EKS, this is all handled for you automatically by default. Amazon EKS Anywhere provides instruction for etcd backup and restore. See the etcd Data Model to learn how etcd manages data. • Schedule Pods to nodes (Scheduler) — Requests to start or stop a Pod in Kubernetes are directed to the Kubernetes Scheduler (kube-scheduler). Because a cluster could have multiple nodes that are capable of running the Pod, it is up to the Scheduler to choose which node (or nodes, in the case of replicas) the Pod should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed"} +{"global_id": 525, "doc_id": "eks", "chunk_id": "19", "question_id": 2, "question": "What role does the etcd service play in a cluster?", "answer_span": "The etcd service provides the critical role of keeping track of the current state of the cluster.", "chunk": "both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within the cluster, such as a query to the kubelet service for the status of a Pod. 
• Store data about the cluster (etcd key value store) — The etcd service provides the critical role of keeping track of the current state of the cluster. If the etcd service became inaccessible, you would be unable to update or query the status of the cluster, though workloads would continue to run for a while. For that reason, critical clusters typically have multiple, load-balanced instances of the etcd service running at a time and do periodic backups of the etcd key value store in case of data loss or corruption. Keep in mind that, in Amazon EKS, this is all handled for you automatically by default. Amazon EKS Anywhere provides instructions for etcd backup and restore. See the etcd Data Model to learn how etcd manages data. • Schedule Pods to nodes (Scheduler) — Requests to start or stop a Pod in Kubernetes are directed to the Kubernetes Scheduler (kube-scheduler). Because a cluster could have multiple nodes that are capable of running the Pod, it is up to the Scheduler to choose which node (or nodes, in the case of replicas) the Pod should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other provisions.
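A minimal sketch of the Scheduler failing to place a Pod: the Pod below asks for a node label no node carries, so it stays Pending (the Pod name and label are hypothetical):

```bash
# Request a Pod constrained to nodes carrying a label that no node has.
kubectl run picky --image=nginx \
  --overrides='{"spec":{"nodeSelector":{"disktype":"ssd"}}}'

# The Pod remains Pending; its events show a FailedScheduling message.
kubectl get pod picky
kubectl describe pod picky
```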
Those provisions could include enabling services such as Managed Node Groups (the section called “Managed"} +{"global_id": 527, "doc_id": "eks", "chunk_id": "19", "question_id": 4, "question": "Who is responsible for scheduling Pods to nodes in Kubernetes?", "answer_span": "Requests to start or stop a Pod in Kubernetes are directed to the Kubernetes Scheduler (kube-scheduler).", "chunk": "both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within the cluster, such as a query to the kubelet service for the status of a Pod. • Store data about the cluster (etcd key value store) — The etcd service provides the critical role of keeping track of the current state of the cluster. If the etcd service became inaccessible, you would be unable to update or query the status of the cluster, though workloads would continue to run for a while. For that reason, critical clusters typically have multiple, loadbalanced instances of the etcd service running at a time and do periodic backups of the etcd key value store in case of data loss or corruption. Keep in mind that, in Amazon EKS, this is all handled for you automatically by default. Amazon EKS Anywhere provides instruction for etcd backup and restore. See the etcd Data Model to learn how etcd manages data. • Schedule Pods to nodes (Scheduler) — Requests to start or stop a Pod in Kubernetes are directed to the Kubernetes Scheduler (kube-scheduler). Because a cluster could have multiple nodes that are capable of running the Pod, it is up to the Scheduler to choose which node (or nodes, in the case of replicas) the Pod should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed"} +{"global_id": 528, "doc_id": "eks", "chunk_id": "20", "question_id": 1, "question": "What happens if there is not enough available capacity to run the requested Pod on an existing node?", "answer_span": "the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions.", "chunk": "should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed node groups”) or Karpenter that can automatically start up new nodes to handle the workloads. • Keep components in desired state (Controller Manager) — The Kubernetes Controller Manager runs as a daemon process (kube-controller-manager) to watch the state of the cluster and make changes to the cluster to reestablish the expected states. In particular, there are several controllers that watch over different Kubernetes objects, which includes a statefulsetcontroller, endpoint-controller, cronjob-controller, node-controller, and others. • Manage cloud resources (Cloud Controller Manager) — Interactions between Kubernetes and the cloud provider that carries out requests for the underlying data center resources are handled by the Cloud Controller Manager (cloud-controller-manager). 
Controllers managed by the Cloud Controller Manager can include a route controller (for setting up cloud network routes), service controller (for using cloud load balancing services), and node lifecycle controller (to keep nodes in sync with Kubernetes throughout their lifecycles). Worker Nodes (data plane) For a single-node Kubernetes cluster, workloads run on the same machine as the control plane. However, a more standard configuration is to have one or more separate computer systems (Nodes) that are dedicated to running Kubernetes workloads. When you first create a Kubernetes cluster, some cluster creation tools allow you to configure a certain number of nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones).
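For example, eksctl (one such cluster creation tool) lets you set the initial node count; a hedged sketch, with the cluster name and Region as placeholders:

```bash
# Create an EKS cluster whose managed node group starts with three nodes.
eksctl create cluster --name demo-cluster --region us-west-2 --nodes 3
```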
Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) —"} +{"global_id": 530, "doc_id": "eks", "chunk_id": "20", "question_id": 3, "question": "What does the Cloud Controller Manager handle?", "answer_span": "Interactions between Kubernetes and the cloud provider that carries out requests for the underlying data center resources are handled by the Cloud Controller Manager (cloud-controller-manager).", "chunk": "should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed node groups”) or Karpenter that can automatically start up new nodes to handle the workloads. • Keep components in desired state (Controller Manager) — The Kubernetes Controller Manager runs as a daemon process (kube-controller-manager) to watch the state of the cluster and make changes to the cluster to reestablish the expected states. In particular, there are several controllers that watch over different Kubernetes objects, which includes a statefulsetcontroller, endpoint-controller, cronjob-controller, node-controller, and others. • Manage cloud resources (Cloud Controller Manager) — Interactions between Kubernetes and the cloud provider that carries out requests for the underlying data center resources are handled by the Cloud Controller Manager (cloud-controller-manager). Controllers managed by the Cloud Controller Manager can include a route controller (for setting up cloud network routes), service controller (for using cloud load balancing services), and node lifecycle controller (to keep nodes in sync with Kubernetes throughout their lifecycles). Worker Nodes (data plane) For a single-node Kubernetes cluster, workloads run on the same machine as the control plane. However, a more standard configuration is to have one or more separate computer systems (Nodes) that are dedicated to running Kubernetes workloads. When you first create a Kubernetes cluster, some cluster creation tools allow you to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) —"} +{"global_id": 531, "doc_id": "eks", "chunk_id": "20", "question_id": 4, "question": "What is a more standard configuration for running Kubernetes workloads?", "answer_span": "a more standard configuration is to have one or more separate computer systems (Nodes) that are dedicated to running Kubernetes workloads.", "chunk": "should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed node groups”) or Karpenter that can automatically start up new nodes to handle the workloads. • Keep components in desired state (Controller Manager) — The Kubernetes Controller Manager runs as a daemon process (kube-controller-manager) to watch the state of the cluster and make changes to the cluster to reestablish the expected states. 
In particular, there are several controllers that watch over different Kubernetes objects, which include a statefulset-controller, endpoint-controller, cronjob-controller, node-controller, and others. • Manage cloud resources (Cloud Controller Manager) — Interactions between Kubernetes and the cloud provider that carries out requests for the underlying data center resources are handled by the Cloud Controller Manager (cloud-controller-manager). Controllers managed by the Cloud Controller Manager can include a route controller (for setting up cloud network routes), service controller (for using cloud load balancing services), and node lifecycle controller (to keep nodes in sync with Kubernetes throughout their lifecycles). Worker Nodes (data plane) For a single-node Kubernetes cluster, workloads run on the same machine as the control plane. However, a more standard configuration is to have one or more separate computer systems (Nodes) that are dedicated to running Kubernetes workloads. When you first create a Kubernetes cluster, some cluster creation tools allow you to configure a certain number of nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) —"} +{"global_id": 532, "doc_id": "eks", "chunk_id": "21", "question_id": 1, "question": "What is the role of the kubelet in managing nodes?", "answer_span": "The API server communicates with the kubelet service running on each node to make sure that the node is properly registered and Pods requested by the Scheduler are running.", "chunk": "to configure a certain number of nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) — The API server communicates with the kubelet service running on each node to make sure that the node is properly registered and Pods requested by the Scheduler are running. The kubelet can read the Pod manifests and set up storage volumes or other features needed by the Pods on the local system. It can also check on the health of the locally running containers. • Run containers on a node (container runtime) — The Container Runtime on each node manages the containers requested for each Pod assigned to the node. That means that it can pull container images from the appropriate registry, run the container, stop it, and respond to queries about the container. The default container runtime is containerd. As of Kubernetes 1.24, the special integration of Docker (dockershim) that could be used as the container runtime was dropped from Kubernetes. While you can still use Docker to test and run containers on your local system, to use Docker with Kubernetes you would now have to Install Docker Engine on each node to use it with Kubernetes. • Manage networking between containers (kube-proxy) — To be able to support communication between Pods, Kubernetes uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place.
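A minimal Service sketch tying this together (all names and ports are illustrative); kube-proxy on each node programs the forwarding rules that make the Service's stable address reach the matching Pods:

```bash
# Expose Pods labeled app=web behind one stable cluster IP.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80          # the Service's port
      targetPort: 8080  # the port the Pods listen on
EOF

kubectl get service web
```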
Extend Clusters There are some services you can add to Kubernetes"} +{"global_id": 533, "doc_id": "eks", "chunk_id": "21", "question_id": 2, "question": "What does the Container Runtime manage on each node?", "answer_span": "The Container Runtime on each node manages the containers requested for each Pod assigned to the node.", "chunk": "to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) — The API server communicates with the kubelet service running on each node to make sure that the node is properly registered and Pods requested by the Scheduler are running. The kubelet can read the Pod manifests and set up storage volumes or other features needed by the Pods on the local system. It can also check on the health of the locally running containers. • Run containers on a node (container runtime) — The Container Runtime on each node manages the containers requested for each Pod assigned to the node. That means that it can pull container images from the appropriate registry, run the container, stop it, and responds to queries about the container. The default container runtime is containerd. As of Kubernetes 1.24, the special integration of Docker (dockershim) that could be used as the container runtime was Clusters 17 Amazon EKS User Guide dropped from Kubernetes. While you can still use Docker to test and run containers on your local system, to use Docker with Kubernetes you would now have to Install Docker Engine on each node to use it with Kubernetes. • Manage networking between containers (kube-proxy) — To be able to support communication between Pods, Kubernetes uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes"} +{"global_id": 534, "doc_id": "eks", "chunk_id": "21", "question_id": 3, "question": "What is the default container runtime mentioned in the text?", "answer_span": "The default container runtime is containerd.", "chunk": "to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) — The API server communicates with the kubelet service running on each node to make sure that the node is properly registered and Pods requested by the Scheduler are running. The kubelet can read the Pod manifests and set up storage volumes or other features needed by the Pods on the local system. It can also check on the health of the locally running containers. • Run containers on a node (container runtime) — The Container Runtime on each node manages the containers requested for each Pod assigned to the node. That means that it can pull container images from the appropriate registry, run the container, stop it, and responds to queries about the container. The default container runtime is containerd. As of Kubernetes 1.24, the special integration of Docker (dockershim) that could be used as the container runtime was Clusters 17 Amazon EKS User Guide dropped from Kubernetes. 
While you can still use Docker to test and run containers on your local system, to use Docker with Kubernetes you would now have to Install Docker Engine on each node to use it with Kubernetes. • Manage networking between containers (kube-proxy) — To be able to support communication between Pods, Kubernetes uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes"} +{"global_id": 535, "doc_id": "eks", "chunk_id": "21", "question_id": 4, "question": "How does Kubernetes support communication between Pods?", "answer_span": "Kubernetes uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods.", "chunk": "to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) — The API server communicates with the kubelet service running on each node to make sure that the node is properly registered and Pods requested by the Scheduler are running. The kubelet can read the Pod manifests and set up storage volumes or other features needed by the Pods on the local system. It can also check on the health of the locally running containers. • Run containers on a node (container runtime) — The Container Runtime on each node manages the containers requested for each Pod assigned to the node. That means that it can pull container images from the appropriate registry, run the container, stop it, and responds to queries about the container. The default container runtime is containerd. As of Kubernetes 1.24, the special integration of Docker (dockershim) that could be used as the container runtime was Clusters 17 Amazon EKS User Guide dropped from Kubernetes. While you can still use Docker to test and run containers on your local system, to use Docker with Kubernetes you would now have to Install Docker Engine on each node to use it with Kubernetes. • Manage networking between containers (kube-proxy) — To be able to support communication between Pods, Kubernetes uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes"} +{"global_id": 536, "doc_id": "eks", "chunk_id": "22", "question_id": 1, "question": "What feature is used to set up Pod networks that track IP addresses and ports?", "answer_span": "uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods.", "chunk": "uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes to support the cluster, but are not run in the control plane. These services often run directly on nodes in the kube-system namespace or in its own namespace (as is often done with third-party service providers). 
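To see which of these cluster services are present, you can list the workloads in kube-system; a minimal sketch:

```bash
# Add-on services typically run as Deployments or DaemonSets in kube-system.
kubectl get deployments,daemonsets -n kube-system
```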
A common example is the CoreDNS service, which provides DNS services to the cluster. Refer to Discovering builtin services for information on how to see which cluster services are running in kube-system on your cluster. There are different types of add-ons you can consider adding to your clusters. To keep your clusters healthy, you can add observability features (see Monitor clusters) that allow you to do things like logging, auditing, and metrics. With this information, you can troubleshoot problems that occur, often through the same observability interfaces. Examples of these types of services include Amazon GuardDuty, CloudWatch (see the section called “Amazon CloudWatch”), AWS Distro for OpenTelemetry, Amazon VPC CNI plugin for Kubernetes (see the section called “Amazon VPC CNI”), and Grafana Kubernetes Monitoring. For storage (see App data storage), add-ons to Amazon EKS include Amazon Elastic Block Store CSI Driver (see the section called “Amazon EBS”), Amazon Elastic File System CSI Driver (see the section called “Amazon EFS”), and several third-party storage add-ons such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of"} +{"global_id": 537, "doc_id": "eks", "chunk_id": "22", "question_id": 2, "question": "What runs on every node to allow communication between Pods?", "answer_span": "The kube-proxy service runs on every node to allow that communication between Pods to take place.", "chunk": "uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes to support the cluster, but are not run in the control plane. These services often run directly on nodes in the kube-system namespace or in its own namespace (as is often done with third-party service providers). A common example is the CoreDNS service, which provides DNS services to the cluster. Refer to Discovering builtin services for information on how to see which cluster services are running in kube-system on your cluster. There are different types of add-ons you can consider adding to your clusters. To keep your clusters healthy, you can add observability features (see Monitor clusters) that allow you to do things like logging, auditing, and metrics. With this information, you can troubleshoot problems that occur, often through the same observability interfaces. Examples of these types of services include Amazon GuardDuty, CloudWatch (see the section called “Amazon CloudWatch”), AWS Distro for OpenTelemetry, Amazon VPC CNI plugin for Kubernetes (see the section called “Amazon VPC CNI”), and Grafana Kubernetes Monitoring. For storage (see App data storage), add-ons to Amazon EKS include Amazon Elastic Block Store CSI Driver (see the section called “Amazon EBS”), Amazon Elastic File System CSI Driver (see the section called “Amazon EFS”), and several third-party storage add-ons such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. 
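Amazon EKS add-ons can also be managed from the AWS CLI; a hedged sketch, with the cluster name as a placeholder (the EBS CSI add-on may additionally need an IAM role in a real cluster):

```bash
# List add-ons installed on a cluster, then install the EBS CSI driver.
aws eks list-addons --cluster-name demo-cluster
aws eks create-addon --cluster-name demo-cluster --addon-name aws-ebs-csi-driver
```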
Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of"} +{"global_id": 538, "doc_id": "eks", "chunk_id": "22", "question_id": 3, "question": "What is a common example of a service that provides DNS services to the cluster?", "answer_span": "A common example is the CoreDNS service, which provides DNS services to the cluster.", "chunk": "uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes to support the cluster, but are not run in the control plane. These services often run directly on nodes in the kube-system namespace or in its own namespace (as is often done with third-party service providers). A common example is the CoreDNS service, which provides DNS services to the cluster. Refer to Discovering builtin services for information on how to see which cluster services are running in kube-system on your cluster. There are different types of add-ons you can consider adding to your clusters. To keep your clusters healthy, you can add observability features (see Monitor clusters) that allow you to do things like logging, auditing, and metrics. With this information, you can troubleshoot problems that occur, often through the same observability interfaces. Examples of these types of services include Amazon GuardDuty, CloudWatch (see the section called “Amazon CloudWatch”), AWS Distro for OpenTelemetry, Amazon VPC CNI plugin for Kubernetes (see the section called “Amazon VPC CNI”), and Grafana Kubernetes Monitoring. For storage (see App data storage), add-ons to Amazon EKS include Amazon Elastic Block Store CSI Driver (see the section called “Amazon EBS”), Amazon Elastic File System CSI Driver (see the section called “Amazon EFS”), and several third-party storage add-ons such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of"} +{"global_id": 539, "doc_id": "eks", "chunk_id": "22", "question_id": 4, "question": "What does Kubernetes define as a Workload?", "answer_span": "Kubernetes defines a Workload as 'an application running on Kubernetes.'", "chunk": "uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes to support the cluster, but are not run in the control plane. These services often run directly on nodes in the kube-system namespace or in its own namespace (as is often done with third-party service providers). A common example is the CoreDNS service, which provides DNS services to the cluster. Refer to Discovering builtin services for information on how to see which cluster services are running in kube-system on your cluster. There are different types of add-ons you can consider adding to your clusters. To keep your clusters healthy, you can add observability features (see Monitor clusters) that allow you to do things like logging, auditing, and metrics. 
With this information, you can troubleshoot problems that occur, often through the same observability interfaces. Examples of these types of services include Amazon GuardDuty, CloudWatch (see the section called “Amazon CloudWatch”), AWS Distro for OpenTelemetry, Amazon VPC CNI plugin for Kubernetes (see the section called “Amazon VPC CNI”), and Grafana Kubernetes Monitoring. For storage (see App data storage), add-ons to Amazon EKS include Amazon Elastic Block Store CSI Driver (see the section called “Amazon EBS”), Amazon Elastic File System CSI Driver (see the section called “Amazon EFS”), and several third-party storage add-ons such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of"} +{"global_id": 540, "doc_id": "eks", "chunk_id": "23", "question_id": 1, "question": "What is defined as a Workload in Kubernetes?", "answer_span": "Kubernetes defines a Workload as \"an application running on Kubernetes.\"", "chunk": "such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of a set of microservices run as Containers in Pods, or could be run as a batch job or other type of applications. The job of Kubernetes is to make sure that the requests that you make for those objects to be set up or deployed are carried out. As someone deploying applications, you Workloads 18 Amazon EKS User Guide should learn about how containers are built, how Pods are defined, and what methods you can use for deploying them. Containers The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod . A Pod represents a way of holding the components of an application as well as defining specifications that describe the Pod’s attributes. Contrast this to something like an RPM or Deb package, which packages together software for a Linux system, but does not itself run as an entity. Because the Pod is the smallest deployable unit, it typically holds a single container. However, multiple containers can be in a Pod in cases where the containers are tightly coupled. For example, a web server container might be packaged in a Pod with a sidecar type of container that may provide logging, monitoring, or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers"} +{"global_id": 541, "doc_id": "eks", "chunk_id": "23", "question_id": 2, "question": "What is the most basic element of an application workload in Kubernetes?", "answer_span": "The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod.", "chunk": "such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. 
Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of a set of microservices run as Containers in Pods, or could be run as a batch job or other type of applications. The job of Kubernetes is to make sure that the requests that you make for those objects to be set up or deployed are carried out. As someone deploying applications, you should learn about how containers are built, how Pods are defined, and what methods you can use for deploying them. Containers The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod. A Pod represents a way of holding the components of an application as well as defining specifications that describe the Pod’s attributes. Contrast this to something like an RPM or Deb package, which packages together software for a Linux system, but does not itself run as an entity. Because the Pod is the smallest deployable unit, it typically holds a single container. However, multiple containers can be in a Pod in cases where the containers are tightly coupled. For example, a web server container might be packaged in a Pod with a sidecar type of container that may provide logging, monitoring, or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers"} +{"global_id": 541, "doc_id": "eks", "chunk_id": "23", "question_id": 2, "question": "What is the most basic element of an application workload in Kubernetes?", "answer_span": "The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod.", "chunk": "such as Amazon FSx for NetApp ONTAP CSI driver (see the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of a set of microservices run as Containers in Pods, or could be run as a batch job or other type of applications. The job of Kubernetes is to make sure that the requests that you make for those objects to be set up or deployed are carried out. As someone deploying applications, you should learn about how containers are built, how Pods are defined, and what methods you can use for deploying them. Containers The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod. A Pod represents a way of holding the components of an application as well as defining specifications that describe the Pod’s attributes. Contrast this to something like an RPM or Deb package, which packages together software for a Linux system, but does not itself run as an entity. Because the Pod is the smallest deployable unit, it typically holds a single container. However, multiple containers can be in a Pod in cases where the containers are tightly coupled. For example, a web server container might be packaged in a Pod with a sidecar type of container that may provide logging, monitoring, or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node.
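A minimal two-container Pod of the kind just described (names and images are illustrative); both containers are scheduled together and share the Pod's network:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web
      image: nginx
    # Sidecar container; shares the Pod's network namespace and lifecycle.
    - name: log-agent
      image: busybox
      command: ["sh", "-c", "tail -f /dev/null"]
EOF
```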
Likewise, all containers in a Pod share the same environment, with the containers"} +{"global_id": 543, "doc_id": "eks", "chunk_id": "23", "question_id": 4, "question": "Can multiple containers be in a Pod, and under what circumstances?", "answer_span": "However, multiple containers can be in a Pod in cases where the containers are tightly coupled.", "chunk": "such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of a set of microservices run as Containers in Pods, or could be run as a batch job or other type of applications. The job of Kubernetes is to make sure that the requests that you make for those objects to be set up or deployed are carried out. As someone deploying applications, you Workloads 18 Amazon EKS User Guide should learn about how containers are built, how Pods are defined, and what methods you can use for deploying them. Containers The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod . A Pod represents a way of holding the components of an application as well as defining specifications that describe the Pod’s attributes. Contrast this to something like an RPM or Deb package, which packages together software for a Linux system, but does not itself run as an entity. Because the Pod is the smallest deployable unit, it typically holds a single container. However, multiple containers can be in a Pod in cases where the containers are tightly coupled. For example, a web server container might be packaged in a Pod with a sidecar type of container that may provide logging, monitoring, or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers"} +{"global_id": 544, "doc_id": "eks", "chunk_id": "24", "question_id": 1, "question": "What ensures that both containers in a Pod always run on the same node?", "answer_span": "In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node.", "chunk": "or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers in a Pod running as though they are in the same isolated host. The effect of this is that the containers share a single IP address that provides access to the Pod and the containers can communicate with each other as though they were running on their own localhost. Pod specifications (PodSpec) define the desired state of the Pod. You can deploy an individual Pod or multiple Pods by using workload resources to manage Pod Templates. Workload resources include Deployments (to manage multiple Pod Replicas), StatefulSets (to deploy Pods that need to be unique, such as database Pods), and DaemonSets (where a Pod needs to run continuously on every node). More on those later. While a Pod is the smallest unit you deploy, a container is the smallest unit that you build and manage. 
Building Containers The Pod is really just a structure around one or more containers, with each container itself holding the file system, executables, configuration files, libraries, and other components to actually run the application. Because a company called Docker Inc. first popularized containers, some people refer to containers as Docker Containers. However, the Open Container Initiative has since defined container runtimes, images, and distribution methods for the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside"} +{"global_id": 545, "doc_id": "eks", "chunk_id": "24", "question_id": 2, "question": "What do Pod specifications (PodSpec) define?", "answer_span": "Pod specifications (PodSpec) define the desired state of the Pod.", "chunk": "or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers in a Pod running as though they are in the same isolated host. The effect of this is that the containers share a single IP address that provides access to the Pod and the containers can communicate with each other as though they were running on their own localhost. Pod specifications (PodSpec) define the desired state of the Pod. You can deploy an individual Pod or multiple Pods by using workload resources to manage Pod Templates. Workload resources include Deployments (to manage multiple Pod Replicas), StatefulSets (to deploy Pods that need to be unique, such as database Pods), and DaemonSets (where a Pod needs to run continuously on every node). More on those later. While a Pod is the smallest unit you deploy, a container is the smallest unit that you build and manage. Building Containers The Pod is really just a structure around one or more containers, with each container itself holding the file system, executables, configuration files, libraries, and other components to actually run the application. Because a company called Docker Inc. first popularized containers, some people refer to containers as Docker Containers. However, the Open Container Initiative has since defined container runtimes, images, and distribution methods for the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside"} +{"global_id": 546, "doc_id": "eks", "chunk_id": "24", "question_id": 3, "question": "What is the smallest unit you deploy?", "answer_span": "While a Pod is the smallest unit you deploy, a container is the smallest unit that you build and manage.", "chunk": "or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers in a Pod running as though they are in the same isolated host. 
The effect of this is that the containers share a single IP address that provides access to the Pod and the containers can communicate with each other as though they were running on their own localhost. Pod specifications (PodSpec) define the desired state of the Pod. You can deploy an individual Pod or multiple Pods by using workload resources to manage Pod Templates. Workload resources include Deployments (to manage multiple Pod Replicas), StatefulSets (to deploy Pods that need to be unique, such as database Pods), and DaemonSets (where a Pod needs to run continuously on every node). More on those later. While a Pod is the smallest unit you deploy, a container is the smallest unit that you build and manage. Building Containers The Pod is really just a structure around one or more containers, with each container itself holding the file system, executables, configuration files, libraries, and other components to actually run the application. Because a company called Docker Inc. first popularized containers, some people refer to containers as Docker Containers. However, the Open Container Initiative has since defined container runtimes, images, and distribution methods for the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside"} +{"global_id": 547, "doc_id": "eks", "chunk_id": "24", "question_id": 4, "question": "What is typically used to start building a container?", "answer_span": "When you build a container, you typically start with a Dockerfile (literally named that).", "chunk": "or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers in a Pod running as though they are in the same isolated host. The effect of this is that the containers share a single IP address that provides access to the Pod and the containers can communicate with each other as though they were running on their own localhost. Pod specifications (PodSpec) define the desired state of the Pod. You can deploy an individual Pod or multiple Pods by using workload resources to manage Pod Templates. Workload resources include Deployments (to manage multiple Pod Replicas), StatefulSets (to deploy Pods that need to be unique, such as database Pods), and DaemonSets (where a Pod needs to run continuously on every node). More on those later. While a Pod is the smallest unit you deploy, a container is the smallest unit that you build and manage. Building Containers The Pod is really just a structure around one or more containers, with each container itself holding the file system, executables, configuration files, libraries, and other components to actually run the application. Because a company called Docker Inc. first popularized containers, some people refer to containers as Docker Containers. However, the Open Container Initiative has since defined container runtimes, images, and distribution methods for the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. 
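A sketch of the most common of those workload resources, a Deployment whose controller keeps three replicas of a Pod template running (all names are illustrative):

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
EOF

# The Deployment controller creates and maintains the three Pods.
kubectl get deployment web
```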
When you build a container, you typically start with a Dockerfile (literally named that). Inside"} +{"global_id": 548, "doc_id": "eks", "chunk_id": "25", "question_id": 1, "question": "What are containers often referred to as?", "answer_span": "others often refer to containers as OCI Containers, Linux Containers, or just Containers.", "chunk": "the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. When you build a container, you typically start with a Dockerfile (literally named that). Inside that Dockerfile, you identify: • A base image — A base container image is a container that is typically built from either a minimal version of an operating system’s file system (such as Red Hat Enterprise Linux or Ubuntu) or a minimal system that is enhanced to provide software to run specific types of applications (such as Node.js or Python apps). • Application software — You can add your application software to your container in much the same way you would add it to a Linux system. For example, in your Dockerfile you can run npm and yarn to install a JavaScript application or yum and dnf to install RPM packages. In other words, using a RUN command in a Dockerfile, you can run any command that is available in the file system of your base image to install software or configure software inside of the resulting container image.
• Instructions — The Dockerfile reference describes the instructions you can add to a Dockerfile when you configure it. These include instructions used to build what is in the container itself (ADD or COPY files from the local system), identify commands to execute when the container is run (CMD or ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build"} +{"global_id": 549, "doc_id": "eks", "chunk_id": "25", "question_id": 2, "question": "What is typically the starting point for building a container?", "answer_span": "you typically start with a Dockerfile (literally named that).", "chunk": "the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. When you build a container, you typically start with a Dockerfile (literally named that). 
Inside that Dockerfile, you identify: • A base image — A base container image is a container that is typically built from either a minimal version of an operating system’s file system (such as Red Hat Enterprise Linux or Ubuntu) or a minimal system that is enhanced to provide software to run specific types of applications (such as a nodejs or python apps). • Application software — You can add your application software to your container in much the same way you would add it to a Linux system. For example, in your Dockerfile you can run npm and yarn to install a Java application or yum and dnf to install RPM packages. In other words, using a RUN command in a Dockerfile, you can run any command that is available in the file system of your base image to install software or configure software inside of the resulting container image. • Instructions — The Dockerfile reference describes the instructions you can add to a Dockerfile when you configure it. These include instructions used to build what is in the container itself (ADD or COPY files from the local system), identify commands to execute when the container is run (CMD or ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build"} +{"global_id": 552, "doc_id": "eks", "chunk_id": "26", "question_id": 1, "question": "What tools are mentioned as alternatives to the docker command for building container images?", "answer_span": "other tools that are available to build container images include podman and nerdctl.", "chunk": "ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build container images include podman and nerdctl. See Building Better Container Images or Overview of Docker Build to learn about building containers. Storing Containers Once you’ve built your container image, you can store it in a container distribution registry on your workstation or on a public container registry. Running a private container registry on your workstation allows you to store container images locally, making them readily available to you. To store container images in a more public manner, you can push them to a public container registry. Public container registries provide a central location for storing and distributing container images. Examples of public container registries include the Amazon Elastic Container Registry, Red Hat Quay registry, and Docker Hub registry. When running containerized workloads on Amazon Elastic Kubernetes Service (Amazon EKS) we recommend pulling copies of Docker Official Images that are stored in Amazon Elastic Container Registry. Amazon ECR has been storing these images since 2021. You can search for popular Workloads 20 Amazon EKS User Guide container images in the Amazon ECR Public Gallery, and specifically for the Docker Hub images, you can search the Amazon ECR Docker Gallery. Running containers Because containers are built in a standard format, a container can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). 
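To make the Dockerfile walkthrough above concrete, here is a minimal illustrative sketch. Everything in it (the node base image, the npm commands, and server.js) is a hypothetical stand-in, not something taken from the guide:

# A base image: a minimal system enhanced for one type of application (here, nodejs).
FROM node:20-alpine

# Application software: a RUN command can run anything available in the base image's file system.
WORKDIR /app
COPY package.json ./
RUN npm install

# ADD or COPY bring files in from the local system.
COPY . .

# Connect the container to the system it runs on: the user to run as and the port to expose.
USER node
EXPOSE 8080

# The command to execute when the container is run (ENTRYPOINT or CMD).
ENTRYPOINT ["node", "server.js"]

Building this with docker build (or with podman or nerdctl, mentioned above) produces an image that can then be pushed to a registry.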
To test a container or just run it on your local desktop, you can use docker run or podman run commands to start"} +{"global_id": 553, "doc_id": "eks", "chunk_id": "26", "question_id": 2, "question": "What is the purpose of a private container registry?", "answer_span": "Running a private container registry on your workstation allows you to store container images locally, making them readily available to you.", "chunk": "ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build container images include podman and nerdctl. See Building Better Container Images or Overview of Docker Build to learn about building containers. Storing Containers Once you’ve built your container image, you can store it in a container distribution registry on your workstation or on a public container registry. Running a private container registry on your workstation allows you to store container images locally, making them readily available to you. To store container images in a more public manner, you can push them to a public container registry. Public container registries provide a central location for storing and distributing container images. Examples of public container registries include the Amazon Elastic Container Registry, Red Hat Quay registry, and Docker Hub registry. When running containerized workloads on Amazon Elastic Kubernetes Service (Amazon EKS) we recommend pulling copies of Docker Official Images that are stored in Amazon Elastic Container Registry. Amazon ECR has been storing these images since 2021. You can search for popular Workloads 20 Amazon EKS User Guide container images in the Amazon ECR Public Gallery, and specifically for the Docker Hub images, you can search the Amazon ECR Docker Gallery. Running containers Because containers are built in a standard format, a container can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start"} +{"global_id": 554, "doc_id": "eks", "chunk_id": "26", "question_id": 3, "question": "Which public container registries are mentioned in the text?", "answer_span": "Examples of public container registries include the Amazon Elastic Container Registry, Red Hat Quay registry, and Docker Hub registry.", "chunk": "ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build container images include podman and nerdctl. See Building Better Container Images or Overview of Docker Build to learn about building containers. Storing Containers Once you’ve built your container image, you can store it in a container distribution registry on your workstation or on a public container registry. Running a private container registry on your workstation allows you to store container images locally, making them readily available to you. To store container images in a more public manner, you can push them to a public container registry. 
Public container registries provide a central location for storing and distributing container images. Examples of public container registries include the Amazon Elastic Container Registry, Red Hat Quay registry, and Docker Hub registry. When running containerized workloads on Amazon Elastic Kubernetes Service (Amazon EKS) we recommend pulling copies of Docker Official Images that are stored in Amazon Elastic Container Registry. Amazon ECR has been storing these images since 2021. You can search for popular Workloads 20 Amazon EKS User Guide container images in the Amazon ECR Public Gallery, and specifically for the Docker Hub images, you can search the Amazon ECR Docker Gallery. Running containers Because containers are built in a standard format, a container can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start"} +{"global_id": 555, "doc_id": "eks", "chunk_id": "26", "question_id": 4, "question": "What command can be used to run a container on a local desktop?", "answer_span": "you can use docker run or podman run commands to start", "chunk": "ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build container images include podman and nerdctl. See Building Better Container Images or Overview of Docker Build to learn about building containers. Storing Containers Once you’ve built your container image, you can store it in a container distribution registry on your workstation or on a public container registry. Running a private container registry on your workstation allows you to store container images locally, making them readily available to you. To store container images in a more public manner, you can push them to a public container registry. Public container registries provide a central location for storing and distributing container images. Examples of public container registries include the Amazon Elastic Container Registry, Red Hat Quay registry, and Docker Hub registry. When running containerized workloads on Amazon Elastic Kubernetes Service (Amazon EKS) we recommend pulling copies of Docker Official Images that are stored in Amazon Elastic Container Registry. Amazon ECR has been storing these images since 2021. You can search for popular Workloads 20 Amazon EKS User Guide container images in the Amazon ECR Public Gallery, and specifically for the Docker Hub images, you can search the Amazon ECR Docker Gallery. Running containers Because containers are built in a standard format, a container can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). 
To test a container or just run it on your local desktop, you can use docker run or podman run commands to start"} +{"global_id": 556, "doc_id": "eks", "chunk_id": "27", "question_id": 1, "question": "What is required for a machine to run a container?", "answer_span": "can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm).", "chunk": "can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start up a container on the localhost. For Kubernetes, however, each worker node has a container runtime deployed and it is up to Kubernetes to request that a node run a container. Once a container has been assigned to run on a node, the node looks to see if the requested version of the container image already exists on the node. If it doesn’t, Kubernetes tells the container runtime to pull that container from the appropriate container registry, then run that container locally. Keep in mind that a container image refers to the software package that is moved around between your laptop, the container registry, and Kubernetes nodes. A container refers to a running instance of that image. Pods Once your containers are ready, working with Pods includes configuring, deploying, and making the Pods accessible. Configuring Pods When you define a Pod, you assign a set of attributes to it. Those attributes must include at least the Pod name and the container image to run. However, there are many other things you want to configure with your Pod definitions as well (see the PodSpec page for details on what can go into a Pod). These include: • Storage — When a running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device"} +{"global_id": 557, "doc_id": "eks", "chunk_id": "27", "question_id": 2, "question": "What commands can be used to start a container on the localhost?", "answer_span": "you can use docker run or podman run commands to start up a container on the localhost.", "chunk": "can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start up a container on the localhost. For Kubernetes, however, each worker node has a container runtime deployed and it is up to Kubernetes to request that a node run a container. Once a container has been assigned to run on a node, the node looks to see if the requested version of the container image already exists on the node. If it doesn’t, Kubernetes tells the container runtime to pull that container from the appropriate container registry, then run that container locally. Keep in mind that a container image refers to the software package that is moved around between your laptop, the container registry, and Kubernetes nodes. A container refers to a running instance of that image. Pods Once your containers are ready, working with Pods includes configuring, deploying, and making the Pods accessible. 
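The passage above contrasts docker run or podman run on a desktop with Kubernetes asking a node to run a container. A minimal sketch of such a request, written as a Pod manifest; the Pod and image names are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: web-pod                 # the required Pod name
spec:
  containers:
    - name: web
      image: registry.example.com/web:1.0   # hypothetical container image reference

When this is applied (for example, with kubectl apply -f pod.yaml), Kubernetes assigns the Pod to a node; if the requested image version is not already on that node, the node's container runtime pulls it from the registry before running it, as described above.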
Configuring Pods When you define a Pod, you assign a set of attributes to it. Those attributes must include at least the Pod name and the container image to run. However, there are many other things you want to configure with your Pod definitions as well (see the PodSpec page for details on what can go into a Pod). These include: • Storage — When a running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device"} +{"global_id": 558, "doc_id": "eks", "chunk_id": "27", "question_id": 3, "question": "What does Kubernetes do when a container image is not found on a node?", "answer_span": "If it doesn’t, Kubernetes tells the container runtime to pull that container from the appropriate container registry, then run that container locally.", "chunk": "can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start up a container on the localhost. For Kubernetes, however, each worker node has a container runtime deployed and it is up to Kubernetes to request that a node run a container. Once a container has been assigned to run on a node, the node looks to see if the requested version of the container image already exists on the node. If it doesn’t, Kubernetes tells the container runtime to pull that container from the appropriate container registry, then run that container locally. Keep in mind that a container image refers to the software package that is moved around between your laptop, the container registry, and Kubernetes nodes. A container refers to a running instance of that image. Pods Once your containers are ready, working with Pods includes configuring, deploying, and making the Pods accessible. Configuring Pods When you define a Pod, you assign a set of attributes to it. Those attributes must include at least the Pod name and the container image to run. However, there are many other things you want to configure with your Pod definitions as well (see the PodSpec page for details on what can go into a Pod). These include: • Storage — When a running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device"} +{"global_id": 559, "doc_id": "eks", "chunk_id": "27", "question_id": 4, "question": "What must be included when defining a Pod?", "answer_span": "Those attributes must include at least the Pod name and the container image to run.", "chunk": "can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start up a container on the localhost. For Kubernetes, however, each worker node has a container runtime deployed and it is up to Kubernetes to request that a node run a container. 
Once a container has been assigned to run on a node, the node looks to see if the requested version of the container image already exists on the node. If it doesn’t, Kubernetes tells the container runtime to pull that container from the appropriate container registry, then run that container locally. Keep in mind that a container image refers to the software package that is moved around between your laptop, the container registry, and Kubernetes nodes. A container refers to a running instance of that image. Pods Once your containers are ready, working with Pods includes configuring, deploying, and making the Pods accessible. Configuring Pods When you define a Pod, you assign a set of attributes to it. Those attributes must include at least the Pod name and the container image to run. However, there are many other things you want to configure with your Pod definitions as well (see the PodSpec page for details on what can go into a Pod). These include: • Storage — When a running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device"} +{"global_id": 560, "doc_id": "eks", "chunk_id": "28", "question_id": 1, "question": "What happens to data storage in a running container when it is stopped and deleted?", "answer_span": "data storage in that container will disappear, unless you set up more permanent storage.", "chunk": "running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device from the local computer. With one of those storage types available from your cluster, you can mount the storage volume to a selected mount point in your container’s file system. A Persistent Volume is one that continues to exist after the Pod is deleted, while an Ephemeral Volume is deleted when the Pod is deleted. If your cluster administrator created different storage classes for your cluster, you might have the Workloads 21 Amazon EKS User Guide option for choosing the attributes of the storage you use, such as whether the volume is deleted or reclaimed after use, whether it will expand if more space is needed, and even whether it meets certain performance requirements. • Secrets — By making Secrets available to containers in Pod specs, you can provide the permissions those containers need to access file systems, data bases, or other protected assets. Keys, passwords, and tokens are among the items that can be stored as secrets. Using secrets makes it so you don’t have to store this information in container images, but need only make the secrets available to running containers. Similar to Secrets are ConfigMaps. A ConfigMap tends to hold less critical information, such as key-value pairs for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. 
For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the"} +{"global_id": 561, "doc_id": "eks", "chunk_id": "28", "question_id": 2, "question": "What types of storage does Kubernetes support?", "answer_span": "Storage types include CephFS, NFS, iSCSI, and others.", "chunk": "running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device from the local computer. With one of those storage types available from your cluster, you can mount the storage volume to a selected mount point in your container’s file system. A Persistent Volume is one that continues to exist after the Pod is deleted, while an Ephemeral Volume is deleted when the Pod is deleted. If your cluster administrator created different storage classes for your cluster, you might have the Workloads 21 Amazon EKS User Guide option for choosing the attributes of the storage you use, such as whether the volume is deleted or reclaimed after use, whether it will expand if more space is needed, and even whether it meets certain performance requirements. • Secrets — By making Secrets available to containers in Pod specs, you can provide the permissions those containers need to access file systems, data bases, or other protected assets. Keys, passwords, and tokens are among the items that can be stored as secrets. Using secrets makes it so you don’t have to store this information in container images, but need only make the secrets available to running containers. Similar to Secrets are ConfigMaps. A ConfigMap tends to hold less critical information, such as key-value pairs for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the"} +{"global_id": 562, "doc_id": "eks", "chunk_id": "28", "question_id": 3, "question": "What is the difference between a Persistent Volume and an Ephemeral Volume?", "answer_span": "A Persistent Volume is one that continues to exist after the Pod is deleted, while an Ephemeral Volume is deleted when the Pod is deleted.", "chunk": "running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device from the local computer. With one of those storage types available from your cluster, you can mount the storage volume to a selected mount point in your container’s file system. A Persistent Volume is one that continues to exist after the Pod is deleted, while an Ephemeral Volume is deleted when the Pod is deleted. If your cluster administrator created different storage classes for your cluster, you might have the Workloads 21 Amazon EKS User Guide option for choosing the attributes of the storage you use, such as whether the volume is deleted or reclaimed after use, whether it will expand if more space is needed, and even whether it meets certain performance requirements. 
• Secrets — By making Secrets available to containers in Pod specs, you can provide the permissions those containers need to access file systems, data bases, or other protected assets. Keys, passwords, and tokens are among the items that can be stored as secrets. Using secrets makes it so you don’t have to store this information in container images, but need only make the secrets available to running containers. Similar to Secrets are ConfigMaps. A ConfigMap tends to hold less critical information, such as key-value pairs for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the"} +{"global_id": 563, "doc_id": "eks", "chunk_id": "28", "question_id": 4, "question": "What can be stored as secrets in Kubernetes?", "answer_span": "Keys, passwords, and tokens are among the items that can be stored as secrets.", "chunk": "running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device from the local computer. With one of those storage types available from your cluster, you can mount the storage volume to a selected mount point in your container’s file system. A Persistent Volume is one that continues to exist after the Pod is deleted, while an Ephemeral Volume is deleted when the Pod is deleted. If your cluster administrator created different storage classes for your cluster, you might have the Workloads 21 Amazon EKS User Guide option for choosing the attributes of the storage you use, such as whether the volume is deleted or reclaimed after use, whether it will expand if more space is needed, and even whether it meets certain performance requirements. • Secrets — By making Secrets available to containers in Pod specs, you can provide the permissions those containers need to access file systems, data bases, or other protected assets. Keys, passwords, and tokens are among the items that can be stored as secrets. Using secrets makes it so you don’t have to store this information in container images, but need only make the secrets available to running containers. Similar to Secrets are ConfigMaps. A ConfigMap tends to hold less critical information, such as key-value pairs for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the"} +{"global_id": 564, "doc_id": "eks", "chunk_id": "29", "question_id": 1, "question": "What can be requested for each container in terms of resources?", "answer_span": "For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the container can use.", "chunk": "for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the container can use. 
See Resource Management for Pods and Containers for examples. • Disruptions — Pods can be disrupted involuntarily (a node goes down) or voluntarily (an upgrade is desired). By configuring a Pod disruption budget, you can exert some control over how available your application remains when disruptions occur. See Specifying a Disruption Budget for your application for examples. • Namespaces — Kubernetes provides different ways to isolate Kubernetes components and workloads from each other. Running all the Pods for a particular application in the same Namespace is a common way to secure and manage those Pods together. You can create your own namespaces to use or choose to not indicate a namespace (which causes Kubernetes to use the default namespace). Kubernetes control plane components typically run in the kube-system namespace. The configuration just described is typically gathered together in a YAML file to be applied to the Kubernetes cluster. For personal Kubernetes clusters, you might just store these YAML files on your local system. However, with more critical clusters and workloads, GitOps is a popular way to automate storage and updates to both workload and Kubernetes infrastructure resources. The objects used to gather together and deploy Pod information are defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: • Stateless applications — A stateless"} +{"global_id": 565, "doc_id": "eks", "chunk_id": "29", "question_id": 2, "question": "What is a Pod disruption budget used for?", "answer_span": "By configuring a Pod disruption budget, you can exert some control over how available your application remains when disruptions occur.", "chunk": "for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits on the total amount of those resources that the container can use. See Resource Management for Pods and Containers for examples. • Disruptions — Pods can be disrupted involuntarily (a node goes down) or voluntarily (an upgrade is desired). By configuring a Pod disruption budget, you can exert some control over how available your application remains when disruptions occur. See Specifying a Disruption Budget for your application for examples. • Namespaces — Kubernetes provides different ways to isolate Kubernetes components and workloads from each other. Running all the Pods for a particular application in the same Namespace is a common way to secure and manage those Pods together. You can create your own namespaces to use or choose to not indicate a namespace (which causes Kubernetes to use the default namespace). Kubernetes control plane components typically run in the kube-system namespace. The configuration just described is typically gathered together in a YAML file to be applied to the Kubernetes cluster. For personal Kubernetes clusters, you might just store these YAML files on your local system. However, with more critical clusters and workloads, GitOps is a popular way to automate storage and updates to both workload and Kubernetes infrastructure resources. The objects used to gather together and deploy Pod information are defined by one of the following deployment methods. 
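A single illustrative PodSpec can tie the configuration attributes above together: a volume mounted from a PersistentVolumeClaim, a value drawn from a Secret, resource requests and limits, and an explicit namespace. All names and sizes here are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  namespace: my-team               # a namespace you created, instead of the default
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0    # hypothetical image
      resources:
        requests:                  # the amount of memory and CPU the container asks for
          memory: "128Mi"
          cpu: "250m"
        limits:                    # limits on the total amount it can use
          memory: "256Mi"
          cpu: "500m"
      env:
        - name: DB_PASSWORD        # a Secret made available to the container
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password
      volumeMounts:
        - name: data               # the volume mounted at a selected mount point
          mountPath: /var/lib/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc        # persistent storage that outlives the Pod

A YAML file like this is exactly the kind of configuration the text describes storing locally or managing through GitOps.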
Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: • Stateless applications — A stateless"} +{"global_id": 566, "doc_id": "eks", "chunk_id": "29", "question_id": 3, "question": "What is a common way to secure and manage Pods for a particular application?", "answer_span": "Running all the Pods for a particular application in the same Namespace is a common way to secure and manage those Pods together.", "chunk": "for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits on the total amount of those resources that the container can use. See Resource Management for Pods and Containers for examples. • Disruptions — Pods can be disrupted involuntarily (a node goes down) or voluntarily (an upgrade is desired). By configuring a Pod disruption budget, you can exert some control over how available your application remains when disruptions occur. See Specifying a Disruption Budget for your application for examples. 
• Namespaces — Kubernetes provides different ways to isolate Kubernetes components and workloads from each other. Running all the Pods for a particular application in the same Namespace is a common way to secure and manage those Pods together. You can create your own namespaces to use or choose to not indicate a namespace (which causes Kubernetes to use the default namespace). Kubernetes control plane components typically run in the kube-system namespace. The configuration just described is typically gathered together in a YAML file to be applied to the Kubernetes cluster. For personal Kubernetes clusters, you might just store these YAML files on your local system. However, with more critical clusters and workloads, GitOps is a popular way to automate storage and updates to both workload and Kubernetes infrastructure resources. The objects used to gather together and deploy Pod information are defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: • Stateless applications — A stateless"} +{"global_id": 567, "doc_id": "eks", "chunk_id": "29", "question_id": 4, "question": "What is GitOps used for in the context of Kubernetes?", "answer_span": "However, with more critical clusters and workloads, GitOps is a popular way to automate storage and updates to both workload and Kubernetes infrastructure resources.", "chunk": "for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits on the total amount of those resources that the container can use. See Resource Management for Pods and Containers for examples. • Disruptions — Pods can be disrupted involuntarily (a node goes down) or voluntarily (an upgrade is desired). By configuring a Pod disruption budget, you can exert some control over how available your application remains when disruptions occur. See Specifying a Disruption Budget for your application for examples. • Namespaces — Kubernetes provides different ways to isolate Kubernetes components and workloads from each other. Running all the Pods for a particular application in the same Namespace is a common way to secure and manage those Pods together. You can create your own namespaces to use or choose to not indicate a namespace (which causes Kubernetes to use the default namespace). Kubernetes control plane components typically run in the kube-system namespace. The configuration just described is typically gathered together in a YAML file to be applied to the Kubernetes cluster. For personal Kubernetes clusters, you might just store these YAML files on your local system. However, with more critical clusters and workloads, GitOps is a popular way to automate storage and updates to both workload and Kubernetes infrastructure resources. The objects used to gather together and deploy Pod information are defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: • Stateless applications — A stateless"} +{"global_id": 568, "doc_id": "eks", "chunk_id": "30", "question_id": 1, "question": "What is the main factor that determines the method for deploying Pods?", "answer_span": "The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods.", "chunk": "is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: • Stateless applications — A stateless application doesn’t save a client’s session data, so another session doesn’t need to refer back to what happened to a previous session. This makes it easier to just replace Pods with new ones if they become unhealthy or move them around without saving state. If you are running a stateless application (such as a web server), you can use a Deployment to deploy Pods and ReplicaSets. A ReplicaSet defines how many instances of a Pod that you want running concurrently. Although you can run a ReplicaSet directly, it is common to run replicas directly within a Deployment, to define how many replicas of a Pod should be running at a time. • Stateful applications — A stateful application is one where the identity of the Pod and the order in which Pods are launched are important. These applications need persistent storage that is stable and need to be deployed and scaled in a consistent manner. To deploy a stateful application in Kubernetes, you can use StatefulSets. An example of an application that is typically run as a StatefulSet is a database. Within a StatefulSet, you could define replicas, the Pod and its containers, storage volumes to mount, and locations in the container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. 
For example, your data center might require"} +{"global_id": 569, "doc_id": "eks", "chunk_id": "30", "question_id": 2, "question": "What characterizes a stateless application?", "answer_span": "A stateless application doesn’t save a client’s session data, so another session doesn’t need to refer back to what happened to a previous session.", "chunk": "is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: • Stateless applications — A stateless application doesn’t save a client’s session data, so another session doesn’t need to refer back to what happened to a previous session. This makes it easier to just replace Pods with new ones if they become unhealthy or move them around without saving state. If you are running a stateless application (such as a web server), you can use a Deployment to deploy Pods and ReplicaSets. A ReplicaSet defines how many instances of a Pod that you want running concurrently. Although you can run a ReplicaSet directly, it is common to run replicas directly within a Deployment, to define how many replicas of a Pod should be running at a time. • Stateful applications — A stateful application is one where the identity of the Pod and the order in which Pods are launched are important. 
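As a sketch of the stateless case just described, here is a minimal Deployment; it manages a ReplicaSet that keeps a fixed number of identical Pods running. The names and image are hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # the ReplicaSet keeps three instances of the Pod running concurrently
  selector:
    matchLabels:
      app: web
  template:                    # the Pod Template the Deployment manages
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0   # hypothetical image

Because the Pods are interchangeable, any one of them can be replaced or rescheduled without saving state.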
These applications need persistent storage that is stable and need to be deployed and scaled in a consistent manner. To deploy a stateful application in Kubernetes, you can use StatefulSets. An example of an application that is typically run as a StatefulSet is a database. Within a StatefulSet, you could define replicas, the Pod and its containers, storage volumes to mount, and locations in the container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. For example, your data center might require"} +{"global_id": 570, "doc_id": "eks", "chunk_id": "30", "question_id": 3, "question": "What is a common way to deploy Pods for stateless applications?", "answer_span": "If you are running a stateless application (such as a web server), you can use a Deployment to deploy Pods and ReplicaSets.", "chunk": "is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: • Stateless applications — A stateless application doesn’t save a client’s session data, so another session doesn’t need to refer back to what happened to a previous session. This makes it easier to just replace Pods with new ones if they become unhealthy or move them around without saving state. If you are running a stateless application (such as a web server), you can use a Deployment to deploy Pods and ReplicaSets. A ReplicaSet defines how many instances of a Pod that you want running concurrently. Although you can run a ReplicaSet directly, it is common to run replicas directly within a Deployment, to define how many replicas of a Pod should be running at a time. • Stateful applications — A stateful application is one where the identity of the Pod and the order in which Pods are launched are important. These applications need persistent storage that is stable and need to be deployed and scaled in a consistent manner. To deploy a stateful application in Kubernetes, you can use StatefulSets. An example of an application that is typically run as a StatefulSet is a database. Within a StatefulSet, you could define replicas, the Pod and its containers, storage volumes to mount, and locations in the container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. For example, your data center might require"} +{"global_id": 571, "doc_id": "eks", "chunk_id": "30", "question_id": 4, "question": "What is an example of an application that is typically run as a StatefulSet?", "answer_span": "An example of an application that is typically run as a StatefulSet is a database.", "chunk": "is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: • Stateless applications — A stateless application doesn’t save a client’s session data, so another session doesn’t need to refer back to what happened to a previous session. This makes it easier to just replace Pods with new ones if they become unhealthy or move them around without saving state. If you are running a stateless application (such as a web server), you can use a Deployment to deploy Pods and ReplicaSets. A ReplicaSet defines how many instances of a Pod that you want running concurrently. Although you can run a ReplicaSet directly, it is common to run replicas directly within a Deployment, to define how many replicas of a Pod should be running at a time. • Stateful applications — A stateful application is one where the identity of the Pod and the order in which Pods are launched are important. These applications need persistent storage that is stable and need to be deployed and scaled in a consistent manner. To deploy a stateful application in Kubernetes, you can use StatefulSets. An example of an application that is typically run as a StatefulSet is a database. Within a StatefulSet, you could define replicas, the Pod and its containers, storage volumes to mount, and locations in the container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. For example, your data center might require"} +{"global_id": 572, "doc_id": "eks", "chunk_id": "31", "question_id": 1, "question": "What is a DaemonSet used for in Kubernetes?", "answer_span": "For Kubernetes, you can use a DaemonSet to ensure that the selected application runs on every node in your cluster.", "chunk": "container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. For example, your data center might require that every computer run a monitoring application or a particular remote access service. For Kubernetes, you can use a DaemonSet to ensure that the selected application runs on every node in your cluster. 
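For the stateful case, a minimal illustrative StatefulSet; the database image and storage size are hypothetical placeholders:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db              # gives each Pod a stable, unique network identity
  replicas: 2                  # Pods are created in order: db-0, then db-1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: registry.example.com/postgres:16   # hypothetical database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/db                # the location where data are stored
  volumeClaimTemplates:        # stable persistent storage, one claim per Pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi

A DaemonSet manifest looks much like the Pod template portion above, except that Kubernetes schedules one copy on every node rather than a fixed replica count.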
• Applications run to completion — There are some applications you want to run to complete a particular task. This could include one that runs monthly status reports or cleans out old data. A Job object can be used to set up an application to start up and run, then exit when the task is done. A CronJob object lets you set up an application to run at a specific hour, minute, day of the month, month, or day of the week, using a structure defined by the Linux crontab format. Making applications accessible from the network With applications often deployed as a set of microservices that moved around to different places, Kubernetes needed a way for those microservices to be able to find each other. Also, for others to access an application outside of the Kubernetes cluster, Kubernetes needed a way to expose that application on outside addresses and ports. These networking-related features are done with Service and Ingress objects, respectively: • Services — Because a Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve Workloads 23 Amazon EKS User Guide this problem, Kubernetes lets you represent an application as a Service. With a Service,"} +{"global_id": 573, "doc_id": "eks", "chunk_id": "31", "question_id": 2, "question": "What is the purpose of a Job object in Kubernetes?", "answer_span": "A Job object can be used to set up an application to start up and run, then exit when the task is done.", "chunk": "container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. For example, your data center might require that every computer run a monitoring application or a particular remote access service. For Kubernetes, you can use a DaemonSet to ensure that the selected application runs on every node in your cluster. • Applications run to completion — There are some applications you want to run to complete a particular task. This could include one that runs monthly status reports or cleans out old data. A Job object can be used to set up an application to start up and run, then exit when the task is done. A CronJob object lets you set up an application to run at a specific hour, minute, day of the month, month, or day of the week, using a structure defined by the Linux crontab format. Making applications accessible from the network With applications often deployed as a set of microservices that moved around to different places, Kubernetes needed a way for those microservices to be able to find each other. Also, for others to access an application outside of the Kubernetes cluster, Kubernetes needed a way to expose that application on outside addresses and ports. These networking-related features are done with Service and Ingress objects, respectively: • Services — Because a Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve Workloads 23 Amazon EKS User Guide this problem, Kubernetes lets you represent an application as a Service. 
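A run-to-completion sketch in the same spirit: an illustrative CronJob whose schedule uses the Linux crontab format mentioned above (names and timing are hypothetical):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: monthly-report
spec:
  schedule: "0 6 1 * *"        # crontab format: 06:00 on the first day of every month
  jobTemplate:                 # the Job that starts, runs, and exits when the task is done
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: registry.example.com/report:1.0   # hypothetical image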
With a Service,"} +{"global_id": 574, "doc_id": "eks", "chunk_id": "31", "question_id": 3, "question": "How does Kubernetes allow applications to be accessible from the network?", "answer_span": "Kubernetes needed a way to expose that application on outside addresses and ports.", "chunk": "container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. For example, your data center might require that every computer run a monitoring application or a particular remote access service. For Kubernetes, you can use a DaemonSet to ensure that the selected application runs on every node in your cluster. • Applications run to completion — There are some applications you want to run to complete a particular task. This could include one that runs monthly status reports or cleans out old data. A Job object can be used to set up an application to start up and run, then exit when the task is done. A CronJob object lets you set up an application to run at a specific hour, minute, day of the month, month, or day of the week, using a structure defined by the Linux crontab format. Making applications accessible from the network With applications often deployed as a set of microservices that moved around to different places, Kubernetes needed a way for those microservices to be able to find each other. Also, for others to access an application outside of the Kubernetes cluster, Kubernetes needed a way to expose that application on outside addresses and ports. These networking-related features are done with Service and Ingress objects, respectively: • Services — Because a Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve Workloads 23 Amazon EKS User Guide this problem, Kubernetes lets you represent an application as a Service. With a Service,"} +{"global_id": 575, "doc_id": "eks", "chunk_id": "31", "question_id": 4, "question": "What are Services used for in Kubernetes?", "answer_span": "Kubernetes lets you represent an application as a Service.", "chunk": "container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. For example, your data center might require that every computer run a monitoring application or a particular remote access service. For Kubernetes, you can use a DaemonSet to ensure that the selected application runs on every node in your cluster. • Applications run to completion — There are some applications you want to run to complete a particular task. This could include one that runs monthly status reports or cleans out old data. A Job object can be used to set up an application to start up and run, then exit when the task is done. A CronJob object lets you set up an application to run at a specific hour, minute, day of the month, month, or day of the week, using a structure defined by the Linux crontab format. Making applications accessible from the network With applications often deployed as a set of microservices that moved around to different places, Kubernetes needed a way for those microservices to be able to find each other. 
Also, for others to access an application outside of the Kubernetes cluster, Kubernetes needed a way to expose that application on outside addresses and ports. These networking-related features are done with Service and Ingress objects, respectively: • Services — Because a Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve Workloads 23 Amazon EKS User Guide this problem, Kubernetes lets you represent an application as a Service. With a Service,"} +{"global_id": 576, "doc_id": "eks", "chunk_id": "32", "question_id": 1, "question": "What does Kubernetes allow you to represent an application as?", "answer_span": "Kubernetes lets you represent an application as a Service.", "chunk": "Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve Workloads 23 Amazon EKS User Guide this problem, Kubernetes lets you represent an application as a Service. With a Service, you can identify a Pod or set of Pods with a particular name, then indicate what port exposes that application’s service from the Pod and what ports another application could use to contact that service. Another Pod within a cluster can simply request a Service by name and Kubernetes will direct that request to the proper port for an instance of the Pod running that service. • Ingress — Ingress is what can make applications represented by Kubernetes Services available to clients that are outside of the cluster. Basic features of Ingress include a load balancer (managed by Ingress), the Ingress controller, and rules for routing requests from the controller to the Service. There are several Ingress Controllers that you can choose from with Kubernetes. Next steps Understanding basic Kubernetes concepts and how they relate to Amazon EKS will help you navigate both the Amazon EKS documentation and Kubernetes documentation to find the information you need to manage Amazon EKS clusters and deploy workloads to those clusters. To begin using Amazon EKS, choose from the following: • the section called “Create cluster (eksctl)” • the section called “Create a cluster” • the section called “Sample deployment (Linux)” • Cluster management Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. In the cloud, Amazon EKS automates Kubernetes cluster infrastructure"} +{"global_id": 577, "doc_id": "eks", "chunk_id": "32", "question_id": 2, "question": "How can another Pod within a cluster request a Service?", "answer_span": "Another Pod within a cluster can simply request a Service by name.", "chunk": "Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve Workloads 23 Amazon EKS User Guide this problem, Kubernetes lets you represent an application as a Service. With a Service, you can identify a Pod or set of Pods with a particular name, then indicate what port exposes that application’s service from the Pod and what ports another application could use to contact that service. 
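A minimal Service sketch matching that description: it gives a set of Pods one stable name and maps the port clients use to the port the application exposes from the Pod. Names are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: web                # other Pods can reach the application by this name
spec:
  selector:
    app: web               # selects the Pods that back the Service
  ports:
    - port: 80             # the port another application uses to contact the service
      targetPort: 8080     # the port that exposes the application's service from the Pod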
+{"global_id": 577, "doc_id": "eks", "chunk_id": "32", "question_id": 2, "question": "How can another Pod within a cluster request a Service?", "answer_span": "Another Pod within a cluster can simply request a Service by name.", "chunk": "Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve this problem, Kubernetes lets you represent an application as a Service. With a Service, you can identify a Pod or set of Pods with a particular name, then indicate what port exposes that application’s service from the Pod and what ports another application could use to contact that service. Another Pod within a cluster can simply request a Service by name and Kubernetes will direct that request to the proper port for an instance of the Pod running that service. • Ingress — Ingress is what can make applications represented by Kubernetes Services available to clients that are outside of the cluster. Basic features of Ingress include a load balancer (managed by Ingress), the Ingress controller, and rules for routing requests from the controller to the Service. There are several Ingress Controllers that you can choose from with Kubernetes. Next steps Understanding basic Kubernetes concepts and how they relate to Amazon EKS will help you navigate both the Amazon EKS documentation and Kubernetes documentation to find the information you need to manage Amazon EKS clusters and deploy workloads to those clusters. To begin using Amazon EKS, choose from the following: • the section called “Create cluster (eksctl)” • the section called “Create a cluster” • the section called “Sample deployment (Linux)” • Cluster management Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. In the cloud, Amazon EKS automates Kubernetes cluster infrastructure"}
+{"global_id": 578, "doc_id": "eks", "chunk_id": "32", "question_id": 3, "question": "What is the purpose of Ingress in Kubernetes?", "answer_span": "Ingress is what can make applications represented by Kubernetes Services available to clients that are outside of the cluster.", "chunk": "Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve this problem, Kubernetes lets you represent an application as a Service. With a Service, you can identify a Pod or set of Pods with a particular name, then indicate what port exposes that application’s service from the Pod and what ports another application could use to contact that service. Another Pod within a cluster can simply request a Service by name and Kubernetes will direct that request to the proper port for an instance of the Pod running that service. • Ingress — Ingress is what can make applications represented by Kubernetes Services available to clients that are outside of the cluster. Basic features of Ingress include a load balancer (managed by Ingress), the Ingress controller, and rules for routing requests from the controller to the Service. There are several Ingress Controllers that you can choose from with Kubernetes. Next steps Understanding basic Kubernetes concepts and how they relate to Amazon EKS will help you navigate both the Amazon EKS documentation and Kubernetes documentation to find the information you need to manage Amazon EKS clusters and deploy workloads to those clusters. To begin using Amazon EKS, choose from the following: • the section called “Create cluster (eksctl)” • the section called “Create a cluster” • the section called “Sample deployment (Linux)” • Cluster management Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. In the cloud, Amazon EKS automates Kubernetes cluster infrastructure"}
+{"global_id": 579, "doc_id": "eks", "chunk_id": "32", "question_id": 4, "question": "What is Amazon EKS?", "answer_span": "Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments.", "chunk": "Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve this problem, Kubernetes lets you represent an application as a Service. With a Service, you can identify a Pod or set of Pods with a particular name, then indicate what port exposes that application’s service from the Pod and what ports another application could use to contact that service. Another Pod within a cluster can simply request a Service by name and Kubernetes will direct that request to the proper port for an instance of the Pod running that service. • Ingress — Ingress is what can make applications represented by Kubernetes Services available to clients that are outside of the cluster. Basic features of Ingress include a load balancer (managed by Ingress), the Ingress controller, and rules for routing requests from the controller to the Service. There are several Ingress Controllers that you can choose from with Kubernetes. Next steps Understanding basic Kubernetes concepts and how they relate to Amazon EKS will help you navigate both the Amazon EKS documentation and Kubernetes documentation to find the information you need to manage Amazon EKS clusters and deploy workloads to those clusters. To begin using Amazon EKS, choose from the following: • the section called “Create cluster (eksctl)” • the section called “Create a cluster” • the section called “Sample deployment (Linux)” • Cluster management Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. In the cloud, Amazon EKS automates Kubernetes cluster infrastructure"}
+{"global_id": 580, "doc_id": "eks", "chunk_id": "33", "question_id": 1, "question": "What is Amazon EKS?", "answer_span": "Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments.", "chunk": "Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. In the cloud, Amazon EKS automates Kubernetes cluster infrastructure management for the Kubernetes control plane and nodes. This is essential for scheduling containers, managing application availability, dynamically scaling resources, optimizing compute, storing cluster data, and performing other critical functions. With Amazon EKS, you get the robust performance, scalability, reliability, and availability of AWS infrastructure, along with native integrations with AWS networking, security, storage, and observability services. To simplify running Kubernetes in your on-premises environments, you can use the same Amazon EKS clusters, features, and tools with self-managed nodes (the section called “Nodes”) or Amazon EKS Hybrid Nodes on your own infrastructure, or you can use Amazon EKS Anywhere for self-contained air-gapped environments. Amazon EKS in the cloud You can use Amazon EKS with compute in AWS Regions, AWS Local Zones, and AWS Wavelength Zones. With Amazon EKS in the cloud, the security, scalability, and availability of the Kubernetes control plane are fully managed by AWS in the AWS Region. When running applications with compute in AWS Regions, you get the full breadth of AWS and Amazon EKS features, including Amazon EKS Auto Mode, which fully automates Kubernetes cluster infrastructure management for compute, storage, and networking on AWS with a single click. When running applications with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. For more information, see AWS Local Zones features"}
+{"global_id": 581, "doc_id": "eks", "chunk_id": "33", "question_id": 2, "question": "What does Amazon EKS automate in the cloud?", "answer_span": "In the cloud, Amazon EKS automates Kubernetes cluster infrastructure management for the Kubernetes control plane and nodes.", "chunk": "Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. In the cloud, Amazon EKS automates Kubernetes cluster infrastructure management for the Kubernetes control plane and nodes. This is essential for scheduling containers, managing application availability, dynamically scaling resources, optimizing compute, storing cluster data, and performing other critical functions. With Amazon EKS, you get the robust performance, scalability, reliability, and availability of AWS infrastructure, along with native integrations with AWS networking, security, storage, and observability services. To simplify running Kubernetes in your on-premises environments, you can use the same Amazon EKS clusters, features, and tools with self-managed nodes (the section called “Nodes”) or Amazon EKS Hybrid Nodes on your own infrastructure, or you can use Amazon EKS Anywhere for self-contained air-gapped environments. Amazon EKS in the cloud You can use Amazon EKS with compute in AWS Regions, AWS Local Zones, and AWS Wavelength Zones. With Amazon EKS in the cloud, the security, scalability, and availability of the Kubernetes control plane are fully managed by AWS in the AWS Region. When running applications with compute in AWS Regions, you get the full breadth of AWS and Amazon EKS features, including Amazon EKS Auto Mode, which fully automates Kubernetes cluster infrastructure management for compute, storage, and networking on AWS with a single click. When running applications with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. For more information, see AWS Local Zones features"}
+{"global_id": 582, "doc_id": "eks", "chunk_id": "33", "question_id": 3, "question": "What options are available for running Amazon EKS in on-premises environments?", "answer_span": "To simplify running Kubernetes in your on-premises environments, you can use the same Amazon EKS clusters, features, and tools with self-managed nodes (the section called “Nodes”) or Amazon EKS Hybrid Nodes on your own infrastructure, or you can use Amazon EKS Anywhere for self-contained air-gapped environments.", "chunk": "Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. In the cloud, Amazon EKS automates Kubernetes cluster infrastructure management for the Kubernetes control plane and nodes. This is essential for scheduling containers, managing application availability, dynamically scaling resources, optimizing compute, storing cluster data, and performing other critical functions. With Amazon EKS, you get the robust performance, scalability, reliability, and availability of AWS infrastructure, along with native integrations with AWS networking, security, storage, and observability services. To simplify running Kubernetes in your on-premises environments, you can use the same Amazon EKS clusters, features, and tools with self-managed nodes (the section called “Nodes”) or Amazon EKS Hybrid Nodes on your own infrastructure, or you can use Amazon EKS Anywhere for self-contained air-gapped environments. Amazon EKS in the cloud You can use Amazon EKS with compute in AWS Regions, AWS Local Zones, and AWS Wavelength Zones. With Amazon EKS in the cloud, the security, scalability, and availability of the Kubernetes control plane are fully managed by AWS in the AWS Region. When running applications with compute in AWS Regions, you get the full breadth of AWS and Amazon EKS features, including Amazon EKS Auto Mode, which fully automates Kubernetes cluster infrastructure management for compute, storage, and networking on AWS with a single click. When running applications with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. For more information, see AWS Local Zones features"}
+{"global_id": 583, "doc_id": "eks", "chunk_id": "33", "question_id": 4, "question": "What is Amazon EKS Auto Mode?", "answer_span": "When running applications with compute in AWS Regions, you get the full breadth of AWS and Amazon EKS features, including Amazon EKS Auto Mode, which fully automates Kubernetes cluster infrastructure management for compute, storage, and networking on AWS with a single click.", "chunk": "Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. In the cloud, Amazon EKS automates Kubernetes cluster infrastructure management for the Kubernetes control plane and nodes. This is essential for scheduling containers, managing application availability, dynamically scaling resources, optimizing compute, storing cluster data, and performing other critical functions. With Amazon EKS, you get the robust performance, scalability, reliability, and availability of AWS infrastructure, along with native integrations with AWS networking, security, storage, and observability services. To simplify running Kubernetes in your on-premises environments, you can use the same Amazon EKS clusters, features, and tools with self-managed nodes (the section called “Nodes”) or Amazon EKS Hybrid Nodes on your own infrastructure, or you can use Amazon EKS Anywhere for self-contained air-gapped environments. Amazon EKS in the cloud You can use Amazon EKS with compute in AWS Regions, AWS Local Zones, and AWS Wavelength Zones. With Amazon EKS in the cloud, the security, scalability, and availability of the Kubernetes control plane are fully managed by AWS in the AWS Region. When running applications with compute in AWS Regions, you get the full breadth of AWS and Amazon EKS features, including Amazon EKS Auto Mode, which fully automates Kubernetes cluster infrastructure management for compute, storage, and networking on AWS with a single click. When running applications with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. For more information, see AWS Local Zones features"}
+{"global_id": 584, "doc_id": "eks", "chunk_id": "34", "question_id": 1, "question": "What can you use to connect Amazon EC2 instances for your cluster compute in AWS Local Zones and AWS Wavelength Zones?", "answer_span": "you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute", "chunk": "with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. For more information, see AWS Local Zones features and AWS Wavelength Zones features. Amazon EKS in AWS Regions: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions; Kubernetes data plane options are Amazon EKS Auto Mode, Amazon EKS Managed Node Groups, Amazon EC2 self-managed nodes, and AWS Fargate; Kubernetes data plane location is AWS Regions. Amazon EKS in Local/Wavelength Zones: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions; Kubernetes data plane options are Amazon EKS Managed Node Groups (Local Zones only) and Amazon EC2 self-managed nodes; Kubernetes data plane location is AWS Local or Wavelength Zones. Amazon EKS in your data center or edge environments If you need to run applications in your own data centers or edge environments, you can use Amazon EKS on AWS Outposts or Amazon EKS Hybrid Nodes. You can use self-managed nodes with Amazon EC2 instances on AWS Outposts for your cluster compute, or you can use Amazon EKS Hybrid Nodes with your own on-premises or edge infrastructure for your cluster compute. AWS Outposts is AWS-managed infrastructure that you run in your data centers or co-location facilities, whereas Amazon EKS Hybrid Nodes runs on your physical or virtual machines that you manage in your on-premises or edge environments. Amazon EKS on AWS Outposts and Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on"}
+{"global_id": 585, "doc_id": "eks", "chunk_id": "34", "question_id": 2, "question": "What infrastructure does AWS Outposts provide?", "answer_span": "AWS Outposts is AWS-managed infrastructure that you run in your data centers or co-location facilities", "chunk": "with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. For more information, see AWS Local Zones features and AWS Wavelength Zones features. Amazon EKS in AWS Regions: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions; Kubernetes data plane options are Amazon EKS Auto Mode, Amazon EKS Managed Node Groups, Amazon EC2 self-managed nodes, and AWS Fargate; Kubernetes data plane location is AWS Regions. Amazon EKS in Local/Wavelength Zones: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions; Kubernetes data plane options are Amazon EKS Managed Node Groups (Local Zones only) and Amazon EC2 self-managed nodes; Kubernetes data plane location is AWS Local or Wavelength Zones. Amazon EKS in your data center or edge environments If you need to run applications in your own data centers or edge environments, you can use Amazon EKS on AWS Outposts or Amazon EKS Hybrid Nodes. You can use self-managed nodes with Amazon EC2 instances on AWS Outposts for your cluster compute, or you can use Amazon EKS Hybrid Nodes with your own on-premises or edge infrastructure for your cluster compute. AWS Outposts is AWS-managed infrastructure that you run in your data centers or co-location facilities, whereas Amazon EKS Hybrid Nodes runs on your physical or virtual machines that you manage in your on-premises or edge environments. Amazon EKS on AWS Outposts and Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on"}
+{"global_id": 586, "doc_id": "eks", "chunk_id": "34", "question_id": 3, "question": "What do Amazon EKS Hybrid Nodes run on?", "answer_span": "Amazon EKS Hybrid Nodes runs on your physical or virtual machines that you manage in your on-premises or edge environments", "chunk": "with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. For more information, see AWS Local Zones features and AWS Wavelength Zones features. Amazon EKS in AWS Regions: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions; Kubernetes data plane options are Amazon EKS Auto Mode, Amazon EKS Managed Node Groups, Amazon EC2 self-managed nodes, and AWS Fargate; Kubernetes data plane location is AWS Regions. Amazon EKS in Local/Wavelength Zones: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions; Kubernetes data plane options are Amazon EKS Managed Node Groups (Local Zones only) and Amazon EC2 self-managed nodes; Kubernetes data plane location is AWS Local or Wavelength Zones. Amazon EKS in your data center or edge environments If you need to run applications in your own data centers or edge environments, you can use Amazon EKS on AWS Outposts or Amazon EKS Hybrid Nodes. You can use self-managed nodes with Amazon EC2 instances on AWS Outposts for your cluster compute, or you can use Amazon EKS Hybrid Nodes with your own on-premises or edge infrastructure for your cluster compute. AWS Outposts is AWS-managed infrastructure that you run in your data centers or co-location facilities, whereas Amazon EKS Hybrid Nodes runs on your physical or virtual machines that you manage in your on-premises or edge environments. Amazon EKS on AWS Outposts and Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on"}
+{"global_id": 587, "doc_id": "eks", "chunk_id": "34", "question_id": 4, "question": "What is required for Amazon EKS on AWS Outposts and Amazon EKS Hybrid Nodes?", "answer_span": "Amazon EKS on AWS Outposts and Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region", "chunk": "with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. For more information, see AWS Local Zones features and AWS Wavelength Zones features. Amazon EKS in AWS Regions: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions; Kubernetes data plane options are Amazon EKS Auto Mode, Amazon EKS Managed Node Groups, Amazon EC2 self-managed nodes, and AWS Fargate; Kubernetes data plane location is AWS Regions. Amazon EKS in Local/Wavelength Zones: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions; Kubernetes data plane options are Amazon EKS Managed Node Groups (Local Zones only) and Amazon EC2 self-managed nodes; Kubernetes data plane location is AWS Local or Wavelength Zones. Amazon EKS in your data center or edge environments If you need to run applications in your own data centers or edge environments, you can use Amazon EKS on AWS Outposts or Amazon EKS Hybrid Nodes. You can use self-managed nodes with Amazon EC2 instances on AWS Outposts for your cluster compute, or you can use Amazon EKS Hybrid Nodes with your own on-premises or edge infrastructure for your cluster compute. AWS Outposts is AWS-managed infrastructure that you run in your data centers or co-location facilities, whereas Amazon EKS Hybrid Nodes runs on your physical or virtual machines that you manage in your on-premises or edge environments. Amazon EKS on AWS Outposts and Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on"}
+{"global_id": 588, "doc_id": "eks", "chunk_id": "35", "question_id": 1, "question": "What is required for Amazon EKS Hybrid Nodes to function?", "answer_span": "Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region.", "chunk": "Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on AWS Outposts with Amazon EKS local clusters on AWS Outposts. Amazon EKS Hybrid Nodes: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions; Kubernetes data plane is customer-managed physical or virtual machines; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS on AWS Outposts: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions or AWS Outposts; Kubernetes data plane is Amazon EC2 self-managed nodes; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS Anywhere for air-gapped environments Amazon EKS Anywhere simplifies Kubernetes cluster management through the automation of undifferentiated heavy lifting such as infrastructure setup and Kubernetes cluster lifecycle operations in on-premises and edge environments. Unlike Amazon EKS, Amazon EKS Anywhere is a customer-managed product and customers are responsible for cluster lifecycle operations and maintenance of Amazon EKS Anywhere clusters. Amazon EKS Anywhere is built on the Kubernetes sub-project Cluster API (CAPI) and supports a range of infrastructure including VMware vSphere, bare metal, Nutanix, Apache CloudStack, and AWS Snow. Amazon EKS Anywhere can be run in air-gapped environments and offers optional integrations with regional AWS services for observability and identity management. To receive support for Amazon EKS Anywhere and access to AWS-vended Kubernetes add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. Amazon EKS Anywhere: Kubernetes control plane management is customer-managed; Kubernetes control plane location is the customer data center or edge environment; Kubernetes data plane is customer-managed physical or virtual machines; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS tooling You can"}
+{"global_id": 589, "doc_id": "eks", "chunk_id": "35", "question_id": 2, "question": "What does Amazon EKS Anywhere simplify?", "answer_span": "Amazon EKS Anywhere simplifies Kubernetes cluster management through the automation of undifferentiated heavy lifting such as infrastructure setup and Kubernetes cluster lifecycle operations in on-premises and edge environments.", "chunk": "Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on AWS Outposts with Amazon EKS local clusters on AWS Outposts. Amazon EKS Hybrid Nodes: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions; Kubernetes data plane is customer-managed physical or virtual machines; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS on AWS Outposts: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions or AWS Outposts; Kubernetes data plane is Amazon EC2 self-managed nodes; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS Anywhere for air-gapped environments Amazon EKS Anywhere simplifies Kubernetes cluster management through the automation of undifferentiated heavy lifting such as infrastructure setup and Kubernetes cluster lifecycle operations in on-premises and edge environments. Unlike Amazon EKS, Amazon EKS Anywhere is a customer-managed product and customers are responsible for cluster lifecycle operations and maintenance of Amazon EKS Anywhere clusters. Amazon EKS Anywhere is built on the Kubernetes sub-project Cluster API (CAPI) and supports a range of infrastructure including VMware vSphere, bare metal, Nutanix, Apache CloudStack, and AWS Snow. Amazon EKS Anywhere can be run in air-gapped environments and offers optional integrations with regional AWS services for observability and identity management. To receive support for Amazon EKS Anywhere and access to AWS-vended Kubernetes add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. Amazon EKS Anywhere: Kubernetes control plane management is customer-managed; Kubernetes control plane location is the customer data center or edge environment; Kubernetes data plane is customer-managed physical or virtual machines; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS tooling You can"}
+{"global_id": 590, "doc_id": "eks", "chunk_id": "35", "question_id": 3, "question": "Who is responsible for cluster lifecycle operations in Amazon EKS Anywhere?", "answer_span": "Unlike Amazon EKS, Amazon EKS Anywhere is a customer-managed product and customers are responsible for cluster lifecycle operations and maintenance of Amazon EKS Anywhere clusters.", "chunk": "Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on AWS Outposts with Amazon EKS local clusters on AWS Outposts. Amazon EKS Hybrid Nodes: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions; Kubernetes data plane is customer-managed physical or virtual machines; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS on AWS Outposts: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions or AWS Outposts; Kubernetes data plane is Amazon EC2 self-managed nodes; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS Anywhere for air-gapped environments Amazon EKS Anywhere simplifies Kubernetes cluster management through the automation of undifferentiated heavy lifting such as infrastructure setup and Kubernetes cluster lifecycle operations in on-premises and edge environments. Unlike Amazon EKS, Amazon EKS Anywhere is a customer-managed product and customers are responsible for cluster lifecycle operations and maintenance of Amazon EKS Anywhere clusters. Amazon EKS Anywhere is built on the Kubernetes sub-project Cluster API (CAPI) and supports a range of infrastructure including VMware vSphere, bare metal, Nutanix, Apache CloudStack, and AWS Snow. Amazon EKS Anywhere can be run in air-gapped environments and offers optional integrations with regional AWS services for observability and identity management. To receive support for Amazon EKS Anywhere and access to AWS-vended Kubernetes add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. Amazon EKS Anywhere: Kubernetes control plane management is customer-managed; Kubernetes control plane location is the customer data center or edge environment; Kubernetes data plane is customer-managed physical or virtual machines; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS tooling You can"}
+{"global_id": 591, "doc_id": "eks", "chunk_id": "35", "question_id": 4, "question": "What types of infrastructure does Amazon EKS Anywhere support?", "answer_span": "Amazon EKS Anywhere is built on the Kubernetes sub-project Cluster API (CAPI) and supports a range of infrastructure including VMware vSphere, bare metal, Nutanix, Apache CloudStack, and AWS Snow.", "chunk": "Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on AWS Outposts with Amazon EKS local clusters on AWS Outposts. Amazon EKS Hybrid Nodes: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions; Kubernetes data plane is customer-managed physical or virtual machines; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS on AWS Outposts: Kubernetes control plane management is AWS-managed; Kubernetes control plane location is AWS Regions or AWS Outposts; Kubernetes data plane is Amazon EC2 self-managed nodes; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS Anywhere for air-gapped environments Amazon EKS Anywhere simplifies Kubernetes cluster management through the automation of undifferentiated heavy lifting such as infrastructure setup and Kubernetes cluster lifecycle operations in on-premises and edge environments. Unlike Amazon EKS, Amazon EKS Anywhere is a customer-managed product and customers are responsible for cluster lifecycle operations and maintenance of Amazon EKS Anywhere clusters. Amazon EKS Anywhere is built on the Kubernetes sub-project Cluster API (CAPI) and supports a range of infrastructure including VMware vSphere, bare metal, Nutanix, Apache CloudStack, and AWS Snow. Amazon EKS Anywhere can be run in air-gapped environments and offers optional integrations with regional AWS services for observability and identity management. To receive support for Amazon EKS Anywhere and access to AWS-vended Kubernetes add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. Amazon EKS Anywhere: Kubernetes control plane management is customer-managed; Kubernetes control plane location is the customer data center or edge environment; Kubernetes data plane is customer-managed physical or virtual machines; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS tooling You can"}
+{"global_id": 592, "doc_id": "eks", "chunk_id": "36", "question_id": 1, "question": "What can you purchase for Amazon EKS Anywhere?", "answer_span": "you can purchase Amazon EKS Anywhere Enterprise Subscriptions.", "chunk": "add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. Amazon EKS Anywhere: Kubernetes control plane management is customer-managed; Kubernetes control plane location is the customer data center or edge environment; Kubernetes data plane is customer-managed physical or virtual machines; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS tooling You can use the Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and view it in the Amazon EKS console. After a cluster is connected, you can see the status, configuration, and workloads for that cluster in the Amazon EKS console. You can use this feature to view connected clusters in the Amazon EKS console, but the Amazon EKS Connector does not enable management or mutating operations for your connected clusters through the Amazon EKS console. Amazon EKS Distro is the AWS distribution of the underlying Kubernetes components that power all Amazon EKS offerings. It includes the core components required for a functioning Kubernetes cluster such as Kubernetes control plane components (etcd, kube-apiserver, kube-scheduler, kube-controller-manager) and networking components (CoreDNS, kube-proxy, CNI plugins). Amazon EKS Distro can be used to self-manage Kubernetes clusters with your choice of tooling. Amazon EKS Distro deployments are not covered by AWS Support Plans. Set up to use Amazon EKS To prepare for the command-line management of your Amazon EKS clusters, you need to install several tools. Use the following to set up credentials, create and modify clusters, and work with clusters once they are running: • Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. In particular, you need the AWS CLI to configure credentials, but you also need it to work with other AWS services. • Set up kubectl and eksctl –"}
+{"global_id": 593, "doc_id": "eks", "chunk_id": "36", "question_id": 2, "question": "What does the Amazon EKS Connector allow you to do?", "answer_span": "You can use the Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and view it in the Amazon EKS console.", "chunk": "add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. Amazon EKS Anywhere: Kubernetes control plane management is customer-managed; Kubernetes control plane location is the customer data center or edge environment; Kubernetes data plane is customer-managed physical or virtual machines; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS tooling You can use the Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and view it in the Amazon EKS console. After a cluster is connected, you can see the status, configuration, and workloads for that cluster in the Amazon EKS console. You can use this feature to view connected clusters in the Amazon EKS console, but the Amazon EKS Connector does not enable management or mutating operations for your connected clusters through the Amazon EKS console. Amazon EKS Distro is the AWS distribution of the underlying Kubernetes components that power all Amazon EKS offerings. It includes the core components required for a functioning Kubernetes cluster such as Kubernetes control plane components (etcd, kube-apiserver, kube-scheduler, kube-controller-manager) and networking components (CoreDNS, kube-proxy, CNI plugins). Amazon EKS Distro can be used to self-manage Kubernetes clusters with your choice of tooling. Amazon EKS Distro deployments are not covered by AWS Support Plans. Set up to use Amazon EKS To prepare for the command-line management of your Amazon EKS clusters, you need to install several tools. Use the following to set up credentials, create and modify clusters, and work with clusters once they are running: • Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. In particular, you need the AWS CLI to configure credentials, but you also need it to work with other AWS services. • Set up kubectl and eksctl –"}
+{"global_id": 594, "doc_id": "eks", "chunk_id": "36", "question_id": 3, "question": "What is included in Amazon EKS Distro?", "answer_span": "It includes the core components required for a functioning Kubernetes cluster such as Kubernetes control plane components (etcd, kube-apiserver, kube-scheduler, kube-controller-manager) and networking components (CoreDNS, kube-proxy, CNI plugins).", "chunk": "add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. Amazon EKS Anywhere: Kubernetes control plane management is customer-managed; Kubernetes control plane location is the customer data center or edge environment; Kubernetes data plane is customer-managed physical or virtual machines; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS tooling You can use the Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and view it in the Amazon EKS console. After a cluster is connected, you can see the status, configuration, and workloads for that cluster in the Amazon EKS console. You can use this feature to view connected clusters in the Amazon EKS console, but the Amazon EKS Connector does not enable management or mutating operations for your connected clusters through the Amazon EKS console. Amazon EKS Distro is the AWS distribution of the underlying Kubernetes components that power all Amazon EKS offerings. It includes the core components required for a functioning Kubernetes cluster such as Kubernetes control plane components (etcd, kube-apiserver, kube-scheduler, kube-controller-manager) and networking components (CoreDNS, kube-proxy, CNI plugins). Amazon EKS Distro can be used to self-manage Kubernetes clusters with your choice of tooling. Amazon EKS Distro deployments are not covered by AWS Support Plans. Set up to use Amazon EKS To prepare for the command-line management of your Amazon EKS clusters, you need to install several tools. Use the following to set up credentials, create and modify clusters, and work with clusters once they are running: • Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. In particular, you need the AWS CLI to configure credentials, but you also need it to work with other AWS services. • Set up kubectl and eksctl –"}
+{"global_id": 595, "doc_id": "eks", "chunk_id": "36", "question_id": 4, "question": "What tools do you need to set up for managing Amazon EKS clusters?", "answer_span": "Use the following to set up credentials, create and modify clusters, and work with clusters once they are running: • Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters.", "chunk": "add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. Amazon EKS Anywhere: Kubernetes control plane management is customer-managed; Kubernetes control plane location is the customer data center or edge environment; Kubernetes data plane is customer-managed physical or virtual machines; Kubernetes data plane location is the customer data center or edge environment. Amazon EKS tooling You can use the Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and view it in the Amazon EKS console. After a cluster is connected, you can see the status, configuration, and workloads for that cluster in the Amazon EKS console. You can use this feature to view connected clusters in the Amazon EKS console, but the Amazon EKS Connector does not enable management or mutating operations for your connected clusters through the Amazon EKS console. Amazon EKS Distro is the AWS distribution of the underlying Kubernetes components that power all Amazon EKS offerings. It includes the core components required for a functioning Kubernetes cluster such as Kubernetes control plane components (etcd, kube-apiserver, kube-scheduler, kube-controller-manager) and networking components (CoreDNS, kube-proxy, CNI plugins). Amazon EKS Distro can be used to self-manage Kubernetes clusters with your choice of tooling. Amazon EKS Distro deployments are not covered by AWS Support Plans. Set up to use Amazon EKS To prepare for the command-line management of your Amazon EKS clusters, you need to install several tools. Use the following to set up credentials, create and modify clusters, and work with clusters once they are running: • Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. In particular, you need the AWS CLI to configure credentials, but you also need it to work with other AWS services. • Set up kubectl and eksctl –"}
+{"global_id": 596, "doc_id": "eks", "chunk_id": "37", "question_id": 1, "question": "What is the purpose of the AWS CLI?", "answer_span": "The AWS CLI is a command line tool for working with AWS services, including Amazon EKS.", "chunk": "• Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. In particular, you need the AWS CLI to configure credentials, but you also need it to work with other AWS services. • Set up kubectl and eksctl – The eksctl CLI interacts with AWS to create, modify, and delete Amazon EKS clusters. Once a cluster is up, use the open source kubectl command to manage Kubernetes objects within your Amazon EKS clusters. • Set up a development environment (optional) – Consider adding the following tools: • Local deployment tool – If you’re new to Kubernetes, consider installing a local deployment tool like minikube or kind. These tools allow you to run a local Kubernetes cluster on your machine for testing applications. • Package manager – helm is a popular package manager for Kubernetes that simplifies the installation and management of complex packages. With Helm, it’s easier to install and manage packages like the AWS Load Balancer Controller on your Amazon EKS cluster. Next steps • Set up AWS CLI • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up AWS CLI The AWS CLI is a command line tool for working with AWS services, including Amazon EKS. It is also used to authenticate IAM users or roles for access to the Amazon EKS cluster and other AWS resources from your local machine. To provision resources in AWS from the command line, you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the"}
+{"global_id": 597, "doc_id": "eks", "chunk_id": "37", "question_id": 2, "question": "What tool is recommended for managing Kubernetes objects within Amazon EKS clusters?", "answer_span": "Once a cluster is up, use the open source kubectl command to manage Kubernetes objects within your Amazon EKS clusters.", "chunk": "• Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. In particular, you need the AWS CLI to configure credentials, but you also need it to work with other AWS services. • Set up kubectl and eksctl – The eksctl CLI interacts with AWS to create, modify, and delete Amazon EKS clusters. Once a cluster is up, use the open source kubectl command to manage Kubernetes objects within your Amazon EKS clusters. • Set up a development environment (optional) – Consider adding the following tools: • Local deployment tool – If you’re new to Kubernetes, consider installing a local deployment tool like minikube or kind. These tools allow you to run a local Kubernetes cluster on your machine for testing applications. • Package manager – helm is a popular package manager for Kubernetes that simplifies the installation and management of complex packages. With Helm, it’s easier to install and manage packages like the AWS Load Balancer Controller on your Amazon EKS cluster. Next steps • Set up AWS CLI • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up AWS CLI The AWS CLI is a command line tool for working with AWS services, including Amazon EKS. It is also used to authenticate IAM users or roles for access to the Amazon EKS cluster and other AWS resources from your local machine. To provision resources in AWS from the command line, you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the"}
+{"global_id": 598, "doc_id": "eks", "chunk_id": "37", "question_id": 3, "question": "What is eksctl used for?", "answer_span": "The eksctl CLI interacts with AWS to create, modify, and delete Amazon EKS clusters.", "chunk": "• Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. In particular, you need the AWS CLI to configure credentials, but you also need it to work with other AWS services. • Set up kubectl and eksctl – The eksctl CLI interacts with AWS to create, modify, and delete Amazon EKS clusters. Once a cluster is up, use the open source kubectl command to manage Kubernetes objects within your Amazon EKS clusters. • Set up a development environment (optional) – Consider adding the following tools: • Local deployment tool – If you’re new to Kubernetes, consider installing a local deployment tool like minikube or kind. These tools allow you to run a local Kubernetes cluster on your machine for testing applications. • Package manager – helm is a popular package manager for Kubernetes that simplifies the installation and management of complex packages. With Helm, it’s easier to install and manage packages like the AWS Load Balancer Controller on your Amazon EKS cluster. Next steps • Set up AWS CLI • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up AWS CLI The AWS CLI is a command line tool for working with AWS services, including Amazon EKS. It is also used to authenticate IAM users or roles for access to the Amazon EKS cluster and other AWS resources from your local machine. To provision resources in AWS from the command line, you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the"}
+{"global_id": 599, "doc_id": "eks", "chunk_id": "37", "question_id": 4, "question": "What do you need to configure in the AWS CLI to provision resources?", "answer_span": "Then you need to configure these credentials in the AWS CLI.", "chunk": "• Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. In particular, you need the AWS CLI to configure credentials, but you also need it to work with other AWS services. • Set up kubectl and eksctl – The eksctl CLI interacts with AWS to create, modify, and delete Amazon EKS clusters. Once a cluster is up, use the open source kubectl command to manage Kubernetes objects within your Amazon EKS clusters. • Set up a development environment (optional) – Consider adding the following tools: • Local deployment tool – If you’re new to Kubernetes, consider installing a local deployment tool like minikube or kind. These tools allow you to run a local Kubernetes cluster on your machine for testing applications. • Package manager – helm is a popular package manager for Kubernetes that simplifies the installation and management of complex packages. With Helm, it’s easier to install and manage packages like the AWS Load Balancer Controller on your Amazon EKS cluster. Next steps • Set up AWS CLI • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up AWS CLI The AWS CLI is a command line tool for working with AWS services, including Amazon EKS. It is also used to authenticate IAM users or roles for access to the Amazon EKS cluster and other AWS resources from your local machine. To provision resources in AWS from the command line, you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the"}
+{"global_id": 600, "doc_id": "eks", "chunk_id": "38", "question_id": 1, "question": "What do you need to obtain to use the command line with AWS?", "answer_span": "you need to obtain an AWS access key ID and secret key to use in the command line.", "chunk": "you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the AWS Command Line Interface User Guide. To create an access key 1. Sign in to the AWS Management Console. 2. For single-user or multiple-user accounts: • Single-user account — In the top right, choose your AWS user name to open the navigation menu. For example, choose webadmin. • Multiple-user account — Choose IAM from the list of services. From the IAM Dashboard, select Users, and choose the name of the user. 3. Choose Security credentials. 4. Under Access keys, choose Create access key. 5. Choose Command Line Interface (CLI), then choose Next. 6. Choose Create access key. 7. Choose Download .csv file. To configure the AWS CLI After installing the AWS CLI, do the following steps to configure it. For more information, see Configure the AWS CLI in the AWS Command Line Interface User Guide. 1. In a terminal window, enter the following command: aws configure Optionally, you can configure a named profile, such as --profile cluster-admin. If you configure a named profile in the AWS CLI, you must always pass this flag in subsequent commands. 2. Enter your AWS credentials. For example: Access Key ID [None]: AKIAIOSFODNN7EXAMPLE Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY Default region name [None]: region-code Default output format [None]: json To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI"}
In a terminal window, enter the following command: aws configure Optionally, you can configure a named profile, such as --profile cluster-admin. If you configure a named profile in the AWS CLI, you must always pass this flag in subsequent commands. 2. Enter your AWS credentials. For example: Access Key ID [None]: AKIAIOSFODNN7EXAMPLE Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI"} +{"global_id": 602, "doc_id": "eks", "chunk_id": "38", "question_id": 3, "question": "What is the first step to create an access key?", "answer_span": "Sign into the AWS Management Console.", "chunk": "you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the AWS Command Line Interface User Guide. Next steps 28 Amazon EKS User Guide To create an access key 1. Sign into the AWS Management Console. 2. For single-user or multiple-user accounts: • Single-user account –:: In the top right, choose your AWS user name to open the navigation menu. For example, choose webadmin . • Multiple-user account –:: Choose IAM from the list of services. From the IAM Dashboard, select Users, and choose the name of the user. 3. Choose Security credentials. 4. Under Access keys, choose Create access key. 5. Choose Command Line Interface (CLI), then choose Next. 6. Choose Create access key. 7. Choose Download .csv file. To configure the AWS CLI After installing the AWS CLI, do the following steps to configure it. For more information, see Configure the AWS CLI in the AWS Command Line Interface User Guide. 1. In a terminal window, enter the following command: aws configure Optionally, you can configure a named profile, such as --profile cluster-admin. If you configure a named profile in the AWS CLI, you must always pass this flag in subsequent commands. 2. Enter your AWS credentials. For example: Access Key ID [None]: AKIAIOSFODNN7EXAMPLE Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI"} +{"global_id": 603, "doc_id": "eks", "chunk_id": "38", "question_id": 4, "question": "What command do you enter in the terminal to configure the AWS CLI?", "answer_span": "In a terminal window, enter the following command: aws configure", "chunk": "you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the AWS Command Line Interface User Guide. Next steps 28 Amazon EKS User Guide To create an access key 1. Sign into the AWS Management Console. 2. For single-user or multiple-user accounts: • Single-user account –:: In the top right, choose your AWS user name to open the navigation menu. For example, choose webadmin . 
• Multiple-user account – Choose IAM from the list of services. From the IAM Dashboard, select Users, and choose the name of the user. 3. Choose Security credentials. 4. Under Access keys, choose Create access key. 5. Choose Command Line Interface (CLI), then choose Next. 6. Choose Create access key. 7. Choose Download .csv file. To configure the AWS CLI After installing the AWS CLI, do the following steps to configure it. For more information, see Configure the AWS CLI in the AWS Command Line Interface User Guide. 1. In a terminal window, enter the following command: aws configure Optionally, you can configure a named profile, such as --profile cluster-admin. If you configure a named profile in the AWS CLI, you must always pass this flag in subsequent commands. 2. Enter your AWS credentials. For example: Access Key ID [None]: AKIAIOSFODNN7EXAMPLE Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI"} +{"global_id": 603, "doc_id": "eks", "chunk_id": "38", "question_id": 4, "question": "What command do you enter in the terminal to configure the AWS CLI?", "answer_span": "In a terminal window, enter the following command: aws configure", "chunk": "you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the AWS Command Line Interface User Guide. Next steps 28 Amazon EKS User Guide To create an access key 1. Sign into the AWS Management Console. 2. For single-user or multiple-user accounts: • Single-user account – In the top right, choose your AWS user name to open the navigation menu. For example, choose webadmin. • Multiple-user account – Choose IAM from the list of services. From the IAM Dashboard, select Users, and choose the name of the user. 3. Choose Security credentials. 4. Under Access keys, choose Create access key. 5. Choose Command Line Interface (CLI), then choose Next. 6. Choose Create access key. 7. Choose Download .csv file. To configure the AWS CLI After installing the AWS CLI, do the following steps to configure it. For more information, see Configure the AWS CLI in the AWS Command Line Interface User Guide. 1. In a terminal window, enter the following command: aws configure Optionally, you can configure a named profile, such as --profile cluster-admin. If you configure a named profile in the AWS CLI, you must always pass this flag in subsequent commands. 2. Enter your AWS credentials. For example: Access Key ID [None]: AKIAIOSFODNN7EXAMPLE Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI"} +{"global_id": 604, "doc_id": "eks", "chunk_id": "39", "question_id": 1, "question": "What is the default validity period of the security token?", "answer_span": "By default, the token is valid for 15 minutes.", "chunk": "Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI Command Reference. By default, the token is valid for 15 minutes. To change the default session timeout, pass the -duration-seconds flag. For example: aws sts get-session-token --duration-seconds 3600 This command returns the temporary security credentials for an AWS CLI session. You should see the following response output: { \"Credentials\": { \"AccessKeyId\": \"ASIA5FTRU3LOEXAMPLE\", \"SecretAccessKey\": \"JnKgvwfqUD9mNsPoi9IbxAYEXAMPLE\", \"SessionToken\": \"VERYLONGSESSIONTOKENSTRING\", \"Expiration\": \"2023-02-17T03:14:24+00:00\" } } To verify the user identity If needed, run the following command to verify the AWS credentials for your IAM user identity (such as ClusterAdmin) for the terminal session. aws sts get-caller-identity This command returns the Amazon Resource Name (ARN) of the IAM entity that’s configured for the AWS CLI. You should see the following example response output: { \"UserId\": \"AKIAIOSFODNN7EXAMPLE\", \"Account\": \"01234567890\", \"Arn\": \"arn:aws:iam::01234567890:user/ClusterAdmin\" } To get a security token 30 Amazon EKS User Guide Next steps • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up kubectl and eksctl Once the AWS CLI is installed, there are two other tools you should install to create and manage your Kubernetes clusters: • kubectl: The kubectl command line tool is the main tool you will use to manage resources within your Kubernetes cluster. This page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. 
• eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying"} +{"global_id": 605, "doc_id": "eks", "chunk_id": "39", "question_id": 2, "question": "What command is used to get a new security token for the AWS CLI?", "answer_span": "If needed, run the following command to get a new security token for the AWS CLI.", "chunk": "Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI Command Reference. By default, the token is valid for 15 minutes. To change the default session timeout, pass the -duration-seconds flag. For example: aws sts get-session-token --duration-seconds 3600 This command returns the temporary security credentials for an AWS CLI session. You should see the following response output: { \"Credentials\": { \"AccessKeyId\": \"ASIA5FTRU3LOEXAMPLE\", \"SecretAccessKey\": \"JnKgvwfqUD9mNsPoi9IbxAYEXAMPLE\", \"SessionToken\": \"VERYLONGSESSIONTOKENSTRING\", \"Expiration\": \"2023-02-17T03:14:24+00:00\" } } To verify the user identity If needed, run the following command to verify the AWS credentials for your IAM user identity (such as ClusterAdmin) for the terminal session. aws sts get-caller-identity This command returns the Amazon Resource Name (ARN) of the IAM entity that’s configured for the AWS CLI. You should see the following example response output: { \"UserId\": \"AKIAIOSFODNN7EXAMPLE\", \"Account\": \"01234567890\", \"Arn\": \"arn:aws:iam::01234567890:user/ClusterAdmin\" } To get a security token 30 Amazon EKS User Guide Next steps • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up kubectl and eksctl Once the AWS CLI is installed, there are two other tools you should install to create and manage your Kubernetes clusters: • kubectl: The kubectl command line tool is the main tool you will use to manage resources within your Kubernetes cluster. This page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. • eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying"} +{"global_id": 606, "doc_id": "eks", "chunk_id": "39", "question_id": 3, "question": "What does the command 'aws sts get-caller-identity' return?", "answer_span": "This command returns the Amazon Resource Name (ARN) of the IAM entity that’s configured for the AWS CLI.", "chunk": "Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI Command Reference. By default, the token is valid for 15 minutes. To change the default session timeout, pass the -duration-seconds flag. For example: aws sts get-session-token --duration-seconds 3600 This command returns the temporary security credentials for an AWS CLI session. 
You should see the following response output: { \"Credentials\": { \"AccessKeyId\": \"ASIA5FTRU3LOEXAMPLE\", \"SecretAccessKey\": \"JnKgvwfqUD9mNsPoi9IbxAYEXAMPLE\", \"SessionToken\": \"VERYLONGSESSIONTOKENSTRING\", \"Expiration\": \"2023-02-17T03:14:24+00:00\" } } To verify the user identity If needed, run the following command to verify the AWS credentials for your IAM user identity (such as ClusterAdmin) for the terminal session. aws sts get-caller-identity This command returns the Amazon Resource Name (ARN) of the IAM entity that’s configured for the AWS CLI. You should see the following example response output: { \"UserId\": \"AKIAIOSFODNN7EXAMPLE\", \"Account\": \"01234567890\", \"Arn\": \"arn:aws:iam::01234567890:user/ClusterAdmin\" } To get a security token 30 Amazon EKS User Guide Next steps • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up kubectl and eksctl Once the AWS CLI is installed, there are two other tools you should install to create and manage your Kubernetes clusters: • kubectl: The kubectl command line tool is the main tool you will use to manage resources within your Kubernetes cluster. This page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. • eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying"} +{"global_id": 607, "doc_id": "eks", "chunk_id": "39", "question_id": 4, "question": "What are the two tools mentioned for managing Kubernetes clusters?", "answer_span": "Once the AWS CLI is installed, there are two other tools you should install to create and manage your Kubernetes clusters: • kubectl: The kubectl command line tool is the main tool you will use to manage resources within your Kubernetes cluster.", "chunk": "Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI Command Reference. By default, the token is valid for 15 minutes. To change the default session timeout, pass the -duration-seconds flag. For example: aws sts get-session-token --duration-seconds 3600 This command returns the temporary security credentials for an AWS CLI session. You should see the following response output: { \"Credentials\": { \"AccessKeyId\": \"ASIA5FTRU3LOEXAMPLE\", \"SecretAccessKey\": \"JnKgvwfqUD9mNsPoi9IbxAYEXAMPLE\", \"SessionToken\": \"VERYLONGSESSIONTOKENSTRING\", \"Expiration\": \"2023-02-17T03:14:24+00:00\" } } To verify the user identity If needed, run the following command to verify the AWS credentials for your IAM user identity (such as ClusterAdmin) for the terminal session. aws sts get-caller-identity This command returns the Amazon Resource Name (ARN) of the IAM entity that’s configured for the AWS CLI. 
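The same two STS calls can also be scripted. A minimal boto3 sketch, assuming boto3 is installed and that the cluster-admin named profile from the aws configure step exists (the profile name is only an example); it requests a session token and then prints the caller identity:

import json

import boto3

# Use the named profile configured earlier; drop profile_name for the default profile.
session = boto3.Session(profile_name="cluster-admin")
sts = session.client("sts")

# Equivalent of: aws sts get-session-token --duration-seconds 3600
token = sts.get_session_token(DurationSeconds=3600)
print("Temporary credentials expire at:", token["Credentials"]["Expiration"])

# Equivalent of: aws sts get-caller-identity
identity = sts.get_caller_identity()
print(json.dumps({k: identity[k] for k in ("UserId", "Account", "Arn")}, indent=2))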
You should see the following example response output: { \"UserId\": \"AKIAIOSFODNN7EXAMPLE\", \"Account\": \"01234567890\", \"Arn\": \"arn:aws:iam::01234567890:user/ClusterAdmin\" } To get a security token 30 Amazon EKS User Guide Next steps • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up kubectl and eksctl Once the AWS CLI is installed, there are two other tools you should install to create and manage your Kubernetes clusters: • kubectl: The kubectl command line tool is the main tool you will use to manage resources within your Kubernetes cluster. This page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. • eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying"} +{"global_id": 608, "doc_id": "eks", "chunk_id": "40", "question_id": 1, "question": "What does the page describe about kubectl?", "answer_span": "page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster.", "chunk": "page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. • eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying and deleting those clusters. See Install eksctl. Install or update kubectl This topic helps you to download and install, or update, the kubectl binary on your device. The binary is identical to the upstream community versions. The binary is not unique to Amazon EKS or AWS. Use the steps below to get the specific version of kubectl that you need, although many builders simply run brew install kubectl to install it. Note You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane. For example, a 1.32 kubectl client works with Kubernetes 1.31, 1.32, and 1.33 clusters. Step 1: Check if kubectl is installed Determine whether you already have kubectl installed on your device. kubectl version --client Next steps 31"} +{"global_id": 609, "doc_id": "eks", "chunk_id": "40", "question_id": 2, "question": "What is eksctl used for?", "answer_span": "The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying and deleting those clusters.", "chunk": "page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. • eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying and deleting those clusters. See Install eksctl. Install or update kubectl This topic helps you to download and install, or update, the kubectl binary on your device. The binary is identical to the upstream community versions. The binary is not unique to Amazon EKS or AWS. Use the steps below to get the specific version of kubectl that you need, although many builders simply run brew install kubectl to install it. Note You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane. For example, a 1.32 kubectl client works with Kubernetes 1.31, 1.32, and 1.33 clusters. 
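The one-minor-version rule in that note can be checked mechanically once both versions are known. A minimal sketch, assuming kubectl is installed and the current context can reach a cluster; it parses the standard kubectl version -o json output and applies the skew rule:

import json
import subprocess

# Ask kubectl for both client and server versions as JSON.
out = subprocess.run(
    ["kubectl", "version", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout
info = json.loads(out)

def minor(v):
    # Server minor versions can carry a suffix such as "31+"; keep digits only.
    return int("".join(ch for ch in v["minor"] if ch.isdigit()))

client, server = minor(info["clientVersion"]), minor(info["serverVersion"])
if abs(client - server) > 1:
    print(f"Version skew too large: client 1.{client} vs cluster 1.{server}")
else:
    print(f"OK: a 1.{client} client works with a 1.{server} cluster")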
Step 1: Check if kubectl is installed Determine whether you already have kubectl installed on your device. kubectl version --client Next steps 31"} +{"global_id": 610, "doc_id": "eks", "chunk_id": "40", "question_id": 3, "question": "What must you ensure about the kubectl version in relation to your Amazon EKS cluster?", "answer_span": "You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane.", "chunk": "page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. • eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying and deleting those clusters. See Install eksctl. Install or update kubectl This topic helps you to download and install, or update, the kubectl binary on your device. The binary is identical to the upstream community versions. The binary is not unique to Amazon EKS or AWS. Use the steps below to get the specific version of kubectl that you need, although many builders simply run brew install kubectl to install it. Note You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane. For example, a 1.32 kubectl client works with Kubernetes 1.31, 1.32, and 1.33 clusters. Step 1: Check if kubectl is installed Determine whether you already have kubectl installed on your device. kubectl version --client Next steps 31"} +{"global_id": 611, "doc_id": "eks", "chunk_id": "40", "question_id": 4, "question": "What command can you run to check if kubectl is installed?", "answer_span": "kubectl version --client", "chunk": "page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. • eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying and deleting those clusters. See Install eksctl. Install or update kubectl This topic helps you to download and install, or update, the kubectl binary on your device. The binary is identical to the upstream community versions. The binary is not unique to Amazon EKS or AWS. Use the steps below to get the specific version of kubectl that you need, although many builders simply run brew install kubectl to install it. Note You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane. For example, a 1.32 kubectl client works with Kubernetes 1.31, 1.32, and 1.33 clusters. Step 1: Check if kubectl is installed Determine whether you already have kubectl installed on your device. kubectl version --client Next steps 31"} +{"global_id": 612, "doc_id": "lambda", "chunk_id": "0", "question_id": 1, "question": "What does AWS Lambda allow you to do?", "answer_span": "You can use AWS Lambda to run code without provisioning or managing servers.", "chunk": "AWS Lambda Developer Guide What is AWS Lambda? You can use AWS Lambda to run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and manages all the computing resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging. You organize your code into Lambda functions. The Lambda service runs your function only when needed and scales automatically. 
For pricing information, see AWS Lambda Pricing for details. When using Lambda, you are responsible only for your code. Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources to run your code. Because Lambda manages these resources, you cannot log in to compute instances or customize the operating system on provided runtimes. When to use Lambda Lambda is an ideal compute service for application scenarios that need to scale up rapidly, and scale down to zero when not in demand. For example, you can use Lambda for: • Stream processing: Use Lambda and Amazon Kinesis to process real-time streaming data for application activity tracking, transaction order processing, clickstream analysis, data cleansing, log filtering, indexing, social media analysis, Internet of Things (IoT) device data telemetry, and metering. • Web applications: Combine Lambda with other AWS services to build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers. To build web applications with AWS services, developers can use infrastructure as code (IaC) and orchestration tools such as AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React"} +{"global_id": 613, "doc_id": "lambda", "chunk_id": "0", "question_id": 2, "question": "What is one of the responsibilities of the user when using AWS Lambda?", "answer_span": "When using Lambda, you are responsible only for your code.", "chunk": "AWS Lambda Developer Guide What is AWS Lambda? You can use AWS Lambda to run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and manages all the computing resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging. You organize your code into Lambda functions. The Lambda service runs your function only when needed and scales automatically. For pricing information, see AWS Lambda Pricing for details. When using Lambda, you are responsible only for your code. Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources to run your code. Because Lambda manages these resources, you cannot log in to compute instances or customize the operating system on provided runtimes. When to use Lambda Lambda is an ideal compute service for application scenarios that need to scale up rapidly, and scale down to zero when not in demand. For example, you can use Lambda for: • Stream processing: Use Lambda and Amazon Kinesis to process real-time streaming data for application activity tracking, transaction order processing, clickstream analysis, data cleansing, log filtering, indexing, social media analysis, Internet of Things (IoT) device data telemetry, and metering. • Web applications: Combine Lambda with other AWS services to build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers. 
To build web applications with AWS services, developers can use infrastructure as code (IaC) and orchestration tools such as AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React"} +{"global_id": 614, "doc_id": "lambda", "chunk_id": "0", "question_id": 3, "question": "What is an ideal application scenario for using AWS Lambda?", "answer_span": "Lambda is an ideal compute service for application scenarios that need to scale up rapidly, and scale down to zero when not in demand.", "chunk": "AWS Lambda Developer Guide What is AWS Lambda? You can use AWS Lambda to run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and manages all the computing resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging. You organize your code into Lambda functions. The Lambda service runs your function only when needed and scales automatically. For pricing information, see AWS Lambda Pricing for details. When using Lambda, you are responsible only for your code. Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources to run your code. Because Lambda manages these resources, you cannot log in to compute instances or customize the operating system on provided runtimes. When to use Lambda Lambda is an ideal compute service for application scenarios that need to scale up rapidly, and scale down to zero when not in demand. For example, you can use Lambda for: • Stream processing: Use Lambda and Amazon Kinesis to process real-time streaming data for application activity tracking, transaction order processing, clickstream analysis, data cleansing, log filtering, indexing, social media analysis, Internet of Things (IoT) device data telemetry, and metering. • Web applications: Combine Lambda with other AWS services to build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers. To build web applications with AWS services, developers can use infrastructure as code (IaC) and orchestration tools such as AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React"} +{"global_id": 615, "doc_id": "lambda", "chunk_id": "0", "question_id": 4, "question": "Which AWS services can be combined with Lambda to build web applications?", "answer_span": "Combine Lambda with other AWS services to build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers.", "chunk": "AWS Lambda Developer Guide What is AWS Lambda? You can use AWS Lambda to run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and manages all the computing resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging. You organize your code into Lambda functions. 
The Lambda service runs your function only when needed and scales automatically. For pricing information, see AWS Lambda Pricing for details. When using Lambda, you are responsible only for your code. Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources to run your code. Because Lambda manages these resources, you cannot log in to compute instances or customize the operating system on provided runtimes. When to use Lambda Lambda is an ideal compute service for application scenarios that need to scale up rapidly, and scale down to zero when not in demand. For example, you can use Lambda for: • Stream processing: Use Lambda and Amazon Kinesis to process real-time streaming data for application activity tracking, transaction order processing, clickstream analysis, data cleansing, log filtering, indexing, social media analysis, Internet of Things (IoT) device data telemetry, and metering. • Web applications: Combine Lambda with other AWS services to build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers. To build web applications with AWS services, developers can use infrastructure as code (IaC) and orchestration tools such as AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React"} +{"global_id": 616, "doc_id": "lambda", "chunk_id": "1", "question_id": 1, "question": "What services can be used to build mobile backends?", "answer_span": "Build backends using Lambda and Amazon API Gateway to authenticate and process API requests.", "chunk": "AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React Native frontends. • IoT backends: Build serverless backends using Lambda to handle web, mobile, IoT, and thirdparty API requests. • File processing: Use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload. When to use Lambda 1 AWS Lambda Developer Guide • Database Operations and Integration: Use Lambda to process database interactions both reactively and proactively, from handling queue messages for Amazon RDS operations like user registrations and order submissions, to responding to DynamoDB changes for audit logging, data replication, and automated workflows. • Scheduled and Periodic Tasks: Use Lambda with EventBridge rules to execute time-based operations such as database maintenance, data archiving, report generation, and other scheduled business processes using cron-like expressions. How Lambda works Because Lambda is a serverless, event-driven compute service, it uses a different programming paradigm than traditional web applications. The following model illustrates how Lambda fundamentally works: 1. You write and organize your code in Lambda functions, which are the basic building blocks you use to create a Lambda application. 2. 
You control security and access through Lambda permissions, using execution roles to manage what AWS services your functions can interact with and what resource policies can interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and"} +{"global_id": 617, "doc_id": "lambda", "chunk_id": "1", "question_id": 2, "question": "What is the purpose of using Amazon Simple Storage Service (Amazon S3) with Lambda?", "answer_span": "Use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload.", "chunk": "AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React Native frontends. • IoT backends: Build serverless backends using Lambda to handle web, mobile, IoT, and thirdparty API requests. • File processing: Use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload. When to use Lambda 1 AWS Lambda Developer Guide • Database Operations and Integration: Use Lambda to process database interactions both reactively and proactively, from handling queue messages for Amazon RDS operations like user registrations and order submissions, to responding to DynamoDB changes for audit logging, data replication, and automated workflows. • Scheduled and Periodic Tasks: Use Lambda with EventBridge rules to execute time-based operations such as database maintenance, data archiving, report generation, and other scheduled business processes using cron-like expressions. How Lambda works Because Lambda is a serverless, event-driven compute service, it uses a different programming paradigm than traditional web applications. The following model illustrates how Lambda fundamentally works: 1. You write and organize your code in Lambda functions, which are the basic building blocks you use to create a Lambda application. 2. You control security and access through Lambda permissions, using execution roles to manage what AWS services your functions can interact with and what resource policies can interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and"} +{"global_id": 618, "doc_id": "lambda", "chunk_id": "1", "question_id": 3, "question": "How does Lambda handle database operations?", "answer_span": "Use Lambda to process database interactions both reactively and proactively, from handling queue messages for Amazon RDS operations like user registrations and order submissions, to responding to DynamoDB changes for audit logging, data replication, and automated workflows.", "chunk": "AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. 
• Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React Native frontends. • IoT backends: Build serverless backends using Lambda to handle web, mobile, IoT, and thirdparty API requests. • File processing: Use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload. When to use Lambda 1 AWS Lambda Developer Guide • Database Operations and Integration: Use Lambda to process database interactions both reactively and proactively, from handling queue messages for Amazon RDS operations like user registrations and order submissions, to responding to DynamoDB changes for audit logging, data replication, and automated workflows. • Scheduled and Periodic Tasks: Use Lambda with EventBridge rules to execute time-based operations such as database maintenance, data archiving, report generation, and other scheduled business processes using cron-like expressions. How Lambda works Because Lambda is a serverless, event-driven compute service, it uses a different programming paradigm than traditional web applications. The following model illustrates how Lambda fundamentally works: 1. You write and organize your code in Lambda functions, which are the basic building blocks you use to create a Lambda application. 2. You control security and access through Lambda permissions, using execution roles to manage what AWS services your functions can interact with and what resource policies can interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and"} +{"global_id": 619, "doc_id": "lambda", "chunk_id": "1", "question_id": 4, "question": "What is the first step in how Lambda works?", "answer_span": "You write and organize your code in Lambda functions, which are the basic building blocks you use to create a Lambda application.", "chunk": "AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React Native frontends. • IoT backends: Build serverless backends using Lambda to handle web, mobile, IoT, and thirdparty API requests. • File processing: Use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload. When to use Lambda 1 AWS Lambda Developer Guide • Database Operations and Integration: Use Lambda to process database interactions both reactively and proactively, from handling queue messages for Amazon RDS operations like user registrations and order submissions, to responding to DynamoDB changes for audit logging, data replication, and automated workflows. • Scheduled and Periodic Tasks: Use Lambda with EventBridge rules to execute time-based operations such as database maintenance, data archiving, report generation, and other scheduled business processes using cron-like expressions. How Lambda works Because Lambda is a serverless, event-driven compute service, it uses a different programming paradigm than traditional web applications. 
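As a concrete instance of the scheduled-task pattern listed above, the sketch below creates an EventBridge rule that invokes a function on a cron-like schedule. The rule name, schedule, and function ARN are illustrative placeholders, and boto3 is assumed to be installed:

import boto3

events = boto3.client("events")

# Fire every day at 02:00 UTC (EventBridge cron syntax).
rule = events.put_rule(
    Name="nightly-report",                 # hypothetical rule name
    ScheduleExpression="cron(0 2 * * ? *)",
    State="ENABLED",
)
print("Rule created:", rule["RuleArn"])

# Point the rule at the function. In practice Lambda must also grant
# EventBridge permission to invoke it (lambda add-permission), omitted here.
events.put_targets(
    Rule="nightly-report",
    Targets=[{
        "Id": "nightly-report-fn",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:report",  # placeholder
    }],
)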
The following model illustrates how Lambda fundamentally works: 1. You write and organize your code in Lambda functions, which are the basic building blocks you use to create a Lambda application. 2. You control security and access through Lambda permissions, using execution roles to manage what AWS services your functions can interact with and what resource policies can interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and"} +{"global_id": 620, "doc_id": "lambda", "chunk_id": "2", "question_id": 1, "question": "What format is the event data passed to Lambda functions?", "answer_span": "passing event data in JSON format", "chunk": "interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and extensions. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. Key features Configure, control, and deploy secure applications: • Environment variables modify application behavior without new code deployments. • Versions safely test new features while maintaining stable production environments. How Lambda works 2 AWS Lambda Developer Guide • Lambda layers optimize code reuse and maintenance by sharing common components across multiple functions. • Code signing enforce security compliance by ensuring only approved code reaches production systems. Scale and perform reliably: • Concurrency and scaling controls precisely manage application responsiveness and resource utilization during traffic spikes. • Lambda SnapStart significantly reduce cold start times. Lambda SnapStart can provide as low as sub-second startup performance, typically with no changes to your function code. • Response streaming optimize function performance by delivering large payloads incrementally for real-time processing. • Container images package functions with complex dependencies using container workflows. Connect and integrate seamlessly: • VPC networks secure sensitive resources and internal services. • File system integration that shares persistent data and manage stateful operations across function invocations. • Function URLs create public-facing APIs and endpoints without additional services. • Lambda extensions augment functions with monitoring, security, and operational tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks"} +{"global_id": 621, "doc_id": "lambda", "chunk_id": "2", "question_id": 2, "question": "What do Lambda layers optimize?", "answer_span": "Lambda layers optimize code reuse and maintenance by sharing common components across multiple functions.", "chunk": "interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. 
Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and extensions. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. Key features Configure, control, and deploy secure applications: • Environment variables modify application behavior without new code deployments. • Versions safely test new features while maintaining stable production environments. How Lambda works 2 AWS Lambda Developer Guide • Lambda layers optimize code reuse and maintenance by sharing common components across multiple functions. • Code signing enforce security compliance by ensuring only approved code reaches production systems. Scale and perform reliably: • Concurrency and scaling controls precisely manage application responsiveness and resource utilization during traffic spikes. • Lambda SnapStart significantly reduce cold start times. Lambda SnapStart can provide as low as sub-second startup performance, typically with no changes to your function code. • Response streaming optimize function performance by delivering large payloads incrementally for real-time processing. • Container images package functions with complex dependencies using container workflows. Connect and integrate seamlessly: • VPC networks secure sensitive resources and internal services. • File system integration that shares persistent data and manage stateful operations across function invocations. • Function URLs create public-facing APIs and endpoints without additional services. • Lambda extensions augment functions with monitoring, security, and operational tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks"} +{"global_id": 622, "doc_id": "lambda", "chunk_id": "2", "question_id": 3, "question": "What feature allows for safe testing of new features in Lambda?", "answer_span": "Versions safely test new features while maintaining stable production environments.", "chunk": "interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and extensions. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. Key features Configure, control, and deploy secure applications: • Environment variables modify application behavior without new code deployments. • Versions safely test new features while maintaining stable production environments. How Lambda works 2 AWS Lambda Developer Guide • Lambda layers optimize code reuse and maintenance by sharing common components across multiple functions. • Code signing enforce security compliance by ensuring only approved code reaches production systems. Scale and perform reliably: • Concurrency and scaling controls precisely manage application responsiveness and resource utilization during traffic spikes. • Lambda SnapStart significantly reduce cold start times. Lambda SnapStart can provide as low as sub-second startup performance, typically with no changes to your function code. 
• Response streaming optimize function performance by delivering large payloads incrementally for real-time processing. • Container images package functions with complex dependencies using container workflows. Connect and integrate seamlessly: • VPC networks secure sensitive resources and internal services. • File system integration that shares persistent data and manage stateful operations across function invocations. • Function URLs create public-facing APIs and endpoints without additional services. • Lambda extensions augment functions with monitoring, security, and operational tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks"} +{"global_id": 623, "doc_id": "lambda", "chunk_id": "2", "question_id": 4, "question": "What do VPC networks provide in the context of Lambda?", "answer_span": "VPC networks secure sensitive resources and internal services.", "chunk": "interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and extensions. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. Key features Configure, control, and deploy secure applications: • Environment variables modify application behavior without new code deployments. • Versions safely test new features while maintaining stable production environments. How Lambda works 2 AWS Lambda Developer Guide • Lambda layers optimize code reuse and maintenance by sharing common components across multiple functions. • Code signing enforce security compliance by ensuring only approved code reaches production systems. Scale and perform reliably: • Concurrency and scaling controls precisely manage application responsiveness and resource utilization during traffic spikes. • Lambda SnapStart significantly reduce cold start times. Lambda SnapStart can provide as low as sub-second startup performance, typically with no changes to your function code. • Response streaming optimize function performance by delivering large payloads incrementally for real-time processing. • Container images package functions with complex dependencies using container workflows. Connect and integrate seamlessly: • VPC networks secure sensitive resources and internal services. • File system integration that shares persistent data and manage stateful operations across function invocations. • Function URLs create public-facing APIs and endpoints without additional services. • Lambda extensions augment functions with monitoring, security, and operational tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks"} +{"global_id": 624, "doc_id": "lambda", "chunk_id": "3", "question_id": 1, "question": "What are Lambda functions used for?", "answer_span": "A Lambda function is a small block of code that runs in response to events.", "chunk": "tools. 
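Of the features above, function URLs are among the quickest to try: one API call attaches an HTTPS endpoint to an existing function. A minimal boto3 sketch, where the function name is a placeholder; AuthType AWS_IAM keeps the endpoint authenticated rather than public:

import boto3

lambda_client = boto3.client("lambda")

# Attach a dedicated HTTPS endpoint to an existing function.
config = lambda_client.create_function_url_config(
    FunctionName="my-function",   # placeholder
    AuthType="AWS_IAM",
)
print("Invoke the function at:", config["FunctionUrl"])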
Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks you use to build Lambda applications. To write functions, it's essential to understand the core concepts and components that make up the Lambda Related information 3 AWS Lambda Developer Guide programming model. This section will guide you through the fundamental elements you need to know to start building serverless applications with Lambda. • Lambda functions and function handlers - A Lambda function is a small block of code that runs in response to events. functions are the basic building blocks you use to build applications. Function handlers are the entry point for event objects that your Lambda function code processes. • Lambda execution environment and runtimes - Lambda execution environments manage the resources required to run your function. Run times are the language-specific environments your functions run in. • Events and triggers - how other AWS services invoke your functions in response to specific events. • Lambda permissions and roles - how you control who can access your functions and what other AWS services your functions can interact with. Tip If you want to start by understanding serverless development more generally, see Understanding the difference between traditional and serverless development in the AWS Serverless Developer Guide. Lambda functions and function handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon"} +{"global_id": 625, "doc_id": "lambda", "chunk_id": "3", "question_id": 2, "question": "What do function handlers represent in Lambda?", "answer_span": "Function handlers are the entry point for event objects that your Lambda function code processes.", "chunk": "tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks you use to build Lambda applications. To write functions, it's essential to understand the core concepts and components that make up the Lambda Related information 3 AWS Lambda Developer Guide programming model. This section will guide you through the fundamental elements you need to know to start building serverless applications with Lambda. • Lambda functions and function handlers - A Lambda function is a small block of code that runs in response to events. functions are the basic building blocks you use to build applications. Function handlers are the entry point for event objects that your Lambda function code processes. • Lambda execution environment and runtimes - Lambda execution environments manage the resources required to run your function. Run times are the language-specific environments your functions run in. • Events and triggers - how other AWS services invoke your functions in response to specific events. • Lambda permissions and roles - how you control who can access your functions and what other AWS services your functions can interact with. 
Tip If you want to start by understanding serverless development more generally, see Understanding the difference between traditional and serverless development in the AWS Serverless Developer Guide. Lambda functions and function handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon"} +{"global_id": 626, "doc_id": "lambda", "chunk_id": "3", "question_id": 3, "question": "What do Lambda execution environments manage?", "answer_span": "Lambda execution environments manage the resources required to run your function.", "chunk": "tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks you use to build Lambda applications. To write functions, it's essential to understand the core concepts and components that make up the Lambda Related information 3 AWS Lambda Developer Guide programming model. This section will guide you through the fundamental elements you need to know to start building serverless applications with Lambda. • Lambda functions and function handlers - A Lambda function is a small block of code that runs in response to events. functions are the basic building blocks you use to build applications. Function handlers are the entry point for event objects that your Lambda function code processes. • Lambda execution environment and runtimes - Lambda execution environments manage the resources required to run your function. Run times are the language-specific environments your functions run in. • Events and triggers - how other AWS services invoke your functions in response to specific events. • Lambda permissions and roles - how you control who can access your functions and what other AWS services your functions can interact with. Tip If you want to start by understanding serverless development more generally, see Understanding the difference between traditional and serverless development in the AWS Serverless Developer Guide. Lambda functions and function handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon"} +{"global_id": 627, "doc_id": "lambda", "chunk_id": "3", "question_id": 4, "question": "Where can you find information on how Lambda works?", "answer_span": "For information on how Lambda works, see How Lambda works.", "chunk": "tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks you use to build Lambda applications. To write functions, it's essential to understand the core concepts and components that make up the Lambda Related information 3 AWS Lambda Developer Guide programming model. This section will guide you through the fundamental elements you need to know to start building serverless applications with Lambda. 
• Lambda functions and function handlers - A Lambda function is a small block of code that runs in response to events. functions are the basic building blocks you use to build applications. Function handlers are the entry point for event objects that your Lambda function code processes. • Lambda execution environment and runtimes - Lambda execution environments manage the resources required to run your function. Run times are the language-specific environments your functions run in. • Events and triggers - how other AWS services invoke your functions in response to specific events. • Lambda permissions and roles - how you control who can access your functions and what other AWS services your functions can interact with. Tip If you want to start by understanding serverless development more generally, see Understanding the difference between traditional and serverless development in the AWS Serverless Developer Guide. Lambda functions and function handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon"} +{"global_id": 628, "doc_id": "lambda", "chunk_id": "4", "question_id": 1, "question": "What are Lambda functions used for?", "answer_span": "In Lambda, functions are the fundamental building blocks you use to create applications.", "chunk": "handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. You can think of a function as a kind of self-contained program with the following properties. A Lambda function handler is the method in your function code that processes events. When a function runs in response to an event, Lambda runs the function handler. Data about the event that caused the function to run is passed directly to the handler. While the code in a Lambda function can contain more than one method or function, Lambda functions can only have one handler. To create a Lambda function, you bundle your function code and its dependencies in a deployment package. Lambda supports two types of deployment package, .zip file archives and container images. Lambda functions and function handlers 4 AWS Lambda Developer Guide • A function has one specific job or purpose • They run only when needed in response to specific events • They automatically stop running when finished Lambda execution environment and runtimes Lambda functions run inside a secure, isolated execution environment which Lambda manages for you. This execution environment manages the processes and resources that are needed to run your function. When a function is first invoked, Lambda creates a new execution environment for the function to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. 
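This re-use is visible from function code: anything initialized at module scope survives into later warm invocations. A minimal sketch illustrating the effect, where the cached value stands in for expensive setup such as SDK clients or loaded configuration:

import time

# Module-scope code runs once per execution environment (on cold start),
# so a warm (re-used) environment skips this setup on later invocations.
START = time.time()
CACHE = {"initialized_at": START}

def lambda_handler(event, context):
    # On a cold start the environment was just created; on a warm start,
    # CACHE still holds state from an earlier invocation.
    age = time.time() - CACHE["initialized_at"]
    return {"environment_age_seconds": round(age, 3)}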
The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and"} +{"global_id": 629, "doc_id": "lambda", "chunk_id": "4", "question_id": 2, "question": "What is a Lambda function handler?", "answer_span": "A Lambda function handler is the method in your function code that processes events.", "chunk": "handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. You can think of a function as a kind of self-contained program with the following properties. A Lambda function handler is the method in your function code that processes events. When a function runs in response to an event, Lambda runs the function handler. Data about the event that caused the function to run is passed directly to the handler. While the code in a Lambda function can contain more than one method or function, Lambda functions can only have one handler. To create a Lambda function, you bundle your function code and its dependencies in a deployment package. Lambda supports two types of deployment package, .zip file archives and container images. Lambda functions and function handlers 4 AWS Lambda Developer Guide • A function has one specific job or purpose • They run only when needed in response to specific events • They automatically stop running when finished Lambda execution environment and runtimes Lambda functions run inside a secure, isolated execution environment which Lambda manages for you. This execution environment manages the processes and resources that are needed to run your function. When a function is first invoked, Lambda creates a new execution environment for the function to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and"} +{"global_id": 630, "doc_id": "lambda", "chunk_id": "4", "question_id": 3, "question": "How many handlers can a Lambda function have?", "answer_span": "Lambda functions can only have one handler.", "chunk": "handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. You can think of a function as a kind of self-contained program with the following properties. A Lambda function handler is the method in your function code that processes events. When a function runs in response to an event, Lambda runs the function handler. Data about the event that caused the function to run is passed directly to the handler. While the code in a Lambda function can contain more than one method or function, Lambda functions can only have one handler. To create a Lambda function, you bundle your function code and its dependencies in a deployment package. Lambda supports two types of deployment package, .zip file archives and container images. 
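A sketch of the .zip path using boto3, assuming an execution role already exists; the function name and role ARN are placeholders, and the Handler value follows the module.function convention of the Python runtime:

import io
import zipfile

import boto3

SOURCE = 'def lambda_handler(event, context):\n    return {"ok": True}\n'

# Build a .zip deployment package in memory; the file name inside the archive
# must match the module named in Handler below.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("lambda_function.py", SOURCE)

boto3.client("lambda").create_function(
    FunctionName="hello-from-zip",                         # placeholder
    Runtime="python3.12",
    Role="arn:aws:iam::123456789012:role/lambda-exec",     # placeholder role ARN
    Handler="lambda_function.lambda_handler",
    Code={"ZipFile": buf.getvalue()},
)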
Lambda functions and function handlers 4 AWS Lambda Developer Guide • A function has one specific job or purpose • They run only when needed in response to specific events • They automatically stop running when finished Lambda execution environment and runtimes Lambda functions run inside a secure, isolated execution environment which Lambda manages for you. This execution environment manages the processes and resources that are needed to run your function. When a function is first invoked, Lambda creates a new execution environment for the function to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and"} +{"global_id": 631, "doc_id": "lambda", "chunk_id": "4", "question_id": 4, "question": "What happens to the execution environment after a function has finished running?", "answer_span": "if the function is invoked again, Lambda can re-use the existing execution environment.", "chunk": "handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. You can think of a function as a kind of self-contained program with the following properties. A Lambda function handler is the method in your function code that processes events. When a function runs in response to an event, Lambda runs the function handler. Data about the event that caused the function to run is passed directly to the handler. While the code in a Lambda function can contain more than one method or function, Lambda functions can only have one handler. To create a Lambda function, you bundle your function code and its dependencies in a deployment package. Lambda supports two types of deployment package, .zip file archives and container images. Lambda functions and function handlers 4 AWS Lambda Developer Guide • A function has one specific job or purpose • They run only when needed in response to specific events • They automatically stop running when finished Lambda execution environment and runtimes Lambda functions run inside a secure, isolated execution environment which Lambda manages for you. This execution environment manages the processes and resources that are needed to run your function. When a function is first invoked, Lambda creates a new execution environment for the function to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and"} +{"global_id": 632, "doc_id": "lambda", "chunk_id": "5", "question_id": 1, "question": "What happens to the Lambda execution environment after a function has finished running?", "answer_span": "After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment.", "chunk": "to run in. 
After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and your function. Lambda provides a number of managed runtimes for the most popular programming languages, or you can create your own. For managed runtimes, Lambda automatically applies security updates and patches to functions using the runtime. Events and triggers You can also invoke a Lambda function directly by using the Lambda console, AWS CLI, or one of the AWS Software Development Kits (SDKs). It's more usual in a production application for your function to be invoked by another AWS service in response to a particular event. For example, you might want a function to run whenever an item is added to an Amazon DynamoDB table. To make your function respond to events, you set up a trigger. A trigger connects your function to an event source, and your function can have multiple triggers. When an event occurs, Lambda receives event data as a JSON document and converts it into an object that your code can process. You might define the following JSON format for your event and the Lambda runtime converts this JSON to an object before passing it to your function's handler. Example custom Lambda event { \"Location\": \"SEA\", \"WeatherData\":{ \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } For stream and queue services like Amazon Kinesis or Amazon SQS, Lambda uses an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then"} +{"global_id": 633, "doc_id": "lambda", "chunk_id": "5", "question_id": 2, "question": "How does Lambda handle security updates for managed runtimes?", "answer_span": "For managed runtimes, Lambda automatically applies security updates and patches to functions using the runtime.", "chunk": "to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and your function. Lambda provides a number of managed runtimes for the most popular programming languages, or you can create your own. For managed runtimes, Lambda automatically applies security updates and patches to functions using the runtime. Events and triggers You can also invoke a Lambda function directly by using the Lambda console, AWS CLI, or one of the AWS Software Development Kits (SDKs). It's more usual in a production application for your function to be invoked by another AWS service in response to a particular event. For example, you might want a function to run whenever an item is added to an Amazon DynamoDB table. To make your function respond to events, you set up a trigger. A trigger connects your function to an event source, and your function can have multiple triggers. When an event occurs, Lambda receives event data as a JSON document and converts it into an object that your code can process. 
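As an illustration only, a Python handler for the custom weather event shown above might read the converted object like this; the runtime has already deserialized the JSON into a dict before calling the handler, and the computed field name is hypothetical.

def lambda_handler(event, context):
    # The runtime has converted the JSON event into a Python dict.
    location = event["Location"]                    # "SEA"
    temps = event["WeatherData"]["TemperaturesF"]
    spread = temps["MaxTempF"] - temps["MinTempF"]  # 78 - 22 = 56
    return {"Location": location, "TempSpreadF": spread}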
You might define the following JSON format for your event and the Lambda runtime converts this JSON to an object before passing it to your function's handler. Example custom Lambda event { \"Location\": \"SEA\", \"WeatherData\":{ \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } For stream and queue services like Amazon Kinesis or Amazon SQS, Lambda uses an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then"} +{"global_id": 634, "doc_id": "lambda", "chunk_id": "5", "question_id": 3, "question": "What is a trigger in the context of AWS Lambda?", "answer_span": "A trigger connects your function to an event source, and your function can have multiple triggers.", "chunk": "to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and your function. 
Lambda provides a number of managed runtimes for the most popular programming languages, or you can create your own. For managed runtimes, Lambda automatically applies security updates and patches to functions using the runtime. Events and triggers You can also invoke a Lambda function directly by using the Lambda console, AWS CLI, or one of the AWS Software Development Kits (SDKs). It's more usual in a production application for your function to be invoked by another AWS service in response to a particular event. For example, you might want a function to run whenever an item is added to an Amazon DynamoDB table. To make your function respond to events, you set up a trigger. A trigger connects your function to an event source, and your function can have multiple triggers. When an event occurs, Lambda receives event data as a JSON document and converts it into an object that your code can process. You might define the following JSON format for your event and the Lambda runtime converts this JSON to an object before passing it to your function's handler. Example custom Lambda event { \"Location\": \"SEA\", \"WeatherData\":{ \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } For stream and queue services like Amazon Kinesis or Amazon SQS, Lambda uses an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then"} +{"global_id": 635, "doc_id": "lambda", "chunk_id": "5", "question_id": 4, "question": "What format does Lambda receive event data in?", "answer_span": "When an event occurs, Lambda receives event data as a JSON document and converts it into an object that your code can process.", "chunk": "to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and your function. Lambda provides a number of managed runtimes for the most popular programming languages, or you can create your own. For managed runtimes, Lambda automatically applies security updates and patches to functions using the runtime. Events and triggers You can also invoke a Lambda function directly by using the Lambda console, AWS CLI, or one of the AWS Software Development Kits (SDKs). It's more usual in a production application for your function to be invoked by another AWS service in response to a particular event. For example, you might want a function to run whenever an item is added to an Amazon DynamoDB table. To make your function respond to events, you set up a trigger. A trigger connects your function to an event source, and your function can have multiple triggers. When an event occurs, Lambda receives event data as a JSON document and converts it into an object that your code can process. You might define the following JSON format for your event and the Lambda runtime converts this JSON to an object before passing it to your function's handler. Example custom Lambda event { \"Location\": \"SEA\", \"WeatherData\":{ \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } For stream and queue services like Amazon Kinesis or Amazon SQS, Lambda uses an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then"} +{"global_id": 636, "doc_id": "lambda", "chunk_id": "6", "question_id": 1, "question": "What are the two main types of permissions that need to be configured for Lambda?", "answer_span": "For Lambda, there are two main types of permissions that you need to configure: • Permissions that your function needs to access other AWS services • Permissions that other users and AWS services need to access your function", "chunk": "\"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } For stream and queue services like Amazon Kinesis or Amazon SQS, Lambda uses an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then invoke your function with the batched events. For more information, see How event source mappings differ from direct triggers. To understand how a trigger works, start by completing the Use an Amazon S3 trigger tutorial, or for a general overview of using triggers and instructions on creating a trigger using the Lambda console, see Integrating other services. Lambda permissions and roles For Lambda, there are two main types of permissions that you need to configure: • Permissions that your function needs to access other AWS services • Permissions that other users and AWS services need to access your function The following sections describe both of these permission types and discuss best practices for applying least-privilege permissions. Permissions for functions to access other AWS resources Lambda functions often need to access other AWS resources and perform actions on them. For example, a function might read items from a DynamoDB table, store an object in an S3 bucket, or write to an Amazon SQS queue. To give functions the permissions they need to perform these actions, you use an execution role. 
A Lambda execution role is a special kind of AWS Identity and Access Management (IAM) role, an identity you create in your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the"} +{"global_id": 637, "doc_id": "lambda", "chunk_id": "6", "question_id": 2, "question": "What is a Lambda execution role?", "answer_span": "A Lambda execution role is a special kind of AWS Identity and Access Management (IAM) role, an identity you create in your account that has specific permissions associated with it defined in a policy.", "chunk": "Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then invoke your function with the batched events. For more information, see How event source mappings differ from direct triggers. To understand how a trigger works, start by completing the Use an Amazon S3 trigger tutorial, or for a general overview of using triggers and instructions on creating a trigger using the Lambda console, see Integrating other services. Lambda permissions and roles For Lambda, there are two main types of permissions that you need to configure: • Permissions that your function needs to access other AWS services • Permissions that other users and AWS services need to access your function The following sections describe both of these permission types and discuss best practices for applying least-privilege permissions. Permissions for functions to access other AWS resources Lambda functions often need to access other AWS resources and perform actions on them. For example, a function might read items from a DynamoDB table, store an object in an S3 bucket, or write to an Amazon SQS queue. To give functions the permissions they need to perform these actions, you use an execution role. A Lambda execution role is a special kind of AWS Identity and Access Management (IAM) role, an identity you create in your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the"} +{"global_id": 638, "doc_id": "lambda", "chunk_id": "6", "question_id": 3, "question": "What actions might a Lambda function perform on other AWS resources?", "answer_span": "For example, a function might read items from a DynamoDB table, store an object in an S3 bucket, or write to an Amazon SQS queue.", "chunk": "Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then invoke your function with the batched events. For more information, see How event source mappings differ from direct triggers. 
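For the event source mappings just described, the following is a sketch of connecting a function to an Amazon SQS queue with the AWS SDK for Python (Boto3); the queue ARN and function name are placeholders.

import boto3

lambda_client = boto3.client("lambda")

# Connect a function to an SQS queue. Lambda polls the queue,
# batches records together, and invokes the function with the batch.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:my-queue",  # placeholder
    FunctionName="my-function",                                    # placeholder
    BatchSize=10,  # number of records delivered per invocation
)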
To understand how a trigger works, start by completing the Use an Amazon S3 trigger tutorial, or for a general overview of using triggers and instructions on creating a trigger using the Lambda console, see Integrating other services. Lambda permissions and roles For Lambda, there are two main types of permissions that you need to configure: • Permissions that your function needs to access other AWS services • Permissions that other users and AWS services need to access your function The following sections describe both of these permission types and discuss best practices for applying least-privilege permissions. Permissions for functions to access other AWS resources Lambda functions often need to access other AWS resources and perform actions on them. For example, a function might read items from a DynamoDB table, store an object in an S3 bucket, or write to an Amazon SQS queue. To give functions the permissions they need to perform these actions, you use an execution role. A Lambda execution role is a special kind of AWS Identity and Access Management (IAM) role, an identity you create in your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the"} +{"global_id": 639, "doc_id": "lambda", "chunk_id": "6", "question_id": 4, "question": "Can a single Lambda execution role be used by more than one function?", "answer_span": "Every Lambda function must have an execution role, and a single role can be used by more than one function.", "chunk": "Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then invoke your function with the batched events. For more information, see How event source mappings differ from direct triggers. To understand how a trigger works, start by completing the Use an Amazon S3 trigger tutorial, or for a general overview of using triggers and instructions on creating a trigger using the Lambda console, see Integrating other services. Lambda permissions and roles For Lambda, there are two main types of permissions that you need to configure: • Permissions that your function needs to access other AWS services • Permissions that other users and AWS services need to access your function The following sections describe both of these permission types and discuss best practices for applying least-privilege permissions. Permissions for functions to access other AWS resources Lambda functions often need to access other AWS resources and perform actions on them. For example, a function might read items from a DynamoDB table, store an object in an S3 bucket, or write to an Amazon SQS queue. To give functions the permissions they need to perform these actions, you use an execution role. A Lambda execution role is a special kind of AWS Identity and Access Management (IAM) role, an identity you create in your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. 
When a function is invoked, Lambda assumes the"} +{"global_id": 640, "doc_id": "lambda", "chunk_id": "7", "question_id": 1, "question": "What must every Lambda function have?", "answer_span": "Every Lambda function must have an execution role, and a single role can be used by more than one function.", "chunk": "your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the function's execution role and is granted permission to take the actions defined in the role's policy. When you create a function in the Lambda console, Lambda automatically creates an execution role for your function. The role's policy gives your function basic permissions to write log outputs to Amazon CloudWatch Logs. To give your function permission to perform actions on other AWS resources, you need to edit the role to add the extra permissions. The easiest way to add permissions is to use an AWS managed policy. Managed policies are created and administered by AWS and provide permissions for many common use cases. For example, if your function performs CRUD operations on a DynamoDB table, you can add the AmazonDynamoDBFullAccess policy to your role. Permissions for other users and resources to access your function To grant other AWS service permission to access your Lambda function, you use a resourcebased policy. In IAM, resource-based policies are attached to a resource (in this case, your Lambda function) and define who can access the resource and what actions they are allowed to take. For another AWS service to invoke your function through a trigger, your function's resource-based policy must grant that service permission to use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or"} +{"global_id": 641, "doc_id": "lambda", "chunk_id": "7", "question_id": 2, "question": "What does the role's policy give your function permission to do?", "answer_span": "The role's policy gives your function basic permissions to write log outputs to Amazon CloudWatch Logs.", "chunk": "your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the function's execution role and is granted permission to take the actions defined in the role's policy. When you create a function in the Lambda console, Lambda automatically creates an execution role for your function. The role's policy gives your function basic permissions to write log outputs to Amazon CloudWatch Logs. To give your function permission to perform actions on other AWS resources, you need to edit the role to add the extra permissions. The easiest way to add permissions is to use an AWS managed policy. Managed policies are created and administered by AWS and provide permissions for many common use cases. For example, if your function performs CRUD operations on a DynamoDB table, you can add the AmazonDynamoDBFullAccess policy to your role. 
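As a sketch of attaching the managed policy mentioned above to an execution role with Boto3; the role name is a placeholder.

import boto3

iam = boto3.client("iam")

# Attach the AWS managed policy for DynamoDB access to the
# function's execution role.
iam.attach_role_policy(
    RoleName="my-function-execution-role",  # placeholder
    PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess",
)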
Permissions for other users and resources to access your function To grant another AWS service permission to access your Lambda function, you use a resource-based policy. In IAM, resource-based policies are attached to a resource (in this case, your Lambda function) and define who can access the resource and what actions they are allowed to take. For another AWS service to invoke your function through a trigger, your function's resource-based policy must grant that service permission to use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or"} +{"global_id": 642, "doc_id": "lambda", "chunk_id": "7", "question_id": 3, "question": "How can you add extra permissions to your Lambda function's role?", "answer_span": "To give your function permission to perform actions on other AWS resources, you need to edit the role to add the extra permissions.", "chunk": "your account that has specific permissions associated with it defined in a policy. Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the function's execution role and is granted permission to take the actions defined in the role's policy. When you create a function in the Lambda console, Lambda automatically creates an execution role for your function. The role's policy gives your function basic permissions to write log outputs to Amazon CloudWatch Logs. To give your function permission to perform actions on other AWS resources, you need to edit the role to add the extra permissions. The easiest way to add permissions is to use an AWS managed policy. Managed policies are created and administered by AWS and provide permissions for many common use cases. For example, if your function performs CRUD operations on a DynamoDB table, you can add the AmazonDynamoDBFullAccess policy to your role. Permissions for other users and resources to access your function To grant another AWS service permission to access your Lambda function, you use a resource-based policy. In IAM, resource-based policies are attached to a resource (in this case, your Lambda function) and define who can access the resource and what actions they are allowed to take. For another AWS service to invoke your function through a trigger, your function's resource-based policy must grant that service permission to use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or"} +{"global_id": 643, "doc_id": "lambda", "chunk_id": "7", "question_id": 4, "question": "What must your function's resource-based policy grant for another AWS service to invoke your function?", "answer_span": "your function's resource-based policy must grant that service permission to use the lambda:InvokeFunction action.", "chunk": "your account that has specific permissions associated with it defined in a policy. 
Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the function's execution role and is granted permission to take the actions defined in the role's policy. When you create a function in the Lambda console, Lambda automatically creates an execution role for your function. The role's policy gives your function basic permissions to write log outputs to Amazon CloudWatch Logs. To give your function permission to perform actions on other AWS resources, you need to edit the role to add the extra permissions. The easiest way to add permissions is to use an AWS managed policy. Managed policies are created and administered by AWS and provide permissions for many common use cases. For example, if your function performs CRUD operations on a DynamoDB table, you can add the AmazonDynamoDBFullAccess policy to your role. Permissions for other users and resources to access your function To grant other AWS service permission to access your Lambda function, you use a resourcebased policy. In IAM, resource-based policies are attached to a resource (in this case, your Lambda function) and define who can access the resource and what actions they are allowed to take. For another AWS service to invoke your function through a trigger, your function's resource-based policy must grant that service permission to use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or"} +{"global_id": 644, "doc_id": "lambda", "chunk_id": "8", "question_id": 1, "question": "What action is used to invoke a Lambda function?", "answer_span": "use the lambda:InvokeFunction action.", "chunk": "use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or resource. You can also use an identity-based policy that's associated with the user. Best practices for Lambda permissions When you set permissions using IAM policies, security best practice is to grant only the permissions required to perform a task. This is known as the principle of least privilege. To get started granting permissions for your function, you might choose to use an AWS managed policy. Managed policies can be the quickest and easiest way to grant permissions to perform a task, but they might also include other permissions you don't need. As you move from early development through test and production, we recommend you reduce permissions to only those needed by defining your own customer-managed policies. The same principle applies when granting permissions to access your function using a resourcebased policy. For example, if you want to give permission to Amazon S3 to invoke your function, Lambda permissions and roles 7 AWS Lambda Developer Guide best practice is to limit access to individual buckets, or buckets in particular AWS accounts, rather than giving blanket permissions to the S3 service. 
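To make the bucket-scoping guidance concrete, the following is a Boto3 sketch that grants Amazon S3 permission to invoke a function, limited to a single bucket in a single account; the function name, statement ID, bucket ARN, and account ID are placeholders.

import boto3

lambda_client = boto3.client("lambda")

# Add a statement to the function's resource-based policy that lets
# S3 invoke the function, but only for events from one bucket in one
# account (least privilege).
lambda_client.add_permission(
    FunctionName="my-function",                    # placeholder
    StatementId="allow-s3-one-bucket",             # placeholder
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::amzn-s3-demo-bucket",  # placeholder bucket
    SourceAccount="123456789012",                  # placeholder account
)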
Lambda permissions and roles 8 AWS Lambda Developer Guide Running code with Lambda When you write a Lambda function, you are creating code that will run in a unique serverless environment. Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model"} +{"global_id": 645, "doc_id": "lambda", "chunk_id": "8", "question_id": 2, "question": "What is the principle of least privilege in the context of Lambda permissions?", "answer_span": "security best practice is to grant only the permissions required to perform a task.", "chunk": "use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or resource. You can also use an identity-based policy that's associated with the user. Best practices for Lambda permissions When you set permissions using IAM policies, security best practice is to grant only the permissions required to perform a task. This is known as the principle of least privilege. To get started granting permissions for your function, you might choose to use an AWS managed policy. Managed policies can be the quickest and easiest way to grant permissions to perform a task, but they might also include other permissions you don't need. As you move from early development through test and production, we recommend you reduce permissions to only those needed by defining your own customer-managed policies. The same principle applies when granting permissions to access your function using a resourcebased policy. For example, if you want to give permission to Amazon S3 to invoke your function, Lambda permissions and roles 7 AWS Lambda Developer Guide best practice is to limit access to individual buckets, or buckets in particular AWS accounts, rather than giving blanket permissions to the S3 service. Lambda permissions and roles 8 AWS Lambda Developer Guide Running code with Lambda When you write a Lambda function, you are creating code that will run in a unique serverless environment. Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model"} +{"global_id": 646, "doc_id": "lambda", "chunk_id": "8", "question_id": 3, "question": "What is recommended as you move from early development through test and production regarding permissions?", "answer_span": "we recommend you reduce permissions to only those needed by defining your own customer-managed policies.", "chunk": "use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or resource. You can also use an identity-based policy that's associated with the user. 
Best practices for Lambda permissions When you set permissions using IAM policies, security best practice is to grant only the permissions required to perform a task. This is known as the principle of least privilege. To get started granting permissions for your function, you might choose to use an AWS managed policy. Managed policies can be the quickest and easiest way to grant permissions to perform a task, but they might also include other permissions you don't need. As you move from early development through test and production, we recommend you reduce permissions to only those needed by defining your own customer-managed policies. The same principle applies when granting permissions to access your function using a resourcebased policy. For example, if you want to give permission to Amazon S3 to invoke your function, Lambda permissions and roles 7 AWS Lambda Developer Guide best practice is to limit access to individual buckets, or buckets in particular AWS accounts, rather than giving blanket permissions to the S3 service. Lambda permissions and roles 8 AWS Lambda Developer Guide Running code with Lambda When you write a Lambda function, you are creating code that will run in a unique serverless environment. Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model"} +{"global_id": 647, "doc_id": "lambda", "chunk_id": "8", "question_id": 4, "question": "What should you limit access to when granting permissions to Amazon S3 to invoke your function?", "answer_span": "best practice is to limit access to individual buckets, or buckets in particular AWS accounts, rather than giving blanket permissions to the S3 service.", "chunk": "use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or resource. You can also use an identity-based policy that's associated with the user. Best practices for Lambda permissions When you set permissions using IAM policies, security best practice is to grant only the permissions required to perform a task. This is known as the principle of least privilege. To get started granting permissions for your function, you might choose to use an AWS managed policy. Managed policies can be the quickest and easiest way to grant permissions to perform a task, but they might also include other permissions you don't need. As you move from early development through test and production, we recommend you reduce permissions to only those needed by defining your own customer-managed policies. The same principle applies when granting permissions to access your function using a resourcebased policy. For example, if you want to give permission to Amazon S3 to invoke your function, Lambda permissions and roles 7 AWS Lambda Developer Guide best practice is to limit access to individual buckets, or buckets in particular AWS accounts, rather than giving blanket permissions to the S3 service. Lambda permissions and roles 8 AWS Lambda Developer Guide Running code with Lambda When you write a Lambda function, you are creating code that will run in a unique serverless environment. 
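In the spirit of the least-privilege guidance above, the following is a sketch of a customer-managed policy that narrows DynamoDB access to a single table, as an alternative to the broad AmazonDynamoDBFullAccess managed policy; the policy name and table ARN are placeholders.

import json
import boto3

iam = boto3.client("iam")

# A customer-managed policy scoped to one table and to only the
# actions the function actually needs.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                   "dynamodb:UpdateItem", "dynamodb:DeleteItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/my-table",  # placeholder
    }],
}

iam.create_policy(
    PolicyName="my-function-dynamodb-crud",  # placeholder
    PolicyDocument=json.dumps(policy_document),
)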
Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model"} +{"global_id": 648, "doc_id": "lambda", "chunk_id": "9", "question_id": 1, "question": "What are the two key aspects involved in understanding how Lambda runs your code?", "answer_span": "Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment.", "chunk": "creating code that will run in a unique serverless environment. Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model Programming model functions as a common set of rules for how Lambda works with your code, regardless of whether you're writing in Python, Java, or any other supported language. The programming model includes your runtime and handler. 1. Lambda receives an event. 2. Lambda uses the runtime (like Python or Java) to prepare the event in a format your code can use. 3. The runtime sends the formatted event to your handler. 4. Your handler processes the event using the code you've written in your Lambda function. Essential to this model is the handler, where Lambda sends events to be processed by your code. Think of it as the entry point to your code. When Lambda receives an event, it passes this event and some context information to your handler. The handler then runs your code to process these events - for example, it might read a file when it's uploaded to Amazon S3, analyze an image, or update a database. Once your code finishes processing an event, the handler is ready to process the next one. The Lambda execution model While the programming model defines how Lambda interacts with your code, Execution environment is where Lambda actually runs your function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. Initialization: Lambda creates the environment and gets everything ready to run your function. This includes setting up your chosen runtime, loading your code, and running any startup code"} +{"global_id": 649, "doc_id": "lambda", "chunk_id": "9", "question_id": 2, "question": "What does the Lambda programming model include?", "answer_span": "The programming model includes your runtime and handler.", "chunk": "creating code that will run in a unique serverless environment. Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model Programming model functions as a common set of rules for how Lambda works with your code, regardless of whether you're writing in Python, Java, or any other supported language. The programming model includes your runtime and handler. 1. Lambda receives an event. 2. Lambda uses the runtime (like Python or Java) to prepare the event in a format your code can use. 3. 
The runtime sends the formatted event to your handler. 4. Your handler processes the event using the code you've written in your Lambda function. Essential to this model is the handler, where Lambda sends events to be processed by your code. Think of it as the entry point to your code. When Lambda receives an event, it passes this event and some context information to your handler. The handler then runs your code to process these events - for example, it might read a file when it's uploaded to Amazon S3, analyze an image, or update a database. Once your code finishes processing an event, the handler is ready to process the next one. The Lambda execution model While the programming model defines how Lambda interacts with your code, Execution environment is where Lambda actually runs your function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. Initialization: Lambda creates the environment and gets everything ready to run your function. This includes setting up your chosen runtime, loading your code, and running any startup code"} +{"global_id": 650, "doc_id": "lambda", "chunk_id": "9", "question_id": 3, "question": "What is the role of the handler in the Lambda programming model?", "answer_span": "Essential to this model is the handler, where Lambda sends events to be processed by your code.", "chunk": "creating code that will run in a unique serverless environment. Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model Programming model functions as a common set of rules for how Lambda works with your code, regardless of whether you're writing in Python, Java, or any other supported language. The programming model includes your runtime and handler. 1. Lambda receives an event. 2. Lambda uses the runtime (like Python or Java) to prepare the event in a format your code can use. 3. The runtime sends the formatted event to your handler. 4. Your handler processes the event using the code you've written in your Lambda function. Essential to this model is the handler, where Lambda sends events to be processed by your code. Think of it as the entry point to your code. When Lambda receives an event, it passes this event and some context information to your handler. The handler then runs your code to process these events - for example, it might read a file when it's uploaded to Amazon S3, analyze an image, or update a database. Once your code finishes processing an event, the handler is ready to process the next one. The Lambda execution model While the programming model defines how Lambda interacts with your code, Execution environment is where Lambda actually runs your function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. Initialization: Lambda creates the environment and gets everything ready to run your function. 
This includes setting up your chosen runtime, loading your code, and running any startup code"} +{"global_id": 651, "doc_id": "lambda", "chunk_id": "9", "question_id": 4, "question": "What are the three phases of the Lambda execution environment lifecycle?", "answer_span": "Each environment follows a lifecycle of three phases.", "chunk": "creating code that will run in a unique serverless environment. Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model Programming model functions as a common set of rules for how Lambda works with your code, regardless of whether you're writing in Python, Java, or any other supported language. The programming model includes your runtime and handler. 1. Lambda receives an event. 2. Lambda uses the runtime (like Python or Java) to prepare the event in a format your code can use. 3. The runtime sends the formatted event to your handler. 4. Your handler processes the event using the code you've written in your Lambda function. Essential to this model is the handler, where Lambda sends events to be processed by your code. Think of it as the entry point to your code. When Lambda receives an event, it passes this event and some context information to your handler. The handler then runs your code to process these events - for example, it might read a file when it's uploaded to Amazon S3, analyze an image, or update a database. Once your code finishes processing an event, the handler is ready to process the next one. The Lambda execution model While the programming model defines how Lambda interacts with your code, Execution environment is where Lambda actually runs your function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. Initialization: Lambda creates the environment and gets everything ready to run your function. This includes setting up your chosen runtime, loading your code, and running any startup code"} +{"global_id": 652, "doc_id": "lambda", "chunk_id": "10", "question_id": 1, "question": "What are the three phases of the Lambda environment lifecycle?", "answer_span": "Each environment follows a lifecycle of three phases.", "chunk": "function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. Initialization: Lambda creates the environment and gets everything ready to run your function. This includes setting up your chosen runtime, loading your code, and running any startup code you've written. 2. Invocation: When events arrive, Lambda uses this environment to run your function. The environment can process many events over time, one after another. As more events come in, Running code 9 AWS Lambda Developer Guide Lambda creates additional environments to handle the increased demand. When demand drops, Lambda stops environments that are no longer needed. 3. Shutdown: Eventually, Lambda will shut down environments. Before doing this, it gives your function a chance to clean up any remaining tasks. This environment handles important aspects of running your function. It provides your function with memory and a /tmp directory for temporary storage. It maintains resources like database connections between invocations, so your function can reuse them. 
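The following is a sketch of the reuse pattern described above: an SDK client created outside the handler during initialization is reused by every invocation the environment handles, and the /tmp directory serves as a transient per-environment cache; the bucket and object key names are placeholders.

import os
import boto3

# Created once per execution environment, during initialization,
# then reused across invocations.
s3 = boto3.client("s3")

CACHE_PATH = "/tmp/reference.json"  # transient, per-environment cache

def lambda_handler(event, context):
    # Download the file only if a previous invocation in this
    # environment hasn't already cached it.
    if not os.path.exists(CACHE_PATH):
        s3.download_file("amzn-s3-demo-bucket", "reference.json", CACHE_PATH)  # placeholders
    with open(CACHE_PATH) as f:
        data = f.read()
    return {"bytes": len(data)}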
It offers features like provisioned concurrency, where Lambda prepares environments in advance to improve performance. Understanding the Lambda programming model Lambda provides a programming model that is common to all of the runtimes. The programming model defines the interface between your code and the Lambda system. You tell Lambda the entry point to your function by defining a handler in the function configuration. The runtime passes in objects to the handler that contain the invocation event and the context, such as the function name and request ID. When the handler finishes processing the first event, the runtime sends it another. The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused. To save processing time on subsequent events, create reusable"} +{"global_id": 653, "doc_id": "lambda", "chunk_id": "10", "question_id": 2, "question": "What happens during the Initialization phase?", "answer_span": "Lambda creates the environment and gets everything ready to run your function.", "chunk": "function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. Initialization: Lambda creates the environment and gets everything ready to run your function. This includes setting up your chosen runtime, loading your code, and running any startup code you've written. 2. Invocation: When events arrive, Lambda uses this environment to run your function. The environment can process many events over time, one after another. As more events come in, Running code 9 AWS Lambda Developer Guide Lambda creates additional environments to handle the increased demand. When demand drops, Lambda stops environments that are no longer needed. 3. Shutdown: Eventually, Lambda will shut down environments. Before doing this, it gives your function a chance to clean up any remaining tasks. This environment handles important aspects of running your function. It provides your function with memory and a /tmp directory for temporary storage. It maintains resources like database connections between invocations, so your function can reuse them. It offers features like provisioned concurrency, where Lambda prepares environments in advance to improve performance. Understanding the Lambda programming model Lambda provides a programming model that is common to all of the runtimes. The programming model defines the interface between your code and the Lambda system. You tell Lambda the entry point to your function by defining a handler in the function configuration. The runtime passes in objects to the handler that contain the invocation event and the context, such as the function name and request ID. When the handler finishes processing the first event, the runtime sends it another. The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused. To save processing time on subsequent events, create reusable"} +{"global_id": 654, "doc_id": "lambda", "chunk_id": "10", "question_id": 3, "question": "How does Lambda handle increased demand for function execution?", "answer_span": "As more events come in, Lambda creates additional environments to handle the increased demand.", "chunk": "function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. 
Initialization: Lambda creates the environment and gets everything ready to run your function. This includes setting up your chosen runtime, loading your code, and running any startup code you've written. 2. Invocation: When events arrive, Lambda uses this environment to run your function. The environment can process many events over time, one after another. As more events come in, Running code 9 AWS Lambda Developer Guide Lambda creates additional environments to handle the increased demand. When demand drops, Lambda stops environments that are no longer needed. 3. Shutdown: Eventually, Lambda will shut down environments. Before doing this, it gives your function a chance to clean up any remaining tasks. This environment handles important aspects of running your function. It provides your function with memory and a /tmp directory for temporary storage. It maintains resources like database connections between invocations, so your function can reuse them. It offers features like provisioned concurrency, where Lambda prepares environments in advance to improve performance. Understanding the Lambda programming model Lambda provides a programming model that is common to all of the runtimes. The programming model defines the interface between your code and the Lambda system. You tell Lambda the entry point to your function by defining a handler in the function configuration. The runtime passes in objects to the handler that contain the invocation event and the context, such as the function name and request ID. When the handler finishes processing the first event, the runtime sends it another. The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused. To save processing time on subsequent events, create reusable"} +{"global_id": 655, "doc_id": "lambda", "chunk_id": "10", "question_id": 4, "question": "What does the Lambda programming model define?", "answer_span": "The programming model defines the interface between your code and the Lambda system.", "chunk": "function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. Initialization: Lambda creates the environment and gets everything ready to run your function. This includes setting up your chosen runtime, loading your code, and running any startup code you've written. 2. Invocation: When events arrive, Lambda uses this environment to run your function. The environment can process many events over time, one after another. As more events come in, Running code 9 AWS Lambda Developer Guide Lambda creates additional environments to handle the increased demand. When demand drops, Lambda stops environments that are no longer needed. 3. Shutdown: Eventually, Lambda will shut down environments. Before doing this, it gives your function a chance to clean up any remaining tasks. This environment handles important aspects of running your function. It provides your function with memory and a /tmp directory for temporary storage. It maintains resources like database connections between invocations, so your function can reuse them. It offers features like provisioned concurrency, where Lambda prepares environments in advance to improve performance. Understanding the Lambda programming model Lambda provides a programming model that is common to all of the runtimes. The programming model defines the interface between your code and the Lambda system. 
You tell Lambda the entry point to your function by defining a handler in the function configuration. The runtime passes in objects to the handler that contain the invocation event and the context, such as the function name and request ID. When the handler finishes processing the first event, the runtime sends it another. The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused. To save processing time on subsequent events, create reusable"} +{"global_id": 656, "doc_id": "lambda", "chunk_id": "11", "question_id": 1, "question": "What happens when the handler finishes processing the first event?", "answer_span": "When the handler finishes processing the first event, the runtime sends it another.", "chunk": "name and request ID. When the handler finishes processing the first event, the runtime sends it another. The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused. To save processing time on subsequent events, create reusable resources like AWS SDK clients during initialization. Once initialized, each instance of your function can process thousands of requests. Your function also has access to local storage in the /tmp directory, a transient cache that can be used for multiple invocations. For more information, see Execution environment. When AWS X-Ray tracing is enabled, the runtime records separate subsegments for initialization and execution. The runtime captures logging output from your function and sends it to Amazon CloudWatch Logs. In addition to logging your function's output, the runtime also logs entries when function invocation starts and ends. This includes a report log with the request ID, billed duration, initialization duration, and other details. If your function throws an error, the runtime returns that error to the invoker. Running code 10 AWS Lambda Developer Guide Note Logging is subject to CloudWatch Logs quotas. Log data can be lost due to throttling or, in some cases, when an instance of your function is stopped. Lambda scales your function by running additional instances of it as demand increases, and by stopping instances as demand decreases. This model leads to variations in application architecture, such as: • Unless noted otherwise, incoming requests might be processed out of order or concurrently. • Do not rely on instances of your function being long lived, instead store your application's state elsewhere. • Use local storage and class-level objects to increase performance, but keep to a minimum the size of your deployment package and the amount of data that you"} +{"global_id": 657, "doc_id": "lambda", "chunk_id": "11", "question_id": 2, "question": "What can be reused in the function's class?", "answer_span": "The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused.", "chunk": "name and request ID. When the handler finishes processing the first event, the runtime sends it another. The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused. To save processing time on subsequent events, create reusable resources like AWS SDK clients during initialization. Once initialized, each instance of your function can process thousands of requests. 
Your function also has access to local storage in the /tmp directory, a transient cache that can be used for multiple invocations. For more information, see Execution environment. When AWS X-Ray tracing is enabled, the runtime records separate subsegments for initialization and execution. The runtime captures logging output from your function and sends it to Amazon CloudWatch Logs. In addition to logging your function's output, the runtime also logs entries when function invocation starts and ends. This includes a report log with the request ID, billed duration, initialization duration, and other details. If your function throws an error, the runtime returns that error to the invoker. Note Logging is subject to CloudWatch Logs quotas. Log data can be lost due to throttling or, in some cases, when an instance of your function is stopped. Lambda scales your function by running additional instances of it as demand increases, and by stopping instances as demand decreases. This model leads to variations in application architecture, such as: • Unless noted otherwise, incoming requests might be processed out of order or concurrently. • Do not rely on instances of your function being long-lived; instead, store your application's state elsewhere. • Use local storage and class-level objects to increase performance, but keep to a minimum the size of your deployment package and the amount of data that you"} +{"global_id": 658, "doc_id": "lambda", "chunk_id": "11", "question_id": 3, "question": "What does the runtime do when AWS X-Ray tracing is enabled?", "answer_span": "When AWS X-Ray tracing is enabled, the runtime records separate subsegments for initialization and execution.", "chunk": "name and request ID. When the handler finishes processing the first event, the runtime sends it another. The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused. To save processing time on subsequent events, create reusable resources like AWS SDK clients during initialization. Once initialized, each instance of your function can process thousands of requests. Your function also has access to local storage in the /tmp directory, a transient cache that can be used for multiple invocations. For more information, see Execution environment. When AWS X-Ray tracing is enabled, the runtime records separate subsegments for initialization and execution. The runtime captures logging output from your function and sends it to Amazon CloudWatch Logs. In addition to logging your function's output, the runtime also logs entries when function invocation starts and ends. This includes a report log with the request ID, billed duration, initialization duration, and other details. If your function throws an error, the runtime returns that error to the invoker. Note Logging is subject to CloudWatch Logs quotas. Log data can be lost due to throttling or, in some cases, when an instance of your function is stopped. Lambda scales your function by running additional instances of it as demand increases, and by stopping instances as demand decreases. This model leads to variations in application architecture, such as: • Unless noted otherwise, incoming requests might be processed out of order or concurrently. • Do not rely on instances of your function being long-lived; instead, store your application's state elsewhere.
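Example — using /tmp as a transient cache across warm invocations. This is a sketch, not from the guide; the bucket example-bucket and key models/model.bin are hypothetical. Because /tmp persists between invocations on the same execution environment, the download only happens on a cold start.

import os
import boto3

s3 = boto3.client("s3")
CACHE_PATH = "/tmp/model.bin"

def lambda_handler(event, context):
    # Warm invocations find the file already cached in /tmp and skip the download.
    if not os.path.exists(CACHE_PATH):
        s3.download_file("example-bucket", "models/model.bin", CACHE_PATH)
    with open(CACHE_PATH, "rb") as f:
        return {"cached_bytes": len(f.read())}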
• Use local storage and class-level objects to increase performance, but keep to a minimum the size of your deployment package and the amount of data that you"} +{"global_id": 660, "doc_id": "lambda", "chunk_id": "12", "question_id": 1, "question": "What should you not rely on regarding instances of your function?", "answer_span": "Do not rely on instances of your function being long-lived; instead, store your application's state elsewhere.", "chunk": "of order or concurrently. • Do not rely on instances of your function being long-lived; instead, store your application's state elsewhere. • Use local storage and class-level objects to increase performance, but keep to a minimum the size of your deployment package and the amount of data that you transfer onto the execution environment. For a hands-on introduction to the programming model in your preferred programming language, see the following chapters. • Building Lambda functions with Node.js • Building Lambda functions with Python • Building Lambda functions with Ruby • Building Lambda functions with Java • Building Lambda functions with Go • Building Lambda functions with C# • Building Lambda functions with PowerShell Understanding the Lambda execution environment lifecycle Lambda invokes your function in an execution environment, which provides a secure and isolated runtime environment.
The execution environment manages the resources required to run your function. The execution environment also provides lifecycle support for the function's runtime and any external extensions associated with your function. The function's runtime communicates with Lambda using the Runtime API. Extensions communicate with Lambda using the Extensions API. Extensions can also receive log messages and other telemetry from the function by using the Telemetry API. When you create your Lambda function, you specify configuration information, such as the amount of memory available and the maximum execution time allowed for your function. Lambda uses this information to set up the execution environment. The function's runtime and each external extension are processes that run within the execution environment. Permissions, resources, credentials, and environment variables are shared between the function and the extensions. Topics • Lambda execution environment lifecycle • Cold starts and latency • Reducing cold starts with Provisioned Concurrency • Optimizing static initialization Lambda execution environment lifecycle"} +{"global_id": 661, "doc_id": "lambda", "chunk_id": "12", "question_id": 2, "question": "What does the execution environment manage?", "answer_span": "The execution environment manages the resources required to run your function.", "chunk": "of order or concurrently. • Do not rely on instances of your function being long-lived; instead, store your application's state elsewhere. • Use local storage and class-level objects to increase performance, but keep to a minimum the size of your deployment package and the amount of data that you transfer onto the execution environment. For a hands-on introduction to the programming model in your preferred programming language, see the following chapters. • Building Lambda functions with Node.js • Building Lambda functions with Python • Building Lambda functions with Ruby • Building Lambda functions with Java • Building Lambda functions with Go • Building Lambda functions with C# • Building Lambda functions with PowerShell Understanding the Lambda execution environment lifecycle Lambda invokes your function in an execution environment, which provides a secure and isolated runtime environment. The execution environment manages the resources required to run your function. The execution environment also provides lifecycle support for the function's runtime and any external extensions associated with your function. The function's runtime communicates with Lambda using the Runtime API. Extensions communicate with Lambda using the Extensions API. Extensions can also receive log messages and other telemetry from the function by using the Telemetry API. When you create your Lambda function, you specify configuration information, such as the amount of memory available and the maximum execution time allowed for your function. Lambda uses this information to set up the execution environment. The function's runtime and each external extension are processes that run within the execution environment. Permissions, resources, credentials, and environment variables are shared between the function and the extensions.
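To illustrate how a runtime talks to Lambda, the following is a deliberately simplified Python sketch of the Runtime API loop; it is not a complete runtime (error handling and the initialization and error endpoints are omitted), and the echo response is a placeholder for calling the real handler.

import json
import os
import urllib.request

base = f"http://{os.environ['AWS_LAMBDA_RUNTIME_API']}/2018-06-01/runtime"

while True:
    # Block until Lambda assigns this execution environment the next event.
    with urllib.request.urlopen(f"{base}/invocation/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())

    result = json.dumps({"echo": event}).encode()  # a real runtime invokes the handler here

    # Posting the response signals completion, like the Next API request described above.
    urllib.request.urlopen(urllib.request.Request(
        f"{base}/invocation/{request_id}/response", data=result, method="POST")).close()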
Topics • Lambda execution environment lifecycle • Cold starts and latency • Reducing cold starts with Provisioned Concurrency • Optimizing static initialization Lambda execution environment lifecycle"} +{"global_id": 662, "doc_id": "lambda", "chunk_id": "12", "question_id": 3, "question": "What type of information do you specify when creating your Lambda function?", "answer_span": "When you create your Lambda function, you specify configuration information, such as the amount of memory available and the maximum execution time allowed for your function.", "chunk": "of order or concurrently. • Do not rely on instances of your function being long-lived; instead, store your application's state elsewhere. • Use local storage and class-level objects to increase performance, but keep to a minimum the size of your deployment package and the amount of data that you transfer onto the execution environment. For a hands-on introduction to the programming model in your preferred programming language, see the following chapters.
• Building Lambda functions with Node.js • Building Lambda functions with Python • Building Lambda functions with Ruby • Building Lambda functions with Java • Building Lambda functions with Go • Building Lambda functions with C# • Building Lambda functions with PowerShell Understanding the Lambda execution environment lifecycle Lambda invokes your function in an execution environment, which provides a secure and isolated runtime environment. The execution environment manages the resources required to run your function. The execution environment also provides lifecycle support for the function's runtime and any external extensions associated with your function. The function's runtime communicates with Lambda using the Runtime API. Extensions communicate with Lambda using the Extensions API. Extensions can also receive log messages and other telemetry from the function by using the Telemetry API. When you create your Lambda function, you specify configuration information, such as the amount of memory available and the maximum execution time allowed for your function. Lambda uses this information to set up the execution environment. The function's runtime and each external extension are processes that run within the execution environment. Permissions, resources, credentials, and environment variables are shared between the function and the extensions. Topics • Lambda execution environment lifecycle • Cold starts and latency • Reducing cold starts with Provisioned Concurrency • Optimizing static initialization Lambda execution environment lifecycle"} +{"global_id": 663, "doc_id": "lambda", "chunk_id": "12", "question_id": 4, "question": "What APIs do the function's runtime and extensions communicate with Lambda using?", "answer_span": "The function's runtime communicates with Lambda using the Runtime API. Extensions communicate with Lambda using the Extensions API.", "chunk": "of order or concurrently. • Do not rely on instances of your function being long-lived; instead, store your application's state elsewhere. • Use local storage and class-level objects to increase performance, but keep to a minimum the size of your deployment package and the amount of data that you transfer onto the execution environment. For a hands-on introduction to the programming model in your preferred programming language, see the following chapters. • Building Lambda functions with Node.js • Building Lambda functions with Python • Building Lambda functions with Ruby • Building Lambda functions with Java • Building Lambda functions with Go • Building Lambda functions with C# • Building Lambda functions with PowerShell Understanding the Lambda execution environment lifecycle Lambda invokes your function in an execution environment, which provides a secure and isolated runtime environment. The execution environment manages the resources required to run your function. The execution environment also provides lifecycle support for the function's runtime and any external extensions associated with your function. The function's runtime communicates with Lambda using the Runtime API. Extensions communicate with Lambda using the Extensions API. Extensions can also receive log messages and other telemetry from the function by using the Telemetry API. When you create your Lambda function, you specify configuration information, such as the amount of memory available and the maximum execution time allowed for your function. Lambda uses this information to set up the execution environment. The function's runtime and each external extension are processes that run within the execution environment. Permissions, resources, credentials, and environment variables are shared between the function and the extensions. Topics • Lambda execution environment lifecycle • Cold starts and latency • Reducing cold starts with Provisioned Concurrency • Optimizing static initialization Lambda execution environment lifecycle"} +{"global_id": 664, "doc_id": "lambda", "chunk_id": "13", "question_id": 1, "question": "What are the three tasks performed during the Init phase?", "answer_span": "In the Init phase, Lambda performs three tasks: • Start all extensions (Extension init) • Bootstrap the runtime (Runtime init) • Run the function's static code (Function init)", "chunk": "extension are processes that run within the execution environment. Permissions, resources, credentials, and environment variables are shared between the function and the extensions. Topics • Lambda execution environment lifecycle • Cold starts and latency • Reducing cold starts with Provisioned Concurrency • Optimizing static initialization Lambda execution environment lifecycle Each phase starts with an event that Lambda sends to the runtime and to all registered extensions. The runtime and each extension indicate completion by sending a Next API request. Lambda freezes the execution environment when the runtime and each extension have completed and there are no pending events. Topics • Init phase • Failures during the Init phase • Restore phase (Lambda SnapStart only) • Invoke phase • Failures during the invoke phase • Shutdown phase Init phase In the Init phase, Lambda performs three tasks: • Start all extensions (Extension init) • Bootstrap the runtime (Runtime init) • Run the function's static code (Function init) • Run any before-checkpoint runtime hooks (Lambda SnapStart only) The Init phase ends when the runtime and all extensions signal that they are ready by sending a Next API request. The Init phase is limited to 10 seconds. If all three tasks do not complete within 10 seconds, Lambda retries the Init phase at the time of the first function invocation with the configured function timeout. When Lambda SnapStart is activated, the Init phase happens when you publish a function version.
Lambda saves a snapshot of the memory and disk state of the initialized execution environment, persists the encrypted snapshot, and caches it for low-latency access. If you have a before-checkpoint runtime hook, then the code runs at the end of the Init phase. Note The 10-second timeout doesn't apply to functions that are using provisioned concurrency or"} +{"global_id": 665, "doc_id": "lambda", "chunk_id": "13", "question_id": 2, "question": "What happens if the tasks in the Init phase do not complete within 10 seconds?", "answer_span": "If all three tasks do not complete within 10 seconds, Lambda retries the Init phase at the time of the first function invocation with the configured function timeout.", "chunk": "extension are processes that run within the execution environment. Permissions, resources, credentials, and environment variables are shared between the function and the extensions. Topics • Lambda execution environment lifecycle • Cold starts and latency • Reducing cold starts with Provisioned Concurrency • Optimizing static initialization Lambda execution environment lifecycle Each phase starts with an event that Lambda sends to the runtime and to all registered extensions. The runtime and each extension indicate completion by sending a Next API request.
Lambda freezes the execution environment when the runtime and each extension have completed and there are no pending events. Topics • Init phase • Failures during the Init phase • Restore phase (Lambda SnapStart only) • Invoke phase • Failures during the invoke phase • Shutdown phase Init phase In the Init phase, Lambda performs three tasks: • Start all extensions (Extension init) • Bootstrap the runtime (Runtime init) • Run the function's static code (Function init) • Run any before-checkpoint runtime hooks (Lambda SnapStart only) The Init phase ends when the runtime and all extensions signal that they are ready by sending a Next API request. The Init phase is limited to 10 seconds. If all three tasks do not complete within 10 seconds, Lambda retries the Init phase at the time of the first function invocation with the configured function timeout. When Lambda SnapStart is activated, the Init phase happens when you publish a function version. Lambda saves a snapshot of the memory and disk state of the initialized execution environment, persists the encrypted snapshot, and caches it for low-latency access. If you have a before-checkpoint runtime hook, then the code runs at the end of the Init phase.
Note The 10-second timeout doesn't apply to functions that are using provisioned concurrency or"} +{"global_id": 666, "doc_id": "lambda", "chunk_id": "13", "question_id": 3, "question": "What does Lambda do when SnapStart is activated?", "answer_span": "When Lambda SnapStart is activated, the Init phase happens when you publish a function version.", "chunk": "extension are processes that run within the execution environment. Permissions, resources, credentials, and environment variables are shared between the function and the extensions. Topics • Lambda execution environment lifecycle • Cold starts and latency • Reducing cold starts with Provisioned Concurrency • Optimizing static initialization Lambda execution environment lifecycle Each phase starts with an event that Lambda sends to the runtime and to all registered extensions. The runtime and each extension indicate completion by sending a Next API request. Lambda freezes the execution environment when the runtime and each extension have completed and there are no pending events. Topics • Init phase • Failures during the Init phase • Restore phase (Lambda SnapStart only) • Invoke phase • Failures during the invoke phase • Shutdown phase Init phase In the Init phase, Lambda performs three tasks: • Start all extensions (Extension init) • Bootstrap the runtime (Runtime init) • Run the function's static code (Function init) • Run any before-checkpoint runtime hooks (Lambda SnapStart only) The Init phase ends when the runtime and all extensions signal that they are ready by sending a Next API request. The Init phase is limited to 10 seconds. If all three tasks do not complete within 10 seconds, Lambda retries the Init phase at the time of the first function invocation with the configured function timeout. When Lambda SnapStart is activated, the Init phase happens when you publish a function version. Lambda saves a snapshot of the memory and disk state of the initialized execution environment, persists the encrypted snapshot, and caches it for low-latency access. If you have a before-checkpoint runtime hook, then the code runs at the end of the Init phase.
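Example — registering SnapStart runtime hooks. This is a sketch under stated assumptions: it assumes a Python runtime version that supports SnapStart, where hooks can be registered with the snapshot_restore_py helper (Java functions use the CRaC API instead), and the DynamoDB client is a stand-in for any connection-like resource.

import boto3
from snapshot_restore_py import register_before_snapshot, register_after_restore

client = boto3.client("dynamodb")

def before_checkpoint():
    # Runs at the end of the Init phase, before Lambda takes the snapshot:
    # drop anything that must not be persisted in the snapshot.
    global client
    client = None

def after_restore():
    # Runs at the end of the Restore phase and must finish within the
    # timeout limit, or the invocation fails with SnapStartTimeoutException.
    global client
    client = boto3.client("dynamodb")

register_before_snapshot(before_checkpoint)
register_after_restore(after_restore)

def lambda_handler(event, context):
    return {"ready": client is not None}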
Lambda also ensures that initialized execution environments are always available in advance of invocations. You may see gaps between your function's invocation and initialization phases. Depending on your function's runtime and memory configuration, you may also see variable latency on the first invocation on an initialized execution environment. For functions using on-demand concurrency, Lambda may occasionally initialize execution environments ahead of invocation requests. When this happens, you may also observe a time gap between your function's initialization and invocation phases. We recommend that you not take a dependency on this behavior. Failures during the Init phase If a function crashes or times out during the Init phase, Lambda emits error information in the INIT_REPORT log."} +{"global_id": 669, "doc_id": "lambda", "chunk_id": "14", "question_id": 2, "question": "How long can initialization code run for functions using provisioned concurrency or SnapStart?", "answer_span": "For provisioned concurrency and SnapStart functions, your initialization code can run for up to 15 minutes.", "chunk": "memory and disk state of the initialized execution environment, persists the encrypted snapshot, and caches it for low-latency access. If you have a before-checkpoint runtime hook, then the code runs at the end of the Init phase. Note The 10-second timeout doesn't apply to functions that are using provisioned concurrency or SnapStart. For provisioned concurrency and SnapStart functions, your initialization code can run for up to 15 minutes. The time limit is 130 seconds or the configured function timeout (maximum 900 seconds), whichever is higher. When you use provisioned concurrency, Lambda initializes the execution environment when you configure the provisioned concurrency settings for a function.
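Example — configuring provisioned concurrency with the AWS SDK for Python. A sketch; the function name and version are hypothetical. Setting this configuration is what triggers Lambda to initialize execution environments ahead of invocations.

import boto3

lambda_client = boto3.client("lambda")

# Provisioned concurrency targets a published version or alias, not $LATEST.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="example-function",
    Qualifier="1",
    ProvisionedConcurrentExecutions=10,
)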
Example — INIT_REPORT log for timeout INIT_REPORT Init Duration: 1236.04 ms Phase: init Status: timeout Example — INIT_REPORT log for extension failure INIT_REPORT Init Duration: 1236.04 ms Phase: init Status: error Error Type: Extension.Crash If the Init phase is successful, Lambda doesn't emit the INIT_REPORT log unless SnapStart or provisioned concurrency is enabled. SnapStart and provisioned concurrency functions always emit INIT_REPORT. For more information, see Monitoring for Lambda SnapStart. Restore phase (Lambda SnapStart only) When"} +{"global_id": 671, "doc_id": "lambda", "chunk_id": "14", "question_id": 4, "question": "What is emitted in the INIT_REPORT log if the Init phase is successful?", "answer_span": "If the Init phase is successful, Lambda doesn't emit the INIT_REPORT log unless SnapStart or provisioned concurrency is enabled.", "chunk": "memory and disk state of the initialized execution environment, persists the encrypted snapshot, and caches it for low-latency access. If you have a before-checkpoint runtime hook, then the code runs at the end of the Init phase. Note The 10-second timeout doesn't apply to functions that are using provisioned concurrency or SnapStart. For provisioned concurrency and SnapStart functions, your initialization code can run for up to 15 minutes. The time limit is 130 seconds or the configured function timeout (maximum 900 seconds), whichever is higher. When you use provisioned concurrency, Lambda initializes the execution environment when you configure the provisioned concurrency settings for a function. Lambda also ensures that initialized execution environments are always available in advance of invocations. You may see gaps between your function's invocation and initialization phases. Depending on your function's runtime and memory configuration, you may also see variable latency on the first invocation on an initialized execution environment. For functions using on-demand concurrency, Lambda may occasionally initialize execution environments ahead of invocation requests. When this happens, you may also observe a time gap between your function's initialization and invocation phases. We recommend that you not take a dependency on this behavior. Failures during the Init phase If a function crashes or times out during the Init phase, Lambda emits error information in the INIT_REPORT log.
For more information, see Monitoring for Lambda SnapStart. Restore phase (Lambda SnapStart only) When you first invoke a SnapStart function and as the function scales up, Lambda resumes new execution environments from the persisted snapshot instead of initializing the function from scratch. If you have an after-restore runtime hook, the code runs at the end of the Restore phase. You are charged for the duration of after-restore runtime hooks. The runtime must load and after-restore runtime hooks must complete within the timeout limit (10 seconds). Otherwise, you'll get a SnapStartTimeoutException. When the Restore phase completes, Lambda invokes the function handler (the Invoke phase). Failures during the Restore phase If the Restore phase fails, Lambda emits error information in the RESTORE_REPORT log."} +{"global_id": 673, "doc_id": "lambda", "chunk_id": "15", "question_id": 2, "question": "What does Lambda do during the Restore phase for SnapStart functions?", "answer_span": "When you first invoke a SnapStart function and as the function scales up, Lambda resumes new execution environments from the persisted snapshot instead of initializing the function from scratch.", "chunk": "1236.04 ms Phase: init Status: error Error Type: Extension.Crash If the Init phase is successful, Lambda doesn't emit the INIT_REPORT log unless SnapStart or provisioned concurrency is enabled. SnapStart and provisioned concurrency functions always emit INIT_REPORT. For more information, see Monitoring for Lambda SnapStart. Restore phase (Lambda SnapStart only) When you first invoke a SnapStart function and as the function scales up, Lambda resumes new execution environments from the persisted snapshot instead of initializing the function from scratch. If you have an after-restore runtime hook, the code runs at the end of the Restore phase. You are charged for the duration of after-restore runtime hooks. The runtime must load and after-restore runtime hooks must complete within the timeout limit (10 seconds). Otherwise, you'll get a SnapStartTimeoutException. When the Restore phase completes, Lambda invokes the function handler (the Invoke phase). Failures during the Restore phase If the Restore phase fails, Lambda emits error information in the RESTORE_REPORT log.
Example — RESTORE_REPORT log for timeout RESTORE_REPORT Restore Duration: 1236.04 ms Status: timeout Example — RESTORE_REPORT log for runtime hook failure RESTORE_REPORT Restore Duration: 1236.04 ms Status: error Error Type: Runtime.ExitError For more information about the RESTORE_REPORT log, see Monitoring for Lambda SnapStart. Invoke phase When a Lambda function is invoked in response to a Next API request, Lambda sends an Invoke event to the runtime and to each extension. The function's timeout setting limits the duration of the entire Invoke phase. For example, if you set the function timeout as 360 seconds, the function and all extensions need to complete within 360 seconds. Note that there is no independent post-invoke phase."} +{"global_id": 674, "doc_id": "lambda", "chunk_id": "15", "question_id": 3, "question": "What is the timeout limit for restore runtime hooks?", "answer_span": "restore runtime hooks must complete within the timeout limit (10 seconds).", "chunk": "1236.04 ms Phase: init Status: error Error Type: Extension.Crash If the Init phase is successful, Lambda doesn't emit the INIT_REPORT log unless SnapStart or provisioned concurrency is enabled. SnapStart and provisioned concurrency functions always emit INIT_REPORT. For more information, see Monitoring for Lambda SnapStart. Restore phase (Lambda SnapStart only) When you first invoke a SnapStart function and as the function scales up, Lambda resumes new execution environments from the persisted snapshot instead of initializing the function from scratch. If you have an after-restore runtime hook, the code runs at the end of the Restore phase. You are charged for the duration of after-restore runtime hooks. The runtime must load and after-restore runtime hooks must complete within the timeout limit (10 seconds). Otherwise, you'll get a SnapStartTimeoutException. When the Restore phase completes, Lambda invokes the function handler (the Invoke phase). Failures during the Restore phase If the Restore phase fails, Lambda emits error information in the RESTORE_REPORT log. Example — RESTORE_REPORT log for timeout RESTORE_REPORT Restore Duration: 1236.04 ms Status: timeout Example — RESTORE_REPORT log for runtime hook failure RESTORE_REPORT Restore Duration: 1236.04 ms Status: error Error Type: Runtime.ExitError For more information about the RESTORE_REPORT log, see Monitoring for Lambda SnapStart. Invoke phase When a Lambda function is invoked in response to a Next API request, Lambda sends an Invoke event to the runtime and to each extension. The function's timeout setting limits the duration of the entire Invoke phase. For example, if you set the function timeout as 360 seconds, the function and all extensions need to complete within 360 seconds. Note that there is no independent post-invoke phase.
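Example — setting the function timeout that bounds the Invoke phase. A boto3 sketch; the function name is hypothetical.

import boto3

lambda_client = boto3.client("lambda")

# With Timeout=360, the handler and all extensions must complete
# within 360 seconds; the maximum allowed value is 900 seconds.
lambda_client.update_function_configuration(
    FunctionName="example-function",
    Timeout=360,
)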
The duration is the sum of all invocation time (runtime + extensions) and is not calculated until the function and all extensions have finished"} +{"global_id": 675, "doc_id": "lambda", "chunk_id": "15", "question_id": 4, "question": "What is emitted in the RESTORE_REPORT log if the Restore phase fails?", "answer_span": "If the Restore phase fails, Lambda emits error information in the RESTORE_REPORT log.", "chunk": "1236.04 ms Phase: init Status: error Error Type: Extension.Crash If the Init phase is successful, Lambda doesn't emit the INIT_REPORT log unless SnapStart or provisioned concurrency is enabled. SnapStart and provisioned concurrency functions always emit INIT_REPORT. For more information, see Monitoring for Lambda SnapStart. Restore phase (Lambda SnapStart only) When you first invoke a SnapStart function and as the function scales up, Lambda resumes new execution environments from the persisted snapshot instead of initializing the function from scratch. If you have an after-restore runtime hook, the code runs at the end of the Restore phase. You are charged for the duration of after-restore runtime hooks. The runtime must load and after-restore runtime hooks must complete within the timeout limit (10 seconds). Otherwise, you'll get a SnapStartTimeoutException. When the Restore phase completes, Lambda invokes the function handler (the Invoke phase). Failures during the Restore phase If the Restore phase fails, Lambda emits error information in the RESTORE_REPORT log. Example — RESTORE_REPORT log for timeout RESTORE_REPORT Restore Duration: 1236.04 ms Status: timeout Example — RESTORE_REPORT log for runtime hook failure RESTORE_REPORT Restore Duration: 1236.04 ms Status: error Error Type: Runtime.ExitError For more information about the RESTORE_REPORT log, see Monitoring for Lambda SnapStart. Invoke phase When a Lambda function is invoked in response to a Next API request, Lambda sends an Invoke event to the runtime and to each extension. The function's timeout setting limits the duration of the entire Invoke phase. For example, if you set the function timeout as 360 seconds, the function and all extensions need to complete within 360 seconds. Note that there is no independent post-invoke phase. The duration is the sum of all invocation time (runtime + extensions) and is not calculated until the function and all extensions have finished
• At some point, suppose your function runs into an invoke failure (common causes include function timeouts, runtime errors, memory exhaustion, VPC connectivity issues, permission errors, concurrency limits, and various configuration problems). For a complete list of possible invocation failures, see the section called “Invocation”. The third phase, labeled INVOKE WITH ERROR, illustrates this scenario. When this happens, the Lambda service performs a reset. The reset behaves like a Shutdown event. First, Lambda shuts down the runtime, then sends a Shutdown event to each registered external extension. The event includes the reason for the shutdown. If this environment is used for a new invocation, Lambda re-initializes the extension and runtime together with the next invocation. Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase. This behavior is consistent with the regular shutdown phase. Note AWS is currently implementing changes to the Lambda service. Due to these changes, you"} +{"global_id": 677, "doc_id": "lambda", "chunk_id": "16", "question_id": 2, "question": "What happens if the Lambda function crashes or times out during the Invoke phase?", "answer_span": "If the Lambda function crashes or times out during the Invoke phase, Lambda resets the execution environment.", "chunk": "set the function timeout as 360 seconds, the function and all extensions need to complete within 360 seconds. Note that there is no independent post-invoke phase. The duration is the sum of all invocation time (runtime + extensions) and is not calculated until the function and all extensions have finished executing. The invoke phase ends after the runtime and all extensions signal that they are done by sending a Next API request. Failures during the invoke phase If the Lambda function crashes or times out during the Invoke phase, Lambda resets the execution environment. The following diagram illustrates Lambda execution environment behavior when there's an invoke failure: In the previous diagram: • The first phase is the INIT phase, which runs without errors. • The second phase is the INVOKE phase, which runs without errors. • At some point, suppose your function runs into an invoke failure (common causes include function timeouts, runtime errors, memory exhaustion, VPC connectivity issues, permission errors, concurrency limits, and various configuration problems). For a complete list of possible invocation failures, see the section called “Invocation”. The third phase, labeled INVOKE WITH ERROR, illustrates this scenario. When this happens, the Lambda service performs a reset. The reset behaves like a Shutdown event. First, Lambda shuts down the runtime, then sends a Shutdown event to each registered external extension. The event includes the reason for the shutdown. If this environment is used for a new invocation, Lambda re-initializes the extension and runtime together with the next invocation. Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase. This behavior is consistent with the regular shutdown phase. Note AWS is currently implementing changes to the Lambda service.
Due to these changes, you"} +{"global_id": 678, "doc_id": "lambda", "chunk_id": "16", "question_id": 3, "question": "What does the reset behave like when an invoke failure occurs?", "answer_span": "The reset behaves like a Shutdown event.", "chunk": "set the function timeout as 360 seconds, the function and all extensions need to complete within 360 seconds. Note that there is no independent post-invoke phase. The duration is the sum of all invocation time (runtime + extensions) and is not calculated until the function and all extensions have finished executing. The invoke phase ends after the runtime and all extensions signal that they are done by sending a Next API request. Failures during the invoke phase If the Lambda function crashes or times out during the Invoke phase, Lambda resets the execution environment. The following diagram illustrates Lambda execution environment behavior when there's an invoke failure: In the previous diagram: • The first phase is the INIT phase, which runs without errors. • The second phase is the INVOKE phase, which runs without errors.
• At some point, suppose your function runs into an invoke failure (common causes include function timeouts, runtime errors, memory exhaustion, VPC connectivity issues, permission errors, concurrency limits, and various configuration problems). For a complete list of possible invocation failures, see the section called “Invocation”. The third phase, labeled INVOKE WITH ERROR, illustrates this scenario. When this happens, the Lambda service performs a reset. The reset behaves like a Shutdown event. First, Lambda shuts down the runtime, then sends a Shutdown event to each registered external extension. The event includes the reason for the shutdown. If this environment is used for a new invocation, Lambda re-initializes the extension and runtime together with the next invocation. Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase. This behavior is consistent with the regular shutdown phase. Note AWS is currently implementing changes to the Lambda service. Due to these changes, you"} +{"global_id": 680, "doc_id": "lambda", "chunk_id": "17", "question_id": 1, "question": "What does the Lambda reset not clear before the next init phase?", "answer_span": "Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase.", "chunk": "extension and runtime together with the next invocation. Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase. This behavior is consistent with the regular shutdown phase. Note AWS is currently implementing changes to the Lambda service. Due to these changes, you may see minor differences between the structure and content of system log messages and trace segments emitted by different Lambda functions in your AWS account. If your function's system log configuration is set to plain text, this change affects the log messages captured in CloudWatch Logs when your function experiences an invoke failure. The following examples show log outputs in both old and new formats. These changes will be implemented during the coming weeks, and all functions in all AWS Regions except the China and GovCloud regions will transition to use the new-format log messages and trace segments. Example CloudWatch Logs log output (runtime or extension crash) - old style START RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Version: $LATEST RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Error: Runtime exited without providing a reason Runtime.ExitError END RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 REPORT RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Duration: 933.59 ms Billed Duration: 934 ms Memory Size: 128 MB Max Memory Used: 9 MB Example CloudWatch Logs log output (function timeout) - old style START RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 Version: $LATEST 2024-03-04T17:22:38.033Z b70435cc-261c-4438-b9b6-efe4c8f04b21 Task timed out after 3.00 seconds END RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 REPORT RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 Duration: 3004.92 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 33 MB Init Duration: 111.23 ms The new format for CloudWatch logs includes an additional status field in the REPORT line. In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType.
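Example — guarding against stale /tmp content after a reset. A sketch with a hypothetical scratch-file layout: because a reset does not clear /tmp, initialization code should treat leftovers from a previous, possibly failed, invocation as suspect.

import glob
import os

SCRATCH = "/tmp/scratch"

# Initialization code: remove partial files that a previous invocation may
# have left behind before it crashed or timed out.
os.makedirs(SCRATCH, exist_ok=True)
for leftover in glob.glob(os.path.join(SCRATCH, "*.partial")):
    os.remove(leftover)

def lambda_handler(event, context):
    path = os.path.join(SCRATCH, f"{context.aws_request_id}.partial")
    with open(path, "w") as f:
        f.write("work in progress")
    os.rename(path, path[: -len(".partial")] + ".done")  # commit once complete
    return {"ok": True}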
Example CloudWatch Logs log output (runtime or extension crash) - new style START RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Version:"} +{"global_id": 681, "doc_id": "lambda", "chunk_id": "17", "question_id": 2, "question": "What may you see due to the changes being implemented in the Lambda service?", "answer_span": "Due to these changes, you may see minor differences between the structure and content of system log messages and trace segments emitted by different Lambda functions in your AWS account.", "chunk": "extension and runtime together with the next invocation. Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase. This behavior is consistent with the regular shutdown phase. Note AWS is currently implementing changes to the Lambda service. Due to these changes, you may see minor differences between the structure and content of system log messages and trace segments emitted by different Lambda functions in your AWS account.
If your function's system log configuration is set to plain text, this change affects the log messages captured in CloudWatch Logs when your function experiences an invoke failure. The following examples show log outputs in both old and new formats. These changes will be implemented during the coming weeks, and all functions in all AWS Regions except the China and GovCloud regions will transition to use the new format log messages and trace segments. Example CloudWatch Logs log output (runtime or extension crash) - old style START RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Version: $LATEST RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Error: Runtime exited without providing a reason Runtime.ExitError END RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 REPORT RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Duration: 933.59 ms Billed Duration: 934 ms Memory Size: 128 MB Max Memory Used: 9 MB Example CloudWatch Logs log output (function timeout) - old style START RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 Version: $LATEST 2024-03-04T17:22:38.033Z b70435cc-261c-4438-b9b6-efe4c8f04b21 Task timed out after 3.00 seconds END RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 REPORT RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 Duration: 3004.92 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 33 MB Init Duration: 111.23 ms The new format for CloudWatch logs includes an additional status field in the REPORT line. In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType. Example CloudWatch Logs log output (runtime or extension crash) - new style START RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Version:"} +{"global_id": 683, "doc_id": "lambda", "chunk_id": "17", "question_id": 4, "question": "What will the new format for CloudWatch logs include in the REPORT line?", "answer_span": "The new format for CloudWatch logs includes an additional status field in the REPORT line.", "chunk": "extension and runtime together with the next invocation. Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase. This behavior is consistent with the regular shutdown phase. Note AWS is currently implementing changes to the Lambda service. Due to these changes, you may see minor differences between the structure and content of system log messages and trace segments emitted by different Lambda functions in your AWS account. If your function's system log configuration is set to plain text, this change affects the log messages captured in CloudWatch Logs when your function experiences an invoke failure. The following examples show log outputs in both old and new formats. These changes will be implemented during the coming weeks, and all functions in all AWS Regions except the China and GovCloud regions will transition to use the new format log messages and trace segments.
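One way to find invocations that failed under the new plain-text format is to filter the function's log group for REPORT lines that carry an error status. A rough sketch using boto3 (the log group name is a placeholder, and the filter pattern assumes the new-style format described above):

import boto3

logs = boto3.client("logs")

def find_failed_reports(log_group="/aws/lambda/my-function"):
    # Match REPORT lines that include the new-style "Status: error" field.
    paginator = logs.get_paginator("filter_log_events")
    for page in paginator.paginate(
        logGroupName=log_group,
        filterPattern='"REPORT" "Status: error"',
    ):
        for event in page["events"]:
            print(event["message"])

find_failed_reports()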
Example CloudWatch Logs log output (runtime or extension crash) - old style START RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Version: $LATEST RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Error: Runtime exited without providing a reason Runtime.ExitError END RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 REPORT RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Duration: 933.59 ms Billed Duration: 934 ms Memory Size: 128 MB Max Memory Used: 9 MB Example CloudWatch Logs log output (function timeout) - old style START RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 Version: $LATEST 2024-03-04T17:22:38.033Z b70435cc-261c-4438-b9b6-efe4c8f04b21 Task timed out after 3.00 seconds END RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 REPORT RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 Duration: 3004.92 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 33 MB Init Duration: 111.23 ms The new format for CloudWatch logs includes an additional status field in the REPORT line. In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType. Example CloudWatch Logs log output (runtime or extension crash) - new style START RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Version:"} +{"global_id": 684, "doc_id": "lambda", "chunk_id": "18", "question_id": 1, "question": "What is included in the new format for CloudWatch logs in the REPORT line?", "answer_span": "The new format for CloudWatch logs includes an additional status field in the REPORT line.", "chunk": "Duration: 111.23 ms The new format for CloudWatch logs includes an additional status field in the REPORT line. In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType. Example CloudWatch Logs log output (runtime or extension crash) - new style START RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Version: $LATEST END RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd REPORT RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Duration: 133.61 ms Billed Duration: 133 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 80.00 ms Status: error Error Type: Runtime.ExitError Example CloudWatch Logs log output (function timeout) - new style START RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Version: $LATEST END RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda REPORT RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Duration: 3016.78 ms Billed Duration: 3016 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 84.00 ms Status: timeout • The fourth phase represents the INVOKE phase immediately following an invoke failure. Here, Lambda initializes the environment again by re-running the INIT phase. This is called a suppressed init. When suppressed inits occur, Lambda doesn't explicitly report an additional INIT phase in CloudWatch Logs. Instead, you may notice that the duration in the REPORT line includes an additional INIT duration + the INVOKE duration. For example, suppose you see the following logs in CloudWatch: 2022-12-20T01:00:00.000-08:00 START RequestId: XXX Version: $LATEST 2022-12-20T01:00:02.500-08:00 END RequestId: XXX 2022-12-20T01:00:02.500-08:00 REPORT RequestId: XXX Duration: 3022.91 ms Billed Duration: 3000 ms Memory Size: 512 MB Max Memory Used: 157 MB In this example, the difference between the REPORT and START timestamps is 2.5 seconds.
This doesn't match the reported duration of 3022.91 milliseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed. In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time"} +{"global_id": 685, "doc_id": "lambda", "chunk_id": "18", "question_id": 2, "question": "What does the REPORT line include in the case of a runtime or extension crash?", "answer_span": "In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType.", "chunk": "Duration: 111.23 ms The new format for CloudWatch logs includes an additional status field in the REPORT line. In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType. Example CloudWatch Logs log output (runtime or extension crash) - new style START RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Version: $LATEST END RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd REPORT RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Duration: 133.61 ms Billed Duration: 133 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 80.00 ms Status: error Error Type: Runtime.ExitError Example CloudWatch Logs log output (function timeout) - new style START RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Version: $LATEST END RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda REPORT RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Duration: 3016.78 ms Billed Duration: 3016 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 84.00 ms Status: timeout • The fourth phase represents the INVOKE phase immediately following an invoke failure. Here, Lambda initializes the environment again by re-running the INIT phase. This is called a suppressed init. When suppressed inits occur, Lambda doesn't explicitly report an additional INIT phase in CloudWatch Logs. Instead, you may notice that the duration in the REPORT line includes an additional INIT duration + the INVOKE duration. For example, suppose you see the following logs in CloudWatch: 2022-12-20T01:00:00.000-08:00 START RequestId: XXX Version: $LATEST 2022-12-20T01:00:02.500-08:00 END RequestId: XXX 2022-12-20T01:00:02.500-08:00 REPORT RequestId: XXX Duration: 3022.91 ms Billed Duration: 3000 ms Memory Size: 512 MB Max Memory Used: 157 MB In this example, the difference between the REPORT and START timestamps is 2.5 seconds. This doesn't match the reported duration of 3022.91 milliseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed. In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time"} +{"global_id": 686, "doc_id": "lambda", "chunk_id": "18", "question_id": 3, "question": "What happens during the fourth phase following an invoke failure?", "answer_span": "The fourth phase represents the INVOKE phase immediately following an invoke failure.", "chunk": "Duration: 111.23 ms The new format for CloudWatch logs includes an additional status field in the REPORT line. In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType.
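The suppressed init duration is never reported on its own line, but it can be inferred: subtract the wall-clock time between the START and END log timestamps from the duration in the REPORT line. A small worked sketch using the numbers from the example above:

from datetime import datetime

start = datetime.fromisoformat("2022-12-20T01:00:00.000-08:00")
end = datetime.fromisoformat("2022-12-20T01:00:02.500-08:00")
reported_duration_ms = 3022.91

invoke_ms = (end - start).total_seconds() * 1000  # 2500.0 ms
suppressed_init_ms = reported_duration_ms - invoke_ms

print(f"INVOKE phase: {invoke_ms:.2f} ms")
print(f"Inferred suppressed INIT: {suppressed_init_ms:.2f} ms")  # ~522.91 ms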
Example CloudWatch Logs log output (runtime or extension crash) - new style START RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Version: $LATEST END RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd REPORT RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Duration: 133.61 ms Billed Duration: 133 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 80.00 ms Status: error Error Type: Runtime.ExitError Example CloudWatch Logs log output (function timeout) - new style START RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Version: $LATEST END RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda REPORT RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Duration: 3016.78 ms Billed Duration: 3016 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 84.00 ms Status: timeout • The fourth phase represents the INVOKE phase immediately following an invoke failure. Here, Lambda initializes the environment again by re-running the INIT phase. This is called a suppressed init. When suppressed inits occur, Lambda doesn't explicitly report an additional INIT phase in CloudWatch Logs. Instead, you may notice that the duration in the REPORT line includes an additional INIT duration + the INVOKE duration. For example, suppose you see the following logs in CloudWatch: 2022-12-20T01:00:00.000-08:00 START RequestId: XXX Version: $LATEST 2022-12-20T01:00:02.500-08:00 END RequestId: XXX 2022-12-20T01:00:02.500-08:00 REPORT RequestId: XXX Duration: 3022.91 ms Billed Duration: 3000 ms Memory Size: 512 MB Max Memory Used: 157 MB In this example, the difference between the REPORT and START timestamps is 2.5 seconds. This doesn't match the reported duration of 3022.91 milliseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed. In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time"} +{"global_id": 687, "doc_id": "lambda", "chunk_id": "18", "question_id": 4, "question": "What is a suppressed init in the context of Lambda?", "answer_span": "When suppressed inits occur, Lambda doesn't explicitly report an additional INIT phase in CloudWatch Logs.", "chunk": "Duration: 111.23 ms The new format for CloudWatch logs includes an additional status field in the REPORT line. In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType. Example CloudWatch Logs log output (runtime or extension crash) - new style START RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Version: $LATEST END RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd REPORT RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Duration: 133.61 ms Billed Duration: 133 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 80.00 ms Status: error Error Type: Runtime.ExitError Example CloudWatch Logs log output (function timeout) - new style START RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Version: $LATEST END RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda REPORT RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Duration: 3016.78 ms Billed Duration: 3016 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 84.00 ms Status: timeout • The fourth phase represents the INVOKE phase immediately following an invoke failure. Here, Lambda initializes the environment again by re-running the INIT phase. This is called a suppressed init. When suppressed inits occur, Lambda doesn't explicitly report an additional INIT phase in CloudWatch Logs.
Instead, you may notice that the duration in the REPORT line includes an additional INIT duration + the INVOKE duration. For example, suppose you see the following logs in CloudWatch: 2022-12-20T01:00:00.000-08:00 START RequestId: XXX Version: $LATEST 2022-12-20T01:00:02.500-08:00 END RequestId: XXX 2022-12-20T01:00:02.500-08:00 REPORT RequestId: XXX Duration: 3022.91 ms Billed Duration: 3000 ms Memory Size: 512 MB Max Memory Used: 157 MB In this example, the difference between the REPORT and START timestamps is 2.5 seconds. This doesn't match the reported duration of 3022.91 milliseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed. In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time"} +{"global_id": 688, "doc_id": "lambda", "chunk_id": "19", "question_id": 1, "question": "What is the reported duration mentioned in the text?", "answer_span": "This doesn't match the reported duration of 3022.91 milliseconds", "chunk": "seconds. This doesn't match the reported duration of 3022.91 milliseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed. In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time telemetry data for extensions using the Telemetry API. The Telemetry API emits INIT_START, INIT_RUNTIME_DONE, and INIT_REPORT events with phase=invoke whenever suppressed inits occur in between invoke phases. • The fifth phase represents the SHUTDOWN phase, which runs without errors. Shutdown phase When Lambda is about to shut down the runtime, it sends a Shutdown event to each registered external extension. Extensions can use this time for final cleanup tasks. The Shutdown event is a response to a Next API request. Duration limit: The maximum duration of the Shutdown phase depends on the configuration of registered extensions: • 0 ms – A function with no registered extensions • 500 ms – A function with a registered internal extension • 2,000 ms – A function with one or more registered external extensions If the runtime or an extension does not respond to the Shutdown event within the limit, Lambda ends the process using a SIGKILL signal. After the function and all extensions have completed, Lambda maintains the execution environment for some time in anticipation of another function invocation. However, Lambda terminates execution environments every few hours to allow for runtime updates and maintenance, even for functions that are invoked continuously. You should not assume that the execution environment will persist indefinitely. For more information, see Implement statelessness in functions. When the function is invoked again, Lambda thaws the environment for reuse. Reusing the execution environment has the following implications: • Objects declared outside of the"} +{"global_id": 689, "doc_id": "lambda", "chunk_id": "19", "question_id": 2, "question": "What does the Telemetry API emit during the invoke phases?", "answer_span": "The Telemetry API emits INIT_START, INIT_RUNTIME_DONE, and INIT_REPORT events with phase=invoke", "chunk": "seconds. This doesn't match the reported duration of 3022.91 milliseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed.
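An external extension receives the Shutdown event as the response to its next-event poll. The sketch below shows the general shape of that loop using only the standard library; the register and next-event paths are the documented Extensions API endpoints, but treat the details as illustrative rather than a production implementation:

import json
import os
import urllib.request

API = f"http://{os.environ['AWS_LAMBDA_RUNTIME_API']}/2020-01-01/extension"

def register(name="cleanup-extension"):
    req = urllib.request.Request(
        f"{API}/register",
        data=json.dumps({"events": ["INVOKE", "SHUTDOWN"]}).encode(),
        headers={"Lambda-Extension-Name": name},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["Lambda-Extension-Identifier"]

def event_loop(ext_id):
    while True:
        req = urllib.request.Request(
            f"{API}/event/next",
            headers={"Lambda-Extension-Identifier": ext_id},
        )
        with urllib.request.urlopen(req) as resp:
            event = json.load(resp)
        if event["eventType"] == "SHUTDOWN":
            # event["shutdownReason"] carries the reason for the shutdown.
            # Finish cleanup quickly: external extensions get at most 2,000 ms.
            break

event_loop(register())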
In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time telemetry data for extensions using the Telemetry API. The Telemetry API emits INIT_START, INIT_RUNTIME_DONE, and INIT_REPORT events with phase=invoke whenever suppressed inits occur in between invoke phases. • The fifth phase represents the SHUTDOWN phase, which runs without errors. Shutdown phase When Lambda is about to shut down the runtime, it sends a Shutdown event to each registered external extension. Extensions can use this time for final cleanup tasks. The Shutdown event is a response to a Next API request. Duration limit: The maximum duration of the Shutdown phase depends on the configuration of registered extensions: • 0 ms – A function with no registered extensions • 500 ms – A function with a registered internal extension • 2,000 ms – A function with one or more registered external extensions If the runtime or an extension does not respond to the Shutdown event within the limit, Lambda ends the process using a SIGKILL signal. After the function and all extensions have completed, Lambda maintains the execution environment for some time in anticipation of another function invocation. However, Lambda terminates execution environments every few hours to allow for runtime updates and maintenance, even for functions that are invoked continuously. You should not assume that the execution environment will persist indefinitely. For more information, see Implement statelessness in functions. When the function is invoked again, Lambda thaws the environment for reuse. Reusing the execution environment has the following implications: • Objects declared outside of the"} +{"global_id": 690, "doc_id": "lambda", "chunk_id": "19", "question_id": 3, "question": "What happens if the runtime or an extension does not respond to the Shutdown event within the limit?", "answer_span": "Lambda ends the process using a SIGKILL signal.", "chunk": "seconds. This doesn't match the reported duration of 3022.91 milliseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed. In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time telemetry data for extensions using the Telemetry API. The Telemetry API emits INIT_START, INIT_RUNTIME_DONE, and INIT_REPORT events with phase=invoke whenever suppressed inits occur in between invoke phases. • The fifth phase represents the SHUTDOWN phase, which runs without errors. Shutdown phase When Lambda is about to shut down the runtime, it sends a Shutdown event to each registered external extension. Extensions can use this time for final cleanup tasks. The Shutdown event is a response to a Next API request. Duration limit: The maximum duration of the Shutdown phase depends on the configuration of registered extensions: • 0 ms – A function with no registered extensions • 500 ms – A function with a registered internal extension • 2,000 ms – A function with one or more registered external extensions If the runtime or an extension does not respond to the Shutdown event within the limit, Lambda ends the process using a SIGKILL signal. After the function and all extensions have completed, Lambda maintains the execution environment for some time in anticipation of another function invocation.
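To observe suppressed inits directly, an extension subscribed to the Telemetry API can watch for the INIT_REPORT event (which appears as type platform.initReport in the Telemetry API's JSON stream) with phase invoke. A minimal filter over a batch of received telemetry events, with field access treated as illustrative of the schema:

def find_suppressed_inits(telemetry_events):
    # INIT_REPORT events with phase "invoke" indicate an INIT that ran as
    # part of an invoke, i.e. a suppressed init.
    return [
        event for event in telemetry_events
        if event.get("type") == "platform.initReport"
        and event.get("record", {}).get("phase") == "invoke"
    ]

batch = [
    {"type": "platform.initReport",
     "record": {"initializationType": "on-demand", "phase": "invoke",
                "metrics": {"durationMs": 522.91}}},
]
print(find_suppressed_inits(batch))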
However, Lambda terminates execution environments every few hours to allow for runtime updates and maintenance, even for functions that are invoked continuously. You should not assume that the execution environment will persist indefinitely. For more information, see Implement statelessness in functions. When the function is invoked again, Lambda thaws the environment for reuse. Reusing the execution environment has the following implications: • Objects declared outside of the"} +{"global_id": 691, "doc_id": "lambda", "chunk_id": "19", "question_id": 4, "question": "What should you not assume about the execution environment?", "answer_span": "You should not assume that the execution environment will persist indefinitely.", "chunk": "seconds. This doesn't match the reported duration of 3022.91 milliseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed. In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time telemetry data for extensions using the Telemetry API. The Telemetry API emits INIT_START, INIT_RUNTIME_DONE, and INIT_REPORT events with phase=invoke whenever suppressed inits occur in between invoke phases. • The fifth phase represents the SHUTDOWN phase, which runs without errors. Shutdown phase When Lambda is about to shut down the runtime, it sends a Shutdown event to each registered external extension. Extensions can use this time for final cleanup tasks. The Shutdown event is a response to a Next API request. Duration limit: The maximum duration of the Shutdown phase depends on the configuration of registered extensions: • 0 ms – A function with no registered extensions • 500 ms – A function with a registered internal extension • 2,000 ms – A function with one or more registered external extensions If the runtime or an extension does not respond to the Shutdown event within the limit, Lambda ends the process using a SIGKILL signal. After the function and all extensions have completed, Lambda maintains the execution environment for some time in anticipation of another function invocation. However, Lambda terminates execution environments every few hours to allow for runtime updates and maintenance, even for functions that are invoked continuously. You should not assume that the execution environment will persist indefinitely. For more information, see Implement statelessness in functions. When the function is invoked again, Lambda thaws the environment for reuse. Reusing the execution environment has the following implications: • Objects declared outside of the"} +{"global_id": 692, "doc_id": "lambda", "chunk_id": "20", "question_id": 1, "question": "What should you not assume about the execution environment in AWS Lambda?", "answer_span": "not assume that the execution environment will persist indefinitely.", "chunk": "not assume that the execution environment will persist indefinitely. For more information, see Implement statelessness in functions. When the function is invoked again, Lambda thaws the environment for reuse. Reusing the execution environment has the following implications: • Objects declared outside of the function's handler method remain initialized, providing additional optimization when the function is invoked again.
For example, if your Lambda function establishes a database connection, instead of reestablishing the connection, the original connection is used in subsequent invocations. We recommend adding logic in your code to check if a connection exists before creating a new one. • Each execution environment provides between 512 MB and 10,240 MB, in 1-MB increments, of disk space in the /tmp directory. The directory content remains when the execution environment is frozen, providing a transient cache that can be used for multiple invocations. You can add extra code to check if the cache has the data that you stored. For more information on deployment size limits, see Lambda quotas. • Background processes or callbacks that were initiated by your Lambda function and did not complete when the function ended resume if Lambda reuses the execution environment. Make sure that any background processes or callbacks in your code are complete before the code exits. Cold starts and latency When Lambda receives a request to run a function via the Lambda API, the service first prepares an execution environment. During this initialization phase, the service downloads your code, starts the environment, and runs any initialization code outside of the main handler. Finally, Lambda runs the handler code. In this diagram, the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”. You are not charged for this time,"} +{"global_id": 693, "doc_id": "lambda", "chunk_id": "20", "question_id": 2, "question": "What happens when the function is invoked again in AWS Lambda?", "answer_span": "When the function is invoked again, Lambda thaws the environment for reuse.", "chunk": "not assume that the execution environment will persist indefinitely. For more information, see Implement statelessness in functions. When the function is invoked again, Lambda thaws the environment for reuse. Reusing the execution environment has the following implications: • Objects declared outside of the function's handler method remain initialized, providing additional optimization when the function is invoked again. For example, if your Lambda function establishes a database connection, instead of reestablishing the connection, the original connection is used in subsequent invocations. We recommend adding logic in your code to check if a connection exists before creating a new one. • Each execution environment provides between 512 MB and 10,240 MB, in 1-MB increments, of disk space in the /tmp directory. The directory content remains when the execution environment is frozen, providing a transient cache that can be used for multiple invocations. You can add extra code to check if the cache has the data that you stored. For more information on deployment size limits, see Lambda quotas. • Background processes or callbacks that were initiated by your Lambda function and did not complete when the function ended resume if Lambda reuses the execution environment. Make sure that any background processes or callbacks in your code are complete before the code exits. Cold starts and latency When Lambda receives a request to run a function via the Lambda API, the service first prepares an execution environment. During this initialization phase, the service downloads your code, starts the environment, and runs any initialization code outside of the main handler. Finally, Lambda runs the handler code.
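The /tmp guidance above translates to a check-before-recompute pattern: treat /tmp as a best-effort cache that may or may not survive between invocations. A small sketch, where the cached file name and the fetch function are hypothetical:

import os
import shutil

CACHE_PATH = "/tmp/model.bin"  # hypothetical cached artifact

def load_artifact(fetch_fn):
    # Reuse the cached copy when the execution environment was reused;
    # otherwise fetch it again into the transient /tmp cache.
    if not os.path.exists(CACHE_PATH):
        fetch_fn(CACHE_PATH)
    return CACHE_PATH

def handler(event, context):
    path = load_artifact(lambda dest: shutil.copy("/var/task/model.bin", dest))
    return {"artifact": path}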
In this diagram, the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”. You are not charged for this time,"} +{"global_id": 694, "doc_id": "lambda", "chunk_id": "20", "question_id": 3, "question": "What is the range of disk space provided by each execution environment in AWS Lambda?", "answer_span": "Each execution environment provides between 512 MB and 10,240 MB, in 1-MB increments, of disk space in the /tmp directory.", "chunk": "not assume that the execution environment will persist indefinitely. For more information, see Implement statelessness in functions. When the function is invoked again, Lambda thaws the environment for reuse. Reusing the execution environment has the following implications: • Objects declared outside of the function's handler method remain initialized, providing additional optimization when the function is invoked again. For example, if your Lambda function establishes a database connection, instead of reestablishing the connection, the original connection is used in subsequent invocations. We recommend adding logic in your code to check if a connection exists before creating a new one.
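The connection advice above is usually implemented with a module-scope variable plus a lazy getter, so a warm environment reuses the connection and a cold one creates it. A sketch, assuming a hypothetical connect() helper:

_connection = None  # survives across invocations in a warm environment

def get_connection(connect):
    # Create the connection only when the execution environment is fresh
    # (or the previous connection was dropped).
    global _connection
    if _connection is None:
        _connection = connect()
    return _connection

def handler(event, context):
    conn = get_connection(lambda: object())  # stand-in for a real DB connect
    return {"reused": conn is not None}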
• Each execution environment provides between 512 MB and 10,240 MB, in 1-MB increments, of disk space in the /tmp directory. The directory content remains when the execution environment is frozen, providing a transient cache that can be used for multiple invocations. You can add extra code to check if the cache has the data that you stored. For more information on deployment size limits, see Lambda quotas. • Background processes or callbacks that were initiated by your Lambda function and did not complete when the function ended resume if Lambda reuses the execution environment. Make sure that any background processes or callbacks in your code are complete before the code exits. Cold starts and latency When Lambda receives a request to run a function via the Lambda API, the service first prepares an execution environment. During this initialization phase, the service downloads your code, starts the environment, and runs any initialization code outside of the main handler. Finally, Lambda runs the handler code. In this diagram, the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”. You are not charged for this time,"} +{"global_id": 696, "doc_id": "lambda", "chunk_id": "21", "question_id": 1, "question": "What are the first two steps referred to as in the Lambda execution process?", "answer_span": "the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”", "chunk": "starts the environment, and runs any initialization code outside of the main handler. Finally, Lambda runs the handler code. In this diagram, the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”. You are not charged for this time, but it does add latency to your overall invocation duration. Running code 19 AWS Lambda Developer Guide After the invocation completes, the execution environment is frozen. To improve resource management and performance, Lambda retains the execution environment for a period of time. During this time, if another request arrives for the same function, Lambda can reuse the environment. This second request typically finishes more quickly, since the execution environment is already fully set up. This is called a “warm start”. Cold starts typically occur in under 1% of invocations. The duration of a cold start varies from under 100 ms to over 1 second. In general, cold starts are typically more common in development and test functions than production workloads. This is because development and test functions are usually invoked less frequently. Reducing cold starts with Provisioned Concurrency If you need predictable function start times for your workload, provisioned concurrency is the recommended solution to ensure the lowest possible latency. This feature pre-initializes execution environments, reducing cold starts. For example, a function with a provisioned concurrency of 6 has 6 execution environments prewarmed. Optimizing static initialization Static initialization happens before the handler code starts running in a function. This is the initialization code that you provide, that is outside of the main handler. This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services. 
Running code 20 AWS Lambda Developer Guide The following Python example shows importing, and configuring modules, and creating"} +{"global_id": 697, "doc_id": "lambda", "chunk_id": "21", "question_id": 2, "question": "What happens to the execution environment after the invocation completes?", "answer_span": "the execution environment is frozen", "chunk": "starts the environment, and runs any initialization code outside of the main handler. Finally, Lambda runs the handler code. In this diagram, the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”. You are not charged for this time, but it does add latency to your overall invocation duration. Running code 19 AWS Lambda Developer Guide After the invocation completes, the execution environment is frozen. To improve resource management and performance, Lambda retains the execution environment for a period of time. During this time, if another request arrives for the same function, Lambda can reuse the environment. This second request typically finishes more quickly, since the execution environment is already fully set up. This is called a “warm start”. Cold starts typically occur in under 1% of invocations. The duration of a cold start varies from under 100 ms to over 1 second. In general, cold starts are typically more common in development and test functions than production workloads. This is because development and test functions are usually invoked less frequently. Reducing cold starts with Provisioned Concurrency If you need predictable function start times for your workload, provisioned concurrency is the recommended solution to ensure the lowest possible latency. This feature pre-initializes execution environments, reducing cold starts. For example, a function with a provisioned concurrency of 6 has 6 execution environments prewarmed. Optimizing static initialization Static initialization happens before the handler code starts running in a function. This is the initialization code that you provide, that is outside of the main handler. This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services. Running code 20 AWS Lambda Developer Guide The following Python example shows importing, and configuring modules, and creating"} +{"global_id": 698, "doc_id": "lambda", "chunk_id": "21", "question_id": 3, "question": "What is the recommended solution to reduce cold starts for predictable function start times?", "answer_span": "provisioned concurrency is the recommended solution to ensure the lowest possible latency", "chunk": "starts the environment, and runs any initialization code outside of the main handler. Finally, Lambda runs the handler code. In this diagram, the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”. You are not charged for this time, but it does add latency to your overall invocation duration. Running code 19 AWS Lambda Developer Guide After the invocation completes, the execution environment is frozen. To improve resource management and performance, Lambda retains the execution environment for a period of time. During this time, if another request arrives for the same function, Lambda can reuse the environment. This second request typically finishes more quickly, since the execution environment is already fully set up. This is called a “warm start”. Cold starts typically occur in under 1% of invocations. 
The duration of a cold start varies from under 100 ms to over 1 second. In general, cold starts are typically more common in development and test functions than production workloads. This is because development and test functions are usually invoked less frequently. Reducing cold starts with Provisioned Concurrency If you need predictable function start times for your workload, provisioned concurrency is the recommended solution to ensure the lowest possible latency. This feature pre-initializes execution environments, reducing cold starts. For example, a function with a provisioned concurrency of 6 has 6 execution environments prewarmed. Optimizing static initialization Static initialization happens before the handler code starts running in a function. This is the initialization code that you provide, that is outside of the main handler. This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services. Running code 20 AWS Lambda Developer Guide The following Python example shows importing, and configuring modules, and creating"} +{"global_id": 699, "doc_id": "lambda", "chunk_id": "21", "question_id": 4, "question": "What does static initialization involve in the context of Lambda functions?", "answer_span": "Static initialization happens before the handler code starts running in a function", "chunk": "starts the environment, and runs any initialization code outside of the main handler. Finally, Lambda runs the handler code. In this diagram, the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”. You are not charged for this time, but it does add latency to your overall invocation duration. Running code 19 AWS Lambda Developer Guide After the invocation completes, the execution environment is frozen. To improve resource management and performance, Lambda retains the execution environment for a period of time. During this time, if another request arrives for the same function, Lambda can reuse the environment. This second request typically finishes more quickly, since the execution environment is already fully set up. This is called a “warm start”. Cold starts typically occur in under 1% of invocations. The duration of a cold start varies from under 100 ms to over 1 second. In general, cold starts are typically more common in development and test functions than production workloads. This is because development and test functions are usually invoked less frequently. Reducing cold starts with Provisioned Concurrency If you need predictable function start times for your workload, provisioned concurrency is the recommended solution to ensure the lowest possible latency. This feature pre-initializes execution environments, reducing cold starts. For example, a function with a provisioned concurrency of 6 has 6 execution environments prewarmed. Optimizing static initialization Static initialization happens before the handler code starts running in a function. This is the initialization code that you provide, that is outside of the main handler. This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services. 
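Provisioned concurrency is configured per function version or alias. A boto3 sketch matching the "provisioned concurrency of 6" example above, with the function name and alias as placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Pre-initialize six execution environments for the "live" alias.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",        # placeholder
    Qualifier="live",                  # version number or alias
    ProvisionedConcurrentExecutions=6,
)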
The following Python example shows importing and configuring modules, and creating the Amazon S3 client during the initialization phase, before the lambda_handler function runs during invoke. import os import json import cv2 import logging import boto3 s3 = boto3.client('s3') logger = logging.getLogger() logger.setLevel(logging.INFO) def lambda_handler(event, context): # Handler logic... The largest contributor of latency before function execution comes from initialization code. This code runs when a new execution environment is created for the first time. The initialization code is not run again if an invocation uses a warm execution environment."} +{"global_id": 700, "doc_id": "lambda", "chunk_id": "22", "question_id": 1, "question": "What is the purpose of the initialization code in AWS Lambda?", "answer_span": "This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services.", "chunk": "the initialization code that you provide, that is outside of the main handler. This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services. The following Python example shows importing and configuring modules, and creating the Amazon S3 client during the initialization phase, before the lambda_handler function runs during invoke. import os import json import cv2 import logging import boto3 s3 = boto3.client('s3') logger = logging.getLogger() logger.setLevel(logging.INFO) def lambda_handler(event, context): # Handler logic... The largest contributor of latency before function execution comes from initialization code. This code runs when a new execution environment is created for the first time. The initialization code is not run again if an invocation uses a warm execution environment.
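Reflowed, the flattened Python example above reads as follows; the handler body is elided in the source, so a no-op placeholder stands in for it:

import os
import json
import cv2
import logging
import boto3

# Static initialization: runs once per execution environment, before invoke.
s3 = boto3.client('s3')
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Handler logic...
    pass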
Factors that affect initialization code latency include: • The size of the function package, in terms of imported libraries and dependencies, and Lambda layers. • The amount of code and initialization work. • The performance of libraries and other services in setting up connections and other resources. There are a number of steps that developers can take to optimize static initialization latency. If a function has many objects and connections, you may be able to rearchitect a single function into multiple, specialized functions. These are individually smaller and each have less initialization code. It’s important that functions only import the libraries and dependencies that they need. For example, if you only use Amazon DynamoDB in the AWS SDK, you can require an individual service instead of the entire SDK. Compare the following three examples: // Instead of const AWS = require('aws-sdk'), use: const DynamoDB = require('aws-sdk/clients/dynamodb') Running code 21"} +{"global_id": 702, "doc_id": "lambda", "chunk_id": "22", "question_id": 3, "question": "What factors affect the latency of initialization code?", "answer_span": "Factors that affect initialization code latency include: • The size of the function package, in terms of imported libraries and dependencies, and Lambda layers.", "chunk": "the initialization code that you provide, that is outside of the main handler. This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services. Running code 20 AWS Lambda Developer Guide The following Python example shows importing, and configuring modules, and creating the Amazon S3 client during the initialization phase, before the lambda_handler function runs during invoke. import os import json import cv2 import logging import boto3 s3 = boto3.client('s3') logger = logging.getLogger() logger.setLevel(logging.INFO) def lambda_handler(event, context): # Handler logic... The largest contributor of latency before function execution comes from initialization code. This code runs when a new execution environment is created for the first time. The initialization code is not run again if an invocation uses a warm execution environment. Factors that affect initialization code latency include: • The size of the function package, in terms of imported libraries and dependencies, and Lambda layers. • The amount of code and initialization work. • The performance of libraries and other services in setting up connections and other resources. There are a number of steps that developers can take to optimize static initialization latency. If a function has many objects and connections, you may be able to rearchitect a single function into multiple, specialized functions. These are individually smaller and each have less initialization code. It’s important that functions only import the libraries and dependencies that they need. For example, if you only use Amazon DynamoDB in the AWS SDK, you can require an individual service instead of the entire SDK. 
Compare the following three examples: // Instead of const AWS = require('aws-sdk'), use: const DynamoDB = require('aws-sdk/clients/dynamodb')"} +{"global_id": 703, "doc_id": "lambda", "chunk_id": "22", "question_id": 4, "question": "What can developers do to optimize static initialization latency?", "answer_span": "There are a number of steps that developers can take to optimize static initialization latency.", "chunk": "the initialization code that you provide, that is outside of the main handler. This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services. The following Python example shows importing and configuring modules, and creating the Amazon S3 client during the initialization phase, before the lambda_handler function runs during invoke. import os import json import cv2 import logging import boto3 s3 = boto3.client('s3') logger = logging.getLogger() logger.setLevel(logging.INFO) def lambda_handler(event, context): # Handler logic... The largest contributor of latency before function execution comes from initialization code. This code runs when a new execution environment is created for the first time. The initialization code is not run again if an invocation uses a warm execution environment. Factors that affect initialization code latency include: • The size of the function package, in terms of imported libraries and dependencies, and Lambda layers. • The amount of code and initialization work. • The performance of libraries and other services in setting up connections and other resources. There are a number of steps that developers can take to optimize static initialization latency. If a function has many objects and connections, you may be able to rearchitect a single function into multiple, specialized functions. These are individually smaller and each have less initialization code. It’s important that functions only import the libraries and dependencies that they need. For example, if you only use Amazon DynamoDB in the AWS SDK, you can require an individual service instead of the entire SDK. Compare the following three examples: // Instead of const AWS = require('aws-sdk'), use: const DynamoDB = require('aws-sdk/clients/dynamodb')"} +{"global_id": 704, "doc_id": "lambda", "chunk_id": "23", "question_id": 1, "question": "What is the alternative to using the entire SDK?", "answer_span": "an individual service instead of the entire SDK.", "chunk": "an individual service instead of the entire SDK. Compare the following three examples: // Instead of const AWS = require('aws-sdk'), use: const DynamoDB = require('aws-sdk/clients/dynamodb')"} +{"global_id": 705, "doc_id": "lambda", "chunk_id": "23", "question_id": 2, "question": "What is the correct way to require DynamoDB?", "answer_span": "const DynamoDB = require('aws-sdk/clients/dynamodb')", "chunk": "an individual service instead of the entire SDK.
Compare the following three examples: // Instead of const AWS = require('aws-sdk'), use: const DynamoDB = require('aws-sdk/clients/dynamodb')"} +{"global_id": 706, "doc_id": "lambda", "chunk_id": "23", "question_id": 3, "question": "What should be used instead of 'const AWS = require('aws-sdk')'?", "answer_span": "use: const DynamoDB = require('aws-sdk/clients/dynamodb')", "chunk": "an individual service instead of the entire SDK. Compare the following three examples: // Instead of const AWS = require('aws-sdk'), use: const DynamoDB = require('aws-sdk/clients/dynamodb')"} +{"global_id": 707, "doc_id": "lambda", "chunk_id": "23", "question_id": 4, "question": "What is the context of the examples provided?", "answer_span": "Compare the following three examples:", "chunk": "an individual service instead of the entire SDK. Compare the following three examples: // Instead of const AWS = require('aws-sdk'), use: const DynamoDB = require('aws-sdk/clients/dynamodb')"}
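The same import-only-what-you-need idea carries over to Python: module import cost is paid during the init phase and can be measured directly, and heavy modules that only some code paths use can be imported lazily inside the handler. A rough sketch of both ideas:

import time

start = time.perf_counter()
import boto3  # import cost is paid once, during the init phase
print(f"boto3 import: {(time.perf_counter() - start) * 1000:.1f} ms")

def handler(event, context):
    if event.get("needs_image_processing"):
        import cv2  # heavy dependency, loaded only on the paths that need it
        return {"opencv": cv2.__version__}
    return {"opencv": None}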