Amazon Aurora Migration Handbook
July 2020

This paper has been archived. For the latest Amazon Aurora migration content, refer to: https://d1.awsstatic.com/whitepapers/RDS/Migrating your databases to Amazon Aurora.pdf

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

© 2020 Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
Database Migration Considerations
Migration Phases
Features and Compatibility
Performance
Cost
Availability and Durability
Planning and Testing a Database Migration
Homogeneous Migrations
Summary of Available Migration Methods
Migrating Large Databases to Amazon Aurora
Partition and Shard Consolidation on Amazon Aurora
MySQL and MySQL-Compatible Migration Options at a Glance
Migrating from Amazon RDS for MySQL
Migrating from MySQL-Compatible Databases
Heterogeneous Migrations
Schema Migration
Data Migration
Example Migration Scenarios
Self-Managed Homogeneous Migrations
Multi-Threaded Migration Using mydumper and myloader
Heterogeneous Migrations
Testing and Cutover
Migration Testing
Cutover
Troubleshooting
Troubleshooting MySQL-Specific Issues
Conclusion
Contributors
Further Reading
Document Revisions

Abstract
This paper outlines the best practices for planning, executing, and troubleshooting database migrations from MySQL-compatible and non-MySQL-compatible database products to Amazon Aurora. It also teaches Amazon Aurora database administrators how to diagnose and troubleshoot common migration and replication errors.

Introduction
For decades, traditional relational databases have been the primary choice for data storage and persistence. These database systems continue to rely on monolithic architectures and were not designed to take advantage of cloud infrastructure. These monolithic architectures present many challenges, particularly in areas such as cost, flexibility, and availability.
To address these challenges, AWS redesigned the relational database for cloud infrastructure and introduced Amazon Aurora.

Amazon Aurora is a MySQL-compatible relational database engine that combines the speed, availability, and security of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Aurora provides up to five times better performance than MySQL and performance comparable to high-end commercial databases. Amazon Aurora is priced at one-tenth the cost of commercial engines.

Amazon Aurora is available through the Amazon Relational Database Service (Amazon RDS) platform. Like other Amazon RDS databases, Aurora is a fully managed database service. With the Amazon RDS platform, most database management tasks such as hardware provisioning, software patching, setup, configuration, monitoring, and backup are completely automated.

Amazon Aurora is built for mission-critical workloads and is highly available by default. An Aurora database cluster spans multiple Availability Zones (AZs) in a region, providing out-of-the-box durability and fault tolerance for your data across physical data centers. An Availability Zone is composed of one or more highly available data centers operated by Amazon. AZs are isolated from each other and are connected through low-latency links. Each segment of your database volume is replicated six times across these AZs.

Aurora cluster volumes automatically grow as the amount of data in your database increases, with no performance or availability impact, so there is no need to estimate and provision large amounts of database storage ahead of time. An Aurora cluster volume can grow to a maximum size of 64 terabytes (TB). You are only charged for the space that you use in an Aurora cluster volume.

Aurora's automated backup capability supports point-in-time recovery of your data, enabling you to restore your database to any second during your retention period, up to the last five minutes. Automated backups are stored in Amazon Simple Storage Service (Amazon S3), which is designed for 99.999999999% durability. Amazon Aurora backups are automatic, incremental, and continuous, and have no impact on database performance.

For applications that need read-only replicas, you can create up to 15 Aurora Replicas per Aurora database with very low replica lag. These replicas share the same underlying storage as the source instance, lowering costs and avoiding the need to perform writes at the replica nodes.

Amazon Aurora is highly secure and allows you to encrypt your databases using keys that you create and control through AWS Key Management Service (AWS KMS). On a database instance running with Amazon Aurora encryption, data stored at rest in the underlying storage is encrypted, as are the automated backups, snapshots, and replicas in the same cluster. Amazon Aurora uses SSL (AES-256) to secure data in transit. For a complete list of Aurora features, see Amazon Aurora.

Given the rich feature set and cost-effectiveness of Amazon Aurora, it is increasingly viewed as the go-to database for mission-critical applications.
Database Migration Considerations
A database represents a critical component in the architecture of most applications. Migrating the database to a new platform is a significant event in an application's lifecycle and may have an impact on application functionality, performance, and reliability. You should take a few important considerations into account before embarking on your first migration project to Amazon Aurora.

Migrations are among the most time-consuming and critical tasks handled by database administrators. Although the task has become easier with the advent of managed migration services such as AWS Database Migration Service, large-scale database migrations still require adequate planning and execution to meet strict compatibility and performance requirements.

Migration Phases
Because database migrations tend to be complex, we advocate taking a phased, iterative approach.

Figure 1: Migration phases

This paper examines the following major contributors to the success of every database migration project:
• Factors that justify the migration to Amazon Aurora, such as compatibility, performance, cost, and high availability and durability
• Best practices for choosing the optimal migration method
• Best practices for planning and executing a migration
• Migration troubleshooting hints

This section discusses important considerations that apply to most database migration projects. For an extended discussion of related topics, see the Amazon Web Services (AWS) whitepaper Migrating Your Databases to Amazon Aurora.

Features and Compatibility
Although most applications can be architected to work with many relational database engines, you should make sure that your application works with Amazon Aurora.

Amazon Aurora is designed to be wire-compatible with MySQL 5.5, 5.6, 5.7, and 8.0. Therefore, most of the code, applications, drivers, and tools that are used today with MySQL databases can be used with Aurora with little or no change. However, certain MySQL features, like the MyISAM storage engine, are not available with Amazon Aurora. Also, due to the managed nature of the Aurora service, SSH access to database nodes is restricted, which may affect your ability to install third-party tools or plugins on the database host. For more details, see Aurora on Amazon RDS in the Amazon Relational Database Service (Amazon RDS) User Guide.
Performance
Performance is often the key motivation behind database migrations. However, deploying your database on Amazon Aurora can be beneficial even if your applications don't have performance issues. For example, Amazon Aurora scalability features can greatly reduce the amount of engineering effort that is required to prepare your database platform for future traffic growth.

You should include benchmarks and performance evaluations in every migration project. Therefore, many successful database migration projects start with performance evaluations of the new database platform. Although the RDS Aurora Performance Assessment Benchmarking paper gives you a decent idea of overall database performance, these benchmarks do not emulate the data access patterns of your applications. For more useful results, test the database performance for time-sensitive workloads by running your queries (or a subset of your queries) on the new platform directly. Consider these strategies:
• If your current database is MySQL, migrate to Amazon Aurora with downtime and performance-test your database with a test or staging version of your application, or by replaying the production workload.
• If you are on a non-MySQL-compliant engine, you can selectively copy the busiest tables to Amazon Aurora and test your queries against those tables. This gives you a good starting point. Of course, testing after complete data migration will provide a full picture of the real-world performance of your application on the new platform.

Amazon Aurora delivers performance comparable to commercial engines and a significant improvement over MySQL performance. It does this by tightly integrating the database engine with an SSD-based virtualized storage layer designed for database workloads. This reduces writes to the storage system, minimizes lock contention, and eliminates delays created by database process threads. Our tests with SysBench on r3.8xlarge instances show that Amazon Aurora delivers over 585,000 reads per second and 107,000 writes per second, five times higher than MySQL running the same benchmark on the same hardware.

One area where Amazon Aurora significantly improves upon traditional MySQL is highly concurrent workloads. To maximize your workload's throughput on Amazon Aurora, we recommend architecting your applications to drive a large number of concurrent queries. A synthetic benchmark sketch follows.
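If you want a synthetic baseline before replaying application queries, a generic read/write benchmark can be run against the Aurora cluster with sysbench. The following is a minimal sketch only, assuming sysbench 1.0 or later; the endpoint, credentials, table count, thread count, and duration are placeholder values, not figures from this paper.

# Load a synthetic data set into the target Aurora cluster.
sysbench oltp_read_write \
  --mysql-host=<aurora-cluster-endpoint> \
  --mysql-user=<user> --mysql-password=<password> \
  --mysql-db=sbtest --tables=10 --table-size=1000000 \
  prepare

# Run a mixed read/write workload with many concurrent connections.
sysbench oltp_read_write \
  --mysql-host=<aurora-cluster-endpoint> \
  --mysql-user=<user> --mysql-password=<password> \
  --mysql-db=sbtest --tables=10 --table-size=1000000 \
  --threads=64 --time=300 \
  run

Synthetic numbers such as these are useful for comparing instance classes, but replaying your own queries, as described above, remains the better indicator of real-world performance.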
Cost
Amazon Aurora provides consistent high performance together with the security, availability, and reliability of a commercial database, at one-tenth the cost. Owning and running databases comes with associated costs. Before planning a database migration, an analysis of the total cost of ownership (TCO) of the new database platform is imperative. Migration to a new database platform should ideally lower the total cost of ownership while providing your applications with similar or better features.

If you are running an open-source database engine (MySQL, Postgres), your costs are largely related to hardware, server management, and database management activities. However, if you are running a commercial database engine (Oracle, SQL Server, DB2, etc.), a significant portion of your cost is database licensing. Amazon Aurora can even be more cost-efficient than open-source databases because its high scalability helps you reduce the number of database instances that are required to handle the same workload. For more details, see the Amazon RDS for Aurora Pricing page.

Availability and Durability
High availability and disaster recovery are important considerations for databases. Your application may already have very strict recovery time objective (RTO) and recovery point objective (RPO) requirements. Amazon Aurora can help you meet or exceed your availability goals with the following components:

1. Read replicas: Increase read throughput to support high-volume application requests by creating up to 15 Aurora Replicas per database. Amazon Aurora Replicas share the same underlying storage as the source instance, lowering costs and avoiding the need to perform writes at the replica nodes. This frees up more processing power to serve read requests and reduces the replica lag time, often down to single-digit milliseconds. Aurora provides a reader endpoint so the application can connect without having to keep track of replicas as they are added and removed. Aurora also supports auto scaling, where it automatically adds and removes replicas in response to changes in performance metrics that you specify. Aurora supports cross-region read replicas. Cross-region replicas provide fast local reads to your users, and each region can have an additional 15 Aurora Replicas to further scale local reads.

2. Global Database: You can choose between Global Database, which provides the best replication performance, and traditional binlog-based replication. You can also set up your own binlog replication with external MySQL databases. Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages.

3. Multi-AZ: Aurora stores copies of the data in a DB cluster across multiple Availability Zones in a single AWS Region, regardless of whether the instances in the DB cluster span multiple Availability Zones. For more information on Aurora, see Managing an Amazon Aurora DB Cluster. When data is written to the primary DB instance, Aurora synchronously replicates the data across Availability Zones to six storage nodes associated with your cluster volume. Doing so provides data redundancy, eliminates I/O freezes, and minimizes latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance and help protect your databases against failure and Availability Zone disruption.

For more information about durability and availability features in Amazon Aurora, see Aurora on Amazon RDS in the Amazon RDS User Guide.

Planning and Testing a Database Migration
After you determine that Amazon Aurora is the right fit for your application, the next step is to decide on a migration approach and create a database migration plan. Here are the suggested high-level steps:

1. Review the available migration techniques described in this document and choose one that satisfies your requirements.
2. Prepare a migration plan in the form of a step-by-step checklist. A checklist ensures that all migration steps are executed in the correct order and that the migration process flow can be controlled (e.g., suspended or resumed) without the risk of important steps being missed.
3. Prepare a shadow checklist with rollback procedures. Ideally, you should be able to roll the migration back to a known consistent state from any point in the migration checklist.
4. Use the checklist to perform a test migration, and take note of the time required to complete each step. If any missing steps are identified, add them to the checklist. If any issues are identified during the test migration, address them and rerun the test migration.
5. Test all rollback procedures. If any rollback procedure has not been tested successfully, assume that it will not work.
6. After you complete the test migration and become fully comfortable with the migration plan, execute the migration.
Homogeneous Migrations
Amazon Aurora was designed as a drop-in replacement for MySQL 5.6. It offers a wide range of options for homogeneous migrations (e.g., migrations from MySQL and MySQL-compatible databases).

Summary of Available Migration Methods
This section lists common migration sources and the migration methods available to them, in order of preference. Detailed descriptions, step-by-step instructions, and tips for advanced migration scenarios are available in subsequent sections. A common and widely adopted method is to build an Aurora read replica that is asynchronously replicated from the source master, whether an RDS or a self-managed MySQL database.

Figure 2: Common migration sources and migration methods for Amazon Aurora

Amazon RDS Snapshot Migration
Compatible sources:
• Amazon RDS for MySQL 5.6
• Amazon RDS for MySQL 5.1 and 5.5 (after upgrading to RDS for MySQL 5.6)

Feature highlights:
• Managed, point-and-click service available through the AWS Management Console
• Best migration speed and ease of use of all migration methods
• Can be used with binary log replication for near-zero migration downtime

For details, see Migrating Data from a MySQL DB Instance to an Amazon Aurora DB Cluster in the Amazon RDS User Guide.

Percona XtraBackup
Compatible sources and limitations:
• On-premises or self-managed MySQL 5.6 databases, including databases running on Amazon EC2
• You can't restore into an existing RDS instance using this method
• The total size is limited to 6 TB
• User accounts, functions, and stored procedures are not imported automatically

Feature highlights:
• Managed backup ingestion from Percona XtraBackup files stored in an Amazon Simple Storage Service (Amazon S3) bucket
• High performance
• Can be used with binary log replication for near-zero migration downtime

For details, see Migrating Data from MySQL by Using an Amazon S3 Bucket in the Amazon RDS User Guide.

Self-Managed Export/Import
Compatible sources:
• MySQL and MySQL-compatible databases such as MySQL, MariaDB, or Percona Server, including managed servers such as Amazon RDS for MySQL or MariaDB
• Non-MySQL-compatible databases

DMS Migration
Compatible sources:
• MySQL-compatible and non-MySQL-compatible databases

Feature highlights:
• Supports heterogeneous and homogeneous migrations
• Managed, point-and-click data migration service available through the AWS Management Console
• Schemas must be migrated separately
• Supports CDC replication for near-zero migration downtime

For details, see What Is AWS Database Migration Service? in the AWS DMS User Guide.
For a heterogeneous migration, where you are migrating from a database engine other than MySQL to a MySQL database, AWS DMS is almost always the best migration tool to use. But for a homogeneous migration, where you are migrating from a MySQL database to a MySQL database, native tools can be more effective.

Using Any MySQL-Compatible Database as a Source for AWS DMS
Before you begin to work with a MySQL database as a source for AWS DMS, make sure that you have met the following prerequisites. These prerequisites apply to both self-managed and Amazon-managed sources.

You must have an account for AWS DMS that has the replication administrator role. The role needs the following privileges:
• Replication Client: This privilege is required for change data capture (CDC) tasks only. In other words, full-load-only tasks don't require this privilege.
• Replication Slave: This privilege is required for change data capture (CDC) tasks only. In other words, full-load-only tasks don't require this privilege.
• Super: This privilege is required only in MySQL versions before 5.6.6.

A sketch of the corresponding GRANT statements appears below.
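For reference, the privileges listed above could be granted with statements along the following lines. This is a sketch only: the user name, host pattern, and password are hypothetical, and the SELECT grant (which AWS DMS also needs in order to read the tables it migrates) is an assumption beyond the list above. Scope the grants to the schemas you actually migrate where possible.

-- Hypothetical replication user for AWS DMS; adjust the name, host, and password.
CREATE USER 'dms_user'@'%' IDENTIFIED BY 'choose_a_strong_password';

-- Read access to the objects being migrated (assumed requirement for the full load).
GRANT SELECT ON *.* TO 'dms_user'@'%';

-- Required for CDC tasks only.
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'dms_user'@'%';

-- Required only on MySQL versions earlier than 5.6.6.
-- GRANT SUPER ON *.* TO 'dms_user'@'%';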
DMS highlights for non-MySQL-compatible sources:
• Requires manual schema conversion from the source database format into a MySQL-compatible format
• Data migration can be performed manually using a universal data format such as comma-separated values (CSV)
• Change data capture (CDC) replication might be possible with third-party tools for near-zero migration downtime

Migrating Large Databases to Amazon Aurora
Migration of large datasets presents unique challenges in every database migration project. Many successful large database migration projects use a combination of the following strategies:
• Migration with continuous replication: Large databases typically have extended downtime requirements while moving data from source to target. To reduce the downtime, you can first load baseline data from source to target and then enable replication (using MySQL native tools, AWS DMS, or third-party tools) for changes to catch up.
• Copy static tables first: If your database relies on large static tables with reference data, you may migrate these large tables to the target database before migrating your active dataset. You can leverage AWS DMS to copy tables selectively, or export and import these tables manually.
• Multiphase migration: Migration of a large database with thousands of tables can be broken down into multiple phases. For example, you may move a set of tables with no cross-join queries every weekend until the source database is fully migrated to the target database. Note that in order to achieve this, you need to make changes in your application to connect to two databases simultaneously while your dataset is on two distinct nodes. Although this is not a common migration pattern, it is an option nonetheless.
• Database cleanup: Many large databases contain data and tables that remain unused. In many cases, developers and DBAs keep backup copies of tables in the same database, or they simply forget to drop unused tables. Whatever the reason, a database migration project provides an opportunity to clean up the existing database before the migration. If some tables are not being used, you might either drop them or archive them to another database. You might also delete old data from large tables or archive that data to flat files.

Partition and Shard Consolidation on Amazon Aurora
If you are running multiple shards or functional partitions of your database to achieve high performance, you have an opportunity to consolidate these partitions or shards on a single Aurora database. A single Amazon Aurora instance can scale up to 64 TB, supports thousands of tables, and supports a significantly higher number of reads and writes than a standard MySQL database. Consolidating these partitions on a single Aurora instance not only reduces the total cost of ownership and simplifies database management, but it also significantly improves the performance of cross-partition queries.

• Functional partitions: Functional partitioning means dedicating different nodes to different tasks. For example, in an e-commerce application, you might have one database node serving product catalog data and another database node capturing and processing orders. As a result, these partitions usually have distinct, non-overlapping schemas.
  o Consolidation strategy: Migrate each functional partition as a distinct schema to your target Aurora instance. If your source database is MySQL compliant, use native MySQL tools to migrate the schema, and then use AWS DMS to migrate the data, either one time or continuously using replication. If your source database is non-MySQL compliant, use the AWS Schema Conversion Tool to migrate the schemas to Aurora, and use AWS DMS for a one-time load or continuous replication.
• Data shards: If you have the same schema with distinct sets of data across multiple nodes, you are leveraging database sharding. For example, a high-traffic blogging service may shard user activity and data across multiple database shards while keeping the same table schema.
  o Consolidation strategy: Since all shards share the same database schema, you only need to create the target schema once. If you are using a MySQL-compliant database, use native tools to migrate the database schema to Aurora. If you are using a non-MySQL database, use the AWS Schema Conversion Tool to migrate the database schema to Aurora. Once the database schema has been migrated, it is best to stop writes to the database shards and use native tools or an AWS DMS one-time data load to migrate an individual shard to Aurora. If writes to the application cannot be stopped for an extended period, you might still use AWS DMS with replication, but only after proper planning and testing.

MySQL and MySQL-Compatible Migration Options at a Glance

Source database type: Amazon RDS MySQL
  Migration with downtime:
    Option 1: RDS snapshot migration
    Option 2: Manual migration using native tools*
    Option 3: Schema migration using native tools and data load using AWS DMS
  Near-zero downtime migration:
    Option 1: Migration using native tools + binlog replication
    Option 2: RDS snapshot migration + binlog replication
    Option 3: Schema migration using native tools + AWS DMS for data movement

Source database type: MySQL on Amazon EC2 or on-premises
  Migration with downtime:
    Option 1: Schema migration with native tools + AWS DMS for data load
  Near-zero downtime migration:
    Option 1: Schema migration using native tools + AWS DMS to move data

Source database type: Oracle/SQL Server
  Migration with downtime:
    Option 1: AWS Schema Conversion Tool + AWS DMS (recommended)
    Option 2: Manual or third-party tool for schema conversion + manual or third-party data load in target
  Near-zero downtime migration:
    Option 1: AWS Schema Conversion Tool + AWS DMS (recommended)
    Option 2: Manual or third-party tool for schema conversion
Migrating from Amazon RDS for MySQL
If you are migrating from an RDS MySQL 5.6 database (DB) instance, the recommended approach is to use the snapshot migration feature. Snapshot migration is a fully managed, point-and-click feature that is available through the AWS Management Console. You can use it to migrate an RDS MySQL 5.6 DB instance snapshot into a new Aurora DB cluster. It is the fastest and easiest to use of all the migration methods described in this document. For more information about the snapshot migration feature, see Migrating Data to an Amazon Aurora DB Cluster in the Amazon RDS User Guide.

This section provides ideas for projects that use the snapshot migration feature. The list-style layout in our example instructions can help you prepare your own migration checklist.

Estimating Space Requirements for Snapshot Migration
When you migrate a snapshot of a MySQL DB instance to an Aurora DB cluster, Aurora uses an Amazon Elastic Block Store (Amazon EBS) volume to format the data from the snapshot before migrating it. There are some cases where additional space is needed to format the data for migration. The two features that can potentially cause space issues during migration are MyISAM tables and the ROW_FORMAT=COMPRESSED option. If you are not using either of these features in your source database, then you can skip this section, because you should not have space issues. The query sketched below can help you check whether either feature is in use.
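As a quick check, a query along these lines, run against the source database, lists the tables that use the MyISAM engine or a compressed row format together with their approximate size. This is an illustrative sketch rather than part of the documented procedure, and the reported sizes are approximations.

-- List tables stored as MyISAM or with a compressed row format,
-- excluding the MySQL system schemas, with an approximate size in GB.
SELECT table_schema,
       table_name,
       engine,
       row_format,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS approx_size_gb
FROM   information_schema.tables
WHERE  table_schema NOT IN ('mysql', 'information_schema', 'performance_schema')
  AND  (engine = 'MyISAM' OR row_format = 'Compressed');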
During migration, MyISAM tables are converted to InnoDB and any compressed tables are uncompressed. Consequently, there must be adequate room for the additional copies of any such tables. The size of the migration volume is based on the allocated size of the source MySQL database that the snapshot was made from. Therefore, if you have MyISAM or compressed tables that make up a small percentage of the overall database size and there is available space in the original database, then the migration should succeed without encountering any space issues. However, if the original database would not have enough room to store a copy of the converted MyISAM tables as well as another (uncompressed) copy of the compressed tables, then the migration volume will not be big enough. In this situation, you would need to modify the source Amazon RDS MySQL database to increase the database size allocation to make room for the additional copies of these tables, take a new snapshot of the database, and then migrate the new snapshot.

When migrating data into your DB cluster, observe the following guidelines and limitations:
• Although Amazon Aurora supports up to 64 TB of storage, the process of migrating a snapshot into an Aurora DB cluster is limited by the size of the Amazon EBS volume of the snapshot, and therefore is limited to a maximum size of 6 TB.
• Non-MyISAM tables in the source database can be up to 6 TB in size. However, due to additional space requirements during conversion, make sure that none of the MyISAM and compressed tables being migrated from your MySQL DB instance exceed 3 TB in size. For more information, see Migrating Data from an Amazon RDS MySQL DB Instance to an Amazon Aurora MySQL DB Cluster.

You might want to modify your database schema (convert MyISAM tables to InnoDB and remove ROW_FORMAT=COMPRESSED) prior to migrating it into Amazon Aurora. This can be helpful in the following cases:
• You want to speed up the migration process.
• You are unsure of how much space you need to provision.
• You have attempted to migrate your data and the migration has failed due to a lack of provisioned space.

Make sure that you are not making these changes in your production Amazon RDS MySQL database, but rather on a database instance that was restored from your production snapshot. For more details on doing this, see Reducing the Amount of Space Required to Migrate Data into Amazon Aurora in the Amazon RDS User Guide.

The naming conventions used in this section are as follows:
• Source RDS DB instance refers to the RDS MySQL 5.6 DB instance that you are migrating from.
• Target Aurora DB cluster refers to the Aurora DB cluster that you are migrating to.

Migrating with Downtime
When migration downtime is acceptable, you can use the following high-level procedure to migrate an RDS MySQL 5.6 DB instance to Amazon Aurora (an AWS CLI sketch of steps 2 through 4 follows this list):

1. Stop all write activity against the source RDS DB instance. Database downtime begins here.
2. Take a snapshot of the source RDS DB instance.
3. Wait until the snapshot shows as Available in the AWS Management Console.
4. Use the AWS Management Console to migrate the snapshot to a new Aurora DB cluster. For instructions, see Migrating Data to an Amazon Aurora DB Cluster in the Amazon RDS User Guide.
5. Wait until the snapshot migration finishes and the target Aurora DB cluster enters the Available state. The time to migrate a snapshot primarily depends on the size of the database. You can determine it ahead of the production migration by running a test migration.
6. Configure applications to connect to the newly created target Aurora DB cluster instead of the source RDS DB instance.
7. Resume write activity against the target Aurora DB cluster. Database downtime ends here.
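If you prefer to script steps 2 through 4 rather than use the console, the following AWS CLI sketch shows one possible sequence. The instance, snapshot, and cluster identifiers and the instance class are placeholders, and additional parameters (engine version, VPC settings, parameter groups) will usually be required in a real environment.

# Step 2: take a snapshot of the source RDS MySQL 5.6 DB instance.
aws rds create-db-snapshot \
    --db-instance-identifier source-mysql56-instance \
    --db-snapshot-identifier pre-migration-snapshot

# Step 3: wait until the snapshot is available.
aws rds wait db-snapshot-available \
    --db-snapshot-identifier pre-migration-snapshot

# Step 4: migrate the snapshot into a new Aurora DB cluster.
aws rds restore-db-cluster-from-snapshot \
    --db-cluster-identifier target-aurora-cluster \
    --snapshot-identifier pre-migration-snapshot \
    --engine aurora

# Add a primary (writer) instance to the new cluster so it can accept connections.
aws rds create-db-instance \
    --db-cluster-identifier target-aurora-cluster \
    --db-instance-identifier target-aurora-instance-1 \
    --db-instance-class db.r3.large \
    --engine aurora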
Migrating with Near Zero Downtime
If prolonged migration downtime is not acceptable, you can perform a near-zero downtime migration through a combination of snapshot migration and binary log replication. Perform the high-level procedure as follows:

1. On the source RDS DB instance, ensure that automated backups are enabled.
2. Create a Read Replica of the source RDS DB instance.
3. After you create the Read Replica, manually stop replication and obtain the binary log coordinates.
4. Take a snapshot of the Read Replica.
5. Use the AWS Management Console to migrate the Read Replica snapshot to a new Aurora DB cluster.
6. Wait until the snapshot migration finishes and the target Aurora DB cluster enters the Available state.
7. On the target Aurora DB cluster, configure binary log replication from the source RDS DB instance using the binary log coordinates that you obtained in step 3 (a sketch of this step appears after this section).
8. Wait for the replication to catch up, that is, for the replication lag to reach zero.
9. Begin cutover by stopping all write activity against the source RDS DB instance. Application downtime begins here.
10. Verify that there is no outstanding replication lag, and then configure applications to connect to the newly created target Aurora DB cluster instead of the source RDS DB instance.
11. Complete cutover by resuming write activity. Application downtime ends here.
12. Terminate replication between the source RDS DB instance and the target Aurora DB cluster.

For a detailed description of this procedure, see Replication Between Aurora and MySQL or Between Aurora and Another Aurora DB Cluster in the Amazon RDS User Guide.

If you don't want to set up replication manually, you can also create an Aurora Read Replica from a source RDS MySQL 5.6 DB instance by using the RDS Management Console. The RDS automation does the following:
1. Creates a snapshot of the source RDS DB instance.
2. Migrates the snapshot to a new Aurora DB cluster.
3. Establishes binary log replication between the source RDS DB instance and the target Aurora DB cluster.

After replication is established, you can complete the cutover steps as described previously.
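On Amazon Aurora MySQL, the external replication source for step 7 is typically configured with the RDS-provided stored procedures rather than a CHANGE MASTER TO statement. The following sketch assumes that approach; the host name, replication user and password, and the binary log file name and position are placeholders for the values captured in step 3.

-- Run on the target Aurora DB cluster (primary instance) as the master user.
CALL mysql.rds_set_external_master (
  'source-mysql56-instance.xxxxx.us-east-1.rds.amazonaws.com',  -- source host
  3306,                                                          -- source port
  'repl_user',                                                   -- replication user
  'repl_password',                                               -- replication password
  'mysql-bin-changelog.000123',                                  -- binary log file from step 3
  456789,                                                        -- binary log position from step 3
  0                                                              -- 0 = do not use SSL
);

CALL mysql.rds_start_replication;

-- Check replication health and lag (step 8): Seconds_Behind_Master should reach 0.
SHOW SLAVE STATUS\G

-- Step 12, after cutover: stop and remove the replication configuration.
-- CALL mysql.rds_stop_replication;
-- CALL mysql.rds_reset_external_master;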
on Amazon Elastic Compute Cloud (Amazon EC2) or on premises There are many techniques you can use to migrate your MySQL compatible database workload to Amazon Aurora This section describes various migration options to help you choose the most optimal solution for your use case Percona XtraBackup Amazon Aurora supports migration from Percona XtraBackup files that are stored in an Amazon S3 bucket Migrating from binar y backup files can be significantly faster than migrating from logical schema and data dumps using tools like mysqldump Logical imports work by executing SQL commands to re create the schema and data from your source database which involves considerable processing overhead By comparison you can use a more efficient binary ingestion method to ingest Percona XtraBackup files This migration method is compatible with source servers using MySQL versions and 56 Migrating from Percona XtraBackup files invol ves three steps: 1 Use the innobackupex tool to create a backup of the source database 2 Upload backup files to an Amazon S3 bucket 3 Restore backup files through the AWS Management Console For details and step bystep instructions see Migrating data from MySQL by using an Amazon S3 Bucket in the Amazon RDS User Guide SelfManaged Export/Import You can use a variety of export/import tools to migrate your data and schema to Amazon Aurora The tools can be described as “MySQL native” because they are either part of a MySQL project or were designed specifically for MySQL compatible databases This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 24 Examples of native migration tools include the following: 1 MySQL utilities such as mysqldump mysqlimport and mysql command line client 2 Third party utilities such as mydumper and myloader For details see this mydumper project page 3 Builtin MySQL commands such as SELECT INTO OUTFILE and LOAD DATA INFILE Native tools are a great option for power users or database administrators who want to maintain full control over the migration process Self managed migrations involve more steps and are typically slower than RDS snapshot or Percona XtraBackup migrations but they offer the best compatibility and flexibility For an in depth discussion of the best practices for self managed migrations see the AWS whitepaper Best Practices for Migrating MySQ L Databases to Amazon Aurora You can execute a self managed migration with downtime (without replication) or with nearzero downt ime (with binary log replication) SelfManaged Migration with Downtime The high level procedure for migrating to Amazon Aurora from a MySQL compatible database is as follows: 1 Stop all write activity against the source database Application downtime begin s here 2 Perform a schema and data dump from the source database 3 Import the dump into the target Aurora DB cluster 4 Configure applications to connect to the newly created target Aurora DB cluster instead of the source database 5 Resume write activity Appli cation downtime ends here For an in depth discussion of performance best practices for self managed migrations see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration 
Self-Managed Migration with Near Zero Downtime
The following is the high-level procedure for a near-zero downtime migration into Amazon Aurora from a MySQL-compatible database:
1. On the source database, enable binary logging and ensure that binary log files are retained for at least the amount of time that is required to complete the remaining migration steps.
2. Perform a schema and data export from the source database. Make sure that the export metadata contains the binary log coordinates that are required to establish replication at a later time (see the mysqldump sketch after this list).
3. Import the dump into the target Aurora DB cluster.
4. On the target Aurora DB cluster, configure binary log replication from the source database using the binary log coordinates that you obtained in step 2.
5. Wait for the replication to catch up, that is, for the replication lag to reach zero.
6. Stop all write activity against the source database instance. Application downtime begins here.
7. Double-check that there is no outstanding replication lag. Then configure applications to connect to the newly created target Aurora DB cluster instead of the source database.
8. Resume write activity. Application downtime ends here.
9. Terminate replication between the source database and the target Aurora DB cluster.

For an in-depth discussion of performance best practices for self-managed migrations, see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.
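One way to satisfy step 2 with mysqldump is to have the dump record the binary log coordinates itself. The following is a sketch under the assumption of a self-managed source where the dump user holds the necessary privileges; host, user, and schema names are placeholders.

# Export schema and data while recording the source's binary log coordinates
# (written near the top of the dump file as a commented CHANGE MASTER TO line).
mysqldump --host=<source_server_address> \
          --user=<source_user> \
          --password=<source_user_password> \
          --databases <schema(s)> \
          --single-transaction \
          --master-data=2 \
          --routines --triggers --events \
          > dump_with_binlog_coordinates.sql

# Extract the recorded coordinates for use in step 4.
grep -m 1 "CHANGE MASTER TO" dump_with_binlog_coordinates.sql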
AWS Database Migration Service
AWS Database Migration Service is a managed database migration service that is available through the AWS Management Console. It can perform a range of tasks, from simple migrations with downtime to near-zero downtime migrations using CDC replication.

AWS Database Migration Service may be the preferred option if your source database can't be migrated using the methods described previously, such as RDS MySQL 5.6 DB snapshot migration, Percona XtraBackup migration, or native export/import tools. AWS Database Migration Service might also be advantageous if your migration project requires advanced data transformations, such as the following:
• Remapping schema or table names
• Advanced data filtering
• Migrating and replicating multiple database servers into a single Aurora DB cluster

Compared to the migration methods described previously, AWS DMS carries certain limitations:
• It does not migrate secondary schema objects such as indexes, foreign key definitions, triggers, or stored procedures. Such objects must be migrated or created manually prior to data migration.
• The DMS CDC replication uses plain SQL statements from the binlog to apply data changes in the target database. Therefore, it might be slower and more resource-intensive than the native master/slave binary log replication in MySQL.

For step-by-step instructions on how to migrate your database using AWS DMS, see the AWS whitepaper Migrating Your Databases to Amazon Aurora.

Heterogeneous Migrations
If you are migrating a non-MySQL-compatible database to Amazon Aurora, several options can help you complete the project quickly and easily. A heterogeneous migration project can be split into two phases:
1. Schema migration, to review and convert the source schema objects (e.g., tables, procedures, and triggers) into a MySQL-compatible representation
2. Data migration, to populate the newly created schema with data contained in the source database. Optionally, you can use CDC replication for near-zero downtime migration.

Schema Migration
You must convert database objects such as tables, views, functions, and stored procedures to a MySQL 5.6-compatible format before you can use them with Amazon Aurora. This section describes two main options for converting schema objects. Whichever migration method you choose, always make sure that the converted objects are not only compatible with Aurora but also follow MySQL's best practices for schema design.

AWS Schema Conversion Tool
The AWS Schema Conversion Tool (AWS SCT) can greatly reduce the engineering effort associated with migrations from Oracle, Microsoft SQL Server, Sybase, DB2, Azure SQL Database, Teradata, Greenplum, Vertica, Cassandra, PostgreSQL, and others. AWS SCT can automatically convert the source database schema and a majority of the custom code, including views, stored procedures, and functions, to a format compatible with Amazon Aurora. Any code that can't be automatically converted is clearly marked so that it can be processed manually. For more information, see the AWS Schema Conversion Tool User Guide.

For step-by-step instructions on how to convert a non-MySQL-compatible schema using the AWS Schema Conversion Tool, see the AWS whitepaper Migrating Your Databases to Amazon Aurora.

Manual Schema Migration
If your source database is not in the scope of SCT-compatible databases, you can either manually rewrite your database object definitions or use available third-party tools to migrate the schema to a format compatible with Amazon Aurora. Many applications use data access layers that abstract schema design from business application code. In such cases, you can consider redesigning your schema objects specifically for Amazon Aurora and adapting the data access layer to the new schema. This might require a greater upfront engineering effort, but it allows the new schema to incorporate all the best practices for performance and scalability.

Data Migration
After the database objects are successfully converted and migrated to Amazon Aurora, it's time to migrate the data itself. The task of moving data from a non-MySQL-compatible database to Amazon Aurora is best done using AWS DMS. AWS DMS supports initial data migration as well as CDC replication. After the migration task starts, AWS DMS manages all the complexities of the process, including data type transformations, compression, and parallel data transfer. The CDC functionality automatically replicates any changes that are made to the source database during the migration process. For more information, see the AWS Database Migration Service User Guide.

For step-by-step instructions on how to migrate data from a non-MySQL-compatible database into an Amazon Aurora cluster using AWS DMS, see the AWS whitepaper Migrating Your Databases to Amazon Aurora.

Example Migration Scenarios
There are several approaches for performing both self-managed homogeneous migrations and heterogeneous migrations.

Self-Managed Homogeneous Migrations
This section provides examples of migration scenarios from self-managed MySQL-compatible databases to Amazon Aurora.
For an in-depth discussion of homogeneous migration best practices, see the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora.

Note: If you are migrating from an Amazon RDS MySQL DB instance, you can use the RDS snapshot migration feature instead of doing a self-managed migration. See the Migrating from Amazon RDS for MySQL section for more details.

Migrating Using Percona XtraBackup
One option for migrating data from MySQL to Amazon Aurora is to use the Percona XtraBackup utility. For more information about using the Percona XtraBackup utility, see Migrating Data from an External MySQL Database in the Amazon RDS User Guide.

Approach
This scenario uses the Percona XtraBackup utility to take a binary backup of the source MySQL database. The backup files are then uploaded to an Amazon S3 bucket and restored into a new Amazon Aurora DB cluster.

When to Use
You can adopt this approach for small- to large-scale migrations when the following conditions are met:
• The source database is a MySQL 5.5 or 5.6 database.
• You have administrative, system-level access to the source database.
• You are migrating database servers in a 1-to-1 fashion: one source MySQL server becomes one new Aurora DB cluster.

When to Consider Other Options
This approach is not currently supported in the following scenarios:
• Migrating into existing Aurora DB clusters
• Migrating multiple source MySQL servers into a single Aurora DB cluster

Examples
For a step-by-step example, see Migrating Data from an External MySQL Database in the Amazon RDS User Guide. A sketch of the backup and upload steps is shown below.
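For orientation, the first two steps (creating the binary backup and uploading it to Amazon S3) might look like the sketch below. The backup user, local paths, and bucket name are placeholders, and the exact innobackupex invocation depends on your Percona XtraBackup version and data size; follow the guide referenced above for the authoritative procedure.

# Create a compressed binary backup of the source MySQL server with
# Percona XtraBackup, streaming it into a single tar.gz file.
innobackupex --user=backup_user --password=backup_password \
             --stream=tar /tmp | gzip > /backups/full-backup.tar.gz

# Upload the backup file to an Amazon S3 bucket (requires the AWS CLI and
# credentials that can write to the bucket).
aws s3 cp /backups/full-backup.tar.gz s3://my-migration-bucket/xtrabackup/

The uploaded file can then be restored into a new Aurora DB cluster from the AWS Management Console, as described in the referenced guide.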
critically important • An intermediate dump file is required in order to perform schema or data manipulations before you can import the schema/data This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 31 Notes For the sake of simplicity this scenario assumes the following: 1 Migration commands are executed from a client instance running a Linux operating system 2 The source server is a self managed MySQL database (eg running on Amazon EC2 or on premises) that is configured to allow connections from the client instance 3 The target Aurora DB cluster already exists and is configured to allow connections from the client instance If you don’t yet have an Aurora DB cluster review the stepbystep cluster launch instructions in the Amazon RDS User Guide 17 4 Export from the source database is performed using a privileged super user MySQL ac count For simplicity this scenario assumes that the user holds all permissions available in MySQL 5 Import into Amazon Aurora is performed using the Aurora master user account that is the account whose name and password were specified during the cluster launch process Examples The following command when filled with the source and target server and user information migrates data and all objects in the named schema(s) between the source and t arget servers mysqldump host=<source_server_address> \ user=<source_user> \ password=<source_user_password> \ databases <schema(s)> \ singletransaction \ compress | mysql host=<target_cluster_endpoint> \ user=<target_user> \ password=<target_user_password> This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 32 Descriptions of the options and option v alues for the mysqldump command are as follows: • <source_server_address> : DNS name or IP address of the source server • <source_user> : MySQL user account name on the source server • <source_user_password> : MySQL user account password on the source server • <schema(s)> : One or more schema names • <target_cluster_endpoint> : Cluster DNS endpoint of the target Aurora cluster • <target_user> : Aurora master user name • <target_user_password> : Aurora master user password • single transaction : Enforces a consi stent dump from the source database Can be skipped if the source database is not receiving any write traffic • compress : Enables network data compression See the mysqldump docume ntation for more details Example: mysqldump host=source mysqlexamplecom \ user=mysql_admin_user \ password=mysql_user_password \ databases schema1 \ singletransaction \ compress | mysql host=auroracluster xxxxxamazonawscom \ user=aurora_master_user \ password=aurora_user_password Note: This migration approach requires application downtime while the dump and import are in progress You can avoid application downtime by extending the scenario with MySQL binary log replication See the Self Managed Migration with Near Zero Downtime section for more details This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 33 FlatFile Migration Using Files in CSV Format This scenario demonstrates a schema and 
data migration using flat file dumps that is dumps that do not encapsulate data in SQL statements Many database administrators prefer to use flat files over SQL format files for the following reasons: • Lack of SQL encap sulation results in smaller dump files and reduces processing overhead during import • Flatfile dumps are easier to process using OS level tools; they are also easier to manage (eg split or combine) • Flatfile formats are compatible with a wide range of database engines both SQL and NoSQL Approach The scenario uses a hybrid migration approach: • Use the mysqldump utility to create a schema only dump in SQL format The dump describes the structure of schema objects (eg tables views and functions) but does not contain data • Use SELECT INTO OUTFILE SQL commands to create dataonly dumps in CSV format The dumps are created in a one filepertable fashion and contain table data only (no schema definitions) The import phase can be executed in two ways: • Traditional approach: Transfer all dump files to an Amazon EC2 instance located in the same AWS Region and Availability Zone as the target Aurora DB cluster After transferring the dump files you can import them into Amazon Aurora using the mysql command line client and LOAD DATA LOCAL INFILE SQL commands for SQL format schema dumps and the flat file data dumps respectively This is the approach that is demonstrated later in this section This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 34 • Alternative approach: Transfer the SQL format schema dumps t o an Amazon EC2 client instance and import them using the mysql command line client You can transfer the flat file data dumps to an Amazon S3 bucket and then import them into Amazon Aurora using LOAD DATA FROM S3 SQL commands For more information including an example of loading data from Amazon S3 see Migrating Data from MySQL by Using an Amazon S3 Bucket in the Amazon RDS User Guide When to Use You can adopt this approach for most migration projects where performance and flexibility are important: • You can dump small data sets and import them one table at a time You can also run multiple SELECT INTO OUTFILE and LOAD DATA INFILE operations in parallel for best performance • Data that is stored in flat file dumps is not encapsulated in database specific SQL statements Therefore it can be handled and processed easily by the systems participating in the data exchange When to Consider Other Options You might choose not to use this approach if any of the following conditions are true: • You are migrating from an RDS MySQL DB instance or a self managed MySQL 56 database In that case you might get better results with snapshot migration or Percona XtraBackup respectively See the Migrating from Amazon RDS for MySQL and Percona XtraBackup sections for more details • The data set is very small and does not require a high performance migration approach • You want the migration process to be as simple as possible and you don’t require any of the performance and flexibility benefits listed earlier Notes To simplify the demons tration this scenario assumes the following: This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 35 1 Migration commands are executed 
from client instances running a Linux operating system: o Client instance A is located in the source server’s network o Client instance B is located in the same Amazon VPC Availability Zone and Subnet as the target Aurora DB cluster 2 The source server is a self managed MySQL database (eg running on Amazon EC2 or on premises) configured to allow connections from client instance A 3 The target Aurora DB cluster already exist s and is configured to allow connections from client instance B If you don’t have an Aurora DB cluster yet review the stepbystep cluster launch instruct ions in the Amazon RDS User Guide 4 Communication is allowed between both client instances 5 Export from the source database is performed using a privileged super user MySQL account For simplicity this scenario assumes that the user holds all permissions available in MySQL 6 Import into Amazon Aurora is performed using the master user account that is the account whose name and password were specified during the cluster launch process Note that this migration approach requires application downtime while t he dump and import are in progress You can avoid application downtime by extending the scenario with MySQL binary log replication See the Self Managed Migration with Near Zero Downtime sectio n for more details Examples In this scenario you migrate a MySQL schema named myschema The first step of the migration is to create a schema only dump of all objects mysqldump host=<source_server_address> \ user=<source_user> \ password=<source_user_password> \ databases <schema(s)> \ singletransaction \ nodata > myschema_dumpsql This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 36 Descriptions of the options and option values for the mysqldump command are as follows: • <source_server_address> : DNS name or IP address of th e source server • <source_user> : MySQL user account name on the source server • <source_user_password> : MySQL user account password on the source server • <schema(s)> : One or more schema names • <target_cluster_endpoint> : Cluster DNS endpoint of the target Aur ora cluster • <target_user> : Aurora master user name • <target_user_password> : Aurora master user password • single transaction : Enforces a consistent dump from the source database Can be skipped if the source database is not receiving any write traffic • nodata : Creates a schema only dump without row data For more details see mysqldump in the MySQL 56 Reference Manual Example: admin@clientA:~$ mysqldump host=11223344 user=root \ password=pAssw0rd databases myschema \ singletransaction nodata > myschema_dump_schema_onlysql After you complete the schema only dump you can obtain data dumps for each table After logging in to the source MyS QL server use the SELECT INTO OUTFILE statement to dump each table’s data into a separate CSV file admin@clientA:~$ mysql host=11223344 user=root password=pAssw0rd mysql> show tables from myschema; + + This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 37 | Tables_in_myschema | + + | t1 | | t2 | | t3 | | t4 | + + 4 rows in set (000 sec) mysql> SELECT * INTO OUTFILE '/home/admin/dump/myschema_dump_t1csv' FIELDS TERMINATED BY '' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY ' 
\n' FROM myschemat1; Query OK 4194304 rows affected (235 sec) (repeat for all remaining tables) For more information about SELECT INTO statement syntax see SELECT INTO Syntax in the MySQL 56 Reference Manual After you complete all dump operations the /home/admin/dump directory contains five files: one schema only dump and four data dumps on e per table admin@clientA:~/dump$ ls sh1 total 685M 40K myschema_dump_schema_onlysql 172M myschema_dump_t1csv 172M myschema_dump_t2csv 172M myschema_dump_t3csv 172M myschema_dump_t4csv Next you compress and transfer the files to client instance B located in the same AWS Region and Availability Zone as the target Aurora DB cluster You can use any file transfer method available to you (eg FTP or Amazon S3) This example uses SCP with SSH private key authentication admin@clientA:~/dump$ gzip mysc hema_dump_*csv admin@clientA:~/dump$ scp i sshkeypem myschema_dump_* \ <clientB_ssh_user>@<clientB_address>:/home/ec2 user/ This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 38 After transferring all the files you can decompress them and import the schema and data Import the schema dump first because a ll relevant tables must exist before any data can be inserted into them admin@clientB:~/dump$ gunzip myschema_dump_*csvgz admin@clientB:~$ mysql host=<cluster_endpoint> user=master \ password=pAssw0rd < myschema_dump_schema_onlysql With the schem a objects created the next step is to connect to the Aurora DB cluster endpoint and import the data files Note the following: • The mysql client invocation includes a localinfile parameter which is required to enable support for LOAD DATA LOCAL INFILE commands • Before importing data from dump files use a SET command to disable foreign key constraint checks for the duration of the database session Disabling foreign key checks not only improves import performance but it also lets you import data files in arbitrary order admin@clientB:~$ mysql localinfile host=<cluster_endpoint> \ user=master password=pAssw0rd mysql> SET foreign_key_checks = 0; Query OK 0 rows affected (000 sec) mysql> LOAD DATA LOCAL INFILE '/home/ec2 user/myschema_dump_t1csv' > INTO TABLE myschemat1 > FIELDS TERMINATED BY '' OPTIONALLY ENCLOSED BY '"' > LINES TERMINATED BY ' \n'; Query OK 4194304 rows affected (1 min 266 sec) Records: 4194304 Deleted: 0 Skipped: 0 Warnings: 0 (repeat for all rema ining CSV files) This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 39 mysql> SET foreign_key_checks = 1; Query OK 0 rows affected (000 sec) That’s it you have imported the schema and data dumps into the Aurora DB cluster You can find more tips and best practices for self managed migrations in the AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora Multi Threaded Migration Using mydumper and myloader Mydumper and myloader are popular open source MySQL export/import tools designed to address performance issues associated with the lega cy mysqldump program They operate on SQL format dumps and offer advanced features such as the following: • Dumping and loading data using multiple parallel threads • Creating dump files in a file pertable fashion • Creating chunked dumps in a multiple filespertable 
fashion • Dumping data and metadata into separate files for easier parsing and management • Configurable transaction size during import • Ability to schedule dumps in regular intervals For more details see the MySQL Data Dumper project page Approach The scenario uses the mydumper and myloader tools to perform a multi threaded schema and data migration without the need to manually invoke any SQL commands or desig n custom migration scripts The migration is performed in two steps: This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 40 1 Use the mydumper tool to create a schema and data dump using multiple parallel threads 2 Use the myloader tool to process the dump files and import them into an Aurora DB cluster also in multi threaded fashion Note that mydumper and myloader might not be readily available in the package repository of your Linux/Unix distribution For your convenience the scenario also shows how to build the tools from source code When to Use You can adopt this approach in most migration projects: • The utilities are easy to use and enable database users to perform multi threaded dumps and imports without the need to develop custom migration scripts • Both tools are highly flexible and have reasonable co nfiguration defaults You can adjust the default configuration to satisfy the requirements of both small and large scale migrations When to Consider Other Options You might decide not to use this approach if any of the following conditions are true: • You are migrating from an RDS MySQL DB instance or a self managed MySQL 55 or 56 database In that case you might get better results with snapshot migration or Percona XtraBackup respectively See the Migrating from Amazon RDS for MySQL and Percona XtraBackup sections for more details • You can’t use third party software because of operating system limitations • Your data transformation processes require intermediate dump files in a flat file forma t and not an SQL format Notes To simplify the demonstration this scenario assumes the following: This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 41 1 You execute the migration commands from client instances running a Linux operating system: a Client instance A is located in the source server’s network b Clien t instance B is located in the same Amazon VPC Availability Zone and Subnet as the target Aurora cluster 2 The source server is a self managed MySQL database (eg running on Amazon EC2 or on premises) configured to allow connections from client instance A 3 The target Aurora DB cluster already exists and is configured to allow connections from client instance B If you don’t have an Aurora DB cluster yet review the stepbystep cluster launch instructions in the Amazon RDS User Guide 4 Communication is allowed between both client instances 5 You perform the export from the source database using a privileged super user MySQL account For simplicity the example assumes that the user holds all permissions available in MySQL 6 You perform the import into Amazon Aurora using the master user account that is the account whose n ame and password were specified during the cluster launch process 7 The Amazon Linux 2016033 operating system is used to 
demonstrate the configuration and compilation steps for mydumper and myloader Note : This migration approach requires application down time while the dump and import are in progress You can avoid application downtime by extending the scenario with MySQL binary log replication See the Self Managed Migration with Near Zero Dow ntime section for more details Examples (Preparing Tools) The first step is to obtain and build the mydumper and myloader tools See the MySQL Data Dumper project page for up todate download links and to ensure that tools are prepared on both client instances This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 42 The utilities depend on several packages that you should install first [ec2user@clientA ~]$ sudo yum install glib2 devel mysql56 \ mysql56devel zlib devel pcre devel openssl devel g++ gcc c++ cmake The next steps involve creating a directory to hold the program sources and then fetching and unpacking the source archive [ec2user@clientA ~]$ mkdir mydumper [ec2 user@clientA ~]$ cd mydumper/ [ec2user@clientA mydumper]$ wget https://launchp adnet/mydumper/09/091/+download/mydumper 091targz 20160629 21:39:03 (153 KB/s) ‘mydumper 091targz’ saved [44463/44463] [ec2user@clientA mydumper]$ tar zxf mydumper 091targz [ec2user@clientA mydumper]$ cd mydumper 091 Next you b uild the binary executables [ec2user@clientA mydumper 091]$ cmake (…) [ec2user@clientA mydumper 091]$ make Scanning dependencies of target mydumper [ 25%] Building C object CMakeFiles/mydumperdir/mydumperco [ 50%] Building C object CMakeFiles/mydumperdir/server_detectco [ 75%] Building C object CMakeFiles/mydumperdir/g_unix_signalco Linking C executable mydumper [ 75%] Built target mydumper Scanning dependencies of target myloader [100%] Building C object CMakeFiles/myloaderdi r/myloaderco Linking C executable myloader [100%] Built target myloader This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 43 Optionally you can move the binaries to a location defined in the operating system $PATH so that they can be executed more conveniently [ec2user@clientA mydumper 091]$ sudo mv mydumper /usr/local/bin/mydumper [ec2user@clientA mydumper 091]$ sudo mv myloader /usr/local/bin/myloader As a final step confirm that both utilities are available in the system [ec2user@clientA ~]$ mydumper V mydumper 091 built against MySQL 5631 [ec2user@clientA ~]$ myloader V myloader 091 built against MySQL 5631 Examples (Migration) After completing the preparation steps you can perform the migration The mydumper command uses the following basic syntax mydumper h <source_serve r_address> u <source_user> \ p <source_user_password> B <source_schema> \ t <thread_count> o <output_directory> Descriptions of the parameter values are as follows: • <source_server_address> : DNS name or IP address of the source server • <source_user> : MySQL user account name on the source server • <source_user_password> : MySQL user account password on the source server • <source_schema> : Name of the schema to dump • <thread_count> : Number of parallel threads used to dump the data This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating 
your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 44 • <output_directory> : Name of the directory where dump files should be placed Note : mydumper is a highly customizable data dumping tool For a complete list of supported parameters and their default values use the builtin help mydumper help The example dump is executed as follows [ec2user@clientA ~]$ mydumper h 11223344 u root \ p pAssw0rd B myschema t 4 o myschema_dump/ The operation results in the following files being created in the dump directory [ec2user@clientA ~]$ ls sh1 myschema_dum p/ total 733M 40K metadata 40K myschema schemacreatesql 40K myschemat1 schemasql 184M myschemat1sql 40K myschemat2 schemasql 184M myschemat2sql 40K myschemat3 schemasql 184M myschemat3sql 40K myschemat4 schemasql 184M myschemat4sql The directory contains a collection of metadata files in addition to schema and data dumps You don’t have to manipulate these files directly It’s enough that the directory structure is understood by the myloader tool Compress the entire directory and transfer it to client instance B [ec2user@clientA ~]$ tar czf myschema_dumptargz myschema_dump [ec2user@clientA ~]$ scp i sshkeypem myschema_dumptargz \ <clientB_ssh_user>@<clientB_address>:/home/ec2 user/ This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 45 When the transfer is complete connect to client instance B and verify that the myloader utility is available [ec2user@clientB ~]$ myloader V myloader 091 built against MySQL 5631 Now you can u npack the dump and import it The syntax used for the myloader command is very similar to what you already used for mydumper The only difference is the d (source directory) parameter replacing the o (target directory) parameter [ec2user@clientB ~]$ tar zxf myschema_dumptargz [ec2user@clientB ~]$ myloader h <cluster_dns_endpoint> \ u master p pAssw0rd B myschema t 4 d myschema_dump/ Useful Tips • The concurrency level (thread count) does not have to be the same for export and import operations A good rule of thumb is to use one thread per server CPU core (for dumps) and one thread per two CPU cores (for imports) • The schema and data dumps produced by mydumper use an SQL format and are compatible with MySQL 56 Although you will typically use the pair of mydumper and myloader tools together for best results technically you can import the dump files from myloader by using any other MySQL compatible client tool You can find more tips and best practices for self managed migrations in t he AWS whitepaper Best Practices for Migrating MySQL Databases to Amazon Aurora Heterogeneous Migrations For detailed step bystep instructions on how to migrate schema and data from a non MySQL compatib le database into an Aurora DB cluster using AWS SCT and AWS DMS see the AWS whitepaper Migrating Your Databases to Amazon Aurora Prior to running migration we suggest you to review Proof of Concept with Aurora to This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 46 understand the volume of data and representative of your production environment as a blueprint Testing and Cutover Once the schema and data have been successfully migrated from the source database to Amazon Aurora you 
are now ready to perform end-to-end testing of your migration process. The testing approach should be refined after each test migration, and the final migration plan should include a test plan that ensures adequate testing of the migrated database.

Migration Testing

• Basic acceptance tests: These pre-cutover tests should be automatically executed upon completion of the data migration process. Their primary purpose is to verify whether the data migration was successful. Common outputs from these tests are the total number of items processed, the total number of items imported, the total number of items skipped, the total number of warnings, and the total number of errors. If any of these totals deviate from the expected values, the migration was not successful, and the issues need to be resolved before moving to the next step in the process or the next round of testing.

• Functional tests: These post-cutover tests exercise the functionality of the application(s) using Aurora for data storage. They include a combination of automated and manual tests. The primary purpose of the functional tests is to identify problems in the application caused by the migration of the data to Aurora.

• Non-functional tests: These post-cutover tests assess the non-functional characteristics of the application, such as performance under varying levels of load.

• User acceptance tests: These post-cutover tests should be executed by the end users of the application once the final data migration and cutover is complete. Their purpose is for the end users to decide whether the application is sufficiently usable to meet its primary function in the organization.

Cutover

Once you have completed the final migration and testing, it is time to point your application to the Amazon Aurora database. This phase of migration is known as cutover. If the planning and testing phases have been executed properly, cutover should not lead to unexpected issues.

Pre-cutover Actions

• Choose a cutover window: Identify a block of time when you can accomplish cutover to the new database with minimum disruption to the business. Normally you would select a low-activity period for the database (typically nights and/or weekends).

• Make sure changes are caught up: If a near-zero downtime migration approach was used to replicate database changes from the source to the target database, make sure that all database changes are caught up and your target database is not significantly lagging behind the source database.

• Prepare scripts to make the application configuration changes: In order to accomplish the cutover, you need to modify database connection details in your application configuration files. Large and complex applications may require updates to connection details in multiple places. Make sure you have the necessary scripts ready to update the connection configuration quickly and reliably.

• Stop the application: Stop the application processes on the source database and put the source database in read-only mode so that no further writes can be made to the source database. If the source database changes aren't fully caught up with the target database, wait for some time while these changes are fully propagated to the target database.

• Execute pre-cutover tests: Run automated pre-cutover tests to make sure that the data migration was successful. A minimal example of such a check follows.
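The following is an illustrative sketch of an automated pre-cutover acceptance check, not a complete validation suite. It reuses the example schema name (myschema) and table names from earlier in this handbook; the host names, user names, and passwords are placeholders you would replace with your own values. The script compares per-table row counts between the source server and the Aurora cluster endpoint, which corresponds to the "total number of items imported" style of output described under basic acceptance tests.

#!/bin/bash
# Illustrative pre-cutover acceptance check. Host names, credentials, and
# table names below are placeholders.
SOURCE_HOST=source-mysql.example.com
TARGET_HOST=aurora-cluster.cluster-xxxxx.us-east-1.rds.amazonaws.com
TABLES="t1 t2 t3 t4"

for t in $TABLES; do
  # Row count on the source server
  src=$(mysql --host="$SOURCE_HOST" --user=admin --password='...' \
        --batch --skip-column-names \
        --execute="SELECT COUNT(*) FROM myschema.${t};")
  # Row count on the target Aurora cluster
  tgt=$(mysql --host="$TARGET_HOST" --user=master --password='...' \
        --batch --skip-column-names \
        --execute="SELECT COUNT(*) FROM myschema.${t};")
  if [ "$src" = "$tgt" ]; then
    echo "OK   ${t}: ${src} rows on both source and target"
  else
    echo "FAIL ${t}: source=${src} target=${tgt}"
  fi
done

Row counts alone don't prove data integrity; for critical tables, consider supplementing a count-based check with checksums or targeted data comparisons.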
Cutover

• Execute cutover: If pre-cutover checks were completed successfully, you can now point your application to Amazon Aurora. Execute the scripts created in the pre-cutover phase to change the application configuration to point to the new Aurora database.

• Start your application: At this point you may start your application. If you have the ability to stop users from accessing the application while it is running, exercise that option until you have executed your post-cutover checks.

Post-cutover Checks

• Execute post-cutover tests: Execute predefined automated or manual test cases to make sure your application works as expected with the new database. It's a good strategy to start by testing read-only functionality of the database before executing tests that write to the database.

• Enable user access and closely monitor: If your test cases were executed successfully, you may give users access to the application to complete the migration process. Both the application and the database should be closely monitored at this time.

Troubleshooting

The following sections provide examples of common issues and error messages to help you troubleshoot heterogeneous AWS DMS migrations.

Troubleshooting MySQL-Specific Issues

The following issues are specific to using AWS DMS with MySQL databases.

Topics
• CDC Task Failing for Amazon RDS DB Instance Endpoint Because Binary Logging Is Disabled
• Connections to a Target MySQL Instance Are Disconnected During a Task
• Adding Autocommit to a MySQL-compatible Endpoint
• Disable Foreign Keys on a Target MySQL-compatible Endpoint
• Characters Replaced with Question Mark
• "Bad event" Log Entries
• Change Data Capture with MySQL 5.5
• Increasing Binary Log Retention for Amazon RDS DB Instances
• Log Message: Some changes from the source database had no impact when applied to the target database
• Error: Identifier too long
• Error: Unsupported Character Set Causes Field Data Conversion to Fail
• Error: Codepage 1252 to UTF8 [120112] A field data conversion failed

CDC Task Failing for Amazon RDS DB Instance Endpoint Because Binary Logging Is Disabled

This issue occurs with Amazon RDS DB instances because binary logging is turned off when automated backups are disabled. Enable automatic backups by setting the backup retention period to a non-zero value.

Connections to a Target MySQL Instance Are Disconnected During a Task

If you have a task with LOBs that is getting disconnected from a MySQL target with the following type of errors in the task log, you might need to adjust some of your task settings:

[TARGET_LOAD ]E: RetCode: SQL_ERROR SqlState: 08S01 NativeError: 2013 Message: [MySQL][ODBC 5.3(w) Driver][mysqld-5.7.16-log]Lost connection to MySQL server during query [122502] ODBC general error

To solve the issue where a task is being disconnected from a MySQL target, do the
following: • Check that you have your database variable max_allowed_packet set large enough to hold your largest LOB • Check that you have the following variables set to have a large timeout value We suggest you use a value of at least 5 minutes for each of these variables o net_read_timeout o net_write_timeout o wait_timeout o interactive_timeout Adding Autocommit to a MySQL compatible Endpoint To add autocommit to a target MySQL compatible endpoint use the following procedure: This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 51 1 Sign in to the AWS Management Console and sel ect DMS 2 Select Endpoints 3 Select the MySQL compatible target endpoint that you want to add autocommit to 4 Select Modify 5 Select Advanced and then add the following code to the Extra connection attributes text box: Initstmt = SET AUTOCOMMIT= 1 6 Choose Modify Disable Foreign Keys on a Target MySQL compatible Endpoint You can disable foreign key checks on MySQL by adding the following to the Extra Connection Attributes in the Advanced section of the target MySQL Am azon Aurora with MySQL compatibility or MariaDB endpoint To disable foreign keys on a target MySQL compatible endpoint use the following procedure: 1 Sign in to the AWS Management Console and select DMS 2 Select Endpoints 3 Select the MySQL Aurora MySQL or MariaDB target endpoint that you want to disable foreign keys 4 Select Modify 5 Select Advanced and then add the following code to the Extra connection attributes text box: Initstmt =SET FOREIGN_KEY_CHECKS= 0 This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 52 6 Choose Modify Characters Replaced with Question Mark The most common situation that causes this issue is when the source endpoint characters have been encoded by a character set that AWS DMS doesn't support For example AWS DMS engine versions prior to version 311 do n't support the UTF8MB4 character set Bad event Log Entries Bad event entries in the migration logs usually indicate that an unsupported DDL operation was attempted on the source database endpoint Unsupported DDL operations cause an event that the repli cation instance cannot skip so a bad event is logged To fix this issue restart the task from the beginning which will reload the tables and will start capturing changes at a point after the unsupported DDL operation was issued Change Data Capture with MySQL 55 AWS DMS change data capture (CDC) for Amazon RDS MySQL compatible databases requires full image row based binary logging which is not supported in MySQL version 55 or lower To use AWS DMS CDC you must up upgrade your Amazon RDS DB instance t o MySQL version 56 Increasing Binary Log Retention for Amazon RDS DB Instances AWS DMS requires the retention of binary log files for change data capture To increase log retention on an Amazon RDS DB instance use the following procedure The following example increases the binary log retention to 24 hours call mysqlrds_set_confi guration( 'binlog retention hours' 24); This paper has been archived For the latest Ama zon Aurora Migration content refer to: https://d1awsstaticcom/whitepapers/RDS/Migrating your databases to Amazon Aurorapdf Amazon Web Services Amazon Aurora Migration Handbook 53 Log 
Message: Some changes from the source database had no impact when applied to the target database

When AWS DMS updates a MySQL database column's value to its existing value, a message of zero rows affected is returned from MySQL. This behavior is unlike other database engines, such as Oracle and SQL Server, that perform an update of one row even when the replacing value is the same as the current one.

Error: Identifier too long

The following error occurs when an identifier is too long:

TARGET_LOAD E: RetCode: SQL_ERROR SqlState: HY000 NativeError: 1059 Message: [MySQL][ODBC 5.3(w) Driver][mysqld-5.6.10]Identifier name '<name>' is too long [122502] ODBC general error (ar_odbc_stmt.c: 4054)

When AWS DMS is set to create the tables and primary keys in the target database, it currently does not use the same names for the primary keys that were used in the source database. Instead, AWS DMS creates the primary key name based on the table's name. When the table name is long, the auto-generated identifier can be longer than the limits allowed by MySQL. To solve this issue, currently you should pre-create the tables and primary keys in the target database, and then use a task with the task setting Target table preparation mode set to Do nothing or Truncate to populate the target tables.

Error: Unsupported Character Set Causes Field Data Conversion to Fail

The following error occurs when an unsupported character set causes a field data conversion to fail:

[SOURCE_CAPTURE ]E: Column '<column name>' uses an unsupported character set [120112] A field data conversion failed (mysql_endpoint_capture.c: 2154)

This error often occurs because of tables or databases using UTF8MB4 encoding. AWS DMS engine versions prior to 3.1.1 don't support the UTF8MB4 character set. In addition, check your database's parameters related to connections. The following command can be used to see these parameters:

SHOW VARIABLES LIKE '%char%';

Error: Codepage 1252 to UTF8 [120112] A field data conversion failed

The following error can occur during a migration if you have non codepage-1252 characters in the source MySQL database:

[SOURCE_CAPTURE ]E: Error converting column 'column_xyz' in table 'table_xyz' with codepage 1252 to UTF8 [120112] A field data conversion failed (mysql_endpoint_capture.c: 2248)

As a workaround, you can use the CharsetMapping extra connection attribute with your source MySQL endpoint to specify character set mapping. You might need to restart the AWS DMS migration task from the beginning if you add this extra connection attribute. For example, the following extra connection attributes could be used for a MySQL source endpoint where the source character set is utf8 or latin1 (65001 is the UTF8 code page identifier):

CharsetMapping=utf8,65001
CharsetMapping=latin1,65001

Conclusion

Amazon Aurora is a high-performance, highly available, and enterprise-grade database built for the cloud. Leveraging Amazon Aurora can result in better performance and greater availability than other open-source databases, and lower costs than most commercial-grade databases. This paper proposes strategies for identifying the best method to migrate databases to Amazon Aurora and details the procedures for planning
and executing those migrations. In particular, AWS Database Migration Service (AWS DMS) and the AWS Schema Conversion Tool are the recommended tools for heterogeneous migration scenarios. These powerful tools can greatly reduce the cost and complexity of database migrations.

Multiple factors contribute to a successful database migration:
• The choice of the database product
• A migration approach (e.g., methods, tools) that meets performance and uptime requirements
• Well-defined migration procedures that enable database administrators to prepare, test, and complete all migration steps with confidence
• The ability to identify, diagnose, and deal with issues with little or no interruption to the migration process

We hope that the guidance provided in this document will help you introduce meaningful improvements in all of these areas, and that it will ultimately contribute to creating a better overall experience for your database migrations into Amazon Aurora.

Contributors

Contributors to this document include:
• Bala Mugunthan, Sr. Partner Solution Architect, Amazon Web Services
• Ashar Abbas, Database Specialty Architect
• Sijie Han, SA Manager, Amazon Web Services
• Szymon Komendera, Database Engineer, Amazon Web Services

Further Reading

For additional information, see:
• Aurora on Amazon RDS User Guide
• Migrating Your Databases to Amazon Aurora (AWS whitepaper)
• Best Practices for Migrating MySQL Databases to Amazon Aurora (AWS whitepaper)

Document Revisions

July 2020: Added information on migrating large databases to Amazon Aurora; functional partition and data shard consolidation strategies are discussed in the homogeneous migration sections; multi-threaded migration using the mydumper and myloader open-source tools is introduced; basic acceptance, functional, non-functional, and user acceptance tests are explained in the testing phase; and pre-cutover and post-cutover phase scenarios are further explained.
September 2019: First publication.
General
A_Practical_Guide_to_Cloud_Migration_Migrating_Services_to_AWS
Archived A Practical Gui de to Cl oud Migration Migratin g Service s to AWS December 2015 This paper has been archived For the latest technical content see: https://docsawsamazoncom/prescriptiveguidance/latest/mrpsolution/mrpsolutionpdfArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 2 of 13 © 2015 Amazon Web Services Inc or its affiliates All rights reserved Notices This document is provided for informational purposes only It represents AWS’s current product offerings and practices as of the date of issue of this document which are subject to change without notice C ustomers are responsible for making their own independent assessment of the information in this document and any use of AWS’s products or services each of which is provided “as is” without warranty of any kind whether express or implied This document do es not create any warranties representations contractual commitments conditions or assurances from AWS its affiliates suppliers or licensors The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements and this docum ent is not part of nor does it modify any agreement between AWS and its customers ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 3 of 13 Contents Abstract 3 Introduction 4 AWS Cloud Adoption Framework 4 Manageable Areas of Focus 4 Successful Migrations 5 Breaking Down the Economics 6 Understand OnPremises Costs 6 Migration Cost Considerations 8 Migration Options 10 Conclusion 12 Further Reading 13 Contributors 13 Abstract To achieve full benefits of moving applications to the Amazon Web Services (AWS) platform it is critical to design a cloud migration model that delivers optimal cost efficiency This includes establishing a compelling business case acquiring new skills within the IT organization implemen ting new business processes and defining the application migration methodology to transform your business model from a traditional on premises computing platform to a cloud infrastructure ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 4 of 13 Perspective Areas of Focus Introduction Cloudbased computing introduces a radical shift in how technology is obtained used and managed as well as how organizations budget and pay for technology services With the AWS cloud platform project teams can easily configure the virtual network using t heir AWS account to launch new computing environments in a matter of minutes Organizations can optimize spending with the ability to quickly reconfigure the computing environment to adapt to changing business requirements Capacity can be automatically sc aled —up or down —to meet fluctuating usage patterns Services can be temporarily taken offline or shut down permanently as business demands dictate In addition with pay peruse billing AWS services become an operational expense rather than a capital expense AWS Cloud Adoption Framework Each organization will experience a unique cloud adoption journey but benefit from a structured framework that guides them through the process of transforming their people processes and technology The AWS Cloud Adopt ion Framework (AWS CAF) offers structure to help organizations develop an efficient and effective plan for their cloud adoption journey Guidance and best practices prescribed within the framework can help you build a comprehensive approach to cloud comput ing across your organization throughout your IT lifecycle Manageable Areas of Focus The AWS CAF 
breaks down the complicated planning process into manageable areas of focus Perspectives represent top level areas of focus spanning people process and te chnology Components identify specific aspects within each Perspective that require attention while Activities provide prescriptive guidance to help build actionable plans The AWS Cloud Adoption Framework is flexible and adaptable allowing organizations to use Perspectives Components and Activities as building blocks for their unique journey Business Perspective Focuses on identifying measuring and creating business value using technology services The Components and Activities within the Business Perspective can help you develop a business case for cloud align ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 5 of 13 business and technology strategy and support stakeholder engagement Platform Perspective Focuses on describing the structure and relationship of technology elements and services in complex IT environments Components and Activities within the Perspective can help you develop conceptual and functional models of your IT environment Maturity Perspective Focuses on defining the target state of an organization's capabilities measuring maturity and optimizing resources Components within Maturity Perspective can help assess the organization's maturity level develop a heat map to prioritize initiatives and sequence initiatives to develop the roadm ap for execution People Perspective Focuses on organizational capacity capability and change management functions required to implement change throughout the organization Components and Activities in the Perspective assist with defining capability and skill requirements assessing current organizational state acquiring necessary skills and organizational re alignment Process Perspective Focuses on managing portfolios programs and proj ects to deliver expected business outcome on time and within budget while keeping risks at acceptable levels Operations Perspective Focuses on enabling the ongoing operation of IT environments Components and Activities guide operating procedures service management change management and recovery Security Perspective Focuse s on helping organizations achieve risk management and compliance goals with guidance enabling rigorous methods to describe structure of security and compliance processes systems and personnel Components and Activities assist with assessment control selection and compliance validation with DevSecOps principles and automation Successful Migrations The path to the cloud is a journey to business results AWS has helped hundreds of customers achieve their business goals at every stage of their journey While every organization’s path will be unique there are common patterns approaches and best pract ices that can be implemented to streamline the process 1 Define your approach to cloud computing from business case to strategy to change management to technology 2 Build a solid foundation for your enterprise workloads on AWS by assessing and validating yo ur application portfolio and integrating your unique IT environment with solutions based on AWS cloud services Perspective Areas of Focus ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 6 of 13 3 Design and optimize your business applications to be cloud aware taking direct advantage of the benefits of AWS services 4 Meet your internal and external compliance requirements by developing and implementing automated security policies 
and controls based on proven validated designs Early planning communication and buy in are essential Understanding the forcing function (tim e cost availability etc) is key and will be different for each organization When defining the migration model organizations must have a clear strategy map out a realistic project timeline and limit the number of variables and dependencies for trans itioning on premises applications to the cloud Throughout the project build momentum with key constituents with regular meetings and reporting to review progress and status of the migration project to keep people enthused while also setting realistic ex pectations about the availability timeframe Breaking Down the Economics Understand On Premises Costs Having a clear understanding of your current costs is an important first step of your journey This provides the baseline for defining the migration model that delivers optimal cost efficiency Onpremises data centers have costs associated with the servers storage networking power cooling physical space and IT labor required to support applications and services running in the production environment Although many of these costs will be eliminated or reduced after applications and infrastructure are moved to the AWS platform knowing your current run rate will help determine which applications are good candidates to move to AWS which applications need to be rewrit ten to benefit from cloud efficiencies and which applications should be retired The following questions should be evaluated when calculating the cost of on premises computing: Understanding Costs To build a migration model for optimal efficiency it is important to accurately understand the current costs of running onpremises applications as well as the interim costs incurred during the transition ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 7 of 13 “Georgetown’s modernization strategy is not just about upgrading old systems; it is about changing the way we do business building new partnerships with the community and working to embrace innovation Cloud has been an important component of this Although we thought the primary driver would be cost savings we have found that agility innovation and the opportuni ty to change paths is where the true value of the cloud has impacted our environment “Traditional IT models with heavy customization and sunk costs in capital infrastructures —where 90% of spend is just to keep the trains running —does not give you the opp ortunity to keep up and grow” Beth Ann Bergsmark Interim Deputy CIO and AVP Chief Enterprise Architect Georgetown University  Labor How much do you spend on maintaining your environment (broken disks patching hosts servers going offline etc)?  Network How much bandwidth do you need? What is your bandwidth peak to average ratio? What are you assuming for network gear? What if you need to scale beyond a single rack?  Capacity What is the cost of over provisioning for peak capacity? How do you plan for capacity? How much buffer capacity are you planning on carrying? If small what is your plan if you need to add more? What if you need less capacity? What is your plan to be abl e to scale down costs? How many servers have you added in the past year? Anticipating next year?  Availability / Power Do you have a disaster recovery (DR) facility? What was your power utility bill for your data center(s) last year? Have you budgeted for both average and peak power requirements? 
Do you have separate costs for cooling/ HVAC? Are you accounting for 2N power? If not what happens when you have a power issue to your rack?  Servers What is your average server utilization? How much do you overpr ovision for peak load? What is the cost of over provisioning?  Space Will you run out of data center space? When is your lease up? ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 8 of 13 Migration Cost Considerations To achieve the maximum benefits of adopting the AWS cloud platform new work pract ices that drive efficiency and agility will need to be implemented:  IT staff will need to acquire new skills  New business processes will need to be defined  Existing business processes will need to be modified Migration Bubble AWS uses the term “migration bubble” to describe the time and cost of moving applications and infrastructure from on premises data centers to the AWS platform Although the cloud can provide significant savings costs may increase as you move into the migration bubble It i s important to plan the migration to coincide with hardware retirement license and maintenance expiration and other opportunities to reduce cost The savings and cost avoidance associated with a full all in migration to AWS will allow you to fund the mig ration bubble and even shorten the duration by applying more resources when appropriate Time Figure 1: Migration Bubble Level of Effort The cost of migration has many levers that can be pulled in order to speed up or slow down the process including labor process tooling consulting and technology Each of these has a corresponding cost associated with it based on the level of effort required to move the application to the AWS platform Migration Bubble Planning • • • • • • Planning and Assessment Duplicate Environments Staff Training Migration Consulting 3rd Party Tooling Lease Penalties Operation and Optimization Cost of Migration $ ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 9 of 13 To calculate a realistic total cost of ownership (TCO) you need to understand what these costs are and plan for them Cost considerations include items such as:  Labor During the transition existing staff will need to continue to maintain the production environment learn new skills and decommission the old infrastructure once the migration is complete Additional labor costs in the migration bubble include:  Staff time to plan and assess project scope and project plan to migrate applications and infrastructure  Retaining consulting partners with the expertise to streamline migration of applications and infrastructure as well as training staff with new skills  Due to the general lack of cloud experience for most organization s it is necessary to bring in outside consulting support to help guide the process  Process Penalty fees associated with early termination of contracts may be incurred (facilities software licenses etc) once applications or infrastructure are decommissioned  The cost of tooling to automate the migration of data and virtual machines from on premises to AWS  Technology Duplicate environments will be required to keep production applications/infrastructure available while transitioning to the AWS platform Cost considerations include:  Cost to maintain production environment during migration  Cost of AWS platform comp onents to run new cloud based applications  Licensing of automated migration tools license to accelerate the migration process 
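To make the payback calculation concrete, here is a purely illustrative example; all figures are assumptions rather than benchmarks or typical results. Suppose an on-premises environment costs $100,000 per month to run, the migration bubble adds a one-time $300,000 across planning, duplicate environments, tooling, training, and consulting, and the steady-state AWS run rate after cutover is $60,000 per month. The $40,000 monthly saving then recovers the migration bubble cost in roughly eight months after cutover, and every month beyond that point contributes net savings that can fund further optimization.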
ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 10 of 13 “I wanted to move to a model where we can deliver more to our citizens and r educe the cost of delivering those services to them I wanted a product line that has the ability to scale and grow with my department AWS was an easy fit for us and the way we do business” Chris Chiancone CIO City of McKinney City of McKinney City of McKinney Texas Turns to AWS to Deliver More Advanced Services for Less Money The City of McKinney Texas about 15 miles north of Dallas and home to 155000 people was ranked the No 1 Best Place to live in 2014 by Money Magazine The city’s IT department is going all in on AWS and uses the platform to run a wide range of services and applications such as its land management and records management systems By using AWS the city’s IT department can focus on delivering new and better services for its fast growing population and city employees instead of spending resources buying and maintaining IT infrastructure City of McKinney chose AWS for our ability to scale and grow with the needs of the city’s IT department AWS provides an easy fit for the way the city does business Without having to own the infrastructure the C ity of McKinney has the ability to use cloud resources to address business needs By moving from a CapEx to an OpEx model they can now return funds to critical city projects Migration Options Once y ou understand the current costs of an on premises production system the next step is to identify applications that will benefit from cloud cost and efficiencies Applications are either critical or strategic If they do not fit into either category they should be taken off the priority list Instead categorize these as legacy applications and determine if they need to be replaced or in some cases eliminated Figure 2 illustrates decision points that should be considered in ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 11 of 13 “A university is really a small city with departments running about 1000 diverse small services across at the university We made the decision to go down the cloud journey and have been working with AWS for the past 4 years In building our business case we wanted the ability to give our customers flexible IT services th at were cost neutral “We embraced a cloud first strategy with all new services a built in the cloud In parallel we are migrating legacy services to the AWS platform with the goal of moving 80% of these applications by the end of 2017” Mike Chapple P hD Senior Director IT Services Delivery University of Notre Dame selecting applications to move to the AWS platform focusing on the “6 Rs” — retire retain re host re platform re purchase and re factor Decommission Refactor for AWS Rebuild Application Architecture AWS VM Import Org/Ops Change Do Not Move Move the App Infrastructure Design Build AWS Lift and Shift (Minimal Change) Determine Migration 3rd Party Tools Impact Analysis Management Plan Identify Environment Process Manually Move App and Data Ops Changes Migration and UAT Testing Signoff Operate Discover Assess (Enterprise Architecture and Determine Migration Path Application Lift and Shift Determine Migration Process Plan Migration and Sequencing 3rd Party Migration Tool Tuning Cutover Applications) Vendor S/PaaS (if available) Move the Application Refactor for AWS Recode App Components Manually Move App and Data Architect AWS Environment Replatform (typically legacy applications) Rearchitect 
Application Recode Application and Deploy App Migrate Data Figure 2: Migration Options Applications that deliver increased ROI through reduced operation costs or deliver increased business results should be at the top of the priority list Then you can determine the best migration path for each workload to optimize cost in the migration process ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 12 of 13 Conclusion Many organizations are extending or moving their business applications to AWS to simplify infrastructure management deploy quicker provide greater availability increase agility allow for faster innovation and lower cost Having a clear understanding of existing infrastructure costs the components of your migration bubble and their corresponding costs and projected savings will help you calculate payback time and projected ROI With a long history in enabling enterprises to successfully adopt cloud computing Amazon Web Services delivers a mature set of services specifically designed for the unique security compliance privacy and governance requirements of large organizations With a technology platform that is both broad and deep Professional Services and Support organizations robust training programs and an ecosystem tens ofthousands strong AWS can help you move faster and do more With AWS you can:  Take advantage of more services storage options and security controls than any other cloud platform  Deliver on stringent standards with the broadest set of certifications accreditations and controls in the industry  Get deep assistance with our global cloud focused enterprise professional services support and training teams ArchivedAmazon Web Services – A Practical Guide to Cloud Migration December 2015 Page 13 of 13 Further Reading For additional help please consult the following sources:  The AWS Cloud Adoption Framework http://d0awsstaticcom/whitepapers/aws_cloud_adoption_frameworkp df Contributors The following individuals and organizations contributed to this document:  Blake Chism Practice Manager AWS Public Sector Sales Var  Carina Veksler Public Sector Solutions AWS Public Sector Sales Var
General
Amazon_Aurora_MySQL_Database_Administrators_Handbook_Connection_Management
This version has been archived. For the latest version of this document, visit: https://docs.aws.amazon.com/whitepapers/latest/amazon-aurora-mysql-db-admin-handbook/amazon-aurora-mysql-db-admin-handbook.html

Amazon Aurora MySQL Database Administrator's Handbook
Connection Management
First published January 2018
Updated October 20, 2021

Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers, or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2021, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Contents
Introduction
DNS endpoints
Connection handling in Aurora MySQL and MySQL
Common misconceptions
Best practices
Using smart drivers
DNS caching
Connection management and pooling
Connection scaling
Transaction management and autocommit
Connection handshakes
Load balancing with the reader endpoint
Designing for fault tolerance and quick recovery
Server configuration
Conclusion
Contributors
Further reading
Document revisions

Abstract
This paper outlines the best practices for managing database connections, setting server connection parameters, and configuring client programs, drivers, and connectors. It's a recommended read for Amazon Aurora MySQL Database Administrators (DBAs) and application developers.

Introduction
Amazon Aurora MySQL (Aurora MySQL) is a managed relational database engine, wire-compatible with MySQL 5.6 and 5.7. Most of the drivers, connectors, and tools that you currently use with MySQL can be used with Aurora MySQL with little or no change.

Aurora MySQL database (DB) clusters provide advanced features such as:
• One primary instance that supports read/write operations and up to 15 Aurora Replicas that support read-only operations. Each of the Replicas can be automatically promoted to the primary role if the current primary instance fails.
• A cluster endpoint that automatically follows the primary instance in case of failover.
• A reader endpoint that includes all Aurora Replicas and is automatically updated when Aurora Replicas are added or removed.
• Ability to create custom DNS endpoints containing a user-configured group of database instances within a single cluster.
• Internal server connection pooling and thread multiplexing for improved scalability.
• Near-instantaneous database restarts and crash recovery.
• Access to near real-time cluster metadata that enables application developers to build smart drivers, connecting directly to individual instances based on their read/write or read-only role.

Client-side components (applications, drivers, connectors, and proxies) that use sub-optimal configuration might not be able to react to recovery actions and DB cluster topology changes, or the reaction might be delayed. This can contribute to unexpected downtime and performance issues. To prevent that and make the most of Aurora MySQL features, AWS encourages Database Administrators (DBAs) and application developers to implement the best practices outlined in this whitepaper.

DNS endpoints
An Aurora DB cluster consists of one or more instances and a cluster volume that manages the data for those instances. There are two types of instances:
• Primary instance – Supports read and write statements. Currently there can be one primary instance per DB cluster.
• Aurora Replica – Supports read-only statements. A DB cluster can have up to 15 Aurora Replicas. The Aurora Replicas can be used for read scaling and are automatically used as failover targets in case of a primary instance failure.

Amazon Aurora supports the following types of Domain Name System (DNS) endpoints:
• Cluster endpoint – Connects you to the primary instance and automatically follows the primary instance in case of failover, that is, when the current primary instance is demoted and one of the Aurora Replicas is promoted in its place.
• Reader endpoint – Includes all Aurora Replicas in the DB cluster under a single DNS CNAME. You can use the reader endpoint to implement DNS round-robin load balancing for read-only connections.
• Instance endpoint – Each instance in the DB cluster has its own individual endpoint. You can use this endpoint to connect directly to a specific instance.
• Custom endpoints – User-defined DNS endpoints containing a selected group of instances from a given cluster.

For more information, refer to the Overview of Amazon Aurora page.
Introduction
Amazon Aurora MySQL (Aurora MySQL) is a managed relational database engine, wire-compatible with MySQL 5.6 and 5.7. Most of the drivers, connectors, and tools that you currently use with MySQL can be used with Aurora MySQL with little or no change.

Aurora MySQL database (DB) clusters provide advanced features such as:
• One primary instance that supports read/write operations and up to 15 Aurora Replicas that support read-only operations. Each of the Replicas can be automatically promoted to the primary role if the current primary instance fails.
• A cluster endpoint that automatically follows the primary instance in case of failover.
• A reader endpoint that includes all Aurora Replicas and is automatically updated when Aurora Replicas are added or removed.
• Ability to create custom DNS endpoints containing a user-configured group of database instances within a single cluster.
• Internal server connection pooling and thread multiplexing for improved scalability.
• Near-instantaneous database restarts and crash recovery.
• Access to near-real-time cluster metadata that enables application developers to build smart drivers, connecting directly to individual instances based on their read/write or read-only role.

Client-side components (applications, drivers, connectors, and proxies) that use suboptimal configuration might not be able to react to recovery actions and DB cluster topology changes, or the reaction might be delayed. This can contribute to unexpected downtime and performance issues. To prevent that and make the most of Aurora MySQL features, AWS encourages Database Administrators (DBAs) and application developers to implement the best practices outlined in this whitepaper.

DNS endpoints
An Aurora DB cluster consists of one or more instances and a cluster volume that manages the data for those instances. There are two types of instances:
• Primary instance – Supports read and write statements. Currently, there can be one primary instance per DB cluster.
• Aurora Replica – Supports read-only statements. A DB cluster can have up to 15 Aurora Replicas. The Aurora Replicas can be used for read scaling and are automatically used as failover targets in case of a primary instance failure.

Amazon Aurora supports the following types of Domain Name System (DNS) endpoints:
• Cluster endpoint – Connects you to the primary instance and automatically follows the primary instance in case of failover, that is, when the current primary instance is demoted and one of the Aurora Replicas is promoted in its place.
• Reader endpoint – Includes all Aurora Replicas in the DB cluster under a single DNS CNAME. You can use the reader endpoint to implement DNS round-robin load balancing for read-only connections.
• Instance endpoint – Each instance in the DB cluster has its own individual endpoint. You can use this endpoint to connect directly to a specific instance.
• Custom endpoints – User-defined DNS endpoints containing a selected group of instances from a given cluster.

For more information, refer to the Overview of Amazon Aurora page.
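To illustrate the endpoint types listed above, the following minimal Java (JDBC) sketch opens one connection through the cluster endpoint and one through the reader endpoint, then checks the role of the instance it landed on using the @@innodb_read_only session variable described later in this paper. The endpoint names, credentials, and use of a MySQL JDBC driver are placeholders for illustration, not values from this document.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EndpointRoleCheck {
    // Hypothetical endpoint names; replace with your own cluster's endpoints.
    private static final String CLUSTER = "jdbc:mysql://mycluster.cluster-xxxx.us-west-2.rds.amazonaws.com:3306/mydb";
    private static final String READER  = "jdbc:mysql://mycluster.cluster-ro-xxxx.us-west-2.rds.amazonaws.com:3306/mydb";

    public static void main(String[] args) throws Exception {
        printRole("cluster endpoint", CLUSTER);
        printRole("reader endpoint", READER);
    }

    private static void printRole(String label, String url) throws Exception {
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT @@innodb_read_only")) {
            rs.next();
            // 0 = primary (read/write) instance, 1 = Aurora Replica (read-only)
            System.out.println(label + " -> innodb_read_only=" + rs.getInt(1));
        }
    }
}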
Connection handling in Aurora MySQL and MySQL
MySQL Community Edition manages connections in a one-thread-per-connection fashion. This means that each individual user connection receives a dedicated operating system thread in the mysqld process. Issues with this type of connection handling include:
• Relatively high memory use when there is a large number of user connections, even if the connections are completely idle.
• Higher internal server contention and context switching overhead when working with thousands of user connections.

Aurora MySQL supports a thread pool approach that addresses these issues. You can characterize the thread pool approach as follows:
• It uses thread multiplexing, where a number of worker threads can switch between user sessions (connections). A worker thread is not fixed or dedicated to a single user session. Whenever a connection is not active (for example, is idle, waiting for user input, waiting for I/O, and so on), the worker thread can switch to another connection and do useful work. You can think of worker threads as CPU cores in a multi-core system. Even though you only have a few cores, you can easily run hundreds of programs simultaneously because they're not all active at the same time. This highly efficient approach means that Aurora MySQL can handle thousands of concurrent clients with just a handful of worker threads.
• The thread pool automatically scales itself. The Aurora MySQL database process continuously monitors its thread pool state and launches new workers or destroys existing ones as needed. This is transparent to the user and doesn't need any manual configuration.

Server thread pooling reduces the server-side cost of maintaining connections. However, it doesn't eliminate the cost of setting up these connections in the first place. Opening and closing connections isn't as simple as sending a single TCP packet. For busy workloads with short-lived connections (for example, key-value or online transaction processing (OLTP)), consider using an application-side connection pool.

The following is a network packet trace for a MySQL connection handshake taking place between a client and a MySQL-compatible server located in the same Availability Zone:

04:23:29.547316 IP client.32918 > server.mysql: tcp 0
04:23:29.547478 IP server.mysql > client.32918: tcp 0
04:23:29.547496 IP client.32918 > server.mysql: tcp 0
04:23:29.547823 IP server.mysql > client.32918: tcp 78
04:23:29.547839 IP client.32918 > server.mysql: tcp 0
04:23:29.547865 IP client.32918 > server.mysql: tcp 191
04:23:29.547993 IP server.mysql > client.32918: tcp 0
04:23:29.548047 IP server.mysql > client.32918: tcp 11
04:23:29.548091 IP client.32918 > server.mysql: tcp 37
04:23:29.548361 IP server.mysql > client.32918: tcp 99
04:23:29.587272 IP client.32918 > server.mysql: tcp 0

This is a packet trace for closing the connection:

04:23:37.117523 IP client.32918 > server.mysql: tcp 13
04:23:37.117818 IP server.mysql > client.32918: tcp 56
04:23:37.117842 IP client.32918 > server.mysql: tcp 0

As you can see, even the simple act of opening and closing a single connection involves an exchange of several network packets. The connection overhead becomes more pronounced when you consider SQL statements issued by drivers as part of connection setup (for example, SET variable_name = value commands used to set session-level configuration). Server-side thread pooling doesn't eliminate this type of overhead.
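You can also observe this overhead from the application side. The following sketch, an illustration not taken from the original document, times how long it takes to open a connection versus how long a trivial statement takes on that already-open connection; the endpoint and credentials are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ConnectionCostProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and credentials; replace with your own.
        String url = "jdbc:mysql://mycluster.cluster-xxxx.us-west-2.rds.amazonaws.com:3306/mydb";

        long t0 = System.nanoTime();
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            long t1 = System.nanoTime();
            try (Statement st = conn.createStatement()) {
                st.execute("SELECT 1");   // trivial statement on an already-open connection
            }
            long t2 = System.nanoTime();
            System.out.printf("connection setup: %.1f ms%n", (t1 - t0) / 1e6);
            System.out.printf("simple query:     %.1f ms%n", (t2 - t1) / 1e6);
        }
    }
}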
Common misconceptions
The following are common misconceptions for database connection management.
• If the server uses connection pooling, you don't need a pool on the application side. As explained previously, this isn't true for workloads where connections are opened and torn down very frequently and clients run relatively few statements per connection. You might not need a connection pool if your connections are long-lived, meaning that connection activity time is much longer than the time required to open and close the connection. You can run a packet trace with tcpdump and see how many packets you need to open or close connections versus how many packets you need to run your queries within those connections. Even if the connections are long-lived, you can still benefit from using a connection pool to protect the database against connection surges, that is, large bursts of new connection attempts.
• Idle connections don't use memory. This isn't true because the operating system and the database process both allocate an in-memory descriptor for each user connection. What is typically true is that Aurora MySQL uses less memory than MySQL Community Edition to maintain the same number of connections. However, memory usage for idle connections is still not zero, even with Aurora MySQL. The general best practice is to avoid opening significantly more connections than you need.
• Downtime depends entirely on database stability and database features. This isn't true because the application design and configuration play an important role in determining how fast user traffic can recover following a database event. For more details, refer to the Best practices section of this whitepaper.

Best practices
The following are best practices for managing database connections and configuring connection drivers and pools.

Using smart drivers
The cluster and reader endpoints abstract the role changes (primary instance promotion and demotion) and topology changes (addition and removal of instances) occurring in the DB cluster. However, DNS updates are not instantaneous. In addition, they can sometimes contribute to a slightly longer delay between the time a database event occurs and the time it's noticed and handled by the application.

Aurora MySQL exposes near-real-time metadata about DB instances in the INFORMATION_SCHEMA.REPLICA_HOST_STATUS table. Here is an example of a query against the metadata table:

mysql> select server_id, if(session_id = 'MASTER_SESSION_ID', 'writer', 'reader') as role, replica_lag_in_milliseconds from information_schema.replica_host_status;
+-------------------+--------+-----------------------------+
| server_id         | role   | replica_lag_in_milliseconds |
+-------------------+--------+-----------------------------+
| aurora-node-usw2a | writer |                           0 |
| aurora-node-usw2b | reader |          19.253999710083008 |
+-------------------+--------+-----------------------------+
2 rows in set (0.00 sec)

Notice that the table contains cluster-wide metadata. You can query the table on any instance in the DB cluster.

For the purpose of this whitepaper, a smart driver is a database driver or connector with the ability to read DB cluster topology from the metadata table. It can route new connections to individual instance endpoints without relying on high-level cluster endpoints. A smart driver is also typically capable of load balancing read-only connections across the available Aurora Replicas in a round-robin fashion.

The MariaDB Connector/J is an example of a third-party Java Database Connectivity (JDBC) smart driver with native support for Aurora MySQL DB clusters. Application developers can draw inspiration from the MariaDB driver to build drivers and connectors for languages other than Java. Refer to the MariaDB Connector/J page for details.

The AWS JDBC Driver for MySQL (preview) is a client driver designed for the high availability of Aurora MySQL. The AWS JDBC Driver for MySQL is drop-in compatible with the MySQL Connector/J driver. The AWS JDBC Driver for MySQL takes full advantage of the failover capabilities of Aurora MySQL. It maintains a cache of the DB cluster topology and each DB instance's role, either primary DB instance or Aurora Replica. It uses this topology to bypass the delays caused by DNS resolution so that a connection to the new primary DB instance is established as fast as possible. Refer to the AWS JDBC Driver for MySQL GitHub repository for details.

If you're using a smart driver, the recommendations listed in the following sections still apply. A smart driver can automate and abstract certain layers of database connectivity. However, it doesn't automatically configure itself with optimal settings or automatically make the application resilient to failures. For example, when using a smart driver, you still need to ensure that the connection validation and recycling functions are configured correctly, there's no excessive DNS caching in the underlying system and network layers, transactions are managed correctly, and so on.

It's a good idea to evaluate the use of smart drivers in your setup. Note that if a third-party driver contains Aurora MySQL–specific functionality, it doesn't mean that it has been officially tested, validated, or certified by AWS. Also note that due to the advanced built-in features and higher overall complexity, smart drivers are likely to receive updates and bug fixes more frequently than traditional (bare-bones) drivers. You should regularly review the driver's release notes and use the latest available version whenever possible.
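The following minimal Java sketch shows the kind of topology lookup a smart driver performs internally: it queries the REPLICA_HOST_STATUS metadata table shown above and derives instance endpoints from the server IDs. The endpoint-suffix construction and credentials are assumptions for illustration (instance endpoints typically follow the pattern <server_id> plus the cluster's DNS suffix); they are not part of the original document.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class TopologyLookup {
    public static void main(String[] args) throws Exception {
        // Hypothetical values; replace with your cluster endpoint and instance DNS suffix.
        String clusterUrl = "jdbc:mysql://mycluster.cluster-xxxx.us-west-2.rds.amazonaws.com:3306/mydb";
        String instanceSuffix = ".xxxx.us-west-2.rds.amazonaws.com";

        String sql = "SELECT server_id, "
                   + "IF(session_id = 'MASTER_SESSION_ID', 'writer', 'reader') AS role "
                   + "FROM information_schema.replica_host_status";

        String writer = null;
        List<String> readers = new ArrayList<>();

        try (Connection conn = DriverManager.getConnection(clusterUrl, "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                String endpoint = rs.getString("server_id") + instanceSuffix;
                if ("writer".equals(rs.getString("role"))) {
                    writer = endpoint;
                } else {
                    readers.add(endpoint);
                }
            }
        }

        System.out.println("writer instance endpoint: " + writer);
        System.out.println("reader instance endpoints: " + readers);
        // A smart driver would open connections directly against these instance
        // endpoints and refresh the topology periodically.
    }
}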
DNS caching
Unless you use a smart database driver, you depend on DNS record updates and DNS propagation for failovers, instance scaling, and load balancing across Aurora Replicas. Currently, Aurora DNS zones use a short Time-To-Live (TTL) of five seconds. Ensure that your network and client configurations don't further increase the DNS cache TTL. Remember that DNS caching can occur anywhere from your network layer, through the operating system, to the application container. For example, Java virtual machines (JVMs) are notorious for caching DNS indefinitely unless configured otherwise; a configuration sketch follows the examples below.

Here are some examples of issues that can occur if you don't follow DNS caching best practices:
• After a new primary instance is promoted during a failover, applications continue to send write traffic to the old instance. Data-modifying statements will fail because that instance is no longer the primary instance.
• After a DB instance is scaled up or down, applications are unable to connect to it. Due to DNS caching, applications continue to use the old IP address of that instance, which is no longer valid.
• Aurora Replicas can experience unequal utilization, for example, one DB instance receiving significantly more traffic than the others.
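As an illustration of the JVM point above, the following sketch caps the JVM's DNS cache so that lookups respect a short TTL. Setting the property programmatically before the first lookup is one option assumed here; the same networkaddress.cache.ttl security property can also be set in the JVM's java.security file, and some environments use the sun.net.inetaddr.ttl system property instead.

import java.security.Security;

public class DnsTtlConfig {
    public static void main(String[] args) {
        // Cache successful DNS lookups for at most 5 seconds, matching the TTL
        // of Aurora DNS records. Must run before the first name lookup.
        Security.setProperty("networkaddress.cache.ttl", "5");

        // Avoid caching failed (negative) lookups for long periods.
        Security.setProperty("networkaddress.cache.negative.ttl", "0");

        // ... create your DataSource / connection pool after this point ...
    }
}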
Connection management and pooling
Always close database connections explicitly instead of relying on the development framework or language destructors to do it. There are situations, especially in container-based or code-as-a-service scenarios, when the underlying code container isn't immediately destroyed after the code completes. In such cases, you might experience database connection leaks, where connections are left open and continue to hold resources (for example, memory and locks).

If you can't rely on client applications (or interactive clients) to close idle connections, use the server's wait_timeout and interactive_timeout parameters to configure idle connection timeout. The default timeout value is fairly high at 28800 seconds (8 hours). You should tune it down to a value that's acceptable in your environment. Refer to the MySQL Reference Manual for details.

Consider using connection pooling to protect the database against connection surges. Also consider connection pooling if the application opens large numbers of connections (for example, thousands or more per second) and the connections are short-lived, that is, the time required for connection setup and teardown is significant compared to the total connection lifetime. If your development framework or language doesn't support connection pooling, you can use a connection proxy instead. Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (Amazon RDS) that makes applications more scalable, more resilient to database failures, and more secure. ProxySQL, MaxScale, and ScaleArc are examples of third-party proxies compatible with the MySQL protocol. Refer to the Connection scaling section of this document for more notes on connection pools versus proxies.

By using Amazon RDS Proxy, you can allow your applications to pool and share database connections to improve their ability to scale. Amazon RDS Proxy makes applications more resilient to database failures by automatically connecting to a standby DB instance while preserving application connections.

AWS recommends the following for configuring connection pools and proxies (a configuration sketch follows this list):
• Check and validate connection health when the connection is borrowed from the pool. The validation query can be as simple as SELECT 1. However, in Amazon Aurora you can also use connection checks that return a different value depending on whether the instance is a primary instance (read/write) or an Aurora Replica (read-only). For example, you can use the @@innodb_read_only variable to determine the instance role. If the variable value is TRUE, you're on an Aurora Replica.
• Check and validate connections periodically, even when they're not borrowed. It helps detect and clean up broken or unhealthy connections before an application thread attempts to use them.
• Don't let connections remain in the pool indefinitely. Recycle connections by closing and reopening them periodically (for example, every 15 minutes), which frees the resources associated with these connections. It also helps prevent dangerous situations such as runaway queries or zombie connections that clients have abandoned. This recommendation applies to all connections, not just idle ones.
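A minimal sketch of how these three recommendations could map onto a pool configuration, using HikariCP (4.x assumed) as a representative Java connection pool. HikariCP is not mentioned in the original document; it stands in for whatever pool your framework provides, and the endpoint, credentials, and sizing values are placeholders.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolSetup {
    public static HikariDataSource createPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mysql://mycluster.cluster-xxxx.us-west-2.rds.amazonaws.com:3306/mydb");
        config.setUsername("user");
        config.setPassword("password");

        // Size the pool to what the application actually needs.
        config.setMaximumPoolSize(20);
        config.setMinimumIdle(5);

        // Validate connections when they are borrowed from the pool.
        // SELECT @@innodb_read_only also reveals whether you are on a replica.
        config.setConnectionTestQuery("SELECT @@innodb_read_only");

        // Periodically touch idle connections so broken ones are detected early.
        config.setKeepaliveTime(120_000);      // 2 minutes

        // Don't keep connections forever; recycle them to free server resources.
        config.setMaxLifetime(900_000);        // 15 minutes
        config.setIdleTimeout(300_000);        // 5 minutes

        return new HikariDataSource(config);
    }
}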
Connection scaling
The most common technique for scaling web service capacity is to add or remove application servers (instances) in response to changes in user traffic. Each application server can use a database connection pool. This approach causes the total number of database connections to grow proportionally with the number of application instances. For example, 20 application servers configured with 200 database connections each would require a total of 4,000 database connections. If the application pool scales up to 200 instances (for example, during peak hours), the total connection count will reach 40,000. Under a typical web application workload, most of these connections are likely idle. In extreme cases, this can limit database scalability: idle connections do take server resources, and you're opening significantly more of them than you need. Also, the total number of connections is not easy to control because it's not something you configure directly, but rather depends on the number of application servers.

You have two options in this situation:
• Tune the connection pools on application instances. Reduce the number of connections in the pool to the acceptable minimum. This can be a stop-gap solution, but it might not be a long-term solution as your application server fleet continues to grow.
• Introduce a connection proxy between the database and the application. On one side, the proxy connects to the database with a fixed number of connections. On the other side, the proxy accepts application connections and can provide additional features, such as query caching, connection buffering, query rewriting/routing, and load balancing.

Connection proxies:
• Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon RDS that makes applications more scalable, more resilient to database failures, and more secure. Amazon RDS Proxy reduces the memory and CPU overhead for connection management on the database.
• Using Amazon RDS Proxy, you can handle unpredictable surges in database traffic that otherwise might cause issues due to oversubscribing connections or creating new connections at a fast rate. To protect the database against oversubscription, you can control the number of database connections that are created.
• Each RDS proxy performs connection pooling for the writer instance of its associated Amazon RDS or Aurora database. Connection pooling is an optimization that reduces the overhead associated with opening and closing connections and with keeping many connections open simultaneously. This overhead includes memory needed to handle each new connection. It also involves CPU overhead to close each connection and open a new one, such as Transport Layer Security/Secure Sockets Layer (TLS/SSL) handshaking, authentication, negotiating capabilities, and so on. Connection pooling simplifies your application logic. You don't need to write application code to minimize the number of simultaneous open connections. Connection pooling also cuts down on the amount of time a user must wait to establish a connection to the database.
• To perform load balancing for read-intensive workloads, you can create a read-only endpoint for RDS Proxy. That endpoint passes connections to the reader endpoint of the cluster. That way, your proxy connections can take advantage of Aurora read scalability.
• ProxySQL, MaxScale, and ScaleArc are examples of third-party proxies compatible with the MySQL protocol. For even greater scalability and availability, you can use multiple proxy instances behind a single DNS endpoint.
Transaction management and autocommit
With autocommit enabled, each SQL statement runs within its own transaction. When the statement ends, the transaction ends as well. Between statements, the client connection is not in transaction. If you need a transaction to remain open for more than one statement, you explicitly begin the transaction, run the statements, and then commit or roll back the transaction.

With autocommit disabled, the connection is always in transaction. You can commit or roll back the current transaction, at which point the server immediately opens a new one. Refer to the MySQL Reference Manual for details.

Running with autocommit disabled is not recommended because it encourages long-running transactions where they're not needed. Open transactions block a server's internal garbage collection mechanisms, which are essential to maintaining optimal performance. In extreme cases, garbage collection backlog leads to excessive storage consumption, elevated CPU utilization, and query slowness.

Recommendations:
• Always run with autocommit mode enabled. Set the autocommit parameter to 1 on the database side (which is the default) and on the application side (which might not be the default).
• Always double-check the autocommit settings on the application side. For example, Python drivers such as MySQLdb and PyMySQL disable autocommit by default.
• Manage transactions explicitly by using BEGIN/START TRANSACTION and COMMIT/ROLLBACK statements. You should start transactions when you need them and commit as soon as the transactional work is done.

Note that these recommendations are not specific to Aurora MySQL. They apply to MySQL and other databases that use the InnoDB storage engine.

Long transactions and garbage collection backlog are easy to monitor (a combined monitoring sketch follows the note below):
• You can obtain the metadata of currently running transactions from the INFORMATION_SCHEMA.INNODB_TRX table. The TRX_STARTED column contains the transaction start time, and you can use it to calculate transaction age. A transaction is worth investigating if it has been running for several minutes or more. Refer to the MySQL Reference Manual for details about the table.
• You can read the size of the garbage collection backlog from InnoDB's trx_rseg_history_len counter in the INFORMATION_SCHEMA.INNODB_METRICS table. Refer to the MySQL Reference Manual for details about the table. The larger the counter value is, the more severe the impact might be in terms of query performance, CPU usage, and storage consumption. Values in the range of tens of thousands indicate that the garbage collection is somewhat delayed. Values in the range of millions or tens of millions might be dangerous and should be investigated.

Note – In Amazon Aurora, all DB instances use the same storage volume, which means that the garbage collection is cluster-wide and not specific to each instance. Consequently, a runaway transaction on one instance can impact all instances. Therefore, you should monitor long transactions on all DB instances.
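The sketch below combines the two checks described above: it runs the INNODB_TRX and INNODB_METRICS queries on one instance and prints anything worth investigating. The thresholds (5 minutes, 10,000) and the connection details are assumptions for illustration, not values prescribed by this document.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PurgeLagMonitor {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and credentials; run the checks against each DB instance.
        String url = "jdbc:mysql://mycluster.cluster-xxxx.us-west-2.rds.amazonaws.com:3306/mydb";

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement()) {

            // Transactions that have been open for more than 5 minutes.
            ResultSet trx = st.executeQuery(
                "SELECT trx_id, trx_started, "
              + "TIMESTAMPDIFF(SECOND, trx_started, NOW()) AS age_seconds "
              + "FROM information_schema.innodb_trx "
              + "WHERE trx_started < NOW() - INTERVAL 5 MINUTE");
            while (trx.next()) {
                System.out.printf("long transaction %s, age %d s%n",
                        trx.getString("trx_id"), trx.getLong("age_seconds"));
            }

            // Garbage collection (purge) backlog.
            ResultSet hist = st.executeQuery(
                "SELECT `count` AS backlog FROM information_schema.innodb_metrics "
              + "WHERE name = 'trx_rseg_history_len'");
            if (hist.next() && hist.getLong("backlog") > 10_000) {
                System.out.println("purge backlog is elevated: " + hist.getLong("backlog"));
            }
        }
    }
}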
Connection handshakes
A lot of work can happen behind the scenes when an application connector or a graphical user interface (GUI) tool opens a new database session. Drivers and client tools commonly run a series of statements to set up session configuration (for example, SET SESSION variable = value). This increases the cost of creating new connections and delays when your application can start issuing queries.

The cost of connection handshakes becomes even more important if your applications are very sensitive to latency. OLTP or key-value workloads that expect single-digit millisecond latency can be visibly impacted if each connection is expensive to open. For example, if the driver runs six statements to set up a connection and each statement takes just one millisecond to run, your application will be delayed by six milliseconds before it issues its first query.

Recommendations:
• Use the Aurora MySQL Advanced Audit, the General Query Log, or network-level packet traces (for example, with tcpdump) to obtain a record of statements run during a connection handshake. Whether or not you're experiencing connection or latency issues, you should be familiar with the internal operations of your database driver.
• For each handshake statement, you should be able to explain its purpose and describe its impact on queries you'll subsequently run on that connection.
• Each handshake statement requires at least one network roundtrip and will contribute to higher overall session latency. If the number of handshake statements appears to be significant relative to the number of statements doing actual work, determine if you can disable any of the handshake statements. Consider using connection pooling to reduce the number of connection handshakes.

Load balancing with the reader endpoint
Because the reader endpoint contains all Aurora Replicas, it can provide DNS-based round-robin load balancing for new connections. Every time you resolve the reader endpoint, you'll get an instance IP that you can connect to, chosen in round-robin fashion. DNS load balancing works at the connection level (not the individual query level). You must keep resolving the endpoint without caching DNS to get a different instance IP on each resolution. If you only resolve the endpoint once and then keep the connection in your pool, every query on that connection goes to the same instance. If you cache DNS, you receive the same instance IP each time you resolve the endpoint.

You can use Amazon RDS Proxy to create additional read-only endpoints for an Aurora cluster. These endpoints perform the same kind of load balancing as the Aurora reader endpoint. Applications can reconnect more quickly to the proxy endpoints than the Aurora reader endpoint if reader instances become unavailable.

If you don't follow best practices, these are examples of issues that can occur:
• Unequal use of Aurora Replicas, for example, one of the Aurora Replicas is receiving most or all of the traffic while the other Aurora Replicas sit idle.
• After you add or scale an Aurora Replica, it doesn't receive traffic, or it begins to receive traffic after an unexpectedly long delay.
• After you remove an Aurora Replica, applications continue to send traffic to that instance.

For more information, refer to the DNS endpoints and DNS caching sections of this document.
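To confirm that round-robin resolution is working (and that caching isn't defeating it), you can resolve the reader endpoint repeatedly and watch the returned addresses rotate. The following sketch uses only standard Java name resolution; the endpoint name is a placeholder, and it assumes the JVM DNS cache TTL has been capped as described in the DNS caching section.

import java.net.InetAddress;

public class ReaderEndpointResolution {
    public static void main(String[] args) throws Exception {
        // Hypothetical reader endpoint; replace with your cluster's reader endpoint.
        String readerEndpoint = "mycluster.cluster-ro-xxxx.us-west-2.rds.amazonaws.com";

        // Resolve several times; with two or more Aurora Replicas and no DNS
        // caching, the returned IP should change between resolutions.
        for (int i = 0; i < 5; i++) {
            InetAddress address = InetAddress.getByName(readerEndpoint);
            System.out.println("resolution " + (i + 1) + ": " + address.getHostAddress());
            Thread.sleep(6000);   // wait longer than the 5-second record TTL
        }
    }
}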
Designing for fault tolerance and quick recovery
In large-scale database operations, you're statistically more likely to experience issues such as connection interruptions or hardware failures. You must also take operational actions more frequently, such as scaling, adding, or removing DB instances, and performing software upgrades. The only scalable way of addressing this challenge is to assume that issues and changes will occur and design your applications accordingly.

Examples:
• If Aurora MySQL detects that the primary instance has failed, it can promote a new primary instance and fail over to it, which typically happens within 30 seconds. Your application should be designed to recognize the change quickly and without manual intervention.
• If you create additional Aurora Replicas in an Aurora DB cluster, your application should automatically recognize the new Aurora Replicas and send traffic to them.
• If you remove instances from a DB cluster, your application should not try to connect to them.

Test your applications extensively and prepare a list of assumptions about how the application should react to database events. Then experimentally validate the assumptions. If you don't follow best practices, database events (for example, failovers, scaling, and software upgrades) might result in longer-than-expected downtime. For example, you might notice that a failover took 30 seconds (per the DB cluster's event notifications), but the application remained down for much longer.

Server configuration
There are two major server configuration variables worth mentioning in the context of this whitepaper: max_connections and max_connect_errors.

Configuration variable max_connections
The configuration variable max_connections limits the number of database connections per Aurora DB instance. The best practice is to set it slightly higher than the maximum number of connections you expect to open on each instance.

If you also enabled performance_schema, be extra careful with the setting. The Performance Schema memory structures are sized automatically based on server configuration variables, including max_connections. The higher you set the variable, the more memory Performance Schema uses. In extreme cases, this can lead to out-of-memory issues on smaller instance types.

Note for T2 and T3 instance families: Using Performance Schema on T2 and T3 Aurora DB instances with less than 8 GB of memory isn't recommended. To reduce the risk of out-of-memory issues on T2 and T3 instances:
• Don't enable Performance Schema.
• If you must use Performance Schema, leave max_connections at the default value.
• Disable Performance Schema if you plan to increase max_connections to a value significantly greater than the default value.

Refer to the MySQL Reference Manual for details about the max_connections variable.
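One way to sanity-check the "slightly higher than you need" guidance is to compare the configured limit with the connection counts the server has actually seen. The following sketch, an illustration not taken from the original document, reads max_connections together with the Threads_connected and Max_used_connections status variables; the endpoint and credentials are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectionHeadroomCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and credentials; check each DB instance.
        String url = "jdbc:mysql://mycluster.cluster-xxxx.us-west-2.rds.amazonaws.com:3306/mydb";

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement()) {

            ResultSet rs = st.executeQuery("SELECT @@max_connections");
            rs.next();
            long limit = rs.getLong(1);

            rs = st.executeQuery(
                "SHOW GLOBAL STATUS WHERE Variable_name IN "
              + "('Threads_connected', 'Max_used_connections')");
            long current = 0, peak = 0;
            while (rs.next()) {
                long value = Long.parseLong(rs.getString(2));
                if ("Threads_connected".equals(rs.getString(1))) {
                    current = value;
                } else {
                    peak = value;
                }
            }

            System.out.printf("max_connections=%d, current=%d, peak=%d%n", limit, current, peak);
            // If the peak is close to the limit, investigate before simply raising the limit.
        }
    }
}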
Configuration variable max_connect_errors
The configuration variable max_connect_errors determines how many successive interrupted connection requests are permitted from a given client host. If the client host exceeds the number of successive failed connection attempts, the server blocks it. Further connection attempts from that client yield an error:

Host 'host_name' is blocked because of many connection errors.
Unblock with 'mysqladmin flush-hosts'

A common (but incorrect) practice is to set the parameter to a very high value to avoid client connectivity issues. This practice isn't recommended because it:
• Allows application owners to tolerate connection problems rather than identify and resolve the underlying cause. Connection issues can impact your application health, so they should be resolved rather than ignored.
• Can hide real threats, for example, someone actively trying to break into the server.

If you experience "host is blocked" errors, increasing the value of the max_connect_errors variable isn't the correct response. Instead, investigate the server's diagnostic counters in the aborted_connects status variable and the host_cache table. Then use the information to identify and fix clients that run into connection issues. Also note that this parameter has no effect if skip_name_resolve is set to 1 (default).

Refer to the MySQL Reference Manual for details on the following:
• max_connect_errors variable
• "Host is blocked" error
• aborted_connects status variable
• host_cache table

Conclusion
Understanding and implementing connection management best practices is critical to achieve scalability, reduce downtime, and ensure smooth integration between the application and database layers. You can apply most of the recommendations provided in this whitepaper with little to no engineering effort. The guidance provided in this whitepaper should help you introduce improvements in your current and future application deployments using Aurora MySQL DB clusters.

Contributors
Contributors to this document include:
• Szymon Komendera, Database Engineer, Amazon Aurora
• Samuel Selvan, Database Specialist Solutions Architect, Amazon Web Services

Further reading
For additional information, refer to:
• Aurora on Amazon RDS User Guide
• Communication Errors and Aborted Connections in MySQL Reference Manual

Document revisions
• October 20, 2021 – Minor content updates to follow new style guide and hyperlinks
• July 2021 – Minor content updates to the following topics: Smart Drivers, Connection Management and Pooling, and Connection Scaling
• March 2019 – Minor content updates to the following topics: Introduction, DNS Endpoints, and Server Configuration
• January 2018 – First publication
General
A_Platform_for_Computing_at_the_Mobile_Edge_Joint_Solution_with_HPE_Saguna_and_AWS
"ArchivedA Platform for Computing at the Mobile Edge: Joint Solution with HPE Saguna and AWS Februar(...TRUNCATED)
General
Amazon_Elastic_File_System_Choosing_Between_Different_Throughput_and_Performance_Mode
"This paper has been archived For the latest technical content refer t o the AWS Wh i t epapers & Gu(...TRUNCATED)
General
5_Ways_the_Cloud_Can_Drive_Economic_Development
"Archived5 Ways the Cloud Can Drive Economic Development August 2018 This paper has been archived Fo(...TRUNCATED)
General
10_Considerations_for_a_Cloud_Procurement
"Archived10 Considerations for a Cloud Procurement March 2017 This version has been archived For the(...TRUNCATED)
General
Active_Directory_Domain_Services_on_AWS
"This version has been archived For the latest version of this document visit: https://docsawsamazon(...TRUNCATED)
General
Amazon_EC2_Reserved_Instances_and_Other_Reservation_Models
"Amazon EC2 Reserved Instances and Other AWS Reservation Models AWS Whitepaper Amazon EC2 Reserved I(...TRUNCATED)
General
AWS_Serverless_MultiTier_Architectures_Using_Amazon_API_Gateway_and_AWS_Lambda
"AWS Serverless Multi Tier Architectures With Amazon API Gateway and AWS Lambda First Published Nove(...TRUNCATED)
General