ID,Content
23andMe Case Study _ Life Sciences _ AWS.txt,"23andMe Innovates Drug and Therapeutic Discovery with HPC on AWS

Genomics and biotechnology company 23andMe provides direct-to-customer genetic testing, giving customers valuable insights into their genetics. 23andMe needed more scalability and flexibility in its high-performance computing (HPC) to manage multiple petabytes of data efficiently. The company had been using an on-premises solution but began using Amazon Web Services (AWS) in 2016 to store important data. In 2021, the company made a full migration to the cloud, a process that took only 4 months. Since adopting AWS HPC services, including Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload, and AWS Batch, which lets developers, scientists, and engineers easily and efficiently run hundreds of thousands of batch computing jobs on AWS, 23andMe has increased its scalability, flexibility, and cost optimization.

About 23andMe

Headquartered in California, 23andMe is known for its at-home DNA collection kits. The company also uses its database of genetic information to further its understanding of biology and therapeutics and to develop new drugs and therapies. Founded in 2006, 23andMe has collected an enormous amount of data and generated millions of lines of code for its research and therapeutics. It uses this data for regression analysis, genome-wide association studies, and general correlation studies across datasets. The genetic testing market has been gaining momentum because of the increased prevalence of genetic diseases, better public awareness of the benefits of early detection, and falling costs of genetic sequencing over the past 16 years. 23andMe has crowdsourced billions of data points for study, resulting in scientific discoveries.

Embracing the Cloud for Secure Data Storage

23andMe initially used an on-premises facility, but as its data storage and compute needs grew, the company began looking to the cloud for greater scalability and flexibility. Additionally, the company sought to reduce the human operating costs of facility maintenance and accelerate its ability to adopt new hardware and technology by transitioning to the cloud. In 2016, the company began using Amazon Simple Storage Service (Amazon S3), an object storage service that offers scalability, data availability, security, and performance. “If we care about a piece of data, we store it in Amazon S3,” says Arnold de Leon, senior program manager in charge of cloud spending at 23andMe. “It is an excellent way of securing data with regard to data durability.” 23andMe uses the Amazon S3 Intelligent-Tiering storage class to automatically migrate data to the most cost-effective access tier when access patterns change.
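Routing objects into Intelligent-Tiering is typically done either at upload time or with a bucket lifecycle rule. A minimal sketch of the lifecycle-rule approach with boto3 follows; this is illustrative, not 23andMe's actual configuration, and the bucket name and rule ID are hypothetical placeholders.

```python
# A minimal sketch: apply an S3 Lifecycle rule that transitions new objects
# into the Intelligent-Tiering storage class, which then moves objects between
# access tiers automatically as access patterns change.
# The bucket name and rule ID are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-research-data",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```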
As it started using cloud services, 23andMe tried a hybrid solution, running workloads in its data center and on AWS concurrently. This solution provided some scalability but came with the associated costs of migrating data back and forth between the on-premises data center and the cloud. To achieve better cost optimization while also gaining more flexibility and scalability, 23andMe decided to migrate fully to AWS in 2021.

23andMe used the AWS Migration Acceleration Program (AWS MAP), a comprehensive and proven cloud migration program based on the experience that AWS has in migrating thousands of enterprise customers to the cloud. Using AWS MAP, 23andMe achieved a smooth migration in only 4 months. “What AWS MAP was offering us was the ability to do a fast, massive shift,” says de Leon. “Usually when you do that, it’s very expensive, but AWS MAP solved that problem.” 23andMe migrated everything out of its data center and into the cloud on AWS. One year after migrating to AWS, as the AWS MAP program ends for 23andMe, the company is achieving equal or better price performance because of the team’s diligence in adopting AWS services.

Managing scientists’ file-based home directories presented another challenge. To solve this issue, 23andMe turned to Weka, an AWS Partner. The WekaIO parallel file system is functional, cost effective, and compatible with Amazon S3, which helped 23andMe’s internal team implement changes with no disruption to the customer experience. When the migration was complete, 23andMe started taking advantage of AWS services for HPC like Amazon EC2 C5 Instances, which deliver cost-effective high performance at a low price per compute ratio for running advanced compute-intensive workloads. It chose this type of Amazon EC2 instance because it was the closest analog to its previous computing resources.

23andMe quickly discovered the benefits of having a variety of Amazon EC2 instance types available for its use. “We have the entire menu of Amazon EC2 offerings available to us, and one way to achieve efficiency is finding an optimal fit for resource use,” says Justin Graham, manager of an infrastructure engineering group at 23andMe. As of 2022, the company flexibly uses many instance types, including Amazon EC2 X2i Instances, the next generation of memory-optimized instances, which deliver improvements in performance, price performance, and cost for memory-intensive workloads. 23andMe also uses AWS Batch to rightsize jobs and match resources to appropriate instance types, which helps with price-performance optimization.
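The rightsizing described above works because AWS Batch places jobs based on the vCPU and memory they request. A generic sketch of submitting a containerized job with boto3 follows; this is not 23andMe's pipeline, and the job, queue, and job definition names are hypothetical.

```python
# A generic sketch of submitting a job to AWS Batch. Batch selects instance
# types in the compute environment based on the vCPU/memory the job requests,
# which is the rightsizing behavior described above.
# Job name, queue, and job definition are hypothetical.
import boto3

batch = boto3.client("batch")

response = batch.submit_job(
    jobName="genotype-impute-chr21",          # hypothetical
    jobQueue="hpc-compute-optimized-queue",   # hypothetical
    jobDefinition="imputation-pipeline:3",    # hypothetical
    containerOverrides={
        "resourceRequirements": [
            {"type": "VCPU", "value": "16"},
            {"type": "MEMORY", "value": "65536"},  # MiB
        ]
    },
)
print("Submitted job:", response["jobId"])
```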
Running HPC on AWS

23andMe can scale on demand to match compute capacity to actual workloads and then scale back down. “To give a sense of scale, we had a peak compute job running with over 80,000 virtual CPUs operating at once,” says de Leon. In addition, using Amazon EC2 instances has removed resource contention for 23andMe’s researchers. “Recently, we had a 3-week production workload finish 33 percent ahead of schedule. Since migrating to AWS, our ability to deliver compute resources to our researchers is now unmatched,” says Graham.

Optimizing Value

While enjoying these benefits of using HPC services on AWS, 23andMe has not had to compromise on its initial spending goals. “Our goal was to keep our costs the same but gain flexibility, capability, and value. Savings is less about the bottom line and more about what we gain for what we spend,” says de Leon. 23andMe has achieved increases in cost optimization by using a variety of AWS services, including Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud, as well as Amazon EC2. 23andMe is all-in on AWS and aims to continue pursuing price-performance optimization for its workloads.

Exploring Future Possibilities with Flexibility on AWS

23andMe could migrate its existing environment with virtually no changes and over time has incorporated more AWS services into its solution. The company is looking for further ways to optimize costs on AWS, exploring services such as AWS Graviton processors, which deliver excellent price performance for cloud workloads running on Amazon EC2. The company is finding opportunities to be cost optimal while retaining the resources it needs for on-demand computing. “We’re about 10 months past migration, and the eventual goal is to drive a faster process from idea to validation. Our researchers are faster and more efficient, and our hope is to see a big research breakthrough,” says de Leon.

Benefits of AWS
- Migrated smoothly to the cloud within 4 months
- Increased scalability, supporting a compute job running on more than 80,000 virtual CPUs
- Increased efficiency, completing a 3-week production workload 33% ahead of schedule
- Removed compute resource contention among researchers
- Optimized costs

AWS Services Used
- Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
- Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
- AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS.
- The AWS Migration Acceleration Program (AWS MAP) is a comprehensive and proven cloud migration program based on AWS’s experience migrating thousands of enterprise customers to the cloud."
36 new or updated datasets on the Registry of Open Data_ AI analysis-ready datasets and more _ AWS Public Sector Blog.txt,"AWS Public Sector Blog

36 new or updated datasets on the Registry of Open Data: AI analysis-ready datasets and more
by Erin Chu | on 13 JUL 2023 | in Analytics, Announcements, Artificial Intelligence, AWS Data Exchange, Education, Open Source, Public Sector, Research

The AWS Open Data Sponsorship Program makes high-value, cloud-optimized datasets publicly available on Amazon Web Services (AWS). AWS works with data providers to democratize access to data by making it available to the public for analysis on AWS; develop new cloud-native techniques, formats, and tools that lower the cost of working with data; and encourage the development of communities that benefit from access to shared datasets. Through this program, customers are making over 100 PB of high-value, cloud-optimized data available for public use. The full list of publicly available datasets is on the Registry of Open Data on AWS, and the datasets are now also discoverable on AWS Data Exchange.

This quarter, AWS released 36 new or updated datasets. As July 16 is Artificial Intelligence (AI) Appreciation Day, the AWS Open Data team is highlighting three unique datasets that are analysis-ready for AI. What will you build with these datasets?

Three AI analysis-ready datasets on the Registry of Open Data

NYUMets Brain Dataset from the NYU Langone Medical Center is one of the largest datasets of cranial imaging in existence, and the largest dataset of metastatic cancer, containing over 8,000 brain MRI studies, clinical data, and treatment records from cancer patients. Over 2,300 images have been annotated for metastatic tumor segmentations, making NYUMets: Brain a valuable source of segmented medical imaging. An AI model for segmentation tasks as well as a longitudinal tracking tool are available for NYUMets through MONAI. Learn more about this dataset.

RACECAR Dataset from the University of Virginia is the first open dataset for full-scale and high-speed autonomous racing. RACECAR is suitable for exploring issues of localization, object detection and tracking (LiDAR, radar, and camera), and mapping that arise at the limits of operation of an autonomous vehicle. You can get started with RACECAR with this SageMaker Studio Lab notebook.

Aurora Multi-Sensor Dataset from Aurora Operations, Inc. is a large-scale multi-sensor dataset with highly accurate localization ground truth, captured between January 2017 and February 2018 in the metropolitan area of Pittsburgh, PA, USA. The de-identified dataset contains rich metadata, such as weather and semantic segmentation, and spans all four seasons; rain, snow, overcast, and sunny days; different times of day; and a variety of traffic conditions. This data can be used to develop and evaluate large-scale, long-term approaches to autonomous vehicle localization. Aurora is applicable to many research areas, including 3D reconstruction, virtual tourism, HD map construction, and map compression.
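Many Registry of Open Data buckets allow anonymous reads, so you can explore a dataset without an AWS account. A minimal sketch with boto3 and unsigned requests follows; the bucket name and prefix are hypothetical placeholders, and each dataset's registry page documents its actual bucket and layout.

```python
# A minimal sketch of reading a public Registry of Open Data bucket with
# anonymous (unsigned) credentials. Bucket and prefix are hypothetical.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List the first objects under a dataset prefix without any AWS account.
page = s3.list_objects_v2(Bucket="example-open-dataset", Prefix="images/", MaxKeys=10)
for obj in page.get("Contents", []):
    print(obj["Key"], obj["Size"])
```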
Full list of new or updated datasets

These three datasets join 33 other new or updated datasets on the Registry of Open Data in the following categories.

Climate and weather:
- ECMWF real-time forecasts from the European Centre for Medium-Range Weather Forecasts
- NOAA Wang Sheeley Arge (WSA) Enlil from the National Oceanic and Atmospheric Administration (NOAA)
- ONS Open Data Portal from the National Electric System Operator of Brazil
- Pohang Canal Dataset: A Multimodal Maritime Dataset for Autonomous Navigation in Restricted Waters from the Mobile Robotics & Intelligence Laboratory (MORIN Lab)
- Sup3rCC from the National Renewable Energy Laboratory
- EURO-CORDEX – European component of the Coordinated Regional Downscaling Experiment from Helmholtz Centre Hereon / GERICS

Geospatial:
- Astrophysics Division Galaxy Segmentation Benchmark Dataset from the National Aeronautics and Space Administration (NASA)
- Astrophysics Division Galaxy Morphology Benchmark Dataset from NASA
- ESA WorldCover Sentinel-1 and Sentinel-2 10m Annual Composites from the European Space Agency
- Korean Meteorological Agency (KMA) GK-2A Satellite Data from the Korean Meteorological Agency
- NASA / USGS Controlled Europa DTMs from NASA
- NASA / USGS Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) Targeted DTMs from NASA
- Nighttime-Fire-Flare from the Universities Space Research Association (USRA) and NASA Black Marble
- PALSAR-2 ScanSAR Tropical Cyclone Mocha (L2.1) from the Japan Aerospace Exploration Agency (JAXA)
- PALSAR-2 ScanSAR Flooding in Rwanda (L2.1) from JAXA
- Solar Dynamics Observatory (SDO) Machine Learning Dataset from NASA

Life sciences:
- Extracellular Electrophysiology Compression Benchmark from the Allen Institute for Neural Dynamics
- Long Read Sequencing Benchmark Data from the Garvan Institute
- Genomic Characterization of Metastatic Castration Resistant Prostate Cancer from the University of Chicago
- Harvard Electroencephalography Database from the Brain Data Science Platform
- The Human Sleep Project from the Brain Data Science Platform
- Integrative Analysis of Lung Adenocarcinoma in Environment and Genetics Lung cancer Etiology (Phase 2) from the University of Chicago
- National Cancer Institute Imaging Data Commons (IDC) Collections from the Imaging Data Commons
- Indexes for Kaiju from the University of Copenhagen Bioinformatics Center
- Molecular Profiling to Predict Response to Treatment (phs001965) from the University of Chicago
- NYUMets Brain Dataset from the NYU Langone Medical Center
- SPaRCNet data: Seizures, Rhythmic and Periodic Patterns in ICU Electroencephalography from the Brain Data Science Platform
- The University of California San Francisco Brain Metastases Stereotactic Radiosurgery (UCSF-BMSR) MRI Dataset from the University of California San Francisco
- UK Biobank Linkage Disequilibrium Matrices from the Broad Institute
- VirtualFlow Ligand Libraries from Harvard Medical School

Machine learning:
- Aurora Multi-Sensor Dataset from Aurora Operations, Inc.
- RACECAR Dataset from the University of Virginia
- Exceptional Responders Initiative from Amazon
- Amazon Seller Contact Intent Sequence from Amazon
- Open Food Facts Images from Open Food Facts
- Product Comparison Dataset for Online Shopping from Amazon

What are people doing with open data?

Amazon Location Service launched Open Data Maps for Amazon Location Service, a data provider option for the Maps feature based on OpenStreetMap. Oxford Nanopore Technologies benchmarked their genomic basecalling algorithms, which decode raw signal into DNA or RNA sequences for analysis, on 20 different Amazon Elastic Compute Cloud (Amazon EC2) instances.
HuggingFace hosted a Bio x ML Hackathon that challenged teams to leverage AI tools, open data, and cloud resources to solve problems at the intersection of the life sciences and artificial intelligence.

How can you make your data available?

Looking to make your data available? The AWS Open Data Sponsorship Program covers the cost of storage for publicly available high-value, cloud-optimized datasets. We work with data providers who seek to:
- Democratize access to data by making it available for analysis on AWS
- Develop new cloud-native techniques, formats, and tools that lower the cost of working with data
- Encourage the development of communities that benefit from access to shared datasets

Learn how to propose your dataset to the AWS Open Data Sponsorship Program. Learn more about open data on AWS.

Read more about open data on AWS:
- Largest metastatic cancer dataset now available at no cost to researchers worldwide
- Creating access control mechanisms for highly distributed datasets
- 33 new or updated datasets on the Registry of Open Data for Earth Day and more
- How researchers can meet new open data policies for federally-funded research with AWS
- Accelerating and democratizing research with the AWS Cloud
- Introducing 10 minute cloud tutorials for research

Subscribe to the AWS Public Sector Blog newsletter to get the latest in AWS tools, solutions, and innovations from the public sector delivered to your inbox, or contact us. Please take a few minutes to share insights regarding your experience with the AWS Public Sector Blog in this survey, and we’ll use feedback from the survey to create more content aligned with the preferences of our readers.

Erin Chu
Erin Chu is the life sciences lead on the Amazon Web Services (AWS) open data team. Trained to bridge the gap between the clinic and the lab, Erin is a veterinarian and a molecular geneticist, and spent the last four years in the companion animal genomics space. She is dedicated to helping speed time to science through interdisciplinary collaboration, communication, and learning."
54gene _ Case Study _ AWS.txt,"54gene Equalizes Precision Medicine by Increasing Diversity in Genetics Research Using AWS

Learn how 54gene in life sciences is curating diverse datasets to unlock genetic insights in Africa and globally using AWS.

Genomics research studying global populations is crucial for learning how genomic variation impacts diseases and how data can be used to improve the well-being of all populations. Despite the diverse genetic makeup of people in Africa, the continent is vastly underrepresented in global genetic research, with less than 3 percent of genomic data coming from African populations. The mission of health technology startup 54gene is to bridge this gap and deliver precision medicine to Africa and the global population.
The company built a proprietary solution called GENIISYS on Amazon Web Services (AWS) to curate genetic, clinical, and phenotypic data from Africa and other diverse populations and generate insights that can lead to new treatments and diagnostics. Using multiple AWS services, including AWS ParallelCluster, an open-source cluster management tool that makes it simple to deploy and manage high performance computing (HPC) clusters on AWS, GENIISYS can scale to cost-effectively support massive datasets and power precision medicine for historically underserved demographics.

Opportunity | Using AWS ParallelCluster to Build a Scalable, Cost-Effective Genomics Research Solution for 54gene

Nigeria-based 54gene collaborates with local research institutions and global pharmaceutical partners to study the many ethnolinguistic groups within Nigeria, better understand the diversity present on the continent, and uncover new biological insights. Its GENIISYS solution includes a state-of-the-art biorepository that stores highly curated clinical, phenotypic, and genetic data from the African population to facilitate research for a new wave of therapeutics. “Through GENIISYS, we wanted to create a gateway between genomics insights from Africa and research in other countries,” says Ji He, senior vice president of technology at 54gene.

To effectively collect and store genomic data and connect it to phenotypic information (such as clinical and demographic data), the startup needed a flexible cloud-based solution that could scale while still optimizing costs. “When we’re performing genotyping or whole genome sequencing, we generate huge amounts of data, and we have to process it at a high rate of throughput,” says Esha Joshi, bioinformatics engineer at 54gene. “We chose AWS because of its reliability and scalability and the fact that we have to pay only for what we use. That’s important for a startup because it can be difficult to anticipate computing and storage needs.”

Solution | Analyzing Datasets as Large as 30–40 TB in a Few Days

54gene’s integrative digital solution has three major components: the clinical operations to enroll patients for collecting clinical and phenotypic data, the biobank that stores biospecimens, and the downstream genomic analysis, which uses technologies like genotyping and whole genome sequencing to generate insights. This large-scale genomic analysis needs access to robust HPC solutions to process a high throughput of data. “Our current architecture, which is exclusively on AWS, strikes a good balance between cost effectiveness and flexibility,” says Joshi. “We have varying sizes and designs of computing architecture to make our processes cost effective, and it has been really nice.” Using AWS ParallelCluster, 54gene can customize the kind of HPC that it wants to use depending on the type and size of the data coming in. The startup has one queue for handling terabytes of data with compute-optimized nodes and a separate queue for smaller tasks, like running short Python scripts. The AWS team provided support throughout the migration and design of GENIISYS. “AWS listens carefully to our questions and needs and works diligently to provide additional resources,” says He.
54gene is already seeing the benefits of AWS as it develops and scales new features of GENIISYS. “We are doing a lot of trial and error,” says Joshi. “On AWS, we can start small with novel ideas and deploy a lot of small applications, and the AWS team helps us determine which particular interface best suits us.”

To store and visualize its datasets, 54gene uses Amazon Relational Database Service (Amazon RDS), which makes it simple to set up, operate, and scale databases in the cloud. “On Amazon RDS, we’re able to store metadata from our three major components of research and query our datasets efficiently,” says Joshi. The startup also uses Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload, to power its data analytics workflows. Using different HPC configurations, 54gene can analyze datasets as large as 30–40 TB in just a few days. And even while it’s achieving a throughput of more than 5 TB per week, the startup is reducing its costs on AWS. “Another factor that made us choose AWS is that AWS has a great presence on the African continent, including the close physical proximity of its data centers to our business units there,” says He.

54gene is using its data analytics infrastructure on AWS to drive research into specific diseases. For example, the startup is working to identify what genetic factors might lead to more serious cases of sickle cell disease in Nigeria and to tailor treatments to patients based on disease severity. 54gene stores all its genomic data using Amazon Simple Storage Service (Amazon S3), object storage built to retrieve any amount of data from anywhere. “Another great aspect of working on AWS is that we can configure data storage to be cost effective,” says Joshi. The company uses Amazon S3 Lifecycle policies to automatically migrate data to Amazon S3 Glacier storage classes—which are purpose-built for data archiving—to minimize storage costs.

To conveniently access data stored in Amazon S3 for processing with its HPC clusters, the startup uses Amazon FSx for Lustre, which provides fully managed shared storage built on a popular high-performance file system. And 54gene’s computational scientists, many of whom had trained on traditional on-premises setups, adjusted easily to AWS. “What’s nice about AWS is that we are able to replicate a familiar environment for our computational scientists with minimal cloud training,” says Joshi. “AWS ParallelCluster is a great example of that.”
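One common way to wire S3 data into a Lustre file system, as described above, is to create the file system with an S3 import path. A hedged boto3 sketch follows; this is not 54gene's actual setup, and the subnet ID and bucket name are hypothetical placeholders.

```python
# A hedged sketch of creating an Amazon FSx for Lustre file system that
# imports data from an existing S3 bucket, so HPC cluster nodes can read the
# data through a POSIX file system. Subnet ID and bucket are hypothetical;
# production setups would size capacity and pick a deployment type to match
# the workload.
import boto3

fsx = boto3.client("fsx")

response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,  # GiB, the smallest Lustre increment
    SubnetIds=["subnet-0123456789abcdef0"],  # hypothetical
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://example-genomics-data",  # hypothetical bucket
    },
)
print(response["FileSystem"]["FileSystemId"])
```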
Outcome | Continuing to Increase Representation for African Genetic Data in Global Health Research

With the flexibility and cost effectiveness of the cloud, 54gene is better able to study the effects of diseases on previously underrepresented African genetic data. The startup can also seamlessly integrate its highly curated clinical, phenotypic, and genetic data within one solution and build capacity for further research initiatives focused on targeted populations in Africa or specific disease areas. “We have the flexibility to do almost anything on AWS,” says Joshi. “From running quick scripts to genotyping in a matter of hours to analyzing terabytes of data efficiently, this flexibility has been really beneficial.”

About 54gene
Based in Nigeria, 54gene is a genomics startup that works with pharmaceutical and research partners to study genetic diseases and identify treatments. It’s focused on addressing the need for diverse datasets from underrepresented African populations.

Benefits of AWS
- Analyzed 30–40 TB of data in a few days
- Reduced costs
- Achieved flexible, scalable, and reliable cloud infrastructure
- Facilitated experimentation
- Curated datasets that increase diversity in global genetic research

AWS Services Used
- AWS ParallelCluster is an open source cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters on AWS.
- Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
- Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
- Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance."
6sense Case Study.txt,"6sense Insights Inc. Improves Scalability and Accelerates Speed to Market by Migrating to Amazon EKS

6sense Insights Inc. (6sense) needed to effectively scale and manage its data pipelines so that it could better support its growth. With 6sense Revenue AI, a leading platform for predictable revenue growth, the company generates actionable insights for business-to-business sales and marketing teams. This service relies on artificial intelligence, machine learning, and big data processing, requiring 6sense to run complex workloads and process terabytes of data per day. When its open-source pipeline orchestration solution could no longer support these workloads, 6sense began exploring alternative solutions and chose to implement fully managed services from Amazon Web Services (AWS). 6sense migrated to Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service to run and scale Kubernetes applications in the cloud or on premises. Using Amazon EKS, 6sense completes workloads significantly faster while reducing management needs, improving its speed of delivery, and freeing its developers to focus on innovative solutions.

Searching for Scalable Pipeline Orchestration

Headquartered in San Francisco, California, 6sense delivers data analytics, sales insights, and other predictions so that business-to-business revenue teams can better understand their buyers and customers. In 2014, the company began using Apache Mesos, an open-source solution that manages compute clusters, to orchestrate its data pipeline frameworks. “As we grew, we encountered several limitations on Apache Mesos,” says George Liaw, director of infrastructure engineering at 6sense. “We could only offer compute resources to one framework at a time, which slowed our processes. We also experienced scaling issues.”

Searching for a more scalable solution, 6sense began to explore Kubernetes, an open-source container orchestration system, to improve its data pipelines. In 2018, the company migrated its application and API services to two Kubernetes clusters and began using kOps, a set of tools for installing, operating, and deleting Kubernetes clusters in the cloud. Although a containerized architecture improved agility for 6sense, kOps was not fully managed, which required the 6sense team to perform significant day-to-day operations and management. “Using kOps, we experienced way too much maintenance overhead,” says Liaw. “We realized that if we could reduce these manual tasks, our team could focus its time on serving the customer instead of managing Kubernetes.”
Improving Speed, Agility, and Innovation Using Amazon EKS

In 2019, 6sense chose to invest in AWS Enterprise Support, which provides concierge-like service to support companies in achieving outcomes and finding success in the cloud. The AWS Enterprise Support team helped the company realize that it could alleviate the issues that it was facing by migrating to Amazon EKS, which is fully managed. “For 6sense, Amazon EKS was almost a drop-in replacement that magically worked better,” says Liaw.

In September 2021, 6sense began migrating its remaining workloads from legacy solutions running on Apache Mesos and kOps to Amazon EKS. The company migrated the majority of its application and API service workloads to Amazon EKS within the first week and developed a stable and usable pipeline orchestration solution by the end of 2021. “Once we started running Amazon EKS clusters, we unlocked valuable capabilities,” says Liaw. “We could test clusters with more flexible configurations without worrying about their stability.” By December 2021, the company was running 7–8 clusters on Amazon EKS and had completed 80 percent of its migration.

Using Amazon EKS, 6sense has seen a 400 percent improvement in workload throughput, giving it the ability to process 1–2 TB of data per day and growing. With this speed, 6sense can support highly complex workloads and deliver valuable insights to its customers 65 percent faster.

6sense has also vastly accelerated its development speeds by migrating to AWS. On Apache Mesos, the company was limited in its ability to build, test, and deploy new data pipelines due to limitations on container throughput. On Amazon EKS, 6sense can run up to 300 percent more containers per hour. It can also run the same number of Docker containers on Amazon EKS in approximately 50 percent of the time that it took under its previous solution. By achieving this level of speed and scalability, 6sense has improved developer productivity and accelerated its speed to market for new applications and features.

Because Amazon EKS is a fully managed Kubernetes service, 6sense no longer needs to focus on managing or operating its Kubernetes clusters. Using this time savings, its team can dedicate time to improving the customer experience. “On AWS, we are able to increase developer velocity, reduce unnecessary red tape, and serve our customers as best as we can,” says Liaw. “We can push out new features, insights, and products to them as quickly as possible. The faster we can innovate to serve our customers, the better the experience is for everybody—including our team.”

6sense’s AWS-powered solution is not only extremely fast but also highly scalable. “We can scale a cluster on Amazon EKS almost infinitely to run as many things in parallel as possible,” says Premal Shah, senior vice president of engineering and infrastructure at 6sense. “We no longer need to worry about how much we can run per hour.” The company also relies on Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, which are used to run large workloads at a significant cost savings and accelerate workloads by running parallel tasks. By using Amazon EC2 Spot Instances, 6sense can provision the capacity it needs to support its future expansion while optimizing for costs.
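One common way to combine Amazon EKS with EC2 Spot Instances, as described above, is a Spot-backed managed node group. A hedged boto3 sketch follows; this is illustrative, not 6sense's actual configuration, and the cluster name, subnets, and role ARN are hypothetical placeholders.

```python
# A hedged sketch of adding a Spot-backed managed node group to an existing
# EKS cluster. Diversifying instance types across several similarly sized
# pools improves the odds of obtaining Spot capacity.
import boto3

eks = boto3.client("eks")

eks.create_nodegroup(
    clusterName="data-pipeline",                       # hypothetical
    nodegroupName="spot-workers",
    capacityType="SPOT",                               # draw from Spot capacity
    instanceTypes=["m5.2xlarge", "m5a.2xlarge", "m4.2xlarge"],
    scalingConfig={"minSize": 1, "maxSize": 50, "desiredSize": 5},
    subnets=["subnet-0123456789abcdef0"],              # hypothetical
    nodeRole="arn:aws:iam::111122223333:role/eksNodeRole",  # hypothetical
)
```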
Continuing to Enhance Scalability on AWS

By migrating to fully managed Amazon EKS clusters, 6sense can effectively scale and manage its data pipeline, which has accelerated its speed to deliver insights to its customers. The company plans to further improve its scaling capabilities using Karpenter, an open-source Kubernetes cluster autoscaler built alongside AWS. On AWS, 6sense freed its employees to focus on innovation, and the company will continue to use AWS services to develop new, value-generating solutions. “At 6sense, we are able to move quickly and innovate on AWS without being held back,” says Liaw.

About 6sense Insights Inc.
6sense Insights Inc.’s Revenue AI reinvents the way companies create, manage, and convert pipelines to revenue by capturing anonymous buying signals, targeting the right accounts, and recommending channels and messages to boost performance.

Benefits of AWS
- Improved workload throughput by 400%
- Processes 1–2 TB of data per day
- Delivers insights to customers 65% faster
- Improved developer productivity
- Improved speed to market for new applications and features
- Facilitates a fully managed solution
- Frees employees’ time to focus on high-value tasks and innovation

AWS Services Used
- Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service to run and scale Kubernetes applications in the cloud or on-premises.
- Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
- Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices.
- With AWS Enterprise Support, you get 24x7 technical support to automatically manage the health of your environment, consultative architectural guidance delivered in the context of your applications and use cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive/preventative programs and AWS subject matter experts."
Accelerate Time to Business Value Using Amazon SageMaker at Scale with NatWest Group _ Case Study _ AWS.txt,"Accelerate Time to Business Value Using Amazon SageMaker at Scale with NatWest Group

Learn how NatWest Group used Amazon SageMaker to create personalized customer journeys with secure machine learning.

To remain competitive in the fast-paced financial services industry, NatWest Group is under pressure to deliver increasingly personalized and premier services to its 19 million customers. The bank has built a variety of workflows to explore its data and build machine learning (ML) solutions that provide a bespoke experience based on customer demands. However, its legacy processes were slow and inconsistent, and NatWest Group wanted to accelerate its time to business value with ML. The bank turned to Amazon Web Services (AWS) and adopted Amazon SageMaker, a service that data scientists and engineers use to build, train, and deploy ML models for virtually any use case with fully managed infrastructure, tools, and workflows. By centralizing its ML processes on AWS, NatWest Group has reduced the time that it takes to launch new products and services by several months and has embraced a more agile culture among its data science teams.

Opportunity | Using Amazon SageMaker to Reduce Time to Value for NatWest Group

NatWest Group is one of the largest banks in the United Kingdom. Formally established in 1968, the company has origins dating back to 1727. NatWest Group seeks to use its rich legacy data to innovate and personalize its personal, business, and corporate banking and insurance services. To deliver these solutions at a faster pace, the bank needed a standardized ML approach. “We didn’t have a consistent way to access our data, generate insights, or build solutions,” says Andy McMahon, head of MLOps for data innovation at NatWest Group. “Our customers felt these challenges because it took a much longer time to derive value than we wanted.”

To deploy personalized solutions at an enterprise scale, NatWest Group chose to adopt Amazon SageMaker as its core ML technology. The bank also engaged AWS Professional Services, a global team of experts that can help companies realize their desired business outcomes when using AWS, to prepare for the project. During a series of workshops, NatWest Group and AWS Professional Services worked together to identify areas of improvement within the company’s ML landscape and created a strategy for development. After crafting a comprehensive plan, the teams began working on the project in July 2021.

Solution | Achieving an Agile DevOps Culture Using AWS ML Solutions

To equip its data teams with the skills that they need to use these tools, NatWest Group has encouraged its employees to embark on cloud learning journeys. It has hosted over 720 AWS Training courses for its data science teams to learn new skills, such as applying best practices for DevOps and building a data lake on AWS. Additionally, several employees obtained AWS Certifications, industry-recognized credentials that validate technical skills and cloud expertise. By offering these opportunities, NatWest Group has equipped its data science teams to build powerful, predictive ML models on AWS at a faster pace.

In April 2022, NatWest Group launched an enterprise-wide, centralized ML workflow powered by Amazon SageMaker. And because the bank already had a presence on Amazon Simple Storage Service (Amazon S3)—an object storage service offering industry-leading scalability, data availability, security, and performance—this was the service of choice for its data lake migration. With simpler access to data and powerful ML tools, its data science teams built over 30 ML use cases on Amazon SageMaker in the first 4 months after launch. These use cases include a solution that tailors marketing campaigns to specific customer segments and an application that automates simple fraud detection tasks so that investigators can focus on difficult, higher-value cases.
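The kind of workflow SageMaker standardizes can be seen in a minimal training-job sketch with the SageMaker Python SDK. This is illustrative, not NatWest Group's code; the container image URI, IAM role, and S3 paths are hypothetical placeholders.

```python
# A minimal sketch of launching a managed SageMaker training job.
# Image URI, role ARN, and S3 paths are hypothetical.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="111122223333.dkr.ecr.eu-west-2.amazonaws.com/fraud-model:latest",  # hypothetical
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",                 # hypothetical
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-ml-artifacts/fraud-model/",                          # hypothetical
    sagemaker_session=session,
)

# Train against a dataset already curated in the S3 data lake.
estimator.fit({"train": "s3://example-data-lake/fraud/train/"})
```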
NatWest Group employees now have fast and simple access to the data and tools that they need to build and train ML models. “We modernized our technology stack, simplified data access, and standardized our governance and operational procedures in a way that maintains the right risk behaviors,” says McMahon. “Using Amazon SageMaker, we can go from an idea on a whiteboard to a working ML solution in production in a few months versus 1 year or more.” NatWest Group launched its first offerings in November 2022, reducing its time to value from 12–18 months to only 7.

NatWest Group has adopted a number of features on Amazon SageMaker to streamline its ML workflows with the security and governance required of a major financial institution. In particular, NatWest Group adopted Amazon SageMaker Studio, a single web-based visual interface where it can perform all ML development steps. Because Amazon SageMaker Studio is simple to use and configure, new users can quickly set it up and start building ML models sooner.

To accelerate its employees’ workflows, NatWest Group uses AWS Service Catalog, which organizations use to create, organize, and govern infrastructure-as-code templates. Before the bank adopted this solution, data scientists or engineers would need to contact a centralized team if they wanted to provision an ML environment, and it would take 2–4 weeks before the infrastructure was ready to use. Now, NatWest Group can launch a template from AWS Service Catalog and spin up an ML environment in just a few hours. “If you want to launch an environment for data science work, it could take 2–4 weeks. On AWS, we can spin up that environment within a few hours. At most, it takes 1 day,” says Greig Cowan, head of data science for data innovation at NatWest Group. Its data teams can begin working on projects much sooner and have more time to focus on building powerful ML models. This self-service environment not only empowers data science teams to derive business value faster but also encourages consistency. “As a large organization, we want to make sure anything that we build is scalable and consistent,” says McMahon. “On AWS, we have standardized our approach to data using a consistent language and framework, which can be rolled out across different use cases.”
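The provisioning call behind that self-service flow can be as small as the hedged boto3 sketch below; the product ID, artifact ID, and parameters are hypothetical placeholders, not NatWest Group's actual catalog.

```python
# A hedged sketch of the self-service pattern described above: provisioning an
# approved ML-environment template from AWS Service Catalog so a data
# scientist gets a governed environment in hours instead of weeks.
import boto3

sc = boto3.client("servicecatalog")

sc.provision_product(
    ProductId="prod-abcd1234efgh",                 # hypothetical approved template
    ProvisioningArtifactId="pa-ijkl5678mnop",      # hypothetical template version
    ProvisionedProductName="ml-env-fraud-team",
    ProvisioningParameters=[
        {"Key": "TeamName", "Value": "fraud-analytics"},  # hypothetical parameter
    ],
)
```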
Outcome | Deploying Innovative Services at Scale Using Amazon SageMaker

On AWS, NatWest Group can quickly launch personalized products and services to meet customer demands, boost satisfaction, and anticipate future needs. The bank’s data science teams are empowered to deliver significant business value with streamlined workflows and a self-service environment. In fact, NatWest Group is on track to double its number of use cases to 60 and achieve a 3-month time to value. “There’s so much that we’ve gained from using our data intelligently,” says Cowan. “On AWS, we have opened up many new avenues and opportunities for us to detect fraud, tailor our marketing, and understand our customers and their needs.”

The bank will continue to explore and create new, innovative solutions on AWS. For example, NatWest Group will soon introduce an ML offering that automatically sets prices for its products, improving the intelligence and efficiency of the pricing process.

About NatWest Group
NatWest Group is a British banking company that offers a wide range of services for personal, business, and corporate customers. It serves 19 million customers throughout the United Kingdom and Ireland.

Benefits of AWS
- 30+ ML use cases built in 4 months
- Reduced time to value from 12–18 months to 7
- Reduced time to provision environments for data science teams from 2–4 weeks to hours
- 720+ AWS courses completed
- Promotes a self-service environment

AWS Services Used
- Amazon SageMaker is built on Amazon’s two decades of experience developing real-world ML applications, including product recommendations, personalization, intelligent shopping, robotics, and voice-assisted devices.
- Amazon SageMaker Studio provides a single, web-based visual interface where you can perform all ML development steps, improving data science team productivity by up to 10x.
- AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS.
- Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

To learn more, visit aws.amazon.com/financial-services/machine-learning/."
Accelerate Your Analytics Journey on AWS with DXC Analytics and AI Platform _ AWS Partner Network (APN) Blog.txt,"AWS Partner Network (APN) Blog

Accelerate Your Analytics Journey on AWS with DXC Analytics and AI Platform
by Dhiraj Thakur and Murali Gowda | on 27 JUN 2023 | in Analytics, Artificial Intelligence, AWS Partner Network, Customer Solutions, Intermediate (200), Thought Leadership

By Dhiraj Thakur, Solutions Architect – AWS
By Murali Gowda, Advisor Architect – DXC Technology

Analytics are an essential tool that helps companies accelerate their business outcomes, but the current approach to analytics taken by most companies limits their effectiveness. Rapid changes in business intelligence and analytics solutions mean companies are continually over-investing in solutions that rapidly age.
They’re spending more time reevaluating, redesigning, and redeploying technologies than applying them to the business. They’re also making new commitments to expand their IT footprint at a time when most want to reduce their total estate. Analytics can unlock new value from data, but customers want to make faster decisions and gain greater competitive advantage. To benefit from the full power of analytics, customers need a solution they can deploy quickly and use to improve the effectiveness of their existing business intelligence over time—and avoid investing in tools that become obsolete before they’re deployed.

With DXC Technology’s Analytics and AI Platform (AAIP), an analytics platform as a service built on Amazon Web Services (AWS), you can develop and deploy new analytics applications in weeks. In this post, we walk through the features and benefits of AAIP, which helps you look further and deeper, gaining business insights from data you could not previously access or manage. DXC Technology is an AWS Premier Tier Services Partner and Managed Service Provider (MSP) that understands the complexities of migrating workloads to AWS in large-scale environments and the skills needed for success.

Platform Overview

Historically, several challenges held customers back from adopting advanced analytics:
- Siloed data and operational data stores hindered data access and discovery, thereby limiting insights generation.
- Data duplicated across multiple systems led to data quality issues.
- Difficulty managing data ingestion, data integration, and data quality from a single, centralized location.
- Difficulty gaining approval on enterprise data models and entity relationship models from multiple business units.
- Regulatory and compliance issues.
- Complex upfront costs and heavy development effort, compounded by skills gaps.
- Limitations of on-premises options.
- Administrative overhead.

DXC Analytics and AI Platform is an analytics solution that rapidly improves the effectiveness and impact of your existing business intelligence landscape. AAIP addresses these challenges and eliminates the need to make continuous investments that expand the IT footprint and increase maintenance and upgrade costs.

Figure 1 – DXC Analytics and AI Platform (AAIP).

The bottom layer of the graphic above is DXC’s managed service offering, where DXC manages the platform. The next layer shows DXC’s flexible deployment options, including hybrid cloud, on-premises, and AWS deployments. Bundled with DXC’s managed service, AAIP takes the guesswork and complexity out of analytics with a fully managed, industrialized solution that incorporates the latest technologies. DXC follows AWS best practices for policies, architecture, and operational processes built to satisfy the requirements of enterprise-grade security to protect data and IT infrastructure hosted on AWS.

DXC provides the core industrialized platform complemented by AWS products and platform extensions from a rich services catalog, and custom options are also available. Customers can take advantage of rapid advances in artificial intelligence (AI), automation, and core analytics technologies offered by AWS. DXC’s solution accelerators, design patterns, and reference architecture speed up the implementation, allowing you to quickly access the right data and develop solutions that target the most critical needs. Using AAIP, customers can develop and deploy analytics apps that are more user-friendly and self-service oriented, on a pay-as-you-go model.
Solution Features and Benefits

AAIP is a hardened, software-defined architecture that combines standard security and compliance controls with best-of-breed tooling to provide platform as a service (PaaS). The following diagram shows the benefits offered by AAIP as a service.

Figure 2 – AAIP solution features and benefits.

There are many benefits to AAIP, including:
- Scale: A platform that scales as you grow. Seamlessly works with on-premises or cloud vendors, with multi- and hybrid-cloud deployment options.
- Support and maintenance: Leverages a pre-built monitoring and infrastructure configuration.
- Security: The enterprise-grade platform is built with high standards in security, including protection against the most frequently occurring infrastructure (layer 3 and 4) attacks, such as distributed denial of service (DDoS) and reflection attacks. The platform is HITRUST certified and uses AWS Shield, a managed service that protects applications running on AWS against DDoS attacks.
- Patching and scanning: Managed services functions include analytics workloads, service management, data backup/recovery, software patches/upgrades, continuous vulnerability management, and incident management. Operating system and security patches are reviewed and applied periodically. New instances are scanned prior to implementation, and anti-virus scanning is implemented.
- Data visualization tools: Robust data visualization tools and algorithms for advanced analytics and ML.
- Logging and monitoring: Provisioned resource tracking for continuous monitoring of account-related activity across AWS infrastructure.
- Standard and selectable AWS and third-party tooling: Preconfigured ServiceNow for incident management and simplified workload monitoring. When an incident occurs, Amazon Simple Notification Service (Amazon SNS) notifies users and triggers ServiceNow incidents (see the sketch at the end of this section).
- Data pipelines: Batch, event-driven, and API-driven data pipeline and workflow engines.

In the following diagram, you can see how AAIP features support end-to-end cloud analytics adoption.

Figure 3 – AAIP offering overview.

The black box in Figure 3 shows DXC’s offerings in the data analytics platform, including decades of extensive industry experience, enterprise-grade security and platform, and accelerators. The grey box shows DXC’s best-practice guidance for rapidly building the platform for customers’ analytics needs. The purple box shows the benefits to customers. AAIP provides distinct advantages to customers, including:
- Accelerated time to business value: DXC solution accelerators offer a T-shirt-sizing-based platform, ingestion of the right data, and rapid execution of targeted business use cases.
- End-to-end managed services: DXC’s managed services leverage a deep pool of technical, business, and industry experts with field-tested methodologies, processes, and tools delivered per an agreed service-level agreement (SLA). This includes monitoring, incident management, centralized logging, endpoint security, cloud security posture management, compliance, scanning, and threat detection.
- Solution accelerators: DXC offers accelerators such as reference architectures, design patterns, deployment automation, blueprints, and runbooks that cover the initial setup, onboarding, and ongoing run with adherence to SLAs.
- Full-service suite: A full set of analytics services to assist in achieving analytics insight goals, supporting delivery of advanced analytics (AI/ML, natural language processing) and actionable insights to business stakeholders.
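The SNS-based notification step referenced in the tooling item above can be sketched in a few lines of boto3. This is a hedged illustration, not DXC's implementation; the topic ARN and message fields are hypothetical placeholders, and subscribers to the topic could include a ServiceNow integration endpoint.

```python
# A minimal sketch of publishing an incident notification to an Amazon SNS
# topic. Topic ARN and message content are hypothetical.
import json
import boto3

sns = boto3.client("sns")

sns.publish(
    TopicArn="arn:aws:sns:us-east-1:111122223333:aaip-incidents",  # hypothetical
    Subject="AAIP platform incident",
    Message=json.dumps({
        "severity": "high",
        "component": "data-pipeline",
        "detail": "Nightly batch ingestion job failed",
    }),
)
```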
Conclusion

In this post, you learned about the features and benefits of using DXC Technology’s Analytics and AI Platform (AAIP) on AWS. In an environment of competitive pressure emerging from AI and analytics, AAIP enables companies to unleash the potential of data in real-world, practical applications. AAIP is a proven analytics platform built from AWS-native services that enables users to scale their business seamlessly and significantly reduce go-to-market time. DXC offers standardized services to advise and coach people, change organizational structures, and implement and run analytics platforms at scale.

DXC Technology – AWS Partner Spotlight
DXC Technology is an AWS Premier Tier Services Partner that understands the complexities of migrating workloads to AWS in large-scale environments, and the skills needed for success.
Contact Partner | Partner Overview | AWS Marketplace | Case Studies"
Accelerating customer onboarding using Amazon Connect _ NCS Case Study _ AWS.txt,"NCS Accelerates Customer Onboarding by Moving its Contact Center to Amazon Connect

NCS Group (NCS) is a multinational information technology company that serves governments and enterprises across Asia Pacific. To improve agility and onboard customers faster, NCS migrated its on-premises call center to Amazon Connect. The group is using Amazon Connect as an omnichannel call center solution, including Contact Lens for Amazon Connect to perform call analytics. Using Amazon Web Services (AWS), NCS onboards new customers twice as fast, has reduced operations costs, and gains the agility to innovate new features with native artificial intelligence (AI) and machine learning (ML) capabilities.

Opportunity | Transforming NCS Service Desk to be More Agile

Since 1981, NCS has been providing technology solutions and consulting services to government agencies and enterprises across the Asia Pacific region. The group employs 12,000 people, many of them working with the NCS Service Desk. “Through NCS Service Desk, we support our customers’ application, infrastructure, and end-user desktop needs,” explains Jessica Cheung, practice lead for EUC and Service Desk at NCS Group.

As part of an ongoing digital transformation, NCS sought to onboard new Service Desk customers faster by moving away from the solution’s on-premises IT environment. “The deployment time for new customers could take eight weeks because of software implementation and hardware procurement, and that was too long. We wanted technology that was agile, modular, cost effective, and easy to scale as we grew,” says Sivabalan Murugaya, lead consultant for EUC and Service Desk at NCS Group. On-demand scaling was a key requirement, as Service Desk call volumes are highly dynamic; from one day to the next, the group might need 100 additional service center agents.

NCS Service Desk serves healthcare organizations and local governments, making data sovereignty another critical consideration for a new Service Desk IT environment. NCS was also looking to implement technology that would facilitate efficient innovation with native AI capabilities.

NCS, an AWS Partner, had been using AWS services to support various applications and IT environments for several years. The NCS Service Desk team wanted to expand its use of AWS by migrating to Amazon Connect, a pay-as-you-go contact center offering with near-unlimited scalability. “Amazon Connect met all our requirements, and we knew it would allow us to add innovative features on top of it in the future to meet our customers’ needs,” Cheung says.
"As a result, our team is spending more time exploring new features and innovations to serve our customers."

Although NCS initially planned for the migration to take six months, the company completed it in just three months. "Because of the AWS integration and overall efficiency of Amazon Connect, we migrated 40 projects to Amazon Connect quickly and easily," elaborates Murugaya.

Opportunity | Transforming NCS Service Desk to be More Agile

NCS Group, a subsidiary of Singtel Group, is a leading IT consulting firm that partners with governments and enterprises in the Asia Pacific region to advance communities through technology. Established in 1981, it has 12,000 employees across the region, many of whom work with the NCS Service Desk. "Through NCS Service Desk, we support our customers' application, infrastructure, and end-user desktop needs," explains Cheung.

As part of an ongoing digital transformation, NCS sought to onboard new Service Desk customers faster by moving away from the solution's on-premises IT environment. "The deployment time for new customers could take eight weeks because of software implementation and hardware procurement, and that was too long. We wanted technology that was agile, modular, cost effective, and easy to scale as we grew," says Murugaya. On-demand scaling was a key requirement, as Service Desk call volumes are highly dynamic; from one day to the next the group might need 100 additional service center agents.

NCS Service Desk serves healthcare organizations and local governments, making data sovereignty another critical consideration for a new Service Desk IT environment. NCS was also looking to implement technology that would facilitate efficient innovation with native AI capabilities.

Additionally, with the integration between Amazon Connect and the NCS knowledge base system, service desk agents can quickly search different databases for information. "We now have a consistent feed of accurate information to relay to our customers," adds Murugaya.

Contact Lens for Amazon Connect, a feature of Amazon Connect, provides a set of conversational analytics and quality management capabilities, powered by machine learning, that helps you understand and classify the sentiment, trends, and compliance of your conversations.

Amazon Comprehend is a natural-language processing (NLP) service that uses machine learning to uncover valuable insights and connections in text, helping businesses simplify document processing, classify documents, redact personally identifying information, and more. NCS is also evaluating Amazon Comprehend to derive new insights from text within its knowledge base, along the lines of the sketch below.
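The case study does not show NCS's evaluation code, but as a minimal, hedged sketch, this is what querying Amazon Comprehend for sentiment and key phrases over knowledge-base or ticket text can look like with boto3; the function names and sample text are hypothetical.

```python
import boto3

comprehend = boto3.client("comprehend")

def sentiment_for_text(text: str) -> dict:
    """Classify the overall sentiment of a knowledge-base article or
    ticket comment (POSITIVE, NEGATIVE, NEUTRAL, or MIXED)."""
    result = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    return {"sentiment": result["Sentiment"], "scores": result["SentimentScore"]}

def key_phrases(text: str) -> list[str]:
    """Pull out key phrases, e.g. to tag knowledge-base entries for search."""
    result = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
    return [p["Text"] for p in result["KeyPhrases"]]

if __name__ == "__main__":
    sample = "The agent resolved my VPN issue quickly and was very helpful."
    print(sentiment_for_text(sample))
    print(key_phrases(sample))
```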
Cheung concludes, "We are confident that with Amazon Connect and other AWS services, we can keep providing a better contact center solution for our global customers."

NCS migrated its on-premises Service Desk solution to Amazon Connect to halve onboarding time, reduce operations costs, and improve customer communications with new technologies such as artificial intelligence and machine learning.

Solution | Saving Time and Operations Cost with an Omnichannel Solution

The group is using Amazon Connect as an omnichannel call center solution, including Contact Lens for Amazon Connect to perform call analytics. Using Amazon Web Services (AWS), NCS onboards new customers twice as fast, has reduced operations costs, and gains the agility to innovate new features with native artificial intelligence (AI) and machine learning (ML) capabilities.

Taking advantage of Amazon Connect, NCS is delivering an omnichannel solution that integrates voice, chat, email, and AI to improve its overall customer experience. For example, the group typically uses in-house AI to handle end users' emails within a minute. However, responses can take longer when customers present more complex issues. Using Amazon Connect, service desk agents receive the complex emails immediately and can provide a timely response.

Onboarding new customers to Amazon Connect is likewise quicker and easier. Instead of six to eight weeks, onboarding now takes just three weeks. The group can scale its Service Desk solution up or down on demand to support variable, volatile workloads and has reduced system operations costs by 30 percent. By leveraging various data centers within the AWS Asia Pacific Region, it also complies with customers' stringent data sovereignty and data residency requirements.

About NCS Group

NCS Group (NCS) is a multinational information technology company that serves governments and enterprises across Asia Pacific. To improve agility and onboard customers faster, NCS migrated its on-premises call center to Amazon Connect."

Accelerating Migration at Scale Using AWS Application Migration Service with 3M Company _ Case Study _ AWS.txt,"3M Company is a manufacturing company that uses science to improve lives and solve some of the world's toughest challenges. 3M has corporate operations in 70 countries and sales in over 200.

SAP on AWS: Get more flexibility and value out of your SAP investments with the world's most secure, reliable, and extensive cloud infrastructure; more than 200 AWS services to innovate with; and purpose-built SAP automation tooling to reduce risk and simplify operations.
Global manufacturer 3M Company migrated 2,200 applications to AWS in 24 months with minimal downtime, improving its scalability and resiliency and optimizing costs to save millions of dollars.

AWS Professional Services offerings help you achieve specific outcomes related to enterprise cloud adoption. Each offering delivers a set of activities, best practices, and documentation reflecting AWS experience supporting hundreds of customers in their journey to the AWS Cloud.

Opportunity | Working alongside AWS Professional Services to Get to Migration at Scale for 3M Company

The migration at scale moved at significant speed. At one point, the team moved 500 applications in around 12 hours. Perhaps even more impressively, 3M's largest and most critical workload—its enterprise resource planning solution, which included hundreds of terabytes of data and hundreds of applications—was cut over in under 20 hours. That solution was migrated to SAP on AWS, which offers proven approaches backed by expert experience supporting SAP customers in the cloud. "The speed and consistency in delivering our workloads to the cloud was truly a benefit of 3M working alongside AWS in our migration at scale," says Kyle Hammer, director of cloud transformation at 3M. "When we looked at the challenge that was presented to us—30 months or fewer to migrate nearly all our enterprise workloads from our aging data center to the cloud—the combined effort between 3M, AWS Professional Services, and other AWS engineering teams made that possible. We were able to hit our milestones and migrate our workloads; we reduced risks and, in many cases, introduced better capabilities using AWS, which provided the scalability and flexibility and resiliency that we didn't have in the data center."

3M is a global manufacturing company, producing products from adhesives to medical supplies to industrial abrasives, all with the mission to use science to improve lives and solve tough customer challenges. With corporate operations in 70 countries and sales in over 200, 3M needed greater scalability than was available using its on-premises data centers. There were long lead times for procuring and deploying hardware, making it difficult for 3M to meet the demands of existing workloads and slowing down new projects. 3M required greater stability and sustainability, neither of which the aging data center could provide.
Solution | Migrating 2,200 Applications in 24 Months Using AWS Application Migration Service

To perform the migration, 3M used tools such as AWS Application Migration Service, which minimizes time-intensive, error-prone manual processes by automating the conversion of source servers to run natively on AWS; it also simplifies application modernization with built-in and custom optimization options. 3M also used AWS DataSync, a secure, online service that automates and accelerates moving data between on-premises and AWS storage services. Using these tools, 3M could replicate its workloads from on premises to AWS with minimal changes. Some workloads required more creative, flexible workarounds, and using AWS tools, 3M could address those challenges as they arose. "We were able to maintain the pace that we needed even with those diverse workloads across many different systems," says Hammer. After each wave of the migration, the company also took time to thoroughly and thoughtfully evaluate how the migration was going. "We captured data in each wave, and that data would help remediate challenges in subsequent migrations," says Hammer. "That process was helpful for us to mitigate risk and improve the delivery." A sketch of what wave-style cutover automation can look like with the AWS Application Migration Service API follows below.

Global manufacturer 3M Company (3M) needed a technology solution more flexible and scalable than its data centers. Not only were the data centers aging, but it was difficult to obtain new hardware when 3M needed to increase its capacity quickly. 3M began looking for a cloud-hosting solution to run its applications, including 11 different enterprise resource planning environments. 3M Enterprise IT selected Amazon Web Services (AWS) as its preferred cloud services provider and used AWS tools and expertise to migrate thousands of servers in 24 months. Now on AWS, 3M has increased its scalability and resiliency, and it has begun using automation to streamline processes such as server deployment and rightsizing.

Outcome | Developing Modern, Cloud-First Applications

Now that 3M has completed its migration at scale, the company is delivering new applications with a cloud-first, serverless focus. 3M is planning to move its databases into AWS-native database services, such as Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. 3M is automating server builds in the cloud using the AWS interface; users within 3M can now build and deploy resources on AWS in minutes, compared to weeks or even months on premises. 3M is also using automation to correctly size compute instances for workloads and to schedule compute only when needed. "On AWS, we no longer need to run many of our systems 24 hours a day, like we used to do in our data center," says Hammer. "That's resulted in millions of dollars in compute savings from what we initially migrated to the cloud." 3M is also optimizing its storage and backups, saving hundreds of thousands of dollars in its storage rightsizing efforts alone.

3M kicked off its 3M Cloud Transformation Program in 2020 to complete a migration at scale to AWS. "The promise of the cloud—and what we achieved after we migrated to AWS—was the ability to flexibly scale and deploy with a very short lead time," says Hammer. To complete its migration at scale, 3M began working alongside AWS Professional Services, a global team of experts that can help organizations realize desired business outcomes using AWS, to plan a migration.
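The case study does not show 3M's tooling, so the following is only a minimal sketch of how a migration wave can be driven through the AWS Application Migration Service (MGN) API with boto3. It assumes source servers already have the replication agent installed and are replicating; the server IDs, wave composition, and readiness check are hypothetical.

```python
import boto3

mgn = boto3.client("mgn")

# Hypothetical wave of source server IDs to cut over together.
WAVE = ["s-1234567890abcdef0", "s-0fedcba0987654321"]

def servers_ready_for_cutover() -> set[str]:
    """Collect source servers whose data replication has caught up."""
    ready = set()
    paginator = mgn.get_paginator("describe_source_servers")
    for page in paginator.paginate(filters={}):
        for server in page["items"]:
            state = server.get("dataReplicationInfo", {}).get("dataReplicationState")
            if state == "CONTINUOUS":  # fully synced, safe to cut over
                ready.add(server["sourceServerID"])
    return ready

def cut_over_wave(wave: list[str]) -> None:
    """Launch cutover instances for every synced server in the wave."""
    pending = [s for s in wave if s in servers_ready_for_cutover()]
    if pending:
        mgn.start_cutover(sourceServerIDs=pending)

if __name__ == "__main__":
    cut_over_wave(WAVE)
```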
"Working alongside AWS Professional Services went very well," says Hammer. "This migration would not have been successful in the time that we had allotted without the strong collaboration from AWS and AWS Professional Services."

The 3M Cloud Transformation Program began with 8 months of designing and planning, followed by 24 months of migration at scale. 3M completed the transformation program with minimal downtime in 24 months and 51 waves, delivering 2,200 existing enterprise applications to AWS in addition to hundreds of other new instances and applications that were in development in that time frame. "We worked alongside AWS Professional Services to develop a solid plan that had the appropriate governance and controls in place so that we could review, flex, build, and scale to meet the migration needs," says Hammer. "Through that methodology, we could adjust the technical processes and react quickly to keep the program on track and continue to deliver our migration at scale." The end state of the migration included over 6,200 instances on Amazon Elastic Compute Cloud (Amazon EC2)—a service that provides secure and resizable compute capacity for virtually any workload—and petabytes of data migrated to other AWS services.

"3M is driving to increase our presence with digital products and enterprise. We're continuing to develop products that are supporting and solving challenges for our customers, and those will be developed in the cloud on AWS," says Hammer.

Benefits: 2,200 applications and thousands of servers migrated in 24 months; 500 applications cut over in 12 hours; improved scalability, flexibility, and resiliency; millions of dollars saved by cost-optimizing compute; resource deployment time reduced from weeks to minutes."

Accelerating Time to Market Using AWS and AWS Partner AccelByte _ Omeda Studios Case Study _ AWS.txt,"Omeda Studios was founded in 2020 with the mission to build community-driven games. Omeda's founders began the Predecessor project in 2018, seeking to rebuild a defunct multiplayer online battle arena game they had enjoyed and make it available for PC and console. The studio had built a backend but found the architecture was not designed to scale with the expected numbers of players. The company knew it would need another solution. "We needed a reliable, resilient, and scalable backend that would handle hundreds of thousands of players," says Tom Miles, vice president of engineering at Omeda.

Outcome | Launching Predecessor for PC and Console

In addition to AccelByte offering the services and features that the studio needed, Omeda also received great customer support from AccelByte. "The ease of integration with AccelByte was much simpler than anything else we tried," says Miles. "Instead of struggling to integrate with an unfamiliar backend, the AccelByte team implemented it for us." In April 2022, the studio ran a playtest—the third playtest for the game, and the first using AccelByte's backend. Over 68,000 players logged in to play the game during the test weekend, playing 11 million total minutes.
Omeda received overwhelmingly positive feedback from the test on social media, including positive feedback about the latency of the game. "There was no downtime for the infrastructure during the playtest," says Steven Meilleur, founder and chief technology officer at Omeda. "It went off without a hitch, and we were able to accommodate all the players that wanted to gain access. It was impressive to see how AccelByte's solutions on AWS held up with that kind of load."

Opportunity | Building a Reliable Backend for Predecessor

Omeda researched the options and found AccelByte, which offered game solutions that fit most closely with the experience Omeda wanted to offer. Using AWS, AccelByte provides account services; cloud game storage to track and save player progression and configurations; social services for players to make friends and establish groups; dedicated server fleet management services; monetization services; and tools such as stats, leaderboards, and achievements to boost player engagement. AccelByte has been an AWS Partner since 2019. "We wanted to serve our customers better by investing in running our technology on AWS as efficiently and reliably as possible," says Train Chiou, vice president of customer success at AccelByte. "Our goal is to help our clients get to market quicker and not have to worry about reinventing the wheel. You don't have to spend the first year of creating your game investing in technologies that have already been well established, and you can focus on making the game better."

Omeda began working alongside AccelByte in August 2021 to integrate the game with AccelByte's backend, which helped the studio accelerate the launch of Predecessor by four to six months. The studio also saves time by using managed services. For persistent storage, the game backend services use Amazon DocumentDB (with MongoDB compatibility), a scalable, highly durable, and fully managed database service for operating mission-critical MongoDB workloads, and Amazon RDS for PostgreSQL, a managed service that makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. By using fully managed services, Omeda can focus its time on creating a great player experience. "Game studios take a long time to grow, so it's pivotal for us to use resources where they are most needed: in developing the game," says Miles. "Using AWS, we can spend more time on developing game features." A sketch of what connecting a game service to Amazon DocumentDB can look like follows below.

Omeda plans to release Predecessor by the end of 2022. "It's a very short time scale for a game in general, let alone a game that's going to be online," says Miles. "Using AWS and AccelByte and having the cooperation from their teams facilitated our meeting those aggressive deadlines." The studio is growing quickly, doubling its employee base in the 2 years since it was founded. After the PC release, the studio will also work on releasing the game for consoles.
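This is not Omeda's or AccelByte's actual code; it is a minimal sketch of saving player progression to an Amazon DocumentDB cluster with pymongo, with a hypothetical cluster endpoint, user, and collection. DocumentDB connections require TLS with the Amazon-provided CA bundle, and retryable writes must be disabled.

```python
from pymongo import MongoClient

# Hypothetical endpoint and credentials; DocumentDB requires TLS with the
# Amazon CA bundle (global-bundle.pem) and does not support retryable writes.
URI = (
    "mongodb://gameuser:SECRET@my-cluster.cluster-abc123.us-east-1.docdb.amazonaws.com:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem"
    "&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false"
)

client = MongoClient(URI)
progress = client["game"]["player_progress"]

def save_progress(player_id: str, level: int, loadout: list[str]) -> None:
    """Upsert a player's progression document after a match."""
    progress.update_one(
        {"_id": player_id},
        {"$set": {"level": level, "loadout": loadout}},
        upsert=True,
    )

if __name__ == "__main__":
    save_progress("player-42", level=7, loadout=["blade", "ward"])
    print(progress.find_one({"_id": "player-42"}))
```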
Gaming company Omeda Studios accelerated the launch of its first game, Predecessor, by four to six months using AWS Partner AccelByte's game backend services built on AWS.

About Omeda Studios

Founded in 2020, Omeda Studios is a London-based game studio that builds community-driven games. Its first game, Predecessor, is a multiplayer online battle arena game launching in 2022.

Solution | Accelerating Production Using AccelByte and AWS

Omeda Studios (Omeda) needed a scalable, reliable backend to bring its game, Predecessor, to market quickly and support hundreds of thousands of players. With 50,000 fans in the game's Discord server and 140,000 players signed up to playtest the game, Predecessor is Omeda's first game, and the studio wanted to concentrate its small team on making the best player experience possible without spending all its energy on building the game backend. Omeda turned to Amazon Web Services (AWS) and AccelByte, an AWS Partner and game technology company that provides game backend as a service. Using AccelByte services built on AWS, Omeda accelerated the time to market for Predecessor and improved the reliability and elasticity of the game. "Our aim is to release the game to players as soon as we can, and AccelByte helped us with this," says Miles.

Using AccelByte's services on AWS, Omeda can scale the backend of its game to meet demand for hundreds of thousands of players. Compute for the game runs on Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. AccelByte has deployed its services on AWS to meet Omeda's load and usage requirements, using different-sized disk queues and deployment methodologies to accommodate Omeda's target player concurrency and setting up the architecture to automatically scale up or down. Additionally, because AWS offers high service-level agreements, the reliability and uptime of the game service are high, with AccelByte targeting 99.9 percent uptime for its clients. "High uptime is key for a good player experience, and that's one of the things we trust AWS to deliver," says Miles. "You can make the best game in the world, but if players can't play it because it's down, it doesn't even matter."

"We've succeeded in rebuilding most of what we set out to build," says Meilleur. "AWS has delivered what we needed in a time when we really needed it.""

Achieving Burstable Scalability and Consistent Uptime Using AWS Lambda with TiVo _ Case Study _ AWS.txt,""Deploying the tech stack and architecture is cheap and simple.
Because of the pricing tiers of some of the managed services that we're using and the pay-as-you-go pricing model, it costs almost nothing to innovate."
Taram Devitt-Carolan, Vice President of Engineering, Xperi

Learn how TiVo, in the media and entertainment industry, achieved burstable scalability and consistent uptime of streaming services using AWS Lambda and Amazon API Gateway.

Opportunity | Using Amazon API Gateway to Improve Scalability for TiVo

TiVo makes it easy for people to find, watch, and enjoy what they love in one integrated experience, driving loyalty and engagement. In 2017 TiVo began developing microservices for better scalability and time to market, but the continued investment in its infrastructure impeded the desired benefits.

Solution | Modernizing Hundreds of APIs Using AWS Lambda

Adding new devices and accounts to TiVo's solution, managing content and entitlement, and managing the arrival of guide and programming data are all powered by hundreds of APIs that interface with those datasets. Modernizing these APIs to improve scalability and connectivity was important to the company. TiVo interacts with its clients through Amazon API Gateway. "Our use of Amazon API Gateway is tightly coupled with our authentication and authorization strategy," says Taram Devitt-Carolan, vice president of engineering at Xperi. Using Amazon API Gateway, TiVo drives connectivity and forwards APIs to its microservices, legacy APIs, and serverless functions built on AWS Lambda, a serverless, event-driven compute service that supports running code for virtually any type of application or backend service without provisioning or managing servers. All data processing from APIs is run at scale using AWS Lambda.

Outcome | Improving Innovation Using Serverless Solutions

TiVo plans to continue migrating the rest of its APIs to the cloud using AWS and is looking for ways to innovate further. With more investment in AWS solutions, the company has improved integration and connectivity. It benefits from managed services, like data sharing and data migration, because it is not egressing data. "We get a lot of benefits from using AWS at a very good pricing model. It is enticing to continue migrating to AWS," says Devitt-Carolan.

By using AWS-managed and serverless solutions, TiVo has a better understanding of cost limits and can use this to guide its architecture decisions and innovation. "Deploying the tech stack and architecture is cheap and simple, so that's a clear benefit for us," says Devitt-Carolan. "Because of the pricing tiers of some of the managed services that we're using and the pay-as-you-go pricing model, it costs almost nothing to innovate." Pairing low costs for early development testing with an understanding of cost and usage patterns fits TiVo's incubation process for innovation. Building on managed services costs the company only dollars per day, at most.
"We have a lot of technology that's interconnected, with dependencies across our services, data stores, and deployment models," says Devitt-Carolan.

The interconnectedness of services has price-performance benefits for TiVo. "Our goal is to treat APIs as a commodity," says Devitt-Carolan. "If we need to call an API and load a particular piece of data, it costs only 30 ms at load, whether there is a concurrency of 1 or a concurrency of 1,000, which is excellent."

To run its microservices, TiVo uses Amazon Elastic Kubernetes Service (Amazon EKS), a managed Kubernetes service for running Kubernetes in the AWS Cloud and on-premises data centers. When the company develops a microservice, it runs on an Amazon EKS cluster that has been assimilated into the company's modernized tech stack to be more compatible with its use cases. TiVo similarly uses Amazon Managed Streaming for Apache Kafka (Amazon MSK), which makes it simple to ingest and process streaming data in near real time with fully managed Apache Kafka, with a more distributed strategy to fit the company's needs. "Using Amazon MSK and our infrastructure as code, we can make smaller clusters to support sets of APIs that are related to specific data," says Devitt-Carolan.

TiVo uses AWS Lambda functions across a variety of use cases, both external and internal, ranging from calling services within its system to read and write operations. Alongside AWS Lambda, the company uses Amazon DynamoDB, a fast, flexible NoSQL database service for single-digit-millisecond performance at virtually any scale. TiVo uses AWS Lambda and Amazon DynamoDB to make its APIs lightweight and to query and respond to clients in client use cases. "We have a good, immediate, and burstable scale strategy using Amazon DynamoDB and AWS Lambda, which empowers us to simplify our multiregion approach," says Devitt-Carolan. The sketch below illustrates the pattern.
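To make the API Gateway to Lambda to DynamoDB pattern concrete, here is a minimal sketch of a Lambda handler behind an API Gateway proxy integration; the table name, key schema, and route are hypothetical and not TiVo's actual design.

```python
import json
import os
import boto3

# Hypothetical table of device entitlements, keyed by device_id.
TABLE_NAME = os.environ.get("TABLE_NAME", "device-entitlements")

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)

def handler(event, context):
    """Handle GET /devices/{device_id}: return the device's entitlement record."""
    device_id = event["pathParameters"]["device_id"]
    result = table.get_item(Key={"device_id": device_id})
    item = result.get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item, default=str),  # default=str handles DynamoDB Decimals
    }
```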
By using these serverless services in tandem and modernizing its tech stack, the company improves scalability from a global perspective and can support hundreds of millions of calls per day.

After carefully reviewing the factors slowing transformation, TiVo engineering selected AWS to host all new services so that its teams could focus on bringing value to the customer with the ease and elasticity of serverless technologies. "Adopting more AWS-managed services facilitated better connectivity and synchronization across the tech stack," says Devitt-Carolan. One of the primary managed services TiVo uses is Amazon API Gateway, a fully managed service for creating, publishing, maintaining, monitoring, and securing APIs at virtually any scale. By modernizing its tech stack, TiVo achieves a separation of concerns and predictability at scale.

About TiVo

TiVo creates DVR technology and provides television, on-demand, and streaming services to customers. The company has a solution designed to provide businesses with audience analytics and drive viewership. TiVo Brands LLC (TiVo), a wholly owned subsidiary of entertainment technology company Xperi Inc., is migrating hundreds of APIs to the cloud to achieve burstable scalability, expand growth globally, and achieve consistent uptime of its video services. Instead of investing in an on-premises solution that required ongoing investment in its network infrastructure, TiVo engineering decided to invest in serverless technologies and managed solutions to power core features and critical use cases. TiVo chose Amazon Web Services (AWS) to modernize its on-premises solution by going serverless. In doing so, TiVo improved global scalability, reduced its technical debt, and made room for innovation and engineering efforts without straining its budget."

Acrobits Uses Amazon Chime SDK to Easily Create Video Conferencing Application Boosting Collaboration for Global Users _ Acrobits Case Study _ AWS.txt,"Acrobits leverages Amazon Chime SDK to streamline application development, scale to support thousands of new customers, and increase communication and collaboration.

Solution | Building a New Video Conferencing Solution with Amazon Chime SDK

Acrobits worked alongside the Amazon Chime SDK team to create LinkUp, a new video conferencing solution that features audio, video, screen sharing, and chat functionality for desktop and mobile environments. The application uses AWS services, including Amazon Elastic Compute Cloud (Amazon EC2) instances for compute. "The Amazon Chime SDK team was a great help. Each time we had an issue, they responded right away," adds Rafael Torreblanca, managing director at Acrobits.

Acrobits is also considering integrating Amazon Chime SDK features such as speech-to-text and machine learning (ML) capabilities to analyze customer sentiment. "I can see us using machine learning in our call centers to track customers' moods during calls," Torreblanca says. "Amazon Chime SDK makes it easy for us to add new features that differentiate our application, and we plan to do that to make our customers even more comfortable using LinkUp."

Outcome | Easing Development and Creating a Simple, Unified Application Experience

With LinkUp, Acrobits customers across the globe have improved collaboration via desktop or mobile application. "Our customers simply open the application and press a button for comprehensive video and audio conferencing and chat capabilities, helping them communicate and collaborate more easily," says Torreblanca.
"Also, with features such as noise suppression in Amazon Chime SDK, we can drastically improve communication in call centers or even in noisy home environments." LinkUp also provides user authentication, moderator controls, call recording, and calendar integration, as well as noise suppression through Amazon Voice Focus. Additionally, Acrobits developers used WebRTC media, integrated into Amazon Chime SDK, for high-quality audio and video on WebRTC-enabled browsers and mobile systems. "WebRTC also uses encryption for the entire media element, which gave us confidence in the overall security of the environment," says Torreblanca.

Opportunity | Responding to Customer Demands for Better Collaboration

Acrobits provides white-label communication and collaboration applications to customers worldwide through a low-code platform. Owned by Sinch, which provides software development kits (SDKs) and application programming interfaces (APIs) for developers, Acrobits helps companies create customizable, brandable, enterprise-grade collaboration solutions in a variety of industries. "We serve 500 businesses in 74 countries and manage around 140 million endpoints," says Torreblanca.

Recently, Acrobits needed to respond to customers who were asking for a new video conferencing tool. "The pandemic really initiated that, because many of our customers were caught by surprise and suddenly had people working from home. They needed to give their employees a remote solution for collaborating over video," says Torreblanca. "Building a video collaboration solution from the ground up wasn't something we were ready for or had the time and available resources to do on our own."

The company also needed the right technology to scale as customers adopted the solution. "To meet demand, we knew we had to scale from 10,000 to 100,000 to even 1 million endpoints based on what we were forecasting," says Torreblanca. "The cloud was the only way to make that possible."

By using Amazon Chime SDK and relying on additional AWS services, Acrobits can easily scale LinkUp to meet the video conferencing needs of thousands of customers without limitations. "CPU and memory requirements are intensive for any application, and video conferencing is even more so," explains Torreblanca. "The moment we need to scale as the application grows, we must ensure we have the power to add thousands of new users immediately. AWS helps us do that. Our developers don't need to worry about managing compute capacity and servers as the platform continues expanding." "Our customers have high expectations, and there's always a risk when we put out a new solution, but we were confident we could deliver because of the support and responsiveness we got from AWS," he adds.

About Acrobits

Acrobits is a technology leader in mobile and desktop communication and collaboration solutions, providing white-label solutions to customers worldwide. The company's solutions enable HD voice, video, and multi-messaging mobile and desktop products for system integrators, content service providers, and telecom companies across the communications industry.

Because Acrobits' parent company Sinch, an AWS Partner, runs the majority of its business on AWS, Acrobits sought an AWS-based development solution. That search led the company to Amazon Chime SDK, a set of developer tools that helps builders easily integrate real-time voice, video, and messaging into applications. "Amazon Chime SDK is scalable and very robust," says Torreblanca.
"It is also purely an SDK solution without a defined UI, allowing us to develop a brandable user interface for our customers while also supporting our core white-label business."

By relying on Amazon Chime SDK, Acrobits was able to develop and launch LinkUp in months, offering on-demand scale to support thousands of new customers while improving collaboration for global users. Because Amazon Chime SDK simplifies feature integration, Acrobits streamlined the development and management of LinkUp. "Amazon Chime SDK gives us a lot of flexibility in terms of tools we can use, and it has native interfaces for iOS and Android. This really simplified development," says Torreblanca. "It was easy for us to integrate video, audio, chat, and noise suppression into the application."

Video conferencing may help to increase businesses' productivity while working from home, but with the world reopening, a new trend has emerged: video conferencing fatigue, largely driven by complex UIs. Acrobits designed LinkUp to offer a seamless experience for customers. "LinkUp is not a complicated tool. It's a unified video collaboration platform with simple ways to create and start a meeting and invite people to attend," says Torreblanca. "Using LinkUp, it's very easy for people to set up meetings, connect their calendars, present, and record calls from within the UI while adding a powerful collaboration component to our softphone apps."

With the Amazon Chime SDK, builders can easily add real-time voice, video, and messaging powered by machine learning into their applications. A sketch of the server-side meeting bootstrap the SDK exposes follows below."
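The Amazon Chime SDK splits responsibilities between a backend that creates meetings and attendees and client apps that join with the returned credentials. The following is a minimal sketch of the backend half using boto3; it is illustrative only, not Acrobits' implementation, and the room and user identifiers are hypothetical.

```python
import uuid
import boto3

# Server-side half of an Amazon Chime SDK meeting: create the meeting and
# one attendee; the returned Meeting and Attendee payloads are handed to a
# client app (iOS/Android/web), which uses them to join the media session.
chime = boto3.client("chime-sdk-meetings", region_name="us-east-1")

def create_meeting_with_attendee(room_id: str, user_id: str) -> dict:
    meeting = chime.create_meeting(
        ClientRequestToken=str(uuid.uuid4()),  # idempotency token
        MediaRegion="us-east-1",               # where meeting media is hosted
        ExternalMeetingId=room_id,             # your own room identifier
    )
    attendee = chime.create_attendee(
        MeetingId=meeting["Meeting"]["MeetingId"],
        ExternalUserId=user_id,                # your own user identifier
    )
    return {"Meeting": meeting["Meeting"], "Attendee": attendee["Attendee"]}

if __name__ == "__main__":
    print(create_meeting_with_attendee("room-42", "user-7"))
```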
Actuate AI Case study.txt,"Computer vision startup Actuate AI had a novel idea for identifying threats through security footage. Instead of focusing on facial recognition, which can be expensive, biased, and unreliable, the company set out to use artificial intelligence (AI) object recognition to detect weapons in security camera footage. The result of its efforts was a system that identifies weapons and intruders in real time and notifies stakeholders of immediate threats. However, Actuate AI didn't want to impose expensive hardware costs on its customers' security systems, so it knew it would need substantial cloud compute power for offsite inferencing and for scaling as the company grew.

"Most security decision makers are concerned with being able to identify where people are in a building at any given time, being able to understand anomalous behaviors, and trying to identify violent situations before they happen," says Actuate AI cofounder and chief technology officer Ben Ziomek. "Unless you know exactly the people who are going to be doing these acts, facial recognition doesn't help. By focusing on object recognition, we can give our clients all of the security information they need in an instantaneous, easy-to-digest format that respects privacy."

Actuate AI Powers Its Real-Time Threat-Detection Security Tech Using Amazon EC2

Like many startups, Actuate AI faces the challenge of scale—and it has found a suitable growth environment in the AWS Cloud. "For most applications, you just need raw GPU power," says Ziomek. "Having access to that has enabled us to cut our costs significantly and win some very large contracts that would have been cost prohibitive had we been running on any other type of virtual machines. We've found that the level of granularity we get in monitoring and management on AWS has enabled us to maintain the same level of quality while we scale dramatically."

By focusing the AI inference engine on weapons and intruders rather than faces, Actuate AI is able to provide its clients actionable information with fewer false positives and without the racial bias inherent in many facial recognition–based AI models. Focusing on objects also enables Actuate AI to apply its technology to other relevant security and compliance tasks, including mask compliance, social distancing detection, intruder detection, people counting, and pedestrian traffic analysis.

Actuate AI found an effective solution in Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure, resizable compute capacity in the cloud, and a number of other Amazon Web Services (AWS) Cloud services. This solution enabled Actuate AI to offer an affordable, high-level security layer to existing systems for schools, businesses, and the US military. "We run on the cloud using AWS," says Ziomek, "which lets us offer solutions that are more flexible, faster to install, and less expensive than those from almost anyone else on the market."
About Actuate AI

Actuate AI is a software-based, computer vision AI startup that turns any security camera into a smart camera that monitors threats in real time, accelerating the response times of security firms, schools, corporations, and the US military.

Overcoming the Shortcomings of Facial Recognition

When Ziomek and Actuate AI cofounder and CEO Sonny Tai decided to develop a computer vision AI security system, they knew that improving on the status quo meant changing some of the basics of traditional AI security solutions. Instead of relying on facial recognition, Actuate AI would use object recognition as the backbone of its inference engine. And rather than the expensive, on-premises hardware typically built into other AI security suites, the company would use accelerated cloud computing.

Amazon EC2 G4 Instances give Actuate AI a highly responsive, scalable solution that delivers enough power to run image processing and AI inference for eight jobs concurrently—but only when it's needed. This flexibility enables Actuate AI to scale as necessary while reducing its accelerated computing costs by as much as 66 percent, giving it a huge competitive advantage over AI security firms using on-premises GPUs. "Even a really active camera is going to only have motion on it maybe 40 percent of the time during the day and less than 1 percent of the time at night," says Ziomek. "On AWS, I only have to pay for the time I'm actually using it, which makes the cloud extremely beneficial to our business model. We have never had an issue with GPU instance availability on AWS."

Actuate AI utilizes an in-house AI system that combines best practices from many industry-leading convolutional neural network–based AI models. Many of the system's core functions, however, operate using AWS. The AI uses the processing power of an Amazon EC2 C5 Instance, which delivers cost-effective high performance for compute-intensive workloads, to monitor cameras for movement at all times. The AI then identifies relevant objects in less than half a second with the help of Amazon EC2 G4 Instances. Once the AI has decided that an event is a threat, the metadata is stored in Amazon DynamoDB, a key-value and document database that delivers single-digit-millisecond performance at any scale, and the images themselves are stored in Amazon S3. Then, depending on the client's preferences, Actuate AI uses Amazon API Gateway—a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale—to send the client push notifications about the threat. These notifications can reach monitoring stations in under a second, dramatically increasing the client's ability to respond to threats. A sketch of this detection-event pipeline follows below.
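To make the flow concrete, here is a minimal, hedged sketch of such a detection-event pipeline: persist metadata to DynamoDB, store the frame in S3, and notify subscribers. The table, bucket, and topic names are hypothetical; for brevity the notification step publishes to an SNS topic, standing in for the API Gateway push channel the case study describes.

```python
import json
import time
import uuid
import boto3

# Hypothetical resource names; illustrative only.
TABLE = boto3.resource("dynamodb").Table("detection-events")
S3 = boto3.client("s3")
SNS = boto3.client("sns")
BUCKET = "example-detection-frames"
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:threat-alerts"

def record_detection(camera_id: str, label: str, confidence: float, frame_jpeg: bytes) -> str:
    event_id = str(uuid.uuid4())
    key = f"frames/{camera_id}/{event_id}.jpg"

    # 1. Store the image evidence in S3.
    S3.put_object(Bucket=BUCKET, Key=key, Body=frame_jpeg, ContentType="image/jpeg")

    # 2. Persist searchable metadata in DynamoDB.
    TABLE.put_item(Item={
        "event_id": event_id,
        "camera_id": camera_id,
        "label": label,                 # e.g. "firearm"
        "confidence": str(confidence),  # stored as string; boto3 rejects raw floats
        "frame_key": key,
        "ts": int(time.time() * 1000),
    })

    # 3. Notify monitoring stations (SNS here as a stand-in for the
    #    API Gateway push channel described in the article).
    SNS.publish(TopicArn=TOPIC_ARN, Message=json.dumps({
        "event_id": event_id, "camera_id": camera_id, "label": label,
    }))
    return event_id
```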
Getting Powerful, Cost-Effective Compute Using Amazon EC2

Historically, a lot of building-monitoring security and defense tasks required expensive, specialized hardware, but Actuate AI is taking a software approach and moving those tasks to the cloud. "We can turn any camera into a smart camera and basically displace a lot of sensor suites by using off-the-shelf cameras that can gather almost-as-good data for a far cheaper price," says Ziomek. "We're able to do this with minimal bandwidth usage, often lower than 50 kilobits per second per camera."

Actuate AI runs all actions in the AWS Cloud—using everything from Amazon EC2 P3 Instances powered by NVIDIA V100 Tensor Core GPUs to Amazon EC2 G4 Instances powered by NVIDIA T4 Tensor Core GPUs, along with AWS Lambda, Amazon API Gateway, and Amazon DynamoDB serverless tools. Additionally, the company stores security images in Amazon Simple Storage Service (Amazon S3), which offers industry-leading scalability, data availability, security, and performance. The cloud architecture enables the company to avoid the cost, time, and liability involved in installing and maintaining expensive, onsite servers and to pass the savings on to its clients. "With AI, generally you need accelerated processing, or graphics processing units [GPUs], and those get expensive fast," says Ziomek. "We save our customers money while still making everything work without having to do anything onsite, and that's enabled by the fact that we're a cloud-first solution."

Actuate AI's inference engine relies on what may be the world's largest database of labeled security camera footage—a library of more than 500,000 images that helps the company's AI scour live video to detect very small objects in highly complex scenes with greater than 99 percent accuracy and an industry-leading false-positive rate. Much like a graphically demanding video game, image-reliant AI inferencing requires access to powerful GPUs that can quickly analyze high-resolution images and video concurrently. Actuate AI's models only run when motion is detected, so the number of camera feeds analyzed by the AI increases as motion is detected by more cameras connected to Actuate AI's security system.

Meeting the Future on AWS

The potential applications of its technology are vast. Actuate AI is already working with some customers to track ingress and direct employees to temperature-monitoring stations in the wake of the COVID-19 pandemic, as well as with the US military to help with weapon cataloguing and tracking. Actuate AI currently uses CUDA by NVIDIA—a parallel computing platform and programming model that enables dramatic increases in computing performance by harnessing the power of NVIDIA GPUs—and intends to use NVIDIA A100 Tensor Core GPU–based Amazon EC2 instances to further test the limits of its AI.

Benefits of AWS
Detects firearms and intruders with greater than 99% accuracy in less than 0.5 seconds
Sends push notifications of suspicious activity in under a second
Reduced accelerated computing cost by 66%
Added a security layer with minimal bandwidth usage, often lower than 50 kilobits per second per camera
Enabled a fully software-based AI detection system
Facilitated 100% cloud-based data production"

ADP Developed an Innovative and Secure Digital Wallet in a Few Months Using AWS Services _ Case Study _ AWS.txt,"ADP has seen a positive response in usage of its digital wallet in the United States, processing nearly $1 billion of transactions in customer savings envelopes in the 7 months since launching the product.
Opportunity | Selecting AWS and Nuvalence to Collaborate on ADP's Digital Wallet

Founded in 1949, ADP serves one million customers in 140 countries with its human capital management software. As the source of pay for one in six Americans, ADP saw an opportunity to help enhance the employee experience through financial wellness offerings. The company wanted to move quickly to provide a socially responsible option for its existing customers and lead the way with a modern industry solution. The company's digital wallet includes on-demand access to eligible workers' earned wages before payday, support for online shopping, and many other cutting-edge features.

ADP had been using AWS services since 2015 and had worked with Nuvalence on other business initiatives since 2019, so it decided to enlist both companies as it worked on this strategic initiative. "The AWS team has been with us through thick and thin and is always responsive. By using AWS, we have incorporated best practices while building resilient systems that can handle our global scale," says Lohit Sarma, senior vice president of product development at ADP. "Nuvalence has been a strategic partner of ours, delivering high-quality work. Its expertise in building large-scale digital solutions was an ideal fit for our needs, and we brought the firm in to provide high-quality performance."

The digital wallet development started in early 2022. Teams from ADP, Nuvalence, and AWS first aligned on the architecture and security requirements. AWS then made service recommendations based on the use case and the existing architecture. Nuvalence paired with ADP engineers to design and build the solution, maximizing the effectiveness of features from AWS services and providing the glue to connect to ADP's infrastructure and existing set of services. Although similar projects often take several years to complete, ADP released the first version of its digital wallet in a few months.

As of 2022, ADP supports approximately 1.7 million Wisely card members across the United States and plans to keep investing in its digital wallet while rolling out additional features using AWS services. "ADP pays one in six workers and moves close to $100 billion in payroll per day in the United States," says Sarma. "We have to be working 24/7 with high quality, resiliency, and reliability. We brought AWS and Nuvalence together because of these requirements."

ADP needed flexibility and extensibility to offer a dynamic solution for a fast-moving market with many changing variables. ADP provides education for companies as they roll out the Earned Wage Access feature. With this support, companies can help eligible members make informed decisions while getting valuable access to earned wages when needed. "ADP takes great pride in being a company with high morals that is always there for its clients and their people," says Sarma.
"Using AWS services, we can give people tools to manage their finances and give them access to funds when they potentially need them the most."

About ADP

Human capital management company ADP serves one million customers in 140 countries. In the United States, ADP released its innovative digital wallet, which features tools to help card members with financial wellness.

ADP, a global leader in human capital management solutions, wanted to provide workers across North America with unprecedented flexibility through a modern digital wallet. ADP's vision was to use its robust workforce data and many years of experience to create a product adapted to the modern way people manage their money. To make that vision a reality, ADP needed to build a solution that supported high security and privacy standards, facilitated going to market quickly, and offered technology for innovation. ADP worked alongside Amazon Web Services (AWS) and Nuvalence, an AWS Partner, to use modern, cloud-native development practices to build the digital wallet in a few months, making financial wellness tools more accessible to US workers.

Solution | Launching Multiple Features Quickly Using Serverless Technology from AWS Lambda

Because ADP manages employee and financial services, the company needed the solution to meet rigorous compliance standards, including the Payment Card Industry Data Security Standard (PCI DSS). To bolster the security of its digital wallet, ADP uses services like Amazon Simple Storage Service (Amazon S3), an object storage service built to store and retrieve virtually any amount of data from anywhere. Using Amazon S3, ADP can securely store the flat text files involved in money movement. The solution also uses tokens in place of card numbers to keep transactions secure. Because payment credentials were loaded securely into the digital wallet, customers could use the digital card for purchases and make payments immediately without waiting for a physical card to arrive in the mail. "Data security and privacy are critical to everything we develop," says Sarma. "Using AWS services, we could uphold our company's existing standards while innovating on the implementation." A hedged sketch of this kind of encrypted file storage follows below.

With its digital wallet, ADP accomplished its mission of making financial wellness tools more accessible to US workers. The digital wallet is a safe and simple option through which employees without a traditional bank account can access their pay, giving them freedom in spending their wages. The Earned Wage Access feature gives eligible members access to their earned wages before payday, creating a viable alternative for customers who urgently need access to funds and eliminating the need to take out high-interest-rate loans.
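As an illustration of the storage side only, here is a minimal sketch of writing a money-movement file to S3 with server-side encryption under a KMS key. The bucket name and KMS alias are hypothetical, and this is not ADP's actual implementation; a real deployment would add bucket policies, public-access blocks, and tight KMS key permissions.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names; replace with your own bucket and KMS key or alias.
BUCKET = "example-wallet-money-movement"
KMS_KEY_ALIAS = "alias/wallet-files"

def store_settlement_file(batch_id: str, contents: bytes) -> str:
    """Store a flat settlement file encrypted at rest with SSE-KMS."""
    key = f"settlements/{batch_id}.txt"
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=contents,
        ServerSideEncryption="aws:kms",  # S3 encrypts with the named KMS key
        SSEKMSKeyId=KMS_KEY_ALIAS,
    )
    return key

if __name__ == "__main__":
    print(store_settlement_file("2023-01-31-ach-001", b"RECORD|0001|..."))
```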
Outcome | Investing in the Digital Wallet for Future Growth Using AWS Services

With its digital wallet, ADP accomplished its mission of making financial wellness tools more accessible to US workers. The digital wallet is a safe and simple option through which employees without a traditional bank account can access their pay, giving them freedom in spending their wages. The Earned Wage Access feature gives eligible members access to their earned wages before payday, creating a viable alternative for customers who urgently need access to funds and eliminating the need to take out high-interest-rate loans.

Benefits: increased development speed, creating a digital wallet in a few months; fortified security using tokens and oversight; provided eligible members valuable flexibility with the Earned Wage Access feature; supported $1 billion of processing transactions in customer savings envelopes in 7 months.

(Figure: ADP digital wallet architecture diagram.)" Adzuna doubles its email open rates using Amazon SES _ Adzuna Case Study _ AWS.txt","For a job search engine to differentiate itself in a crowded market, it must be able to match job seekers to relevant jobs more swiftly and reliably than its competitors. Adzuna, a United Kingdom–based job aggregator that serves 20 countries, aims to achieve that goal by using smart technology to match people to the right jobs and sending personalized emails to users. To handle this substantial task, Adzuna required an email service that was reliable, simple to use, and able to scale as the company grew. The company turned to Amazon Web Services (AWS) and found Amazon Simple Email Service (Amazon SES), a high-scale inbound and outbound cloud email service, to be the solution for its requirements. Using Amazon SES, Adzuna can efficiently send billions of emails to its users across the globe.

About Adzuna

Adzuna is a smart, transparent job search engine used by tens of millions of visitors per month across 20 countries globally. It uses the power of technology to match people to better, more fulfilling jobs and keep the world working.

Opportunity | Seeking Reliability, Scalability, and Cost Effectiveness for Large Volumes of Email

To support its goal of sending personalized emails to users, Adzuna needed an easy-to-use email service that could handle increasingly large volumes of email as the company grew.

Solution | Supporting Company Goals through Simplicity and Scalability

Amazon SES proved to be a simple, scalable solution. First, it integrated seamlessly with Adzuna’s existing AWS infrastructure. Second, because Amazon SES exposes a Simple Mail Transfer Protocol (SMTP) interface, the Adzuna developers were able to automate the entire sending process. The team never had to log on to the service or worry about its inner workings, which meant that it could focus its energy on more important tasks, like making necessary edits and updates to emails.

At first, Adzuna relied on standard Amazon SES features while staff focused on content and deliverability. In recent years, Adzuna has shifted to using dedicated IP addresses and tools like Amazon CloudWatch, a service that provides observability of users’ AWS resources and applications on AWS and on premises.
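The case study doesn’t include Adzuna’s sending code; as a minimal sketch of automated sending through the SES SMTP interface mentioned above, using only Python’s standard library. The endpoint Region, credentials, and addresses below are placeholders:

```python
import smtplib
from email.message import EmailMessage

# Build a simple job-alert email (addresses are placeholders).
msg = EmailMessage()
msg["Subject"] = "New jobs matching your search"
msg["From"] = "alerts@example.com"
msg["To"] = "jobseeker@example.com"
msg.set_content("Here are today's matching jobs...")

# SES exposes a regional SMTP endpoint; SMTP credentials are generated
# in the SES console and are distinct from normal AWS access keys.
with smtplib.SMTP("email-smtp.us-east-1.amazonaws.com", 587) as smtp:
    smtp.starttls()
    smtp.login("SMTP_USERNAME", "SMTP_PASSWORD")
    smtp.send_message(msg)
```

Because the interface is plain SMTP, a job like this can be scheduled and scripted without ever logging in to the service itself.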
Because its users rely on the accuracy and timeliness of Adzuna’s emailed job alerts, Adzuna required an email service that was, above all, reliable. “It’s important that there’s no downtime and that there are no deliverability issues—or at least no server issues where emails just completely fail to send,” says Bilal Ikram, email marketing manager at Adzuna.

Adzuna launched in 2011 as a job search site based in the United Kingdom, and it now operates in 20 countries, including the United States, Singapore, Australia, and India. Users can search the website by type of job and location, and have the option to sign up with their email address for job alerts. When users sign up, Adzuna sends an initial welcome email and, after that, sends regular alerts when relevant jobs are posted to the site. With tens of millions of visitors every month, Adzuna sends around two billion personalized emails every year.

Amazon SES turned out to be the most reliable tool for the company’s needs. The Adzuna team initially tested a few other email tools, but they weren’t scalable to the degree the company needed.

“We can simply create commands that constantly send out the emails connected to Amazon SES without us having to worry about volumes,” Ikram says. Further, Adzuna set up Amazon SES so that it runs across multiple AWS Regions, helping to manage the workload and providing a backup option for sending emails if needed. “If we were to have an outage, we would have a fallback, which makes the network more reliable,” Ikram says.

“It would be impossible for us to send volumes of emails with dynamic content to the same extent without using Amazon SES,” says Ikram. “It’s very important that we automate that process and send out emails that are relevant to our users.”

Adzuna has continued to benefit from the scalability of Amazon SES and its additional features. In 2022, the company expanded into four more countries, and it has used Amazon SES to meet the needs of its growing user base throughout the expansion.

Outcome | Relying on an Integrated Suite of Solutions

Overall, Adzuna has benefited from using multiple AWS services for different purposes while keeping everything under the same umbrella. “Using Amazon SES, I can focus more on improving the quality and content of the emails and our underlying metrics rather than having to worry about just sending the emails out on a daily basis,” Ikram says. “So that means we have more time to focus on the things that really matter—connecting our users to better, more fulfilling jobs.”

Amazon SES lets you reach customers confidently without an on-premises Simple Mail Transfer Protocol (SMTP) system. Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance.
Using the automation abilities of Amazon SES, the company has been able to handle its burgeoning volume of email since it began using the service in 2011—almost from the company’s start. Without these capabilities, Adzuna would be unable to perform a key feature of its service.

Since Adzuna’s migration to dedicated internet protocol addresses, the company has seen a significant improvement in email open rates, which have almost doubled. It also saw improvements in click-through rates.

Benefits: doubled email open rates; improved email click-through rates; handles large volumes of email as the company grows; supports the needs of a growing user base; achieved a simple, seamless setup using AWS infrastructure." AEON Case Study.txt","AEON Scales Card Processing System, Achieves 40% Market Growth Using AWS

Based in Cyprus, AEON Payment Technologies wanted to move to the cloud to scale its card processing system for banking customers, and expand into new markets in Europe and Africa. It migrated in just 3 months using the AWS Migration Acceleration Program with the help of AWS Partner Cloud Nomads. With its infrastructure running on AWS, AEON has increased the number of credit and debit cards it handles by 40 percent over 2 years. The business has also saved 33 percent of planned expenditure on IT, and can scale to handle traffic peaks within minutes. Critically, it can easily comply with Visa and Mastercard’s regulations and local data laws, and support Payment Card Industry Data Security Standard (PCI DSS) standards for card processing.

Opportunity: A Streamlined, Scalable Card Processing Software System

AEON turned to AWS Partner Cloud Nomads when it realized its on-premises system was hampering growth. Its existing infrastructure couldn’t scale without significant investment in IT equipment. Its main challenge was to ensure its banking clients could meet customer usage peaks at the end and the beginning of each month, when employee wages are typically paid in.

The company is now able to scale to meet traffic peaks within minutes. “During peak card usage times, we’re seeing 100 card transactions per second with a large number of people checking their accounts online,” says John Abraham, CEO at AEON. “Traffic surges can stifle our business. Thanks to Cloud Nomads and using AWS, we can scale easily, and guarantee our customers a reliable service.”

AEON’s next challenge was to ensure its card processing system was market ready and able to serve new territories in Europe and Africa. AEON is now able to easily comply with GDPR requirements too, using AWS Regions and Availability Zones. The company also set up its own data center close to the AWS EU (Frankfurt) Region data center to support personal identification number (PIN) encryption and decryption, and to meet local privacy requirements in the region.

The AWS Migration Acceleration Program (MAP) is a comprehensive and proven cloud migration program based upon AWS’s experience migrating thousands of enterprise customers to the cloud.

Opportunity: Faster Cloud Migration and Modernization Using AWS Migration Acceleration Program
The company completed its migration in just 3 months using the AWS Migration Acceleration Program (AWS MAP), which helps businesses speed their cloud migration and modernization journey with an outcome-driven methodology. Using AWS MAP gave AEON assurance over the migration process, providing its IT team with the confidence that the project would deliver the successful outcome it needed.

AEON began by migrating its card processing software and databases to Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. AEON also uses Amazon EC2 instances for Windows and Linux to support the card processing system’s databases.

This expansion meant the company needed support for PCI DSS compliance in new regions. Critically, it also meant that AEON had to comply with EU GDPR data privacy laws. In some of its target markets, it would also need to keep sensitive data within country borders to meet local regulations.

Solution: Delivering Full Compliance with Banking Protocols and Privacy Laws

AEON’s systems on Amazon Web Services (AWS) are certified to meet the regulations of its payment associates, Visa and Mastercard. This includes ensuring compliance for those companies’ card issuing and transaction acquisition regulations. With its systems built on AWS, AEON can also comply with the Payment Card Industry Data Security Standard (PCI DSS) requirements and the European Union (EU) General Data Protection Regulation (GDPR) for data privacy. The company has also cut IT expenditure to one-third of its previous budget and can now scale its system to handle traffic peaks within minutes.

Outcome: Building a Growth-Ready Infrastructure to Support New Markets

The AEON team has worked closely with AWS to create a scalable and reliable cloud-based system. “In our business, technology can hinder progress—now, the opposite is true for AEON,” says Abraham. “Technology is aiding our growth. The fact that we handle traffic peaks without incident is a great achievement for both our IT team and AWS.”

AEON has reduced its reliance on on-premises equipment and cut its planned infrastructure budget to one-third of its previous budget using cloud services. “The sales cycle in the card processing industry is long,” says Abraham. “Also, it’s essential to have infrastructure in place so new customers have confidence that we can support them right away. Using AWS, we have the flexibility to serve new customers instantly in our new markets without having to invest in expensive IT equipment and having it sit idle.”

AEON is now evaluating AWS Outposts—which businesses can use to run AWS infrastructure and services on premises for a truly consistent hybrid experience—to support PIN encryption and decryption in the future.
Using AWS, AEON can handle the complex PCI DSS security protocols in the cloud for its card processing software. “We have to have multiple levels of security in place to meet industry regulations—otherwise, we would not be able to operate,” says Abraham. “Because AWS is PCI DSS compliant, we could move to the cloud, easily meet these industry standards, and benefit from much faster card processing.”

Over the past 2 years, AEON has increased the number of credit and debit cards it handles by 40 percent. “Using AWS, we now support 11.5 million cards and 30,000 merchant card terminals,” says Abraham. “We can also guarantee the 99.999 percent uptime we need so that our banking clients limit downtime and manage reputational risk.”

About AEON

Cyprus-based AEON Payment Technologies is a third-party card processing software provider that delivers value-added services to support the payment processing needs of the commercial banking industry. This includes card issuing, transaction management, and also authorization, reconciliation, and infrastructure services.

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. Access reliable, scalable infrastructure on demand, and scale capacity within minutes with an SLA commitment of 99.99% availability. Amazon EC2 running Microsoft Windows Server is a secure, reliable, and high-performance environment for deploying Windows-based applications and workloads. AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, activates your directory-aware workloads and AWS resources to use managed AD on AWS." ALTBalaji _ Amazon Web Services.txt","ALTBalaji launched its platform on the AWS Cloud, using Amazon CloudFront to securely deliver media content to millions of customers every day, Amazon Elastic Compute Cloud (Amazon EC2) instances to run applications, and Amazon Redshift as a data warehouse for analytics.

AWS Elemental MediaTailor is a channel assembly and personalized ad-insertion service for video providers to create linear over-the-top (OTT) channels using existing video content. The service then lets you monetize those channels—or other live streams—with personalized advertising.
ALTBalaji Develops Live Streaming Capabilities and Delivers Reality Show in Real Time to Millions

ALTBalaji is a subscription-based video on demand (SVOD) platform that produces original over-the-top (OTT) media content. To broadcast live streams of its Indian reality show Lock Upp, the company chose to build its live streaming infrastructure on Amazon Web Services (AWS).

About ALTBalaji

Launched in April 2017, India-based ALTBalaji is parent company Balaji Telefilms’ first foray into the digital entertainment space. ALTBalaji offers fresh, original, exclusive stories, tailored for Indian audiences across the world.

Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.

ALTBalaji is now preparing for Lock Upp’s second season knowing it can deliver a reliable live streaming experience. It also plans to test Amazon Transcribe to allow viewers to use voice commands over typing to search for series content. Furthermore, ALTBalaji wants to assess Amazon Personalize for targeted content recommendations to viewers and Amazon Rekognition to reduce the cost of video ad integration and other content operations.

“AWS Elemental MediaLive removed the complexity of developing and operating our live streaming infrastructure, allowing us to focus on providing better user experience and producing unique, compelling content. We’re now exploring new ways to enhance our customers’ experience, and voice search is just the next step in our journey of constant improvement,” concludes Shahabuddin Sheikh, chief technology officer at ALTBalaji.

Solution | Building Live Streaming Capabilities from Scratch

To broadcast live streams of Lock Upp, ALTBalaji built its live streaming infrastructure on AWS Elemental MediaLive—a solution that encodes and transcodes real-time video for broadcast and streaming delivery. Results from a proof of concept (POC) revealed the company could easily add live streaming with advanced broadcasting capabilities to its platform and meet its challenging timeline. The team worked with its AWS Technical Account Manager (TAM) and Subject Matter Expert (SME) to conduct an AWS Infrastructure Event Management (IEM) analysis to right-size the live streaming infrastructure for load handling. In addition, it used AWS Elemental MediaTailor to set up server-side ad integration for live streams under free subscription accounts.

By using AWS Elemental MediaLive, ALTBalaji delivered its live streaming solution in weeks and ensured uninterrupted live streams of Lock Upp during its 72-day run for millions of viewers across India. Furthermore, the live-streaming solution easily managed a tenfold increase in viewership during highly anticipated episodes showing nominations and evictions from Kangana Ranaut’s “jail”.
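The case study doesn’t show ALTBalaji’s configuration; as an illustrative fragment only, standing up one piece of such a MediaLive workflow—an RTMP push input that a production encoder streams into—can be sketched with boto3. The Region, security group ID, and stream name are hypothetical:

```python
import boto3

medialive = boto3.client("medialive", region_name="ap-south-1")

# Create an RTMP push input; the on-set encoder pushes the live feed to
# the endpoint MediaLive returns for this input.
response = medialive.create_input(
    Name="reality-show-live",
    Type="RTMP_PUSH",
    InputSecurityGroups=["1234567"],          # placeholder security group
    Destinations=[{"StreamName": "show/primary"}],
)
print(response["Input"]["Id"], response["Input"]["Destinations"])
```

A channel attached to this input then handles the encoding and transcoding for delivery, which is the part of the pipeline MediaLive manages on the operator’s behalf.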
Outcome | Ensuring Uninterrupted Live Streams for Millions of Viewers

AWS Elemental MediaLive is a broadcast-grade live video processing service that creates high-quality streams for delivery to broadcast TVs and internet-connected devices.

ALTBalaji, a subsidiary of Balaji Telefilms Limited, is the group’s foray into the digital entertainment space. ALTBalaji is an SVOD platform aiming to provide 34 million subscribers with original over-the-top (OTT) Indian media content right at their fingertips. Subscribers can log in to ALTBalaji and access content—such as shows, movies, and music videos—via desktops, tablets, smartphones, and internet-connected TVs.

Opportunity | Delivering a Live Streaming Solution in One Month

In December 2021, ALTBalaji began production on an Indian reality competition series called Lock Upp. Local celebrities, including renowned Indian film stars, comedians, and sports stars, would be locked inside actor and show host Kangana Ranaut’s “jail” for 72 days, and voted out by viewers until there was a winner. It set a February 2022 launch date for Lock Upp and wanted to broadcast live streams of the show for its duration.

ALTBalaji had just over a month to deliver a live streaming solution in time for the start of the series. Shahabuddin Sheikh, chief technology officer at ALTBalaji, says, “Aside from meeting the deadline, we were also concerned about infrastructure downtime and service lags during the live streams, which would negatively impact the viewer experience.”

Many viewers would be streaming from smaller towns in India, where internet speeds are slower than in major urban cities. To ensure an uninterrupted and enjoyable viewing experience from any location, ALTBalaji minimized lags that could cause streams to fail by fine-tuning AWS Elemental MediaLive.

ALTBalaji built its live streaming workflows using AWS Elemental MediaLive, a broadcast-grade live video processing service for high-quality video streams. As a result, it experienced zero downtime during its first live stream despite a tenfold increase in viewership.

Just 19 days after its premiere, Lock Upp garnered more than 100 million views, becoming the most-watched reality show in the Indian OTT space. During the airing of the series, ALTBalaji reported a tenfold increase in viewer data compared to its historical average. However, thanks to optimized workflows in its Amazon Redshift data warehouse, ALTBalaji handled the surge seamlessly. Furthermore, the company gained valuable insights into how often viewers paused and played streams, alongside behavior during live streaming ads and activities that influenced video view count. It plans to use this information to improve product development and user experience.

Benefits: zero downtime, live streaming the reality series for 72 days without interruption; scaled to meet a tenfold surge in viewership; more than 100 million live-stream views of Lock Upp.
Sheikh describes the assistance from AWS as “hyper support.” He says, “Without AWS Elemental MediaLive, it would’ve taken several months to deliver our streaming solution. From the start, AWS understood the criticality of everything we were doing and stayed the course with the team even after the go-live date.”

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience." Amanotes Stays on Beat by Delivering Simple Music Games to Millions Worldwide on AWS.txt,"Amanotes launched its business on the AWS Cloud for scalability, low latency, and stability. “We analyzed cloud providers and determined AWS had the extensive reach we required: 27 AWS Regions worldwide, each featuring multiple Availability Zones and hundreds of edge locations,” says Nguyen Nghi, Head of Technology at Amanotes.

Solution | Running Music Games and Apps Seamlessly on Amazon CloudFront

Amanotes is running its application services, core database, and backend API services on the AWS Cloud. It uses Amazon CloudFront to deliver game content reliably and with low latency to its global user base. “With Amazon CloudFront, we’re delivering content that includes five leading music games to more than 120 million monthly active users who, collectively, make more than 90 million download requests per day,” says Nghi. “We can also secure the content from cyberattacks that could compromise our reputation and slow our expansion into new markets.”

The business is executing plans to complement its existing music ‘Play’ pillar with a ‘Learn’ pillar delivered through an educational music app, and a ‘Simulation’ pillar that gives users the ability to learn musical instruments through digital simulations. This strategy is designed to realize Amanotes’ vision of becoming the number one ecosystem for everyone to play, learn, create, and connect through music.

To stay ahead of competitors, Amanotes needs to innovate continuously to deliver more immersive game experiences while managing costs effectively. With Amazon Elastic Container Service (Amazon ECS) and AWS Fargate, the business easily deploys applications across a scalable, multi-region infrastructure and minimizes its technology team’s management and maintenance workload.

AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. Amazon EKS is a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers.
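As a hedged sketch of what an ECS-on-Fargate deployment of this kind can look like—not Amanotes’ actual setup—the following registers a task definition for a hypothetical game-backend API and runs it as a managed service. All names, ARNs, and network IDs are placeholders:

```python
import boto3

ecs = boto3.client("ecs", region_name="ap-southeast-1")

# Register a Fargate task definition for a hypothetical game-backend API.
task = ecs.register_task_definition(
    family="game-backend-api",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "api",
        "image": "123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/api:latest",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)

# Run it as a service so ECS keeps the desired number of tasks healthy,
# with no servers for the team to patch or scale by hand.
ecs.create_service(
    cluster="games",
    serviceName="game-backend-api",
    taskDefinition=task["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0abc"], "securityGroups": ["sg-0abc"],
        "assignPublicIp": "ENABLED"}},
)
```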
Amanotes Stays on Beat by Delivering ‘Simple Music Games’ to Millions Worldwide on AWS

Amanotes is a Vietnam-headquartered music game developer that publishes games to a global audience. To provide game downloads to global users reliably, securely, and with low latency, Amanotes chose to launch on AWS.

About Amanotes

Founded in 2014 and headquartered in Ho Chi Minh City, Vietnam, Amanotes oversees a portfolio of music games and apps, including Magic Tiles 3, Tiles Hop, and Dancing Road. Since its founding, users across the globe have downloaded Amanotes music games and apps more than 2.5 billion times.

Opportunity | Delivering Music Games with Speed and Scale

Amanotes’ founders decided to focus on a niche the business describes as ‘Simple Music Games’: games that are intuitive and easy for users to interact with. In 2016, Amanotes developed Magic Tiles 3, a game requiring users to tap digital musical notes on their smartphone screens in sync with songs from selected genres.

With AWS, Amanotes has built on the success of Magic Tiles 3 to develop another four major music games: Tiles Hop, Dancing Road, Beat Blader 3D, and Dancing Race, growing into a global app publisher. It’s now one of the leading mobile game publishers in Southeast Asia and one of the top music game publishers worldwide.

Amanotes is also leveraging Amazon Elastic Kubernetes Service (Amazon EKS) to run some of its services. “By leveraging managed services capabilities from Amazon EKS, our team can focus purely on application development without worrying about infrastructure,” says Nghi.

The business delivers its content files in 1.5 seconds or less, with smaller files delivered in just 0.1 seconds. Average request processing time for the Amanotes API is around 100 milliseconds. This low latency leads to repeat gamers and attracts advertisers. This in turn increases revenue generation from in-game and reward-based advertisements, pay-to-play, and subscriptions.

Outcome | Innovating with New Services and Connecting Global Users Through Music

Amanotes plans to further leverage AWS Global Infrastructure and innovative solutions to grow its business in markets such as Japan and China. The business also believes new AWS edge locations in Hanoi and Ho Chi Minh City present opportunities to acquire new customers in its domestic market. Nghi says, “We aim to grow our business as much as possible, and AWS provides the speed and scale we need to do this.”

Amanotes uses Amazon CloudFront, Amazon Elastic Kubernetes Service, and Amazon Elastic Container Service to deliver games from a scalable, multi-region infrastructure via a global content delivery network. With AWS, Amanotes delivers tens of millions of downloads every day to customers around the world.

Key figures: 120 million monthly active users of Amanotes’ games; 90 million content file download requests met daily; average time to deliver downloads of 1.5 seconds; average request processing time of 100 milliseconds; pursuing growth in China and Japan.

Amazon ECS is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications. Amanotes delivers a low-latency, seamless gaming experience to players around the globe with Amazon CloudFront.

Personalizing user experiences is key to Amanotes’ growth strategy.
The business plans to use machine learning through Amazon Personalize to generate more relevant music recommendations for gamers, increasing engagement and growing revenue by attracting more customers.

In 2014, Nguyen Tuan Cuong and Vo Tuan Binh co-founded Amanotes to give users the ability to extend their interactions with music beyond listening. This meant using technology to create personalized experiences tailored to each user’s taste, consumption, and musical ability." Amazon OpenSearch Services vector database capabilities explained _ AWS Big Data Blog.txt","AWS Big Data Blog

Amazon OpenSearch Service's vector database capabilities explained

by Jon Handler, Dylan Tong, Jianwei Li, and Vamshi Vijay Nakkirtha | on 21 JUN 2023 | in Amazon OpenSearch Service, Amazon SageMaker, Artificial Intelligence, Customer Solutions, Foundational (100), Intermediate (200), Thought Leadership

OpenSearch is a scalable, flexible, and extensible open-source software suite for search, analytics, security monitoring, and observability applications, licensed under the Apache 2.0 license. It comprises a search engine, OpenSearch, which delivers low-latency search and aggregations; OpenSearch Dashboards, a visualization and dashboarding tool; and a suite of plugins that provide advanced capabilities like alerting, fine-grained access control, observability, security monitoring, and vector storage and processing. Amazon OpenSearch Service is a fully managed service that makes it simple to deploy, scale, and operate OpenSearch in the AWS Cloud.

As an end user, when you use OpenSearch's search capabilities, you generally have a goal in mind—something you want to accomplish. Along the way, you use OpenSearch to gather information in support of achieving that goal (or maybe the information is the original goal). We've all become used to the "search box" interface, where you type some words, and the search engine brings back results based on word-to-word matching. Let's say you want to buy a couch in order to spend cozy evenings with your family around the fire. You go to Amazon.com, and you type "a cozy place to sit by the fire." Unfortunately, if you run that search on Amazon.com, you get items like fire pits, heating fans, and home decorations—not what you intended. The problem is that couch manufacturers probably didn't use the words "cozy," "place," "sit," and "fire" in their product titles or descriptions.

In recent years, machine learning (ML) techniques have become increasingly popular to enhance search. Among them is the use of embedding models, a type of model that can encode a large body of data into an n-dimensional space where each entity is encoded into a vector, a data point in that space, and organized such that similar entities are closer together. An embedding model, for instance, could encode the semantics of a corpus. By searching for the vectors nearest to an encoded document—k-nearest neighbor (k-NN) search—you can find the most semantically similar documents. Sophisticated embedding models can support multiple modalities, for instance, encoding the image and text of a product catalog and enabling similarity matching on both modalities.

A vector database provides efficient vector similarity search by providing specialized indexes like k-NN indexes. It also provides other database functionality like managing vector data alongside other data types, workload management, access control, and more.
OpenSearch's k-NN plugin provides core vector database functionality for OpenSearch, so when your customer searches for "a cozy place to sit by the fire" in your catalog, you can encode that prompt and use OpenSearch to perform a nearest neighbor query to surface that 8-foot, blue couch with designer-arranged photographs in front of fireplaces.

Using OpenSearch Service as a vector database

With OpenSearch Service's vector database capabilities, you can implement semantic search, Retrieval Augmented Generation (RAG) with LLMs, recommendation engines, and search of rich media.

Semantic search

With semantic search, you improve the relevance of retrieved results using language-based embeddings on search documents. You enable your search customers to use natural language queries, like "a cozy place to sit by the fire," to find their 8-foot-long blue couch. For more information, refer to Building a semantic search engine in OpenSearch to learn how semantic search can deliver a 15% relevance improvement, as measured by normalized discounted cumulative gain (nDCG) metrics, compared with keyword search. For a concrete example, our Improve search relevance with ML in Amazon OpenSearch Service workshop explores the difference between keyword and semantic search, based on a Bidirectional Encoder Representations from Transformers (BERT) model, hosted by Amazon SageMaker, to generate vectors and store them in OpenSearch. The workshop uses product question answers as an example to show how keyword search using the keywords/phrases of the query leads to some irrelevant results. Semantic search is able to retrieve more relevant documents by matching the context and semantics of the query. The following diagram shows an example architecture for a semantic search application with OpenSearch Service as the vector database.

Retrieval Augmented Generation with LLMs

RAG is a method for building trustworthy generative AI chatbots using generative LLMs like OpenAI, ChatGPT, or Amazon Titan Text. With the rise of generative LLMs, application developers are looking for ways to take advantage of this innovative technology. One popular use case involves delivering conversational experiences through intelligent agents. Perhaps you're a software provider with knowledge bases for product information, customer self-service, or industry domain knowledge like tax reporting rules or medical information about diseases and treatments. A conversational search experience provides an intuitive interface for users to sift through information through dialog and Q&A. Generative LLMs on their own are prone to hallucinations—a situation where the model generates a believable but factually incorrect response. RAG solves this problem by complementing generative LLMs with an external knowledge base that is typically built using a vector database hydrated with vector-encoded knowledge articles.

As illustrated in the following diagram, the query workflow starts with a question that is encoded and used to retrieve relevant knowledge articles from the vector database. Those results are sent to the generative LLM, whose job is to augment those results, typically by summarizing the results as a conversational response. By complementing the generative model with a knowledge base, RAG grounds the model on facts to minimize hallucinations. You can learn more about building a RAG solution in the Retrieval Augmented Generation module of our semantic search workshop.
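As a minimal sketch of the indexing and query flow behind both patterns, using the opensearch-py client: the endpoint, credentials, index and field names, and the 3-dimensional toy vectors are illustrative only—a real deployment would store vectors produced by an embedding model such as the BERT model mentioned above.

```python
from opensearchpy import OpenSearch

# Connect to an OpenSearch domain (endpoint and auth are placeholders).
client = OpenSearch(hosts=[{"host": "my-domain.example.com", "port": 443}],
                    http_auth=("user", "pass"), use_ssl=True)

# Create an index with a knn_vector field backed by an HNSW graph.
client.indices.create(index="products", body={
    "settings": {"index.knn": True},
    "mappings": {"properties": {
        "title": {"type": "text"},
        "title_embedding": {"type": "knn_vector", "dimension": 3,
                            "method": {"name": "hnsw", "engine": "nmslib",
                                       "space_type": "l2"}}}}})

# Index a document whose embedding came from your embedding model.
client.index(index="products", body={
    "title": "8-foot blue couch",
    "title_embedding": [0.12, 0.70, 0.05]}, refresh=True)

# Query: encode "a cozy place to sit by the fire" with the same model,
# then retrieve the nearest neighbors of that query vector.
results = client.search(index="products", body={
    "size": 3,
    "query": {"knn": {"title_embedding": {
        "vector": [0.10, 0.68, 0.07], "k": 3}}}})
print(results["hits"]["hits"])
```

In a RAG application, the hits returned by the k-NN query are the knowledge articles passed to the generative LLM for augmentation.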
Recommendation engine

Recommendations are a common component in the search experience, especially for ecommerce applications. Adding a user experience feature like "more like this" or "customers who bought this also bought that" can drive additional revenue through getting customers what they want. Search architects employ many techniques and technologies to build recommendations, including Deep Neural Network (DNN) based recommendation algorithms such as the two-tower neural net model and YoutubeDNN. A trained embedding model encodes products, for example, into an embedding space where products that are frequently bought together are considered more similar, and therefore are represented as data points that are closer together in the embedding space. Another possibility is that product embeddings are based on co-rating similarity instead of purchase activity. You can employ this affinity data by calculating the vector similarity between a particular user's embedding and vectors in the database to return recommended items. The following diagram shows an example architecture of building a recommendation engine with OpenSearch as a vector store.

Media search

Media search enables users to query the search engine with rich media like images, audio, and video. Its implementation is similar to semantic search—you create vector embeddings for your search documents and then query OpenSearch Service with a vector. The difference is that you use a computer vision deep neural network (for example, a Convolutional Neural Network (CNN) such as ResNet) to convert images into vectors. The following diagram shows an example architecture of building an image search with OpenSearch as the vector store.

Understanding the technology

OpenSearch uses approximate nearest neighbor (ANN) algorithms from the NMSLIB, FAISS, and Lucene libraries to power k-NN search. These search methods employ ANN to improve search latency for large datasets. Of the three search methods the k-NN plugin provides, this method offers the best search scalability for large datasets. The engine details are as follows:

Non-Metric Space Library (NMSLIB) – NMSLIB implements the HNSW ANN algorithm
Facebook AI Similarity Search (FAISS) – FAISS implements both HNSW and IVF ANN algorithms
Lucene – Lucene implements the HNSW algorithm

Each of the three engines used for approximate k-NN search has its own attributes that make one more sensible to use than the others in a given situation. You can follow the general information in this section to help determine which engine will best meet your requirements. In general, NMSLIB and FAISS should be selected for large-scale use cases. Lucene is a good option for smaller deployments, but offers benefits like smart filtering, where the optimal filtering strategy—pre-filtering, post-filtering, or exact k-NN—is automatically applied depending on the situation. The following table summarizes the differences between each option.
NMSLIB-HNSW – Max dimension: 16,000. Filter: post filter. Training required: no. Similarity metrics: l2, innerproduct, cosinesimil, l1, linf. Vector volume: tens of billions. Indexing latency: low. Query latency and quality: low latency, high quality. Vector compression: flat. Memory consumption: high.

FAISS-HNSW – Max dimension: 16,000. Filter: post filter. Training required: no. Similarity metrics: l2, innerproduct. Vector volume: tens of billions. Indexing latency: low. Query latency and quality: low latency, high quality. Vector compression: flat or product quantization. Memory consumption: high (low with PQ).

FAISS-IVF – Max dimension: 16,000. Filter: post filter. Training required: yes. Similarity metrics: l2, innerproduct. Vector volume: tens of billions. Indexing latency: lowest. Query latency and quality: low latency, low quality. Vector compression: flat or product quantization. Memory consumption: medium (low with PQ).

Lucene-HNSW – Max dimension: 1,024. Filter: filter while search. Training required: no. Similarity metrics: l2, cosinesimil. Vector volume: fewer than ten million. Indexing latency: low. Query latency and quality: high latency, high quality. Vector compression: flat. Memory consumption: high.

Approximate and exact nearest-neighbor search

The OpenSearch Service k-NN plugin supports three different methods for obtaining the k-nearest neighbors from an index of vectors: approximate k-NN, score script (exact k-NN), and painless extensions (exact k-NN).

Approximate k-NN

The first method takes an approximate nearest neighbor approach—it uses one of several algorithms to return the approximate k-nearest neighbors to a query vector. Usually, these algorithms sacrifice indexing speed and search accuracy in return for performance benefits such as lower latency, smaller memory footprints, and more scalable search. Approximate k-NN is the best choice for searches over large indexes (that is, hundreds of thousands of vectors or more) that require low latency. You should not use approximate k-NN if you want to apply a filter on the index before the k-NN search, which greatly reduces the number of vectors to be searched. In this case, you should use either the score script method or painless extensions.

Score script

The second method extends the OpenSearch Service score script functionality to run a brute force, exact k-NN search over knn_vector fields or fields that can represent binary objects. With this approach, you can run k-NN search on a subset of vectors in your index (sometimes referred to as a pre-filter search). This approach is preferred for searches over smaller bodies of documents or when a pre-filter is needed. Using this approach on large indexes may lead to high latencies.

Painless extensions

The third method adds the distance functions as painless extensions that you can use in more complex combinations. Similar to the k-NN score script, you can use this method to perform a brute force, exact k-NN search across an index, which also supports pre-filtering. This approach has slightly slower query performance compared to the k-NN score script. If your use case requires more customization over the final score, you should use this approach over score script k-NN.
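A sketch of the score script (exact k-NN) method with a pre-filter, reusing the hypothetical index and client from the earlier example; the filter field and vectors are illustrative:

```python
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "my-domain.example.com", "port": 443}],
                    http_auth=("user", "pass"), use_ssl=True)

# Exact k-NN scored only over documents matching the filter; suitable
# for small candidate sets where brute-force scoring is affordable.
body = {
    "size": 3,
    "query": {
        "script_score": {
            # Pre-filter: restrict scoring to this category.
            "query": {"term": {"category": "couches"}},
            "script": {
                "source": "knn_score",
                "lang": "knn",
                "params": {
                    "field": "title_embedding",
                    "query_value": [0.10, 0.68, 0.07],
                    "space_type": "l2",
                },
            },
        }
    },
}
results = client.search(index="products", body=body)
```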
Vector search algorithms

The simple way to find similar vectors is to use k-nearest neighbors (k-NN) algorithms, which compute the distance between a query vector and the other vectors in the vector database. As we mentioned earlier, the score script k-NN and painless extensions search methods use the exact k-NN algorithms under the hood. However, in the case of extremely large datasets with high dimensionality, this creates a scaling problem that reduces the efficiency of the search. Approximate nearest neighbor (ANN) search methods can overcome this by employing tools that restructure indexes more efficiently and reduce the dimensionality of searchable vectors. There are different ANN search algorithms; for example, locality sensitive hashing, tree-based, cluster-based, and graph-based. OpenSearch implements two ANN algorithms: Hierarchical Navigable Small Worlds (HNSW) and Inverted File System (IVF). For a more detailed explanation of how the HNSW and IVF algorithms work in OpenSearch, see the blog post "Choose the k-NN algorithm for your billion-scale use case with OpenSearch."

Hierarchical Navigable Small Worlds

The HNSW algorithm is one of the most popular algorithms out there for ANN search. The core idea of the algorithm is to build a graph with edges connecting index vectors that are close to each other. Then, on search, this graph is partially traversed to find the approximate nearest neighbors to the query vector. To steer the traversal towards the query's nearest neighbors, the algorithm always visits the closest candidate to the query vector next.

Inverted File

The IVF algorithm separates your index vectors into a set of buckets, then, to reduce your search time, only searches through a subset of these buckets. However, if the algorithm just randomly split up your vectors into different buckets, and only searched a subset of them, it would yield a poor approximation. The IVF algorithm uses a more elegant approach. First, before indexing begins, it assigns each bucket a representative vector. When a vector is indexed, it gets added to the bucket that has the closest representative vector. This way, vectors that are closer to each other are placed roughly in the same or nearby buckets.

Vector similarity metrics

All search engines use a similarity metric to rank and sort results and bring the most relevant results to the top. When you use a plain text query, the similarity metric is called TF-IDF, which measures the importance of the terms in the query and generates a score based on the number of textual matches. When your query includes a vector, the similarity metrics are spatial in nature, taking advantage of proximity in the vector space. OpenSearch supports several similarity or distance measures:

Euclidean distance – The straight-line distance between points.
L1 (Manhattan) distance – The sum of the differences of all of the vector components. L1 distance measures how many orthogonal city blocks you need to traverse from point A to point B.
L-infinity (chessboard) distance – The number of moves a King would make on an n-dimensional chessboard. It's different than Euclidean distance on the diagonals—a diagonal step on a 2-dimensional chessboard is 1.41 Euclidean units away, but only 1 L-infinity unit away, which is why a King reaches a diagonally adjacent square in a single move.
Inner product – The product of the magnitudes of two vectors and the cosine of the angle between them. Usually used for natural language processing (NLP) vector similarity.
Cosine similarity – The cosine of the angle between two vectors in a vector space.
Hamming distance – For binary-coded vectors, the number of bits that differ between the two vectors.

Advantage of OpenSearch as a vector database

When you use OpenSearch Service as a vector database, you can take advantage of the service's features like usability, scalability, availability, interoperability, and security. More importantly, you can use OpenSearch's search features to enhance the search experience. For example, you can use Learning to Rank in OpenSearch to integrate user clickthrough behavior data into your search application and improve search relevance. You can also combine OpenSearch text search and vector search capabilities to search documents with keyword and semantic similarity. You can also use other fields in the index to filter documents to improve relevance.
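One way this combination can look in a single request is a bool query that filters on a keyword field while matching on both a text clause and a k-NN clause. This is a hedged sketch against the same hypothetical index as before—field names and vectors are illustrative, and the exact filtering behavior (pre- versus post-filter) depends on the engine, as described earlier:

```python
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "my-domain.example.com", "port": 443}],
                    http_auth=("user", "pass"), use_ssl=True)

# Approximate k-NN blended with BM25 text matching and a Boolean filter.
body = {
    "size": 5,
    "query": {
        "bool": {
            # Structured constraint from a non-vector field.
            "filter": {"term": {"in_stock": True}},
            # Vector similarity drives the core relevance score...
            "must": [{"knn": {"title_embedding": {
                "vector": [0.10, 0.68, 0.07], "k": 10}}}],
            # ...and a lexical match can boost keyword-relevant hits.
            "should": [{"match": {"title": "cozy couch"}}],
        }
    },
}
results = client.search(index="products", body=body)
```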
For advanced users, you can use a hybrid scoring model to combine OpenSearch's text-based relevance score, computed with the Okapi BM25 function, with its vector search score to improve the ranking of your search results.

Scale and limits

OpenSearch as a vector database supports billions of vector records. Keep the following guidance on the number of vectors and dimensions in mind when you size your cluster.

Number of vectors

OpenSearch as a vector database takes advantage of the sharding capabilities of OpenSearch and can scale to billions of vectors at single-digit millisecond latencies by sharding vectors and scaling horizontally by adding more nodes. The number of vectors that can fit in a single machine is a function of the off-heap memory availability on the machine. The number of nodes required will depend on the amount of memory that can be used for the algorithm per node and the total amount of memory required by the algorithm. The more nodes, the more memory and the better the performance. The amount of memory available per node is computed as memory_available = (node_memory – jvm_size) * circuit_breaker_limit, with the following parameters:

node_memory – The total memory of the instance.
jvm_size – The OpenSearch JVM heap size. This is set to half of the instance's RAM, capped at approximately 32 GB.
circuit_breaker_limit – The native memory usage threshold for the circuit breaker. This is set to 0.5.

Total cluster memory estimation depends on the total number of vector records and the algorithm. HNSW and IVF have different memory requirements. You can refer to Memory Estimation for more details.

Number of dimensions

OpenSearch's current dimension limit for the vector field knn_vector is 16,000 dimensions. Each dimension is represented as a 32-bit float. The more dimensions, the more memory you'll need to index and search. The number of dimensions is usually determined by the embedding model that translates the entity to a vector. There are a lot of options to choose from when building your knn_vector field. To determine the correct methods and parameters to choose, refer to Choosing the right method.
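As a worked example of this sizing arithmetic: the per-node formula below is the one given above, and the per-vector HNSW estimate of roughly 1.1 * (4 * dimension + 8 * M) bytes follows the OpenSearch k-NN documentation's memory estimation; the instance size, vector count, and dimension are hypothetical.

```python
# Per-node memory available to the k-NN plugin (formula from the text).
def knn_memory_available_gib(node_memory_gib: float) -> float:
    jvm_gib = min(node_memory_gib / 2, 32)  # heap: half of RAM, capped ~32 GB
    circuit_breaker_limit = 0.5             # native-memory threshold
    return (node_memory_gib - jvm_gib) * circuit_breaker_limit

# Approximate HNSW graph memory, per the OpenSearch k-NN docs:
# about 1.1 * (4 * dimension + 8 * M) bytes per vector.
def hnsw_memory_gib(num_vectors: int, dimension: int, m: int = 16) -> float:
    return 1.1 * (4 * dimension + 8 * m) * num_vectors / 1024**3

# Hypothetical cluster: 64 GiB data nodes, 100M vectors of 768 dimensions.
per_node = knn_memory_available_gib(64)        # 16.0 GiB usable per node
total = hnsw_memory_gib(100_000_000, 768)      # ~328 GiB for the graph
print(f"nodes needed: {total / per_node:.1f}")  # ~20.5 -> provision ~21 nodes
```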
Customer stories: Amazon Music

Amazon Music is always innovating to provide customers with unique and personalized experiences. One of Amazon Music's approaches to music recommendations is a remix of a classic Amazon innovation, item-to-item collaborative filtering, and vector databases. Using data aggregated based on user listening behavior, Amazon Music has created an embedding model that encodes music tracks and customer representations into a vector space where neighboring vectors represent tracks that are similar. 100 million songs are encoded into vectors, indexed into OpenSearch, and served across multiple geographies to power real-time recommendations. OpenSearch currently manages 1.05 billion vectors and supports a peak load of 7,100 vector queries per second to power Amazon Music recommendations.

The item-to-item collaborative filter continues to be among the most popular methods for online product recommendations because of its effectiveness at scaling to large customer bases and product catalogs. OpenSearch makes it easier to operationalize and further the scalability of the recommender by providing scale-out infrastructure and k-NN indexes that grow linearly with respect to the number of tracks and similarity search in logarithmic time.

(Figure: visualization of the high-dimensional space created by the vector embedding.)

Brand protection at Amazon

Amazon strives to deliver the world's most trustworthy shopping experience, offering customers the widest possible selection of authentic products. To earn and maintain our customers' trust, we strictly prohibit the sale of counterfeit products, and we continue to invest in innovations that ensure only authentic products reach our customers. Amazon's brand protection programs build trust with brands by accurately representing and completely protecting their brand. We strive to ensure that public perception mirrors the trustworthy experience we deliver. Our brand protection strategy focuses on four pillars: (1) proactive controls, (2) powerful tools to protect brands, (3) holding bad actors accountable, and (4) protecting and educating customers. Amazon OpenSearch Service is a key part of Amazon's proactive controls.

In 2022, Amazon's automated technology scanned more than 8 billion attempted changes daily to product detail pages for signs of potential abuse. Our proactive controls found more than 99% of blocked or removed listings before a brand ever had to find and report it. These listings were suspected of being fraudulent, infringing, counterfeit, or at risk of other forms of abuse. To perform these scans, Amazon created tooling that uses advanced and innovative techniques, including the use of advanced machine learning models, to automate the detection of intellectual property infringements in listings across Amazon's stores globally. A key technical challenge in implementing such an automated system is the ability to search for protected intellectual property within a vast billion-vector corpus in a fast, scalable, and cost-effective manner. Leveraging Amazon OpenSearch Service's scalable vector database capabilities and distributed architecture, we successfully developed an ingestion pipeline that has indexed a total of 68 billion 128- and 1,024-dimension vectors into OpenSearch Service to enable brands and automated systems to conduct infringement detection, in real time, through a highly available and fast (sub-second) search API.

Conclusion

Whether you're building a generative AI solution, searching rich media and audio, or bringing more semantic search to your existing search-based application, OpenSearch is a capable vector database. OpenSearch supports a variety of engines, algorithms, and distance measures that you can employ to build the right solution. OpenSearch provides a scalable engine that can support vector search at low latency and up to billions of vectors. With OpenSearch and its vector DB capabilities, your users can find that 8-foot blue couch easily, and relax by a cozy fire.

About the Authors

Jon Handler is a Senior Principal Solutions Architect at Amazon Web Services based in Palo Alto, CA. Jon works closely with OpenSearch and Amazon OpenSearch Service, providing help and guidance to a broad range of customers who have search and log analytics workloads that they want to move to the AWS Cloud. Prior to joining AWS, Jon's career as a software developer included four years of coding a large-scale ecommerce search engine. Jon holds a Bachelor of the Arts from the University of Pennsylvania, and a Master of Science and a Ph.D. in Computer Science and Artificial Intelligence from Northwestern University.

Jianwei Li is a Principal Analytics Specialist TAM at Amazon Web Services. Jianwei provides consulting services to help customers design and build modern data platforms.
Jianwei has worked in the big data domain as a software developer, consultant, and tech leader.

Dylan Tong is a Senior Product Manager at Amazon Web Services. He leads the product initiatives for AI and machine learning (ML) on OpenSearch, including OpenSearch's vector database capabilities. Dylan has decades of experience working directly with customers and creating products and solutions in the database, analytics, and AI/ML domains. Dylan holds a BSc and an MEng degree in Computer Science from Cornell University.

Vamshi Vijay Nakkirtha is a Software Engineering Manager working on the OpenSearch Project and Amazon OpenSearch Service. His primary interests include distributed systems. He is an active contributor to various plugins, like k-NN, GeoSpatial, and dashboard-maps." Anghami Case Study.txt","Anghami Personalizes Music Recommendations Using Amazon OpenSearch Service

Anghami is a music-streaming service based in Abu Dhabi. It serves approximately 70 million users in Europe, the Middle East and North Africa (MENA), and the US, giving them access to more than 72 million songs and podcasts. Over the past 10 years, it grew from a homegrown start-up into the first Arab technology company to be listed on the Nasdaq stock exchange, in February 2022. Anghami sets itself apart from competitors by helping customers find suitable audio content through personalized recommendations. When its previous technology platform proved difficult to maintain and develop new features for, it turned to Amazon Web Services (AWS). The company built a new platform on AWS that uses machine learning (ML) to generate recommendations. It can now quickly surface relevant content for users, attract top tech talent, rapidly develop new features that enrich customer experience, and support future product innovation.

With the recent rise of rival music services, Anghami recognized the growing significance of guiding customers towards the artists and content that align with their preferences. This became even more crucial given the extensive and expanding collection of Arabic and international music available on the platform. These music-recommendation features attract new customers and foster greater user loyalty. The company has observed that users spend more time on the site when presented with additional song recommendations.

Anghami's previous solution for generating recommendations used legacy code that made it difficult for its team to expand its functionality. Anghami decided to create a new, cloud-native solution on AWS. The new platform eliminated the liability of maintaining old code and freed up more time for engineers to build new features and capabilities for customers. It also meant they could take advantage of versatile tools such as Amazon OpenSearch Service, which makes it easy to perform interactive log analytics, real-time application monitoring, and website searches.

The company aimed to develop a cutting-edge recommendations platform that could scale to handle its expanding user base, while facilitating the creation of novel features and services for its customers.
Opportunity: Reducing Technology Risk and Building a Platform for Innovation

An AWS customer since its inception, Anghami reached out to AWS solutions architects to investigate its technology options based on its future plans. After several in-depth workshops, they came up with a new architecture that is simple, powerful, and easy to maintain and develop on. Within 6 months of the initial architecture workshops with AWS, Anghami launched its cloud-based recommendations engine for its growing catalog of songs and podcasts. The service's recommendation platform now runs on Amazon OpenSearch Service. Anghami stores its user behavior data and audio content on Amazon Simple Storage Service (Amazon S3), object storage built to retrieve any amount of data from anywhere. To run its large data workloads, the company uses Amazon EMR, which easily runs and scales Apache Spark, Hive, Presto, and other big data workloads. These workloads include training on nearly a decade's worth of customer data collected from the millions of customers who use the streaming music service daily. To train the machine learning models that produce music recommendations, Anghami uses Amazon SageMaker, which helps to build, train, and deploy ML models.

Benefits: 72+ million songs and podcasts served seamlessly; 6 months to migrate the entire song database; 10x faster to develop music search queries.

About Company

Founded in 2012 in Beirut, Anghami offers free and paid audio-streaming services. Its premium service provides features such as the ability to download tracks and play them offline, rewind or fast-forward music, and view lyrics.

Outcome: Owning Audio Content and Delighting Customers Using AWS

Anghami plans to continue growing its audio catalog and expanding its user base in the Middle East and beyond. "We want to own audio in the regions we operate, for podcasts, audiobooks, and music," says Kevin Williams, Vice President (VP) of Machine Learning at Anghami. "Using AWS, we have everything we need to accomplish that. Our platform is flexible, reliable, scalable, and easy to maintain, so we can spend our efforts on valuable tasks that benefit customers instead of maintenance." Anghami now has a technology foundation it can build on for years to come. "I'm excited about running development sprints and discovering the best customer experiences in a timely manner," says Williams.
Solution: Attracting Top Tech Talent and Developing Prototypes in Days on AWS

Anghami provides a music-streaming service across the Middle East and North Africa (MENA), Europe, and the US. The company has offices in Abu Dhabi, Beirut, Cairo, Dubai, and Riyadh, and employs more than 160 people. Anghami developers can now rapidly prototype new feature ideas from product teams and quickly develop queries to recommend content for users. Writing a search query and creating a prototype takes 1–2 days on AWS, as opposed to around 2 weeks on the previous system. Since launching on AWS, the team has created new functions on the service landing page that suggest artists and relevant playlists for customers to listen to, instead of just suggesting tracks.

Anghami can also release new music to fans almost immediately. When new tracks drop, typically on Fridays, fans can access them within a minute of the official release. With the previous solution, the tech team couldn't quickly add a single track to the catalog. Using OpenSearch, however, the team can insert songs and serve them through its machine learning algorithm within moments of a song's release. "This is an essential feature that really makes us stand out compared to our rivals," says Williams. "It's satisfying to build on fans' excitement about new releases."

Building its platform on AWS has also reduced the company's technology risk because it is now easier to find talented engineers and DevOps staff. "As a tech company, you're only as good as your talent," says Williams. "We can quickly find candidates with OpenSearch skills and others who are motivated to learn OpenSearch because it's a widely used technology. It's also quicker to train up technical staff, because they can access existing documentation on AWS services.""
Announcing enhanced table extractions with Amazon Textract _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Announcing enhanced table extractions with Amazon Textract
by Raj Pathak, Anjan Biswas, and Lalita Reddi | on 07 JUN 2023 | in Amazon Machine Learning, Amazon Textract, Artificial Intelligence

Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from any document or image. Amazon Textract has a Tables feature within the AnalyzeDocument API that offers the ability to automatically extract tabular structures from any document. In this post, we discuss the improvements made to the Tables feature and how it makes it easier to extract information in tabular structures from a wide variety of documents. Tabular structures in documents such as financial reports, paystubs, and certificate of analysis files are often formatted in a way that enables easy interpretation of information. They often also include information such as table titles, table footers, section titles, and summary rows within the tabular structure for better readability and organization.
For a similar document prior to this enhancement, the Tables feature within AnalyzeDocument would have identified those elements as cells, and it didn't extract titles and footers that are present outside the bounds of the table. In such cases, custom postprocessing logic to identify such information, or to extract it separately from the API's JSON output, was necessary. With this announcement of enhancements to the Tables feature, the extraction of various aspects of tabular data becomes much simpler.

In April 2023, Amazon Textract introduced the ability to automatically detect titles, footers, section titles, and summary rows present in documents via the Tables feature. In this post, we discuss these enhancements and give examples to help you understand and use them in your document processing workflows. We walk through code examples that use the API and process the response with the Amazon Textract Textractor library.

Overview of solution

The following image shows that the updated model not only identifies the table in the document but also all corresponding table headers and footers. This sample financial report document contains a table title, footer, section title, and summary rows. The Tables feature enhancement adds support for four new elements in the API response that allow you to extract each of these table elements with ease, and adds the ability to distinguish the type of table.

Table elements

Amazon Textract can identify several components of a table, such as table cells and merged cells. These components, known as Block objects, encapsulate the details related to the component, such as the bounding geometry, relationships, and confidence score. A Block represents items that are recognized in a document within a group of pixels close to each other. The following are the new Table Blocks introduced in this enhancement:

Table title – A new Block type called TABLE_TITLE that enables you to identify the title of a given table. Titles can be one or more lines, which are typically above a table or embedded as a cell within the table.

Table footers – A new Block type called TABLE_FOOTER that enables you to identify the footers associated with a given table. Footers can be one or more lines that are typically below the table or embedded as a cell within the table.

Section title – A new Block type called TABLE_SECTION_TITLE that enables you to identify if the cell detected is a section title.

Summary cells – A new Block type called TABLE_SUMMARY that enables you to identify if the cell is a summary cell, such as a cell for totals on a paystub.

Types of tables

When Amazon Textract identifies a table in a document, it extracts all the details of the table into a top-level Block type of TABLE. Tables can come in various shapes and sizes. For example, documents often contain tables that may or may not have a discernible table header. To help distinguish these types of tables, we added two new entity types for a TABLE Block: SEMI_STRUCTURED_TABLE and STRUCTURED_TABLE. These entity types help you distinguish between a structured and a semi-structured table. Structured tables have clearly defined column headers, whereas with semi-structured tables, data might not follow a strict structure; for example, data may appear in a tabular layout without defined headers. The new entity types offer the flexibility to choose which tables to keep or remove during post-processing, as illustrated in the sketch that follows.
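As a minimal, hypothetical sketch of that post-processing choice (assuming response holds the parsed JSON output of an AnalyzeDocument call with the Tables feature), you could keep only the structured tables like this:

def keep_structured_tables(response):
    # A TABLE block carries an EntityTypes list identifying its kind.
    return [
        block
        for block in response["Blocks"]
        if block["BlockType"] == "TABLE"
        and "STRUCTURED_TABLE" in block.get("EntityTypes", [])
    ]

Semi-structured tables could instead be routed to a separate review step rather than dropped, depending on the workflow.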
The following image shows an example of STRUCTURED_TABLE and SEMI_STRUCTURED_TABLE.

Analyzing the API output

In this section, we explore how you can use the Amazon Textract Textractor library to postprocess the API output of AnalyzeDocument with the Tables feature enhancements. This allows you to extract relevant information from tables. Textractor is a library created to work seamlessly with Amazon Textract APIs and utilities to subsequently convert the JSON responses returned by the APIs into programmable objects. You can also use it to visualize entities on the document and export the data in formats such as comma-separated values (CSV) files. It's intended to aid Amazon Textract customers in setting up their postprocessing pipelines.

In our examples, we use the following sample page from a 10-K SEC filing document. The following code can be found within our GitHub repository. To process this document, we use the Textractor library and import it to postprocess the API outputs and visualize the data:

pip install amazon-textract-textractor

The first step is to call Amazon Textract AnalyzeDocument with the Tables feature, denoted by the features=[TextractFeatures.TABLES] parameter, to extract the table information. Note that this method invokes the real-time (or synchronous) AnalyzeDocument API, which supports single-page documents. However, you can use the asynchronous StartDocumentAnalysis API to process multi-page documents (with up to 3,000 pages).

from PIL import Image
from textractor import Textractor
from textractor.visualizers.entitylist import EntityList
from textractor.data.constants import TextractFeatures, Direction, DirectionalFinderType

image = Image.open("sec_filing.png")  # loads the document image with Pillow
extractor = Textractor(region_name="us-east-1")  # initialize the Textractor client; modify the Region if required
document = extractor.analyze_document(
    file_source=image,
    features=[TextractFeatures.TABLES],
    save_image=True
)

The document object contains metadata about the document that can be reviewed. Notice that it recognizes one table in the document along with other entities:

This document holds the following data:
Pages - 1
Words - 658
Lines - 122
Key-values - 0
Checkboxes - 0
Tables - 1
Queries - 0
Signatures - 0
Identity Documents - 0
Expense Documents - 0

Now that we have the API output containing the table information, we visualize the different elements of the table using the response structure discussed previously:

table = EntityList(document.tables[0])
document.tables[0].visualize()

The Textractor library highlights the various entities within the detected table with a different color code for each table element. Let's dive deeper into how we can extract each element. The following code snippet demonstrates extracting the title of the table:

table_title = table[0].title.text
table_title

'The following table summarizes, by major security type, our cash, cash equivalents, restricted cash, and marketable securities that are measured at fair value on a recurring basis and are categorized using the fair value hierarchy (in millions):'

Similarly, we can use the following code to extract the footers of the table. Notice that table_footers is a list, which means that there can be one or more footers associated with the table.
We can iterate over this list to see all the footers present; as shown in the following code snippet, the output displays three footers:

table_footers = table[0].footers
for footer in table_footers:
    print(footer.text)

(1) The related unrealized gain (loss) recorded in "Other income (expense), net" was $(116) million and $1.0 billion in Q3 2021 and Q3 2022, and $6 million and $(11.3) billion for the nine months ended September 30, 2021 and 2022.
(2) We are required to pledge or otherwise restrict a portion of our cash, cash equivalents, and marketable fixed income securities primarily as collateral for real estate, amounts due to third-party sellers in certain jurisdictions, debt, and standby and trade letters of credit. We classify cash, cash equivalents, and marketable fixed income securities with use restrictions of less than twelve months as "Accounts receivable, net and other" and of twelve months or longer as non-current "Other assets" on our consolidated balance sheets. See "Note 4 - Commitments and Contingencies."
(3) Our equity investment in Rivian had a fair value of $15.6 billion and $5.2 billion as of December 31, 2021 and September 30, 2022, respectively. The investment was subject to regulatory sales restrictions resulting in a discount for lack of marketability of approximately $800 million as of December 31, 2021, which expired in Q1 2022.

Generating data for downstream ingestion

The Textractor library also helps you simplify the ingestion of table data into downstream systems or other workflows. For example, you can export the extracted table data into a human-readable Microsoft Excel file. At the time of this writing, this is the only format that supports merged tables.

table[0].to_excel(filepath="sec_filing.xlsx")

We can also convert it to a Pandas DataFrame. DataFrame is a popular choice for data manipulation, analysis, and visualization in programming languages such as Python and R. In Python, DataFrame is a primary data structure in the Pandas library. It's flexible and powerful, and is often the first choice of data analysis professionals for various data analysis and ML tasks. The following code snippet shows how to convert the extracted table information into a DataFrame with a single line of code:

df = table[0].to_pandas()
df

Lastly, we can convert the table data into a CSV file. CSV files are often used to ingest data into relational databases or data warehouses. See the following code:

table[0].to_csv()

',0,1,2,3,4,5\n0,,"December 31, 2021",,September,"30, 2022",\n1,,Total Estimated Fair Value,Cost or Amortized Cost,Gross Unrealized Gains,Gross Unrealized Losses,Total Estimated Fair Value\n2,Cash,"$ 10,942","$ 10,720",$ -,$ -,"$ 10,720"\n3,Level 1 securities:,,,,,\n4,Money market funds,"20,312","16,697",-,-,"16,697"\n5,Equity securities (1)(3),"1,646",,,,"5,988"\n6,Level 2 securities:,,,,,\n7,Foreign government and agency securities,181,141,-,(2),139\n8,U.S. government and agency securities,"4,300","2,301",-,(169),"2,132"\n9,Corporate debt securities,"35,764","20,229",-,(799),"19,430"\n10,Asset-backed securities,"6,738","3,578",-,(191),"3,387"\n11,Other fixed income securities,686,403,-,(22),381\n12,Equity securities (1)(3),"15,740",,,,19\n13,,"$ 96,309","$ 54,069",$ -,"$ (1,183)","$ 58,893"\n14,"Less: Restricted cash, cash equivalents, and marketable securities (2)",(260),,,,(231)\n15,"Total cash, cash equivalents, and marketable securities","$ 96,049",,,,"$ 58,662"\n'
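The preceding snippets cover titles and footers; section titles and summary cells can be pulled out in the same spirit. The following is a hedged sketch that scans the raw AnalyzeDocument JSON for all four new table elements. It assumes only the block and entity type names introduced earlier, and it checks both the BlockType and the EntityTypes fields since the new elements may surface either way; the exact response shape may differ from what Textractor handles internally:

import boto3

textract = boto3.client("textract", region_name="us-east-1")
with open("sec_filing.png", "rb") as f:
    response = textract.analyze_document(
        Document={"Bytes": f.read()},
        FeatureTypes=["TABLES"],
    )

# The four new table elements introduced by this enhancement.
new_elements = {"TABLE_TITLE", "TABLE_FOOTER", "TABLE_SECTION_TITLE", "TABLE_SUMMARY"}
for block in response["Blocks"]:
    kinds = {block["BlockType"], *block.get("EntityTypes", [])}
    matched = kinds & new_elements
    if matched:
        print(sorted(matched), block.get("Confidence"))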

Conclusion

The introduction of these new block and entity types (TABLE_TITLE, TABLE_FOOTER, STRUCTURED_TABLE, SEMI_STRUCTURED_TABLE, TABLE_SECTION_TITLE, and TABLE_SUMMARY) marks a significant advancement in the extraction of tabular structures from documents with Amazon Textract. These tools provide a more nuanced and flexible approach, catering to both structured and semi-structured tables and making sure that no important data is overlooked, regardless of its location in a document. This means we can now handle diverse data types and table structures with enhanced efficiency and accuracy. As we continue to embrace the power of automation in document processing workflows, these enhancements will no doubt pave the way for more streamlined workflows, higher productivity, and more insightful data analysis. For more information on AnalyzeDocument and the Tables feature, refer to AnalyzeDocument.

About the authors

Raj Pathak is a Senior Solutions Architect and Technologist specializing in Financial Services (Insurance, Banking, Capital Markets) and Machine Learning. He specializes in Natural Language Processing (NLP), Large Language Models (LLMs), and Machine Learning infrastructure and operations projects (MLOps).

Anjan Biswas is a Senior AI Services Solutions Architect with a focus on AI/ML and Data Analytics. Anjan is part of the worldwide AI services team and works with customers to help them understand and develop solutions to business problems with AI and ML. Anjan has over 14 years of experience working with global supply chain, manufacturing, and retail organizations and is actively helping customers get started and scale on AWS AI services.

Lalita Reddi is a Senior Technical Product Manager with the Amazon Textract team. She is focused on building machine learning-based services for AWS customers. In her spare time, Lalita likes to play board games and go on hikes."
AppsFlyer Amazon EKS Case Study _ Advertising _ AWS.txt,"AppsFlyer Runs Near-Real-Time, Ultra-Low Latency, High-Throughput Workloads at Scale Using Amazon EKS (2023)

Discover how mobile and measurement attribution company AppsFlyer is running high-throughput advertising workloads in the cloud using Amazon EKS, reducing latency by 30–90 percent.

AppsFlyer sought to simplify its offering on AWS. "We wanted to decrease the tooling and overall management and centralize our infrastructure," says Victor Gershkovich, data platform team lead, real-time infrastructure at AppsFlyer. "Amazon EKS gives us the ability to do so with all the needed elements to run and control the Kubernetes cluster and use its services. We can deploy the application, control its lifecycle, and develop controllers and operators that fit our needs." AppsFlyer also enjoys maximum resource efficiency. Using AWS Graviton processors, the company can choose different CPU and storage types based on its needs. In fact, AppsFlyer has reduced its costs by an average of 65 percent thanks to this flexibility. "We improved performance, reduced costs, and did not harm our offering for our customers," says Gershkovich. "We only improved it."

About AppsFlyer

AppsFlyer is a mobile and measurement attribution company that helps its customers measure user activities across channels.
Using its cloud-based solution, customers can access detailed analytics and make decisions that guide their campaign efforts.

Industry Challenge

Running billions of workloads a day is no simple task. Traditional databases involve several moving parts, from continuous integration and continuous deployment pipelines to domain name services. As a result, day-to-day operations can become complex and time consuming; developers often need to focus their efforts on managing the infrastructure rather than developing new features and capabilities.

AppsFlyer's Solution

AppsFlyer saw an opportunity to optimize its advertising workloads and run them at scale on Amazon Web Services (AWS). The company migrated to a scalable, cloud-native architecture based on Amazon Elastic Kubernetes Service (Amazon EKS), a managed Kubernetes service to run Kubernetes in the AWS Cloud and on-premises data centers. AppsFlyer runs over 1,000 microservices every day on Amazon EKS using Kafka clusters, with each cluster bound to specific business logic. This architecture is also powered by AWS Graviton processors, which deliver optimal price performance for cloud workloads running in Amazon Elastic Compute Cloud (Amazon EC2), a broad and deep compute platform.

Benefits of Using AWS

Daily, AppsFlyer's ingress and internal service communication generates around eight hundred billion events. At peak hours, this traffic exceeds 12 million events per second. By adopting a scalable architecture based on Amazon EKS, AppsFlyer can scale infrastructure up and down based on load, paying for only what is used. The company has reduced latency by 30–90 percent, depending on the workload. It now performs version upgrades, configurations, and many other tasks in days or even hours instead of weeks. By binding each Kafka cluster to a different piece of business logic, the company avoids a single point of failure and can tune each cluster for an optimal cost-performance ratio. Using this architecture, AppsFlyer has improved its performance and stability while reducing security risks."
Arm Case Study.txt,"Arm Accelerates Innovation with Compute Solutions on AWS (2020)
About Arm

Arm is a leading technology provider of silicon intellectual property (IP) for intelligent systems-on-chip that power billions of devices. Based in Cambridge, United Kingdom, Arm designs and manufactures silicon IP for intelligent systems-on-chip; its processors have enabled intelligent computing in more than 190 billion chips, powering products from sensors to smartphones to supercomputers. Arm creates IP used by technology partners to develop integrated semiconductor circuits, and the company estimates that 70 percent of the world's population uses its technology in their smart devices and electronics.

Moving EDA Workloads to the AWS Cloud

For many years, Arm relied on an on-premises environment to support electronic design automation (EDA) workloads, resulting in forecast challenges on compute capacity. "The nature of our Physical Design Group business demands a high-dynamic compute environment, and the flexibility to make changes on short notice," says Philippe Moyer, vice president of design enablement for the Arm Physical Design Group. "In the past, the on-premises compute was sometimes sitting idle until the need arose, which is why the scalability and agility of the cloud is a good solution for our business." Arm was also looking for agility improvements to keep development on schedule. "With our on-premises environment, our data center was constrained in terms of scalability, and deployment of additional compute capacity would typically take one month for approvals and at least three months to procure and install hardware," says Vicki Mitchell, vice president of systems engineering for Arm. "We have aggressive deadlines, and waiting that long could make or break a project for us."

To gain the agility and scalability it needed, in 2017 Arm chose to move part of its EDA workload to Amazon Web Services (AWS). "Selecting AWS made sense to us. AWS is a market leader, and it really understands the semiconductor space," says Mitchell. "We were also very impressed with the EDA knowledge of the AWS solution architects we worked with." Initially, the Arm Physical Design Group ran its EDA workloads on Amazon Elastic Compute Cloud (Amazon EC2) Intel processor–based instances. It also used Amazon Simple Storage Service (Amazon S3), in combination with Amazon Elastic File System (Amazon EFS), for EDA data storage. When AWS announced the availability of Amazon EC2 A1 instances powered by Arm-based Graviton processors, the Arm Physical Design IP team began to run portions of its EDA workloads on A1 instances. "Taking advantage of Graviton instances gives us the opportunity to contribute to the development of the EDA ecosystem on Arm architecture," says Moyer. In addition, Arm uses Amazon EC2 Spot Instances, spare compute capacity available at up to 90 percent less than On-Demand prices, for all workloads.

Reducing Characterization Turnaround Time from Months to Weeks

By using AWS, the Arm Physical Design IP team can scale its EDA environment up or down quickly—from 5,000 cores to 30,000 cores—on demand. "This scalability and flexibility brought by AWS translates to a faster turnaround time," says Moyer. "Using AWS, our EDA workload characterization turnaround time was reduced from a few months to a few weeks."
Decreasing AWS Costs by 30%

Running its EDA workloads on Arm-based Graviton instances, Arm is lowering its AWS operational costs. "The Graviton processor family enables us to reduce the AWS costs for our logic characterization workload by 30 percent per physical core versus using Intel-powered instances for the same throughput," says Moyer.

Enabling Experimentation and Innovation

With the company's on-premises environment, Arm engineers sometimes had to wait for compute resources to begin working on projects. By using on-demand compute capacity, engineers are now free to innovate. "It's much easier for our engineers to prototype and experiment in the cloud," Mitchell says. "If they're trying to validate a piece of logic or create a new feature, they can take advantage of Amazon EC2 Spot Instances to submit a job and get instantaneous scheduling without disrupting the project flow. They can move faster as a result."

Arm now plans to use the next generation of Amazon EC2 Arm instances, powered by Graviton2 processors with 64-bit Arm Neoverse cores. "The Graviton2 offers even better performance and scalability and caters to a larger number of different EDA workloads," Moyer says. "We are looking forward to using these AWS processors for better performance and additional cost savings."

Benefits of AWS: Can scale the EDA environment quickly—from 5,000 cores to 30,000 cores—on demand; reduces characterization turnaround time from months to weeks; cuts logic characterization workload costs by 30% with Arm-based Graviton instances; enables experimentation and innovation for developers; gains flexibility to avoid the extra cost of approximate evaluation."
Arm Limited Case Study.txt,"Arm Accelerates Speed to Market by Migrating EDA Workflows to AWS Batch (2022)
Arm Limited (Arm) is a global leader in the development of licensable compute technology for semiconductor companies. As of February 2022, over 200 billion chips have been shipped that are based on Arm's architecture and manufactured by its partners over the last 3 decades. However, the company's on-premises data centers could not grow with the pace of engineering requirements, and in 2016, Arm decided it needed to make significant changes to achieve its projected growth target for the next 5–10 years. By migrating from on-premises data centers to Amazon Web Services (AWS), Arm created a scalable and reliable cloud-based solution for running EDA workloads.

About Arm Limited

Founded in 1990, Arm Limited is a semiconductor and software design company based in the United Kingdom. It designs energy-efficient CPU and GPU processors and system-on-a-chip infrastructure and software.

Modernizing Its Solution to Accommodate Future Growth

Arm wanted to modernize its engineering solution because its on-premises data centers didn't position the company for future growth. "We couldn't do any of the customization or optimization that we needed to do," says Zhifeng Yun, technical director at Arm. "We didn't have a sustainable plan to drive efficiency or to reduce the total cost of ownership given the growing engineering requirements." The company also wanted to advance its business intelligence and create a delivery engineering road map. In 2016, Arm evaluated different cloud providers and ultimately decided to use AWS. "We chose AWS because it has highly sophisticated infrastructure and services," says Yun. "It offers a lot in terms of the variety of instance types as well as the customer focus and support we need to get things moving more quickly."

Arm evaluated its internal workloads, weighing the technical difficulty of migrating each one against the benefits it would bring to the business. "Our number one concern is about the quality of the product, and number two is about the time to market," says Yun. "If we delay bringing our product to market, the impact to the entire industry could be huge. And that means a big cost not only in terms of revenue but also in terms of Arm's reputation." After its evaluation was complete, Arm decided to prioritize its most compute-heavy verification workloads for the migration. These workloads involve running millions of jobs—such as those that help verify the design of the CPU core—in parallel. Rather than using a lift-and-shift approach to the migration, Arm opted to modernize immediately to take advantage of cloud-native technology and managed services.
Scaling Up Verification Workloads to over 350,000 Virtual CPUs

The company built its solution around AWS Batch, which lets developers, scientists, and engineers easily and efficiently run hundreds of thousands of batch and machine learning computing jobs on AWS. Arm uses Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload. A core part of the company's solution is the use of Amazon EC2 Spot Instances, which let users take advantage of unused Amazon EC2 capacity at up to a 90% discount compared to On-Demand prices. Because Arm's EDA workloads have varying compute and memory requirements, Arm uses a variety of instance families and types. "Using AWS Batch facilitates selecting different instance types and mixing them together," says Yun. "That helps us to achieve the scalability that we need." Using the high scalability of AWS Batch, Arm can now run more than 53 million jobs per week and up to 9 million jobs per day. The company has scaled up to 350,000 virtual CPUs across more than 25,800 instances and is working on scaling up to 600,000, all using Spot Instances.

Using this solution, the company has optimized its compute costs, increased its engineering productivity, accelerated speed to market for its products, and enhanced its product quality. Arm's ability to select instance types to fit different jobs provides additional benefits. "Having the instance fit the job makes a huge difference in the usage of CPU and memory," says Yun. "If you have a limited selection of instance types and try to force the job to fit in, naturally, you'll have a lot of wasted resources." Because the company can use a large variety of Spot Instance types, Arm has been able to optimize its compute costs. "Using the AWS Graviton2 instance types provides 32 percent lower runtime for our simulation workloads," Yun says. "That performance is quite attractive in EDA workloads."
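To make the instance-mixing idea concrete, here is a minimal, hypothetical sketch of an AWS Batch managed Spot compute environment that mixes Graviton-based and x86 instance families. The names, subnet, and role ARNs are placeholders, not Arm's configuration:

import boto3

batch = boto3.client("batch", region_name="us-east-1")

batch.create_compute_environment(
    computeEnvironmentName="eda-verification-spot",
    type="MANAGED",
    computeResources={
        "type": "SPOT",
        # Favor Spot pools least likely to be interrupted.
        "allocationStrategy": "SPOT_CAPACITY_OPTIMIZED",
        "minvCpus": 0,
        "maxvCpus": 350000,
        # Mixing instance families lets Batch fit each job's CPU/memory shape.
        "instanceTypes": ["c6g", "m6g", "r6g", "c5", "m5", "r5"],
        "subnets": ["subnet-0123456789abcdef0"],
        "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole",
    },
    serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
)

With minvCpus set to 0, the environment scales down to nothing between job surges, which is one way a bursty EDA workload avoids paying for idle capacity.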
Completing the Migration for a Fully Modernized Solution

Another benefit of using AWS is improved productivity for Arm's engineering team. Before migrating to AWS, engineers had to submit jobs to a queue and wait for a resource to become available. Now, those verification jobs can be run with less waiting time, resulting in a much shorter turnaround time. This gives engineers more time to debug and tweak designs, if needed, meaning that products can be released on time or even earlier. "Because engineers can run as many necessary cycles as needed during the different design phases, we've been able to release product ahead of schedule, which doesn't happen often in the EDA industry," says Yun.

Arm is both a consumer of and a supplier to AWS. The company supplied intellectual property for AWS Graviton processors, which are designed by AWS to deliver the best price performance for cloud workloads running in Amazon EC2. Using CPUs based on the Arm Neoverse N1 processor to support the design and verification of future Arm chips is helping to drive Arm's business success thanks to the CPUs' delivery of higher performance at a lower cost. Using AWS is also helping Arm to achieve its sustainability goals. By continuing to migrate away from its on-site data center, optimizing its compute using Spot Instances, and taking advantage of the efficiencies of AWS Graviton processors, Arm is reducing its carbon footprint. The company has committed to being net-zero carbon certified by 2030.

Arm will continue evaluating and prioritizing its workloads for migration. "We've been successful in migrating the most compute-intensive workloads to AWS," says Yun. "But our goal was never limited to that." The company will continue scaling workloads and hopes to run the complete design-verification process on AWS. "Our choice of using AWS was driven by the business. It's driven by our understanding of the cloud," Yun says. "It's also driven by how we're able to use what AWS has already created so we can build on top of that." Arm also hopes that its success in migrating and modernizing its EDA workloads will inspire other companies to change the way that they run workloads. "I would like to think that our experience using AWS not only benefits Arm but also benefits the EDA industry as a whole," says Yun. "We want to demonstrate to the EDA industry not only the benefits of using AWS Graviton processors but also what a modernized cloud solution can do. Using AWS services has helped us realize the deep benefit of migrating to the cloud."

Benefits of AWS: Can run more than 53 million jobs per week; scaled up to 350,000 virtual CPUs; achieved 32% lower runtime for simulation workloads; optimized compute costs through managed services; decreased turnaround time for verification jobs; increased engineer productivity; accelerated speed to market for products; decreased carbon footprint."
Armitage Technologies case study.txt,"Armitage Technologies Uses Computer Vision Application at the Edge with AWS Panorama to Improve Crowd Management at Public Venues (2022)

IT solution provider Armitage Technologies Ltd. needed to design and deploy a computer vision application for public event organizers to capture and record the number of people gathered at events. Armitage Technologies Ltd. is a full-service IT company founded in Hong Kong in 1972, specializing in providing 21st-century solutions and building technologies. Armitage serves international clients from different industries and has delivered projects reliably and punctually over the years, including project development, IT support and maintenance, and AI solutions.

Armitage built the application using Amazon SageMaker and AWS Panorama, an AWS-managed edge computing device that brings computer vision to on-premises camera networks. With the computer vision application on AWS, organizers improved crowd control by accurately recording 10,000 people daily, reduced security forces by 30 percent, and ensured protection of video data. Armitage needed to quickly deploy the solution. However, connecting traditional on-premises camera management systems with existing IP cameras is a complicated, time-consuming process. Furthermore, streaming and processing on-premises video streams in the cloud often requires high network bandwidth and infrastructure provisioning. "To support our computer vision application, we needed reliable technology that's highly available, even during weather disruptions," says Norman Lam, head of innovation at Armitage Technologies Ltd.
Additionally, because of strict security requirements, Armitage needed to ensure video data remained in a local network while still being monitored remotely.

Opportunity: Helping a Public Organization Provide Crowd Control at Events

In late 2021, a public organization approached Armitage to help manage the number of people attending and leaving large public events. "Hong Kong has venue capacity limits because of COVID-19," Lam says. "We needed to develop a computer vision solution that connected seamlessly with IP cameras, so the organization can accurately record and control human traffic." An AWS Partner, Armitage leveraged AWS Cloud technology to support its computer vision solution. "AWS manages security concerns like external access and provides data encryption. Plus, AWS offers seamless scalability, which was key in supporting our expansion plans in the broader Asia-Pacific market," says Lam.

Benefits
• 50 percent – Halves the time to deploy the computer vision solution
• Highly available – Automated surveillance around the clock
• 30 percent – Reduction in the event security team
• Zero downtime – Reliably captures video streams despite weather disruptions
• Highly secure – Restricts access to video data

Quickly Deploying an AI/ML Solution with Scalability and Accuracy

Armitage implemented its computer vision application on AWS Panorama to count human and vehicle traffic at two large outdoor public events in August and September 2022. The solution connected the company's application with 10 IP cameras mounted at park entrances and exits, providing parallel multi-model, multi-stream support with one Panorama appliance. Armitage also used Amazon SageMaker to reduce costs and development time when training custom AI models to count traffic, and Amazon SageMaker Neo helps developers optimize machine learning models for inference on supported edge devices so they run faster with no loss in accuracy. By deploying multiple camera sources with an AI model and application in one appliance, Armitage implemented the computer vision solution in under two days, which is 50 percent faster than with an on-premises device. "It was very simple to deploy the solution, train the models, and connect to the IP cameras on site," Lam says. "Instead of having to purchase multiple devices to manage the cameras, we only needed one device to connect and manage the entire solution." Also, video inference at the edge does not require video to be streamed to the cloud, and only results without personal data are sent to AWS for analytics. "AWS Panorama provides accuracy, reliability, and security, which were the three elements we needed for our solution," says Lam.
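The pattern described above, inferring at the edge and sending only aggregate results to the cloud, can be sketched in a few lines. This is a hypothetical illustration rather than Armitage's application code: the people detector is a stub standing in for the on-device model, and only the count (no video or personal data) leaves the device, published here as an Amazon CloudWatch metric:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def run_people_detector(frame):
    # Stub standing in for the on-device computer vision model.
    return []

def process_frame(frame):
    # Inference happens locally; raw frames never leave the local network.
    detections = run_people_detector(frame)
    return len(detections)

def publish_count(count):
    # Only the aggregate count is sent to the cloud for analytics.
    cloudwatch.put_metric_data(
        Namespace="CrowdControl",
        MetricData=[{"MetricName": "PeopleCount", "Value": count, "Unit": "Count"}],
    )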
About Company

Armitage Technologies Ltd. (Armitage)—a Hong Kong-based technology services company founded in 1972—has delivered more than 10,000 IT projects to enterprises across Hong Kong and Mainland China. Increasingly, Armitage is focusing on emerging technologies such as the Internet of Things (IoT), machine learning, computer vision, and artificial intelligence (AI). As part of this strategy, the company specializes in providing computer vision solutions—AI-based applications that use digital images from cameras and deep learning models to identify and classify objects quickly and accurately.

Solution Overview

Armitage Technologies built a computer vision application with multiple machine learning models using AWS Panorama to process real-time video from IP cameras, accurately count crowd flow, and automatically encrypt data. Armitage uses AWS Panorama, a collection of machine learning (ML) devices and a software development kit (SDK) that brings computer vision to on-premises internet protocol (IP) cameras via the AWS Panorama Appliance. The appliance can run computer vision models on a local area network, which is key for organizations with bandwidth constraints and data residency requirements. AWS Panorama also carries an IP62 (international protection) rating to protect video capture from dust and water in outdoor environments. In addition, Armitage implemented AWS Identity and Access Management (IAM) for enhanced security. With AWS Panorama, Armitage processes video feeds at the edge to control where data is stored and makes highly accurate predictions from a single management interface. Additionally, the provider limits application access with local storage encryption.

Outcome

Using the Armitage computer vision solution with AWS Panorama, the organization accurately counted more than 10,000 people each day during the two public events, with personnel analyzing video feeds within one second. This aided the reporting of real-time human traffic numbers to organizers, who closed the entrance to the event location immediately upon reaching full capacity. "The public organization could easily comply with COVID-19 regulations on capacity restrictions because of the accuracy of our computer vision solution on AWS Panorama," says Lam. Furthermore, the public organization needed fewer event management employees, reducing its security team by 30 percent for both events. It also benefited from the reliability of the Armitage solution, which experienced no downtime throughout the two outdoor events, despite weather disruptions.

AWS Services Used: AWS Panorama, Amazon SageMaker, AWS Identity and Access Management (IAM), Amazon CloudWatch.
Next, Armitage plans to expand its computer vision solution on AWS Panorama to include transportation, logistics, and construction use cases. Lam concludes, "We're having conversations with potential customers and are confident we can expand our solution because of the scalability, reliability, and security of AWS.""
Armut Case Study.txt,"Armut Teknoloji Improves Customer Experience with Scalable Notification System Using AWS (2022)

Turkey-based Armut connects consumers with professionals offering a wide variety of services, including home improvement, tuition, home moving, and health and wellness. To manage jobs effectively, its service uses a matching algorithm and digital notifications. As the company grew, its legacy technology no longer met its needs, so Armut developed a new system using AWS, which increased notification reliability and the volume of jobs it could handle. The business is now generating more income and supporting a greater number of customers without any additional staff. The company runs most of its infrastructure on AWS, including the machine learning services that power its matching algorithm, which links customers and professionals through the Armut website and mobile app.
Armut uses Amazon MQ, a managed message broker service, to automatically resend failed notifications or reroute them to other channels. “With this setup, whenever a notification channel fails, it falls back to another channel, so all of the messages are delivered,” says Ozgen. With the old approach, local mobile network operators caused bottlenecks that prevented the timely delivery of the messages. Armut is a major local services marketplace in Turkey. It operates in seven other EMEA countries under the HomeRun brand. The company helps consumers find and arrange a wide variety of services including home improvement, lessons, moving, and health and wellness. Get Started After the customer and professional are matched, Armut’s platform provides a way for service providers to send quotes to the customer, and for both parties to agree to the work being carried out. The system then manages the entire workflow, through to job completion. Armut uses Amazon SageMaker, which helps data scientists and developers prepare, build, train, and deploy high-quality machine learning models. It also uses Amazon Kinesis Data Streams to easily stream data at any scale, and Amazon Managed Streaming for Apache Kafka to securely stream data with a fully managed, highly available Apache Kafka service. AWS Services Used Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers on AWS. 中文 (繁體) Bahasa Indonesia Amazon SNS Ρусский عربي Learn more » 中文 (简体) Armut—which also operates under the HomeRun brand—aims to offer the best experience possible for both service providers and customers using the latest technologies available.  Delivering 1,000 Emails a Second and 1.5 Million a Day Using Machine Learning to Match Consumers and Professionals AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.   Thanks to the use of AWS best practices, Armut also handles the millions of requests sent each month more efficiently. Around 20 million push notifications and 3 million SMS notifications are sent a month, with this expected to grow as more customers use the service. The ability to send more notifications also has a direct impact on income, as Armut charges professionals to provide quotes. Managed millions of notifications and requests, and thousands of emails Amazon MQ Türkçe Armut is looking to use AWS machine learning to analyze customer behavior to determine the most effective channels for reaching consumers and professionals. For example, if data shows that customers don’t regularly check emails, it could send notifications through SMS instead. English Supported customer growth and international expansion Building on Success Amazon Simple Notification Service (Amazon SNS) is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication. Armut Teknoloji Improves Customer Experience with Scalable Notification System Using AWS However, as the company grew by more than 1,000 percent over the last 5 years, its existing notification system no longer met its needs, with notifications often failing. It also didn’t scale well, while its day-to-day maintenance requirements were becoming challenging and time-consuming for the IT team. 
Improving Notifications with Serverless Technology

The notifications sent to customers and professionals via email, SMS, or push notifications are central to the customer experience. They communicate the various steps needed for the work to be completed, such as confirming the job and setting up a time. They also notify customers if a professional arrives late, a job is cancelled, or payments are due. Armut decided to develop a new notification infrastructure built using Amazon Web Services (AWS) that could offer better performance and scale as its customer base expanded.

Armut developed and implemented its new notification system in just 6 months using AWS Lambda, a serverless, event-driven compute service that lets it run code without thinking about servers or clusters. During the design and implementation phase, no customer data or service requests were lost—a key goal for Armut. "We were able to facilitate reliable communications with our customers throughout this transition," says Ozgen. "It's so important that they always know we're here for them, helping them to take care of their to-do list." Accurate notification tracking was a key benefit of the new system. "Traceability was the primary concern for this project," says Ozgen. "Previously, we didn't have this much visibility into our notification system."

Delivering 1,000 Emails a Second and 1.5 Million a Day

Armut can now send many more notifications in a given time period than it could previously—in one trial, the system sent 1,000 emails per second. It delivers more than 1.5 million emails a day over Amazon SES. Thanks to the use of AWS best practices, Armut also handles the millions of requests sent each month more efficiently. Around 20 million push notifications and 3 million SMS notifications are sent a month, and this is expected to grow as more customers use the service. The ability to send more notifications also has a direct impact on income, as Armut charges professionals to provide quotes.

Benefits: Managed millions of notifications and requests, and thousands of emails; rerouted messages as push notifications or emails if SMS requests failed; improved accuracy of the request matching process; supported customer growth and international expansion.

Building on Success

Armut is looking to use AWS machine learning to analyze customer behavior to determine the most effective channels for reaching consumers and professionals. For example, if data shows that customers don't regularly check emails, it could send notifications through SMS instead. The company also plans to implement the notification system for other brands to support growth. "We're launching in new countries and many of our internal services are going to use the notification system," says Ozgen. "Using AWS, we can grow with confidence.""
Auto-labeling module for deep learning-based Advanced Driver Assistance Systems on AWS _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Auto-labeling module for deep learning-based Advanced Driver Assistance Systems on AWS
by Gopi Krishnamurthy and Shreyas Subramanian | on 03 JUL 2023 | in Amazon SageMaker, Amazon SageMaker Ground Truth, Artificial Intelligence, Intermediate (200)

In computer vision (CV), adding tags to identify objects of interest or bounding boxes to locate the objects is called labeling. It's one of the prerequisite tasks in preparing training data to train a deep learning model. Hundreds of thousands of work hours are spent generating high-quality labels from images and videos for various CV use cases. You can use Amazon SageMaker Data Labeling in two ways to create these labels:

Amazon SageMaker Ground Truth Plus – This service provides an expert workforce that is trained on ML tasks and can help meet your data security, privacy, and compliance requirements. You upload your data, and the Ground Truth Plus team creates and manages data labeling workflows and the workforce on your behalf.

Amazon SageMaker Ground Truth – Alternatively, you can manage your own data labeling workflows and workforce to label data.
Specifically, for deep learning-based autonomous vehicle (AV) and Advanced Driver Assistance Systems (ADAS), there is a need to label complex multi-modal data from scratch, including synchronized LiDAR, RADAR, and multi-camera streams. For example, the following figure shows a 3D bounding box around a car in the Point Cloud view for LiDAR data, aligned orthogonal LiDAR views on the side, and seven different camera streams with projected labels of the bounding box. AV/ADAS teams need to label several thousand frames from scratch, and rely on techniques like label consolidation, automatic calibration, frame selection, frame sequence interpolation, and active learning to get a single labeled dataset. Ground Truth supports these features. For a full list of features, refer to Amazon SageMaker Data Labeling Features. However, it can be challenging, expensive, and time-consuming to label tens of thousands of miles of recorded video and LiDAR data for companies that are in the business of creating AV/ADAS systems. One technique used to solve this problem today is auto-labeling, which is highlighted in the following diagram for a modular functions design for ADAS on AWS. In this post, we demonstrate how to use SageMaker features such as Amazon SageMaker JumpStart models and asynchronous inference capabilities along with Ground Truth's functionality to perform auto-labeling.

Auto-labeling overview
Auto-labeling (sometimes referred to as pre-labeling) occurs before or alongside manual labeling tasks. In this module, the best-so-far model trained for a particular task (for example, pedestrian detection or lane segmentation) is used to generate high-quality labels. Manual labelers simply verify or adjust the automatically created labels from the resulting dataset. This is easier, faster, and cheaper than labeling these large datasets from scratch. Downstream modules such as the training or validation modules can use these labels as is.

Active learning is another concept that is closely related to auto-labeling. It's a machine learning (ML) technique that identifies data that should be labeled by your workers. Ground Truth's automated data labeling functionality is an example of active learning. When Ground Truth starts an automated data labeling job, it selects a random sample of input data objects and sends them to human workers. When the labeled data is returned, it's used to create a training set and a validation set. Ground Truth uses these datasets to train and validate the model used for auto-labeling. Ground Truth then runs a batch transform job to generate labels for unlabeled data, along with confidence scores for new data. Labeled data with low confidence scores is sent to human labelers. This process of training, validating, and running batch transforms is repeated until the full dataset is labeled.

In contrast, auto-labeling assumes that a high-quality, pre-trained model exists (either privately within the company, or publicly in a hub). This model is used to generate labels that can be trusted and used for downstream tasks such as label verification tasks, training, or simulation. In the case of AV/ADAS systems, this pre-trained model is deployed onto the car at the edge, and can also be used within large-scale batch inference jobs in the cloud to generate high-quality labels. JumpStart provides pretrained, open-source models for a wide range of problem types to help you get started with machine learning. You can use JumpStart to share models within your organization. Let's get started!
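To make the routing step concrete, here is a minimal, illustrative sketch of the confidence-based split described above (the threshold and record shape are hypothetical, not values from Ground Truth):

CONFIDENCE_THRESHOLD = 0.8  # hypothetical cutoff for trusting an auto-generated label

def route_predictions(predictions):
    """Split model output into auto-accepted labels and items for human review.

    `predictions` is an iterable of (item_id, label, confidence) tuples, standing
    in for the batch-inference output described above.
    """
    auto_labeled, needs_review = [], []
    for item_id, label, confidence in predictions:
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_labeled.append((item_id, label))   # trusted as-is downstream
        else:
            needs_review.append((item_id, label))   # sent to human labelers
    return auto_labeled, needs_review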
Solution overview
For this post, we outline the major steps without going over every cell in our example notebook. To follow along or try it on your own, you can run the Jupyter notebook in Amazon SageMaker Studio. The following diagram provides a solution overview.

Set up the role and session
For this example, we used a Data Science 3.0 kernel in Studio on an ml.m5.large instance type. First, we do some basic imports and set up the role and session for use later in the notebook:

import sagemaker, boto3, json
from sagemaker import get_execution_role
from utils import *

aws_role = get_execution_role()   # assumed setup; the notebook defines the role and session
sess = sagemaker.Session()

Create your model using SageMaker
In this step, we create a model for the auto-labeling task. You can choose from three options to create a model:

Create a model from JumpStart – With JumpStart, we can perform inference on the pre-trained model, even without fine-tuning it first on a new dataset.
Use a model shared via JumpStart with your team or organization – You can use this option if you want to use a model developed by one of the teams within your organization.
Use an existing endpoint – You can use this option if you have an existing model already deployed in your account (see the short sketch after the model-creation code below).

To use the first option, we select a model from JumpStart (here, we use mxnet-is-mask-rcnn-fpn-resnet101-v1d-coco). A list of models is available in the models_manifest.json file provided by JumpStart. We use this JumpStart model that is publicly available and trained on the instance segmentation task, but you are free to use a private model as well. In the following code, we use image_uris, model_uris, and script_uris to retrieve the right parameter values to use this MXNet model in the sagemaker.model.Model API to create the model:

from sagemaker import image_uris, model_uris, script_uris, hyperparameters
from sagemaker.model import Model
from sagemaker.predictor import Predictor
from sagemaker.utils import name_from_base

model_id, model_version = "mxnet-is-mask-rcnn-fpn-resnet101-v1d-coco", "*"  # assumed assignment; the post names this model

endpoint_name = name_from_base(f"jumpstart-example-infer-{model_id}")
inference_instance_type = "ml.p3.2xlarge"

# Retrieve the inference docker container uri
deploy_image_uri = image_uris.retrieve(
    region=None,
    framework=None,  # automatically inferred from model_id
    image_scope="inference",
    model_id=model_id,
    model_version=model_version,
    instance_type=inference_instance_type,
)

# Retrieve the inference script uri. This includes scripts for model loading, inference handling, and so on.
deploy_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="inference"
)

# Retrieve the base model uri
base_model_uri = model_uris.retrieve(
    model_id=model_id, model_version=model_version, model_scope="inference"
)

# Create the SageMaker model instance
model = Model(
    image_uri=deploy_image_uri,
    source_dir=deploy_source_uri,
    model_data=base_model_uri,
    entry_point="inference.py",  # entry point file in source_dir and present in deploy_source_uri
    role=aws_role,
    predictor_cls=Predictor,
    name=endpoint_name,
)
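For the third option, the excerpt doesn't include a snippet; a minimal sketch, assuming a model that is already deployed in the account (the endpoint name is hypothetical):

from sagemaker.predictor import Predictor

# Attach to an existing SageMaker endpoint instead of creating a new model.
existing_predictor = Predictor(endpoint_name="my-existing-adas-endpoint")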
Set up asynchronous inference and scaling
Here we set up an asynchronous inference config before deploying the model. We chose asynchronous inference because it can handle large payload sizes and can meet near-real-time latency requirements. In addition, you can configure the endpoint to auto scale and apply a scaling policy to set the instance count to zero when there are no requests to process. In the following code, we set max_concurrent_invocations_per_instance to 4. We also set up auto scaling such that the endpoint scales up when needed and scales down to zero after the auto-labeling job is complete.

from sagemaker.async_inference.async_inference_config import AsyncInferenceConfig

async_config = AsyncInferenceConfig(
    output_path=f"s3://{sess.default_bucket()}/asyncinference/output",
    max_concurrent_invocations_per_instance=4,
)

# ... (elided in the excerpt: deploying the endpoint and registering it as a
# scalable target with Application Auto Scaling, which must happen before a
# scaling policy can be attached)
client = boto3.client("application-autoscaling")  # assumed client definition
resource_id = f"endpoint/{endpoint_name}/variant/AllTraffic"  # variant name assumed

response = client.put_scaling_policy(
    PolicyName="Invocations-ScalingPolicy",
    ServiceNamespace="sagemaker",  # the namespace of the AWS service that provides the resource
    ResourceId=resource_id,  # format: endpoint/<endpoint-name>/variant/<variant-name>
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",  # SageMaker supports only instance count
    PolicyType="TargetTrackingScaling",  # 'StepScaling'|'TargetTrackingScaling'
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 5.0,  # target value for the metric, here ApproximateBacklogSizePerInstance
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateBacklogSizePerInstance",
            "Namespace": "AWS/SageMaker",
            "Dimensions": [{"Name": "EndpointName", "Value": endpoint_name}],
            "Statistic": "Average",
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300,
    },
)

Download data and perform inference
We use the Ford Multi-AV Seasonal dataset from the AWS Open Data Catalog. First, we download and prepare the data for inference. We have provided preprocessing steps to process the dataset in the notebook; you can change them to process your own dataset. Then, using the SageMaker API, we can start the asynchronous inference job as follows:

import glob
import time

max_images = 10
input_locations, output_locations = [], []

for i, file in enumerate(glob.glob("data/processedimages/*.png")):
    if i >= max_images:  # stop after max_images files (check moved to loop top to fix an off-by-one)
        break
    input_1_s3_location = upload_image(sess, file, sess.default_bucket())
    input_locations.append(input_1_s3_location)
    async_response = base_model_predictor.predict_async(input_path=input_1_s3_location)
    output_locations.append(async_response.output_path)

This may take up to 30 minutes or more depending on how much data you have uploaded for asynchronous inference. You can visualize one of these inferences as follows:

plot_response('data/single.out')

Convert the asynchronous inference output to a Ground Truth input manifest
In this step, we create an input manifest for a bounding box verification job on Ground Truth. We upload the Ground Truth UI template and label categories file, and create the verification job. The notebook linked to this post uses a private workforce to perform the labeling; you can change this if you're using other types of workforces. For more details, refer to the full code in the notebook.
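The Ground Truth input manifest is a JSON Lines file with one object per image. As a rough sketch of the conversion (the label attribute names here are illustrative; the notebook defines the exact schema):

import json

# items: hypothetical records assembled from the async inference outputs above,
# each with the image's S3 URI and the model's predicted boxes.
items = [
    {"image_s3_uri": "s3://my-bucket/data/processedimages/img0.png",
     "boxes": [{"left": 10, "top": 20, "width": 50, "height": 80, "class_id": 0}]},
]

with open("verification.manifest", "w") as f:
    for item in items:
        line = {
            "source-ref": item["image_s3_uri"],  # Ground Truth locates the image via source-ref
            "auto-labels": {"annotations": item["boxes"]},  # illustrative label attribute
        }
        f.write(json.dumps(line) + "\n")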
Verify labels from the auto-labeling process in Ground Truth
In this step, we complete the verification by accessing the labeling portal. When you access the portal as a workforce member, you will be able to see the bounding boxes created by the JumpStart model and make adjustments as required. You can use this template to repeat auto-labeling with many task-specific models, potentially merge labels, and use the resulting labeled dataset in downstream tasks.

Clean up
In this step, we clean up by deleting the endpoint and the model created in previous steps:

# Delete the SageMaker model and endpoint
base_model_predictor.delete_model()
base_model_predictor.delete_endpoint()

Conclusion
In this post, we walked through an auto-labeling process involving JumpStart and asynchronous inference. We used the results of the auto-labeling process to convert and visualize labeled data on a real-world dataset. You can use the solution to perform auto-labeling with many task-specific models, potentially merge labels, and use the resulting labeled dataset in downstream tasks. You can also explore using tools like the Segment Anything Model for generating segment masks as part of the auto-labeling process. In future posts in this series, we will cover the perception module and segmentation. For more information on JumpStart and asynchronous inference, refer to SageMaker JumpStart and Asynchronous inference, respectively. We encourage you to reuse this content for use cases beyond AV/ADAS, and reach out to AWS for any help.

About the authors
Gopi Krishnamurthy is a Senior AI/ML Solutions Architect at Amazon Web Services based in New York City. He works with large automotive customers as their trusted advisor to transform their machine learning workloads and migrate to the cloud. His core interests include deep learning and serverless technologies. Outside of work, he likes to spend time with his family and explore a wide range of music.
Shreyas Subramanian is a Principal AI/ML Specialist Solutions Architect who helps customers solve their business challenges using machine learning on the AWS platform. Shreyas has a background in large-scale optimization and machine learning, and in the use of machine learning and reinforcement learning for accelerating optimization tasks.

AWS Startups Blog
AWS announces 21 startups selected for the AWS generative AI accelerator
by Kathryn Van Nuys | on 24 MAY 2023 | in Announcements, Generative AI, Startup

AWS is excited to announce the cohort of startups accepted into the global AWS Generative AI Accelerator. The program kicks off May 24th at our San Francisco AWS Startup Loft and closes on July 27th. Over the course of the 10-week program, participants will receive tailored technical advice, dedicated mentorship, an opportunity to pitch their demos to venture capitalists (VCs) in the AWS network, and up to $300,000 in AWS credits. Critically, they will also have the opportunity to foster lifelong connections with their fellow founders and within AWS. Our finalists come from various industries, backgrounds, and geographic regions, but they all have one thing in common: they are using generative artificial intelligence (AI) technology to drive unprecedented innovation in their space. They're exploring practical solutions to problems such as illiteracy and healthcare burnout and designing tools that drastically reduce time spent on costly, tedious tasks. No matter their vision, all of these startups are proving what's possible with generative AI and boldly reinventing applications, data touchpoints, and customer experiences, to name a few.

Backing the upcoming leaders of the generative AI landscape
Startups are the lifeblood of innovation, and AWS is eager to support them in developing incredible generative AI solutions. Many of the AWS Startups team are former founders or VCs, and we embrace this chance to give back to these startups in meaningful, actionable ways.
"Generative AI holds tremendous potential to revolutionize how humans interact with technology and with each other, while democratizing access to new and existing technology in a way that is unprecedented," says Jon Jones, vice president of compute and AI/ML services at AWS. "Customers are already seeing value in streamlining processes, accelerating product development, and using AI as a trusted companion to increase productivity and better serve their clients. We are excited to partner with these innovators on their journey to solve some of the world's biggest challenges."

Drumroll, please
Please join us in extending a warm welcome to the 21 AWS Generative AI Accelerator program finalists.

Education
Ello: Ello leverages large language models (LLMs) and AI solutions to perfectly tailor literacy lessons to each young student they reach. Through interactive reading sessions from real books, Ello becomes a motivational learning companion that transforms children into curious, enthusiastic readers.

Marketing, social, and advertising
Crate: On a mission to create an open internet with no boundaries, Crate invites users to curate a personal, shareable artifact made up of their favorite pieces from anywhere on the web. The team puts AI in the hands of users to help them tell better stories with auto-generated images, text, and instant summaries.
qlip: qlip is an AI-powered video highlights generator that helps users grow their social media presence by automatically repurposing long-form videos into short highlights primed for today's audiences.
OpenAds: OpenAds solves advertising challenges for publishers, consumers, and advertisers by identifying and suggesting ads that match a business's user experience (UX), are tailored to customer advertising and privacy preferences, and keep creative control in the hands of advertisers.

Entertainment and gaming
Leonardo Ai: Leonardo Ai is an AI-driven content production suite tailored for creators across diverse sectors, with a core focus on game development artists. Through the platform, developers can utilize generative AI solutions that integrate with their workflows to unlock their creativity and accelerate content production from months to minutes.
Storia: Built by leading AI researchers and engineers, Storia operates as a creative assistant for rapid film previsualization and production. Story producers can experiment with AI-generated videos, visualize what their product would look like shot in different styles, and build collaborative and comprehensive storyboards in minutes.
Krikey: Krikey uses generative AI to make it easier for creators to breathe life into animations, helping them automate character motion with a variety of 3D avatars, augmented reality (AR) gaming toolkits, and 3D animations. Animations can be seamlessly integrated and exported into the creator's platform of choice, significantly shortening production time and enhancing the creative process.
Poly: Poly is an AI-enabled infinite design asset marketplace (offering seamless physically based rendering [PBR] textures, illustrations, icons, sounds, and many more) that lets anyone use or generate stunning, 8K high-definition (HD) professional design assets in seconds with AI.
Flawless: To counteract rising on-set production costs and time constraints, Flawless gives artists a suite of cinematic-quality AI-powered tools that allow them to rapidly and affordably iterate, experiment, and refine their content.
Healthcare and life sciences
Knowtex: Knowtex empowers clinicians with voice-AI automated note-taking and coding from natural conversation, to combat burnout and allow focus on patient care.
Vevo: Vevo is building the world's first atlas of how drugs interact with patient cells in living organisms at single-cell resolution. Vevo's foundation models trained on this atlas faithfully capture disease biology, enabling generative design of drugs that are more likely to treat disease in humans.
Ordaōs: Ordaōs is a human-enabled, machine-driven drug design company. Their miniPRO proteins help drug hunters deliver treatments that are safer and more effective than those from traditional discovery methods.
Nosis Bio: Nosis Bio is enabling the future of targeted drug delivery by integrating deep expertise in generative AI and high-throughput biochemistry.

Finance
Theia Insights: Theia Insights leverages the power of AI to synthesize and distill financial data, generating real-time insights beyond human research capability, to inform the investment management community and help individual and institutional investors make better decisions.

Data and knowledge management
Unwrap: Powered by AI and ML, Unwrap analyzes data from multiple customer feedback channels at scale, providing companies with auto-labeling, semantic search, and automatic alerts that strengthen the feedback loop between companies and their customers.
Stack AI: Stack is a no-code interface that helps businesses of all sizes build and deploy AI applications, including chatbots, document processing, content creation, and automated customer support, in minutes.
Nixtla: Nixtla is building a state-of-the-art, disruptive open-source ecosystem that uses AI to unlock scalable, lightning-fast, and user-friendly time series forecasting and anomaly detection.
Wand: Wand enables businesses to sync data from multiple sources to rapidly build collaborative, measurable, and scalable AI solutions. From predictive models to customized LLMs, teams have the power to solve business problems and create value faster than ever before.
Griptape: Griptape's open-source framework and managed service enable developers to enhance LLMs with chain-of-thought capabilities, creating context-aware conversational, copilot, and autonomous agents.

AI ethics, safety, and security
Bunked: Bunked distinguishes AI-generated content from real content using blockchain technology.
Protopia AI: Protopia AI provides data protection and privacy-preserving AI/ML technologies that specialize in enabling AI algorithms and software platforms to operate without the need to access plain-text information. The company works with enterprises and generative AI/LLM providers to maintain ownership and confidentiality of enterprise data while using AI/ML solutions.

AWS is excited to act as a catalyst for these forward-thinking startups. We continue to build upon the legacy of our previous accelerator programs, such as the AWS Impact Accelerator, to provide founders with the resources, guidance, and networking opportunities they need to scale and succeed. In the same way AWS democratized the cloud by expanding access to industry-leading technology, we look forward to offering our scale, expertise, and relationships to the next generation of companies at the forefront of generative AI innovation.

TAGS: Accelerators

Kathryn Van Nuys
Kathryn Van Nuys is the Head of North America Startup Business Development at Amazon Web Services (AWS).
Kathryn spent the earlier part of her career in financial services, working in capital markets as well as sales and trading at Citigroup and Lehman Brothers. She later joined a number of early-stage startups, building their capital markets and partnership teams, before moving to AWS to scale her efforts in helping startups achieve growth.

INEOS TEAM UK Accelerates Boat Design for America's Cup Using AWS

About INEOS TEAM UK
Formed in 2018, INEOS TEAM UK aims to bring the America's Cup, the oldest international sporting trophy in the world, to Great Britain. Based in Portsmouth, INEOS TEAM UK is led and backed by Sir Jim Ratcliffe, the founder and chairman of INEOS, a global chemical producer. The team also includes Sir Ben Ainslie, a previous America's Cup winner, as principal and skipper, and four-time America's Cup winner Grant Simmer as CEO. INEOS TEAM UK will compete in the 36th edition of the America's Cup in 2021 and is using an HPC environment running on Amazon EC2 Spot Instances to help design its boat for the competition.

A Technology Boat Race
The 36th America's Cup race will be decided in Auckland, New Zealand, in 2021. Like all the teams, INEOS TEAM UK will compete in a boat whose design will have followed guidelines set by race organizers to ensure the crew's sailing skills are fully tested.
The America's Cup Dream
The aim of the restrictions, which limit on-water design trials too, is to control the cost of entering the race and to attract as many entrants as possible. Despite the restrictions, teams still have control over features such as the shape of the boat's monohull and foils, but with limited on-water testing, engineers must turn to computer-based simulations to optimize their designs. They depend on the computational power available to process thousands of simulations, exploring possible boat shapes and positions on the water. INEOS TEAM UK, for example, needs 2,000-3,000 computational fluid dynamics (CFD) simulations to design the dimensions of just a single boat foil.

To run these simulations using the team's on-premises high performance computing (HPC) resources could take more than a month. Nick Holroyd, head of design at INEOS TEAM UK, says, "With so many design decisions to make before the competition, a month was too long. It reduced the time our engineers had to consider the results, limiting the freedom they needed to be innovative and make the right choices."

On A Mission To Win
The team turned to Amazon Web Services (AWS) to migrate its CFD simulations to the AWS Cloud. The team chose AWS because of the scale of its HPC resources as well as its cost-effectiveness. INEOS TEAM UK could keep its costs low by using Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, which allow customers to access unused Amazon EC2 capacity.
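The case study doesn't include code, but requesting Spot capacity is a small API call. A minimal sketch with boto3, with illustrative values rather than INEOS TEAM UK's actual configuration:

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")

# Request compute-optimized capacity on the Spot market (AMI, instance type,
# and counts are hypothetical).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical CFD solver image
    InstanceType="c5n.18xlarge",       # C5n instances support EFA networking
    MinCount=1,
    MaxCount=100,                      # scale out for a batch of simulations
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
print(len(response["Instances"]), "Spot Instances launched")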
Greater HPC Scale, Lower Cost
To get the performance INEOS TEAM UK required, on budget, the team worked with AWS Solutions Architects and AWS Professional Services consultants, who helped design an HPC environment based on multiple Availability Zones in multiple regions and Amazon EC2 Spot Instances, which provided a 65 percent cost saving compared to on-demand capacity. For the hull, whose design needed hundreds of compute cores for every simulation, the team used Amazon EC2 C5 Instances in addition to the latest Amazon EC2 C5n Nitro-powered instances with Elastic Fabric Adapter (EFA) network interfaces. To ensure fast disk performance for the thousands of simulations completed each week, the team also used Amazon FSx for Lustre to provide a fast, scalable, and secure high-performance file system based on Amazon Simple Storage Service (Amazon S3).

"The speed combined with the low cost of the Amazon EC2 Spot Instances means we can do many thousands more simulations within our design budget," says Holroyd. "One question I constantly ask myself is whether we're spending our money wisely. Using AWS, I have no doubts because it massively compresses the computational turnaround, maximizing design time."

Driving Better Innovation
Using AWS, INEOS TEAM UK can process thousands of design simulations for its America's Cup boat in one week, versus more than a month using an on-premises environment. By running its CFD workloads on AWS, INEOS TEAM UK engineers have more time to innovate. They can wake up on a Monday morning with an idea and test it, knowing that by the end of the day they'll have a set of results to look at and build on. Holroyd says, "Heading towards a design deadline is always a frantic time. You have to make decisions fast. Using AWS, we have more time to think about what makes a design successful or not. We can then use this knowledge in our next design iteration. AWS allows us to take bigger design steps, simply because we have more time to understand our results."

Sir Ben Ainslie, skipper and team principal at INEOS TEAM UK, and Max Star, CFD engineer, explain how using an HPC environment on AWS helped the team design the INEOS TEAM UK boat.

Benefits of AWS
Gains large-scale HPC capacity
Supports thousands of simulations each week
Reduces HPC costs using Amazon EC2 Spot Instances
Enables engineers to be more innovative

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
Amazon FSx for Lustre makes it easy and cost effective to launch and run the world's most popular high-performance file system. Use it for workloads where speed matters, such as machine learning, high performance computing (HPC), video processing, and financial modeling.
AWS Professional Services: adopting the AWS Cloud can provide you with sustainable business advantages, and supplementing your team with specialized skills and experience can help you achieve those results.

Companies of all sizes across all industries are transforming their businesses every day using AWS. Explore our web hosting solutions and start your own AWS Cloud journey today.

StreamAMG Scores Record Viewership and Uninterrupted Delivery

About StreamAMG
StreamAMG enables organisations across sports, media and betting to deliver video content at scale and offer exceptional streaming experiences.

A Great Time to Score
Live sports streaming provider StreamAMG quickly realized early in the year that the 2020 English football calendar would be radically different from what had gone before. With COVID-19 disruption growing and matches played behind closed doors, the company began planning for a very different season, one where more users than ever would rely on its over-the-top (OTT) platforms to support their club, and clubs would increasingly rely on streamed matches as a revenue source. "We started working internally to formulate a plan which would deliver a technical solution that could scale above and beyond our requirements," says Andrew De Bono, StreamAMG's CTO. "Being in the live sports business, failure is not really an option at all. Even going down for 10 seconds is going to impact tens of thousands or hundreds of thousands of users simultaneously. Scale and resiliency were definitely the two most important elements for us."

A New Environment, a New Infrastructure
To achieve that, the teams undertook a comprehensive application transformation, replacing the most important components of the previous application with a cloud-native system that underpinned the load-bearing parts of StreamAMG's products with microservices and serverless technologies based on AWS. Services include Amazon API Gateway, AWS Lambda, Amazon CloudFront, Amazon DynamoDB, and Amazon ElastiCache for Memcached. StreamAMG also adopted Amazon Kinesis Data Firehose to collect and process actions and user activity in real time, and stream the data for storage later on.
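As an illustration of the ingestion side, a minimal sketch of writing a playback event to a Kinesis Data Firehose delivery stream with boto3 (the stream name and event fields are hypothetical, not StreamAMG's schema):

import json
import boto3

firehose = boto3.client("firehose", region_name="eu-west-1")

# Stream a playback event for later storage and analysis.
event = {"user_id": "u-123", "action": "play", "match_id": "m-456", "ts": 1598787000}
firehose.put_record(
    DeliveryStreamName="user-activity-stream",
    Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
)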
To support the unprecedented load the new season was likely to bring, StreamAMG began to reexamine its platform architecture to cope with the challenges ahead. The company needed a set-up agile enough to deal with the uncertainties of the new situation, while still managing potentially millions of hits per minute with zero failover. While the streaming part of the business had to operate with the highest levels of availability, the company also needed to ensure its user membership, payment, and entitlement management systems could easily handle the predicted jump in demand. And both elements needed to be able to scale to traffic levels that could be 400 to 500 percent of what the company might see in a normal season.

A project of similar scale and significance might be expected to take several months, even without the disruption caused by COVID-19. But working to a hard deadline of the new season kickoff, the project was delivered in just 12 weeks, thanks to the close collaboration between the AWS and StreamAMG teams.

To accommodate the uncertain demands of the season, the team wanted to create an infrastructure that could cope with the heaviest loads and still scale with demand. When the season began, they proved they had done just that: despite the massive spike in usage, StreamAMG delivered all matches with near zero downtime or interruption. The company delivered 2.9 million streams, watched by hundreds of thousands of fans, and an overall data uplift of 500 percent, all without a hitch and with no updates to the architecture needed. The flawless start to the season was greatly appreciated by StreamAMG's customers, according to De Bono, and raised the company's profile across the industry: "In the OTT industry reputation is key, and our ability to consistently deliver scalable and resilient platforms has afforded us such a dependable reputation," he says.

In the first minutes and hours of the season, the StreamAMG team was able to monitor how the system was dealing with the matches through Amazon CloudWatch, which provided visibility of both the platform and the traffic in real time, allowing the company to be fully aware and in control of the application.

Cost Optimization and Cost Savings
As well as coping with unexpected demand, the scalability of the new system made a significant difference to StreamAMG's cost optimization, raising its performance ceiling without raising running costs. Due to the nature of live sports, StreamAMG's system might receive only light usage most of the time, when no live matches are being played, and then see a huge spike in demand on matchday. The previous system had to be primed to deal with maximum usage 24/7, even though the company knew that 90 percent of the time that capacity wouldn't be required. That all changed with AWS. "We really are paying for every single user on our platform, nothing less and nothing more, so we really could align the cost with the actual usage, rather than taking on massive capex hits to support the increased capacity on our application," says De Bono.

Benefits of AWS
Scalability, elasticity, and cost savings
Agility and performance
Availability

AWS Services Used
Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics services. It can capture, transform, and deliver streaming data to Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, generic HTTP endpoints, and service providers like Datadog, New Relic, MongoDB, and Splunk.
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-region, multi-active, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications.
AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.

Companies of all sizes across all industries are transforming their businesses every day using AWS. Start your own AWS Cloud journey today.

Creditsafe Cuts Technology Administration Burden, Builds for Future Success Using AWS

About Creditsafe
Creditsafe, headquartered in Dublin, Ireland, with 23 offices across 13 countries worldwide, specializes in business credit checking.
Founded in Oslo, Norway, Creditsafe discovered early success providing data analysis to business customers. It has the biggest wholly owned database in the industry, containing insights on more than 320 million businesses; this data comes from over 70 different countries and is provided to over 200,000 subscribers globally. Creditsafe is one of the world's most-used providers of online business credit reports and, each month, it predicts more than 70 percent of all business insolvencies. Over more than two decades, the company gradually built large and complex on-premises systems, and it migrated to Amazon Web Services (AWS) to optimize how it works and to prepare for future success.

Eliminating Risks and Gaining Flexibility and Resilience on AWS
Creditsafe chose AWS as the platform for its data and all of its customer-facing services and products, such as business credit reports, international credit reports, and company monitoring. "The whole goal of moving to the cloud was to tick the three main boxes around scalability, reliability, and availability," says Brian McGeough, director of production at Creditsafe. "Cloud eliminates risks like storage area network failures and other things that could be catastrophic. We can focus on delivering to our customers, not maintaining an on-premises system."

Migrating its terabytes of data and related tools to AWS was just the beginning, though. The migration was an opportunity to improve the accessibility and sharing of data across many regions and countries, strategize, and plan for the future. "For us, this wasn't just lift and shift, but actually a way to improve our ways of working as an organization," says Ryland Marsh, director of technical engineering at Creditsafe. Working with AWS Partner Cognizant, Creditsafe identified its needs and worked out a timeline for the migration. Cognizant has years of experience with many migrations, meaning Creditsafe was able to implement real-world best practices. It began by migrating its UK data acquisition operations. Data from all Creditsafe's providers now natively feeds into Amazon Redshift, which uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes. The data then moves into the company's data vaults, where it is ready for use.
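The case study doesn't publish the pipeline code; as a rough sketch of the loading step, data staged in Amazon S3 can be copied into Redshift through the Redshift Data API (all identifiers below are hypothetical):

import boto3

redshift_data = boto3.client("redshift-data", region_name="eu-west-1")

# Load a provider's daily extract from S3 into a staging table.
redshift_data.execute_statement(
    ClusterIdentifier="creditsafe-dwh",
    Database="analytics",
    DbUser="loader",
    Sql="""
        COPY staging.company_data
        FROM 's3://provider-feeds/uk/2022-06-01/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        FORMAT AS PARQUET;
    """,
)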
A Quick and Successful Migration
As the business grew, Creditsafe's on-premises systems needed to expand to accommodate increasing amounts of data. The overall system was built piece by piece, with more and more resources going into keeping that setup running. "We wanted to put our efforts into our core business, not running servers," says Marsh. "Migrating to AWS was a great opportunity to plan a new, optimized approach." Improving the collection, storage, and analysis of data while the business remained operational was key.

For its migration to AWS, Creditsafe used the AWS Migration Acceleration Program (MAP), a comprehensive and proven cloud migration program based on AWS experience in migrating thousands of enterprise customers to the cloud. Enterprise migrations can be complex and time-consuming, but MAP can help organizations accelerate their cloud migration and modernization journeys with an outcome-driven methodology. Participation in MAP really did accelerate Creditsafe's migration. "Working with Cognizant, we were able to scale much more quickly," says McGeough.

The first phase of migration has seen about 20 percent of Creditsafe's systems successfully migrated. "We're happy with how it's going and we're now running parallel workstreams for other jurisdictions," says Marsh. "The knowledge gained from the first migration will make it easier and faster. Using the AWS Migration Acceleration Program has definitely been the right choice for us."

Rather than investing in building out and maintaining infrastructure, Creditsafe is reallocating staff and resources to expanding its data analysis skills, with plans to use artificial intelligence (AI) and machine learning (ML). "We found it very difficult to cross-reference data across regions and jurisdictions previously, because they were effectively independent systems and services," says Marsh. "Now we can get more value out of our data and focus on innovating. That's exciting."
"We have technical expertise in house, but using MAP let us plan and execute the migration with confidence, and Cognizant's experience helped us direct that where it needed to go and fill in any gaps," says McGeough.

Benefits of AWS
Eliminated burden of multiple on-premises servers
Increased reliability for terabytes of data with cloud storage
Improved transparency of data within the company
Achieved successful phase-one migration of UK data acquisition operations

AWS Services Used
Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.
AWS Glue is a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development.
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud.
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can store and protect any amount of data for virtually any use case, such as data lakes, cloud-native applications, and mobile apps.

Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today.

immowelt Modernizes Real Estate Portal, Controls Costs, and Boosts Innovation Using AWS

About immowelt Group
Since 1991, immowelt has run real estate portals that help German-speaking businesses and individuals find their dream property. The immowelt Group (IWG) has more than 500 employees and is headquartered in Nuremburg, Germany. The company is part of AVIV Group, one of the world's largest digital real estate tech companies, which is in turn part of German publishing giant Axel Springer SE. When its on-premises data centers threatened its ability to innovate and provide a responsive service to customers, immowelt turned to AWS, completing a successful lift-and-shift migration while simultaneously re-architecting its core infrastructure.

Migrating to AWS and Re-Architecting Core Systems
immowelt wanted to improve its development team's ability to create new features and solutions, as well as modernize the organization's infrastructure to support business growth. To do this, it needed to make its systems more reliable. "We wanted a world where we didn't have to think about maintaining the underlying hardware and its limitations while working on scale," says Cemal Acar, group leader of DevOps and infrastructure at immowelt. "We wanted to focus on innovation, expanding the business, and reducing time to market."

immowelt's infrastructure consisted of two data centers, some of which regularly suffered outages, so that customers could not access the immowelt real estate portals. The existing applications were complex, and changes to code or systems in one area often caused problems or failures elsewhere. Keeping the system up and running required significant time and specialized skills held by only a few team members, which left the business vulnerable in the event of employees leaving their roles. The company was finding its existing IT estate expensive and cumbersome to maintain, hindering the business and its ability to innovate, so it looked to modernize its infrastructure using Amazon Web Services (AWS). It ran multiple projects simultaneously, with one stream focused on re-architecting workloads that were hosted on premises or already migrated to the cloud, and another aimed at migrating its remaining workloads to AWS as a straight lift-and-shift project.

Re-architecting workloads during a lift-and-shift migration is a major undertaking, but the leadership and technical teams believed that the long-term benefits outweighed any risks. "It would have been too expensive to move our whole setup to AWS in its previous state," says Acar.
"By lifting and shifting some legacy systems, while re-architecting others, we had an opportunity to create a platform to support further modernization in the future," says Acar.

immowelt received funding and expertise from AWS throughout the migration. The immowelt team used the AWS Migration Acceleration Program (MAP), which provides companies with guidance and helps identify gaps in skills ahead of migration. The program also awards credits and assesses how prepared the wider organization is for change through a Migration Readiness Assessment, which covers people and organizational design aspects as well as technology. By using AWS Well-Architected reviews, immowelt received support from AWS Solutions Architects on a regular basis. "Through the AWS Migration Acceleration Program, we could progress faster with the changes we wanted. It also helped with our expenses," says Acar.

Responsive Support Eases a Complex Project
When the migration team ran into challenges, it turned to AWS for assistance. AWS Professional Services provided advice on architectural issues for immowelt, while AWS Enterprise Support responded quickly to urgent issues. "We'd open a ticket and our AWS support team would help us to resolve the problem through online chats or phone conversations," says Acar. "It would get more hands-on, too, if we needed it. We appreciated the high levels of responsiveness, professionalism, and expertise."

Faster Innovation with Updates Several Times a Day
With the help of AWS, immowelt's development teams were able to enhance the "you build it, you run it" approach, allowing them to roll out new features and fixes more frequently and quickly. The company publishes updates several times a day, compared to every 2-4 weeks on the on-premises architecture. This means customers on its real estate portals experience a responsive service with the latest capabilities. "It's another world for us now, and for our customers," says Acar.

Since the migration, the teams have increased their use of APIs and infrastructure as code from 50 percent to 99 percent. This makes it easier to reuse development work and gives developers more time for innovation. Engineers are empowered to take ownership of their work, too, with opportunities to gain new cloud skills that boost their motivation and productivity.
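The case study doesn't say which infrastructure-as-code tooling immowelt uses; purely as an illustration of the approach, here is a minimal AWS CDK stack in Python that defines a Lambda function (all names are hypothetical):

from aws_cdk import App, Stack
from aws_cdk import aws_lambda as _lambda
from constructs import Construct

class PortalServiceStack(Stack):
    """A tiny, repeatable unit of infrastructure defined entirely in code."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Hypothetical function; the handler code lives in the lambda/ directory.
        _lambda.Function(
            self, "ListingHandler",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="app.handler",
            code=_lambda.Code.from_asset("lambda/"),
        )

app = App()
PortalServiceStack(app, "PortalServiceStack")
app.synth()  # emits a CloudFormation template for deployment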
Greater Visibility and Cost Control
immowelt business and IT leaders now have greater visibility of IT costs, which makes budgeting and planning easier and more effective. Previously, the complexity of the systems made it difficult to track where budget was being spent. The company has also cut the cost of IT maintenance and hardware purchases compared to its on-premises systems. And, by being all-in on AWS, immowelt now has access to a wide array of services that it can deploy easily as the business evolves. "We benefit from AWS expertise and the possibilities it gives us as we move forward," says Acar. "Our AWS team is like an additional department supporting the business." Using AWS, immowelt has achieved greater visibility of IT costs, lowered maintenance overheads, and created a more efficient, flexible development process for future growth and innovation.

Benefits of AWS
Gains visibility of IT costs and lowers maintenance overheads
Improves availability of web portals, resulting in better customer experiences
Increases frequency of software release cycles to several times a day

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources.

Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Learn more about how you can achieve your cloud goals faster and more reliably with our unmatched migration experience and solutions.

Kepler Provides Effective Observation of Elderly Care Home Residents Using AWS

About Kepler Vision Technologies
Kepler Vision Technologies, based in the Netherlands, uses computer vision and deep learning to assist caregivers in looking after the elderly in care homes. The company is a deep learning startup founded in 2018. Its Kepler Night Nurse software, a monitoring solution built as a hybrid system on Amazon Web Services (AWS) with edge devices managed by Amazon Elastic Container Service (Amazon ECS), uses artificial intelligence (AI) to look after the well-being and safety of elderly residents in their rooms using automated video analysis. When the fully automatic video analysis detects that a resident has fallen or needs attention, it sends a text message to care home staff within 30 seconds. Using AWS, Kepler can easily scale to accommodate demand and has increased the speed of connecting new sensor devices from 50 to 500 a week. The company also improved its development speed by reducing the time required for neural network training on Amazon Elastic Compute Cloud (Amazon EC2) from several weeks to just a few hours.

Addressing the Challenges in Caring for the Elderly
The world's population is aging. By 2030, it's estimated that 1 in 6 people will be aged 60 and over, and a predicted shortfall of global healthcare workers will reach 18 million. Kepler Vision Technologies' solutions address the challenges that these trends present in caring for the elderly. The Kepler Night Nurse software analyzes videos from care home rooms and automatically alerts staff to any issues so they can ensure the well-being and safety of residents.
Supporting Care Homes Looking After Elderly Residents Using AWS
Since its launch, Kepler has worked with AWS to develop its hybrid solution. AWS Activate, a program that offers startups no-cost tools and resources, including credits, was particularly beneficial to Kepler. "AWS was the best choice to help us develop our product because its services are easy to use and well documented," says Dr Harro Stokman, chief executive officer (CEO) and founder of Kepler Vision Technologies. "We also got a lot of support from our AWS team throughout the development process on which services to choose and how to best design our solution to work with them."

Developing a Hybrid Solution to Address Privacy Using Amazon ECS Anywhere
While developing Kepler Night Nurse, the company faced a challenge: care homes do not have the computing resources required to process and analyze video images, but processing images must occur on premises to protect residents' privacy. Kepler found a solution by taking a hybrid approach and using edge devices. The Kepler Night Nurse Edge Box, managed using Amazon ECS Anywhere, allows it to easily run containers on customer-managed infrastructure. Kepler is also now able to remotely monitor and configure all edge devices installed at care homes and deploy its solution to new customers in minutes. "Using AWS, we can easily manage the hybrid setup with a single control panel view of all services, which gives us total visibility of product performance at every customer site," says Stokman. The training data is anonymized, encrypted, and stored in Amazon S3, and sensitive data is only accessible to approved Kepler staff.

Delivering a Stable System to Provide High-Quality Care Using Amazon CloudWatch
Caregivers can now respond quickly to elderly residents without having to constantly monitor multiple video screens. Using Kepler Night Nurse, care homes can provide better quality care. "Now, residents aren't disturbed unnecessarily by nightly rounds and can sleep through the night," says Stokman. "If there are issues, caregivers can be there to help within minutes. Our solution also reduces false alarms so caregivers can provide care only when actually needed." Kepler monitors its software to ensure its solution remains reliable by using Amazon CloudWatch, which provides on-premises edge device monitoring. CloudWatch automatically manages and restarts edge devices if they fail, meaning Kepler's IT team only needs to intervene for complex issues. "Building on AWS means we have a highly available system that warns us when something isn't working," says Stokman. "This allows us to immediately address issues so no lives are put at risk."
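The exact mechanism isn't described in the case study; one common way to detect a failed edge device with CloudWatch is a heartbeat alarm, sketched below with boto3 (metric, namespace, and ARNs are hypothetical):

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

# Alarm when an edge device stops sending its heartbeat metric.
cloudwatch.put_metric_alarm(
    AlarmName="edge-box-care-home-42-offline",
    Namespace="KeplerNightNurse",
    MetricName="Heartbeat",
    Dimensions=[{"Name": "DeviceId", "Value": "edge-box-42"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=5,
    Threshold=1,
    ComparisonOperator="LessThanThreshold",
    TreatMissingData="breaching",  # a silent device counts as unhealthy
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],
)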
Improving Neural Network Training for Accurate Video Analysis at Lower Cost
The company also uses Amazon EC2 to train its neural networks and improve video recognition accuracy. The training takes only a few hours, compared to several weeks using an on-premises approach. "We have on-demand scalability for GPU workloads to train our neural network models when we need to," says Stokman. "It's fast and extremely cost effective, and we only pay for what we use. We've saved 70 percent in IT costs, giving us the cashflow we need to grow fast."

Kepler plans to use AWS services to continue to develop Kepler Night Nurse. It is working to improve the efficiency of its neural network training and to add new functionality, such as faster video processing and even more accurate video recognition. Today, Kepler Vision is growing very rapidly in Europe and is on track to achieve its mission: to look after the well-being of 1 million patients by 2030. "We've improved the quality of care in elder care homes," says Stokman. "Built on AWS, our solution helps staff provide attentive care while affording residents the privacy and dignity they deserve."

Benefits of AWS
Improves residents' safety and well-being, notifying caregivers within 30 seconds of detected need
Scales to meet immediate increases in demand
Substantially reduces time for neural-network algorithm training
Monitors, manages, and restarts edge devices automatically with Amazon CloudWatch

AWS Services Used
Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications.
Amazon ECS Anywhere is a feature of Amazon ECS that enables you to easily run and manage container workloads on customer-managed infrastructure.
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), IT managers, and product owners.

Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today.

AWS for Industries
AWS releases smart meter data analytics
by Sascha Janssen and Juan Yu | on 03 NOV 2020 | in Amazon Athena, Amazon Redshift, Amazon SageMaker, Industries, Power & Utilities, Sustainability, Technical How-to

Introduction
Utilities have deployed Meter Data Management Systems (MDMS) since the late 90s, and MDMS deployments have accelerated alongside the deployment of smart metering and advanced metering infrastructure (AMI) at utilities worldwide. MDMS collect energy consumption data from smart meter devices and send it to utility customer information systems (CIS) for billing and further processing.
The most common MDMS use case for utilities is the performance of basic data validation, verification, and editing (VEE) functions, and the creation of billing determinants from vast amounts of meter data. Nonetheless, petabytes of valuable energy consumption data remain trapped in legacy utility MDMS. Utilities confronting the need for transition driven by decarbonization and decentralization can benefit from unlocking the power of metering data and enriching it with other information sources like geographic information systems (GIS), CIS, and weather data. This provides compelling insights for various use cases such as forecasting energy usage, detecting system anomalies, and analyzing momentary service outages. Collectively, these use cases present utilities with opportunities to improve customer satisfaction while increasing operational efficiency.

An AWS Quick Start, which deploys a Smart Meter Data Analytics (MDA) platform on the AWS Cloud, helps utilities tap the unrealized value of energy consumption data while removing undifferentiated heavy lifting. This allows utilities to provide new services such as:

- Load prediction on the household, circuit, and distribution system level
- Deeper customer engagement through proactive notifications of high consumption or power outage status
- Predictive maintenance on distribution assets, circuit quality analytics, and much more

This blog reviews the architecture of the AWS MDA Quick Start and its design, aimed at providing utilities with a cost-effective data platform for working with petabytes of energy consumption data.

What does the MDA Quick Start include?

The MDA uses a data lake and machine learning capabilities to store the incoming meter reads, analyze them, and provide valuable insights. The Quick Start comes with three built-in algorithms to:

- Predict future energy consumption based on historical reads
- Detect unusual energy usage
- Provide details on meter outages

The MDA platform is capable of processing up to 250 TB of meter reads each day in batches. It also handles late-arriving data and prepares the data for different consumption endpoints, like a data warehouse (Amazon Redshift), a machine learning pipeline (Amazon SageMaker), or APIs that make the data consumable for third-party applications.

MDA architecture

The core of the MDA is built on serverless components. Serverless ensures that the utility doesn't have to provision or manage infrastructure, and scaling is done automatically based on the load or the amount of delivered meter reads. This approach minimizes utility cost. The following AWS services are included:

- A data lake formed by Amazon S3 buckets to store raw, clean, and partitioned business data.
- An extract, transform, load (ETL) process built with AWS Glue and AWS Glue workflows. Since AWS Glue only runs on demand, provisioning infrastructure or managing nodes is not necessary.
- An Amazon Redshift cluster that serves as a data warehouse for the business data.
- AWS Step Functions to orchestrate machine learning pipelines.
- Amazon SageMaker to support model training and inferencing.
- A Jupyter Notebook with sample code to perform data science tasks and data visualization.
- Amazon API Gateway to expose the data, energy forecasts, outages, and anomalies via HTTP.

Data ingestion

Utilities ingest meter data into the MDA from an MDMS, which performs basic, but important, validations on the data before the data gets shipped to other systems. One advantage of this is that all data delivered to the MDA from the MDMS should be clean and can be processed directly. Furthermore, the MDMS delivers the meter reads in batches, generally once a day, so the MDA must process the data when a batch arrives and finish processing it before the next batch arrives. Given their legacy architectures, the most commonly used interface to transfer data from an MDMS is plain files over (S)FTP. Utilities can connect their MDMS via AWS Storage Gateway for files, AWS DataSync, or AWS Transfer for SFTP to the data platform and store the meter read information directly in an S3 bucket, which is called the "landing zone." From there, the ETL pipeline picks up the new meter reads and transforms them into a business-valuable format.
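The batch run that picks up new meter read files can be started on a time or event basis. As a minimal sketch of the event-based variant, and assuming a hypothetical workflow name that is not part of the Quick Start itself, a small AWS Lambda function subscribed to the landing zone's S3 object-created notifications could start the AWS Glue workflow:

import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Start the ETL workflow when a new batch file lands in the landing zone."""
    for record in event.get("Records", []):
        print("New meter read batch arrived:", record["s3"]["object"]["key"])

    # Workflow name is a hypothetical placeholder.
    run = glue.start_workflow_run(Name="mda-etl-workflow")
    return {"workflowRunId": run["RunId"]}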
Data lake

The heart of the MDA platform is the data lake. It is composed of three primary S3 buckets and an ETL pipeline that transforms the incoming data in batches and stores the results in different stages. The batch run can be either time- or event-based, depending on the delivery mechanism of the MDMS. The data lake handles late-arriving data and takes care of some basic aggregations (and re-aggregations). The workflow actively pushes the curated meter reads from the business zone to Amazon Redshift.

The core ETL pipeline and its bucket layout

The landing zone contains the raw data, which is a simple copy of the MDMS source data. On a periodic or event basis, the first AWS Glue job takes the raw data and cleans and transforms it to an internal schema before it gets stored in the "clean zone" bucket. The clean zone contains the original data converted into a standardized internal data schema. On top of that, dates are harmonized and unused fields are omitted. This optimizes the meter data for all subsequent steps. Another advantage of the standardized data schema is that different input formats can be adopted easily: only the first step of the pipeline needs to be adjusted in order to map different input formats to the internal schema, which allows all subsequent processes to work transparently with no further adjustment needed.

A second AWS Glue job moves the data from the clean zone to the "business zone." The business zone is the single point of truth for further aggregations and all downstream systems. Data is transformed to the correct format and granularity for users. Data is stored in Parquet and is partitioned by reading date and reading type. The column-based file format (Parquet) and the data partitioning enable efficient queries, so it is best practice to choose partition keys that correspond to the query patterns in use. To prevent data from getting transformed twice, Job Bookmarks are used on each job. Job Bookmarks are a feature to process data incrementally and let AWS Glue keep track of data that has already been processed. For that, the ETL job persists state information from its previous run, so it can pick up where it finished. This approach follows the modern data platform pattern, and more detailed descriptions can be found in this presentation.
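A condensed sketch of what such a bookmark-aware AWS Glue job looks like in PySpark; the catalog, table, and bucket names are hypothetical stand-ins for the Quick Start's actual resources:

import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Bookmark-aware read: only data not processed by earlier runs is returned.
clean_reads = glueContext.create_dynamic_frame.from_catalog(
    database="meter-data",      # hypothetical catalog database
    table_name="cleanzone",     # hypothetical clean-zone table
    transformation_ctx="clean_reads",
)

# Write Parquet to the business zone, partitioned for efficient queries.
glueContext.write_dynamic_frame.from_options(
    frame=clean_reads,
    connection_type="s3",
    connection_options={
        "path": "s3://mda-business-zone/readings/",   # hypothetical bucket
        "partitionKeys": ["reading_date", "reading_type"],
    },
    format="parquet",
)

job.commit()  # persists the bookmark state for the next run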
Handling late data

In the meter world, late data is a common situation. Late data means that a certain meter didn't deliver its consumption at the expected point in time, due to issues with the network connection or the meter itself. Once the meter is connected and working again, these reads get delivered in addition to the current reads. An example could be the following:

Day 1 – both meters deliver their consumption reads:

{ meter_id: meter_1, reading_date: 2020/08/01, reading_value: 0.53, reading_type: INT }
{ meter_id: meter_2, reading_date: 2020/08/01, reading_value: 0.41, reading_type: INT }

Day 2 – only meter_1 sends its consumption reads:

{ meter_id: meter_1, reading_date: 2020/08/02, reading_value: 0.32, reading_type: INT }

Day 3 – reads from both meter_1 and meter_2 are sent; the second meter also sends its outstanding read from the previous day:

{ meter_id: meter_1, reading_date: 2020/08/03, reading_value: 0.49, reading_type: INT }
{ meter_id: meter_2, reading_date: 2020/08/03, reading_value: 0.48, reading_type: INT }
{ meter_id: meter_2, reading_date: 2020/08/02, reading_value: 0.56, reading_type: INT }

The data lake needs to handle the additional delivery on the third day. The ETL pipeline solves this automatically by sorting the additional read into the correct partition, to make sure that each upstream system can find the late data and act on it. To make all following ETL steps aware of the late-arriving data (that is, to re-aggregate monthly or daily datasets), a distinct list of all arriving dates in the current batch is stored in a temporary file, which is only valid for the current pipeline run:

distinct_dates = mapped_meter_readings \
    .select('reading_date') \
    .distinct() \
    .collect()

distinct_dates_str_list = ','.join(value['reading_date'] for value in distinct_dates)

This list can be consumed by anyone who is interested in the arrival of late data. The list defines which reading dates were delivered during the last batch. In this particular example, the list with the distinct values for each day would look like this:

Day 1: {dates=[2020/08/01], ...}
Day 2: {dates=[2020/08/02], ...}
Day 3: {dates=[2020/08/03,2020/08/02], ...} // day 3 has the late read from Aug 2nd

Based on these results, an aggregation job that aggregates meter reads on a daily basis can derive which dates need to be re-aggregated. For day one and day two, only the aggregation for the first and second day is expected. But on day three, the job needs to aggregate the data for the third day and re-aggregate the consumption reads for the second. Because the re-aggregation is handled like the normal aggregation, the whole day is recalculated and previous results are overwritten, so no UPSERT is needed.
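To illustrate the overwrite-instead-of-UPSERT idea, here is a sketch of a daily aggregation step, assuming mapped_meter_readings is a Spark DataFrame (for example, obtained from the DynamicFrame via toDF()) and a hypothetical business-zone path; this is not the Quick Start's actual job code:

# Overwrite only the partitions for the dates delivered in this batch.
spark = glueContext.spark_session
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

dates_to_aggregate = distinct_dates_str_list.split(",")

daily = (
    mapped_meter_readings
    .filter(mapped_meter_readings.reading_date.isin(dates_to_aggregate))
    .groupBy("meter_id", "reading_date")
    .sum("reading_value")
    .withColumnRenamed("sum(reading_value)", "daily_consumption")
)

# Rewriting the whole partition for each delivered date replaces any previous
# aggregate for that day, so no UPSERT logic is needed.
(
    daily.write
    .mode("overwrite")
    .partitionBy("reading_date")
    .parquet("s3://mda-business-zone/daily-aggregates/")  # hypothetical path
)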
Adopting a different input schema

Different MDM systems deliver different file formats. Data input to the MDA is adaptable with minimal effort using a standardized internal data schema. The first step in the ETL pipeline transfers the input data from the landing zone to this internal schema. The schema is designed to hold all important information, and it can be used as an input for different business zone representations. A closer look at the corresponding section of the AWS Glue jobs shows that it is fairly easy to adopt a different data schema by just changing the input mapping. The ApplyMapping class is used to apply a mapping to the loaded DynamicFrame:

datasource = glueContext.create_dynamic_frame.from_catalog(
    database='meter-data',
    table_name='landingzone',
    transformation_ctx='datasource')

mapped_reads = ApplyMapping.apply(
    frame=datasource,
    mappings=[
        ('col0', 'long',   'meter_id',      'string'),
        ('col1', 'string', 'obis_code',     'string'),
        ('col2', 'long',   'reading_time',  'string'),
        ('col3', 'long',   'reading_value', 'double'),
        ('col4', 'string', 'reading_type',  'string'),
    ],
    transformation_ctx='mapped_reads')

The left side of each mapping tuple shows the input format, with five columns (col0–col4) and their respective data types. The right side shows the mapping to the internal data schema. The incoming data format is discovered automatically by an AWS Glue Crawler. The Crawler checks the input file, detects its format, and writes the metadata to an AWS Glue Data Catalog. The DynamicFrame then gets created from the information in the Data Catalog and is used by the AWS Glue job.

Triggering the machine learning (ML) pipeline

After the ETL has finished, the machine learning pipeline is triggered. Each ETL job publishes its state to Amazon CloudWatch Events, which publishes each state change of the AWS Glue ETL job to an Amazon SNS topic. One subscriber of this topic is an AWS Lambda function. As soon as the business data has been written to the Amazon S3 bucket, this Lambda function checks whether the ML pipeline is already running or whether the state machine that orchestrates the preparation and model training needs to be triggered.
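A minimal sketch of such a gatekeeper function, assuming the state machine ARN is supplied through a hypothetical environment variable:

import os
import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = os.environ["STATE_MACHINE_ARN"]  # hypothetical configuration

def lambda_handler(event, context):
    """Start the ML training state machine unless it is already running."""
    running = sfn.list_executions(
        stateMachineArn=STATE_MACHINE_ARN,
        statusFilter="RUNNING",
        maxResults=1,
    )["executions"]

    if running:
        return {"started": False, "reason": "pipeline already running"}

    execution = sfn.start_execution(stateMachineArn=STATE_MACHINE_ARN)
    return {"started": True, "executionArn": execution["executionArn"]}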
Machine learning architecture

The machine learning pipelines are designed to meet both online and offline prediction needs. Online prediction allows users to run predictions against the latest data on a single meter upon request at any time of the day. Batch prediction allows users to generate predictions for many meters on a recurring schedule, such as weekly or monthly. Batch predictions are stored in the data lake and can be published via an API or used directly in any BI tool to feed dashboards and gain rapid insights.

Meter readings are time series data, and there are many algorithms that can be used for time series forecasting. Since some algorithms are designed for a single set of time series data, the model would need to be trained individually for each meter before it can generate predictions. This approach does not scale well, even when used for only thousands of meters. The DeepAR algorithm can train a single model jointly over many similar time series, and it outperforms other popular forecasting algorithms. It can also be used to generate forecasts for new meters the model hasn't been trained on. DeepAR allows up to 400 values for the prediction_length, depending on the needed prediction granularity: it can generate hourly forecasts for up to two weeks, or daily forecasts for up to a year.

There are also many models that can be used for time series anomaly detection. The MDA Quick Start uses the Prophet library because it is easy to use and provides good results right out of the box. Prophet combines trend, seasonality, and holiday effects, which suits meter consumption data well. The Quick Start uses hourly granularity for meter consumption forecasting and daily granularity for anomaly detection. The data preparation step can be modified to support different granularities.

Preparing and training the model

The input time series data for the model training should contain timestamps and the corresponding meter consumption collected since the last measurement. The data in the business zone, which acts as the single point of truth, is prepared accordingly. DeepAR also supports dynamic features, such as weather data, which can be integrated into the ML pipeline as part of the training data to improve model accuracy. The weather data needs to be at the same frequency as the meter data, and if the model is trained with weather data, weather data also needs to be provided for both online inference and batch prediction. By default, weather data is not used, but utilities can enable it as described in the deployment documentation.

The training pipeline can be run with a different set of hyperparameters, with or without the weather data, or even with another set of meter data, until the results of the model are acceptable. After the model has been trained, the training pipeline deploys it to a SageMaker endpoint, which is immediately ready for online inferences. The endpoint can be scaled by choosing a larger instance type to serve more concurrent online inference requests. To keep the model up to date, the training pipeline can be re-run daily to include new meter consumption data and learn pattern changes in customer consumption.

Machine learning batch pipeline

For energy consumption forecasting and anomaly detection, the latency requirements are typically on the order of hours or days, so predictions can be generated periodically. By leveraging a serverless architecture incorporating AWS Lambda functions and Amazon SageMaker batch transform jobs, batch jobs can be parallelized to increase prediction speed. Each batch job includes an anomaly detection step, a forecast data preparation step, a forecasting step, and a step to store the results to Amazon S3. Step Functions orchestrates those steps, and a Map state supports custom batch sizes and meter ranges. This enables the MDA to scale and support millions of meters.

The input of the batch pipeline includes the date range of the meter data and the ML model. By default, it uses the latest model trained by the training pipeline, but a custom DeepAR model can also be specified. In general, the training jobs have to be run many times with different parameters and features before the model satisfies expectations. Once the appropriate parameters and features are selected, the model training still needs to be re-run on a regular basis with the latest data to learn new patterns. In the MDA, the training and batch pipelines are managed in separate state machines, which allows all pipelines to run as one workflow or each pipeline to run individually on a different schedule, as requirements dictate.
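As a sketch of the forecasting step, one of the Lambda functions invoked by the state machine might launch a SageMaker batch transform job over one slice of meters; the job, model, and bucket names below are hypothetical:

import boto3
from datetime import datetime

sagemaker = boto3.client("sagemaker")

def start_forecast_batch(model_name: str, batch_id: int) -> str:
    """Launch a batch transform job that scores one slice of meters."""
    job_name = f"meter-forecast-{batch_id}-{datetime.utcnow():%Y%m%d%H%M%S}"
    sagemaker.create_transform_job(
        TransformJobName=job_name,
        ModelName=model_name,  # for example, the latest trained DeepAR model
        TransformInput={
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": f"s3://mda-ml/forecast-input/batch-{batch_id}/",
            }},
            "ContentType": "application/jsonlines",
            "SplitType": "Line",
        },
        TransformOutput={"S3OutputPath": "s3://mda-ml/forecast-output/"},
        TransformResources={"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
    )
    return job_name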
How to get started and go build!

To get started, the Quick Start can be deployed directly. Additional documentation explains step by step how to set up the MDA platform and use sample data to experiment with the components.

This blog describes release one of the AWS Smart Meter Data Analytics (MDA) platform Quick Start. AWS plans to continue to extend the MDA based on customer feedback to unlock more possibilities to deliver value from smart meter data.

TAGS: AWS MDA, AWS Meter Data Analytics, meter analytics, Meter Data Management Systems, Smart Meter Data, utility MDMS

Sascha Janssen is a Senior Solutions Architect at AWS, helping Power & Utility customers become digital utilities. He enjoys connecting "things," building serverless solutions, and using data to deliver deeper insights.

Juan Yu is a Data Warehouse Specialist Solutions Architect at Amazon Web Services, where she helps customers adopt cloud data warehouses and solve analytic challenges at scale. Prior to AWS, she had fun building and enhancing an MPP query engine to improve the customer experience on big data workloads."

Bank of Montreal Case Study _ AWS.txt","BMO Market Risk Uses AWS to Optimize Computational Capacity

2023

Leading North American bank BMO used AWS to build a more elastic platform for calculating risk metrics, scaling the bank's computational capacity to comply with future regulatory requirements. Benefits include:

- Added 500+ additional stress test scenarios
- Runs ~10,000 on-demand and spot EC2 instances nightly
- Increased computing capacity to one billion+ nightly calculations
- Saved five hours daily processing detailed and aggregated risk numbers

BMO is a leading North American bank with a strong global reputation for disciplined risk management. After the 2007–2009 financial crisis, regulatory demands for disclosing market risk increased, requiring BMO to scale its risk platforms. Supported by the BMO Technology and Operations team, BMO's three primary operating groups, Personal and Commercial Banking, BMO Capital Markets, and BMO Wealth Management, serve customers in Canada and the United States, with BMO Capital Markets operating in select global markets internationally.

Navigating a Changing Regulatory Landscape

With increased demand for disclosing market risk after the financial crisis, banks needed to perform regular stress tests on a variety of data, including revenues, expenses, losses, pre-tax net income, and capital ratios, plus distinguish between the trading book (assets intended for active trading) and the banking book (assets expected to be held to maturity, such as customer loans). Banks also had to calculate the risk of market illiquidity and assess the use of expected shortfall rather than value at risk when measuring risk under stress. The introduction of the Basel Reforms (2018) and the implementation of the Fundamental Review of the Trading Book (2019) significantly increased the volume of risk calculations needed.
More Data, Faster

The BMO Market Risk Technology team builds and maintains the bank's risk platform. First developed in 2015, BMO's Market-Risk Next-Generation (MRNG) platform calculates market risk for all capital market positions in various asset classes, such as Fixed Income, Commodity, FX, Interest Rate, Equity, and Structured products. BMO had to run far more complex risk models to predict the bank's ability to withstand hypothetical future adverse events. The BMO Market Risk Technology team also faced time challenges: all calculations and aggregations had to run at the close of business (10 pm ET) and be ready for the opening of markets (7:30 am ET).

Solving with Scalability

To meet the regulatory market risk demands, the team needed a highly scalable compute platform to calculate complex models in similar or less time and allow for simultaneous calculation of multiple sets of test results. The Amazon Web Services (AWS) solution has the flexibility and elasticity to scale when needed. Market Risk Oversight took advantage of the increased computational capacity to add over 500 more stress scenarios, improving the accuracy of stress test results. Also, the new platform can run Value at Risk (VaR) and Daily Stress Test batches in parallel, so detailed and aggregated risk numbers are delivered well before 7:30 am ET, saving the risk team five hours each day. BMO's North American trading desks can then manage risk in a timely and effective manner.

This new platform gives BMO the flexibility to meet future regulatory challenges. "If in the future we have new regulatory requirements which need another 200 million or more calculations, we still need to complete them in the same fixed window," notes Jason Rachlin, Head of Market Risk and Chief Risk Officer for BMO Capital Markets. "This will only happen if our platform is elastic and scalable."

Delivering Business Objectives with Cloud Services

"We've now reached the point where all of our lines of business are using a broad array of cloud services and driving increasingly detailed cloud adoption roadmaps to meet those objectives," says Carl Gomes, Chief Information Officer for Market Risk Technology and Corporate Treasury Technology at BMO. "For example, in Market Risk Technology, we are spinning off 8,000 to 10,000 on-demand and spot elastic compute cloud (EC2) instances nightly on AWS. These machines are also joining our Market Risk compute grid to perform various risk calculations."
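As a rough illustration of how a nightly grid can request mixed on-demand and Spot capacity in one call (the launch template name and capacity split are assumptions, not BMO's configuration), the EC2 fleet API supports exactly this pattern:

import boto3

ec2 = boto3.client("ec2")

# One-shot fleet of mostly Spot capacity for the nightly batch window.
response = ec2.create_fleet(
    Type="instant",
    LaunchTemplateConfigs=[{
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "risk-grid-worker",  # hypothetical template
            "Version": "$Latest",
        },
    }],
    TargetCapacitySpecification={
        "TotalTargetCapacity": 10000,
        "OnDemandTargetCapacity": 2000,
        "SpotTargetCapacity": 8000,
        "DefaultTargetCapacityType": "spot",
    },
)

instance_ids = [i for group in response["Instances"] for i in group["InstanceIds"]]
print(f"Launched {len(instance_ids)} grid workers")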
The BMO Market Risk Technology team uses Amazon Elastic Compute Cloud (Amazon EC2), grid computing, and Amazon CloudWatch to continue innovating and optimizing computational resources. Amazon EC2 provides secure and resizable compute capacity for virtually any workload, and Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. By building and running grids on AWS, companies are able to execute a larger number of parallel tasks, which leads to increased speed of analysis and reduced time to results.

A scaled-up, elastic cloud platform helps BMO run multiple risk metrics and regulatory stress calculations in parallel and scale computational capacity for future regulatory requirements. The new solution delivers on both fronts: it performs more than one billion calculations each night and maintains terabytes of data with significant daily growth. BMO's Market Risk Technology team had already spent many years using AWS and had the foundational skills and capabilities to meet the needs of the bank's business partners. Now, with Amazon EC2, grid computing, and CloudWatch as the foundation for BMO's cloud platform, the team is better positioned to support business needs across the enterprise.

AWS Increases Flexibility and Drives Innovation

AWS has worked closely with BMO through the process. Teams across BMO's business lines say the experience supports the bank's ambition to digitize, increase flexibility, and drive product innovation for customers. "The real challenge is not the new services themselves. It's adapting legacy processes and skillsets to get the full potential from cloud adoption," notes Harsh Katoch, Managing Director, Market Risk Technology. "This requires a new and more simplified operating model that supports DevSecOps and product ownership, consistent cloud governance, embracing cloud economics, and having the right skills across all our teams to make the most of AWS services."

AWS also supports BMO's Digital First strategy, using increased speed, scale, and the elimination of complexity to ensure customer experiences evolve continuously. Summing up the bank's goals, Carl Gomes states, "BMO is working continuously to meet our initiative to modernize and simplify platforms, and we are in the process of migrating all components and capabilities to modern, cloud-native technologies. The bank is also implementing DevOps methodologies to automate the integration and development needed to respond agilely to fast-moving global markets. With the help of AWS, our focus now is training our staff on the latest cloud technologies so that we can build an elastic, scalable, and modern risk platform that will meet the bank's needs and ambitions for years to come."

About BMO

BMO is a leading North American bank driven by a single purpose: to Boldly Grow the Good in business and life. Our Purpose informs our strategy, drives our ambition, and reinforces our commitments to progress: for a thriving economy, a sustainable future, and an inclusive society."

Bazaarvoice Case Study _ AWS.txt","Bazaarvoice Reduces Machine Learning Inference Costs by 82% Using Amazon SageMaker Serverless Inference

2022

Bazaarvoice, a leading provider of product reviews and user-generated content solutions, helps brands and retailers enrich their product pages with product ratings, reviews, and customer photos and videos. It uses machine learning (ML) to moderate and augment content quickly and to expedite the delivery of content to clients' websites. Benefits include:

- 82% reduction in ML inference costs
- Deployment time for new models cut from 30 to 5 minutes
- Sends data to existing models instantaneously
- Eliminates error-prone manual work
- Accelerates innovation

Bazaarvoice wanted an improved ML architecture to accelerate model deployment, reduce its costs and its engineers' workload, and accelerate innovation for its clients. With some of its infrastructure already on Amazon Web Services (AWS), Bazaarvoice migrated its ML workloads to Amazon SageMaker, which data scientists and developers use to prepare, build, train, and deploy high-quality ML models with fully managed infrastructure, tools, and workflows. In doing so, the company accelerated model deployment, improved scalability, and reduced costs by 82 percent. And it's reinvesting those cost savings to improve its service further.

With headquarters in Austin, Texas, and offices across the globe, Bazaarvoice uses ML to automate content moderation for enterprise retailers and brands.
The company collects, syndicates, and moderates reviews, social content, photos, and videos, which customers can use to enhance their product pages and drive sales. Bazaarvoice also uses ML to augment this content with semantic information to help clients categorize the content and glean insights.

Opportunity | Accelerating ML Innovation on AWS

Bazaarvoice wanted to improve its scalability, speed, and efficiency, but it was facing challenges with its older and slower ML solution. For example, every time the company needed to onboard a new client or train new models, it had to manually edit multiple model files, upload them, and wait for the system to register the change. The process took about 20 minutes and was prone to errors. Further, the architecture hadn't been designed to support the company's growing scale efficiently: each machine that supported its nearly 1,600 models needed 1 TB of RAM. "The cost was quite high, and because the architecture was built as a monolith, it couldn't automatically scale, which was one of our key goals," says Lou Kratz, principal research engineer at Bazaarvoice. Agility was also crucial to supporting Bazaarvoice's growing number of clients and to experimenting on ML models. "We wanted to be able to increase the number of models in production by 10 times without running into memory limits," says Kratz.

Solution | Achieving Simpler, More Scalable ML Deployments

Bazaarvoice considered building its own serverless hosting solution, but such a project would have been expensive and labor intensive. Instead, it adopted Amazon SageMaker Serverless Inference, a purpose-built inference option that makes it simple for businesses to deploy and scale ML models, to reduce the operational burden for its teams. "This project was the start of the unification of our model deployment," says Edgar Trujillo, senior ML engineer at Bazaarvoice. The company began sending traffic to its new system in December 2021, and by February 2022, it was handling all production traffic.

Bazaarvoice analyzes and augments millions of pieces of content per month, which results in tens of millions of monthly calls to SageMaker, or about 30 inference calls per second. But most of its ML models get called by clients only once every few minutes, so it doesn't make sense for Bazaarvoice to allocate dedicated resources. "We needed the flexibility to change between dedicated hosts for large, expensive models and low-cost options for models used less frequently," says Kratz. Using Serverless Inference, the company can scale up or down seamlessly to match demand, increasing efficiency and saving costs, and it is simple to deploy a model and move it to a dedicated endpoint if the model experiences high traffic. As a result, the company has improved its throughput while reducing costs, saving 82 percent on its ML inference costs by migrating all models across 12,000 clients to Serverless Inference. "The big win for us is that we don't have to manage servers or pay for compute time that we're not using," says Kratz. "And we can keep up with all the content coming in so that the client sees it moderated and augmented in a timely fashion."
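To illustrate the mechanics with hypothetical model and endpoint names (not Bazaarvoice's), deploying an existing SageMaker model behind a serverless endpoint only requires an endpoint configuration that specifies a ServerlessConfig instead of instances:

import boto3

sm = boto3.client("sagemaker")

# Endpoint configuration with serverless capacity instead of instances.
sm.create_endpoint_config(
    EndpointConfigName="content-moderation-serverless",  # hypothetical name
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "content-moderation-model",         # hypothetical model
        "ServerlessConfig": {
            "MemorySizeInMB": 2048,  # allowed values range from 1024 to 6144
            "MaxConcurrency": 20,    # capacity scales down when traffic stops
        },
    }],
)

sm.create_endpoint(
    EndpointName="content-moderation",
    EndpointConfigName="content-moderation-serverless",
)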
As Bazaarvoice delivers content more quickly, its customers can display that content much sooner for new end users. "Sending new client data to an existing model used to take 15–20 minutes," says Kratz. "Now, it happens right away." And deploying a brand-new model takes only 5 minutes instead of 20–30 minutes. On AWS, Bazaarvoice has seen an increase in model delivery throughput. The company can build a model, ship it, and run it on Serverless Inference to evaluate its performance before sending any content to it, reducing the risks of using live content. And there's no need to redeploy when it's time to send content to the model, because the model is already running on SageMaker; Bazaarvoice can deploy new models as soon as validation is complete. "Using Amazon SageMaker has vastly improved our ability to experiment and get new models to production quickly and inexpensively," says Dave Anderson, technical fellow at Bazaarvoice. "We have the flexibility to drive our value proposition forward, and that's exciting." The company has helped its data scientists move faster and has added more value for customers.

When Bazaarvoice feeds content into one of its ML models, the model outputs a confidence value, which is used to decide on the content. On the company's previous architecture, Bazaarvoice had to ship a new model anytime it wanted to change the decision logic. Bazaarvoice began using Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service that makes it easy for businesses to deploy, manage, and scale containerized applications, to handle decision logic outside the ML model. "Separating the decision logic was hugely beneficial because the content operations team can now get the results and make decisions virtually instantaneously," says Kratz. "They don't have to ship a new model and wait for it to deploy and update."
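A small sketch of that separation, assuming a hypothetical endpoint whose JSON response contains a confidence score: the calling service asks the model for a score and applies its own, independently tunable decision threshold:

import json
import boto3

runtime = boto3.client("sagemaker-runtime")

APPROVAL_THRESHOLD = 0.85  # adjustable without redeploying the model

def moderate(content_text: str) -> str:
    """Score a piece of content, then apply decision logic outside the model."""
    response = runtime.invoke_endpoint(
        EndpointName="content-moderation",   # hypothetical endpoint
        ContentType="application/json",
        Body=json.dumps({"text": content_text}),
    )
    confidence = json.loads(response["Body"].read())["confidence"]

    return "approve" if confidence >= APPROVAL_THRESHOLD else "send_to_human_review"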
Outcome | Continuing to Improve the Customer Experience

Bazaarvoice has unlocked significant cost savings while improving the ML development experience for its team and enhancing what it offers to its customers. The company plans to bring even more benefits to customers by using the SageMaker Serverless Inference API to power quick access. "ML is becoming the norm in this industry—you can't compete without it," says Kratz. "By using SageMaker Serverless Inference, we can do ML efficiently at scale, quickly getting out a lot of models at a reasonable cost and with low operational overhead."

About Bazaarvoice

With headquarters in Austin, Texas, and offices around the world, Bazaarvoice provides tools for brands and retailers to create smart shopper experiences across the entire customer journey through a global retail, social, and search syndication network."

Better Mortgage using Amazon Elastic Kubernetes _ Better Mortgage Video _ AWS.txt","Better Mortgage Builds Innovative Mortgage Solutions for its Customers on AWS

2023

Vishal Garg, founder and chief executive officer (CEO), discusses how Better Mortgage (NMLS #330511) uses Amazon Web Services (AWS) to grow its business and launch innovative solutions such as Equity Unlocker and One Day Mortgage. Equity Unlocker has revolutionized the concept of what qualifies for a down payment to purchase a home by enabling tech employees to pledge vested equity toward a down payment. Historically, in the traditional homebuying process, buyers would wait for weeks to receive a decision from their lenders; One Day Mortgage delivers a Commitment Letter in 24 hours and was built entirely on AWS. Better chose AWS because of Amazon Elastic Kubernetes Service (Amazon EKS), a managed Kubernetes service for running Kubernetes in the AWS Cloud and in on-premises data centers, and because of AWS machine learning and artificial intelligence capabilities. Watch the video to learn more about Better's journey of innovation."

BIPO Improves Customer Experience on its HR Management System Using Machine Learning on AWS _ Case Study _ AWS.txt","BIPO Improves Customer Experience on its HR Management System Using Machine Learning on AWS

2023

BIPO is a Singapore-based software company that provides cloud and mobile-based human resource management solutions for 3,300 customers worldwide, including those in the retail, food and beverage, and logistics industries. Its Human Resource Management System (HRMS) platform manages HR-related processes for more than 400,000 employees.

In 2020, BIPO expanded the capabilities of its HRMS platform using artificial intelligence and machine learning (AI/ML). The company integrated Amazon Textract, an ML service that automatically extracts text, handwriting, and data from scanned documents, going beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables. The integration with its HRMS mobile app cut claims submission times by up to 50 percent for each receipt: Amazon Textract automatically extracts and uploads the printed text on physical receipts from photos taken with employees' mobile device cameras.
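As a sketch of what the backend of such an integration might look like (the function and field handling are assumptions, not BIPO's code), Amazon Textract's expense analysis API can pull key fields from a photographed receipt:

import boto3

textract = boto3.client("textract")

def extract_receipt_fields(image_bytes: bytes) -> dict:
    """Pull key fields from a photographed receipt to prefill a claims form."""
    response = textract.analyze_expense(Document={"Bytes": image_bytes})

    fields = {}
    for doc in response["ExpenseDocuments"]:
        for field in doc["SummaryFields"]:
            field_type = field["Type"]["Text"]  # e.g., VENDOR_NAME or TOTAL
            if field_type in ("VENDOR_NAME", "TOTAL", "INVOICE_RECEIPT_DATE"):
                fields[field_type] = field["ValueDetection"]["Text"]
    return fields

# Usage sketch: prefill the claim with the extracted values.
# with open("receipt.jpg", "rb") as f:
#     print(extract_receipt_fields(f.read()))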
The new feature also minimized erroneous claims entries by up to 70 percent. BIPO has introduced this feature on its own internal HRMS, saving its employees up to 100 hours a month on claims submissions.

Opportunity | Reducing Cost and Inefficiencies on the HRMS Platform

BIPO's HRMS platform allows employees to perform everyday HR tasks, such as payroll, leave applications, claims submissions, and attendance taking, via a web- or mobile-based portal. To enhance the user experience, BIPO sought to improve its claims submission process, which was one of the most time-consuming tasks. Employees typically have multiple claims to file each month, and each claim can take an average of up to 20 minutes to upload. Combined, this resulted in up to 50 hours of lost productivity each month. Furthermore, finance departments spent up to 100 hours each month rectifying errors that resulted from the highly manual process.

In 2020, BIPO also saw a growing trend among its customers for a facial recognition-powered employee attendance-taking tool. It explored integrating existing facial recognition-based clocking systems on the market with the attendance-taking function on its HRMS. However, the costs were too high for its customers: BIPO would need to help its customers purchase devices and on-premises servers, which cost at least US$50,000.

Solution | Expanding the HRMS Platform's Capabilities Using Machine Learning

In late 2021, BIPO used Amazon Rekognition, which offers pre-trained and customizable computer vision capabilities to extract information and insights from images and videos, to implement a facial recognition-based attendance-taking feature at 20 percent of its initially estimated costs. Using Amazon Rekognition, BIPO eliminated the need for pricey, proprietary hardware and dedicated servers. Companies with the HRMS can use existing devices, such as employees' own mobile phones or company tablets, to take attendance, which reduces time spent on manual clock-ins by 80 percent. The facial recognition tool also incorporates liveness detection, which prevents fraudulent attendance-taking through pre-recorded videos.
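A rough sketch of the server side of such a clock-in flow, assuming a face collection that has already been populated with enrolled employees (all names here are hypothetical):

import boto3
from typing import Optional

rekognition = boto3.client("rekognition")

COLLECTION_ID = "employee-faces"  # hypothetical, pre-populated via index_faces

def clock_in(photo_bytes: bytes) -> Optional[str]:
    """Return the employee ID if the submitted face matches an enrolled one."""
    response = rekognition.search_faces_by_image(
        CollectionId=COLLECTION_ID,
        Image={"Bytes": photo_bytes},
        FaceMatchThreshold=95,  # require a high-confidence match
        MaxFaces=1,
    )
    matches = response["FaceMatches"]
    if not matches:
        return None  # no match; reject the clock-in attempt
    return matches[0]["Face"]["ExternalImageId"]  # ID stored at enrollment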
Outcome | Seamless Integration of AI/ML Features

"We must constantly introduce cutting-edge features and solutions to serve our customers better. With AWS, we have halved the time it takes to innovate, build, and implement these features from four months to two months," said Derick Teo, director of enterprise go-digital solutions at BIPO. "Our employees generate a large number of claims monthly. The OCR technology on BIPO's HRMS not only allows them to upload claims on an accurate and timely basis, but the time savings can also be redirected to other higher-value work within the company, and that has been truly invaluable to us."

Looking ahead, BIPO will roll out the image-to-text claims processing and facial recognition-based attendance-taking features to more customers. The features have been welcomed by customers, with an adoption rate of 20 percent since their introduction. Aside from attendance taking, BIPO is also looking to use its facial recognition feature for other use cases, such as granting access control for authorized personnel. Most access control systems on the market use fingerprints or identification cards as their primary input methods. However, such methods are unreliable, because fingerprints change over time and identification cards are easily misplaced. Facial recognition-based access control eliminates these problems while allowing more seamless and secure entries. BIPO also plans to introduce these facial recognition-based access controls at meetings, conferences, exhibitions, and other similarly sized events.

About BIPO

Established in 2010 and headquartered in Singapore, BIPO is a global payroll and people solutions provider. Our enterprise-ready Human Capital Management (HCM) solution automates HR processes, simplifies workflows, and delivers actionable insights. Complemented by our global payroll outsourcing and Employer of Record (EOR) services, we support your global workforce needs through a network of 40+ offices, four R&D centres, and business partners in 100+ countries."

BNS Group Case Study _ Amazon Web Services.txt","BNS Group Meets Growing Demand for Cloud-Based SMS Solution on AWS

2022

When harnessed strategically, Short Message Service (SMS) can be an extremely effective marketing tool. According to Gartner, SMS open rates are as high as 98 percent, compared to email's 20 percent average. Companies looking to run scalable SMS applications often rely on commercial software from independent software vendors (ISVs) such as BNS Group to reach their target audience.

BNS is an Australian software provider focused on secure enterprise SMS and fax messaging. Its software runs on the Windows platform and is licensed to public sector organizations such as the Australian Taxation Office and to private firms like Suncorp. For Suncorp, BNS software handles between 2 million and 3 million monthly SMS messages.
Pursuing a Cloud-Based Deployment Model

The founders of BNS had been contemplating a migration from the company's on-premises data center to the public cloud and observed a growing demand for cloud-based operations among current and potential BNS customers. Laurence Buchanan, CEO at BNS Group, says, "Some of our larger customers have started asking about the cloud as they begin their own modernization journey. As an independent software vendor, we knew we had to be on the cloud too. We had to ensure our products work in our customers' cloud tenancy and build documentation to support a cloud versus an on-premises deployment of our software."

The final push for cloud migration came when BNS customer Praisal approached BNS for a cloud-based SMS solution in its Amazon Web Services (AWS) tenancy to connect with its users. BNS then consulted with AWS on how best to build new virtual machines and relicense software development tools securely on the cloud. The business wanted to steer away from the "lift and shift" migration approach to avoid transferring technical debt and "baggage" from the data center into the cloud. BNS founders gravitated to AWS because of its high availability and the AWS ISV Accelerate Program, a co-sell program for organizations that provide software solutions that run on or integrate with AWS. "We really liked that AWS has an ISV competency in its partner program," says Buchanan. "It was important for us to have our enterprise SMS software verified for use on AWS. The value that AWS places on the ISV stream sealed the deal in our choice of cloud provider."

Receiving Strategic, Foundational Support from ISV Specialists

Over the course of five months, BNS performed an AWS Foundational Technical Review, which enables ISVs to identify and remediate risks in their software or solutions, with the support of the AWS ISV team, and completed its cloud migration in June 2022. "AWS has been very responsive throughout our migration journey and guided us in setting up the right cloud foundation from day one. The review process really helped us understand the AWS security paradigm," adds Buchanan.

After its migration, BNS began developing a custom SMS solution for Praisal on AWS. Developers decided to use Microsoft SQL as a front-end application programming interface (API). Within two days, BNS developed an SQL API that could send and receive SMS from Praisal's clients without its team having to learn any REST API calls or other technical complexities. Clive Pereira, R&D director at BNS Group, explains, "The database that records Praisal's SMS traffic resides in Praisal's AWS environment. Praisal can now run complete analytics across its data and gain insights into what's happening with its SMS traffic, which is a real game-changer for the organization."
Reducing Virtual Machines from 40 to 12

By strategically starting with a clean slate on the AWS Cloud, BNS decreased its virtual machines from 40 to 12 and reduced infrastructure costs by 50 percent. The business spins up resizable Amazon Elastic Compute Cloud (Amazon EC2) instances for Microsoft Windows servers and uses Amazon RDS for SQL Server, which makes it easy to set up, operate, and scale SQL Server deployments in the cloud, for database management. In the process, BNS configured its security according to cloud best practices.

Accelerating Transaction Rates While Increasing Productivity

The company has also experienced faster transaction rates on its SMS platform since the migration to AWS. "I've seen big improvements in throughput. We're able to process and transmit data faster on AWS," Buchanan says. BNS has also reduced time spent on backend operations because it no longer carries out server maintenance, firewall updates, and disaster recovery planning and testing, all of which are now automated on AWS. Productivity has risen, and Buchanan can now allocate his time to R&D, quality assurance, creating documentation, and customer engagement.

Onboarding Data Scientists to Enhance Analytics Capabilities

The BNS SMS solution includes user-friendly dashboards that clients such as Praisal can use to understand their data and perform predictive analytics. The business plans to further enhance its analytics capabilities as part of its product development strategy and recently hired two data scientists. According to Buchanan, onboarding new hires on AWS is much faster and easier compared to the BNS data center environment. One of the areas BNS Group is focusing on with clients is tracking the journey of each SMS, both those received and those not received by target customers, via out-of-the-box analytics models. With enhanced analytics, BNS Group's clients can drive customer engagement, increase retention, and reduce churn.

To further enhance its analytics ambitions, BNS is now exploring how artificial intelligence and machine learning can benefit its clients. The company is also looking to list its enterprise solutions on AWS Marketplace to increase its customer reach and access a new client base as an AWS-certified ISV. "If we didn't migrate to AWS, we wouldn't be able to engage the wide AWS customer base," Buchanan says. "It's been a win-win for BNS and AWS, and we look forward to what the future brings."

About BNS Group

BNS Group is an Australian independent software vendor providing enterprise SMS and fax messaging solutions. Its customers include public sector organizations such as the Australian Taxation Office and private clients like Suncorp, for which it handles up to 3 million SMS messages monthly."

Boehringer Ingelheim Establishes Data-Driven Foundations Using AWS to Accelerate the Launch of New Medicines _ Boehringer Ingelheim Case Study _ AWS.txt","Boehringer Ingelheim Establishes Data-Driven Foundations Using AWS to Accelerate the Launch of New Medicines

2023

Learn how Boehringer Ingelheim is transforming its ability to develop breakthrough treatments with its Dataland solution built on AWS.
Learn more » About Boehringer Ingelheim Español Founded in 1885, Boehringer Ingelheim works on breakthrough therapies to transform lives. More than 52,000 employees serve over 130 markets in three business areas: human pharma, animal health, and biopharmaceutical contract manufacturing. *.MsoChpDefault { Boehringer Ingelheim expects to be more effective in targeting diseases and developing breakthrough treatments, with an ambition to considerably shorten clinical trials. “By centralizing and improving the availability of our data, we accelerate our use case development in the future,” says Henrich. “That results in faster time to market, which translates into better patient outcomes.” Learn how Boehringer Ingelheim is transforming its ability to develop breakthrough treatments with its Dataland solution built on AWS. 日本語 2023 mso-generic-font-family:roman { AWS Glue is a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development. Learn more » AWS Professional Services Get Started 한국어 Opportunity | Collaborating alongside AWS in a Company-Wide Data Transformation Initiative for Boehringer Ingelheim  Overview | Opportunity | Solution | Outcome | AWS Services Used Working alongside Amazon Web Services (AWS), Boehringer Ingelheim is implementing an advanced company-wide initiative called Dataland, which aims to make data findable, accessible, interoperable, and reusable in the cloud. Using Dataland, the company has a structured catalog that accelerates data-driven decision-making across the organization and spreads a company-wide culture focused on data centricity. “Our Dataland initiative, powered by AWS, is establishing a data-driven mindset and working culture at Boehringer Ingelheim and will offer an unprecedented complete data solution for all colleagues,” says Andreas Henrich, vice president of enterprise data and platforms. Family owned since 1885, Boehringer Ingelheim serves more than 130 markets worldwide and spends €4.1 billion annually in research and development. In 2020, the company turned to AWS for help with an ambitious data transformation initiative to break down data silos and standardize enterprise-wide data solutions in 2 years while maintaining strict governance. “The main challenge is not the size of these huge datasets but knowing how to structure them. We chose AWS because we needed a trustworthy collaborator who fulfilled two main criteria: compliance and flexibility,” says Henrich. “Using AWS, we comply with regulations without too much customization. And its services are flexible enough to incorporate solutions from other third-party vendors to fill gaps in our current requirements.” Improved compliance mso-bidi-font-family:Cambria { *, serif { AWS Services Used In the pharmaceutical industry, massive amounts of data—from clinical trials, biobanks, electronic health records, supply chain, and production—can help uncover origins of disease, cures, and quicker development and delivery of new treatments to patients. But insights are often trapped in data silos. Global pharmaceutical company Boehringer Ingelheim is working to unlock the potential of data along its entire value chain by creating the infrastructure and processes to use data effectively. The company plans to increase the number of use cases in the pipeline and improve self-service capabilities, with a goal to phase out the initiative by 2025. 
Solution | Building a Centralized Data Hub That Has Reached 10,000 Employees

With Dataland, the company makes huge amounts of curated data available to the entire workforce through a self-service solution that helps drive insights. Its centralized data hub optimizes the structure of datasets while accounting for their size, up to multiple petabytes for external data. For its data lake, the company turned to Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. Using Amazon S3, Boehringer Ingelheim stores structured and unstructured data, such as information from handwritten documents or videos.

To extract maximum value, Boehringer Ingelheim uses AWS Glue, a serverless data integration service that makes it simpler to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning, and application development. Using AWS Glue, Boehringer Ingelheim relieves its data scientists of the heavy lifting previously required to maintain an extensive data catalog. Data scientists used to spend 60 percent of their time cleaning data, determining who could access it, and otherwise making it available. Now, users start working within hours instead of the months they previously needed. "This is what we wanted to turn around," says Henrich. "Now, as soon as we have a great idea, data is available at our fingertips, and data scientists can start working right away." More than 10,000 Boehringer Ingelheim employees so far have experienced the new solution through visualization models, dashboards, and other methods.
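As a small illustration of how such a catalog can be kept current with little manual effort (the bucket, database, and role names are hypothetical, not Boehringer Ingelheim's), an AWS Glue crawler can scan a data lake prefix and register table metadata automatically:

import boto3

glue = boto3.client("glue")

# Crawl a data lake prefix and register the discovered tables in the catalog.
glue.create_crawler(
    Name="datalake-clinical-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # hypothetical role
    DatabaseName="datalake_catalog",
    Targets={"S3Targets": [{"Path": "s3://example-datalake/clinical/"}]},
    Schedule="cron(0 3 * * ? *)",  # refresh the catalog nightly
)

glue.start_crawler(Name="datalake-clinical-crawler")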
Using solutions from GxP Compliance on AWS, Boehringer Ingelheim’s highly secure architecture aligns with industry requirements for improved compliance. The company’s data governance structure establishes clear guardrails and data quality rules that facilitate data reusability and improve synergies across an application landscape of roughly 1,000 systems. Boehringer Ingelheim’s infrastructure spans two AWS Availability Zones, which helps the company meet varying data residency requirements in Europe and the United States.

Boehringer Ingelheim realized that organized datasets would be helpful only if its teams had the right skills to generate insights for their daily work. In October 2021, Boehringer Ingelheim launched its Data Science Academy, with a mission to upskill employees and help them identify how to use data effectively, to build a focus on data culture, and to address the difference in data maturity levels across the organization. More than 3,000 employees across experience levels have participated in the program. “This program is intended to increase our pool of data scientists and engineers through retraining and recruitment and to strengthen our company-wide data literacy,” Henrich says. “This will increase awareness about the business potential of data and foster a data-driven culture across the company, encouraging an openness to new ways of working.”

Outcome | Pioneering Data Transformation on AWS for Clinical Value

The company is already deriving financial, efficiency, and compliance benefits from its 10 initial use cases. The use cases are real-world examples of how the Dataland initiative breaks down data silos, incorporates external real-world data, establishes strong data governance and data quality, and helps the company better collaborate with external partners. “Most important, it is helping us to focus across the entire value chain, from research and development to commercialization, to enrich the lives of the human patients and animals that we serve,” says Henrich. “Our research pipeline can now make innovative products available sooner for patients.”

Boehringer Ingelheim expects to be more effective in targeting diseases and developing breakthrough treatments, with an ambition to considerably shorten clinical trials. “By centralizing and improving the availability of our data, we accelerate our use case development in the future,” says Henrich. “That results in faster time to market, which translates into better patient outcomes.” The company plans to increase the number of use cases in the pipeline and improve self-service capabilities, with a goal to phase out the initiative by 2025. “We see that data transformation on AWS helps us to create value and to work better and faster,” says Urgeles. “Together, we are progressing and pioneering these topics. It’s a great feeling to finally see the result and how far our impact can go.”

Benefits of AWS

Cut time to access data from months to hours
Improved compliance with GxP regulations
Upskilled 3,000+ employees through its data academy
Established a goal of considerable reduction in clinical trial time

AWS Services Used

AWS Glue is a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development.

Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

Bosch Thermotechnology Accelerates IoT Deployment Using AWS Serverless Computing and AWS IoT Core _ Case Study _ AWS.txt,"Bosch Thermotechnology Accelerates IoT Deployment Using AWS Serverless Computing and AWS IoT Core (2023)

About Bosch Thermotechnology North America

Bosch Thermotechnology North America is a source of high-quality heating, cooling, and hot water systems. It is a division of Robert Bosch GmbH, a supplier of technology and services. In early 2021, Bosch TTNA began developing its first cloud-connected device, a heat pump system that technicians can remotely monitor, analyze, and troubleshoot. The company wanted to build a solution that could scale to handle highly variable workloads while requiring the least amount of effort to manage infrastructure.
Bosch Thermotechnology North America (Bosch TTNA) built a smart line of heating, ventilating, and air-conditioning (HVAC) systems by modernizing and migrating its business to the cloud, monitoring products remotely while removing the undifferentiated heavy lifting of managing infrastructure. As part of the North American division of Robert Bosch GmbH, Bosch TTNA was new to smart device development and wanted a cost-effective solution to expand its infrastructure capacity and scalability while creating new smart technologies. Bosch TTNA used Amazon Web Services (AWS) to build solutions that connect its devices to AWS Internet of Things (AWS IoT). The solution uses AWS serverless technologies for data processing, application integration, and the scaling required to manage its business. Bosch TTNA can now remotely monitor its new smart energy and building devices with minimal operational overhead, improving customer service.

Opportunity | Accelerating Product Innovation Using AWS Services to Create Smart HVAC Systems for Bosch TTNA

Bosch TTNA offers hardware solutions for its HVAC business and wants to transform into a software-driven company to better support wholesale, contractor, and homeowner customers. It is committed to offering state-of-the-art, energy-efficient smart systems that help reduce carbon emissions by building a portfolio of smart connected heating and cooling systems. The company saw an opportunity to use real-time device data to inform after-sale HVAC system maintenance and support: with the technology now ready, a cloud-connected solution that captures, processes, and analyzes real-time device data can benefit both customers and service providers. “We want to be smart HVAC champions. Sustainability is at the core of everything we do. The smarter our technologies are, the more efficient they will be,” says Pablo Ferreyra, head of software development for Bosch CI Americas. “We see using AWS as critical to that overall vision.”

Solution | Creating Cloud Competency while Developing a New Product

Bosch TTNA realized the importance and challenge of hiring and upskilling a new team to develop and maintain its smart products. Although the architecture is now in place, Bosch TTNA initially turned to AWS for these skills as its new team developed the competencies that the company needed to be successful. Eighty percent of its development team received AWS Certification, which validates technical skills and cloud expertise.
“Our talent is pretty autonomous at this point, and that is largely from using the support that we received from AWS,” says Ferreyra.

Given Bosch TTNA’s history selling connected thermostats, it knew that managing IoT infrastructure required significant resources. Bosch TTNA’s goals led it to use AWS services to deliver compelling products and services to customers at an optimized cost with reduced operational complexity. The company used AWS serverless technologies and AWS IoT Core to connect large numbers of IoT devices and route a high volume of messages to AWS services without managing infrastructure. AWS serverless technologies feature automatic scaling, built-in high availability, and a pay-for-use billing model to increase agility and optimize costs. By using AWS to do the heavy lifting, Bosch TTNA’s developers can focus on adding value to the business by bringing new use cases and features to market.

The new architecture that Bosch TTNA develops is reusable across IoT use cases. Bosch TTNA uses AWS CloudFormation—a service to model, provision, and manage AWS and third-party resources by treating infrastructure as code—to standardize its architecture and scale it globally to other teams. This standardization accelerates the workloads for other teams because they do not have to start every IoT project from scratch, and they can build solutions faster than before, which has reduced time to market by an average of 4 months. “We have Bosch’s innovation on top of AWS innovation, which accelerates us further,” says Ferreyra.

AWS Lambda—a serverless, event-driven compute service that can run code for virtually any type of application or backend service without provisioning or managing servers—fit this need, and Bosch TTNA decided to use it as the core service for the project. “For us, AWS Lambda was the perfect fit in terms of the burstiness of the workload and the cost considerations that we have for the solution,” says Ferreyra. With this solution, AWS managed the backend and infrastructure provisioning so that Bosch TTNA could focus on application innovation. For fully managed message queuing, the company incorporated Amazon Simple Queue Service (Amazon SQS), which sends, stores, and receives messages between software components at any volume. Bosch TTNA launched this connected heat pump system in June 2022 in the United States, and its success has led the company to plan multiple future smart products. A sketch of this style of device-message routing appears below.
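The following is a minimal sketch, in Python with boto3, of routing device telemetry from AWS IoT Core into an SQS queue for serverless processing. It is illustrative only, not Bosch TTNA's actual configuration; the topic, queue, and role names are hypothetical.

import boto3

iot = boto3.client("iot", region_name="us-east-1")

# Hypothetical rule: forward heat pump telemetry published on an MQTT
# topic into an SQS queue, where a Lambda consumer can process it.
iot.create_topic_rule(
    ruleName="heatPumpTelemetryToSqs",
    topicRulePayload={
        "sql": "SELECT * FROM 'heatpumps/+/telemetry'",
        "actions": [
            {
                "sqs": {
                    "queueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/telemetry-queue",
                    "roleArn": "arn:aws:iam::123456789012:role/IotToSqsRole",
                    "useBase64": False,
                }
            }
        ],
        "ruleDisabled": False,
    },
)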
Outcome | Expanding Product Capabilities Using AWS Solutions

Bosch TTNA is developing and implementing innovative technologies within the HVAC space, which benefits its products and customers. Using Bosch TTNA’s solution, service partners benefit from near-real-time installation support, remote diagnostics, troubleshooting support, and smart system health alerting. Before going onsite, service partners can use the Bosch TTNA mobile app to remotely determine whether there are problems with a system and find the steps and tools required for the repair, reducing service visits and expediting service delivery. The mobile app can also tell onsite installers whether they have performed an installation correctly, a valuable feature because the number one cause of warranty claims is defects introduced during system installation. This increases customer satisfaction and product durability and reduces warranty costs. Additionally, Bosch TTNA now has data from the field that shows how its devices behave and hold up under different external conditions. The company can use this data to quantify the durability of its devices and target the reliability of specific product components.

Bosch TTNA can now focus on making better products for its customers and service partners in less time and at a lower cost. Since the move to smart products and services, it has received better-than-expected sales results, and its successes have led it to explore other uses of AWS services, such as data lakes, data analytics, and machine learning. Bosch TTNA also wants to expand its current environment to extract more value from its data and thereby increase the service level and value to customers. “We use AWS to achieve our business goals and to innovate in the technology space. Using AWS, we accelerate the change that we’re driving,” says Ferreyra. Bosch Thermotechnology North America developed its first cloud-connected device using AWS Lambda and AWS IoT Core, optimizing costs while improving customer experience.

Benefits of AWS

First-ever smart product released by Bosch
Reduced operational overhead
Increased development team’s agility
Reduced total cost of ownership (TCO)
Cut product time to market by an average of 4 months

AWS Services Used

AWS IoT Core lets you connect billions of IoT devices and route trillions of messages to AWS services without managing infrastructure.

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and only pay for what you use.

AWS CloudFormation lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code.

Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.

Botprise Reduces Time to Remediation by 86 on Average Using Automation and AWS Security Hub _ Botprise Case Study _ AWS.txt,"Botprise Reduces Time to Remediation by 86% on Average Using Automation and AWS Security Hub (2023)

Learn how Botprise, in the cloud security automation industry, reduced costs and time to remediation by centralizing security operations using AWS Security Hub.

About Botprise

Founded in October 2019, Botprise is a cloud security automation company. Its solution saves customers time and effort by monitoring for configuration issues in cloud environments and automating cloud security operations tasks.

Overview

Botprise has aggressive growth goals for its no-code automation solution, which helps customers reduce the amount of manual intervention needed for managing cloud systems. To scale effectively while meeting stringent requirements for its security operation automation solution, Botprise looked to Amazon Web Services (AWS). Using services such as AWS Security Hub, a cloud security posture management service for automating AWS security checks and centralizing security alerts, Botprise achieved operational cost savings, significantly reduced the time to remediate a security issue, and cut its time to market in half to stay on track with its goal of nearly quadrupling its number of customers in the next year.
Opportunity | Using Programs Like AWS MAP to Build Momentum and Facilitate Growth for Botprise

Founded in October 2019, Botprise provides a security solution that monitors for configuration issues in cloud environments and offers automation of cloud operations. Because automation is complex and expensive to scale, Botprise offers apps and templates for customers to set up automation in a matter of minutes or days. Botprise’s customers don’t need technical expertise, and they gain value right away rather than taking months or years to build tools on their own.

Beginning as a startup, Botprise needed a cloud solution that could scale to support its future growth while maintaining high security standards for itself and its customers. From its founding, Botprise used AWS services to improve its security posture. The company started with automation around IT operations, building automation for internal purposes first and then offering it to customers. In 2022, Botprise pivoted to develop more cloud automation solutions with an increasing focus on security operation automation. During this pivot, Botprise received support from AWS, which it used to gain momentum and grow by 400 percent in the security operations automation sector.

In both 2020 and 2022, Botprise went through the AWS Well-Architected review process, which helps companies learn, measure, and build using architectural best practices and a framework of six pillars. Its security pillar focuses on using cloud technology to protect information and systems, such as managing confidentiality and security controls. “The AWS Well-Architected reviews gave us good guidance about what we can work on and what gaps we need to fill to make our company better,” says Kishan Bulusu, founder and chief executive officer at Botprise. In June 2022, Botprise also went through the AWS Migration Acceleration Program (AWS MAP), a comprehensive cloud migration program that uses an outcome-driven methodology developed by migrating thousands of enterprise customers to the cloud.

Solution | Cutting Operational Costs by 34% and Saving Time Using AWS Security Hub

By using the infrastructure of AWS services to build its automation, Botprise significantly reduced the time to market for its solution. Time savings early on are particularly important for a startup looking to acquire customers quickly. “Using AWS services and support from the AWS team, we could move much faster,” says Bulusu. “We built our security solution in 1 year, cutting the time to market in half.” As Botprise continues to increase its customer base, the company can scale as needed in a cost-effective way using AWS services. Botprise continues to experience ongoing cost savings as well, reducing its operational costs by 34 percent because of the reduced manpower costs of using AWS services to automate tasks.

Botprise modernized and strengthened its security posture using AWS services. Using insights from services such as AWS Security Hub, Botprise reduced the time it takes from issue identification to remediation by 86 percent on average because many issues no longer require manual remediation. It also bolstered the security of its solution using AWS services, increasing customer confidence and facilitating more growth. With time savings from automation, customer IT teams can focus on complex issues, which is important for Botprise’s customers that span the energy, financial services, and technology industries and have mission-critical security needs.

Using AWS Security Hub, Botprise can see data from multiple sources, including other AWS services and supported third-party products, on a centralized dashboard. This dashboard gives Botprise complete visibility into its security posture, helping the company better understand challenges and identify areas that need automation. Using AWS Security Hub, Botprise can show findings from Amazon GuardDuty, which protects AWS accounts with intelligent threat detection. “Bringing data from all services into a centralized dashboard makes life a lot easier,” says Bulusu. “You can monitor your security posture and see everything you need to keep an eye on.” Findings from Amazon Inspector, an automated vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure, also appear in AWS Security Hub. A sketch of pulling these centralized findings appears below.
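As a brief illustration of the centralized view described above, the following Python sketch retrieves active, high-severity findings that Security Hub has aggregated from services such as GuardDuty and Inspector. The Region and filter values are illustrative choices, not Botprise's actual configuration.

import boto3

securityhub = boto3.client("securityhub", region_name="us-east-1")

# Pull active, high-severity findings from the centralized Security Hub view.
findings = securityhub.get_findings(
    Filters={
        "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
    MaxResults=10,
)
for finding in findings["Findings"]:
    # Each finding records which product (for example GuardDuty or Inspector) produced it.
    print(finding.get("ProductName", "unknown"), "-", finding["Title"])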
Outcome | Continuing to Grow Using AWS Services

Botprise plans to continue building more automation around AWS services to maintain its security posture, facilitate growth, and help its customers get the most out of AWS. The company expects to scale rapidly in the next year, growing from 30 customers to over 100 by the end of 2023. “We want to use as many AWS services as we can to drive value to our customers in their automation journey, particularly in the areas of security and cloud operations,” says Bulusu.

Benefits of AWS

Achieved 86% average reduction in time to remediation for security issues
34% reduction in operational costs using automation
Cut time to market in half
Supported rapid customer growth

AWS Services Used

AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and enables automated remediation.

Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.

Amazon Inspector is an automated vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure.

AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for a variety of applications and workloads.
"
Build a powerful question answering bot with Amazon SageMaker Amazon OpenSearch Service Streamlit and LangChain _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Build a powerful question answering bot with Amazon SageMaker, Amazon OpenSearch Service, Streamlit, and LangChain

by Amit Arora, Navneet Tuteja, and Xin Huang | on 25 MAY 2023 | in Advanced (300), Amazon SageMaker, Amazon SageMaker JumpStart, Expert (400), Generative AI, Technical How-to

One of the most common applications of generative AI and large language models (LLMs) in an enterprise environment is answering questions based on the enterprise’s knowledge corpus. Amazon Lex provides the framework for building AI-based chatbots. Pre-trained foundation models (FMs) perform well at natural language understanding (NLU) tasks such as summarization, text generation, and question answering on a broad variety of topics, but they either struggle to provide accurate (hallucination-free) answers or fail completely at answering questions about content they haven’t seen as part of their training data. Furthermore, FMs are trained on a point-in-time snapshot of data and have no inherent ability to access fresh data at inference time; without this ability, they might provide responses that are potentially incorrect or inadequate.

A commonly used approach to address this problem is a technique called Retrieval Augmented Generation (RAG). In the RAG-based approach, we convert the user question into vector embeddings using an LLM and then do a similarity search for these embeddings in a pre-populated vector database holding the embeddings for the enterprise knowledge corpus. A small number of similar documents (typically three) is added as context, along with the user question, to the prompt provided to another LLM, and that LLM then generates an answer to the user question using the information provided as context in the prompt. RAG models were introduced by Lewis et al. in 2020 as a model where parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. To understand the overall structure of a RAG-based approach, refer to Question answering using Retrieval Augmented Generation with foundation models in Amazon SageMaker JumpStart.

In this post, we provide a step-by-step guide with all the building blocks for creating an enterprise-ready RAG application such as a question answering bot. We use a combination of different AWS services, open-source foundation models (FLAN-T5 XXL for text generation and GPT-J-6B for embeddings), and packages such as LangChain for interfacing with all the components and Streamlit for building the bot frontend. We provide an AWS CloudFormation template to stand up all the resources required for building this solution. We then demonstrate how to use LangChain to tie everything together:

- Interfacing with LLMs hosted on Amazon SageMaker.
- Chunking of knowledge base documents.
- Ingesting document embeddings into Amazon OpenSearch Service.
- Implementing the question answering task.

We can use the same architecture to swap the open-source models with the Amazon Titan models. After Amazon Bedrock launches, we will publish a follow-up post showing how to implement similar generative AI applications using Amazon Bedrock, so stay tuned.

Solution overview

We use the SageMaker docs as the knowledge corpus for this post.
We convert the HTML pages on this site into smaller overlapping chunks (to retain some context continuity between chunks) and then convert these chunks into embeddings using the GPT-J-6B model and store the embeddings in OpenSearch Service. We implement the RAG functionality inside an AWS Lambda function, with Amazon API Gateway handling the routing of all requests to the Lambda function. We implement a chatbot application in Streamlit that invokes the function via API Gateway; the function does a similarity search in the OpenSearch Service index for the embeddings of the user question. The matching documents (chunks) are added to the prompt as context by the Lambda function, and then the function uses the FLAN-T5 XXL model deployed as a SageMaker endpoint to generate an answer to the user question. All the code for this post is available in the GitHub repo.

Figure 1: Architecture (high-level architecture of the proposed solution)

Step-by-step explanation:

1. The user provides a question via the Streamlit web application.
2. The Streamlit application invokes the API Gateway endpoint REST API.
3. API Gateway invokes the Lambda function.
4. The function invokes the SageMaker endpoint to convert the user question into embeddings.
5. The function invokes an OpenSearch Service API to find documents similar to the user question.
6. The function creates a prompt with the user query and the similar documents as context and asks the SageMaker endpoint to generate a response.
7. The response is provided from the function to API Gateway.
8. API Gateway provides the response to the Streamlit application.
9. The user views the response in the Streamlit application.

As illustrated in the architecture diagram, we use the following AWS services:

- SageMaker and Amazon SageMaker JumpStart for hosting the two LLMs.
- OpenSearch Service for storing the embeddings of the enterprise knowledge corpus and doing similarity search with user questions.
- Lambda for implementing the RAG functionality and exposing it as a REST endpoint via API Gateway.
- Amazon SageMaker Processing jobs for large-scale data ingestion into OpenSearch.
- Amazon SageMaker Studio for hosting the Streamlit application.
- AWS Identity and Access Management (IAM) roles and policies for access management.
- AWS CloudFormation for creating the entire solution stack through infrastructure as code.

In terms of open-source packages used in this solution, we use LangChain for interfacing with OpenSearch Service and SageMaker, and FastAPI for implementing the REST API interface in the Lambda function.

The workflow for instantiating the solution presented in this post in your own AWS account is as follows:

1. Run the CloudFormation template provided with this post in your account. This creates all the necessary infrastructure resources needed for this solution: SageMaker endpoints for the LLMs, an OpenSearch Service cluster, API Gateway, the Lambda function, a SageMaker notebook, and IAM roles.
2. Run the data_ingestion_to_vectordb.ipynb notebook in the SageMaker notebook to ingest data from the SageMaker docs into an OpenSearch Service index.
3. Run the Streamlit application on a terminal in Studio and open the URL for the application in a new browser tab.
4. Ask your questions about SageMaker via the chat interface provided by the Streamlit app and view the responses generated by the LLM.

These steps are discussed in detail in the following sections; a minimal sketch of the k-NN search from step 5 of the architecture walkthrough follows below.
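The following is a minimal sketch of that similarity search using the opensearch-py package. The index name comes from the post; the vector field name ("embedding"), the credentials, and the placeholder embedding are assumptions for illustration, not the post's exact code.

from opensearchpy import OpenSearch

# Placeholder endpoint and credentials; in the real solution these come
# from the CloudFormation stack outputs.
client = OpenSearch(
    hosts=[{"host": "<your-opensearch-domain-endpoint>", "port": 443}],
    http_auth=("<username>", "<password>"),
    use_ssl=True,
)

# Placeholder: a 4096-dimensional embedding of the user question,
# as produced by the GPT-J-6B embeddings endpoint.
question_embedding = [0.0] * 4096

# Approximate k-NN search for the 3 chunks closest to the question;
# "embedding" is an assumed name for the vector field in the index.
query = {
    "size": 3,
    "query": {"knn": {"embedding": {"vector": question_embedding, "k": 3}}},
}
response = client.search(index="llm_apps_workshop_embeddings", body=query)
for hit in response["hits"]["hits"]:
    print(hit["_score"], str(hit["_source"])[:80])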
Prerequisites

To implement the solution provided in this post, you should have an AWS account and familiarity with LLMs, OpenSearch Service, and SageMaker. We need access to accelerated instances (GPUs) for hosting the LLMs. This solution uses one instance each of ml.g5.12xlarge and ml.g5.24xlarge; you can check the availability of these instances in your AWS account and request them as needed via a Service Quotas increase request, as shown in the following screenshot.

Figure 2: Service Quota Increase Request

Use AWS CloudFormation to create the solution stack

We use AWS CloudFormation to create a SageMaker notebook called aws-llm-apps-blog and an IAM role called LLMAppsBlogIAMRole. Choose Launch Stack for the Region you want to deploy resources to; the template is available for us-east-1, us-west-2, eu-west-1, and ap-northeast-1. All parameters needed by the CloudFormation template have default values already filled in, except for the OpenSearch Service password, which you have to provide. Make a note of the OpenSearch Service username and password; we use those in subsequent steps. This template takes about 15 minutes to complete.

After the stack is created successfully, navigate to the stack’s Outputs tab on the AWS CloudFormation console and note the values for OpenSearchDomainEndpoint and LLMAppAPIEndpoint. We use those in the subsequent steps.

Figure 3: CloudFormation Stack Outputs

Ingest the data into OpenSearch Service

To ingest the data, complete the following steps:

1. On the SageMaker console, choose Notebooks in the navigation pane.
2. Select the notebook aws-llm-apps-blog and choose Open JupyterLab. (Figure 4: Open JupyterLab)
3. Choose data_ingestion_to_vectordb.ipynb to open it in JupyterLab. This notebook ingests the SageMaker docs into an OpenSearch Service index called llm_apps_workshop_embeddings. (Figure 5: Open Data Ingestion Notebook)
4. When the notebook is open, on the Run menu, choose Run All Cells to run the code in this notebook. This downloads the dataset locally into the notebook and then ingests it into the OpenSearch Service index. This notebook takes about 20 minutes to run. The notebook also ingests the data into another vector database called FAISS. The FAISS index files are saved locally and then uploaded to Amazon Simple Storage Service (Amazon S3) so that they can optionally be used by the Lambda function as an illustration of using an alternate vector database. (Figure 6: Notebook Run All Cells)

Now we’re ready to split the documents into chunks, which can then be converted into embeddings to be ingested into OpenSearch. We use the LangChain RecursiveCharacterTextSplitter class to chunk the documents and then use the LangChain SagemakerEndpointEmbeddingsJumpStart class to convert these chunks into embeddings using the GPT-J-6B LLM. We store the embeddings in OpenSearch Service via the LangChain OpenSearchVectorSearch class. We package this code into Python scripts that are provided to the SageMaker Processing job via a custom container: we create the custom container, install the LangChain and opensearch-py Python packages in it, and upload the container image to Amazon Elastic Container Registry (Amazon ECR). See the data_ingestion_to_vectordb.ipynb notebook for the full code; a condensed sketch of the chunk-embed-store flow follows.
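Here is a condensed sketch of that flow under stated assumptions: it substitutes LangChain's generic SagemakerEndpointEmbeddings class with an assumed content handler for the post's SagemakerEndpointEmbeddingsJumpStart helper, uses illustrative chunk sizes, and uses placeholder endpoint names and credentials. The request and response shapes for the embeddings endpoint are assumptions, not verified against the notebook.

import json

from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import OpenSearchVectorSearch


class GptJContentHandler(EmbeddingsContentHandler):
    # Assumed request/response shapes for the JumpStart GPT-J embeddings endpoint.
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, inputs, model_kwargs):
        return json.dumps({"text_inputs": inputs, **model_kwargs}).encode("utf-8")

    def transform_output(self, output):
        return json.loads(output.read())["embedding"]


# Placeholder endpoint and credentials; the real values come from the stack outputs.
embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="<gpt-j-6b-embeddings-endpoint>",
    region_name="us-east-1",
    content_handler=GptJContentHandler(),
)

# Chunk sizes here are illustrative; the notebook sets its own values.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=30)
chunks = splitter.split_documents(docs)  # docs: the downloaded SageMaker doc pages

# Embed each chunk and index it in OpenSearch Service in one call.
OpenSearchVectorSearch.from_documents(
    chunks,
    embeddings,
    opensearch_url="https://<opensearch-domain-endpoint>",
    http_auth=("<username>", "<password>"),
    index_name="llm_apps_workshop_embeddings",
)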
We use the SageMaker ScriptProcessor class to create a SageMaker Processing job that runs on multiple nodes. The data files available in Amazon S3 are automatically distributed across the SageMaker Processing job instances by setting s3_data_distribution_type='ShardedByS3Key' as part of the ProcessingInput provided to the processing job. Each node processes a subset of the files, which brings down the overall time required to ingest the data into OpenSearch Service. Each node also uses Python multiprocessing to parallelize file processing internally. Therefore, there are two levels of parallelization happening: one at the cluster level, where individual nodes distribute the work (files) among themselves, and another at the node level, where the files assigned to a node are also split between multiple processes running on that node.

# setup the ScriptProcessor with the above parameters
processor = ScriptProcessor(
    base_job_name=base_job_name,
    image_uri=image_uri,
    role=aws_role,
    instance_type=instance_type,
    instance_count=instance_count,
    command=["python3"],
    tags=tags,
)

# setup input from S3; note the ShardedByS3Key, which ensures that
# each instance gets a random and equal subset of the files in S3
inputs = [
    ProcessingInput(
        source=f"s3://{bucket}/{app_name}/{DOMAIN}",
        destination="/opt/ml/processing/input_data",
        s3_data_distribution_type="ShardedByS3Key",
        s3_data_type="S3Prefix",
    )
]

logger.info(f"creating an opensearch index with name={opensearch_index}")

# ready to run the processing job
st = time.time()
processor.run(
    code="container/load_data_into_opensearch.py",
    inputs=inputs,
    outputs=[],
    arguments=[
        "--opensearch-cluster-domain", opensearch_domain_endpoint,
        "--opensearch-secretid", os_creds_secretid_in_secrets_manager,
        "--opensearch-index-name", opensearch_index,
        "--aws-region", aws_region,
        "--embeddings-model-endpoint-name", embeddings_model_endpoint_name,
        "--chunk-size-for-doc-split", str(CHUNK_SIZE_FOR_DOC_SPLIT),
        "--chunk-overlap-for-doc-split", str(CHUNK_OVERLAP_FOR_DOC_SPLIT),
        "--input-data-dir", "/opt/ml/processing/input_data",
        "--create-index-hint-file", CREATE_OS_INDEX_HINT_FILE,
        "--process-count", "2",
    ],
)

Close the notebook after all cells run without any error. Your data is now available in OpenSearch Service. Enter the following URL in your browser’s address bar to get a count of documents in the llm_apps_workshop_embeddings index, using the OpenSearch Service domain endpoint from the CloudFormation stack outputs. You will be prompted for the OpenSearch Service username and password, which are available from the CloudFormation stack.

https://your-opensearch-domain-endpoint/llm_apps_workshop_embeddings/_count

The browser window should show an output similar to the following. This output shows that 5,667 documents were ingested into the llm_apps_workshop_embeddings index.

{"count":5667,"_shards":{"total":5,"successful":5,"skipped":0,"failed":0}}
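If you prefer to run this check from Python rather than a browser, a small sketch like the following works; the endpoint, username, and password placeholders come from the CloudFormation stack outputs and the credentials you chose when launching the stack.

import requests

endpoint = "https://<your-opensearch-domain-endpoint>"  # OpenSearchDomainEndpoint output
resp = requests.get(
    f"{endpoint}/llm_apps_workshop_embeddings/_count",
    auth=("<username>", "<password>"),
)
print(resp.json()["count"])  # the post's run ingested 5,667 documents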
Run the Streamlit application in Studio

Now we’re ready to run the Streamlit web application for our question answering bot. This application allows the user to ask a question and then fetches the answer via the /llm/rag REST API endpoint provided by the Lambda function. Studio provides a convenient platform to host the Streamlit web application. The following steps describe how to run the Streamlit app on Studio. Alternatively, you can follow the same procedure to run the app on your laptop.

1. Open Studio and then open a new terminal.
2. Run the following commands on the terminal to clone the code repository for this post and install the Python packages needed by the application:

git clone https://github.com/aws-samples/llm-apps-workshop
cd llm-apps-workshop/blogs/rag/app
pip install -r requirements.txt

3. The API Gateway endpoint URL that is available from the CloudFormation stack output needs to be set in the webapp.py file. This is done by running the following sed command. Replace replace-with-LLMAppAPIEndpoint-value-from-cloudformation-stack-outputs with the value of the LLMAppAPIEndpoint field from the CloudFormation stack output, and then run the following commands to start a Streamlit app on Studio:

EP=replace-with-LLMAppAPIEndpoint-value-from-cloudformation-stack-outputs
# replace __API_GW_ENDPOINT__ with output from the CloudFormation stack
sed -i "s|__API_GW_ENDPOINT__|$EP|g" webapp.py
streamlit run webapp.py

When the application runs successfully, you’ll see an output similar to the following (the IP addresses you see will be different from the ones shown in this example). Note the port number (typically 8501) from the output to use as part of the URL for the app in the next step.

sagemaker-user@studio$ streamlit run webapp.py
Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False.
You can now view your Streamlit app in your browser.
Network URL: http://169.255.255.2:8501
External URL: http://52.4.240.77:8501

You can access the app in a new browser tab using a URL that is similar to your Studio domain URL. For example, if your Studio URL is https://d-randomidentifier.studio.us-east-1.sagemaker.aws/jupyter/default/lab, then the URL for your Streamlit app will be https://d-randomidentifier.studio.us-east-1.sagemaker.aws/jupyter/default/proxy/8501/webapp (notice that lab is replaced with proxy/8501/webapp). If the port number noted in the previous step is different from 8501, use that instead of 8501 in the URL for the Streamlit app. The following screenshot shows the app with a couple of user questions.

A closer look at the RAG implementation in the Lambda function

Now that we have the application working end to end, let’s take a closer look at the Lambda function. The Lambda function uses FastAPI to implement the REST API for RAG and the Mangum package to wrap the API with a handler that we package and deploy in the function. We use API Gateway to route all incoming requests to invoke the function and handle the routing internally within our application. The following code snippet shows how we find documents in the OpenSearch index that are similar to the user question and then create a prompt by combining the question and the similar documents. This prompt is then provided to the LLM for generating an answer to the user question.
@router.post(""/rag"") async def rag_handler(req: Request) -> Dict[str, Any]: # dump the received request for debugging purposes logger.info(f""req={req}"") # initialize vector db and SageMaker Endpoint _init(req) # Use the vector db to find similar documents to the query # the vector db call would automatically convert the query text # into embeddings docs = _vector_db.similarity_search(req.q, k=req.max_matching_docs) logger.info(f""here are the {req.max_matching_docs} closest matching docs to the query=\""{req.q}\"""") for d in docs: logger.info(f""---------"") logger.info(d) logger.info(f""---------"") # now that we have the matching docs, lets pack them as a context # into the prompt and ask the LLM to generate a response prompt_template = """"""Answer based on context:\n\n{context}\n\n{question}"""""" prompt = PromptTemplate( template=prompt_template, input_variables=[""context"", ""question""] ) logger.info(f""prompt sent to llm = \""{prompt}\"""") chain = load_qa_chain(llm=_sm_llm, prompt=prompt) answer = chain({""input_documents"": docs, ""question"": req.q}, return_only_outputs=True)['output_text'] logger.info(f""answer received from llm,\nquestion: \""{req.q}\""\nanswer: \""{answer}\"""") resp = {'question': req.q, 'answer': answer} if req.verbose is True: resp['docs'] = docs return resp Clean up To avoid incurring future charges, delete the resources. You can do this by deleting the CloudFormation stack as shown in the following screenshot. Figure 7: Cleaning Up Conclusion In this post, we showed how to create an enterprise ready RAG solution using a combination of AWS service, open-source LLMs and open-source Python packages. We encourage you to learn more by exploring JumpStart , Amazon Titan models, Amazon Bedrock , and OpenSearch Service and building a solution using the sample implementation provided in this post and a dataset relevant to your business. If you have questions or suggestions, leave a comment. About the Authors Amit Arora is an AI and ML Specialist Architect at Amazon Web Services, helping enterprise customers use cloud-based machine learning services to rapidly scale their innovations. He is also an adjunct lecturer in the MS data science and analytics program at Georgetown University in Washington D.C. Dr. Xin Huang is a Senior Applied Scientist for Amazon SageMaker JumpStart and Amazon SageMaker built-in algorithms. He focuses on developing scalable machine learning algorithms. His research interests are in the area of natural language processing, explainable deep learning on tabular data, and robust analysis of non-parametric space-time clustering. He has published many papers in ACL, ICDM, KDD conferences, and Royal Statistical Society: Series A. Navneet Tuteja is a Data Specialist at Amazon Web Services. Before joining AWS, Navneet worked as a facilitator for organizations seeking to modernize their data architectures and implement comprehensive AI/ML solutions. She holds an engineering degree from Thapar University, as well as a master’s degree in statistics from Texas A&M University. 
Clean up

To avoid incurring future charges, delete the resources. You can do this by deleting the CloudFormation stack, as shown in the following screenshot.

Figure 7: Cleaning Up

Conclusion

In this post, we showed how to create an enterprise-ready RAG solution using a combination of AWS services, open-source LLMs, and open-source Python packages. We encourage you to learn more by exploring JumpStart, Amazon Titan models, Amazon Bedrock, and OpenSearch Service, and by building a solution using the sample implementation provided in this post and a dataset relevant to your business. If you have questions or suggestions, leave a comment.

About the Authors

Amit Arora is an AI and ML Specialist Architect at Amazon Web Services, helping enterprise customers use cloud-based machine learning services to rapidly scale their innovations. He is also an adjunct lecturer in the MS data science and analytics program at Georgetown University in Washington, D.C.

Dr. Xin Huang is a Senior Applied Scientist for Amazon SageMaker JumpStart and Amazon SageMaker built-in algorithms. He focuses on developing scalable machine learning algorithms. His research interests are in the areas of natural language processing, explainable deep learning on tabular data, and robust analysis of non-parametric space-time clustering. He has published many papers in ACL, ICDM, and KDD conferences, and in Royal Statistical Society: Series A.

Navneet Tuteja is a Data Specialist at Amazon Web Services. Before joining AWS, Navneet worked as a facilitator for organizations seeking to modernize their data architectures and implement comprehensive AI/ML solutions. She holds an engineering degree from Thapar University, as well as a master’s degree in statistics from Texas A&M University."
Build a semantic search engine for tabular columns with Transformers and Amazon OpenSearch Service _ AWS Big Data Blog.txt,"AWS Big Data Blog

Build a semantic search engine for tabular columns with Transformers and Amazon OpenSearch Service

by Kachi Odoemene, Austin Welch, and Taylor McNally | on 01 MAR 2023 | in Amazon ML Solutions Lab, Amazon OpenSearch Service, Amazon SageMaker, Analytics, AWS Glue, Intermediate (200), Technical How-to

Finding similar columns in a data lake has important applications in data cleaning and annotation, schema matching, data discovery, and analytics across multiple data sources. The inability to accurately find and analyze data from disparate sources represents a potential efficiency killer for everyone from data scientists, medical researchers, and academics to financial and government analysts. Conventional solutions involve lexical keyword search or regular expression matching, which are susceptible to data quality issues such as absent column names or different column naming conventions across diverse datasets (for example, zip_code, zcode, postalcode).

In this post, we demonstrate a solution for searching for similar columns based on column name, column content, or both. The solution uses approximate nearest neighbor algorithms available in Amazon OpenSearch Service to search for semantically similar columns. To facilitate the search, we create feature representations (embeddings) for individual columns in the data lake using pre-trained Transformer models from the sentence-transformers library in Amazon SageMaker. Finally, to interact with and visualize results from our solution, we build an interactive Streamlit web application running on AWS Fargate. We include a code tutorial for you to deploy the resources to run the solution on sample data or your own data.

Solution overview

The following architecture diagram illustrates the two-stage workflow for finding semantically similar columns. The first stage runs an AWS Step Functions workflow that creates embeddings from tabular columns and builds the OpenSearch Service search index. The second stage, or the online inference stage, runs a Streamlit application through Fargate. The web application collects input search queries and retrieves from the OpenSearch Service index the approximate k-most-similar columns to the query.

Figure 1. Solution architecture

The automated workflow proceeds in the following steps:

1. The user uploads tabular datasets into an Amazon Simple Storage Service (Amazon S3) bucket, which invokes an AWS Lambda function that initiates the Step Functions workflow.
2. The workflow begins with an AWS Glue job that converts the CSV files into the Apache Parquet data format.
3. A SageMaker Processing job creates embeddings for each column using pre-trained models or custom column embedding models. The SageMaker Processing job saves the column embeddings for each table in Amazon S3.
4. A Lambda function creates the OpenSearch Service domain and cluster to index the column embeddings produced in the previous step.
5. Finally, an interactive Streamlit web application is deployed with Fargate. The web application provides an interface for the user to input queries to search the OpenSearch Service domain for similar columns.
You can download the code tutorial from GitHub to try this solution on sample data or your own data. Instructions on how to deploy the required resources for this tutorial are available on GitHub.

Prerequisites

To implement this solution, you need the following:

- An AWS account.
- Basic familiarity with AWS services such as the AWS Cloud Development Kit (AWS CDK), Lambda, OpenSearch Service, and SageMaker Processing.
- A tabular dataset to create the search index. You can bring your own tabular data or download the sample datasets on GitHub.

Build a search index

The first stage builds the column search engine index. The following figure illustrates the Step Functions workflow that runs this stage.

Figure 2: Step Functions workflow with multiple embedding models

Datasets

In this post, we build a search index that includes over 400 columns from over 25 tabular datasets. The datasets originate from the following public sources:

- s3://sagemaker-sample-files/datasets/tabular/
- NYC Open Data
- Chicago Data Portal

For the full list of the tables included in the index, see the code tutorial on GitHub. You can bring your own tabular dataset to augment the sample data or build your own search index. We include two Lambda functions that initiate the Step Functions workflow to build the search index for individual CSV files or a batch of CSV files, respectively.

Transform CSV to Parquet

Raw CSV files are converted to the Parquet data format with AWS Glue. Parquet is a column-oriented file format preferred in big data analytics that provides efficient compression and encoding. In our experiments, the Parquet data format offered a significant reduction in storage size compared to raw CSV files. We also used Parquet as a common data format to convert other data formats (for example, JSON and NDJSON) because it supports advanced nested data structures.

Create tabular column embeddings

To extract embeddings for individual table columns in the sample tabular datasets in this post, we use the following pre-trained models from the sentence-transformers library. For additional models, see Pretrained Models.

Model name                            | Dimension | Size (MB)
all-MiniLM-L6-v2                      | 384       | 80
all-distilroberta-v1                  | 768       | 290
average_word_embeddings_glove.6B.300d | 300       | 420

The SageMaker Processing job runs create_embeddings.py (code) for a single model. For extracting embeddings from multiple models, the workflow runs parallel SageMaker Processing jobs, as shown in the Step Functions workflow. We use each model to create two sets of embeddings:

- column_name_embeddings – Embeddings of column names (headers)
- column_content_embeddings – Average embedding of all the rows in the column

For more information about the column embedding process, see the code tutorial on GitHub; a rough illustration also follows below. An alternative to the SageMaker Processing step is to create a SageMaker batch transform job to get column embeddings on large datasets. This would require deploying the model to a SageMaker endpoint. For more information, see Use Batch Transform.

Index embeddings with OpenSearch Service

In the final step of this stage, a Lambda function adds the column embeddings to an OpenSearch Service approximate k-Nearest-Neighbor (kNN) search index. Each model is assigned its own search index. For more information about the approximate kNN search index parameters, see k-NN.
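As a rough illustration of those two embedding sets, the following sketch computes a name embedding and a content embedding for a single pandas column with the sentence-transformers library. The example column is hypothetical, and averaging the row embeddings is one simple way to represent column content, matching the description above; the tutorial's create_embeddings.py may differ in detail.

import pandas as pd
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings

# Hypothetical column; in the tutorial the data comes from Parquet files in S3.
col = pd.Series(["10001", "60614", "94103"], name="zip_code")

# Embedding of the column name (header).
column_name_embedding = model.encode(col.name)

# Average embedding of all the rows in the column.
column_content_embedding = model.encode(col.astype(str).tolist()).mean(axis=0)

print(column_name_embedding.shape, column_content_embedding.shape)  # (384,) (384,)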
Online inference and semantic search with a web app

The second stage of the workflow runs a Streamlit web application where you can provide inputs and search for semantically similar columns indexed in OpenSearch Service. The application layer uses an Application Load Balancer, Fargate, and Lambda. The application infrastructure is automatically deployed as part of the solution.

The application allows you to provide an input and search for semantically similar column names, column content, or both. Additionally, you can select the embedding model and the number of nearest neighbors to return from the search. The application receives the input, embeds it with the specified model, and uses kNN search in OpenSearch Service to search the indexed column embeddings and find the columns most similar to the given input. The search results displayed include the table names, column names, and similarity scores for the columns identified, as well as the locations of the data in Amazon S3 for further exploration.

The following figure shows an example of the web application. In this example, we searched for columns in our data lake that have Column Names (payload type) similar to district (payload). The application used all-MiniLM-L6-v2 as the embedding model and returned 10 (k) nearest neighbors from our OpenSearch Service index. The application returned transit_district, city, borough, and location as the four most similar columns based on the data indexed in OpenSearch Service. This example demonstrates the ability of the search approach to identify semantically similar columns across datasets.

Figure 3: Web application user interface

Clean up

To delete the resources created by the AWS CDK in this tutorial, run the following command:

cdk destroy --all

Conclusion

In this post, we presented an end-to-end workflow for building a semantic search engine for tabular columns. Get started today on your own data with our code tutorial available on GitHub. If you’d like help accelerating your use of ML in your products and processes, please contact the Amazon Machine Learning Solutions Lab.

About the Authors

Kachi Odoemene is an Applied Scientist at AWS AI. He builds AI/ML solutions to solve business problems for AWS customers.

Taylor McNally is a Deep Learning Architect at Amazon Machine Learning Solutions Lab. He helps customers from various industries build solutions leveraging AI/ML on AWS. He enjoys a good cup of coffee, the outdoors, and time with his family and energetic dog.

Austin Welch is a Data Scientist in the Amazon ML Solutions Lab. He develops custom deep learning models to help AWS public sector customers accelerate their AI and cloud adoption. In his spare time, he enjoys reading, traveling, and jiu-jitsu."
Build custom chatbot applications using OpenChatkit models on Amazon SageMaker _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Build custom chatbot applications using OpenChatKit models on Amazon SageMaker

by Vikram Elango, Andrew Smith, and Dhawalkumar Patel | on 12 JUN 2023 | in Amazon SageMaker, Customer Solutions, Expert (400), Technical How-to

Open-source large language models (LLMs) have become popular, allowing researchers, developers, and organizations to access these models to foster innovation and experimentation. This encourages collaboration from the open-source community to contribute to the development and improvement of LLMs.
Open-source LLMs provide transparency into the model architecture, training process, and training data, which allows researchers to understand how the model works, identify potential biases, and address ethical concerns. These open-source LLMs are democratizing generative AI by making advanced natural language processing (NLP) technology available to a wide range of users to build mission-critical business applications. GPT-NeoX, LLaMA, Alpaca, GPT4All, Vicuna, Dolly, and OpenAssistant are some of the popular open-source LLMs.

OpenChatKit is an open-source LLM used to build general-purpose and specialized chatbot applications, released by Together Computer in March 2023 under the Apache-2.0 license. This model allows developers to have more control over the chatbot’s behavior and tailor it to their specific applications. OpenChatKit provides a set of tools, a base bot, and building blocks to build fully customized, powerful chatbots. The key components are as follows:

- An instruction-tuned LLM, fine-tuned for chat from EleutherAI’s GPT-NeoX-20B with over 43 million instructions on 100% carbon-negative compute. The GPT-NeoXT-Chat-Base-20B model is based on EleutherAI’s GPT-NeoX model and is fine-tuned with data focusing on dialog-style interactions.
- Customization recipes to fine-tune the model to achieve high accuracy on your tasks.
- An extensible retrieval system enabling you to augment bot responses with information from a document repository, API, or other live-updating information source at inference time.
- A moderation model, fine-tuned from GPT-JT-6B, designed to filter which questions the bot responds to.

The increasing scale and size of deep learning models present obstacles to successfully deploying these models in generative AI applications. To meet the demands for low latency and high throughput, it becomes essential to employ sophisticated methods like model parallelism and quantization. Lacking proficiency in the application of these methods, numerous users encounter difficulties in getting started with hosting sizable models for generative AI use cases.

In this post, we show how to deploy OpenChatKit models (GPT-NeoXT-Chat-Base-20B and GPT-JT-Moderation-6B) on Amazon SageMaker using DJL Serving and open-source model parallel libraries like DeepSpeed and Hugging Face Accelerate. We use DJL Serving, which is a high-performance universal model serving solution powered by the Deep Java Library (DJL) that is programming-language agnostic. We demonstrate how the Hugging Face Accelerate library simplifies deployment of large models onto multiple GPUs, thereby reducing the burden of running LLMs in a distributed fashion. Let’s get started!

Extensible retrieval system

An extensible retrieval system is one of the key components of OpenChatKit. It enables you to customize the bot response based on a closed-domain knowledge base. Although LLMs are able to retain factual knowledge in their model parameters and can achieve remarkable performance on downstream NLP tasks when fine-tuned, their capacity to access and predict closed-domain knowledge accurately remains restricted; when presented with knowledge-intensive tasks, their performance lags behind that of task-specific architectures. You can use the OpenChatKit retrieval system to augment the knowledge in their responses from external knowledge sources such as Wikipedia, document repositories, APIs, and other information sources.
The retrieval system enables the chatbot to access current information by obtaining pertinent details in response to a specific query, thereby supplying the necessary context for the model to generate answers. To illustrate the functionality of this retrieval system, we provide support for an index of Wikipedia articles and offer example code demonstrating how to invoke a web search API for information retrieval. By following the provided documentation, you can integrate the retrieval system with any dataset or API during the inference process, allowing the chatbot to incorporate dynamically updated data into its responses.

Moderation model

Moderation models are important in chatbot applications to enforce content filtering and quality control, protect user safety, and meet legal and compliance requirements. Moderation is a difficult and subjective task that depends a lot on the domain of the chatbot application. OpenChatKit provides tools to moderate the chatbot application and monitor input text prompts for any inappropriate content. The moderation model provides a good baseline that can be adapted and customized to various needs.

OpenChatKit has a 6-billion-parameter moderation model, GPT-JT-Moderation-6B, which can moderate the chatbot to limit the inputs to the moderated subjects. Although the model itself does have some moderation built in, TogetherComputer trained a GPT-JT-Moderation-6B model with Ontocord.ai’s OIG-moderation dataset. This model runs alongside the main chatbot to check that both the user input and the answer from the bot don’t contain inappropriate results. You can also use it to detect any out-of-domain questions to the chatbot and override when the question is not part of the chatbot’s domain.

The following diagram illustrates the OpenChatKit workflow.

Extensible retrieval system use cases

Although we can apply this technique in various industries to build generative AI applications, for this post we discuss use cases in the financial industry. Retrieval augmented generation can be employed in financial research to automatically generate research reports on specific companies, industries, or financial products. By retrieving relevant information from internal knowledge bases, financial archives, news articles, and research papers, you can generate comprehensive reports that summarize key insights, financial metrics, market trends, and investment recommendations. You can use this solution to monitor and analyze financial news, market sentiment, and trends.

Solution overview

The following steps are involved in building a chatbot using OpenChatKit models and deploying them on SageMaker:

1. Download the chat base GPT-NeoXT-Chat-Base-20B model and package the model artifacts to be uploaded to Amazon Simple Storage Service (Amazon S3).
2. Use a SageMaker large model inference (LMI) container, configure the properties, and set up custom inference code to deploy this model.
3. Configure model parallel techniques and use inference optimization libraries in the DJL Serving properties. We use Hugging Face Accelerate as the engine for DJL Serving. Additionally, we define tensor parallel configurations to partition the model.
4. Create a SageMaker model and endpoint configuration, and deploy the SageMaker endpoint.

You can follow along by running the notebook in the GitHub repo.

Download the OpenChatKit model

First, we download the OpenChatKit base model. We use the huggingface_hub library's snapshot_download function to download the model, which downloads an entire repository at a given revision.
Downloads are made concurrently to speed up the process. See the following code:

from huggingface_hub import snapshot_download
from pathlib import Path
import os

# - This will download the model into the current directory wherever the jupyter notebook is running
local_model_path = Path("./openchatkit")
local_model_path.mkdir(exist_ok=True)
model_name = "togethercomputer/GPT-NeoXT-Chat-Base-20B"
# Only download pytorch checkpoint files
allow_patterns = ["*.json", "*.pt", "*.bin", "*.txt", "*.model"]

# - Leverage the snapshot library to download the model since the model is stored in a repository using LFS
chat_model_download_path = snapshot_download(
    repo_id=model_name,            # A user or an organization name and a repo name
    cache_dir=local_model_path,    # Path to the folder where cached files are stored
    allow_patterns=allow_patterns, # Only files matching at least one pattern are downloaded
)

DJL Serving properties
You can use SageMaker LMI containers to host large generative AI models without providing your own inference code, which is extremely useful when there is no custom preprocessing of the input data or postprocessing of the model’s predictions. You can also deploy a model using custom inference code, and in this post we demonstrate how to deploy OpenChatKit models with custom inference code.

SageMaker expects the model artifacts in tar format. We create each OpenChatKit model with the following files: serving.properties and model.py. The serving.properties configuration file indicates to DJL Serving which model parallelization and inference optimization libraries you would like to use. The following is the configuration we use in this file:

openchatkit/serving.properties
engine = Python
option.tensor_parallel_degree = 4
option.s3url = {{s3url}}

The available parameters are as follows:

engine – The engine for DJL to use.
option.entryPoint – The entry point Python file or module. This should align with the engine that is being used.
option.s3url – Set this to the URI of the S3 bucket that contains the model.
option.modelid – If you want to download the model from huggingface.co, you can set option.modelid to the model ID of a pretrained model hosted inside a model repository on huggingface.co (https://huggingface.co/models). The container uses this model ID to download the corresponding model repository on huggingface.co.
option.tensor_parallel_degree – Set this to the number of GPU devices over which DeepSpeed needs to partition the model. This parameter also controls the number of workers per model that will be started up when DJL Serving runs. For example, if we have an 8-GPU machine and we are creating eight partitions, then we will have one worker per model to serve the requests. It’s necessary to tune the parallelism degree and identify the optimal value for a given model architecture and hardware platform. We call this ability inference-adapted parallelism.

Refer to Configurations and settings for an exhaustive list of options.

OpenChatKit models
The OpenChatKit base model implementation has the following four files:

model.py – This file implements the handling logic for the main OpenChatKit GPT-NeoX model. It receives the inference input request, loads the model, loads the Wikipedia index, and serves the response. Refer to model.py (created as part of the notebook) for additional details.
model.py uses the following key classes:

OpenChatKitService – This handles passing the data between the GPT-NeoX model, Faiss search, and conversation object. WikipediaIndex and Conversation objects are initialized, and input chat conversations are sent to the index to search for relevant content from Wikipedia. This class also generates a unique ID for each invocation if one is not supplied, for the purpose of storing the prompts in Amazon DynamoDB.
ChatModel – This class loads the model and tokenizer and generates the response. It handles partitioning the model across multiple GPUs using tensor_parallel_degree, and configures the dtypes and device_map. The prompts are passed to the model to generate responses. A stopping criterion, StopWordsCriteria, is configured so the generation only produces the bot response on inference.
ModerationModel – We use two moderation models in the ModerationModel class: the input model, which flags inappropriate user input to the chat model so the inference result can be overridden, and the output model, which checks the generated response and overrides it if needed. We classify the input prompt and output response with the following possible labels:
  casual
  needs caution
  needs intervention (this is flagged to be moderated by the model)
  possibly needs caution
  probably needs caution

wikipedia_prepare.py – This file handles downloading and preparing the Wikipedia index. In this post, we use a Wikipedia index provided on Hugging Face datasets. To search the Wikipedia documents for relevant text, the index needs to be downloaded from Hugging Face because it’s not packaged elsewhere. The wikipedia_prepare.py file is responsible for handling the download when imported. Only one of the multiple processes running for inference clones the repository; the rest wait until the files are present in the local file system.

wikipedia.py – This file is used for searching the Wikipedia index for contextually relevant documents. The input query is tokenized, and embeddings are created using mean_pooling. We compute cosine similarity distance metrics between the query embedding and the Wikipedia index to retrieve contextually relevant Wikipedia sentences. Refer to wikipedia.py for implementation details.

# function to create sentence embedding using mean_pooling
def mean_pooling(token_embeddings, mask):
    token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.0)
    sentence_embeddings = token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None]
    return sentence_embeddings

# function to compute cosine similarity distance between 2 embeddings
def cos_sim_2d(x, y):
    norm_x = x / np.linalg.norm(x, axis=1, keepdims=True)
    norm_y = y / np.linalg.norm(y, axis=1, keepdims=True)
    return np.matmul(norm_x, norm_y.T)

conversation.py – This file is used for storing and retrieving the conversation thread in DynamoDB for passing to the model and user. conversation.py is adapted from the open-source OpenChatKit repository. This file is responsible for defining the object that stores the conversation turns between the human and the model. With this, the model is able to retain a session for the conversation, allowing a user to refer to previous messages. Because SageMaker endpoint invocations are stateless, this conversation needs to be stored in a location external to the endpoint instances. On startup, the instance creates a DynamoDB table if it doesn’t exist. All updates to the conversation are then stored in DynamoDB based on the session_id key, which is generated by the endpoint.
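As a minimal sketch of this session persistence pattern (the table and attribute names here are assumptions for illustration, not the exact schema used by conversation.py):

import boto3

# Illustrative only: table and attribute names are assumptions, not conversation.py's schema
table = boto3.resource("dynamodb").Table("openchatkit-conversations")

def load_conversation(session_id):
    # Fetch the stored conversation string for this session, if any
    item = table.get_item(Key={"session_id": session_id}).get("Item")
    return item["dialog"] if item else ""

def save_conversation(session_id, dialog):
    # Persist the updated conversation so later stateless invocations can retrieve it
    table.put_item(Item={"session_id": session_id, "dialog": dialog})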
Any invocation with a session ID will retrieve the associated conversation string and update it as required.

Build an LMI inference container with custom dependencies
The index search uses Facebook’s Faiss library for performing the similarity search. Because this isn’t included in the base LMI image, the container needs to be adapted to install this library. The following code defines a Dockerfile that installs Faiss from source alongside other libraries needed by the bot endpoint. We use the sm-docker utility to build and push the image to Amazon Elastic Container Registry (Amazon ECR) from Amazon SageMaker Studio. Refer to Using the Amazon SageMaker Studio Image Build CLI to build container images from your Studio notebooks for more details.

The DJL container doesn’t have Conda installed, so Faiss needs to be cloned and compiled from source. To install Faiss, the dependencies for using the BLAS APIs and Python support need to be installed first. After these packages are installed, Faiss is configured to use AVX2 and CUDA before being compiled with the Python extensions installed. pandas, fastparquet, boto3, and git-lfs are installed afterwards because these are required for downloading and reading the index files.

FROM 763104351884.dkr.ecr.us-east-1.amazonaws.com/djl-inference:0.21.0-deepspeed0.8.0-cu117
ARG FAISS_URL=https://github.com/facebookresearch/faiss.git
RUN apt-get update && apt-get install -y git-lfs wget cmake pkg-config build-essential apt-utils
RUN apt search openblas && apt-get install -y libopenblas-dev swig
RUN git clone $FAISS_URL && \
    cd faiss && \
    cmake -B build . -DFAISS_OPT_LEVEL=avx2 -DCMAKE_CUDA_ARCHITECTURES="86" && \
    make -C build -j faiss && \
    make -C build -j swigfaiss && \
    make -C build -j swigfaiss_avx2 && \
    (cd build/faiss/python && python -m pip install .)
RUN pip install pandas fastparquet boto3 && \
    git lfs install --skip-repo && \
    apt-get clean all

Create the model
Now that we have the Docker image in Amazon ECR, we can proceed with creating the SageMaker model object for the OpenChatKit models. We deploy GPT-NeoXT-Chat-Base-20B as the chat model, with input and output moderation using GPT-JT-Moderation-6B. Refer to create_model for more details.

from sagemaker.utils import name_from_base

chat_model_name = name_from_base(f"gpt-neoxt-chatbase-ds")
print(chat_model_name)

create_model_response = sm_client.create_model(
    ModelName=chat_model_name,
    ExecutionRoleArn=role,
    PrimaryContainer={
        "Image": chat_inference_image_uri,
        "ModelDataUrl": s3_code_artifact,
    },
)
chat_model_arn = create_model_response["ModelArn"]
print(f"Created Model: {chat_model_arn}")

Configure the endpoint
Next, we define the endpoint configurations for the OpenChatKit models. We deploy the models using the ml.g5.12xlarge instance type. Refer to create_endpoint_config for more details.
chat_endpoint_config_name = f"{chat_model_name}-config"
chat_endpoint_name = f"{chat_model_name}-endpoint"

chat_endpoint_config_response = sm_client.create_endpoint_config(
    EndpointConfigName=chat_endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "variant1",
            "ModelName": chat_model_name,
            "InstanceType": "ml.g5.12xlarge",
            "InitialInstanceCount": 1,
            "ContainerStartupHealthCheckTimeoutInSeconds": 3600,
        },
    ],
)

Deploy the endpoint
Finally, we create an endpoint using the model and endpoint configuration we defined in the previous steps:

chat_create_endpoint_response = sm_client.create_endpoint(
    EndpointName=f"{chat_endpoint_name}",
    EndpointConfigName=chat_endpoint_config_name
)
print(f"Created Endpoint: {chat_create_endpoint_response['EndpointArn']}")

Run inference from OpenChatKit models
Now it’s time to send inference requests to the model and get the responses. We pass the input text prompt and model parameters such as temperature, top_k, and max_new_tokens. The quality of the chatbot responses depends on the parameters specified, so it’s recommended to benchmark model performance against these parameters to find the optimal setting for your use case. The input prompt is first sent to the input moderation model, and the output is sent to ChatModel to generate the responses. During this step, the model uses the Wikipedia index to retrieve contextually relevant sections, which are passed to the model as part of the prompt to get domain-specific responses. Finally, the model response is sent to the output moderation model to check for classification, and then the responses are returned. See the following code:

def chat(prompt, session_id=None, **kwargs):
    if session_id:
        chat_response_model = smr_client.invoke_endpoint(
            EndpointName=chat_endpoint_name,
            Body=json.dumps(
                {
                    "inputs": prompt,
                    "parameters": {
                        "temperature": 0.6,
                        "top_k": 40,
                        "max_new_tokens": 512,
                        "session_id": session_id,
                        "no_retrieval": True,
                    },
                }
            ),
            ContentType="application/json",
        )
    else:
        chat_response_model = smr_client.invoke_endpoint(
            EndpointName=chat_endpoint_name,
            Body=json.dumps(
                {
                    "inputs": prompt,
                    "parameters": {
                        "temperature": 0.6,
                        "top_k": 40,
                        "max_new_tokens": 512,
                    },
                }
            ),
            ContentType="application/json",
        )
    response = chat_response_model["Body"].read().decode("utf8")
    return response

prompts = "What does a data engineer do?"
chat(prompts)

Clean up
Follow the instructions in the cleanup section of the notebook to delete the resources provisioned as part of this post to avoid unnecessary charges. Refer to Amazon SageMaker Pricing for details about the cost of the inference instances.

Conclusion
In this post, we discussed the importance of open-source LLMs and how to deploy an OpenChatKit model on SageMaker to build next-generation chatbot applications. We discussed various components of OpenChatKit models, moderation models, and how to use an external knowledge source like Wikipedia for retrieval augmented generation (RAG) workflows. You can find step-by-step instructions in the GitHub notebook. Let us know about the amazing chatbot applications you’re building. Cheers!

About the Authors
Dhawal Patel is a Principal Machine Learning Architect at AWS. He has worked with organizations ranging from large enterprises to mid-sized startups on problems related to distributed computing and artificial intelligence. He focuses on deep learning, including the NLP and computer vision domains.
He helps customers achieve high performance model inference on SageMaker.

Vikram Elango is a Sr. AIML Specialist Solutions Architect at AWS, based in Virginia, US. He is currently focused on generative AI, LLMs, prompt engineering, large model inference optimization, and scaling ML across enterprises. Vikram helps financial and insurance industry customers with design and thought leadership to build and deploy machine learning applications at scale. In his spare time, he enjoys traveling, hiking, cooking, and camping with his family.

Andrew Smith is a Cloud Support Engineer in the SageMaker, Vision & Other team at AWS, based in Sydney, Australia. He supports customers using many AI/ML services on AWS with expertise in working with Amazon SageMaker. Outside of work, he enjoys spending time with friends and family as well as learning about different technologies." Buildigo.txt,"From there, Buildigo has big ambitions for the future. “We aim to be the number one player in this market within 5 years,” says Huegli. “Using AWS, we can scale at speed while remaining focused on delivering what our customers want.”

Buildigo runs its customer-facing website, databases, data lake, and development pipeline on AWS. It uses AWS Step Functions, a low-code visual workflow service for building distributed applications and automating IT and business processes, to help developers keep on top of complex application workflows.

Buildigo Gains Competitive Advantage with AWS Technology
AWS CloudFormation lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code. Buildigo offers an easy way to link homeowners and renters with local craftspeople who can work on their houses and gardens. The Swiss startup’s online platform facilitates communication about jobs, the delivery of quotes, and payment for completed work. The flexibility of Buildigo’s platform helped when it had to onboard hundreds of new tradespeople after the acquisition. The 200-year-old insurance company handles tens of thousands of damage claims each year and has contacts with hundreds of local traders. Using AWS, Buildigo could cope with managing this increase in service providers.

Buildigo recognizes that data is an asset. It analyzes customer usage to give employees insight into how to improve its services. For this, it uses AWS Glue, a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. It also uses Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud, for its storage. The team has noticed seasonal trends, such as a rising demand for gardeners during the summer months and electricians during the winter.
It also noticed a correlation between rising energy prices and increased demand for installing alternative heating systems such as heat pumps and solar installations. Buildigo matches homeowners and renters with the craftspeople they need to work on their properties. Based in Switzerland, its cloud-based system matches by skills and location and provides simple payment solutions. The company is owned by Swiss insurance company La Mobilière, which has 2 million customers.

The company is also able to control costs as it grows. “As a young company, expanding in a cost-effective way is essential to our success,” says Mathieu Meylan, chief technology officer at Buildigo. “Using AWS serverless technology, we only pay for the resources we use. This helps us to manage our overheads and invest any funds saved into mission-critical projects.” Buildigo scales to accommodate rising customer demand, including a 4x rise in demand for solar panels and heat pumps in the last 12 months.

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience. Using AWS, Buildigo can instantly scale its compute resources to accommodate rising customer demand, so its users always experience a responsive service. This capability was vital to Buildigo during the COVID-19 pandemic because demand for its services fluctuated wildly. Demand for craftspeople disappeared at first but then increased rapidly as people spent more time at home. With the shift to remote working during lockdowns, Buildigo saw many requests for the creation of home offices.

AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development.

One recently launched feature is mobile device support. “Many of our customers, especially craftspeople, are on the move and prefer to access our services on their phones,” says Huegli. “We’re now able to offer Buildigo on any device or operating system.”

Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.

As a rapidly growing young company, Buildigo needs to quickly scale its IT systems as customer demand increases while minimizing costs and maintenance tasks for its small team. Buildigo can quickly roll out new capabilities to improve its offerings as it learns more about its customers. The company’s IT team has a short development cycle and typically deploys a new feature at least once a week using AWS CloudFormation and the AWS Cloud Development Kit. Buildigo’s service differentiator is not being available to all traders. Instead, craftspeople can only join the service by invitation. It aims to provide the best-quality craftspeople and the most suitable individual for a job, as opposed to giving homeowners a long list of unvetted providers. The next steps for Buildigo include automating damage claims processing and providing insurance claimants with a quick way to get quotes for repair work. It is using Amazon API Gateway, a fully managed service for monitoring and securing APIs at scale, and AWS Lambda, a serverless, event-driven computing service, to run this automation without worrying about infrastructure.
To support quick development, the company uses AWS CloudFormation to model, provision, and manage its resources by treating infrastructure as code. It also uses Amazon CloudFront, a content delivery network service, which automatically adapts multimedia elements on Buildigo’s website to different screen sizes and devices.

About Buildigo
Buildigo offers an easy way to link homeowners and renters with local craftspeople who can work on their houses and gardens. The Swiss startup needs to quickly scale its IT systems as customer demand increases while minimizing costs and maintenance tasks for its small team. It built its platform on AWS, running its customer-facing website, databases, data lake, and development pipeline in the AWS cloud. This enables Buildigo to focus on developing its core application, provide a responsive service for customers, and release weekly feature updates to meet their changing needs.

Buildigo prioritized cutting-edge, cloud-based technologies from its inception. The decision proved to be a competitive advantage and was a factor in Swiss insurance company La Mobilière acquiring the company in 2020. “Several companies offer similar services but we wanted a company using state-of-the-art technology,” says Michael Huegli, managing director at Buildigo, who was previously head of home ecosystem at La Mobilière. “We knew that because Buildigo built its platform on AWS, it would be scalable, reliable, and support fast development times.”

These insights allow Buildigo to make sure it has the right tradespeople in place to meet customer demand at the right times. It also helps it to tailor marketing messages so they’re relevant to customer interests, thus increasing job requests. Buildigo built its platform on Amazon Web Services (AWS) from the start, so it could focus on developing its core services. It chose AWS for its scalability and managed services, which means the team can concentrate on developing new features. Using AWS, it provides a responsive service for customers and releases weekly feature updates to meet their changing needs.

The model of actively selecting tradespeople has proved popular because it offers more than simple directory services or user reviews; instead it relies on real recommendations. Since relaunching in February 2021, the company has matched 3,000 job requests with hundreds of tradespeople.
Buildigo Scales at Speed While Delivering for Customers with AWS
Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. Buildigo releases at least one new feature per week." Building a medical image search platform on AWS _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Building a medical image search platform on AWS
by Gang Fu, Erhan Bas, and Ujjwal Ratan | on 14 OCT 2020 | in Amazon Comprehend Medical, Amazon OpenSearch Service, Amazon SageMaker, Analytics, Artificial Intelligence, AWS Amplify, AWS AppSync, AWS Fargate

Improving radiologist efficiency and preventing burnout is a primary goal for healthcare providers. A nationwide study published in Mayo Clinic Proceedings in 2015 showed radiologist burnout percentage at a concerning 61% [1]. In addition, the report concludes that “burnout and satisfaction with work-life balance in US physicians worsened from 2011 to 2014. More than half of US physicians are now experiencing professional burnout.” [2] As technologists, we’re looking for ways to put new and innovative solutions in the hands of physicians to make them more efficient, reduce burnout, and improve care quality.

To reduce burnout and improve value-based care through data-driven decision-making, artificial intelligence (AI) can be used to unlock the information trapped in the vast amount of unstructured data (such as images, text, and voice) and create a clinically actionable knowledge base. AWS AI services can derive insights and relationships from free-form medical reports, automate the knowledge sharing process, and eventually improve the personalized care experience.

In this post, we use convolutional neural networks (CNNs) as a feature extractor to convert medical images into a one-dimensional feature vector with a size of 1024. We call this process medical image embedding. Then we index the image feature vector using the K-nearest neighbors (KNN) algorithm in Amazon OpenSearch Service to build a similarity-based image retrieval system. Additionally, we use the AWS managed natural language processing (NLP) service Amazon Comprehend Medical to perform named entity recognition (NER) against free text clinical reports. The detected named entities are also linked to a medical ontology, ICD-10-CM, to enable simple aggregation and distribution analysis. The presented solution also includes a front-end React web application and a backend GraphQL API managed by AWS Amplify and AWS AppSync, and authentication is handled by Amazon Cognito.

After deploying this working solution, the end-users (healthcare providers) can search through a repository of unstructured free text and medical images, conduct analytical operations, and use it in medical training and clinical decision support. This eliminates the need to manually analyze all the images and reports to get to the most relevant ones, and using a system like this improves the provider’s efficiency. The following graphic shows an example end result of the deployed application.

Dataset and architecture
We use the MIMIC CXR dataset to demonstrate how this working solution can benefit healthcare providers, in particular, radiologists. MIMIC CXR is a publicly available database of chest X-ray images in DICOM format and the associated radiology reports as free text files [3].
The methods for data collection and the data structures in this dataset have been well documented and are very detailed [3]. Also, this is a restricted-access resource; to access the files, you must be a registered user and sign the data use agreement. The following sections provide more details on the components of the architecture. The following diagram illustrates the solution architecture.

The architecture comprises offline data transformation and online query components. In the offline data transformation step, the unstructured data, including free text and image files, is converted into structured data. Electronic Health Record (EHR) radiology reports as free text are processed using Amazon Comprehend Medical, an NLP service that uses machine learning to extract relevant medical information from unstructured text, such as medical conditions including clinical signs, diagnosis, and symptoms. The named entities are identified and mapped to structured vocabularies, such as the ICD-10 Clinical Modifications (CM) ontology. The unstructured text plus structured named entities are stored in Amazon ES to enable free text search and term aggregations.

The medical images from the Picture Archiving and Communication System (PACS) are converted into vector representations using a pretrained deep learning model deployed in an Amazon Elastic Container Service (Amazon ECS) AWS Fargate cluster. A similar visual search on AWS has been published previously for online retail product image search; it used an Amazon SageMaker built-in KNN algorithm for similarity search, which supports different index types and distance metrics. We took advantage of the KNN for Amazon ES to find the k closest images from a feature space, as demonstrated on the GitHub repo. KNN search is supported in Amazon ES version 7.4+. The container running on the ECS Fargate cluster reads medical images in DICOM format, carries out image embedding using a pretrained model, and saves a PNG thumbnail in an Amazon Simple Storage Service (Amazon S3) bucket, which serves as the storage for the AWS Amplify React web application. It also parses out the DICOM image metadata and saves it in Amazon DynamoDB. The image vectors are saved in an OpenSearch cluster and are used for the KNN visual search, which is implemented in an AWS Lambda function.

The unstructured data from EHR and PACS needs to be transferred to Amazon S3 to trigger the serverless data processing pipeline through the Lambda functions. You can achieve this data transfer by using AWS Storage Gateway or AWS DataSync, which is out of the scope of this post. The online query API, including the GraphQL schemas and resolvers, was developed in AWS AppSync. The front-end web application was developed using the Amplify React framework, which can be deployed using the Amplify CLI. The detailed AWS CloudFormation templates and sample code are available in the GitHub repo.

Solution overview
To deploy the solution, you complete the following steps:

1. Deploy the Amplify React web application for online search.
2. Deploy the image-embedding container to AWS Fargate.
3. Deploy the data-processing pipeline and AWS AppSync API.

Deploying the Amplify React web application
The first step creates the Amplify React web application, as shown in the following diagram.

1. Install and configure the AWS Command Line Interface (AWS CLI).
2. Install the AWS Amplify CLI.
3. Clone the code base with stepwise instructions.
4. Go to your code base folder and initialize the Amplify app using the command amplify init.
You must answer a series of questions, like the name of the Amplify app. After this step, you have the following changes in your local and cloud environments:

A new folder named amplify is created in your local environment
A file named aws-exports.js is created in the local src folder
A new Amplify app is created on the AWS Cloud with the name provided during deployment (for example, medical-image-search)
A CloudFormation stack is created on the AWS Cloud with the prefix amplify-

You create authentication and storage services for your Amplify app afterwards using the following commands:

amplify add auth
amplify add storage
amplify push

When the CloudFormation nested stacks for authentication and storage are successfully deployed, you can see that the new Amazon Cognito user pool is created as the authentication backend and an S3 bucket is created as the storage backend. Save the Amazon Cognito user pool ID and S3 bucket name from the Outputs tab of the corresponding CloudFormation nested stack (you use these later). The following screenshot shows the location of the user pool ID on the Outputs tab. The following screenshot shows the location of the bucket name on the Outputs tab.

Deploying the image-embedding container to AWS Fargate
We use the Amazon SageMaker Inference Toolkit to serve the PyTorch inference model, which converts a medical image in DICOM format into a feature vector with a size of 1024. To create a container with all the dependencies, you can either use pre-built deep learning container images or derive a Dockerfile from the Amazon SageMaker PyTorch inference CPU container, like the one from the GitHub repo, in the container folder. You can build the Docker container and push it to Amazon ECR manually or by running the shell script build_and_push.sh. You use the repository image URI for the Docker container later to deploy the AWS Fargate cluster. The following screenshot shows the sagemaker-pytorch-inference repository on the Amazon ECR console.

We use Multi Model Server (MMS) to serve the inference endpoint. You need to install MMS with pip locally, use the Model archiver CLI to package model artifacts into a single model archive .mar file, and upload it to an S3 bucket to be served by a containerized inference endpoint. The model inference handler is defined in dicom_featurization_service.py in the MMS folder. If you have a domain-specific pretrained PyTorch model, place the model.pth file in the MMS folder; otherwise, the handler uses a pretrained DenseNet-121 [4] for image processing. See the following code:

model_file_path = os.path.join(model_dir, "model.pth")
if os.path.isfile(model_file_path):
    model = torch.load(model_file_path)
else:
    model = models.densenet121(pretrained=True)
    model = model._modules.get('features')
    model.add_module("end_relu", nn.ReLU())
    model.add_module("end_globpool", nn.AdaptiveAvgPool2d((1, 1)))
    model.add_module("end_flatten", nn.Flatten())
model = model.to(self.device)
model.eval()

The intermediate result of this CNN-based model is a feature-vector representation of the image: the output of the convolutional layers before the final classification layer is flattened into a vector representation. Run the following command in the MMS folder to package up the model archive file:

model-archiver -f --model-name dicom_featurization_service --model-path ./ --handler dicom_featurization_service:handle --export-path ./

The preceding code generates a package file named dicom_featurization_service.mar.
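Before uploading the archive, it can be worth sanity-checking the feature extractor on its own. The following standalone sketch (assuming the default DenseNet-121 path above, with a random tensor standing in for a preprocessed DICOM frame) confirms that the flattened network produces the 1024-dimensional embedding the search index expects:

import torch
import torch.nn as nn
from torchvision import models

# Rebuild the feature extractor the same way the handler does
model = models.densenet121(pretrained=True)._modules.get('features')
model.add_module("end_relu", nn.ReLU())
model.add_module("end_globpool", nn.AdaptiveAvgPool2d((1, 1)))
model.add_module("end_flatten", nn.Flatten())
model.eval()

# A dummy 3-channel 224x224 image stands in for a preprocessed DICOM frame
with torch.no_grad():
    vector = model(torch.randn(1, 3, 224, 224))
print(vector.shape)  # expected: torch.Size([1, 1024])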
Create a new S3 bucket and upload the package file to that bucket with a public-read Access Control List (ACL). See the following code:

aws s3 cp ./dicom_featurization_service.mar s3://<S3bucketname>/ --acl public-read --profile <profilename>

You’re now ready to deploy the image-embedding inference model to the AWS Fargate cluster using the CloudFormation template ecsfargate.yaml in the CloudFormationTemplates folder. You can deploy using the AWS CLI: go to the CloudFormationTemplates folder and copy the following command:

aws cloudformation deploy --capabilities CAPABILITY_IAM --template-file ./ecsfargate.yaml --stack-name <stackname> --parameter-overrides ImageUrl=<imageURI> InferenceModelS3Location=https://<S3bucketname>.s3.amazonaws.com/dicom_featurization_service.mar --profile <profilename>

You need to replace the following placeholders:

stackname – A unique name to refer to this CloudFormation stack
imageURI – The image URI for the MMS Docker container uploaded in Amazon ECR
S3bucketname – The MMS package in the S3 bucket, such as https://<S3bucketname>.s3.amazonaws.com/dicom_featurization_service.mar
profilename – Your AWS CLI profile name (default if not named)

Alternatively, you can choose Launch stack for the us-east-1 and us-west-2 Regions. After the CloudFormation stack creation is complete, go to the stack Outputs tab on the AWS CloudFormation console and copy the InferenceAPIUrl for later deployment. See the following screenshot. You can delete this stack after the offline image embedding jobs are finished to save costs, because it’s not used for online queries.

Deploying the data-processing pipeline and AWS AppSync API
You deploy the image and free text data-processing pipeline and AWS AppSync API backend through another CloudFormation template named AppSyncBackend.yaml in the CloudFormationTemplates folder, which creates the AWS resources for this solution. See the following solution architecture. To deploy this stack using the AWS CLI, go to the CloudFormationTemplates folder and copy the following command:

aws cloudformation deploy --capabilities CAPABILITY_NAMED_IAM --template-file ./AppSyncBackend.yaml --stack-name <stackname> --parameter-overrides AuthorizationUserPool=<AuthorizationUserPool> PNGBucketName=<PNGBucketName> InferenceEndpointURL=<InferenceEndpointURL> --profile <profilename>

Replace the following placeholders:

stackname – A unique name to refer to this CloudFormation stack
AuthorizationUserPool – Amazon Cognito user pool
PNGBucketName – Amazon S3 bucket name
InferenceEndpointURL – The inference API endpoint
Profilename – The AWS CLI profile name (use default if not named)

Alternatively, you can choose Launch stack for the us-east-1 and us-west-2 Regions. You can download the Lambda function for medical image processing, CMprocessLambdaFunction.py, and its dependency layer separately if you deploy this stack in AWS Regions other than us-east-1 and us-west-2. Because their file size exceeds the CloudFormation template limit, you need to upload them to your own S3 bucket (either create a new S3 bucket or use the existing one, like the aforementioned S3 bucket for hosting the MMS model package file) and override the LambdaBucket mapping parameter using your own bucket name.

Save the AWS AppSync API URL and AWS Region from the settings on the AWS AppSync console. Edit the src/aws-exports.js file in your local environment and replace the placeholders with those values:

const awsmobile = {
    "aws_appsync_graphqlEndpoint": "",
    "aws_appsync_region": "",
    "aws_appsync_authenticationType": "AMAZON_COGNITO_USER_POOLS"
};

After this stack is successfully deployed, you’re ready to use this solution.
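At query time, the visual search boils down to a k-NN request against the indexed image vectors. The following sketch shows the general shape of such a query for the Amazon ES/OpenSearch k-NN plugin; the index and field names here are assumptions for illustration, since the actual names are defined in the Lambda function:

# Illustrative k-NN search (index and field names are assumptions)
query_vector = [0.0] * 1024  # a 1024-d embedding from the featurization endpoint

knn_query = {
    "size": 5,
    "query": {
        "knn": {
            "image_vector": {
                "vector": query_vector,
                "k": 5,  # return the 5 nearest images
            }
        }
    }
}
# es_client.search(index="medical-image-search", body=knn_query)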
If you have in-house EHR and PACS databases, you can set up the AWS Storage Gateway to transfer data to the S3 bucket to trigger the transformation jobs. Alternatively, you can use the public dataset MIMIC CXR: download the MIMIC CXR dataset from PhysioNet (to access the files, you must be a credentialed user and sign the data use agreement for the project) and upload the DICOM files to the S3 bucket mimic-cxr-dicom- and the free text radiology reports to the S3 bucket mimic-cxr-report- . If everything works as expected, you should see the new records created in the DynamoDB table medical-image-metadata and the Amazon ES domain medical-image-search.

You can test the Amplify React web application locally by running the following command:

npm install && npm start

Or you can publish the React web app by deploying it in Amazon S3 with an AWS CloudFront distribution, by first entering the following code:

amplify hosting add

Then, enter the following code:

amplify publish

You can see the hosting endpoint for the Amplify React web application after deployment.

Conclusion
We have demonstrated how to deploy, index, and search medical images on AWS, in a design that segregates the offline data ingestion and online search query functions. You can use AWS AI services to transform unstructured data, such as medical images and radiology reports, into structured data. By default, the solution uses a general-purpose model trained on ImageNet to extract features from images. However, this default model may not be accurate enough to extract medical image features, because medical images in their raw form differ fundamentally from natural images in appearance, size, and features. Such differences make it hard to train commonly adopted triplet-based learning networks [5], where semantically relevant images or objects can be easily defined or ranked. To improve search relevancy, we performed an experiment by using the same MIMIC CXR dataset and the derived diagnosis labels to train a weakly supervised disease classification network similar to Wang et al. [6]. We found this domain-specific pretrained model yielded qualitatively better visual search results, so it’s recommended to bring your own model (BYOM) to this search platform for real-world implementation.

The methods presented here enable you to perform indexing, searching, and aggregation against unstructured images in addition to free text. They set the stage for future work that can combine these features into a multimodal medical image search engine. Information retrieval from unstructured corpora of clinical notes and images is a time-consuming and tedious task. Our solution allows radiologists to become more efficient and helps them reduce potential burnout. To find the latest development of this solution, check out medical image search on GitHub.

References:
[1] https://www.radiologybusiness.com/topics/leadership/radiologist-burnout-are-we-done-yet
[2] https://www.mayoclinicproceedings.org/article/S0025-6196(15)00716-8/abstract#secsectitle0010
[3] Johnson, Alistair EW, et al. “MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports.” Scientific Data 6, 2019.
[4] Huang, Gao, et al. “Densely connected convolutional networks.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[5] Wang, Jiang, et al. “Learning fine-grained image similarity with deep ranking.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
[6] Wang, Xiaosong, et al.
“Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases.” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.

About the Authors
Gang Fu is a Healthcare Solution Architect at AWS. He holds a PhD in Pharmaceutical Science from the University of Mississippi and has over ten years of technology and biomedical research experience. He is passionate about technology and the impact it can make on healthcare.

Ujjwal Ratan is a Principal Machine Learning Specialist Solution Architect in the Global Healthcare and Lifesciences team at Amazon Web Services. He works on the application of machine learning and deep learning to real world industry problems like medical imaging, unstructured clinical text, genomics, precision medicine, clinical trials and quality of care improvement. He has expertise in scaling machine learning/deep learning algorithms on the AWS cloud for accelerated training and inference. In his free time, he enjoys listening to (and playing) music and taking unplanned road trips with his family.

Erhan Bas is a Senior Applied Scientist in the AWS Rekognition team, currently developing deep learning algorithms for computer vision applications. His expertise is in machine learning and large scale image analysis techniques, especially in biomedical, life sciences and industrial inspection technologies. He enjoys playing video games, drinking coffee, and traveling with his family." Building a Scalable Interactive Learning Application for Kids Using AWS Services with Yellow Class _ Case Study _ AWS.txt,"Security is of the utmost importance because the company’s customers are families with children, so data is encrypted in transit and at rest. To further fortify security, Yellow Class performed an AWS Well-Architected review, which is a process that helps the company learn, measure, and build using architectural best practices. Yellow Class met with experts at AWS and did exercises to align its security practices with recommendations, such as protecting data integrity and managing user permissions. Another security safeguard for Yellow Class is increasing observability so that the company is the first to know about issues. Yellow Class stays informed with dashboard data and alarms using Amazon CloudWatch, which helps organizations observe and monitor AWS resources and applications in the cloud and on premises.

Working with solutions architects at AWS, Yellow Class optimized its application to improve performance. The company reduced the file size and segment size of the videos on its application while improving the video quality. Yellow Class also transcoded the raw video to an industry-standard format using AWS Elemental MediaConvert, which reduced the time that it takes for videos to start playing from 4 seconds to less than 1 second. As a result, Yellow Class could keep kids engaged with the videos, reduce distribution and storage costs, and reach more users who live in low-bandwidth areas. To make its videos accessible from remote areas, Yellow Class also uses Amazon CloudFront, a content delivery network service for securely delivering content with low latency and high transfer speeds.
Amazon CloudFront has coverage all over India using AWS edge locations and regional edge cache. “When we optimized our media pipeline using AWS services, core metrics, like average time on the application and conversion, increased,” says Jindal.

Yellow Class launched the first deployment of its application using AWS services in September 2020, and the company has continued to evolve the application to support additional users and features. Although Yellow Class started with a small team of developers, it grew quickly and increased developer productivity with the support of AWS solutions architects. “At the start of a new project, subject matter experts from AWS scheduled a kickoff call with information about how to solve a particular problem using an AWS service, which helped save multiple weeks’ worth of research and development,” says Jindal.

Using services like AWS Elemental MediaConvert, a file-based video transcoding service to prepare on-demand content for distribution or archiving, Yellow Class optimized transcoded video file sizes, reduced storage and distribution costs with an enhanced playout experience for users, and scaled to create a secure and reliable application for its customers. Using AWS services, Yellow Class could also experiment with new codecs and product features quickly.

Amazon ElastiCache is a fully managed, in-memory caching service supporting flexible, real-time use cases.

Yellow Class also keeps costs low by using AWS services rather than engaging with multiple vendors. When costs kept rising for a third-party provider that Yellow Class used to serve images on its website and application, the company transitioned to Amazon CloudFront and AWS Lambda, a serverless, event-driven compute service for running code without thinking about servers or clusters. “Overnight, we were able to save $2,000 per month by replacing the entire third-party service with Amazon CloudFront and AWS Lambda,” says Jindal. “That’s the power of AWS. You can replace many third-party tools because of the sheer scale and low cost of AWS services.”

AWS Elemental MediaConvert is a file-based video transcoding service with broadcast-grade features. Create live stream content for broadcast and multi-screen delivery at scale. Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.

Yellow Class, an educational technology startup, wanted to develop an educational application for kids. Developing the infrastructure from scratch would require significant time and resources for its small team. To focus on customers instead of infrastructure, Yellow Class needed a cost-effective and scalable cloud solution, so the company looked to Amazon Web Services (AWS). Yellow Class engages young children across India with its practice-based learning application for subjects like math, English, and art.
Its application provides exercises, information, and concept video streaming to supplement classroom learning. Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today.

Outcome | Reaching Additional Users without Impacting Performance Using Amazon ElastiCache
Yellow Class plans to continue expanding to reach more users. It also plans to improve the customer experience using artificial intelligence offerings from AWS, which can offer recommendations and make the application more adaptive. “If we had built our entire infrastructure to support video streaming, it would have taken ages and cost a lot in terms of time and people resources,” says Jindal. “Using AWS, we get access to services that are readily available right off the shelf, which has helped us accelerate development.” The company saved weeks of research and development using AWS support. As the company grows, Yellow Class can reach a larger audience using the scalability of AWS services. Yellow Class handles the increasing volume of users without impacting performance using Amazon ElastiCache, a fully managed in-memory caching service for unlocking microsecond latency and scale.

About Yellow Class
Based in India, Yellow Class provides an application for children aged 5–10 to learn subjects like math, English, and art through daily practice. Its application offers exercises, information, and concept video streaming to supplement classroom learning in schools.

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you.

Opportunity | Using AWS Services to Reduce Research and Development Time for Yellow Class
Yellow Class sought a cloud provider that offered infrastructure and support so that it wouldn’t have to pay steep costs upfront or hire additional employees. The company chose AWS services because it could get started with limited funding, gain access to a wide variety of services, and scale up as the company grew without worrying about infrastructure. In August 2020, Yellow Class started developing its application using AWS Activate, which offers tools, resources, content, and expert support to accelerate startup companies. “Using AWS, we can serve videos at scale across different geographies with good reliability, good performance, and a limited amount of latency,” says Mohit Jindal, head of engineering at Yellow Class. “We’ve also been able to provision different infrastructure to scale and manage traffic.”

Building a Scalable Interactive Learning Application for Kids Using AWS Services with Yellow Class
Learn how Yellow Class, a startup in the educational technology industry, reduced costs, optimized video performance, and scaled its application using AWS Elemental MediaConvert. The company’s optimization efforts reduced costs for Yellow Class and its customers with improved speed and reliability.
By significantly reducing video file and segment sizes, Yellow Class reduced its distribution and storage costs by 50–60 percent using the Quality-Defined Variable Bitrate feature of AWS Elemental MediaConvert, which minimizes wasted bits to optimize output file sizes and maintains consistent video quality. Its customers save expenses by consuming less bandwidth while viewing videos in the application. Yellow Class further reduces costs using features like automatic scaling from Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. By adding or removing compute capacity to meet the application’s changing demand, Yellow Class scales to meet traffic needs while optimizing performance and cost." Building a Scalable Machine Learning Model Monitoring System with DataRobot _ AWS Partner Network (APN) Blog.txt,"AWS Partner Network (APN) Blog

Building a Scalable Machine Learning Model Monitoring System with DataRobot
by Shun Mao and Oleksandr Saienko | on 29 JUN 2023 | in Advanced (300), Amazon SageMaker, Artificial Intelligence, AWS Marketplace, AWS Partner Network, Customer Solutions, Technical How-to, Thought Leadership
By Shun Mao, Sr. Partner Solutions Architect – AWS
By Oleksandr Saienko, Solutions Consultant – DataRobot

From improving customer experiences to developing products, there is almost no area of the modern business untouched by artificial intelligence (AI) and machine learning (ML). With the rise of generative AI, companies continue to invest more in their AI/ML strategies. However, many organizations struggle to work across the AI lifecycle, especially on the MLOps side. They often find it hard to build an easy-to-manage and scalable machine learning monitoring system that can work for different ML frameworks and environments. Maintaining multiple ML models across different teams can be challenging, and having a centralized platform to monitor and manage them can significantly reduce operational overhead and improve efficiency.

DataRobot is an open, complete AI lifecycle platform that leverages machine learning and has broad interoperability with Amazon Web Services (AWS) and end-to-end capabilities for ML experimentation, ML production, and MLOps. DataRobot is an AWS Partner and AWS Marketplace Seller that has achieved Competencies in Machine Learning, Data and Analytics, and Financial Services, and holds the Amazon SageMaker service ready specialization.

In this post, we discuss how models trained and deployed in Amazon SageMaker can be monitored in the DataRobot platform in a highly scalable fashion. In this way, together with a previously-published AWS blog post, customers can monitor both DataRobot-originated models and SageMaker-originated models under a single pane of glass in DataRobot.

Solution Overview
The following diagram illustrates a high-level architecture for monitoring Amazon SageMaker models in DataRobot.

Figure 1 – Solution architecture diagram.

In this diagram, users build their own custom SageMaker containers to train a machine learning model and host the model as a SageMaker endpoint. The inference container has the DataRobot MLOps libraries installed and model monitoring code written, so it can collect inference metrics and statistics and send them to an Amazon Simple Queue Service (SQS) spooler channel. The information queued in SQS is pulled by a DataRobot MLOps agent implemented on Amazon Elastic Container Service (Amazon ECS).
Finally, the agent sends the message to the DataRobot environment, and users can see the results in the DataRobot user interface (UI). This architecture design is serverless and highly scalable, and it can be used to monitor a large number of models simultaneously. To monitor multiple models, the inference containers send messages to the SQS queue, and the agent in ECS can be auto-scaled to accommodate the workload depending on the queue length, which reduces operational overhead and increases cost efficiency.

Prerequisites
This post assumes you have access to Amazon SageMaker and a DataRobot account. DataRobot comes with three deployment types: multi-tenant software as a service (SaaS), single-tenant SaaS, and virtual private cloud (VPC), depending on customers’ requirements. If you don’t have a DataRobot account, follow the instructions to create a trial SaaS account.

Create a DataRobot External Deployment to Monitor Models
To monitor models hosted in Amazon SageMaker, you need to create an external model deployment in DataRobot with the following steps. Each step generates some necessary information to be collected when deploying the endpoint in SageMaker.

1. Register training data in the DataRobot AI Catalog.
2. Create a DataRobot model package.
3. Create a DataRobot external prediction environment.
4. Create a DataRobot deployment.

These steps can be done manually from the DataRobot UI, or you can use the DataRobot MLOps command line interface (CLI) tool. The example we’re using here is Iris flower species prediction. To use the DataRobot MLOps CLI tool, you need to install datarobot-mlops-connected-client and set up the DataRobot API token (which you can find in your DataRobot UI) as environment variables:

! pip install datarobot-mlops-connected-client
%env MLOPS_SERVICE_URL=https://app.datarobot.com
%env MLOPS_API_TOKEN=YOUR_API_TOKEN

DataRobot stores statistics about predictions to monitor how distributions and values of features change over time. As a baseline for comparing distributions of features, DataRobot uses the distribution of the training data, which needs to be uploaded to the DataRobot AI Catalog. To register the training data in the DataRobot AI Catalog, you can import a dataset through the AI Catalog drop-down, which generates a dataset ID that will be used later. DataRobot supports a wide variety of data sources, including some of the most popular AWS services, to allow easy data importing. For DataRobot multi-tenant SaaS, DataRobot uses an Amazon Simple Storage Service (Amazon S3) bucket, managed by DataRobot, for storing imported data. There is no direct access to this bucket, however, as data is secured at rest using encryption, and all data transferred to and from S3 is encrypted in transit using TLS 1.2.

Figure 2 – DataRobot AI Catalog and data connectors.

After the training dataset is uploaded, you need to create a model package. In the UI, you can create one under Model Registry > Model Packages.

Figure 3 – DataRobot model package UI.
Or, you can run the following CLI code, which returns a MODEL_PACKAGE_ID:

MODEL_PACKAGE_NAME = "SageMaker_MLOps_Demo"
prediction_type = "Multiclass"
model_target = "variety"
class_names = ["setosa", "versicolor", "virginica"]

model_config = {
    "name": MODEL_PACKAGE_NAME,
    "modelDescription": {
        "modelName": "Iris classification model",
        "description": "Classification on iris dataset"
    },
    "target": {
        "type": prediction_type,
        "name": model_target,
        "classNames": class_names
    }
}

with open("demo_model.json", "w") as model_json_file:
    model_json_file.write(json.dumps(model_config, indent=4))

!mlops-cli model create --json-config "demo_model.json" --training-dataset-id $TRAINING_DATASET_ID --json --quiet

Next, we need to create a custom external prediction environment. Details for using the UI can be found in the documentation. To use the CLI tool, run the following code, which generates a PREDICTION_ENVIRONMENT_ID:

demo_pe_config = {
    "name": "MLOps SageMaker Demo",
    "description": "Sagemaker DataRobot MLOps",
    "platform": "aws",
    "supportedModelFormats": ["externalModel"]
}

with open("demo_pe.json", "w") as demo_pe_file:
    demo_pe_file.write(json.dumps(demo_pe_config, indent=4))

!mlops-cli prediction-environment create --json-config "demo_pe.json" --json --quiet

Finally, you can create a DataRobot deployment associated with the SageMaker model. In the UI, this can be done under Model Registry > Model Package > Deployments.

Figure 4 – DataRobot model deployment UI.

To use the CLI, run the following code with the proper environment variables set; it produces a DEPLOYMENT_ID:

!mlops-cli model deploy --model-package-id $MODEL_PACKAGE_ID --prediction-environment-id $PREDICTION_ENVIRONMENT_ID --deployment-label "SageMaker_MLOps_Demo" --json --quiet

At this point, we have finished all the preparations needed inside DataRobot. Next, we will train and host a SageMaker model in AWS.

Build a SageMaker Custom Container

To build an Amazon SageMaker custom container for training and inference, we are leveraging an existing SageMaker workshop on how to build a custom container; the code artifacts can be found in this GitHub repo. We keep the original structure of the code untouched, with some key changes in the Dockerfile and predictor.py.

In the Dockerfile, we need to add one line to install the datarobot-mlops library, which is key for the SageMaker container to send monitoring data out. Add the following line right after the installation of Python in the original Dockerfile:

RUN pip --no-cache-dir install datarobot-mlops[aws]

For predictor.py, the main changes are in the ScoringService object, where we need to call the datarobot.mlops library to collect the metrics and send them to the SQS spooler channel.
import json
import os
import pickle
import time

import numpy as np

from datarobot.mlops.mlops import MLOps

# model_path is defined earlier in the workshop's predictor.py
# (the standard SageMaker model directory, /opt/ml/model)

class ScoringService(object):
    model = None
    mlops = None

    @classmethod
    def get_mlops(cls):
        """MLOPS: initialize the MLOps library"""
        # Get environment parameters
        MLOPS_DEPLOYMENT_ID = os.environ.get('MLOPS_DEPLOYMENT_ID')
        MLOPS_MODEL_ID = os.environ.get('MLOPS_MODEL_ID')
        MLOPS_SQS_QUEUE = os.environ.get('MLOPS_SQS_QUEUE')
        if cls.mlops is None:
            cls.mlops = MLOps() \
                .set_async_reporting(False) \
                .set_deployment_id(MLOPS_DEPLOYMENT_ID) \
                .set_model_id(MLOPS_MODEL_ID) \
                .set_sqs_spooler(MLOPS_SQS_QUEUE) \
                .init()
        return cls.mlops

    @classmethod
    def get_model(cls):
        if cls.model is None:
            with open(os.path.join(model_path, "decision-tree-model.pkl"), "rb") as inp:
                cls.model = pickle.load(inp)
        return cls.model

    @classmethod
    def predict(cls, input):
        clf = cls.get_model()
        class_names = json.loads(os.environ.get('CLASS_NAMES'))
        start_time = time.time()
        predictions_array = clf.predict_proba(input.values)
        prediction = np.take(class_names, np.argmax(predictions_array, axis=1))
        execution_time = time.time() - start_time
        ml_ops = cls.get_mlops()
        # Report the number of predictions served and the latency in milliseconds
        ml_ops.report_deployment_stats(predictions_array.shape[0], execution_time * 1000)
        # Report the features and predictions used for drift monitoring
        ml_ops.report_predictions_data(
            features_df=input,
            predictions=predictions_array.tolist(),
            class_names=class_names,
            association_ids=None
        )
        return prediction

Here, we do not modify the training code, since the monitoring is mainly for inference. With the above changes ready, we build a Docker image and push it to Amazon Elastic Container Registry (Amazon ECR) with the name sagemaker-datarobot-decision-trees:latest.

Deploy Amazon SQS and ECS to Receive Inference Monitoring Info

The main infrastructure we need here is Amazon SQS and Amazon ECS on AWS Fargate. SQS serves as a spooler channel to receive monitoring data from the SageMaker inference container, and it's highly scalable and flexible enough to adapt to a variety of scenarios. Create an SQS queue in your AWS account named aws-mlops-agent-demo by following the instructions, and leave everything else as default.

The data in SQS will be picked up by the DataRobot agent deployed in ECS as a pre-built Docker image running on AWS Fargate. The steps to build the Docker image with the DataRobot MLOps agent are:

1. Download the DataRobot MLOps package from your DataRobot UI in the Developer Tools tab.
2. Unzip the package and navigate to the folder datarobot_mlops_package-8.2.13/tools/agent_docker. As of this writing, the latest version of this package is 8.2.13.
3. Find the file mlops.agent.conf.yaml in the datarobot_mlops_package-8.2.13/tools/agent_docker/conf folder and edit the following sections:

# URL to the DataRobot MLOps service
mlopsUrl: https://app.datarobot.com
# DataRobot API token
apiToken: "your api token"
channelConfigs:
  # - type: "FS_SPOOL"
  #   details: {name: "filesystem", directory: "/tmp/ta"}
  - type: "SQS_SPOOL"
    details: {name: "sqs", queueUrl: "https://sqs.us-east-1.amazonaws.com/651505238245/aws-mlops-agent-demo", queueName: "aws-mlops-agent-demo"}
  # - type: "RABBITMQ_SPOOL"

You can see that DataRobot supports several communication channels (spooler channels) to collect model monitoring statistics; in this example we choose Amazon SQS. With the above edit in place, build the agent Docker image and push it to Amazon ECR. The steps for creating an Amazon ECS cluster with a Fargate deployment can be found in the documentation. When selecting a container image, choose the DataRobot agent image we just built. You can keep everything else as default.
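As noted earlier, the agent service can be scaled with the queue length. The following is a minimal sketch of one way to wire that up with ECS Service Auto Scaling, tracking the queue's visible-message count; the cluster and service names (mlops-agents, datarobot-agent) and the target value are illustrative assumptions, not part of the original walkthrough:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the agent service's desired task count as a scalable target
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/mlops-agents/datarobot-agent",  # hypothetical cluster/service
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=10,
)

# Scale on SQS queue depth: add agent tasks when the backlog grows
autoscaling.put_scaling_policy(
    PolicyName="scale-agent-on-queue-depth",
    ServiceNamespace="ecs",
    ResourceId="service/mlops-agents/datarobot-agent",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 1000.0,  # desired visible messages per task; tune per workload
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "aws-mlops-agent-demo"}],
            "Statistic": "Average",
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60,
    },
)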
Train the Model and Deploy It as a SageMaker Endpoint

Running the following code in an Amazon SageMaker Studio notebook trains a simple decision tree model in SageMaker:

import sagemaker as sage

sess = sage.Session()
account = sess.boto_session.client("sts").get_caller_identity()["Account"]
region = sess.boto_session.region_name
image = "{}.dkr.ecr.{}.amazonaws.com/sagemaker-datarobot-decision-trees:latest".format(account, region)

# Save your input data in the /data folder
WORK_DIRECTORY = "data"
data_location = sess.upload_data(WORK_DIRECTORY, key_prefix=prefix)

tree = sage.estimator.Estimator(
    image,
    role,
    1,
    "ml.c4.2xlarge",
    output_path="s3://{}/output".format(sess.default_bucket()),
    sagemaker_session=sess,
)
tree.fit(data_location)

The following code deploys the model as an endpoint, passing the inference container the DataRobot MLOps information we generated in previous steps, such as MLOPS_DEPLOYMENT_ID, MLOPS_MODEL_ID, MLOPS_SQS_QUEUE, prediction_type, and CLASS_NAMES:

from sagemaker.serializers import CSVSerializer
import json

prediction_type = "Multiclass"
class_names = ["setosa", "versicolor", "virginica"]
MLOPS_SQS_QUEUE = "https://sqs.us-east-1.amazonaws.com/651505238245/aws-mlops-agent-demo"

# Pass all needed environment variables to the SageMaker deployment
env_vars = {
    "MLOPS_DEPLOYMENT_ID": deployment_id,
    "MLOPS_MODEL_ID": model_id,
    "MLOPS_SQS_QUEUE": MLOPS_SQS_QUEUE,
    "prediction_type": prediction_type,
    "CLASS_NAMES": json.dumps(class_names),
}
print(env_vars)

predictor = tree.deploy(1, "ml.m4.xlarge", serializer=CSVSerializer(), env=env_vars)

This completes the deployment, and the endpoint is ready to serve inference requests. Once the endpoint is called, the monitoring information will appear in the DataRobot UI. For more details on the code, please refer to this GitHub repo.

Explore DataRobot's Monitoring Capabilities

DataRobot offers a central hub for monitoring model health and accuracy for all deployed models with low latency. For each deployment, DataRobot provides a status banner with model-specific information.

Figure 5 – DataRobot model monitoring main UI.

When you select a specific deployment, DataRobot opens an overview page for that deployment. The overview page provides a model- and environment-specific summary that describes the deployment, including the information you supplied when creating the deployment and any model replacement activity.

Figure 6 – DataRobot deployment options.

The Service Health tab tracks metrics about a deployment's ability to respond to prediction requests quickly and reliably. This helps identify bottlenecks and assess capacity, which is critical to proper provisioning. The tab also provides informational tiles and a chart to help monitor the activity level and health of the deployment.

Figure 7 – DataRobot model health monitoring.

As training and production data change over time, a deployed model loses predictive power, and the data surrounding the model is said to be drifting. By leveraging the training data and prediction data that's added to your deployment, the Data Drift dashboard helps you analyze a model's performance after it has been deployed.

Figure 8 – DataRobot model drift monitoring.

There are several other tabs related to deployment (like Accuracy, Challenger Models, Usage, Custom Metrics, and Segmented Analysis) which are out of scope for this post; you can find more details in the DataRobot documentation.
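Note that these dashboards only populate once the endpoint receives traffic. As a quick smoke test, you might send a few rows through the predictor object returned by tree.deploy above and then confirm the statistics appear in DataRobot; the feature values below are illustrative Iris measurements, not part of the original post:

# Each row is sepal length, sepal width, petal length, petal width;
# the CSVSerializer converts the Python lists to CSV on the wire.
samples = [
    [5.1, 3.5, 1.4, 0.2],
    [6.7, 3.0, 5.2, 2.3],
]
for row in samples:
    print(predictor.predict(row))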
Conclusion

In this post, you learned how to build a highly scalable machine learning model monitoring system using DataRobot for Amazon SageMaker-hosted models. DataRobot also has other features, such as automatic feature discovery, AutoML, model deployment, and ML notebook development. To get started with DataRobot, visit the website to set up a personalized demo. DataRobot is also available in AWS Marketplace."

Building generative AI applications for your startup part 1 _ AWS Startups Blog.txt,"AWS Startups Blog

Building generative AI applications for your startup, part 1
by Hrushikesh Gangur | 05 JUL 2023

This two-part blog series discusses how to build artificial intelligence (AI) systems that can generate new content. The first part gives an introduction, explains various approaches to building generative AI applications, and reviews their key components. The second part maps these components to the right AWS services, which can help startups quickly develop and launch generative AI products or solutions by avoiding time and money spent on undifferentiated heavy lifting.

Recent generative AI advancements are raising the bar on tools that can help startups to rapidly build, scale, and innovate. This widespread adoption and democratization of machine learning (ML), specifically with the transformer neural network architecture, is an exciting inflection point in technology. With the right tools, startups can build new ideas or pivot their existing product to harness the benefits of generative AI for their customers. Are you ready to build a generative AI application for your startup? Let's first review the concepts, core ideas, and common approaches to building generative AI applications.

What are generative AI applications?

Generative AI applications are programs that are based on a type of AI that can create new content and ideas, including conversations, stories, images, videos, code, and music. Like all AI applications, generative AI applications are powered by ML models that are pre-trained on vast amounts of data, commonly referred to as foundation models (FMs). An example of a generative AI application is Amazon CodeWhisperer, an AI coding companion that helps developers build applications faster and more securely by providing whole-line and full-function code suggestions in your integrated development environment (IDE). CodeWhisperer is trained on billions of lines of code, and can generate code suggestions ranging from snippets to full functions instantly, based on your comments and existing code.
Startups can use AWS Activate credits with the CodeWhisperer Professional Tier, or start with the Individual Tier, which is free to use.

Figure 1: Amazon CodeWhisperer writes JavaScript code using comments as the prompt.

The rapidly developing generative AI landscape

There is rapid growth occurring in generative AI startups, and also within startups building tools to simplify the adoption of generative AI. Tools such as LangChain—an open source framework for developing applications powered by language models—are making generative AI more accessible to a wider range of organizations, which will lead to faster adoption. These tools also include prompt engineering, augmenting services (such as embedding tools or vector databases), model monitoring, model quality measurement, guardrails, data annotation, reinforcement learning from human feedback (RLHF), and many more.

Figure 2: Components of the generative AI landscape.

An introduction to foundation models

For a generative AI application or tool, the foundation model is at the core. Foundation models are a class of powerful machine learning models that are differentiated by their ability to be pre-trained on vast amounts of data in order to perform a wide range of downstream tasks. These tasks include text generation, summarization, information extraction, Q&A, and/or chatbots. In contrast, traditional ML models are trained to perform a specific task from a data set.

Figure 3: The difference between a traditional ML model and a foundation model.

So how does a foundation model "generate" the output that generative AI applications are known for? These capabilities result from learning patterns and relationships that allow the FM to predict the next item or items in a sequence, or generate a new one:

- In text-generating models, FMs output the next word, next phrase, or the answer to a question.
- For image-generation models, FMs output an image based on the text.
- When an image is an input, FMs output the next relevant or upscaled image, animation, or 3D images.

In each case, the model starts with a seed vector derived from a "prompt": prompts describe the task the model has to perform, and the quality and detail (also known as the "context") of the prompt determine the quality and relevance of the output.

Figure 4: A user inputs a prompt into a foundation model and it generates a response.

The simplest implementation of generative AI applications

The simplest approach for building a generative AI application is to use an instruction-tuned foundation model and provide a meaningful prompt ("prompt engineering") using zero-shot learning or few-shot learning. An instruction-tuned model (such as FLAN T5 XXL, Open-Llama, or Falcon 40B Instruct) uses its understanding of related tasks or concepts to generate predictions to prompts. Here are some prompt examples:

Zero-shot learning

Title: "University has new facility coming up"
Given the above title of an imaginary article, imagine the article.
RESPONSE:

Few-shot learning

This is awesome! // Positive
This is bad! // Negative
That movie was hopeless! // Negative
What a horrible show! //
RESPONSE: Negative

Startups, in particular, can benefit from the rapid deployment, minimal data needs, and cost optimization that result from using an instruction-tuned model. To learn more about considerations for selecting a foundation model, check out Selecting the right foundation model for your startup.
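To make the few-shot pattern concrete, here is a minimal sketch that sends the sentiment prompt above to an instruction-tuned model hosted behind a SageMaker endpoint. The endpoint name is hypothetical, and the "text_inputs" payload is an assumption modeled on common text2text model schemas; check your model's documentation for the exact request contract:

import json

import boto3

runtime = boto3.client("sagemaker-runtime")

# Few-shot prompt: labeled examples followed by the item to classify
prompt = "\n".join([
    "This is awesome! // Positive",
    "This is bad! // Negative",
    "That movie was hopeless! // Negative",
    "What a horrible show! //",
])

response = runtime.invoke_endpoint(
    EndpointName="my-instruction-tuned-fm",   # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps({"text_inputs": prompt}),  # assumed payload schema
)
print(response["Body"].read().decode())  # expected completion: Negative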
Customizing foundation models

Not all use cases can be met by using prompt engineering on instruction-tuned models. Reasons for customizing a foundation model for your startup may include:

- Adding a specific task (such as code generation) to the foundation model
- Generating responses based on your company's proprietary dataset
- Seeking responses generated from higher quality datasets than those that pre-trained the model
- Reducing "hallucination," which is output that is not factually correct or reasonable

There are three common techniques to customize a foundation model.

Instruction-based fine-tuning

This technique involves training the foundation model to complete a specific task, based on a task-specific labeled dataset. A labeled dataset consists of pairs of prompts and responses. This customization technique is beneficial to startups who want to customize their FM quickly and with a minimal dataset: it takes fewer datasets and steps to train, and the model weights update based on the task or the layer that you are fine-tuning.

Figure 5: The instruction-based fine-tuning workflow.

Domain adaptation (also known as "further pre-training")

This technique involves training the foundation model using a large "corpus"—a body of training materials—of domain-specific unlabeled data (known as "self-supervised learning"). This technique benefits use cases that include domain-specific jargon and statistical data that the existing foundation model hasn't seen before. For example, startups building a generative AI application to work with proprietary data in the financial domain may benefit from further pre-training the FM on custom vocabulary and from "tokenization," a process of breaking down text into smaller units called tokens. To achieve higher quality, some startups implement reinforcement learning from human feedback (RLHF) techniques in this process. On top of this, instruction-based fine-tuning will be required to fine-tune a specific task. This is an expensive and time-consuming technique compared to the others, and the model weights update across all the layers.

Figure 6: The domain adaptation workflow.

Information retrieval (also known as "retrieval-augmented generation" or "RAG")

This technique augments the foundation model with an information retrieval system based on dense vector representation. The closed-domain knowledge or proprietary data goes through a text-embedding process to generate a vector representation of the corpus, which is stored in a vector database. A semantic search result based on the user query becomes the context for the prompt. The foundation model is then used to generate a response based on the prompt with context. In this technique, the foundation model's weights are not updated.

Figure 7: The RAG workflow.
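To illustrate the retrieval step at the heart of RAG, here is a self-contained toy sketch. The hashed bag-of-words embed function is a deliberately crude stand-in for a real text-embedding endpoint, and the three-document list stands in for a vector database; both are illustrative assumptions, not part of the original post:

import numpy as np

DIM = 256

def embed(text: str) -> np.ndarray:
    # Toy embedding: hashed bag-of-words, unit-normalized. A real system
    # would call a text-embedding model here instead.
    v = np.zeros(DIM)
    for token in text.lower().split():
        v[hash(token) % DIM] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

corpus = [
    "Our Q3 revenue grew 12 percent year over year.",
    "The refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm Pacific, Monday through Friday.",
]
index = np.stack([embed(doc) for doc in corpus])  # the "vector database"

def retrieve(query: str, k: int = 1) -> list:
    # Cosine similarity reduces to a dot product because vectors are unit-norm
    scores = index @ embed(query)
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

question = "When can a customer return a product?"
context = "\n".join(retrieve(question))
prompt = (
    "Answer using only the context below.\n"
    "Context:\n" + context + "\n\nQuestion: " + question + "\nAnswer:"
)
print(prompt)  # this augmented prompt is what gets sent to the foundation model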
Components of a generative AI application

In the above sections, we learnt the various approaches startups can take with foundation models when building generative AI applications. Now, let's review how these foundation models fit among the typical components required to build a generative AI application.

Figure 8: Components of a generative AI application.

At the core is a foundation model (center). In the simplest approach discussed earlier in this blog, this requires a web application or mobile app (top left) that accesses the foundation model through an API (top). This API is either a managed service through a model provider or self-hosted using an open source or proprietary model. In the self-hosting case, you may need a machine learning platform that is supported by accelerated computing instances to host the model.

In the RAG technique, you will need to add a text embedding endpoint and a vector database (left and lower left). Both of these are provided as either an API service or are self-hosted. The text embedding endpoint is backed by a foundation model, and the choice of foundation model depends on the embedding logic and tokenization support. All of these components are connected together using developer tools, which provide the framework for developing generative AI applications.

And, lastly, when you choose the customization techniques of fine-tuning or further pre-training of a foundation model (right), you need components that help with data pre-processing and annotation (top right), and an ML platform (bottom) to run the training job on specific accelerated computing instances. Some model providers support API-based fine-tuning, and in such cases, you need not worry about the ML platform and underlying hardware. Regardless of the customization approach, you may also want to integrate components that provide monitoring, quality metrics, and security tools (lower right).

Conclusion

In this part of the blog, we learnt the various approaches or patterns startups can take to build a generative AI application and the key components involved. In the next part, we will learn how these components map to AWS services, and showcase an example architecture.

Hrushikesh Gangur is a Principal Solutions Architect for AI/ML startups with expertise in both AWS machine learning and networking services. He helps startups building generative AI, autonomous vehicles, and ML platforms to run their business efficiently and effectively on AWS."

Calgary Airport Authority Enhances Passenger Services and Cybersecurity on the AWS Cloud _ Case Study _ AWS.txt,"Calgary Airport Authority Enhances Passenger Services and Cybersecurity on the AWS Cloud

Benefits: increased resiliency; improved security; solution built and completed in 2.5 months.

Airports serve travelers all day, every day. To fulfill this mission, they need passenger services that are highly flexible, secure, and available without interruption. For the Calgary Airport Authority (the Authority), security has always been a top priority. In 2022, as post-COVID travel resumed, the Authority took the opportunity to plan ahead and prioritize an agile, highly secure, digital-first travel experience for its passengers. Moving workloads to the cloud became a key part of this road map. To mitigate passenger-service disruptions in the event of a cybersecurity incident, the Calgary Airport Authority migrated its on-premises data center to the AWS Cloud.

Opportunity | Strengthening Security at a Top-Tier Air Hub

The Calgary International Airport (YYC) is the fourth-busiest airport in Canada and home to Canada's second-largest airline, WestJet, and its global hub. YYC meets the needs of multiple airline partners and approximately 50,000 travelers a day. Until recently, it did this entirely with on-premises equipment.

As part of the Authority's efforts to grow and diversify its services, Ian Turner, general manager of IT enterprise architecture at YYC, recognized the opportunity to strengthen Calgary's critical infrastructure. The Authority honed in on how it could build new capacity to mitigate potential events without service interruptions. To do that, YYC decided to migrate its critical private workloads to the Amazon Web Services (AWS) Cloud and rearchitect its public websites for added security and scalability.

IT business groups from across YYC—airport systems, corporate services, cybersecurity, and technical infrastructure—met in July 2022 to perform an internal-needs assessment and determine requirements. The top priorities were cybersecurity, scalability, resiliency, and cost. YYC needed access to a wide range of leading-edge services. It also wanted ease of integration and hands-on assistance standing up the foundational cloud environment. As a not-for-profit organization, cost was a key consideration for the Authority. Some of YYC's technical infrastructure was nearing the end of its lifespan; the cost-efficient option was to retire it and move to the cloud, which offered evergreen infrastructure on a permanent basis.

The migration challenge was twofold. From a business perspective, the transition needed to be seamless: YYC manages a significant amount of data flowing in and out of its systems, and services needed to continue without disruption. From an IT perspective, YYC needed assurance that the technologies would perform in the same way in the cloud as they did on premises. The biggest challenge was to ensure that data from public workloads and applications (such as flight information or parking bookings) moved efficiently and securely between the remaining on-premises applications and the AWS Cloud. The solution needed high availability, increased speed, better load distribution, a scalable database, and a high-performing file system.

YYC undertook a final risk assessment, surveyed available cloud providers, and made its decision. For Turner, the choice was clear: "AWS was the best fit for us all around."
Solution | A Cloud Solution Built for Performance

To deliver the added security and scalability, YYC rearchitected with edge caching from Amazon CloudFront, a content delivery network (CDN) service, and deployed an application load balancer. For database scalability and automatic backup, it used Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. For file-system workloads, the company used Amazon FSx, which makes it easy and cost-effective to launch, run, and scale feature-rich, high-performance file systems in the cloud.

To address the failover issue, YYC set up an AWS Cloud environment as a "third site" with an independent power source and redundant connections over multiple internet service providers. YYC used AWS Transit Gateway, a distributed service that applies a hub-and-spoke method to public clouds. The new environment and architecture have improved the airport's data-transmission capabilities while enhancing security. "The combination makes us feel comfortable that we're protected," says Turner.

For added security, YYC now uses AWS WAF, which helps to protect against common web exploits and bots, and AWS Shield to protect its on-premises workloads from distributed denial of service (DDoS) attacks. It also uses Amazon GuardDuty, a threat-detection service that continuously monitors AWS accounts and workloads for malicious activity and unauthorized behavior, and AWS Security Hub, a cloud security posture management service that centralizes and automates security checks and alerts.

For a project of its size, the deployment took place rapidly. YYC and AWS Professional Services, a global team of experts that can help businesses realize desired business outcomes when using the AWS Cloud, completed the solution in just 2.5 months, and it was implemented in December 2022.

Outcome | A Resilient Foundation

Today, with its services on AWS, YYC delivers faster, better passenger services. Because of the cloud's increased redundancy and resiliency, the risk of system downtime is negligible.

High Scalability at Low Cost

On AWS, YYC benefits from the elasticity of the cloud and the ability to scale its storage on demand. "We don't have to worry about running out of space," says Turner. With its on-premises servers, the airport needed new hardware when capacity limits were reached, and with that came added costs and procurement challenges. But on AWS, notes Turner, "there's no procurement. There's no requisitions through the supply chain. It just does what it does."

The automatic capabilities of AWS tools have significantly reduced YYC technician workforce hours and maintenance costs. In parallel, IT teams at YYC gained valuable hands-on experience and knowledge from working closely with the AWS Professional Services team, and now administer many systems themselves. The airport is seeing other business advantages too. Monitoring and tagging within the AWS Cloud environment indicate where resources are being used, helping business groups manage costs and primary key performance indicators (KPIs). Cloud services have also reduced the need for onsite equipment and cooling. Those gains are reducing YYC's overall carbon footprint.

As a key part of YYC's broader digital transformation journey, the migration has had major positive impacts on the organization overall. "AWS is a lot further ahead in its technology, offerings, and capabilities than other cloud providers," says Turner. YYC now has a solid foundation to spin up more cloud-based customer service improvements. As the airport moves more services, technologies, and applications to the cloud, it plans to use additional AWS features to innovate service delivery across more customer service areas. Turner is confident in choosing the AWS Cloud: "I would recommend AWS over other providers based on its offerings and capabilities alone."

"The experience we had working with AWS, from the presales calls to sign-off at the end—you couldn't ask for anything better," says Ian Turner, general manager of IT enterprise architecture, Calgary Airport Authority.
About Calgary Airport Authority

The Calgary Airport Authority (the Authority) is a not-for-profit, non-share capital corporation, incorporated under the Province of Alberta's Regional Airports Authorities Act (Alberta). Since 1992, it has been responsible for the operation, management, and development of YYC Calgary International Airport (YYC) and, since 1997, Springbank Airport (YBW), under a long-term lease from the Government of Canada."

CalvertHealth-case-study.txt,"CalvertHealth Improves Electronic Health Records System Resilience and Shortens Recovery Time Using AWS Elastic Disaster Recovery

Benefits: created resilience in the electronic health records system; reduced disaster recovery time by 97%, from 72 hours to under 2 hours; improved staff morale and confidence in the system; reduced potential revenue losses caused by reputation damage.

About CalvertHealth

Based in Calvert County, Maryland, CalvertHealth is a not-for-profit, community-owned hospital with over 200 active and consulting physicians on staff. It provides primary care and other services in its offices in several other locations around the county.

Improving Resilience Using AWS

Disaster recovery, the ability to restore services quickly after any sort of interruption, is important for any organization. But for healthcare organizations, it's critical. An organization's resilience when it comes to disaster recovery is measured by two metrics. The first is the recovery time objective (RTO), which measures the maximum allowable time between interruption and recovery of service. The second is the recovery point objective (RPO), which measures the amount of data that can be lost within a period before significant harm occurs.

As a stand-alone hospital in rural Maryland, CalvertHealth found itself in a trifecta of risk in terms of its RTO. CalvertHealth depends on technology, but because of its rural location, it has no nearby organizations to rely on for backup should disaster strike. At the same time, the hospital's mid-Atlantic location puts it in the path of hurricanes and other natural events. Its trove of valuable patient data increases the risk of ransomware and other cyberattacks. On average, such disasters can cost a midsize hospital nearly $5,600 per minute or over $300,000 per hour, according to a recent Gartner report—a serious and costly risk. "The goal of almost every healthcare organization that has sensitive data is to bring the system back up as quickly as it can to decrease the amount of downtime," says Melissa Hall, chief information officer of CalvertHealth.
CalvertHealth had been using the MEDITECH EHR system to provide access to patient data. Data backups were done on premises in a corporate data center on servers that used third-party software. The RTO for CalvertHealth's EHR system was 48 to 72 hours—an unacceptable amount of time. Contemporary patient care relies on information exchange with other organizations. CalvertHealth regularly communicates with the Maryland Health Information Exchange and the state about patients' health histories, current prescriptions, and opioid usage, for instance. If the system is down, CalvertHealth can't make appropriate decisions about patient care. This not only potentially harms patients but can also cause damage to the organization's reputation.

Improving CalvertHealth's resilience would help the hospital serve patients more reliably. So, when Amazon Web Services (AWS) approached CalvertHealth with a proposal that would shorten the RTO and RPO for its primary electronic healthcare records (EHR) system, the organization gladly accepted. By using AWS's robust backup and disaster recovery capabilities, CalvertHealth could drastically decrease its RTO and RPO.

Using AWS Solutions for Speedier System Recovery

CalvertHealth's consultant HealthCare Triangle, a subsidiary of AWS Partner SecureKloud, has MEDITECH expertise and recommended that CalvertHealth migrate its EHR recovery site to the AWS Cloud. To accomplish its goal, CalvertHealth deployed several solutions, including AWS Elastic Disaster Recovery (CloudEndure Disaster Recovery), which minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery. It also used AWS Backup, which organizations can use to centralize and automate data protection across AWS services and hybrid workloads. A CalvertHealth network engineer worked alongside HealthCare Triangle and the AWS team to deploy AWS Elastic Disaster Recovery and AWS Backup on almost 140 servers. They pulled the information through a VPN setup that helped them replicate the data in the AWS environment. The changes reduced CalvertHealth's RTO from 72 hours to under 2 hours—a 97 percent improvement.

Achieving a Secure, Cost-Effective Solution

Finally, CalvertHealth could get the system up and running with no up-front costs. The AWS team worked with HealthCare Triangle to minimize costs and invest in the project as part of an AWS initiative to help hospitals. The minimal up-front costs meant that Hall didn't need to take it to the board or present it as a cost to anyone other than her supervisor. "We could just do the right thing rather than worrying about how to do it," Hall says.
Doing so not only added resilience to CalvertHealth's EHR but also kept the organization's data in a usable interface. In addition, migrating its application recovery system to AWS meant that CalvertHealth would not have to configure and manage all the servers manually in its corporate data center in the event of a disaster, hastening recovery time. Its new EHR backup and recovery solution has also meant an improvement in CalvertHealth's security and compliance. During a recent third-party security audit, the substantial reduction in RTO improved CalvertHealth's overall security rating. The organization also shared this information with its cybersecurity insurance vendor. "They were impressed that a little stand-alone hospital has been able to achieve such a short RTO," Hall says. "That was a big win for us."

Because they need to access EHR quickly, CalvertHealth nurses and clinicians benefit from the fact that the new system looks the same, and staff members work faster with a system that looks familiar. "The fact that it's hybrid and in the AWS environment means that staff members don't have to monitor the connection as much as they previously had to," Hall says. "That's a plus because it lets us focus on more important things. We can trust that we have others who are watching the system to keep it working the way it should." Implementing the AWS solutions to shorten the RTO has improved the resilience of the CalvertHealth system, a relief for administrators and staff alike.

"Using solutions from AWS and HealthCare Triangle, we've achieved something that not a lot of rural stand-alone hospitals can do," says Hall. "It takes stress off me and the other executives knowing that we have AWS tools in place that can help us get things back up and running as soon as we possibly can. That's a win-win for us."

To learn more, visit https://aws.amazon.com/disaster-recovery/."

Capital One Saves Developer Time and Reduces Costs Going Serverless Using AWS Lambda and Amazon ECS _ Case Study _ AWS.txt,"Capital One Saves Developer Time and Reduces Costs by Going Serverless on AWS

Benefits: up to 90% cost savings for applications; significant developer time saved; new applications built in days; improved operational efficiency.

Overview

Capital One Financial Corporation (Capital One) exited its last legacy, on-premises data centers in 2020 to go all in on the cloud. Capital One has strict timelines for code patches, machine refreshes, and bug remediation, and its engineers, who would prefer to be building applications, were spending significant time working on infrastructure. Capital One improved its cost efficiency, speed to market, and developer quality of life by using Amazon Web Services (AWS) such as AWS Lambda, a serverless, event-driven compute service that businesses use to run code for virtually any type of application or backend service without provisioning or managing servers. The company is now achieving significant time savings for its developers in applications that are migrated to serverless compute while remaining well governed.
Opportunity | Using AWS Lambda to Save Developer Time for Capital One

Capital One is one of the top 10 largest banks in the United States, providing its banking and credit card services to its customers since 1994. The technical organization within the company has more than 12,000 people, the majority of whom are engineers. In 2020, the company closed its last physical data center and migrated everything to AWS. "Since then, we've made the decision to go serverless whenever possible," says George Mao, senior distinguished engineer at Capital One. "Most of our technical organization is focused on modernizing our entire offering of applications." As of the end of 2022, more than a third of Capital One's apps use serverless technology.

Many Capital One applications run once a day, and others run once a month, which makes leaving instances up all the time inefficient. "When we migrate to AWS Lambda, our teams don't have to worry about whether to scale instances up or down," says Mao. "The same batch process that runs 1 or 100 times a day runs on AWS Lambda." Developers can spend their time and effort making better products for the customers rather than worrying about managing or operating the infrastructure. The company is making better applications and delivering more features faster with a quicker time to market. "All the things that make the cloud great are enhanced by going serverless, which is a win-win for us and our customers," says Mao.

Solution | Improving Speed to Market and Reducing Costs Using AWS Serverless Technologies

With its applications running in various states of monolithic and modern architectures, Capital One's default strategy is to migrate its applications to serverless compute, where it can reduce the overall operational burden for its engineering teams and increase operational efficiency. This migration helped the company ease the challenges that are associated with legacy architectures by reducing idle times and improving local debugging. For use cases when AWS Lambda cannot be used, the company uses Amazon Elastic Container Service (Amazon ECS), which runs highly secure, reliable, and scalable containers, powered by AWS Fargate, a serverless, pay-as-you-go compute engine that is used to build applications without managing servers.

This strategy has resulted in a large shift in the developer mindset and tooling process for the company—migrating away from a monolithic infrastructure and toward the building of smaller applications with higher-quality performance. During this digital transformation, the company has benefited from directly communicating with AWS service specialists for near-real-time support when it has production outages and service issues. "We treat the AWS account team as an extension of our internal architecture teams and communicate with the team daily to handle service issues and get updates quickly," says Mao.

Another benefit of going serverless is the improved cost efficiency. By migrating to AWS Lambda, Capital One hopes to improve its costs, which it can achieve in part by saving developer time. "If we can save developers' time by reducing infrastructure-related work, that savings is enormous," says Mao. The other cost-efficiency factor is AWS Lambda's pay-as-you-use model: the company pays at a per-millisecond interval for compute. "The cost efficiency is awesome. It changes the way that we think about building applications," says Mao. "Using AWS Lambda, our engineers learn to build small and think about performance." One application achieved 90 percent cost savings by migrating to AWS Lambda.
The company's engineers use a central pipeline that has been upgraded to adapt to serverless computing to release code. To reduce the idle time that its engineers have to spend waiting for releases to go through this pipeline, Capital One uses the AWS Serverless Application Model (AWS SAM), an open-source framework for building serverless applications that provides shorthand syntax to express functions, APIs, databases, and event source mappings. By using AWS SAM, its engineers can run as much as possible locally before touching the release pipeline. Capital One has adapted its tooling and release process to deploy tens of thousands of AWS Lambda functions. "We can get what we need out of standard tooling like AWS SAM," says Mao.
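To illustrate the local-first workflow the quote describes, here is a minimal sketch of the kind of small, single-purpose function that fits this model; the handler below is a generic illustration (not Capital One's code), and with a matching AWS SAM template it could be built and exercised locally with the SAM CLI (sam build, then sam local invoke) before ever touching a shared pipeline:

import json

def handler(event, context):
    # A deliberately small, single-purpose function: validate one record
    # and return the result. Billing is per-millisecond, so "building
    # small" like this keeps both runtime and cost low.
    record = event.get("record", {})
    is_valid = bool(record.get("id")) and record.get("amount", 0) > 0
    return {
        "statusCode": 200,
        "body": json.dumps({"valid": is_valid}),
    }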
By migrating its applications to serverless services like AWS Lambda, Capital One has achieved significant time savings across different developer teams. This saved time translates directly into an improved speed to market. Migrating its old applications to AWS Lambda could take weeks to months, depending on the underlying architecture of the application. For new applications, some teams at the company have put together a working application in days.

Outcome | Continuing to Modernize and Improve Using AWS

Capital One is still in the process of modernizing its applications, and going serverless is not where this modernization will end. The company plans to become as cloud native as possible and is potentially looking to shift its extract, transform, and load jobs to AWS Lambda. Capital One recently adopted AWS Glue, a serverless data integration service used to discover, prepare, move, and integrate data from multiple sources, and at the same time evaluated other new serverless options, such as AWS Step Functions, visual workflows for distributed applications, alongside AWS Lambda. "Any organization that's committed to its technical transformation should work alongside the AWS team to go in the right direction," says Mao.

About Capital One Financial Corporation

Capital One Financial Corporation is one of the top 10 largest banks in the United States and has been providing banking and credit card services since its founding in 1994."

Capture public health insights more quickly with no-code machine learning using Amazon SageMaker Canvas _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Capture public health insights more quickly with no-code machine learning using Amazon SageMaker Canvas
by Henrik Balle and Dan Sinnreich | 28 JUN 2023

Public health organizations have a wealth of data about different types of diseases, health trends, and risk factors. Their staff has long used statistical models and regression analyses to make important decisions such as targeting populations with the highest risk factors for a disease with therapeutics, or forecasting the progression of concerning outbreaks. When public health threats emerge, data velocity increases, incoming datasets can grow larger, and data management becomes more challenging. This makes it more difficult to analyze data holistically and capture insights from it. And when time is of the essence, speed and agility in analyzing data and drawing insights from it are key blockers to forming rapid and robust health responses. Typical questions public health organizations face during times of stress include: Will there be sufficient therapeutics in a certain location? What risk factors are driving health outcomes? Which populations have a higher risk of reinfection?

Because answering these questions requires understanding complex relationships between many different factors—often changing and dynamic—one powerful tool we have at our disposal is machine learning (ML), which can be deployed to analyze, predict, and solve these complex quantitative problems. We have increasingly seen ML applied to address difficult health-related problems such as classifying brain tumors with image analysis and predicting the need for mental health care to deploy early intervention programs. But what happens if public health organizations are in short supply of the skills required to apply ML to these questions? The application of ML to public health problems is impeded, and public health organizations lose the ability to apply powerful quantitative tools to address their challenges.

So how do we remove these bottlenecks? The answer is to democratize ML and allow a larger number of health professionals with deep domain expertise to use it and apply it to the questions they want to solve. Amazon SageMaker Canvas is a no-code ML tool that empowers public health professionals such as epidemiologists, informaticians, and bio-statisticians to apply ML to their questions, without requiring a data science background or ML expertise. They can spend their time on the data, apply their domain expertise, quickly test hypotheses, and quantify insights. Canvas helps make public health more equitable by democratizing ML, allowing health experts to evaluate large datasets and empowering them with advanced insights using ML. In this post, we show how public health experts can forecast on-hand demand for a certain therapeutic for the next 30 days using Canvas.
Canvas provides you with a visual interface that allows you to generate accurate ML predictions on your own without requiring any ML experience or having to write a single line of code.

Solution overview

Let's say we are working on data that we collected from states across the US. We may form a hypothesis that a certain municipality or location doesn't have enough therapeutics in the coming weeks. How can we test this quickly and with a high degree of accuracy? For this post, we use a publicly available dataset from the US Department of Health and Human Services, which contains state-aggregated time series data related to COVID-19, including hospital utilization, availability of certain therapeutics, and much more. The dataset (COVID-19 Reported Patient Impact and Hospital Capacity by State Timeseries (RAW)) is downloadable from healthdata.gov, and has 135 columns and over 60,000 rows. The dataset is updated periodically.

In the following sections, we demonstrate how to perform exploratory data analysis and preparation, build the ML forecasting model, and generate predictions using Canvas.

Perform exploratory data analysis and preparation

When doing a time series forecast in Canvas, we need to reduce the number of features or columns according to the service quotas. Initially, we reduce the number of columns to the 12 that are likely to be the most relevant. For example, we dropped the age-specific columns because we're looking to forecast total demand. We also dropped columns whose data was similar to other columns we kept. In future iterations, it is reasonable to experiment with retaining other columns and using feature explainability in Canvas to quantify the importance of these features and decide which we want to keep. We also rename the state column to location. Looking at the dataset, we also decide to remove all the rows for 2020, because there were limited therapeutics available at that time. This allows us to reduce the noise and improve the quality of the data for the ML model to learn from.

Reducing the number of columns can be done in different ways. You can edit the dataset in a spreadsheet, or directly inside Canvas using the user interface. You can import data into Canvas from various sources, including local files from your computer, Amazon Simple Storage Service (Amazon S3) buckets, Amazon Athena, Snowflake (see Prepare training and validation dataset for facies classification using Snowflake integration and train using Amazon SageMaker Canvas), and over 40 additional data sources.

After our data has been imported, we can explore and visualize it to get additional insights, such as with scatterplots or bar charts. We also look at the correlation between different features to ensure that we have selected what we think are the best ones. The following screenshot shows an example visualization.
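The same preparation can also be scripted before importing into Canvas. Here is a minimal pandas sketch of the steps described above; the file name and the therapeutic column name are illustrative placeholders (only the state and date columns are known from the dataset description), so substitute the actual HHS column names:

import pandas as pd

# Load the raw HHS time series export (file name is a placeholder)
df = pd.read_csv("reported_patient_impact_and_hospital_capacity_by_state.csv",
                 parse_dates=["date"])

# Keep a reduced set of columns; "therapeutic_courses_on_hand" is a
# placeholder for the actual on-hand-supply column in the dataset
keep = ["state", "date", "therapeutic_courses_on_hand"]
df = df[keep]

# Rename "state" to "location" so it can serve as the Canvas item_id
df = df.rename(columns={"state": "location"})

# Drop 2020 rows, when limited therapeutics were available
df = df[df["date"].dt.year > 2020]

df.to_csv("therapeutics_prepared.csv", index=False)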
Build the ML forecasting model

Now we're ready to create our model, which we can do with just a few clicks. We choose the column identifying on-hand therapeutics as our target. Canvas automatically identifies our problem as a time series forecast based on the target column we just selected, and we can configure the parameters needed. We configure the item_id, the unique identifier, as location, because our dataset is provided by location (US states). Because we're creating a time series forecast, we need to select a time stamp, which is date in our dataset. Finally, we specify how many days into the future we want to forecast (for this example, we choose 30 days). Canvas also offers the ability to include a holiday schedule to improve accuracy. In this case, we use US holidays because this is a US-based dataset.

With Canvas, you can get insights from your data before you build a model by choosing Preview model. This saves you time and cost by not building a model if the results are unlikely to be satisfactory. By previewing our model, we realize that the impact of some columns is low, meaning the expected value of the column to the model is low. We remove columns by deselecting them in Canvas (red arrows in the following screenshot) and see an improvement in an estimated quality metric (green arrow).

Moving on to building our model, we have two options, Quick build and Standard build. Quick build produces a trained model in less than 20 minutes, prioritizing speed over accuracy. This is great for experimentation, and is a more thorough model than the preview model. Standard build produces a trained model in under 4 hours, prioritizing accuracy over latency, iterating through a number of model configurations to automatically select the best model. First, we experiment with Quick build to validate our model preview. Then, because we're happy with the model, we choose Standard build to have Canvas help build the best possible model for our dataset. If the Quick build model had produced unsatisfactory results, then we would go back and adjust the input data to capture a higher level of accuracy. We could accomplish this by, for instance, adding or removing columns or rows in our original dataset. The Quick build model supports rapid experimentation without having to rely on scarce data science resources or wait for a full model to be completed.

Generate predictions

Now that the model has been built, we can predict the availability of therapeutics by location. Let's look at what our estimated on-hand inventory looks like for the next 30 days, in this case for Washington, DC. Canvas outputs probabilistic forecasts for therapeutic demand, allowing us to understand both the median value as well as upper and lower bounds. In the following screenshot, you can see the tail end of the historical data (the data from the original dataset). You can then see three new lines: the median (50th quantile) forecast in purple, the lower bound (10th quantile) in light blue, and the upper bound (90th quantile) in dark blue.

Examining upper and lower bounds provides insight into the probability distribution of the forecast and allows us to make informed decisions about desired levels of local inventory for this therapeutic. We can add this insight to other data (for example, disease progression forecasts, or therapeutic efficacy and uptake) to make informed decisions about future orders and inventory levels.

Conclusion

No-code ML tools empower public health experts to quickly and effectively apply ML to public health threats. This democratization of ML makes public health organizations more agile and more efficient in their mission of protecting public health. Ad hoc analyses that can identify important trends or inflection points in public health concerns can now be performed directly by specialists, without having to compete for limited ML expert resources and slowing down response times and decision-making. In this post, we showed how someone without any knowledge of ML can use Canvas to forecast the on-hand inventory of a certain therapeutic. This analysis can be performed by any analyst in the field, through the power of cloud technologies and no-code ML.
Conclusion

No-code ML tools empower public health experts to quickly and effectively apply ML to public health threats. This democratization of ML makes public health organizations more agile and more efficient in their mission of protecting public health. Ad hoc analyses that can identify important trends or inflection points in public health concerns can now be performed directly by specialists, without having to compete for limited ML expert resources, which slows down response times and decision-making. In this post, we showed how someone without any knowledge of ML can use Canvas to forecast the on-hand inventory of a certain therapeutic. This analysis can be performed by any analyst in the field, through the power of cloud technologies and no-code ML. Doing so distributes capabilities broadly and allows public health agencies to be more responsive, and to use centralized and field office resources more efficiently to deliver better public health outcomes.

What are some of the questions you might be asking, and how might low-code/no-code tools help you answer them? If you are interested in learning more about Canvas, refer to Amazon SageMaker Canvas and start applying ML to your own quantitative health questions.

About the authors

Henrik Balle is a Sr. Solutions Architect at AWS supporting the US Public Sector. He works closely with customers on a range of topics from machine learning to security and governance at scale. In his spare time, he loves road biking, motorcycling, or you might find him working on yet another home improvement project.

Dan Sinnreich leads Go to Market product management for Amazon SageMaker Canvas and Amazon Forecast. He is focused on democratizing low-code/no-code machine learning and applying it to improve business outcomes. Before AWS, Dan built enterprise SaaS platforms and time-series risk models used by institutional investors to manage risk and construct portfolios. Outside of work, he can be found playing hockey, scuba diving, traveling, and reading science fiction."
CaratLane Case Study - Amazon Web Services (AWS).txt,"CaratLane Scales To Meet Seasonal Peaks and Deliver Seamless Customer Experience With AWS

About CaratLane: CaratLane is a leading player in jewelry ecommerce in India and one of the country’s largest omnichannel jewelry retailers, with over 140 physical stores in more than 40 cities. Over the years, CaratLane has focused on delivering a great unified customer experience across its digital and physical channels. As a result, over 70 percent of its sales today originates on its website and mobile app and concludes in the store. It has millions of active users every month and hundreds of thousands of sessions daily. To learn more, visit https://aws.amazon.com/retail/

Benefits of AWS: reduced the cost of server maintenance by up to 20%; a scalable, secure infrastructure.

CaratLane uses Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Container Service (Amazon ECS) to automatically scale its capacity and instances based on load patterns, traffic patterns, and seasonal demands without over-provisioning or experiencing any downtime. CaratLane also uses Amazon ElastiCache for Redis to reduce the latency of its applications and maintain high performance during peak seasonal loads.
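The case study doesn’t publish CaratLane’s actual scaling configuration; as a hedged sketch of the scale-with-demand pattern just described, the following attaches a target-tracking policy to a hypothetical ECS service with boto3. The cluster name, service name, and thresholds are illustrative placeholders.

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the ECS service's desired task count as a scalable target
# (cluster and service names are hypothetical).
resource_id = "service/prod-cluster/storefront-service"
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=50,
)

# Track average CPU utilization: ECS adds tasks during festival
# traffic spikes and removes them again when demand subsides.
autoscaling.put_scaling_policy(
    PolicyName="storefront-cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)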
Purchasing jewelry is a deeply entrenched cultural tradition in India, and demand spikes exponentially during festivals like Akshaya Trithiya, Diwali, and Dhanteras. Special occasions like Valentine’s Day and Women’s Day also contribute to spikes in traffic. Having moved its infrastructure completely to the cloud in 2012, CaratLane is able to scale effortlessly to handle such seasonal peaks while optimizing for cost and performance. Using managed services has freed up time for the IT team to focus on innovative projects that improve the customer experience. In 2021, CaratLane migrated its applications to AWS Fargate, a serverless, pay-as-you-go managed service, and reduced the cost of its server operations by 10–20 percent.

Machine Learning (ML) to improve the customer experience

CaratLane has been an early adopter of ML to improve customer experience. For instance, it uses ML models to measure customer sentiment by analysing customer queries and feedback collected in-store, through email, phone, the website, and the mobile app. These ML models, deployed on Amazon EC2, have helped shrink the number of customer complaint escalations by around 10 percent. CaratLane is exploring several other use cases with ML and plans to adopt Amazon SageMaker to increase the velocity of ML development.

Innovative use cases for customer engagement

CaratLane is constantly on the lookout for new technologies to enhance customer engagement. It is currently building a video calling solution using the Amazon Chime SDK that will allow sales agents to showcase its jewellery collection to customers via live video call sessions. Similarly, CaratLane is also exploring blockchain-related use cases.

Benefits of AWS: helped CaratLane scale its storage and computational capacity up during seasonal traffic peaks; provided infrastructure to deploy machine learning models for customer sentiment analysis, which reduced complaints by 10% per month.

AWS Services Used: Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud, designed to make web-scale cloud computing easier for developers. Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications.

Building a data lake for greater data visibility and accessibility

CaratLane is also building a data lake using AWS Glue, Amazon Kinesis, Amazon Simple Storage Service (Amazon S3), Amazon Elastic MapReduce (Amazon EMR), and Amazon Redshift. Once completed, the data lake will consolidate disparate data sources into a single location, allowing developers and business users to tap a larger pool of data to generate deep customer insights and personalized user interventions.
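The ingestion details aren’t described in the case study; purely as an illustration of how events could flow into such a data lake through Amazon Kinesis, the following writes one hypothetical clickstream event to a placeholder stream.

import json
import boto3

kinesis = boto3.client("kinesis")

# Send a clickstream event into the stream feeding the data lake
# (the stream name and event shape are illustrative placeholders).
event = {"user_id": "u-123", "action": "view_product", "sku": "ring-42"}
kinesis.put_record(
    StreamName="clickstream-ingest",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],
)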
CaratLane also uses Amazon Relational Database Service (Amazon RDS) to operate its database. As a managed service, Amazon RDS automates and simplifies many of the manual, time-consuming administrative tasks associated with database management.

“Working with AWS gives us the confidence and peace of mind that our cloud infrastructure will scale to meet seasonal demand spikes. In addition, AWS is constantly introducing ways to optimize operational costs. This gives our teams the freedom to explore new innovations that improve the customer experience and differentiate us from the competition,” said Gurukeerthi Gurunathan, co-founder and chief technology officer at CaratLane.

For security purposes, CaratLane uses AWS WAF and Amazon GuardDuty to secure and protect its customers’ information. Specifically, AWS WAF protects CaratLane’s web applications against common web exploits and bots, allowing CaratLane to build a secure and scalable infrastructure, which in turn facilitates its growth strategy.

Amazon ElastiCache for Redis is a blazing-fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications."
CarTrade Tech Drives a Seamless Car Buying and Selling Experience with Improved Website Performance and Analytics _ Case Study _ AWS.txt,"CarTrade Tech Drives a Seamless Car Buying and Selling Experience with Improved Website Performance and Analytics

About CarTrade Tech Ltd.: CarTrade Tech Ltd. is a multi-channel automobile platform offering various vehicle types and value-added services, with several brands in its portfolio: CarWale, CarTrade, Shriram Automall, BikeWale, CarTradeExchange, Adroit Auto, and AutoBiz. The company’s goal is to enable new and used automobile customers, vehicle dealerships, vehicle OEMs, and other businesses to buy and sell vehicles in a simple and efficient manner.

Since its founding in 2010, CarTrade Tech has hosted its web platforms in a colocated data center, which caused management challenges and limited the company’s ability to scale easily as traffic grew by 400 percent over 5 years. To address this, the company migrated its application platform to Amazon Web Services (AWS), running primarily on Amazon Elastic Compute Cloud (Amazon EC2) instances. CarTrade Tech uses Amazon Elastic Kubernetes Service to manage containerized applications, Amazon CloudFront to manage and scale its websites and services, and Amazon QuickSight to analyze and understand customers. As a result, the company offers a better car buying and selling experience by improving its website performance and deriving new insights from customer behavior data.

CarTrade Tech also sought to simplify the management of its containerized applications, which were running on the Kubernetes container orchestration system. “It was time-consuming to manage containers on our own, and we wanted to put more resources into feature development,” says Pratik Vasa, vice president, technology at CarTrade Tech Ltd.
Opportunity | Seeking to Better Serve a Growing Market of Car Buyers and Sellers

More than 31 million people in India conduct research on what vehicle to purchase on CarTrade Tech Ltd.—a multi-channel automobile platform with portals CarWale, CarTrade, and BikeWale—every month. These platforms garner 1.2 million car listings for sale annually. As web traffic increased further, the business sought a new content delivery network (CDN) to improve website performance. “Customer experience on our websites is of utmost importance, and lower latency can improve that,” says Vasa.

Solution | Deploying AWS for Container Management and BI

The company migrated its CarTrade, CarWale, and BikeWale application environments to Amazon CloudFront, a CDN designed for low latency and high data transfer speeds. “With Amazon CloudFront, we knew we could improve performance and scalability for our websites,” Vasa says.

Next, CarTrade Tech migrated its business intelligence (BI) technology stack from a third-party solution to Amazon QuickSight, a serverless BI service offering interactive dashboards and natural language querying to help companies better understand their data. “We found that Amazon QuickSight provides the balanced feature set we require and integrates with other AWS services such as Amazon Athena and Amazon S3,” says Vasa.

CarTrade Tech also implemented Amazon Elastic Kubernetes Service (Amazon EKS) to automatically manage the availability and scalability of Kubernetes containers on AWS, as well as application security. By using Amazon EKS to simplify container management, the company can launch Amazon EC2 Spot Instances easily; if Spot Instances are unavailable, Amazon EKS alerts the business and automatically moves to on-demand instances. CarTrade Tech uses Amazon EKS and Amazon CloudFront to seamlessly manage and scale its website environment, improving the user experience and reducing costs.
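The cluster configuration isn’t published; as a hedged sketch of the Spot-first pattern described above, this boto3 call creates a hypothetical EKS managed node group backed by Spot capacity. All names, subnets, and the role ARN are placeholders.

import boto3

eks = boto3.client("eks")

# Create a managed node group backed by Spot capacity; a separate
# ON_DEMAND node group is one common way to provide the on-demand
# fallback described above. All values here are illustrative.
eks.create_nodegroup(
    clusterName="web-platform",
    nodegroupName="web-spot-nodes",
    capacityType="SPOT",
    instanceTypes=["m5.large", "m5a.large", "m4.large"],
    scalingConfig={"minSize": 2, "maxSize": 20, "desiredSize": 4},
    subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    nodeRole="arn:aws:iam::123456789012:role/eks-node-role",
)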
Outcome | Improving the Customer Experience through Better Website Performance and Behavioral Analysis

By running its key websites on Amazon CloudFront, CarTrade Tech has reduced website latency by 10–15 percent and outgoing data transfer costs by 70 percent. Additionally, CarTrade Tech now runs 70 percent of its Amazon EKS instances on Amazon EC2 Spot Instances, compared with 25 percent previously. As a result, the business has reduced its compute costs by 20 percent, investing the savings back into the business and more AWS services.

Furthermore, by moving its BI stack to Amazon QuickSight, CarTrade Tech can use dashboards to visualize data, gaining a more detailed view of how customers use its website. “Improved data and reporting help us make more informed business decisions and guide feature development. We can analyze customer behavior to determine how customers use our site features and identify those requiring further focus,” says Vasa.

With its new capabilities, CarTrade Tech has created a faster web experience for customers. “We’re able to provide a seamless experience for anyone looking to buy or sell a vehicle with Amazon QuickSight and Amazon CloudFront,” says Vasa. CarTrade Tech is also exploring AWS machine learning services, such as Amazon SageMaker, to gain further insights from customer data. Concludes Vasa, “Using AWS, we know we can find new ways to continue improving the online buying and selling experience for our customers.”

Benefits of AWS: 70% reduction in data transfer costs; 10% reduction in latency; 20% compute cost savings; data-powered insights that improve the website experience.

AWS Services Used: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud, designed to make web-scale cloud computing easier for developers. Amazon Elastic Kubernetes Service (Amazon EKS) gives you the flexibility to start, run, and scale Kubernetes applications in the AWS Cloud or on premises. Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. Amazon QuickSight is a fast, cloud-powered business intelligence service that makes it easy to deliver insights to everyone in your organization. To learn more, visit aws.amazon.com/cloudfront."
Central East Ontario Hospital Partnership Launches a Clinical Information System in the AWS Cloud _ Case Study _ AWS.txt,"Central East Ontario Hospital Partnership Launches a Clinical Information System in the AWS Cloud

A regional partnership of seven acute care hospital organizations located in Central East Ontario, Central East Healthcare (CEHC) covers 16,673 km2 of urban and rural geography and serves over 1.5 million patients. CEHC deployed a clinical information system (CIS) with an alternate production or disaster recovery (DR) system in the Amazon Web Services (AWS) Cloud that successfully serves the entire regional partnership. The implementation of a new CIS helped CEHC focus on clinical transformation, because it supports the delivery of the highest-quality patient care and improves healthcare services in the region. CEHC shared similarities with many other hospitals in the province, but it changed course by pursuing a move to the cloud. To better use clinical information, CEHC collaborated with AWS, the CIS platform vendor, and Deloitte, an AWS Partner, to implement a CIS with assets and DR in the AWS Cloud. Choosing to build the alternate production/DR environment on AWS let CEHC avoid equipment procurement, saving both time and money to optimize the project. AWS provides redundancy at every layer of the architecture, so there is no single point of failure throughout the environment.

AWS Services Used: Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Facilitating a single healthcare record across the region was the highest priority for CEHC.
Opportunity | Integrating Medical Records to Improve Patient Care

Using a “Think Big” approach, CEHC used design ideas to apply data and secure processes so that patients could walk in the door of any CEHC hospital and providers would already have their information ready to go. The CEHC hospitals’ primary motivation was to provide the safest, highest quality of care to patients across the region.

To put the inventive plan into motion, CEHC needed partners. “When you have a CIS ready to implement, you need a scalable, reliable data center to support it,” says Andrew Kelly, chief digital officer of the Central East Regional Operations team, established post-live as a regional IT service for the seven-hospital partnership. “AWS delivered cloud services and experience for CEHC using automated tools and processes. AWS delivered both quickly and cost effectively. When compared to the brick-and-mortar production build, the alternate production/DR environment was built in days rather than months, and at a fraction of the cost,” says Eric Foote, Deloitte’s managing director of Healthcare Cloud Engineering.

Outcome | Building for the Future

As the AWS environment transitioned from Deloitte to internal CEHC staff, AWS Enterprise Support began working directly with CEHC to provide enhanced technical support, billing and account management, and concierge services. A dedicated technical account manager (TAM) supports the entire CEHC AWS environment. The TAM provides consultative architectural guidance, knowledge, and reporting to help implement proactive and preventative programs and, when needed, brings in AWS subject matter experts. Looking ahead, CEHC will evaluate AWS as an option for future migrations and use the CEHC team’s growing AWS skill set.

By choosing AWS, CEHC can make use of cloud-native services while simultaneously driving increased innovation and improved uptime and performance. Alongside AWS, CEHC can use and surface data for clinical use, reporting, and operational improvement, which helps increase efficiency, patient safety, and quality of patient care.

Benefits of AWS: improved innovation while also increasing uptime and performance; saved more than $10 million over 10 years.

Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, scalable, high-performance block-storage service designed for Amazon Elastic Compute Cloud (Amazon EC2).

Achieving Cost Savings and Security Benefits

CEHC went live in December 2021, after completing three successful tests on AWS. These successful tests for both alternate production/DR and production systems helped CEHC have full confidence in the solution and receive its Epic Good Install certification, which is designed to help healthcare organizations that use the company’s EHR achieve implementation best practices in patient outcomes, quality of care, workflow efficiency, and financial performance.
“CEHC had limited experience building and supporting solutions in the cloud. Deloitte and AWS were our sherpas,” says Kelly of the collaboration. “They led us up the mountain the proper way.”

The region’s facilities were not set up to talk to each other and share medical records easily. The lack of a regional CIS caused chart fragmentation, creating barriers for clinicians providing care within an organization and across the region. Patients’ referrals to other hospitals for specialized treatment, such as cancer services, mental health treatment, or emergency cardiac care, created friction for medical practitioners at partner sites because they had difficulty accessing information about the referred patients. Safety mechanisms relied on antiquated technology and on staff performing multiple checks, which increased labor and led to a higher probability of error in all tasks. Furthermore, the region’s healthcare organizations lacked the right technical infrastructure for the CIS. Could a CIS run in the cloud and meet the stringent Canadian regulatory requirements?

In 2017, seven organizations came together to form the CEHC and a Regional Executive Forum (REF) committee to guide the procurement of a CIS for the region. “We wanted that utopian state, where a patient comes to the hospital and you know everything about them, to ensure the safest and the highest quality care,” says Ilan Lenga, REF member and chief medical information officer for Lakeridge Health. Regional health records infrastructure would generate critical clinical information and operational insights to make a profound difference in the lives of patients and providers. Further, partnership would make the grand ideas of each member organization real.

CEHC is trailblazing innovation in the Canadian healthcare industry. Its EHR environment is now live and serving healthcare providers and patients at Campbellford Memorial Hospital, Haliburton Highlands Health Services, Lakeridge Health, Northumberland Hills Hospital, Peterborough Regional Health Centre, Ross Memorial Hospital, and Scarborough Health Network (SHN). CEHC built, tested, and deployed the CIS in 9 months.

Security was paramount on the project, irrespective of cost. “It was incumbent on us to ensure that we were raising the bar on security, and that we have done,” says Kelly. CEHC’s migration to AWS met heightened security demands.

Using AWS, CEHC operates less equipment in the disaster recovery environment than in the primary data center. The scalability and automation of AWS help CEHC manage smaller, on-demand environments on a regular basis and reduce costs. In the event of a disaster, the environment in AWS scales up to support the full region. “We’re paying for only what we’re using,” says Kelly, “versus paying overhead for equipment that’s needed only in the event of a disaster.” Building in the cloud translated to significant cost savings, estimated at more than $10 million over 10 years. CEHC builds on AWS with solutions such as Amazon Elastic Compute Cloud (Amazon EC2), specifically Amazon EC2 R5b instances, a set of next-generation, memory-optimized instances used to host the database. CEHC also uses Amazon Elastic Block Store (Amazon EBS) and Amazon FSx for Windows File Server for easy-to-use, scalable, and high-performance storage.
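The mechanics of the scale-up aren’t described in the case study; as a rough illustration of the pilot-light idea (a small DR footprint day to day, full capacity on failover), this boto3 call raises the desired capacity of a hypothetical Auto Scaling group during a DR event.

import boto3

autoscaling = boto3.client("autoscaling")

# Day to day, the DR Auto Scaling group idles at minimal capacity;
# during a failover it is scaled up to serve the full region.
# The group name and capacity values are illustrative placeholders.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="cis-dr-app-tier",
    DesiredCapacity=24,   # full regional capacity
    HonorCooldown=False,  # scale immediately in an emergency
)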
Solution | Collaborating to Pave the Way for Innovation

To design a new approach, CEHC selected Wisconsin-based industry leader Epic as the preferred solution for its new electronic health records (EHR) environment and the CIS platform. It met CEHC’s clinical, performance, data security, and cost-benefit needs. “It was the best of all the possibilities,” says Lenga. “Providers can just pick up the patient’s chart and keep moving with the diagnosis, as if the data were originally on their site.” CEHC’s executive committee found a collaborator in AWS, which offered a solution that met CEHC’s internal benchmarks and merged well with the Epic-prescribed technology stack. With the support of implementation partner Deloitte, the teams landed on an innovative hybrid solution.

Although the selected EHR installation is in a primary, traditional data center, AWS hosted the alternate production/DR environment—plus other clinical systems ancillary to the EHR, applications, and regionally shared assets that form the CIS. This option innovates on and improves the traditional alternate production/DR approach, whereby AWS works closely with the EHR vendor to validate and continually optimize the environment. This architecture strives to deliver an optimal customer experience with cloud-powered scalability, reliability, and agility. By deploying on AWS with the assistance of Deloitte, CEHC was able to build, test, and deploy the CIS rapidly under an aggressive timeframe of 9 months. “We all wanted to take a quantum leap forward in terms of the quality and safety of tools that existed in the marketplace today,” says Lenga. By the looks of things, CEHC did just that.

Passing the 1-year anniversary of its regional go-live in December 2022, the collaboration has experienced many benefits, including improved clinical workflows, information exchange, and data security among regional healthcare providers, resulting in improved services and higher-quality patient care. These early successes are a product of the work that the team did together to build not only a compliant but also a cost-effective solution. The AWS Cloud solution matched the flexibility and scalability that CEHC needed for a medical records management solution, with the CEHC CIS running in alternate production in the AWS Cloud—instead of in a secondary data center with potentially millions of dollars in operating costs. “Despite the challenges of the journey, constrained by budget and timing, our collaborators met us where we were and helped us rethink what was possible. Looking forward, we’re set up for success and know that further advancement is on the horizon to deliver better care for patients,” says David Graham, president and CEO of SHN.

Supports 7 hospitals across the partnership.

About Central East Ontario Hospital Partnership: Central East Healthcare (CEHC), a partnership between seven acute care hospital organizations located in Central East Ontario, covers 16,673 km2 of urban and rural geography and serves over 1.5 million patients.
Amazon FSx for Windows File Server provides fully managed shared storage built on Windows Server and delivers a wide range of data access, data management, and administrative capabilities."
Circle of Life _ Amazon Web Services.txt,"Circle of Life Migrates Mission-Critical Healthcare App to AWS to Eliminate Downtime

Circle of Life offers health institutions cloud-based analytics tools to facilitate data-driven decision-making. Its main product, ZEVAC, accesses more than 7 million patient records each day to analyze how medication—primarily antibiotics—is used. ZEVAC is a software as a service (SaaS) currently used to process data from multiple hospitals across India. ZEVAC is a containerized application that runs in Kubernetes clusters in the cloud. In 2020, the company’s cloud provider experienced several bouts of downtime that interrupted customers’ ability to interact with ZEVAC—a round-the-clock, high-availability system. That same year, Circle of Life decided to migrate the SaaS to Amazon Web Services (AWS) to improve uptime.

Since migrating to AWS, Circle of Life has received positive feedback from its external and internal customers on improved application performance. While PC Solutions was initially managing its AWS environment, Circle of Life’s IT team has since taken over and finds the AWS console simple to work with. Dhananjay Yogi, head of cloud services at PC Solutions, explains, “We successfully integrated Jenkins with Amazon Resource Names, which automatically spins up Amazon EC2 instances on-demand to run Amazon EKS clusters. All updates and patches are performed automatically without downtime or manual effort, so performance has improved while lowering cost.”

Speed has likewise improved on AWS, as deploying new instances is faster. In Circle of Life’s previous cloud environment, it took at least an hour to deploy a new instance, whereas on AWS engineers can deploy new Amazon EKS nodes in 40 minutes.

Benefits of AWS: updates and renews Kubernetes configuration automatically; deploys Kubernetes clusters in 40 minutes instead of 1 hour; reduces development costs by 15%; achieves 99.999% uptime for a mission-critical application; autoscales instances when the 60% threshold is exceeded.

AWS Services Used: Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service to run and scale Kubernetes applications in the cloud or on-premises. AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. Amazon Elastic Compute Cloud (Amazon EC2) for Microsoft Windows Server is a web service that provides resizable compute capacity in the cloud. To learn more, visit aws.amazon.com/eks.
Improving Uptime with Kubernetes on AWS

Healthcare analytics is an emerging area of data science that aims to make sense of the enormous volume of data, often unstructured and analog, generated in hospitals and clinics every day. The 2020 pandemic, however, highlighted many of the obstacles faced when sharing health data across organizations and the data siloes within them.

Circle of Life worked with AWS Partner PC Solutions to migrate ZEVAC and other peripheral applications to AWS. In the year since migration, the company and its customers have experienced zero downtime with the ZEVAC platform, with 99.999 percent availability. Before the migration, Circle of Life’s engineers had to manually monitor and check whether its container orchestration tool was updated when Kubernetes configurations changed. It now uses Amazon Elastic Kubernetes Service (Amazon EKS) for container orchestration and Amazon Relational Database Service (Amazon RDS) to manage PostgreSQL and MySQL databases. With Amazon EKS, the company benefits from automatic updates and version control.

About Circle of Life: Circle of Life is a software company with a mission to improve data-based decision making in the healthcare sector. Its main product, ZEVAC, analyzes 7 million patient records daily to show how antibiotics are being prescribed in hospitals.

Supporting Kubernetes Workloads with 200 Windows Virtual Machines

ZEVAC deploys in Docker containers running on Amazon Elastic Compute Cloud (Amazon EC2) instances for Microsoft Windows Server. Currently, Circle of Life runs several hundred Windows virtual machines to support its Amazon EKS nodes. PC Solutions right-sized instances for optimal compute versus cost and integrated the Amazon CloudWatch stack into the app for monitoring. When traffic exceeds the 60 percent threshold, autoscaling provisions additional resources, which ensures ZEVAC remains highly available and durable regardless of data processing volumes. A rough sketch of this threshold-based scaling pattern follows.
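The exact scaling setup isn’t specified; one way to express “scale out when the 60 percent threshold is exceeded” for EC2-backed worker nodes is a target-tracking policy on the node group’s Auto Scaling group. The group name below is a placeholder.

import boto3

autoscaling = boto3.client("autoscaling")

# Keep the worker-node Auto Scaling group's average CPU near 60%:
# EC2 Auto Scaling adds nodes above the target and removes them
# when utilization falls. The group name is illustrative.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="zevac-eks-windows-nodes",
    PolicyName="cpu-60-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)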
Automating CI/CD Pipeline Reduces Costs by 15%

In addition to the Kubernetes migration, PC Solutions worked with Circle of Life to automate the company’s continuous integration/continuous delivery (CI/CD) pipeline. Circle of Life is now using AWS CodePipeline as a fully managed continuous delivery service and Jenkins as an open-source automation server. By integrating native AWS and open-source tools, Circle of Life has reduced its costs by 15 percent.

Gaining Intuitive Dashboards and 25% Faster Deployment

Pundarikaksha Mishra, lead DevOps at Circle of Life Healthcare, says, “AWS dashboards are intuitive, which allows smooth performance of any task.” He continues, “The team at PC Solutions helped with the transition to AWS, which was extremely valuable as our team was new to the platform. The support we’ve received directly from AWS has also been amazing. Within minutes of raising a query, we get a response.” “We’ve experienced greater processing power and faster computing on AWS,” says Mishra.

Supporting Prescription Decisions with Artificial Intelligence

Circle of Life’s roadmap for ZEVAC includes enhancing artificial intelligence to help guide physicians’ decisions when prescribing medication. The company continues to consult with PC Solutions and AWS to support evolving and potential use cases in the cloud. Mishra says, “We’re now thinking of ways to intelligently recommend the course of antibiotics for each patient based on empirical data and the patient’s profile.”"
Claro Embratel Credits AWS Training and Certification as Key Driver in Fourfold Growth of Sales Opportunities _ Claro Embratel Case Study _ AWS.txt,"Claro Embratel Credits AWS Training and Certification as Key Driver in Fourfold Growth of Sales Opportunities

Learn how telecommunications provider Claro Embratel empowered its cloud sales teams with AWS Training and Certification.

About Claro Embratel: Claro Embratel is a Brazilian telecommunications company and member of Grupo América Móvil. It provides a diverse range of offerings to meet customer needs, including security, data center, cloud, customer experience, and connectivity solutions.

Opportunity | Improving the Sales Team’s Cloud Expertise with AWS Partner Training

Since 1965, Claro Embratel has kept pace with technological innovations and invests in its infrastructure and its people to meet new marketplace requirements. The company has been an AWS Partner since 2017 and has engaged in over 300 customer launches on AWS. It holds 261 AWS Certifications, demonstrating knowledge and skills in AWS technology across a wide range of AWS services. “Selling in the cloud requires a deeper understanding of the customer’s business challenges,” says Fabiana Couto Falcone de Melo, cloud business lead at Claro Embratel. “It is difficult to find skilled cloud-certified workers. By training our sales teams, we can build trust between our sales teams and potential customers.”

Solution | Leaning into Foundational AWS Partner Training Courses

To build its AWS practice, Claro Embratel established a Cloud Center of Excellence to train its employees on the latest cloud technologies and promote cloud adoption among its clients. The company engaged AWS Training and Certification, which equips organizations with the practical skills and industry-recognized credentials necessary to succeed in the cloud, to support this initiative. By equipping its employees with cloud knowledge, Claro Embratel has experienced significant business growth. “Within 5 months, we have quadrupled the number of sales opportunities generated by our sales professionals year over year,” says de Melo.

AWS Training and Certification: Learn from AWS experts. Advance your skills and knowledge. Build your future in the AWS Cloud. Earned AWS Partner Accreditations can help you have more prescriptive conversations with customers in the field and provide prospective customers with proof of your AWS Cloud skills and expertise. AWS Partner Accreditations are also a simple way to contribute to Knowledge Requirements and progress through the APN Consulting Partner tiers. Earning AWS Certified Cloud Practitioner validates cloud fluency and foundational AWS knowledge; this credential helps organizations identify and develop talent with critical knowledge related to implementing cloud initiatives.
Overview

For over 50 years, Claro Embratel has been a major telecommunications provider in Brazil. With the rapid advancement of technology, the company identified the need to modernize and upgrade its solutions to keep pace with changing customer demands. Claro Embratel entered a multiyear strategic collaboration with the AWS team to support customers moving to the cloud. As part of this strategic collaboration, Claro Embratel prioritized the upskilling of its team through AWS Partner Accreditation, which equips AWS Partners with foundational AWS knowledge, and AWS Certification, which validates technical skills and cloud expertise. The company mobilized over 500 professionals to earn these industry-recognized credentials and immediately improved its sales pipeline, with year-over-year sales opportunities quadrupling in just 5 months. Through this engagement, Claro Embratel has established itself as a trusted provider of AWS-based solutions.

Claro Embratel began its collaboration with AWS Training and Certification by helping its sales teams learn more about cloud economics and migration. First, the employees participated in AWS Partner: Sales Accreditation (Business), which teaches basic cloud concepts and communication skills to effectively articulate the value of AWS and engage in successful sales conversations with customers. Then, they took the AWS Partner: Cloud Economics Accreditation course, which teaches the benefits of migrating customers to AWS, including cost savings, better performance, and improved agility. Through AWS Partner Training, Claro Embratel mobilized its sales representatives to earn as many industry-recognized credentials as possible. Within 4 months, these representatives achieved 456 Sales Accreditations. They also earned 293 Cloud Economics Accreditations in 6 months. These experts were noticeably more proficient at identifying sales prospects within 1 month after the training.

Outcome | Quadrupling Sales Opportunities and Driving Growth

The company also achieved its 4-year target for the number of professionals with AWS expertise in a matter of months. “Our sales professionals and business developers have a broader repertoire about the cloud and its benefits and challenges,” says de Melo. “They can apply this knowledge in more productive conversations with customers, better qualify business opportunities, and drive new products and services to the market.” In 2023, Claro Embratel will build on the success of its initial AWS Training and Certification program. It plans to expand course offerings to support sales and technical teams working on AWS migration and data analytics solutions. The company projects that it will have more than 600 accredited professionals through this engagement.

Benefits of AWS: 4x increase in year-over-year sales opportunities in 5 months; 456 Sales Accreditations in 4 months; 293 Cloud Economics Accreditations in 6 months. Earning AWS Certified Solutions Architect – Associate validates the ability to design and implement distributed systems on AWS.
“With the support of AWS Training and Certification, our sales professionals are better able to articulate the connection between AWS capabilities and our customers’ business needs,” says José Eduardo Aires Carneiro Braga, alliance lead at Claro Embratel. “Through AWS Training and Certification, we were able to transform our culture and market discourse to position the AWS Cloud.”

Claro Embratel also wanted its presales and technical teams to earn AWS Certifications. By earning these industry-recognized credentials, the company could demonstrate its AWS expertise further and build trust with clients. From 2022 to April 2023, presales and technical team members earned a total of 54 AWS Certifications, including AWS Certified Cloud Practitioner, which demonstrates a foundational understanding of AWS Cloud concepts, services, and terminology; AWS Certified Solutions Architect – Associate, which showcases knowledge and skills in AWS technology; and AWS Certified Security – Specialty, which validates expertise in the creation and implementation of security solutions in the AWS Cloud.

“The sales accreditation course in particular had a great deal of engagement and impact on a daily basis,” says Fátima A. de Sousa, human resources specialist for corporate education at Claro Embratel. “This is due to the knowledge acquired, the opportunity for personal and professional development, and the digital badges that can be shared with colleagues and social networks.” “Our strategic alliance with the AWS team is a key pillar in building capabilities that contribute to our relevance in the IT solutions market,” says de Sousa."
Climedo Case Study.txt,"Climedo Health Captures Patient-Centric, Compliant, and Secure Clinical Data Using AWS

Climedo Health’s mission is to offer patients the best medical treatment through intelligent software solutions. Its powerful, modular, and secure solutions for decentralized clinical trials facilitate faster implementation, higher data quality, and better patient engagement. MedTech and pharmaceutical companies use the cloud-based platform for cutting-edge clinical validation and post-market surveillance of their products. These organizations can access the information through a centralized location that allows them to easily view the data while meeting all regulatory standards for security, data protection, and product safety.

Its data protection and security architecture, based on AWS Key Management Service (AWS KMS), has been successfully audited by multiple private and German government data protection and security institutions for compliance with all legal requirements. “AWS was a great help,” says Sauer. “We have regular calls to discuss our goals. AWS helps us to problem solve on everything from encryption and architecture to growing and scaling our company.”

Benefits of AWS: the solution meets rigorous data protection, encryption, and security standards.

AWS Services Used: AWS Key Management Service (AWS KMS) makes it easy for you to create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications.
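Climedo’s per-customer isolation design isn’t published; as a hedged illustration of one common KMS pattern for tenant isolation (a dedicated key per customer with envelope encryption), consider the following sketch. The key ARN, record shape, and helper name are hypothetical.

import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

def encrypt_for_tenant(tenant_key_arn: str, plaintext: bytes) -> dict:
    # Ask KMS for a fresh data key, returned both in plaintext and
    # encrypted under this tenant's own master key, so data from
    # different customers never shares a master key.
    data_key = kms.generate_data_key(KeyId=tenant_key_arn, KeySpec="AES_256")

    # Encrypt locally with AES-GCM, then discard the plaintext key.
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key["Plaintext"]).encrypt(nonce, plaintext, None)

    # Store ciphertext, nonce, and the *encrypted* data key together;
    # decrypting later requires kms.decrypt on the CiphertextBlob.
    return {
        "ciphertext": ciphertext,
        "nonce": nonce,
        "encrypted_data_key": data_key["CiphertextBlob"],
    }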
With easy-to-build dashboards and modular features, Climedo Health allows its customers to conduct high-quality and efficient clinical research, including product registries, patient diaries, and feedback surveys. Smart dashboards reveal real-time insights into the live status of a study, meaning that customers can view results at a glance and react quickly. “To ensure security, our main goal was to enforce complete isolation between customers’ data,” says Benjamin Sauer, head of backend engineering at Climedo Health. “We chose AWS because it helps us meet data protection standards and provides the scalability we need.”

Decentralized Clinical Trials Boost Participation

Another Climedo patient diary solution, ePRO (electronic Patient-Reported Outcome), proved useful when social distancing restrictions limited hospital access during the COVID-19 pandemic. The ability to provide data remotely meant that more patients could participate in research, and this meant that Climedo Health’s customers could complete more trials—the current patient completion rate is around 90 percent. “This decentralized approach puts patients at the center of the clinical trial process,” says Higginson. “The hospitals and other healthcare providers then benefit from a larger, more diverse group of trial participants, which leads to better clinical results.” Within 12 months of beginning the project, approximately 140 offices were using eDiaries to keep an up-to-date view of potential COVID-19 cases. Migrating its patient diary solutions to AWS also increased the number of study subjects that Climedo Health could support, from 500 participants per study to hundreds of thousands of individuals. At the height of the pandemic, the solution allowed Climedo Health to process more than 30,000 SMS messages sent per day from public health offices to suspected COVID-19 patients.

About Climedo Health: German EDC (electronic data capture) software provider Climedo Health used AWS to create secure, cloud-native, and scalable solutions to better capture and manage clinical data used by pharmaceutical companies, medical device manufacturers, hospitals, and around 150 public health offices. The fast-growing company accelerated its customers’ clinical trials and onboarded hundreds of thousands of patients in a short period of time. Using AWS, Climedo Health created a secure, cloud-native, and scalable electronic data capture (EDC) system for conducting clinical trials. The solution is fully data compliant and continuously updated to meet regulatory requirements.

Benefits of AWS: reduced compliance challenges for customers by providing updates on relevant regulations; a scalable system that can quickly and securely pivot to meet MedTech demands.

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.

AWS Facilitates Rapid Growth

Climedo Health’s ability to securely scale its services for customers at pace, as demonstrated by its work with public health offices across Germany during the COVID-19 pandemic, has led to more customers and rapid growth. The Climedo Health team has quadrupled in size in the last 18 months. The scalability has also made it possible for the team to meet this rising demand, and it has given the company confidence that it can continue to grow.
“Using AWS has made it a lot easier for us to win new customers, and our successes will hopefully help us to win even more future customers too,” says Sauer.

Building a Secure Foundation

Having seen that many medical researchers used spreadsheets and paper-based systems to capture and manage clinical data for their trials, Germany’s Climedo Health saw an opportunity to create a more efficient digital solution. Clinical trials need good data to produce valuable outcomes that study managers can use. One way to ensure this is to provide study participants with a convenient and user-friendly way to share the data with those conducting the study, such as medical device manufacturers, pharmaceutical companies, hospitals, and public health offices.

Supporting Public Health Officials during Difficult Times

In early 2020, Climedo Health re-architected its eDiary for Public Health Offices using AWS. eDiaries help healthcare professionals capture and manage data about the experience of trial participants. The new Symptoms eDiary solution was immediately put into use when the COVID-19 pandemic began and public health officials across Germany struggled with the volume of manual work generated by tracking symptoms of possible cases. Thanks to the ease of onboarding new customers with the AWS architecture, public health offices could quickly use the eDiary solutions to ease the load. Phone calls and visits were replaced with patients inputting symptom data directly into the eDiary from their own mobile devices. Without the eDiary, this process would have required thousands of manual phone calls from public servants to collect the data. “We’ve made life much easier for public health officers, who were previously relying on fax machines and phone calls for tracking cases,” says Catherine Higginson, marketing manager at Climedo Health. “Officials reduced the time spent on tracking symptoms by 80 percent because, with eDiary, it’s fully automated. We’ve revolutionized their systems.”

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. Serverless on AWS: AWS Lambda lets you build and run applications without thinking about servers."
CloudCall Invests in AWS Skill Builder Pivots to a SaaS Model _ CloudCall Case Study _ AWS.txt,"CloudCall Invests in AWS Skill Builder, Pivots to a SaaS Model

AWS Cloud Quest is the only role-playing game to help you build practical AWS Cloud skills. Whether you’re starting your cloud learning journey or diving into specialized skills, AWS Cloud Quest helps you learn in an interactive, engaging way.

To get started, CloudCall performed an AWS Learning Needs Analysis, which helps identify an organization’s cloud skills gaps. Using the results of the assessment, it identified the disparities in team members’ AWS knowledge and built a data-driven plan to accelerate learning. At the core of CloudCall’s training program is AWS Skill Builder, an online learning center. CloudCall relies on the AWS Skill Builder Team subscription to gain visibility across its entire learning community, using its administrative tools to assign identical courses to all participants and establish a base level of knowledge across teams.
Participants can also launch self-paced learning experiences on AWS Skill Builder, where they can practice different cloud skills based on their project needs and interests. With on-demand training, participants can schedule learning time around normal work activities, making it simple to learn on the job. “Having a mix of on-demand and in-person training meant that we could support different learning styles seamlessly,” says Alan Churley, director of software engineering at CloudCall. “Participants could take the courses as they needed to, as many times as required to feel comfortable.” CloudCall employees prepared for their exams using the preparation materials included with their AWS Skill Builder subscription; these resources include 6–8 hours of practice materials, such as videos, hands-on labs, additional practice questions, and access to the Official Practice Exam. Employees then practiced their AWS skills using AWS Cloud Quest, a digital training option where employees can develop in-demand cloud skills in an interactive role-playing game. “AWS Cloud Quest is an exciting environment because it provides a gamified, role-based learning experience, which works best for some learners,” says Ardinois.

In November 2022, CloudCall hosted its first AWS Immersion Day, an event that educates companies about AWS products and services, to teach its employees about serverless architecture and practices using AWS Lambda, a serverless, event-driven compute service. Participants attended lectures by AWS solutions architects during the first half of the day and participated in hands-on activities during the second half.

Outcome | Empowering Organizational Cloud Skills with Specialized Training

CloudCall’s entire product and engineering group is engaged in the training initiative, and 95 percent have achieved the AWS Certified Cloud Practitioner Certification. With their improved AWS expertise, CloudCall’s employees are empowered to implement new features and projects, which fosters innovation toward its goal of providing better customer insights. For example, CloudCall has enhanced its capability to scale services and accelerated the time taken to release products from development to production. To support its SaaS transformation, it seamlessly adopted new AWS services like Amazon OpenSearch Service, which unlocks near-real-time search, monitoring, and analysis of business and operational data.

AWS Immersion Days are a series of events that are designed to educate you about AWS products and services and help you develop the skills needed to build, deploy, and operate your infrastructure and applications in the cloud. Hands-on labs provide you with an immersive experience in the AWS console. Learn how SaaS provider CloudCall upskilled its engineers with AWS Training and Certification.
Transforming the Business Model for CloudCall with AWS Training and Certification

With its digital telephony software, CloudCall helps businesses unlock the full potential of their customer relationship management (CRM) solutions. As part of a mission to enhance digital capabilities, CloudCall is transitioning from a traditional telecommunications company to a software-as-a-service (SaaS) model, powered by Amazon Web Services (AWS).

Overview

CloudCall’s software integrates directly with CRM systems to provide businesses with a 360-degree view of their customers. Beginning as a traditional provider of voice-over-internet-protocol telephony, it is evolving to offer more advanced features, such as automatic call distribution and near-real-time coaching for new hires. “We aim to use machine learning and artificial intelligence to provide valuable insights to our customers based on call data,” says Klaas Ardinois, chief technology officer of CloudCall. “Choosing a SaaS approach gave us greater data control, which facilitated capturing more intelligence during calls to provide additional information to end users.”

Solution | Upskilling Employees on AWS with 100% Workforce Engagement

CloudCall chose AWS to drive its transformation to a SaaS model. It had previously built architecture components on AWS but soon discovered that its engineering team had varying levels of AWS experience. “Some people had never heard of AWS because they came from a pure on-premises world, and others were definitely on their way to learning more on AWS but were not advanced,” says Ardinois. “Our first step was to get everyone on the same baseline.” In the summer of 2022, CloudCall engaged AWS Training and Certification to upskill its product and engineering group. “AWS Training and Certification aligned well with our goals, one of which was to provide the product and engineering group with a structured learning path on our cloud journey,” says Ardinois. To drive the program, CloudCall required its engineers to earn their AWS Certified Cloud Practitioner Certification by the end of the year. This sought-after industry credential validates a foundational understanding of AWS Cloud concepts, services, and terminology. CloudCall’s AWS Training would help employees earn this valuable certification and build their cloud expertise, thus improving their employability.

Benefits: improved troubleshooting and technical support; scaled services; accelerated the transition to a SaaS model and the time taken to release products; 15x faster contact synchronization to match customer data.

The AWS Skill Builder Team subscription grants unlimited access to expert-led AWS Digital Training, self-paced labs, learning plans, practice exams, and more. Team challenges and role-playing games make learning fun, and administrative features enable you to assign goals and track progress.

About CloudCall: CloudCall is a provider of communication software designed for businesses that use customer relationship management solutions. CloudCall aims to unify communications across organizations.
For this cloud transformation to work, its product and engineering group needed a baseline knowledge of AWS services. To strengthen its internal cloud skills, CloudCall engaged in a strategic training initiative with AWS Training and Certification, which helps organizations make the most of cloud capabilities. Now the company can provide better technical support and more advanced solutions to help its customers get the most from their CRM data. Following the AWS Training initiative, CloudCall built a solution that accelerates the process of synchronizing contacts from a customer’s CRM into its system, making it 15 times faster: a process that used to take 5–6 hours can now take less than 20 minutes. This fully serverless solution is powered by several AWS services, including AWS Lambda and Amazon DynamoDB, a fully managed, serverless, key-value NoSQL database. Now, CloudCall is encouraging employees to explore advanced paths. Employees are targeting many AWS Certifications, such as AWS Certified Security – Specialty, which validates expertise in securing data and workloads in the AWS Cloud, and AWS Certified Developer – Associate, which showcases knowledge of core AWS services, their uses, and basic AWS architecture best practices. CloudCall also plans to host two AWS Immersion Days per year. “AWS Training and Certification helped us set our program up and make this happen,” says Ardinois. “If I had to figure this out myself, I’d still be struggling. It’s been great to work with the AWS team and see them push this initiative forward for us.”
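To make the serverless contact-synchronization design concrete, here is a minimal sketch of the kind of AWS Lambda handler that could write a batch of CRM contacts into Amazon DynamoDB. The table name, event shape, and field names are assumptions for illustration; the case study does not describe CloudCall’s actual implementation.

import json
import boto3

# Hypothetical table; CloudCall's real schema is not public.
table = boto3.resource("dynamodb").Table("crm-contacts")

def handler(event, context):
    # Assume the event carries a batch of contact records pulled from the CRM.
    contacts = event.get("contacts", [])
    for contact in contacts:
        table.put_item(Item={
            "contact_id": contact["id"],   # partition key (assumed)
            "name": contact["name"],
            "phone": contact.get("phone", ""),
        })
    # Many such invocations can run concurrently, which is what lets a
    # 5-6 hour sequential sync compress into minutes.
    return {"statusCode": 200, "body": json.dumps({"synced": len(contacts)})}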
" CloudWave Modernizes EHR Disaster Recovery and Provides Fast Secure Access to Archived Imaging Data on AWS _ Case Study _ AWS.txt,"Opportunity | Breaking Free from an On-Premises Backup Environment

CloudWave understands the importance of protecting patient data. Over 280 hospitals and healthcare organizations rely on the software company for mission-critical services, including secure electronic health record (EHR) applications. Without secure and reliable access to patient data, caregivers cannot perform their jobs and patients’ lives could be at risk. CloudWave’s on-premises disaster recovery environment provided a backup in case of an outage, but the company wanted to further improve the system’s availability and resilience.

Founded in 1991, CloudWave is a provider of cloud and managed services for healthcare organizations, supporting over 125 EHR, clinical, and enterprise applications. The company previously hosted the environments for customers’ EHR systems and disaster recovery services in two separate data centers. “To provide the disaster recovery service, we had to keep a fully redundant set of infrastructure and hardware at each of our facilities,” says Matt Donahue, chief technical officer and vice president for product development at CloudWave. Hardware and infrastructure costs made this setup expensive, and it required significant manual effort to maintain. To reduce the cost to customers and improve the efficiency of the disaster recovery environment, CloudWave decided to use the cloud. After evaluating potential vendors, the company chose AWS. “The business support that AWS provided, as well as the functionality of the services, was much better than the competitors that we evaluated,” says Donahue. “Due to the maturity of AWS services and the ease with which our operations team adopted them, we were able to deploy faster than we would have if we had gone with another vendor.” The AWS team also supported CloudWave in identifying pain points that other healthcare customers had experienced, helping the company avoid common mistakes.

Searching for a cost-efficient and high-performing solution, CloudWave chose to migrate its EHR and disaster recovery systems from its private cloud platform to the cloud on Amazon Web Services (AWS). Through this initiative, the company effectively scaled its EHR and disaster recovery environments, reducing return-to-operations time for its healthcare customers by approximately 83 percent without increasing service fees. Now, CloudWave is offering customers a reliable, cost-optimized disaster recovery solution with reduced return-to-operations and recovery-point objectives for MEDITECH EHR and enterprise applications, powered by AWS.

About CloudWave

CloudWave is a cloud and managed services provider that builds and supports clinical, enterprise, and electronic health record applications for medical providers. Founded in 1991, it serves more than 280 hospitals and healthcare organizations.

Solution | Improving EHR System Resilience on AWS

CloudWave configured its EHR backups to target Amazon S3 Intelligent-Tiering, an Amazon S3 storage class that delivers automatic storage cost savings when data access patterns change, without operational overhead or performance impact. If a disaster occurs, CloudWave can rapidly deploy all its customers’ environments from an Amazon S3 bucket, facilitating business continuity. Using this solution, CloudWave reduced its return-to-operation time from 12 hours to 2 hours, effectively improving the resilience of its disaster recovery environment. “Patients don’t realize that their lives might depend on an EHR system being up or down. Outages also prevent providers from performing their jobs,” says Donahue. “On AWS, our return to operation is much faster, and the patient’s medical record can be available to a caregiver within a 2-hour time frame.”
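A minimal sketch of targeting backups at the S3 Intelligent-Tiering storage class with boto3 follows. The bucket, key, and encryption settings are hypothetical, not CloudWave’s actual configuration.

import boto3

s3 = boto3.client("s3")

# Upload a nightly EHR backup directly into the Intelligent-Tiering storage class.
# Bucket and key names are hypothetical.
s3.upload_file(
    "ehr-backup-2022-06-01.tar.gz",
    "example-dr-backups",
    "ehr/2022-06-01/backup.tar.gz",
    ExtraArgs={
        "StorageClass": "INTELLIGENT_TIERING",
        "ServerSideEncryption": "aws:kms",  # assumed; healthcare data should be encrypted
    },
)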
Previously, deploying the disaster recovery environment was a people-heavy operation for CloudWave. To streamline this process, the company adopted AWS CloudFormation, a service that lets customers model, provision, and manage AWS and third-party resources by treating infrastructure as code. “Previously, our team followed a paper runbook for configuration standards and conducted a monthly audit to catch any gaps,” says Donahue. “Now, we have everything built into an AWS CloudFormation template that we can audit and validate ahead of time. We know that every deployment looks the same, feels the same, and has the exact same security apparatus, which has been very beneficial.” With the automation of security and compliance processes, CloudWave has improved its security posture and significantly reduced manual labor for its employees.

CloudWave’s customers require fast and reliable data access so that they can provide patients with the medical care that they need, when they need it. To improve data storage capacity and retrieval speed, the company adopted Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. Using Amazon S3, CloudWave improved its data storage capacity by 150 percent, exceeding 5 PB of stored data, all while reducing its costs and strengthening its agility. To store large picture archiving and communication system files, CloudWave relies on Amazon S3 Glacier Instant Retrieval, an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. “We were able to dramatically reduce costs by tiering our backups to Amazon S3 Glacier Instant Retrieval,” says Donahue. “We are now able to provide medical image archiving as a service for our customers at a price that fits their budget while offering the security, resiliency, and redundancy required for healthcare compliance.” By migrating its backups from on-premises storage systems to Amazon S3 Glacier Instant Retrieval, CloudWave reduced its storage costs by 25 percent. This cost reduction, combined with infrastructure and hardware savings, has led CloudWave to unlock $1 million in annual storage cost savings.

Outcome | Continuing to Transform Healthcare Together

On AWS, CloudWave provides the clinicians that it serves with fast, secure access to patient data, supporting patient care quality and business continuity. The company will continue to use AWS services to improve its applications and deliver new services to customers. CloudWave appreciates the collaborative and proactive nature of the AWS team and looks forward to continuing to build on AWS in the future. “AWS wants to help us improve our services and bring new offerings to market rather than relying on us to say what we want to do,” says Donahue. “The team has been phenomenal to work with.”
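One way to express the tiering of archived imaging files to S3 Glacier Instant Retrieval is an S3 lifecycle rule. This boto3 sketch uses a hypothetical bucket, prefix, and transition window rather than CloudWave’s real setup.

import boto3

s3 = boto3.client("s3")

# Transition picture archiving and communication system (PACS) files to
# S3 Glacier Instant Retrieval after 30 days. Bucket, prefix, and days are assumptions.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-pacs-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-imaging-to-glacier-ir",
                "Status": "Enabled",
                "Filter": {"Prefix": "imaging/"},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER_IR"}],
            }
        ]
    },
)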
" CMD Solutions Case Study _ AWS.txt,"CMD Solutions Bridges Skills Gaps to Grow Revenue by 30%

About CMD Solutions

CMD Solutions assists organizations by transforming their IT operations using specialized AWS automation expertise. The company creates fully automated, customized AWS environment deployments using DevOps tool sets.

CMD Solutions had been experiencing an uptick in demand from customers seeking to migrate to the cloud using Amazon Web Services (AWS), especially during the COVID-19 pandemic. The company needed to hire more skilled AWS consultants internally to meet customer needs, but the external market had an extreme skills shortage, making it expensive and impractical to hire the necessary talent.

Collaborating with AWS Training and Certification and with funded support, CMD Solutions created a unique deep-dive program that features its own field consultants teaching the practical use of AWS alongside publicly available digital AWS courses in cloud theory. The company itself became an authorized AWS Training Reseller, resulting in a new revenue stream. The boot camps helped drive a greater than 130 percent return on investment and attracted new talent to the company. CMD Solutions also saw a 30 percent increase in revenue corresponding with an increase in upskilled employees, which helped it meet demand and accelerate customer migration to the cloud by five times.

Additionally, Mantel Group as a company grew from 300 to 800 employees in about 18 months and has significantly accelerated its onboarding process to keep up with customer demand. Rather than taking about 6 months to start working with customers, CMD Solutions employees can now begin doing billable work within 2 weeks. The company’s designation as an AWS Training Partner has also created a new revenue stream, driving a return on investment of more than 130 percent in 2022 that includes $18 million in potential annual recurring revenue. “Through the training, we’ve been able to bring people in, upskill them, and add to our culture,” says Becker. “It’s also shown that we’re willing to invest in our employees, a value which can be difficult to quantify.” The investment in training contributed to back-to-back top rankings for Mantel Group in Australia’s “Best Workplaces” list for 2021 and 2022, compiled by an Australian workplace research group.

Recruiting and retaining diverse employees and promoting a culture of loyalty also have positive effects on the company’s return on investment for the training program. The average experience of the employees going through the training program was 14.9 years. These IT professionals have, in some cases, decades of industry experience with servers, scripting skills, and understanding of DevOps, with little experience on AWS until participating in LearnCMD.

The COVID-19 pandemic accelerated cloud migration demand from CMD Solutions’ customers. The corresponding increase in demand for engineers with cloud expertise exacerbated a lack of highly skilled AWS consultants in Australia and New Zealand. CMD Solutions realized that it needed to satisfy increased customer demand through more AWS training for its employees.
CMD Solutions worked with AWS Training and Certification to create a specialized training program called LearnCMD, an AWS boot camp designed to upskill IT professionals with no AWS experience. Starting in November 2020, the company ran a 4-week LearnCMD program once per quarter. About 30 percent of CMD Solutions consultants engaged in the program, with 85 percent of internal recruits earning their AWS Certified Solutions Architect – Associate certification within 30 days of the training.

Outcome | Looking to the Future with AWS Training Programs

As an authorized AWS Training Partner, CMD Solutions will continue expanding its training program to more customers. As customers complete the LearnCMD program, CMD Solutions plans to offer AWS Skill Builder, a digital learning center for building in-demand cloud skills. Through AWS Skill Builder, CMD Solutions provides customers with a path to train their employees further with deep subject matter knowledge that they can then bring in house.

Working with AWS Training and Certification

As the internal program grew, CMD Solutions saw an opportunity to support its customers by helping them to address the skills shortage through similar training programs. It developed an external training offering for LearnCMD to upskill customers who wanted the same training to fill AWS skills gaps in their own teams. CMD Solutions held its first customer-facing training sessions in January 2022, ultimately training 37 attendees from 10 customers on AWS. Each training featured 15 days of classes, including five AWS Solutions-Focused Immersion Days events, which are designed to educate businesses about AWS products and services and help them develop the skills needed to build, deploy, and operate infrastructure and applications in the cloud.

CMD Solutions is also integrating LearnCMD into its diversity and inclusion initiatives. For example, participants in its future associate program, Women Who Code, have the opportunity to opt into LearnCMD during the 6-month Women Who Code program. That way, they can additionally focus on AWS skills and eventually contribute to diversity within CMD Solutions’ workforce. “We’re investing in our employees to meet the demand of our customers and help us to scale and grow,” says Becker. “The robust training program that we built with AWS Training and Certification was a central part of achieving that.”

Approximately 34 percent of the current CMD Solutions workforce graduated from LearnCMD, and 20 percent of participants are running LearnCMD courses on their own. Through the training programs, CMD Solutions grew from 72 skilled consultants to 170 in less than 1.5 years, a 136 percent increase. Since implementing these training programs, CMD Solutions has seen 30 percent growth in revenue, and it has helped customers accelerate their cloud migrations by five times. The increase in skilled consultants also helps meet increasing demand for CMD Solutions’ services.
Based in Australia, CMD Solutions, part of Mantel Group, helps organizations to transform IT operations using specialized AWS automation expertise. Founded in 2015 and acquired by Mantel Group in 2019, CMD Solutions creates fully automated, customized AWS environment deployments using DevOps continuous integration and delivery tool sets. The company not only delivers quality services to its customers but also works to empower, educate, and prioritize employees as valued consultants within the company. “Within CMD Solutions, we are extremely focused on AWS,” says Bryan Becker, CMD Solutions cloud excellence practice manager. “We work with customers to provide additional skill sets in digital advisory and data security areas. We are a one-stop shop for solutions that our customers need.”

" Cognitran Deploys Customized CDN Solution in under 12 Weeks Using Amazon CloudFront.txt,"Automotive software provider Cognitran Limited (Cognitran) was looking to build and deliver a customized content delivery network (CDN) solution in under 3 months so that it could quickly disperse technical information and meet the requests of one of its customers. Cognitran’s customer wanted an optimized CDN solution that would deliver competitive performance along with commercial benefits. The customer was also working on an accelerated timeline because it was facing an automatic contract renewal with its previous CDN vendor.

Cognitran decided to build a new solution that would quickly deliver large, complex files after one of its customers approached the company with this request. Previously, Cognitran and this customer had collaborated to build a custom technical information distribution system that ran on a CDN from a third-party vendor. However, the customer was looking for a CDN solution that would balance performance and cost-effectiveness. “We have users all around the world, and they want the best possible experience in terms of responsiveness,” says Butterworth. Cognitran’s customer was also under pressure to come up with a new solution because it was facing a contract renewal with its incumbent vendor in 3 months.

Solution | Deploying a Custom-Built CDN System in 3 Months

By April 2022, Cognitran had completed the proof of concept and received approval from its customer’s IT team to deploy the new CDN system. From there, Cognitran worked on the implementation so that it would not affect its customer’s production environment. “We had zero downtime or service interruption during the switchover,” says Butterworth.
“It was an incredible achievement for us, especially considering the time constraints.” Using this custom-built system, Cognitran’s customer can quickly deliver content anywhere with 99.99 percent uptime, without having any physical infrastructure in place. “Using Amazon CloudFront means that we can deliver content to our customer very quickly,” says Butterworth. “That reliability is key to speeding up the technician experience.” Cognitran and its customer also have greater visibility into the performance of the new solution compared with the previous CDN, which helps Cognitran troubleshoot errors and develop relevant new features as needed.

To meet this request, Cognitran engaged Amazon Web Services (AWS), and the company worked on developing a scalable solution that could deliver technical files and service information with low latency and baked-in security provisions. In less than 12 weeks, Cognitran implemented a new solution using AWS services, including Amazon CloudFront, which securely delivers content with low latency and high transfer speeds. Now, Cognitran’s customer can deliver content almost instantaneously while maximizing cost savings. After successfully deploying this new solution, Cognitran joined the AWS Partner Network, and the company plans to incorporate this custom-built solution into its software offering.

Given its history of using AWS, Cognitran decided to engage AWS Professional Services, which helps companies achieve their desired business outcomes using AWS solutions. Cognitran relied on technical advice from the AWS Professional Services team to accelerate its creation of a secure solution that would receive authorization from its customer’s internal IT team. “It was critical to get this system implemented in the timescale we were given,” says Butterworth. “We built a proof of concept alongside the AWS Professional Services team that included some augmented security aspects.”

Cognitran developed this new solution using CloudFront as the backbone for delivering content in milliseconds. To meet its customer’s security requirements, the company also implemented AWS Shield, a managed distributed-denial-of-service protection solution, along with AWS Firewall Manager, which gives companies the ability to centrally configure and manage firewall rules across accounts and applications. “Using out-of-the-box solutions like AWS Shield and AWS Firewall Manager was very attractive to us,” says Butterworth.
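As a rough sketch of standing up such a CDN, the following boto3 call creates a CloudFront distribution in front of an S3 origin. The origin bucket is a placeholder, the cache policy ID shown is AWS’s managed CachingOptimized policy, and Cognitran’s real configuration, including its AWS Shield and AWS Firewall Manager rules, is not public.

import time
import boto3

cloudfront = boto3.client("cloudfront")

# Minimal distribution config; many optional fields are omitted.
# Origin bucket name is hypothetical.
response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # must be unique per request
        "Comment": "Technical-information CDN (illustrative)",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "content-origin",
                    "DomainName": "example-content.s3.amazonaws.com",
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "content-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Managed "CachingOptimized" cache policy
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
print(response["Distribution"]["DomainName"])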
Opportunity | Distributing Complex Calibration Files to OEMs

Automotive internal software has become more advanced over time, and many of Cognitran’s OEM customers require complex calibration files to perform the necessary maintenance and repairs. “Cars often have new technologies, like autonomous driving, electrification, infotainment, and telematics,” says David Butterworth, director and business leader at Cognitran. “The amount of software content and technical information required for one car has grown exponentially.”

Outcome | Joining the AWS Partner Network

“We want to expand into different areas, such as connected vehicles, remote diagnostics, and vehicle monitoring,” says Butterworth. “Becoming an AWS Partner will help us target a specific market share and attract more OEMs to use our SaaS solutions.” Based on the results of this project, Cognitran has decided to add this new system to its SaaS offering. “We can secure a new revenue stream by offering this solution,” says Butterworth. Cognitran has also joined the AWS Partner Network, which will help it grow its business on AWS. The company has already enrolled in several AWS training opportunities to deepen its understanding of CloudFront and upskill its teams.

About Cognitran

Automotive software-as-a-service (SaaS) provider Cognitran offers technical information software and systems around after-sales, diagnostic services, data analytics, content management, and multilingual publications. The company serves over 200,000 active users across original equipment manufacturers (OEMs) in 130 countries.

" Comscore Maintains Privacy While Cross-Analyzing Data using AWS Clean Rooms _ Case Study _ AWS.txt,"Comscore, a global media ratings company, provides its advertising customers with rich, accurate insights about their audiences and campaign effectiveness by ingesting and cross-analyzing its panel data with multiple other sources—a process that generally involves migrating data from server to server. Comscore wanted to provide customers with a simpler option: an interoperable environment that collaborators can access to analyze datasets without revealing their raw data.

Comscore turned to Amazon Web Services (AWS) and chose AWS Clean Rooms to uphold privacy-enhanced collaborations with its partners. AWS Clean Rooms helps Comscore’s customers and partners to securely match, analyze, and collaborate on their combined datasets with ease and without sharing or revealing underlying data. Using this solution, Comscore can invite up to five collaborators into an AWS Clean Room and pull pre-encrypted data into a configured data table from Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve any amount of data from anywhere. Then, Comscore can set up its own privacy controls, including a mutually agreed upon join key that gives collaborators the ability to match data tables and perform analyses using a double-blind method. This method means that all parties can protect sensitive data, such as cookies, first-party IDs, and IP addresses, and run queries on combined data to gain richer, more comprehensive insights. “Instead of ingesting all that information and doing the analysis behind our firewall, we can join those things in AWS Clean Rooms and get back what we need,” says Brian Pugh, chief information officer at Comscore.
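To illustrate the double-blind join the case study describes, here is the general shape of a query that collaborators might run inside a clean room, expressed as a Python string. The table and column names are invented for illustration and are not Comscore’s schema.

# Illustrative only: a double-blind analysis over a mutually agreed join key.
# Neither party sees the other's row-level data; only the aggregate comes back.
QUERY = """
SELECT audience.demo_segment,
       COUNT(DISTINCT panel.join_key) AS matched_viewers
FROM   panel_data        AS panel     -- hypothetical Comscore table
JOIN   partner_audience  AS audience  -- hypothetical partner table
       ON panel.join_key = audience.join_key  -- pre-hashed, mutually agreed key
GROUP  BY audience.demo_segment;
"""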
Additionally, Comscore can organize its analytics by demographics or other categories so that it can identify trends in how groups of people interact with certain media. Comscore can also connect AWS Clean Rooms with Amazon QuickSight—a solution that provides unified business intelligence at hyperscale—so that it can visualize its data in one place using interactive, customizable dashboards.

Benefits of Using AWS

Media ratings company Comscore can provide richer insights to advertisers while maintaining data privacy by securely collaborating on its data with third parties using AWS Clean Rooms.

About Comscore

Analytics and insights provider Comscore provides a wide range of data-driven solutions that support planning, transacting, and measuring media across channels. It serves media companies and advertisers, promoting transparency and trust within the industry.

With its underlying infrastructure built on AWS, Comscore can scale to ingest data from thousands of data sources and standardize its processes for data collaboration with other enterprises by using AWS Clean Rooms. Further, Comscore can avoid the costs and risks associated with the physical migration of data from one environment to another, as well as the development costs involved in standing up an environment with the necessary security and governance provisions.
As a result, Comscore can maintain its competitive edge and improve the accuracy of its analytics for its customers as it continues to ingest and cross-analyze new information from different sources. “AWS Clean Rooms...helps Comscore to provide the best possible measurement and support to our data partners to trust that the data that they’re providing is safe and protected,” says Pugh.

" Concert.ua Manages 1000 Traffic Spikes Using AWS Serverless _ AWS EC2.txt,"Concert.ua had migrated to a small cloud provider in 2017, but the arrangement was frustrating the company. Although the cloud was more efficient and flexible than managing its own on-premises servers, it had to provision servers manually, a process that could occupy several staff for many hours. Before using AWS, technical staff estimated how many servers were needed but often ended up overprovisioning and paying for unused resources. “Even when a traffic spike was expected, it was always a guess as to how many servers we’d need,” says Yevgen Lysenko, founder and chief technology officer (CTO) at Concert.ua. “But there was no other option with the resources and technologies we had at the time.”

Benefits of AWS

Concert.ua turned to AWS for out-of-the-box services that would automatically scale fast enough to deal with unexpected traffic spikes. Concert.ua developers have also reduced the time it takes to implement APIs using Amazon API Gateway and AWS Lambda. Instead of spending time coding, the developers send high-level instructions to AWS Lambda and can manipulate backend services to access data, business logic, and application functionality. “We couldn’t launch APIs as quickly as we can now,” says Lysenko. “Previously, we had to do a lot of coding, but now it’s 300–500 percent faster. Using AWS, our software development cycle takes less time and effort, by fewer people. And it costs less than our previous setup.”

Before using AWS, when a customer purchased a ticket during a busy period, they had to wait for the database to work through a queue of requests before receiving a confirmation. Now Concert.ua uses AWS Lambda to process multiple transactions simultaneously—so customers no longer have to wait. As soon as they complete their transaction, Concert.ua generates and dispatches the ticket. “Using AWS Lambda and AWS Fargate, we can have simultaneous transactions running in real time,” says Lysenko. “Everything just works and it’s all automated, which is fantastic.” Lysenko admits he was surprised by the results. “We didn’t think Fargate would be useful, but we quickly changed our minds once we tested the service. We discovered that it not only scales much faster, it’s also cost efficient. Fargate containers are twice the capacity of our previous containers, and so we use fewer containers than expected,” he says. “We were looking for a magic button that we could press to make our transaction processing run faster. Instead, we found AWS Fargate.”
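A minimal sketch of a Lambda handler behind Amazon API Gateway that confirms a ticket purchase as soon as the transaction completes is shown below. It writes to Aurora through the RDS Data API, which is one way to handle many concurrent invocations against Aurora Serverless without managing connection pools; the ARNs, database, and payload shape are assumptions, since Concert.ua’s code is not public.

import json
import os
import uuid
import boto3

rds = boto3.client("rds-data")

# Hypothetical ARNs supplied via environment variables.
CLUSTER_ARN = os.environ["CLUSTER_ARN"]
SECRET_ARN = os.environ["SECRET_ARN"]

def handler(event, context):
    order = json.loads(event["body"])  # API Gateway proxy payload (assumed shape)
    ticket_id = str(uuid.uuid4())
    # Each purchase runs in its own concurrent invocation, so confirmations
    # no longer queue behind other transactions.
    rds.execute_statement(
        resourceArn=CLUSTER_ARN,
        secretArn=SECRET_ARN,
        database="tickets",
        sql="INSERT INTO tickets (id, event_id, buyer_email) VALUES (:id, :event_id, :email)",
        parameters=[
            {"name": "id", "value": {"stringValue": ticket_id}},
            {"name": "event_id", "value": {"stringValue": order["event_id"]}},
            {"name": "email", "value": {"stringValue": order["email"]}},
        ],
    )
    return {"statusCode": 200, "body": json.dumps({"ticket_id": ticket_id})}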
Ukrainian event ticketing company Concert.ua experienced unexpected spikes in traffic that overwhelmed its website, leaving customers unable to complete transactions. Using fully automated scaling and a serverless architecture built on Amazon Web Services (AWS), the company has increased the reliability and availability of its systems and reduced infrastructure costs. Its customers are able to reliably purchase tickets for popular events, even when traffic is high.

Concert.ua wanted to find a solution that would allow its staff to focus on improving its ticketing application and working on innovative marketing strategies instead of spending time troubleshooting its infrastructure. “Looking after the infrastructure was a never-ending story,” says Lysenko. “Something was always wrong and we never had enough people to do all the work.”

Dealing with 1,000% Traffic Spikes

To automatically scale up or down to handle traffic spikes, Concert.ua initially chose to use open-source Docker containers to package its SQL database. It then uploaded them to Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity. The announcement of a popular event, or a mention on social media, results in a sudden influx of visitors to the Concert.ua site, causing traffic increases of anywhere between 400 and 1,000 percent within minutes. Some initial provisioning experiments reduced the time to spin up a server, but the solution was still too slow to deal with sudden large spikes in traffic. So Concert.ua tried AWS Fargate, a serverless, pay-as-you-go compute engine.

Migrating to a Serverless Architecture

The company migrated to AWS Lambda, a serverless, event-driven compute service that runs code for virtually any type of application or backend service without provisioning or managing servers. Concert.ua also used Amazon Aurora, a MySQL- and PostgreSQL-compatible relational database built for the cloud. Concert.ua is Ukraine’s largest ticketing agency and handles almost half of the country’s online ticket sales.
To win over customers, it needs to provide fast and reliable services so that event-goers don’t choose to purchase tickets from competitors. Concert.ua also transitioned from its traditional approach of service provisioning to infrastructure as code, using the open-source Terraform tool.

Getting 99.9% Uptime for Less Cost

The migration has improved system reliability while also reducing the cost of operating the ticketing infrastructure. “When we used the AWS calculators we were unsure how much the services might cost us, but most of the time our bill has been less than we estimated,” says Lysenko. “The bill is always relative to our business activity, so when the bills are high it means that we have been earning more.” Concert.ua’s ticketing site can now handle large, unexpected spikes in traffic and reports 99.9 percent uptime. In addition, its technical staff focus on higher-value projects that help the business grow its market share and further improve the customer experience.

About Concert.ua

Concert.ua is one of Ukraine’s largest ticketing companies in terms of revenue, customers, and ticket sales. Its ticketing site receives almost 2 million visitors every month.

" Cost Savings of 20 and 8 Hours of Data Processing Saved across 500 Spark Jobs Using AWS Graviton2 Processors _ Wealthfront Case Study _ AWS.txt,"Wealthfront uses Amazon Web Services (AWS) for its data processing and compute workloads. The company runs its data processing on Amazon Elastic Compute Cloud (Amazon EC2), a service that provides secure and resizable compute capacity for virtually any workload. By upgrading its infrastructure, the company has saved 20 percent on costs, reduced runtime by 5 percent, and lowered its carbon footprint. “Using AWS Graviton2 processors, our pipelines run faster and cheaper, providing us with important benefits,” says Bandaru. “Running our data workloads faster means downstream jobs run faster. And because Amazon EMR is one of our main expenses, the profitability of the service was important to us.” Saving runtime was the main motivator for Wealthfront in using AWS Graviton2 processors. Each of the company’s 500 data ingestion pipelines ingests data every day. Across all pipelines, the company has saved 8 hours of data processing a day, amounting to a reduction of 5 percent.
Solution | Running Amazon EMR to Provide Automated Investment Services

Using AWS Graviton processors, the data ingestion pipelines run automatically in the background while engineers work on other tasks. When these pipelines run faster, the day’s output is ready sooner and the whole operation is completed more quickly. “Any saved time increases our ability to start trading at the right moment,” says Arup Ray, head of data engineering at Wealthfront. “Accelerating our data processing is critical from a business perspective.” This faster runtime gives Wealthfront’s automated investing algorithms more time to manage clients’ investments, and because the same instances run for a shorter duration, the lower power consumption translates to a lower carbon footprint for the company.

As part of the upgrade, the company also did some prerequisite work that included Scala and Spark version upgrades compatible with Amazon EMR 6.2. Starting in 2019, Wealthfront made several upgrades before migrating to Amazon EMR 6.2 in February 2022. Once EMR 6.2 was in place, implementing AWS Graviton2 processors took less than a month, and the rollout was completed in March 2022. “Because of the way the code is structured to launch Amazon EMR infrastructure, the upgrade went smoothly,” says Nithin Bandaru, data infrastructure engineer at Wealthfront. “We needed to make sure critical pipelines were functional and do some runtime analysis, and the entire upgrade went well.”

Outcome | Expanding AWS Graviton2 Processor Use for Future Growth

Wealthfront currently runs around 95 percent of its data workloads using AWS Graviton2 processors. The company serves more than 500,000 clients, and this solution can scale to support over a million clients while still producing faster runtimes. “We are able to serve more clients without incurring large additional data processing costs,” says Ray. “Using AWS, we’ve optimized our infrastructure to scale along with our company’s growth. And running AWS Graviton2 processors is a cost-efficient way of improving our elasticity.” Each year during re:Invent, an AWS conference for the global cloud community, the company produces innovative ideas to help improve infrastructure and efficiency and to further reduce costs. “AWS is awesome,” says Bandaru. “It has been a really nice experience working on AWS.”
Opportunity | Using AWS Graviton2 Processors Saved 20% on Costs for Wealthfront

To provide automated financial investment services to young professionals who want to build long-term wealth through their investments, Wealthfront decided to upgrade its infrastructure to improve automation while lowering business costs. The company wanted to reduce data processing workload runtime and save on costs while providing a better product for its customers.

About Wealthfront

Wealthfront integrates smart saving and investing products to help the next generation of investors build long-term wealth. Founded in Palo Alto, California, in 2008, the startup has grown to manage more than $30 billion in assets for over 500,000 clients, and it has been using AWS from the beginning. Now, Wealthfront manages over 500 data pipelines, running some of its preinvesting jobs. The large financial data processing workloads run on a combination of transient and persistent clusters using Amazon EMR, an industry-leading cloud big data solution for petabyte-scale processing, interactive analytics, and machine learning. By using Amazon EMR to support its compute workloads, Wealthfront generates derived datasets for marketing needs, clickstream data, client financial data, and tax-related data.

Wealthfront has been improving its Amazon EMR infrastructure every year and wanted to take these improvements a step further by using AWS Graviton processors, which are designed to deliver the best price performance for cloud workloads running on Amazon EC2. Saving time is critical for Wealthfront because its customers depend on fast and efficient data pipelines to make financial investment trades. On busy trading days, all the available financial data needs to be processed before the following day’s trading can be computed. To better support this workload and accelerate data processing on Amazon EMR, Wealthfront migrated to AWS Graviton2 processors.

Another major benefit of upgrading to AWS Graviton2 processors is the cost savings. “Using AWS Graviton2 processors provides, at a minimum, a 20 percent discount for the same jobs in the same amount of time compared with the old system,” says Bandaru. The company has seen performance reports of higher discounts as well, and each month it saves 20 percent by using AWS Graviton2 processors. Implementing the service on more pipelines will offer even more savings. “The main impact of using AWS Graviton2 processors is the cost savings,” says Bandaru. “As the underlying architecture of the processors changes, we will reap more benefits.”
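To show what running Spark on Graviton2 looks like in practice, here is a sketch that launches a transient EMR cluster on Graviton2-based (m6g/r6g) instance types with boto3. The cluster name, sizes, and IAM roles are placeholders; Wealthfront’s actual job configuration is not public.

import boto3

emr = boto3.client("emr")

# Transient cluster on Graviton2-based instance types; all names and sizes are
# illustrative, not Wealthfront's configuration.
response = emr.run_job_flow(
    Name="spark-ingestion-graviton2",
    ReleaseLabel="emr-6.2.0",  # the release the case study says Wealthfront migrated to
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m6g.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "r6g.4xlarge", "InstanceCount": 4},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate when the Spark step finishes
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])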
" Coventry University Group Empowers Next Generation of IT Professionals Using AWS Educate and AWS Academy _ Case Study _ AWS.txt,"Coventry University Group saw an opportunity to help students get hands-on experience to meet UK employers’ needs for trained workers with IT experience and digital skills—particularly with the cloud and cloud-based services. To meet this high demand, Coventry University Group chose Amazon Web Services (AWS) and worked with AWS Educate to design a bachelor of science degree in cloud computing.

Opportunity | Addressing the Need for Specialized Skills in an Adaptable Format

Coventry University Group has more than 30,000 students and 200 undergraduate and postgraduate degrees and is based in the United Kingdom, which is quickly establishing itself as a global tech powerhouse. In the first 6 months of 2021, $18 billion in tech funding was raised, three times the amount raised in 2020. The tech boom has led to a surge in hiring, with IT-related jobs now making up 13 percent of all vacancies in the UK. Cloud-related skills are valuable assets in today’s marketplace, with available positions ranging from cloud engineering and analysis to administration and security. Despite this demand, students pursuing careers in the IT industry face challenges in gaining the hands-on experience and résumé-boosting certifications necessary to overcome IT access hurdles. To address student and industry needs and offer a strong foundation for future IT careers, CU Coventry, a wholly owned subsidiary of Coventry University Group, began to build bachelor of science (BSc) programs dedicated to cloud computing. The programs included a 3-year BSc degree in cloud computing and a 2-year accelerated version of the same degree. The cloud computing BSc was designed with core skills and technical knowledge components in mind, incorporating a contemporary approach to meet the digital workplace’s growing and varied needs. “The ability to use cloud tools without additional cost to the students is an amazing value and helps them develop more advanced skills,” says Daniel Flood, lecturer in cloud computing at CU Coventry. Working with various AWS Training and Certification features, the program helps graduates learn the skills and functions needed to keep pace with the industry.

Solution | Creating a Tech-Driven Solution
In early 2019, Coventry University Group subsidiary CU Coventry piloted this approach by introducing students to cloud computing using resources from AWS Educate, which offers hundreds of hours of self-paced training and resources for new-to-cloud learners. CU Coventry’s bachelor of science in cloud computing course officially began in September 2020 and has already seen success from the program’s industry-driven framework.

Students who successfully engage in the program graduate with in-demand skills for careers in the cloud, including valuable experience with AWS services through AWS Academy Learner Labs. AWS Academy provides higher education institutions with a ready-to-teach cloud computing curriculum that prepares students for AWS Certifications, which validate technical skills and cloud expertise for in-demand cloud jobs. “The most important thing is for the modules to reflect what the industry needs. We want students to add value to the global workforce,” says Flood. Taking advantage of AWS Education Programs, CU Coventry’s BSc degree in cloud computing innovates on AWS to track the IT industry’s rapid pace.

Coventry University Group used AWS Education Programs to create a comprehensive and flexible degree to help students meet the IT industry’s growing demand for cloud skills. Both the 3-year bachelor of science degree in cloud computing and its accelerated version were developed in collaboration with AWS and designed by working backwards from the cloud skills employers are currently seeking in the UK and across the global labor market. “The approach gave us insights into what skill gaps were lacking in the industry. From there, we designed the courses, with the AWS team providing helpful inputs,” says Flood.
“For example, the AWS team pointed out that there was an industry need for serverless computing skills, and we integrated that into our curriculum.”

Outcome | Looking to the Future of Coventry University Group’s Cloud Computing Program

Looking ahead, Coventry University Group plans to expand its bachelor of science degree in cloud computing courses to its campuses in London and Wroclaw. “The ability to have hands-on experience with AWS services—the same ones that companies use in the real world—is invaluable,” said Tomasz, a student of the cloud computing course. “Once we join the workforce, we can apply our skill sets and hit the ground running.”

About Coventry University Group

Coventry University Group is based in the United Kingdom with more than 30,000 students and more than 200 undergraduate and postgraduate degrees across its schools, faculties, and campuses.

" Create high-quality images with Stable Diffusion models and deploy them cost-efficiently with Amazon SageMaker _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Create high-quality images with Stable Diffusion models and deploy them cost-efficiently with Amazon SageMaker
by Simon Zamarin, Vikram Elango, Joao Moura, and Saurabh Trikande | on 26 MAY 2023 | in Amazon Machine Learning, Amazon SageMaker, Artificial Intelligence, Expert (400), Technical How-to

Text-to-image generation is a task in which a machine learning (ML) model generates an image from a textual description. The goal is to generate an image that closely matches the description, capturing the details and nuances of the text. This task is challenging because it requires the model to understand the semantics and syntax of the text and to generate photorealistic images. There are many practical applications of text-to-image generation in AI photography, concept art, building architecture, fashion, video games, graphic design, and much more.

Stable Diffusion is a text-to-image model that empowers you to create high-quality images within seconds. When real-time interaction with this type of model is the goal, ensuring a smooth user experience depends on the use of accelerated hardware for inference, such as GPUs or AWS Inferentia2, Amazon’s own ML inference accelerator. The steep costs involved in using GPUs typically require optimizing the utilization of the underlying compute, even more so when you need to deploy different architectures or personalized (fine-tuned) models. Amazon SageMaker multi-model endpoints (MMEs) help you address this problem by helping you scale thousands of models into one endpoint. By using a shared serving container, you can host multiple models in a cost-effective, scalable manner within the same endpoint, and even on the same GPU.

In this post, you will learn about Stable Diffusion model architectures, different types of Stable Diffusion models, and techniques to enhance image quality. We also show you how to deploy Stable Diffusion models cost-effectively using SageMaker MMEs and NVIDIA Triton Inference Server.

[Example image prompts from the post: “portrait of a cute bernese dog, art by elke Vogelsang, 8k ultra realistic, trending on artstation, 4 k”; “architecture design of living room, 8 k ultra-realistic, 4 k, hyperrealistic, focused, extreme details”; “New York skyline at night, 8k, long shot photography, unreal engine 5, cinematic, masterpiece”]

Stable Diffusion architecture

Stable Diffusion is an open-source text-to-image model that you can use to create images of different styles and content simply by providing a text prompt. In the context of text-to-image generation, a diffusion model is a generative model that you can use to generate high-quality images from textual descriptions; diffusion models can capture the complex dependencies between the input and output modalities, text and images.

[Diagram: high-level architecture of a Stable Diffusion model.] It consists of the following key elements:

Text encoder – CLIP is a transformer-based text encoder model that takes the input prompt text and converts it into token embeddings that represent each word in the text.
CLIP is trained on a dataset of images and their captions and combines an image encoder with a text encoder. U-Net – A U-Net model takes token embeddings from CLIP along with an array of noisy inputs and produces a denoised output. This happens through a series of iterative steps, where each step processes an input latent tensor and produces a new latent space tensor that better represents the input text. Autoencoder decoder – This model creates the final image. It takes the final denoised latent output from the U-Net model and converts it into an image that represents the text input.

Types of Stable Diffusion models

In this post, we explore the following pre-trained Stable Diffusion models by Stability AI from the Hugging Face model hub.

stable-diffusion-2-1-base
Use this model to generate images based on a text prompt. This base version of the model was trained on a subset of the large-scale LAION-5B dataset, mainly with English captions. We use StableDiffusionPipeline from the diffusers library to generate images from text prompts. This model can create images of dimension 512 x 512. It uses the following parameters:
prompt – A prompt can be a word, phrase, sentence, or paragraph.
negative_prompt – You can also pass a negative prompt to exclude specified elements from the image generation process and to enhance the quality of the generated images.
guidance_scale – A higher guidance scale results in an image more closely related to the prompt, at the expense of image quality. If specified, it must be a float.
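To make these parameters concrete, here is a minimal, illustrative sketch (not taken from the post's repo) of local generation with the diffusers StableDiffusionPipeline; the prompt strings are examples and a CUDA-capable GPU is assumed:

import torch
from diffusers import StableDiffusionPipeline

# Load the base text-to-image pipeline in half precision (assumes a CUDA GPU).
pipe = StableDiffusionPipeline.from_pretrained(
    'stabilityai/stable-diffusion-2-1-base', torch_dtype=torch.float16
).to('cuda')

# prompt, negative_prompt, and guidance_scale map to the parameters above.
image = pipe(
    prompt='New York skyline at night, long shot photography, cinematic',
    negative_prompt='blur, low detail, low quality',
    guidance_scale=7.5,
).images[0]
image.save('skyline.png')  # a 512 x 512 PIL image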
stable-diffusion-2-depth
This model is used to generate new images from existing ones while preserving the shape and depth of the objects in the original image. This stable-diffusion-2-depth model is fine-tuned from stable-diffusion-2-base with an extra input channel to process the (relative) depth prediction. We use StableDiffusionDepth2ImgPipeline from the diffusers library to load the pipeline and generate depth images. The following are the additional parameters specific to the depth model:
image – The initial image to condition the generation of new images.
num_inference_steps (optional) – The number of denoising steps. More denoising steps usually lead to a higher-quality image at the expense of slower inference. This parameter is modulated by strength.
strength (optional) – Conceptually, this indicates how much to transform the reference image. The value must be between 0–1. image is used as a starting point, adding more noise to it the larger the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, the added noise is at its maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores image. For more details, refer to the following code.

stable-diffusion-2-inpainting
You can use this model for AI image restoration use cases. You can also use it to create novel designs and images from the prompts and additional arguments. This model is also derived from the base model and has a mask generation strategy. It specifies the mask of the original image to represent segments to be changed and segments to leave unchanged. We use StableDiffusionInpaintPipeline from the diffusers library to apply inpainting changes to the original image. The following additional parameter is specific to the inpainting model:
mask_image – An image where the blacked-out portion remains unchanged during image generation and the white portion is replaced.

stable-diffusion-x4-upscaler
This model is also derived from the base model, additionally trained on the 10M subset of LAION containing 2048 x 2048 images. As the name implies, it can be used to upscale lower-resolution images to higher resolutions.

Use case overview

For this post, we deploy an AI image service with multiple capabilities, including generating novel images from text, changing the styles of existing images, removing unwanted objects from images, and upscaling low-resolution images to higher resolutions. Using several variations of Stable Diffusion models, you can address all of these use cases within a single SageMaker endpoint. This means that you'll need to host a large number of models in a performant, scalable, and cost-efficient way. In this post, we show how to deploy multiple Stable Diffusion models cost-effectively using SageMaker MMEs and NVIDIA Triton Inference Server. You will learn about the implementation details, optimization techniques, and best practices to work with text-to-image models. The following table summarizes the Stable Diffusion models that we deploy to a SageMaker MME.

Model Name                                  Model Size in GB
stabilityai/stable-diffusion-2-1-base       2.5
stabilityai/stable-diffusion-2-depth        2.7
stabilityai/stable-diffusion-2-inpainting   2.5
stabilityai/stable-diffusion-x4-upscaler    7

Solution overview

The following steps are involved in deploying Stable Diffusion models to SageMaker MMEs:
1. Use the Hugging Face hub to download the Stable Diffusion models to a local directory. This downloads the scheduler, text_encoder, tokenizer, unet, and vae for each Stable Diffusion model into its corresponding local directory. We use the revision=""fp16"" version of the models.
2. Set up the NVIDIA Triton model repository, model configurations, and model serving logic model.py. Triton uses these artifacts to serve predictions.
3. Package the conda environment with the additional dependencies, and package the model repository to be deployed to the SageMaker MME.
4. Package the model artifacts in an NVIDIA Triton-specific format and upload model.tar.gz to Amazon Simple Storage Service (Amazon S3). The models will be used for generating images.
5. Configure a SageMaker model and endpoint configuration, and deploy the SageMaker MME.
6. Run inference and send prompts to the SageMaker endpoint to generate images using the Stable Diffusion models. We specify the TargetModel variable and invoke different Stable Diffusion models to compare the results visually.

We have published the code to implement this solution architecture in the GitHub repo. Follow the README instructions to get started.

Serve models with an NVIDIA Triton Inference Server Python backend

We use a Triton Python backend to deploy the Stable Diffusion pipeline models to a SageMaker MME. The Python backend lets you serve models written in Python by Triton Inference Server. To use the Python backend, you need to create a Python file model.py that has the following structure:
"""""" def auto_complete_config(auto_complete_model_config): def initialize(self, args): def execute(self, requests): def finalize(self): Every Python backend can implement four main functions in the TritonPythonModel class: auto_complete_config , initialize , execute , and finalize . initialize is called when the model is being loaded. Implementing initialize is optional. initialize allows you to do any necessary initializations before running inference. In the initialize function, we create a pipeline and load the pipelines using from_pretrained checkpoints. We configure schedulers from the pipeline scheduler config pipe.scheduler.config . Finally, we specify xformers optimizations to enable the xformer memory efficient parameter enable_xformers_memory_efficient_attention . We provide more details on xformers later in this post. You can refer to model.py of each model to understand the different pipeline details. This file can be found in the model repository. The execute function is called whenever an inference request is made. Every Python model must implement the execute function. In the execute function, you are given a list of InferenceRequest objects. We pass the input text prompt to the pipeline to get an image from the model. Images are decoded and the generated image is returned from this function call. We get the input tensor from the name defined in the model configuration config.pbtxt file. From the inference request, we get prompt , negative_prompt , and gen_args , and decode them. We pass all the arguments to the model pipeline object. Encode the image to return the generated image predictions. You can refer to the config.pbtxt file of each model to understand the different pipeline details. This file can be found in the model repository. Finally, we wrap the generated image in InferenceResponse and return the response. Implementing finalize is optional. This function allows you to do any cleanups necessary before the model is unloaded from Triton Inference Server. When working with the Python backend, it’s the user’s responsibility to ensure that the inputs are processed in a batched manner and that responses are sent back accordingly. To achieve this, we recommend following these steps: Loop through all requests in the requests object to form a batched_input . Run inference on the batched_input . Split the results into multiple InferenceResponse objects and concatenate them as the responses. Refer to the Triton Python backend documentation or Host ML models on Amazon SageMaker using Triton: Python backend for more details. NVIDIA Triton model repository and configuration The model repository contains the model serving script, model artifacts and tokenizer artifacts, a packaged conda environment (with dependencies needed for inference), the Triton config file, and the Python script used for inference. The latter is mandatory when you use the Python backend, and you should use the Python file model.py . 
Let's explore the configuration file of the inpaint Stable Diffusion model and understand the different options specified:

name: ""sd_inpaint""
backend: ""python""
max_batch_size: 8
input [
  { name: ""prompt""          data_type: TYPE_STRING dims: [ -1 ] },
  { name: ""negative_prompt"" data_type: TYPE_STRING dims: [ -1 ] optional: true },
  { name: ""image""           data_type: TYPE_STRING dims: [ -1 ] },
  { name: ""mask_image""      data_type: TYPE_STRING dims: [ -1 ] },
  { name: ""gen_args""        data_type: TYPE_STRING dims: [ -1 ] optional: true }
]
output [
  { name: ""generated_image"" data_type: TYPE_STRING dims: [ -1 ] }
]
instance_group [
  { kind: KIND_GPU }
]
parameters: {
  key: ""EXECUTION_ENV_PATH"",
  value: { string_value: ""/tmp/conda/sd_env.tar.gz"" }
}

The following list explains the various parameters and values:
name – It's not required to include the model configuration name property. In the event that the configuration doesn't specify the model's name, it's presumed to be identical to the name of the model repository directory where the model is stored. However, if a name is provided, it must match the name of the model repository directory where the model is stored. sd_inpaint is the config property name.
backend – This specifies the Triton framework used to serve model predictions. This is a mandatory parameter. We specify python, because we'll be using the Triton Python backend to host the Stable Diffusion models.
max_batch_size – This indicates the maximum batch size that the model supports for the types of batching that can be exploited by Triton.
input → prompt – Text prompt of type string. Specify -1 to accept dynamic tensor shape.
input → negative_prompt – Negative text prompt of type string. Specify -1 to accept dynamic tensor shape.
input → mask_image – Base64-encoded mask image of type string. Specify -1 to accept dynamic tensor shape.
input → image – Base64-encoded image of type string. Specify -1 to accept dynamic tensor shape.
input → gen_args – JSON-encoded additional arguments of type string. Specify -1 to accept dynamic tensor shape.
output → generated_image – Generated image of type string. Specify -1 to accept dynamic tensor shape.
instance_group – You can use this setting to place multiple run instances of a model on every GPU or on only certain GPUs. We specify KIND_GPU to make copies of the model on available GPUs.
parameters – We set the conda environment path in EXECUTION_ENV_PATH.

For details about the model repository and configurations of other Stable Diffusion models, refer to the code in the GitHub repo. Each directory contains artifacts for the specific Stable Diffusion models.

Package a conda environment and extend the SageMaker Triton container

SageMaker NVIDIA Triton container images don't contain libraries like transformers, accelerate, and diffusers needed to deploy and serve Stable Diffusion models. However, Triton allows you to bring additional dependencies using conda-pack.
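For reference (the exact packing command is not shown in this post), conda-pack snapshots an existing conda environment into a relocatable tarball, which is what EXECUTION_ENV_PATH points to. Packing the mme_env environment created in the next step might look like this:

# Assumed invocation: pack the environment into the artifact Triton unpacks.
!conda pack -n mme_env -o sd_env.tar.gz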
Let's start by creating the conda environment with the necessary dependencies outlined in the environment.yml file, and creating the tar model artifact sd_env.tar.gz containing the conda environment with the dependencies installed in it. Create the following YML file and build the environment, then copy the conda-pack artifact to the local directory from which it will be uploaded to Amazon S3. Note that we will be uploading the conda artifact as one of the models in the MME and invoking this model to set up the conda environment on the SageMaker hosting ML instance.

%%writefile environment.yml
name: mme_env
dependencies:
  - python=3.8
  - pip
  - pip:
      - numpy
      - torch --extra-index-url https://download.pytorch.org/whl/cu118
      - accelerate
      - transformers
      - diffusers
      - xformers
      - conda-pack

!conda env create -f environment.yml --force

Upload model artifacts to Amazon S3

SageMaker expects the .tar.gz file containing each Triton model repository to be hosted on the multi-model endpoint. Therefore, we create a tar artifact with content from the Triton model repository. We can use this S3 bucket to host thousands of model artifacts, and the SageMaker MME will use models from this location to dynamically load and serve a large number of models. We store all the Stable Diffusion models in this Amazon S3 location.

Deploy the SageMaker MME

In this section, we walk through the steps to deploy the SageMaker MME by defining the container specification, SageMaker model, and endpoint configuration.

Define the serving container

In the container definition, define the ModelDataUrl to specify the S3 directory that contains all the models that the SageMaker MME will use to load and serve predictions. Set Mode to MultiModel to indicate that SageMaker will create the endpoint with the MME container specifications. We set the container with an image that supports deploying MMEs with GPU. See Supported algorithms, frameworks, and instances for more details. We see the model artifacts in the following Amazon S3 ModelDataUrl location:

container = {""Image"": mme_triton_image_uri, ""ModelDataUrl"": model_data_url, ""Mode"": ""MultiModel""}

Create an MME object

We use the SageMaker Boto3 client to create the model using the create_model API. We pass the container definition to the create model API along with ModelName and ExecutionRoleArn:

create_model_response = sm_client.create_model(
    ModelName=sm_model_name, ExecutionRoleArn=role, PrimaryContainer=container
)

Define configurations for the MME

Create an MME configuration using the create_endpoint_config Boto3 API. Specify an accelerated GPU computing instance in InstanceType (we use the same instance type that we are using to host our SageMaker notebook). We recommend configuring your endpoints with at least two instances for real-life use cases. This allows SageMaker to provide a highly available set of predictions across multiple Availability Zones for the models.

create_endpoint_config_response = sm_client.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            ""InstanceType"": instance_type,
            ""InitialVariantWeight"": 1,
            ""InitialInstanceCount"": 1,
            ""ModelName"": sm_model_name,
            ""VariantName"": ""AllTraffic"",
        }
    ],
)

Create an MME

Use the preceding endpoint configuration to create a new SageMaker endpoint and wait for the deployment to finish:

create_endpoint_response = sm_client.create_endpoint(
    EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)

The status will change to InService when the deployment is successful.
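If you are scripting the deployment, you can block until the endpoint leaves the Creating state. A small convenience sketch (not from the post) using the Boto3 waiter:

# Wait until the endpoint is InService before sending traffic.
waiter = sm_client.get_waiter('endpoint_in_service')
waiter.wait(EndpointName=endpoint_name)
status = sm_client.describe_endpoint(EndpointName=endpoint_name)['EndpointStatus']
print(status)  # 'InService' on success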
Generate images using different versions of Stable Diffusion models

Let's start by invoking the base model with a prompt and getting the generated image. We pass the inputs to the base model with prompt, negative_prompt, and gen_args as a dictionary. We set the data type and shape of each input item in the dictionary and pass it as input to the model.

inputs = dict(
    prompt=""Infinity pool on top of a high rise overlooking Central Park"",
    negative_prompt=""blur, low detail, low quality"",
    gen_args=json.dumps(dict(num_inference_steps=50, guidance_scale=8)),
)
payload = {
    ""inputs"": [
        {""name"": name, ""shape"": [1, 1], ""datatype"": ""BYTES"", ""data"": [data]}
        for name, data in inputs.items()
    ]
}
response = runtime_sm_client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType=""application/octet-stream"",
    Body=json.dumps(payload),
    TargetModel=""sd_base.tar.gz"",
)
output = json.loads(response[""Body""].read().decode(""utf8""))[""outputs""]
decode_image(output[0][""data""][0])

Prompt: Infinity pool on top of a high rise overlooking Central Park

Working with this image, we can modify it with the versatile Stable Diffusion depth model. For example, we can change the style of the image to an oil painting, or change the setting from Central Park to Yellowstone National Park, simply by passing the original image along with a prompt describing the changes we would like to see. We invoke the depth model by specifying sd_depth.tar.gz in the TargetModel of the invoke_endpoint function call. In the outputs, notice how the orientation of the original image is preserved, but for one example, the NYC buildings have been transformed into rock formations of the same shape.

inputs = dict(
    prompt=""highly detailed oil painting of an infinity pool overlooking central park"",
    image=image,
    gen_args=json.dumps(dict(num_inference_steps=50, strength=0.9)),
)
payload = {
    ""inputs"": [
        {""name"": name, ""shape"": [1, 1], ""datatype"": ""BYTES"", ""data"": [data]}
        for name, data in inputs.items()
    ]
}
response = runtime_sm_client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType=""application/octet-stream"",
    Body=json.dumps(payload),
    TargetModel=""sd_depth.tar.gz"",
)
output = json.loads(response[""Body""].read().decode(""utf8""))[""outputs""]
print(""original image"")
display(original_image)
print(""generated image"")
display(decode_image(output[0][""data""][0]))

[Output images: Original image | Oil painting | Yellowstone Park]
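These snippets shuttle images to and from the endpoint as base64 strings via encode_image and decode_image, which are not defined in this excerpt. A plausible minimal implementation is sketched below (an assumption; check the repo for the actual helpers):

import base64
import io
from PIL import Image

def encode_image(img):
    # Serialize a PIL image to base64-encoded PNG bytes.
    buf = io.BytesIO()
    img.save(buf, format='PNG')
    return base64.b64encode(buf.getvalue())

def decode_image(b64_str):
    # Inverse of encode_image: base64 string back to a PIL image.
    return Image.open(io.BytesIO(base64.b64decode(b64_str)))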
Another useful model is Stable Diffusion inpainting, which we can use to remove certain parts of the image. Let's say you want to remove the tree in the following example image. We can do so by invoking the inpaint model sd_inpaint.tar.gz. To remove the tree, we need to pass a mask_image, which indicates which regions of the image should be retained and which should be filled in. The black pixel portion of the mask image indicates the regions that should remain unchanged, and the white pixels indicate what should be replaced.

image = encode_image(original_image).decode(""utf8"")
mask_image = encode_image(Image.open(""sample_images/bertrand-gabioud-mask.png"")).decode(""utf8"")
inputs = dict(
    prompt=""building, facade, paint, windows"",
    image=image,
    mask_image=mask_image,
    negative_prompt=""tree, obstruction, sky, clouds"",
    gen_args=json.dumps(dict(num_inference_steps=50, guidance_scale=10)),
)
payload = {
    ""inputs"": [
        {""name"": name, ""shape"": [1, 1], ""datatype"": ""BYTES"", ""data"": [data]}
        for name, data in inputs.items()
    ]
}
response = runtime_sm_client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType=""application/octet-stream"",
    Body=json.dumps(payload),
    TargetModel=""sd_inpaint.tar.gz"",
)
output = json.loads(response[""Body""].read().decode(""utf8""))[""outputs""]
decode_image(output[0][""data""][0])

[Output images: Original image | Mask image | Inpainted image]

In our final example, we downsize the original image that was generated earlier from its 512 x 512 resolution to 128 x 128. We then invoke the Stable Diffusion upscaler model to upscale the image back to 512 x 512. We use the same prompt to upscale the image as what we used to generate the initial image. While not necessary, providing a prompt that describes the image helps guide the upscaling process and should lead to better results.

low_res_image = output_image.resize((128, 128))
inputs = dict(
    prompt=""Infinity pool on top of a high rise overlooking Central Park"",
    image=encode_image(low_res_image).decode(""utf8""),
)
payload = {
    ""inputs"": [
        {""name"": name, ""shape"": [1, 1], ""datatype"": ""BYTES"", ""data"": [data]}
        for name, data in inputs.items()
    ]
}
response = runtime_sm_client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType=""application/octet-stream"",
    Body=json.dumps(payload),
    TargetModel=""sd_upscale.tar.gz"",
)
output = json.loads(response[""Body""].read().decode(""utf8""))[""outputs""]
upscaled_image = decode_image(output[0][""data""][0])

[Output images: Low-resolution image | Upscaled image]

Although the upscaled image is not as detailed as the original, it's a marked improvement over the low-resolution one.

Optimize for memory and speed

The xformers library is a way to speed up image generation. This optimization is only available for NVIDIA GPUs. It speeds up image generation and lowers VRAM usage. We have used the xformers library for memory-efficient attention and speed. When the enable_xformers_memory_efficient_attention option is enabled, you should observe lower GPU memory usage and a potential speedup at inference time.

Clean up

Follow the instructions in the clean-up section of the notebook to delete the resources provisioned as part of this blog to avoid unnecessary charges. Refer to Amazon SageMaker Pricing for details on the cost of the inference instances.

Conclusion

In this post, we discussed Stable Diffusion models and how you can deploy different versions of Stable Diffusion models cost-effectively using SageMaker multi-model endpoints. You can use this approach to build a creator image generation and editing tool. Check out the code samples in the GitHub repo to get started, and let us know about the cool generative AI tool that you build.

About the Authors

Simon Zamarin is an AI/ML Solutions Architect whose main focus is helping customers extract value from their data assets. In his spare time, Simon enjoys spending time with family, reading sci-fi, and working on various DIY house projects.

Vikram Elango is a Sr. AI/ML Specialist Solutions Architect at AWS, based in Virginia, US.
He is currently focused on generative AI, LLMs, prompt engineering, large model inference optimization, and scaling ML across enterprises. Vikram helps financial and insurance industry customers with design and architecture to build and deploy ML applications at scale. In his spare time, he enjoys traveling, hiking, cooking, and camping with his family.

João Moura is an AI/ML Specialist Solutions Architect at AWS, based in Spain. He helps customers with deep learning model training and inference optimization, and more broadly building large-scale ML platforms on AWS. He is also an active proponent of ML-specialized hardware and low-code ML solutions.

Saurabh Trikande is a Senior Product Manager for Amazon SageMaker Inference. He is passionate about working with customers and is motivated by the goal of democratizing machine learning. He focuses on core challenges related to deploying complex ML applications, multi-tenant ML models, cost optimizations, and making deployment of deep learning models more accessible. In his spare time, Saurabh enjoys hiking, learning about innovative technologies, following TechCrunch, and spending time with his family." Creating Air Taxi Simulations Using Amazon EC2 with Wisk Aero _ Wisk Aero Case Study _ AWS.txt,"10–20% improvement in job runtime

Converge, alongside the AWS HPC team, created a pilot environment on AWS for the Wisk Aero team. The fully funded environment helped Wisk Aero benchmark the performance of Amazon EC2 Hpc6a Instances—HPC instances powered by 3rd-generation AMD EPYC processors—and run the necessary software to simulate a smooth transition to AWS. In addition to meeting technical and performance requirements, Wisk Aero worked with Converge to make sure the financial model for using AWS was also part of the pilot deliverables. Wisk Aero can benefit from cloud elasticity to help drive better economics, instead of expanding its physical footprint in its colocated data center. Wisk Aero's autonomous eVTOL aircraft is the first-ever candidate for type certification by the Federal Aviation Administration and aims to make it possible for passengers to skip traffic and get to their destinations faster. By migrating its HPC to AWS, the company can run simulations more efficiently and at a lower cost. "Using AWS, we quickly scaled and added the needed on-demand compute power for the CFD team, compared with the months required and significant capital to build and scale an on-premises HPC cluster," says Colin Haubrich, head of IT at Wisk Aero. Wisk Aero uses Amazon FSx for Lustre—fully managed shared storage built on the world's most popular high-performance file system—for high-performance, scalable storage for HPC compute workloads. The company runs these workloads on AWS GovCloud (US), designed to host sensitive data and regulated workloads and address the most stringent US government security and compliance requirements. AWS GovCloud satisfies the compliance requirements for the software from NASA that Wisk Aero uses. In addition, test models on AWS GovCloud showed a 10–20 percent improvement in runtime compared with the on-premises solution.
Achieves high-performance, scalable storage. Satisfies NASA software requirements.

Opportunity | Using Amazon EC2 to Improve Job Runtime for Wisk Aero

Wisk Aero has developed the first-ever autonomous electrical vertical take-off and landing (eVTOL) aircraft and is using Amazon Web Services (AWS) to build high-performance computing (HPC) clusters to run simulations. The company relies on HPC to run computationally intensive and complex simulations, each of which uses thousands of CPU cores. Purchasing on-premises computers for its HPC workload presented challenges, such as cost and managing enough CPU cores for peak runs. Wisk Aero migrated its HPC clusters to AWS to improve job runtime, achieve scalable storage, and drive improved economics. AWS GovCloud (US) gives government customers and their partners the flexibility to architect secure cloud solutions that comply with the FedRAMP High baseline. Amazon FSx for Lustre provides fully managed shared storage with the scalability and performance of the popular Lustre file system. Wisk Aero is an advanced air mobility company dedicated to delivering safe, everyday flight for everyone. The company is backed by the Boeing Company and Kitty Hawk Corporation. The Converge client executive supporting Wisk Aero's on-premises infrastructure introduced Converge's Cloud Platforms team and its AWS offerings to the engineering manager of core infrastructure at Wisk Aero. Converge shared a similar use case in which Converge—using its AWS Competency Program, which highlights AWS technical expertise and specialization—helped that client successfully migrate its HPC workload to AWS.

Outcome | Creating Innovative Technologies Using AWS

The use of CFD simulations gives engineers a clear understanding of the aircraft's expected performance under various loading and boundary conditions. Because of the novel design of Wisk Aero's sixth-generation four-seat self-flying eVTOL, it is not possible to use previous simulations or design models. Wisk Aero engineers rely on HPC to run these computationally intensive and complex CFD simulations, each using thousands of CPU cores. To purchase on-premises computers for these HPC workloads, Wisk Aero would need to spend more on hardware that might go entirely unused when not running peak jobs. Wisk Aero also had to address the increased operational overhead of managing physical hardware as the size of the on-premises cluster increased. To solve these challenges, Wisk Aero turned to the AWS HPC team and Converge Technology Solutions (Converge), an AWS Advanced Consulting Partner, to assist in migrating the company's HPC simulations to Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload. After the successful pilot, Wisk Aero chose to use AWS for another round of CFD simulations for its eVTOL aircraft.
Now, Wisk Aero can build HPC clusters on the fly and achieve a significant performance increase over running simulations on premises. It uses purpose-built Amazon EC2 Hpc6a Instances to achieve the desired scalability by accessing CPU architectures alongside AWS ParallelCluster, which helps users quickly build HPC compute environments on AWS. Learn how Wisk Aero in the aerospace industry built HPC clusters and improved performance using Amazon EC2 (2022). Amazon EC2 Hpc6a Instances offer the best price performance for compute-intensive high-performance computing (HPC) workloads in Amazon EC2. Drives improved economics.

About Wisk Aero

Wisk Aero is an aviation company focused on developing eVTOL aircraft and revolutionizing mobility through quiet, fast, and clean air travel. The company has over 10 years of experience, has locations around the world, and is backed by the Boeing Company and Kitty Hawk Corporation. To study the in-flight airflow, Wisk Aero engineers perform computational fluid dynamics (CFD) simulations using in-house and NASA CFD applications, such as OVERFLOW and FUN3D. Wisk Aero focuses more on using CFD than traditional aircraft builders because CFD supports rapid design iteration as the team explores different aircraft designs and architectures, especially in the early phase of the design process.

Solution | Choosing AWS for Agility, Elasticity, Storage, and Security

AWS ParallelCluster is an open-source cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters on AWS." Creating an App for 12000 Game Show Viewers Using Amazon CloudFront with TUI _ TUI Case Study _ AWS.txt,"Creating an App for 12,000 Game Show Viewers Using Amazon CloudFront with TUI (2023)

90% faster development time

The company decided to use AWS because of the increased agility that it could achieve using services such as Amazon CloudFront. "In the past, this sort of request would have required considerable upfront planning, design, and development work," says Timmermans. Using AWS, TUI built its voting application quickly and cost effectively, without having to worry about resource scaling. The development team at TUI began working on the voting app just a few weeks before the season finale of De Mol. Within a matter of hours, the team had created a working prototype of the application: a static website with an embedded iFrame element containing the interactive game content. To host the site, TUI used Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. "We opted for a static website hosted on Amazon S3 for the simplicity of the solution," says Jeroen Daemers, cloud architect at TUI. "Fronting our Amazon S3 bucket with Amazon CloudFront offered a scalable, secure delivery method for the website."
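For readers who want to try the same pattern, the following is a minimal, illustrative boto3 sketch of hosting a static site in S3 and fronting it with CloudFront; the bucket name and distribution settings are assumptions for illustration, not TUI's actual configuration:

import boto3

s3 = boto3.client('s3')
cf = boto3.client('cloudfront')

# Host the static site in S3 (bucket name is hypothetical; outside
# us-east-1 a LocationConstraint is also required).
s3.create_bucket(Bucket='demol-voting-app')
s3.put_object(Bucket='demol-voting-app', Key='index.html',
              Body=open('index.html', 'rb'), ContentType='text/html')

# Front the bucket with a CloudFront distribution for scalable delivery.
cf.create_distribution(DistributionConfig={
    'CallerReference': 'demol-voting-app-2023',
    'Comment': 'Static voting app',
    'Enabled': True,
    'Origins': {'Quantity': 1, 'Items': [{
        'Id': 's3-origin',
        'DomainName': 'demol-voting-app.s3.amazonaws.com',
        'S3OriginConfig': {'OriginAccessIdentity': ''},
    }]},
    'DefaultCacheBehavior': {
        'TargetOriginId': 's3-origin',
        'ViewerProtocolPolicy': 'redirect-to-https',
        'TrustedSigners': {'Enabled': False, 'Quantity': 0},
        'ForwardedValues': {'QueryString': False,
                            'Cookies': {'Forward': 'none'}},
        'MinTTL': 0,
    },
})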
Outcome | Accelerating the Journey to the Cloud

With roots dating back to the 1800s, TUI is one of the world's leading travel companies and has served 27 million customers and counting. Through its 1,600 travel agencies across Europe, its line of hotels and cruise ships, and its fleet of planes, TUI helps travelers enjoy experiences in 180 destinations around the world. Sponsoring the popular game show De Mol would be an exciting way for the organization to increase brand awareness. However, when it came to building a custom, branded voting application within a tight timeframe, TUI was challenged by the limitations of its on-premises hardware, which was managed by regional teams. "We needed to build and host an application that would be used by 12,000 people for one night only, all at the same time, during each commercial break," says Peter Timmermans, head of technology at TUI. "When you're building for that sort of scenario using fixed, on-premises infrastructure, you have to carefully manage the limited resources that you have." TUI would be placing its logo prominently within the voting application interface, so creating a great audience member experience was of paramount importance. For instance, the company wanted to give audience members the opportunity to share their game experiences on social media platforms and used Amazon CloudFront to achieve the elasticity necessary to handle the increased data load. "With our old, fixed infrastructure, that scenario would have been potentially concerning because we might not have had the resources to support additional load," says Daemers. "We knew that Amazon CloudFront could handle any additional load and that the outcome for the business would be positive, with more individuals engaging with the brand." Reduced cost of development.

Opportunity | Using Amazon CloudFront to Build a Voting Application for TUI

TUI Group (TUI), a leading leisure, travel, and tourism company, was seeking a way to maximize its brand exposure by creating a voting application for use on the popular Belgian television show De Mol (The Mole). The format of the game show, which pits contestants against a secret saboteur in the pursuit of cash prizes, encourages audience members to participate by voting on which contestant they believe to be the mole. By developing a branded voting application, TUI—a sponsor of the game show—would be able to put its logo in front of an in-studio audience of 12,000 people. The challenge was completing the app in just 2 weeks, in time for the show's season finale. TUI had historically used on-premises hardware and didn't have the agility needed to respond quickly to short-term business requirements, such as building the voting app.

About TUI

TUI is a global tourism group consisting of tour operators, 1,600 travel agencies and online portals, 5 airlines, over 400 hotels, 16 cruise liners, and incoming agencies in all major holiday destinations around the world. Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer experience.
Solution | Creating a Positive User Experience for 12,000 Audience Members

Using AWS to build its voting application quickly and cost effectively, with the elasticity necessary to support a high level of user interaction, helped TUI to demonstrate the value of increased agility. The company has used its learnings to accelerate its cloud migration for other systems, including its reservation and booking infrastructure. Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Achieved scalability and elasticity. Learn how TUI in the travel industry used AWS to build a game show voting application quickly and cost effectively. TUI completed its voting application on time, and the app was successfully used by the 12,000 audience members in attendance at the series finale of De Mol. The company delivered a positive experience at an exciting moment for the show's viewers, leading to positive brand impressions. "Due to the one-night-only nature of the application, we would have historically struggled to justify the expense of this project," says Timmermans. "Using Amazon S3 and Amazon CloudFront, we could build the app in hours, at a fraction of the cost of any on-premises solution." "The significance of this project is how much faster we were able to respond to a business requirement," says Timmermans. "Building this application on AWS, with the solution that we opted for, took us roughly one-tenth of the time that it would have taken with our legacy on-premises infrastructure." TUI had been in the process of migrating its backend travel bookings infrastructure to the cloud for increased agility and decided to use Amazon Web Services (AWS)—namely, Amazon CloudFront, a content delivery network service built for high performance, security, and developer convenience—to create its interactive voting app. TUI was able to work quickly to build and deliver its solution in time for the season finale of De Mol, making the most of its opportunity to drive brand awareness. 12,000 audience members used TUI's voting app." Creating an Optimized Solution for Smart Buildings Using Amazon EC2 G5g Instances with Mircoms OpenGN _ Case Study _ AWS.txt,"30–40% reduction in infrastructure costs

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 600 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload. As Mircom's move to AWS progresses, the company scales while managing costs, gaining cost-structure flexibility, improving monitoring capability, and achieving reliable performance.

Outcome | Optimizing OpenGN's Unified Pane of Glass for Price and Performance

Mircom developed OpenGN as a single-site fire alarm control management system providing monitoring of its regulatory agency-approved fire and life safety products. OpenGN displays various building experiences (single, complex, and campus) in both 2D and 3D representations. In addition, OpenGN graphically displays fire and life safety events from corresponding fire and life safety products, such as pull stations and smoke detectors.
Mircom later expanded OpenGN to include other mission-critical building technologies from its product line, including building automation, communication and security, and smart technologies. As a result, OpenGN evolved into a single-site digital twin and Internet of Things software platform for on-premises building experiences. Amazon EC2 G5g Instances are powered by AWS Graviton2 processors and feature NVIDIA T4G Tensor Core GPUs to provide the best price performance in Amazon EC2 for graphics workloads such as Android game streaming. 4 to 10 times increased building monitoring. Over 90% reduction in third-party licensing costs. To mitigate the costs associated with migrating from an onsite to a cloud-hosted solution, Mircom moved from licensed to open-source software, which it could do because of the flexibility of AWS services. This shift helped the company reduce its licensing costs and prevented it from needing to repurchase licenses for cloud use. The essential open-source software used by Mircom included Ubuntu Server 18.04, an operating system; MATE Desktop Environment; MySQL Community Server 8.0, a relational database management system; and OpenVPN Access Server, a virtual private network system.

Opportunity | Using AWS Services to Modernize OpenGN's Graphics-Intensive Single Pane of Glass

OpenGN's graphics-intensive workloads mandate a dedicated graphics card to accommodate all its customers' building experiences. Although Mircom's on-premises hardware infrastructure could support most of its customers, its largest deployments pushed OpenGN's performance limits. The hardware infrastructure could handle approximately 250 buildings, but some current and future deployments had two to four times that number. Additionally, multiple-site deployments, requiring distributed building experiences, led Mircom to explore the feasibility of migrating its on-premises hardware infrastructure to the cloud, which ultimately increased the company's building monitoring capability by 4 to 10 times.
Mircom can also move to a subscription pricing model, an option that the onsite hardware did not support as seamlessly as the cloud. This flexibility could help Mircom increase revenue while controlling its cost structure. Modernizing OpenGN to the cloud has helped Mircom to monitor mission-critical building technologies, such as fire detection and alarm, building automation, communication and security, and smart technologies, from anywhere in the world. Mircom's multiple-site cloud experience provides opportunities to significantly increase the breadth and depth of its customer base. "The sky's the limit," says Tony Falbo, founder and CEO of Mircom. Learn how Mircom modernized OpenGN's single pane of glass and reduced infrastructure costs 30–40 percent using Amazon EC2 G5g Instances. Mircom also needed a mechanism for viewing its mission-critical building infrastructure. To render a browser that let Mircom monitor continuous connectivity between buildings and the cloud, the company used NICE DCV, a high-performance remote display protocol that provides customers with a way to deliver remote desktops and application streaming from any cloud or data center to any device, over varying network conditions. Using NICE DCV and Amazon EC2, customers can run graphics-intensive applications remotely on Amazon EC2 instances and stream their user interface to simpler client machines, reducing the need for expensive dedicated workstations.

About Mircom

Headquartered in Toronto, Canada, Mircom was founded in 1991 and carries requisite regulatory agency approvals from Underwriters Laboratories (UL/ULC) and Factory Mutual (FM) for all its fire and life safety products. The company is the largest independent fire alarm manufacturer and distributor in North America. Its product line spans fire detection and alarm, communications and security, mass notification, building automation, and smart technologies. Using AWS, Mircom has modernized OpenGN from an on-premises single pane of glass, or single-site building experience, to a cloud-based unified pane of glass, or multiple-site cloud experience. The strong cloud foundation provided by AWS gives Mircom the confidence to continue its application modernization. In the future, Mircom hopes to rearchitect and rebuild OpenGN on a serverless architecture. In the long run, Mircom is better prepared to achieve its company vision, which is "to make safer, smarter, more livable buildings in order to save lives.
Working alongside AWS is helping us accomplish that," says Brian Leung, senior manager of engineering at Mircom. During its search for the right cloud solution provider, Mircom discovered that AWS offered a cost-saving, high-performance solution that worked well for OpenGN's application modernization. In early 2021, Mircom decided to use several AWS services, including Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for graphics-intensive workloads. After testing a few different solutions, Mircom decided to embark on refactoring and replatforming OpenGN with Amazon EC2 G5g Instances. "The can-do attitude from AWS gave us the confidence to move forward with our application modernization," says Leung. AWS Graviton processors are designed by AWS to deliver the best price performance for your cloud workloads running in Amazon EC2.

Solution | Using Amazon EC2 G5g Instances with GPU Acceleration" Dallara Uses HPC on AWS to Off-Load Peak CFD Workloads for Race Car Simulations _ Case Study _ AWS.txt,"On AWS, Dallara found the flexibility and availability it needed. "We get resources when we need them, and we release them when we don't, so we're not wasting the resources or paying for what we don't use," says Serioli. Whereas Dallara couldn't acquire every new release of hardware for its on-premises system, the company can access the latest technology on AWS. "The innovation is immediate and comes from the availability of new instances, which raises new ideas of how we can use the hardware to improve our workflow," says Serioli.

Solution | Launching a Scalable HPC Solution in Less Than 5 Months (2022)

Due to an influx of customer projects in February 2021, Dallara reached 100 percent usage of its HPC capacity on premises. Serioli and the Dallara HPC team were tasked with upgrading the company's HPC infrastructure and outsourcing its management to a cloud provider. "Our first goal was to have a ready-to-use industrial infrastructure that would support our specific applications, huge models, and high demand for HPC," says Serioli. "The second goal was to integrate our workflows into an external environment like the cloud." In April 2021, 2 months after beginning the build, Dallara had created an industrial infrastructure on AWS, united it with its existing workloads, and allocated resources to it. The solution was stable and operating well within 5 months of intensive use. First, Dallara linked its on-premises workloads to AWS using Amazon Virtual Private Cloud (Amazon VPC), which gives the company full control over its virtual networking environment, including Amazon EC2 resource placement, and AWS Virtual Private Network (AWS VPN) solutions that establish secure connections between on-premises networks, remote offices, client devices, and the AWS global network.
Went into production on AWS within 1 month of request. Dallara landed on Amazon Web Services (AWS) for the HPC that it needed. Using AWS, Dallara built an HPC system that met its benchmarks for performance and cost, leading the company to continue designing some of the world's fastest and most aerodynamic vehicles. Dallara not only found the solution to its business-critical issue quickly on AWS but also benefited from its scalability and flexibility. Amazon FSx for Lustre provides fully managed shared storage with the scalability and performance of the popular Lustre file system. Amazon Virtual Private Cloud (Amazon VPC) gives you full control over your virtual networking environment, including resource placement, connectivity, and security. Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload. Key results: 3x increased HPC capacity from on premises, 2x scaled AWS cluster, and support through a 6-month burst in customer demand.

Dallara Uses HPC on AWS to Off-Load Peak CFD Workloads for Race Car Simulations

Dallara sought proofs of concept from various cloud providers, yet AWS was the most responsive and supportive. Within a month of its request, Dallara was in production on AWS and running CFD simulations at scale. "The support from AWS was there every day," says Serioli. "The flexibility and engagement from AWS were key for us." Additionally, Dallara already used software from Ansys, an AWS Partner, for its main CFD solutions, particularly Ansys Fluent, a fluid simulation software. Another reason Dallara chose AWS is that it appreciated the ability to choose the right instance for each workload using Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload. For example, Dallara began using Amazon EC2 C5n Instances, which are designed for compute-intensive workloads and use the fourth generation of custom Nitro card and Elastic Network Adapter device to deliver 100 Gbps of network throughput to a single instance.

About Dallara Automobili

Founded in 1972, Dallara manufactures racing cars for the IndyCar, Indy Lights, Formula 2, Formula 3, and Super Formula Championships. It produces cars for endurance races such as the 24 Hours of Le Mans and for electric car races such as the Formula E. Today, Dallara even develops road cars, drawing interest from luxury car manufacturers. Every vehicle design undergoes rigorous testing in structure, aerodynamics, and vehicle dynamics. For that, Dallara relies on more than 15 simulation and testing tools that require massive amounts of HPC, including ones that assess computational fluid dynamics (CFD). "We use CFD tools because it's mandatory to investigate the flow fields around our cars with all the details needed to achieve our target," says Elisa Serioli, CFD methodology team leader at Dallara.
Opportunity | Encountering a Business-Critical Issue On Premises

With its cloud and on-premises environments connected, Dallara decided to migrate 80 percent of its CFD workflow to the cloud and download the least amount of data possible in order to delegate several tasks of each workflow to the cloud. "We use several software applications that each perform a different task for our complex CFD workflow, and the output of one job is the input for another," says Serioli. The connection between the systems on AWS and on premises facilitates a transparent user experience for Dallara's aerodynamicists, who can choose where to run each task or overall workflow. When a task runs on the cloud, the needed files are copied automatically to Amazon FSx for Lustre, which provides fully managed shared storage with the scalability and performance of the popular Lustre file system. Then an orchestrator runs all the workflows. After every task completes, the data is downloaded to the on-premises solution and shared with aerodynamicists. Using FSx for Lustre, Dallara can scale up its file storage as needed within half an hour without any particular support. On average, Dallara can run 15 complete workflows per day.

Outcome | Meeting High-Demand HPC Needs, Now and in the Future

On AWS, Dallara could quickly put in place the HPC resources required to deliver quality racing cars to its customers during a period of high demand. The company can innovate and update its HPC by selecting the best Amazon EC2 instance for each workload. "In terms of supporting our HPC, the cloud is ready with the instances and infrastructure we need for industrial racing and motor sporting workflows, which is not easy," says Serioli. "It was crucial to let us support our customers and do their projects." Dallara takes advantage of AWS ParallelCluster, an open-source cluster management tool that makes it easy for companies to deploy and manage HPC clusters on AWS. Using it, Dallara can access additional HPC resources immediately, scaling up instances almost instantaneously and adding new instance types in just 1 day. The company increased HPC capacity more than three times from on premises and has scaled the AWS cluster by two times, supporting the company in meeting a 6-month burst in customer demand. "AWS ParallelCluster is a smart, flexible tool," says Serioli. "It helps manage the HPC, so our information technology team is not dedicated to hardware problems. We can scale on more nodes than we thought possible, sometimes scaling to more than 80 nodes." In April 2021, Italian race car manufacturer Dallara Automobili (Dallara) needed more high-performance computing (HPC) for simulation and testing than what was available in its on-premises environment. The company's computational power was over-requested, leading to difficulties meeting the demands of its customers during peak season. As a major provider of commercial racing cars for prestigious championships, Dallara uses HPC to power the tests of its car designs, making HPC fundamental to its operations.
"
Dataminr Achieves up to Nine Times Better Throughput per Dollar Using AWS Inferentia _ Dataminr Case Study _ AWS.txt,"Founded in 2009, Dataminr employs over 850 people across eight global offices. Dataminr’s AI platform detects early signs of high-impact events and emerging risks in near real time, from more than 500,000 publicly available data sources. The company’s alerts help customers know critical information first, mobilize for quick response, and manage crises effectively. Speed and coverage are the key values that Dataminr strives to provide its customers. “We cover many types of events all over the world in many languages, in different formats (images, video, audio, text, sensors, and combinations of all these types) from hundreds of thousands of sources,” says Alex Jaimes, chief scientist and senior vice president of AI at Dataminr. “Optimizing for speed and cost given that scale is absolutely critical for our business.”

Opportunity | Using Amazon EC2 to Run Highly Complex ML and AI Models

In 2021, the company started to experiment with AWS Inferentia to optimize its Amazon EC2 spend while scaling its models. “We built on our early experiments to develop a pattern by which many common model types can be dropped into an optimization workflow,” says Matt Hill, director of AI engineering at Dataminr. “Then, we used AWS Inferentia to produce and benchmark a compiled model so that we could select an optimal way to deploy it.” (A compilation sketch follows this section.)

Dataminr is realizing three distinct business benefits from the project: increased scale, increased speed, and lower costs. Moreover, Dataminr is seeing increased accuracy in cases where AWS Inferentia has facilitated the use of more complex models or covers more data sources, which are vital to effective crisis-response efforts.

Solution | Increasing Data Volume Processing 5x to Enhance Crisis Response Using AWS Inferentia

The first models produced using AWS Inferentia were deployed in spring of 2022, and the implementation process went as smoothly as possible. When there was an issue, Dataminr reached out to AWS Inferentia experts, who provided quick guidance to develop a solution. “We were able to call in an AWS expert to diagnose memory-usage patterns and optimize our approach,” says Hill. The early results were promising. “On one of our early efforts, we increased speed by five times compared to GPU-based instances on a natural-language processing task,” says Hill. “That translated into a nine-times improvement in throughput per dollar spent for our natural-language processing models.” Those initial results inspired Dataminr to move forward with the effort, which is delivering five times increased throughput per dollar or more across all the models that it optimized, including computer vision and natural-language processing.

AWS Services Used

AWS Inferentia accelerators are designed by AWS to deliver high performance at the lowest cost for your deep learning (DL) inference applications.
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 600 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
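The compile-and-benchmark pattern Hill describes maps onto the AWS Neuron SDK for PyTorch on Inf1 instances. A hedged sketch follows, using a stand-in torchvision model rather than one of Dataminr’s proprietary models.

import torch
import torch_neuron  # AWS Neuron plugin for PyTorch; registers torch.neuron
from torchvision import models

# ResNet-50 stands in for a production computer vision model.
model = models.resnet50(pretrained=True).eval()
example = torch.rand(1, 3, 224, 224)

# Compile for AWS Inferentia. Operators without Neuron support
# automatically fall back to CPU.
model_neuron = torch.neuron.trace(model, example_inputs=[example])
model_neuron.save("resnet50_neuron.pt")

# A benchmarking step would then time the compiled model against the
# GPU/CPU baseline to decide whether the deployment wins on throughput
# per dollar, as described above.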
Dataminr Achieves up to 9x Better Throughput per Dollar Using AWS Inferentia

Learn how Dataminr increased throughput per dollar by up to nine times using AWS Inferentia.

Dataminr, which detects high-impact events and emerging risks for corporate and government customers, wanted to increase the scale of its artificial intelligence (AI) models to provide more comprehensive event coverage by processing more data. The company uses AI to detect the earliest signals of high-impact events and emerging risks within publicly available data in near real time. Because Dataminr employs a complex mix of machine learning (ML) models to process petabytes of data each day, scaling efficiently was a difficult task. “We wanted to continue to scale our deployment of AI models in production, but at the same time, we wanted to bend the cost curve,” says Hill.

Due to the size and scope of Dataminr’s systems, the company strives to optimize everywhere that it can. However, it’s not enough to reduce costs. Each project that the company undertakes must help it increase scale, whether in the form of compute speed or the number of data sources. Dataminr uses Amazon Elastic Compute Cloud (Amazon EC2), a broad and deep compute solution, to host its models at scale. “For any organization, time and money are constraints, but we wanted to continue efficiently scaling our coverage to generate additional types of alerts,” says Hill. The company started searching for a way to optimize for both speed and cost simultaneously to scale on Amazon EC2.

Outcome | Scaling Global Alerts Using AWS Services

Operating at a global scale, Dataminr has used AWS Inferentia to both reduce costs and expand its AI capabilities. The company is confident that it can continue to increase the value that it provides its worldwide corporate and government customers with fast and accurate event alerts. “To sum up the AWS Inferentia deployment: it was an innovative way to scale our platform’s scope efficiently,” says Hill. “We’re happy to say that it produced all the promised benefits.”

Moving forward, the company is targeting improvements across corporate risk, cyber risk, and social good. Though Dataminr has access to greater scale with less spend, there are plenty of opportunities to be addressed. The company is considering using some new AWS services to help it continue improving. Among them is AWS Trainium, a high-performance ML training accelerator. “We’ll continue to explore ways to make our compute faster, cheaper, and more scalable using AWS services,” says Jaimes.

Benefits of AWS

Up to 9x increase in data throughput per dollar
Up to 5x increase in data volume processed
Enhanced accuracy by using more complex models
Enthused development teams

Dataminr needs to continually improve its services and features because emergency responders depend on its event alerts.
The company was running its models on a mix of CPUs and GPUs, and there was no clear path toward improving its processing throughput while reducing costs. “Speed is critical for our customers because they need our services for emergency response, so our near-real-time alerts save lives,” says Jaimes. “Our corporate customers also rely on the speed of our alerting to reduce risk from events that might impact them.” Dataminr was in communication with Amazon Web Services (AWS) when it discovered AWS Inferentia, purpose-built accelerators that deliver high performance while reducing inference costs. The company then used AWS Inferentia to accomplish both its performance and cost-efficiency goals: improving data throughput and covering more data sources for first responders and corporate customers. Dataminr improved data throughput per dollar by five times or more on the AI models that it optimized for AWS Inferentia and realized up to nine times better throughput per dollar.

Developers are also enthused. Dataminr emphasizes innovation, and the engineers are excited to have a new, cost-effective way to deploy AI models beyond CPUs and GPUs. The company’s commitment to innovation is now driving an internal optimization push to automate model compilation and benchmarking. “We really like working on AWS Inferentia,” says Jaimes. “We need only a few people to get this up and running, which is great.”

About Dataminr

Dataminr provides the earliest indication of high-impact events and emerging risks. Dataminr’s artificial intelligence platform processes data from over 500,000 public sources to generate alerts that help customers effectively manage crises and emergency response."
DB Energie Case Study.txt,"Alongside units from product development, grid operations, and IT, DB Energie successfully deployed two use cases within 10 months: the demand forecasting model and a model to decrease peak energy load from train operations. “Without Amazon SageMaker, it would have been hard to deploy any of these models in such a short period,” says Dr. Florian Senzel, lead data scientist for ML at DB Energie. Currently, the team is training models for three to four additional use cases, such as predictive maintenance and renewable energy forecasting.

Building a Fully Managed ML Operations Pipeline Using Amazon SageMaker

In February 2021, three DB Energie data scientists joined two engineers of the data lake team to build an ML pipeline with a goal of producing solutions in less than a year. They elected not to use a Kubernetes infrastructure, which might have required three full-time engineers to manage. Instead, they built a continuous integration and delivery pipeline for ML operations that activates the deployment of Amazon SageMaker services such as model training and inference.
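A pipeline like the one described can trigger SageMaker training and deployment with a few SDK calls. The sketch below uses a hypothetical container image, IAM role, and buckets; it illustrates the shape of such a CI/CD step, not DB Energie’s actual code.

import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.eu-central-1.amazonaws.com/demand-forecast:latest",  # hypothetical image
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-models/output",  # hypothetical bucket
    sagemaker_session=session,
)

# A CI/CD job can run these two steps instead of a data scientist's notebook:
# train on data from the data lake, then stand up a managed inference endpoint.
estimator.fit({"train": "s3://example-data-lake/energy-demand/train"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")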
DB Energie Uses Machine Learning to Enhance Sustainability and Reliability of Its Power Grid Operations

As part of the German national railway Deutsche Bahn (DB), DB Energie GmbH (DB Energie) wanted to use machine learning (ML) to help meet sustainability and electricity supply reliability goals. Data scientists sought a cost-effective, scalable solution that would free them to focus on training models they could launch quickly into production. DB Energie turned to Amazon Web Services (AWS) and used Amazon SageMaker, which data scientists and ML engineers use to build, train, and deploy ML models with managed infrastructure, tools, and workflows. Within 1 year, DB Energie built a scalable ML pipeline that empowers fast deployment, helping to deliver agile and customer-centric data products.

Bridging the Gap between Experimentation and ML in Production

DB Energie is the main electricity provider and exclusive operator of the power grid for Deutsche Bahn. It faced strict enterprise compliance regulations as it sought to reduce operational burden in the ML process. Initially, data scientists wrote code in their own notebooks, which limited their ability to demonstrate the practical value of their models. For example, they had developed a demand forecasting model that uses historical data to predict future energy demand but lacked a way to operationalize the insights. With data engineers from DB Systel GmbH, the main IT provider of Deutsche Bahn and an AWS Partner, DB Energie was converting the company’s data warehouse to a data lake on AWS. DB Energie wanted to connect its ML pipeline to the data lake, which stores large volumes of raw structured and unstructured data. “We wanted to standardize how we did studies,” says Senzel. “But we were puzzled by establishing the technical infrastructure.”

The team uses a web-based interface to access a set of purpose-built ML tools through Amazon SageMaker Studio, a fully integrated development environment for ML. “Using Amazon SageMaker Studio, we take really fast actions and provide better consulting to our clients,” says Dimitrios Avramidis, a data scientist at DB Energie. Data scientists manage their models centrally using Amazon SageMaker Model Registry, which simplifies the process of managing model versions.

Driving a Future of Sustainability through ML

DB Energie’s commitment to ML helps fulfill Deutsche Bahn’s Strong Rail initiative to improve rail travel efficiency and drive sustainability. “Using AWS, we’re establishing a data-driven culture within our company,” says Senzel. “We are showing what ML and data science can offer, answering business questions, and establishing trust in the magic of ML and artificial intelligence.”

“AWS services have empowered us to collect data and produce value for our clients with our analysis and machine learning solutions.” —Dimitrios Avramidis, data scientist, DB Energie GmbH"
DBS Bank Uses Amazon ElastiCache for Redis to Run Its Pricing Models at Real-Time Speed _ DBS Bank Case Study _ AWS.txt,"DBS Bank Uses Amazon ElastiCache for Redis to Run Its Pricing Models at Near Real-Time Speed

In recent years, DBS migrated its Quant Pricing Engine (QPE) to Amazon Web Services (AWS) to offer near real-time pricing with a dynamic workload for its customers.
Using this innovative pricing solution, DBS processes data on a massive scale on demand and generates responses from its pricing models at a fast speed. With QPE, DBS has effectively harnessed the power of cloud technology to improve its customers’ price discovery journeys and help traders better manage their market risks.

Learn how DBS Bank built its innovative Quant Pricing Engine using Amazon ElastiCache for Redis.

Opportunity | Using Amazon ElastiCache for Redis to Process Data at a Massive Scale for DBS

As one of the largest banks in Asia, DBS Bank Ltd. (DBS) offers innovative financial services to support a wide range of customers, including trading companies. Over the decades, the bank’s quantitative pricing engines have helped trading customers identify the most profitable opportunities using algorithms built in house. These engines were hosted on legacy on-premises infrastructures powered by various Windows and Linux systems with traditional databases, which were costly to maintain and difficult to scale.

Along with rapid market movement and the need for dynamic trading, the workload for pricing engines also varies dramatically. The on-premises infrastructure could not be efficiently scaled to meet traders’ needs. In addition, millions of dollars were spent every year on fintech vendor licensing. DBS chose to build a cloud-based solution on AWS and used Amazon ElastiCache for Redis—an ultrafast in-memory data store with microsecond response times—to achieve near real-time performance.

On AWS, DBS can access the latest technologies and seamlessly incorporate them into its solution stack. For example, it can set up ElastiCache clusters to partition data across multiple shards. Due to the scale of DBS’s databases, data read/write processes can happen hundreds of thousands of times per second. This scale would overwhelm a traditional database immediately, but the flexible ElastiCache clusters can scale to meet DBS’s demands effortlessly and without interruption.

Outcome | Continuing to Develop Cutting-Edge Financial Models for QPE on AWS

Harnessing ultrafast performance and agility, DBS will continue to expand its QPE with even more cutting-edge solutions. Next on DBS’s road map is to build machine learning and artificial intelligence solutions on AWS and incorporate advanced analytics into its QPE. “We’re always looking for new ways to boost efficiency, improve performance, reduce costs, and explore opportunities,” says Gengpu Liu, executive director of quant and tech modeling for DBS’s Treasury and Markets business. “On AWS, we can always find new solutions to help achieve our goals.”
Solution | Reducing Pricing Query Response Time by 100x with Amazon ElastiCache for Redis

With support from the AWS team, DBS began to build QPE in 2018. After launching the first subsystem for its QPE in September 2019, DBS built nine subsystems covering different trading activities in just 3 years. “On AWS, we took advantage of the capacity, reliability, technology, and support that we needed to build QPE,” says Liu. “With all these capabilities, we were able to deliver a powerful and reliable system in a short period of time.”

DBS uses Amazon ElastiCache for Redis as a near real-time cache to handle complicated job queues for its QPE (a sketch of the basic caching pattern follows this section). As a result, it has vastly improved its pricing query response time from up to 1 minute to as fast as 0.5 seconds—a 100-times improvement in performance. “Our customers have access to prices from different banks,” says Liu. “They indicate that we’re among the fastest in the industry to provide them a price, which lets us capture more business opportunities and increase customer satisfaction.”

Powered by ElastiCache for Redis and other services, DBS has achieved virtually infinite scalability for its pricing engines, which is key to fulfilling fluctuating computing needs in its trading business. In the cloud, DBS quickly provisions capacity as needed by using Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload, and Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service, in conjunction with ElastiCache for Redis. “Previously, setting up an on-premises infrastructure was a painful task that involved tedious resource acquisition and lengthy provisioning activities,” says Liu. “It would take months for the infrastructure to be ready to use. On AWS, it can be done in 1 minute.”

DBS can effectively scale its QPE to meet customers’ pricing requests. Hundreds of millions of tasks are processed daily, amounting to an estimated 10 TB of data per day. The company has scaled up to 5,000 CPUs on Amazon ECS, which can be scaled up further if needed. “The best benefit of the cloud is on-demand capacity,” says Liu. “We can provision resources from AWS for whatever we need, whenever we need them. For the nature of our job, AWS is a perfect fit.” In addition to scalability and performance benefits, DBS has also reduced its pricing engine costs. The bank no longer needs to pay millions of dollars in annual licensing fees. It achieved further cost savings by adopting Amazon EC2 Spot Instances, which run fault-tolerant workloads at up to a 90 percent discount compared with On-Demand Instances.

Benefits of AWS

100x improvement in customer pricing query response time
Scales to support hundreds of thousands of data read/write processes per second
Achieved significant cost savings in fintech vendor licensing fees
Significantly reduced computing costs with Amazon EC2 Spot Instances
Improved revaluation and risk performance of risk engines by a few times

AWS Services Used

Amazon ElastiCache is a fully managed, Redis- and Memcached-compatible service delivering real-time, cost-optimized performance for modern applications.
Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that simplifies your deployment, management, and scaling of containerized applications.
Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances let you take advantage of unused EC2 capacity in the AWS Cloud.
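The caching pattern behind this kind of speedup is essentially cache-aside with short expirations. A minimal sketch follows, with a hypothetical cache endpoint and a stubbed pricing function standing in for the expensive QPE job; a cluster-mode ElastiCache deployment would use redis.cluster.RedisCluster instead so keys spread across shards.

import json
import redis

cache = redis.Redis(host="qpe-cache.example.amazonaws.com", port=6379)  # hypothetical endpoint

def run_pricing_model(instrument_id: str) -> dict:
    # Stand-in for the compute-heavy pricing job dispatched to the QPE.
    return {"instrument": instrument_id, "price": 101.25}

def get_price(instrument_id: str) -> dict:
    key = f"price:{instrument_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)             # sub-millisecond cache hit
    price = run_pricing_model(instrument_id)  # slow path: recompute
    cache.setex(key, 5, json.dumps(price))    # short TTL keeps quotes fresh
    return price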
About DBS Bank Ltd.

DBS is a financial services group in Asia with a presence in 19 markets. Named World’s Best Bank by Global Finance and Euromoney and Global Bank of the Year by The Banker, DBS provides a full range of services in consumer, SME, and corporate banking.

Headquartered and listed in Singapore, DBS is a leading financial services group with a presence in 19 markets and over S$744 billion in assets. It provides a full range of services in consumer, small and medium enterprise, and corporate banking. Over the decades, to best serve its trading customers, DBS has built quantitative pricing algorithms that identify and capitalize on available trading opportunities. “In the past, what we used for our pricing models was hosted on premises, from the hardware to the software—and that limited our agility,” says Liu. “We didn’t have the capacity to scale up whenever we needed to.”

DBS can also access a variety of services, capacities, and capabilities on AWS, such as CPU and GPU instances. It can thus adopt the most efficient solutions to run different workloads. This agility is a major advantage for the bank, which powers many different use cases. “We can choose AWS services based on our job nature,” says Liu. “With its suite of services, there is always something that suits our purpose, which is good.”"
DCI Saves 27 on Cloud Costs Gains Support for Long-Term Growth Using AWS _ Amazon EC2.txt,"In addition, DCI’s participation in AWS Activate—which offers free tools, resources, and more to help startups quickly begin using AWS—meant that it could move fast, using guidance from its account team and AWS support engineers. The migration reduced its monthly cloud costs by 27 percent. DCI now believes it has the tools and support it needs to set itself up for long-term success.

DCI was a little more than two years old when an investor suggested the company migrate to AWS to avoid the kind of billing issues it had with its previous cloud provider. On multiple occasions, DCI received higher-than-expected charges for routine usage, which meant founder and chief executive officer (CEO) Kyriakos Zannikos had to spend time trying to resolve billing with the provider. “We are a startup—we cannot have a resource dedicated to managing the cloud service charges,” says Zannikos. “That’s not our focus. We’re trying to build our product.”

About DCI

Digital Commerce Intelligence (DCI) provides intelligence about online market trends, competitors, and brand performance, allowing its customers to plan corporate strategy based on data. It was founded in 2018 and is based in Singapore.

Founded in 2018 in Singapore, DCI saw that ecommerce businesses in Southeast Asia were operating blind and making decisions on intuition rather than data. DCI makes timely ecommerce market intelligence available to businesses to help them make better commercial decisions. DCI now provides insights on market sizing, trends, competition, and brand performance to customers throughout Southeast Asia.
Digital Commerce Intelligence (DCI) was founded in 2018 in Singapore and has offices in both Singapore and Greece. DCI provides businesses with intelligence on market trends, competition, and brand performance that allows them to plan corporate strategy based on data. As DCI grew, it was hindered by a lack of flexibility and high costs from its previous cloud provider. The company migrated to AWS in 6 months using the AWS Startup Program. The migration reduced its monthly cloud costs by 27 percent. DCI now has the tools and support it needs to achieve long-term success.

The company used proprietary tools that it had built and optimized for its previous provider and, in addition to migrating compute and data to AWS, it also needed to update and test those tools. DCI was able to migrate its data collection tools, SQL Server, messaging queue, Kubernetes clusters, image registry, and compute to AWS. It is now running about 65 percent of its systems on AWS and intends to move the rest after its remaining tools are updated to run on AWS. “In contrast to our previous provider, AWS provides a feature-rich and configurable cloud experience,” says Cavan David, software development lead at DCI. “With the help of the AWS team, we were able to migrate our systems from the previous cloud service to AWS in a couple of months with just a team of two engineers and without a lot of DevOps know-how.”

Migrating about 65 percent of its systems to AWS took DCI only 6 months. It plans to migrate the remainder soon. So far, the migration to AWS has reduced monthly IT costs by 27 percent. Those savings matter because, to run its algorithms and deliver results to its customers, DCI needs to ingest and process a lot of data. These results give DCI customers the market insights they need to run their businesses more intelligently. DCI also found that the support at AWS helped it make better choices for the company overall. “We wanted to have an account manager from our cloud services provider who could guide us,” says Konstantinos Kitsaras, chief technical officer (CTO) at DCI. “We wanted someone to help us select the right services, evaluate our architecture, and evaluate workloads. Someone who would share knowledge with us. We got that from AWS.”

The AWS team has provided better cost control and support for DCI. “Lower costs mean we can spend more on people and on product development—things that make the business more competitive,” says Kitsaras. “We now have a deeper understanding of how we use our cloud services. The insights we get from CloudWatch, for example, help us react quickly to any infrastructure issues that may affect our customers. We also have responsive support to help us if we ever have a question. As a market intelligence company, we see the value of what we’ve gained by using AWS.”

Benefits of AWS

Reduced monthly cloud costs by 27%
Migrated services in 6 months
Improved customer support and guidance
Gained insight into use of cloud services

DCI uses Amazon RDS for SQL Server (Amazon RDS) to ingest and process data. Amazon RDS makes it easy to set up, operate, and scale SQL Server deployments in the cloud.
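Provisioning an RDS for SQL Server instance of the kind described takes a single API call. The identifiers and sizes below are illustrative, not DCI’s configuration.

import boto3

rds = boto3.client("rds")

# Sketch: create a managed SQL Server instance for data ingestion.
rds.create_db_instance(
    DBInstanceIdentifier="ingest-sqlserver",  # hypothetical name
    Engine="sqlserver-se",                    # RDS for SQL Server, Standard Edition
    DBInstanceClass="db.m5.large",
    AllocatedStorage=200,                     # GiB
    MasterUsername="admin",
    ManageMasterUserPassword=True,            # RDS stores the password in Secrets Manager
    LicenseModel="license-included",
    MultiAZ=True,                             # standby replica for availability
)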
DCI Saves 27% on Cloud Costs, Gains Support for Long-Term Growth Using AWS

Monthly Cloud Costs Cut by 27% Using AWS

To provide market intelligence to customers, DCI uses a proprietary algorithm that acquires publicly available real-time data from top ecommerce platforms. It then converts that data into ready-to-use sales performance insights that customers can view using interactive dashboards. This allows customers to plan ecommerce strategy based on data, not guesswork. “If you’re selling products online, you need to know if you’re doing it as fast as your competitors, or if you’re a leader, in last place, or in the middle,” says Zannikos. “Our solutions give you that critical information.”

However, as DCI grew, its cloud services provider lacked the flexibility it needed, resulting in unpredictable compute and database costs. The company migrated to Amazon Web Services (AWS) in 6 months using the AWS Startup Program. This AWS program offers a broad range of events to support startups as they launch, grow, and scale.

Migrating to AWS in 6 Months and Gaining a Cloud Guide

The company uses Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload, and Amazon Simple Storage Service (Amazon S3), object storage built to retrieve any amount of data from anywhere. Amazon CloudWatch (CloudWatch) has been added to gain observability of DCI’s AWS resources and applications.

AWS Services Used

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload. We are the first major cloud provider that supports Intel, AMD, and Arm processors, the only cloud with on-demand EC2 Mac instances, and the only cloud with 400 Gbps Ethernet networking.
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can store and protect any amount of data for virtually any use case, such as data lakes, cloud-native applications, and mobile apps.
Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), IT managers, and product owners. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, and optimize resource utilization. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events.
Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
Migrate with AWS: the most complete solutions to efficiently migrate to AWS and see business results faster."
Deep Pool Optimizes Software Quality Control Using Amazon QuickSight _ Deep Pool Case Study _ AWS.txt,"Solution | Unlocking Previously Inaccessible Data

Using QuickSight, Deep Pool can analyze software development data at the granular level and provide business intelligence to its entire organization. The company has seven development squads that work independently to build components of its software; using QuickSight, Deep Pool can track data like the number of software tests performed, the number of tests failed, whether any bugs were found, and when those issues were addressed for each squad.
It can even trace software bugs down to their source, which makes it simple to locate areas for improvement.

Deep Pool Optimizes Software Quality Control Using Amazon QuickSight

About Deep Pool Financial Solutions

Deep Pool Financial Solutions is an investor servicing and compliance solutions supplier, providing software and consulting services to the world’s leading fund administrators and asset managers.

During a larger migration to Amazon Web Services (AWS), Deep Pool discovered Amazon QuickSight, a cloud-native service that powers data-driven organizations with unified business intelligence at hyperscale. Using this innovative service, the company could meet varying analytic needs from the same source of truth through modern interactive dashboards, paginated reports, embedded analytics, and natural-language queries. Since adopting QuickSight, Deep Pool has democratized access to unused data, unlocking key insights to improve the overall quality of its software.

Opportunity | Using Amazon QuickSight to Improve Software Development

During a lift-and-shift migration to the AWS Cloud, the AWS team introduced Deep Pool to QuickSight. Deep Pool quickly realized that, by using this service on top of its project-management system, it could identify areas for improvement and deploy key strategies to improve the quality of its solutions. “Amazon QuickSight would be an excellent foray into managing the data that we were collecting,” says Brett Promisel, chief operating officer for Deep Pool. “This solution provided the means to use previously inaccessible data and track key performance indicators involving software tests, failures, and successful fixes.”

By unlocking these critical insights, Deep Pool can then take targeted actions to streamline the development cycle and improve software quality. Since the move to AWS, Deep Pool has increased software testing by 154 percent, but the number of issues that it has discovered and logged has dropped by 57 percent. “We’re just getting into the value of using Amazon QuickSight,” says Promisel. “But we’ve already proven that we can use it to measure our goals of quality control and improvement, which helps our customers as well as our internal efficiencies.”

AWS services, such as QuickSight, will continue to be critical tools for Deep Pool. The company is currently exploring ways to implement QuickSight into other workflows and unlock powerful insights about software performance, sales data, and client assets and holdings. “The options are seemingly infinite in terms of what we can do using AWS, and I know we’re just starting down that path,” says Promisel.
“Amazon QuickSight is bringing the full picture of our business intelligence together.”

High-quality software is paramount in the financial services industry, and Deep Pool Financial Solutions (Deep Pool) constantly seeks ways to deliver optimal solutions to its clients. The company, which builds digital solutions for fund administrators and asset managers, collected large amounts of data from its project-management software. This data could be used to increase operational efficiency and, therefore, improve the quality of Deep Pool’s solutions, but siloed systems made this business intelligence difficult to access. In the highly regulated financial services industry, these reports need to be as accurate as possible. Companies need software solutions that they can trust, and Deep Pool incorporates rigorous quality controls into its workflows to meet and exceed its clients’ standards.

Learn how Deep Pool Financial Solutions democratized access to business intelligence using Amazon QuickSight.

Going forward, Deep Pool plans to invest in AWS Training and Certification, which organizations can use to be more effective and do more in the cloud, to continually improve its internal cloud skills and software quality. It has already participated in courses like AWS Cloud Practitioner Essentials, which provides individuals—independent of their specific technical roles—an overall understanding of the AWS Cloud, and AWS Technical Essentials, which teaches about AWS products, services, and common solutions, so this initiative is a natural extension. “As we think about the future of our products, we want our staff to be innovative,” says Promisel. “To keep up, we want to continue to invest in our employees to make sure that they can perform at the highest level.”

Outcome | Improving Client Satisfaction with High-Quality Digital Solutions

Because Deep Pool’s project-management tool also tracks customer support requests, the company can use QuickSight to make sure that each ticket is resolved promptly and to the customer’s satisfaction. It can also identify unique trends, such as when multiple customers encounter the same roadblock, and take corrective action when necessary. “On Amazon QuickSight, we have a log of every customer’s request, the age of that request, how it’s being resolved, and so forth,” says Promisel. “We can use this solution to not only optimize our internal approach to development but also to track how the client perceives our service.” Since it began using Amazon QuickSight, Deep Pool has improved client satisfaction by 16 percent. (A sketch of embedding such a dashboard follows this section.)

Benefits of AWS

57% decrease in software issues logged
154% increase in software testing
Improved software quality control
Improved development efficiency
Analyzed previously inaccessible data

AWS Services Used

Amazon QuickSight powers data-driven organizations with unified business intelligence (BI) at hyperscale. With QuickSight, all users can meet varying analytic needs from the same source of truth through modern interactive dashboards, paginated reports, embedded analytics, and natural-language queries.
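Dashboards like these can also be surfaced inside internal tools. A sketch of generating an embed URL for a registered QuickSight user; the account, user, and dashboard identifiers are placeholders, not Deep Pool’s resources.

import boto3

qs = boto3.client("quicksight")

response = qs.generate_embed_url_for_registered_user(
    AwsAccountId="123456789012",  # placeholder account
    UserArn="arn:aws:quicksight:eu-west-1:123456789012:user/default/analyst",  # placeholder user
    SessionLifetimeInMinutes=60,
    ExperienceConfiguration={
        "Dashboard": {"InitialDashboardId": "qa-quality-dashboard"}  # placeholder dashboard ID
    },
)
print(response["EmbedUrl"])  # URL to iframe into an internal portal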
Português" Delivering a Seamless Gaming Experience to 25 Million Players Using AWS with Travian Games _ Travian Games Case Study _ AWS.txt,"Travian is now migrating its business intelligence systems to AWS using Amazon Redshift, which uses SQL to analyze structured and semistructured data across data warehouses, operational databases, and data lakes. Using data analytics, Travian will be able to analyze player behavior in the game based on the 11 TB of data that it collects each month and make improvements. “It used to be impossible for us to do this at this scale,” says Strathaus. “We’re looking forward to using analytics to improve our games further on AWS.” Français Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers. Learn more » Travian needed a more stable service that could handle Kubernetes. The studio was initially hesitant to use AWS because the offerings from AWS are so vast that Travian worried it would be overwhelming. However, as the need for reliability became paramount, Travian decided to give it a try. “We spoke with people at AWS and had the feeling that they want to help us grow,” says Strathaus. “That is exactly what we were looking for.” Travian realized that AWS was willing to collaborate to help Travian learn how to use AWS services to improve its games. The studio scheduled six special workshops, AWS Immersion Days, to learn how to get the most out of AWS services. It then started using AWS in 2021. Within 1 year, Travian’s two biggest games were running completely on AWS. 2023 Opportunity | Using AWS to Deliver a Reliable Gaming Experience for Travian Español To deliver a seamless experience to its loyal player base, Travian migrated to Amazon Web Services (AWS). “We were searching for someone who really understands our business, someone who’s there to help us make our games better,” says Joerg Strathaus, chief executive officer (CEO) of Travian Games. “Collaborating with the AWS team has been amazing.” The studio used AWS for Games, purpose-built game development capabilities, to implement its initiative. Now, Travian players are enjoying greater game stability, its developers don’t have to spend weeks troubleshooting reliability issues, and its leaders are using data to drive business intelligence. In 2015, Travian migrated to a private cloud, and then in 2020, it changed its architectural approach and began using a managed Kubernetes service on a different cloud provider. However, the studio continued to need additional stability. “We had outages pretty much every day,” says Daniel Thoma, head of technical operations at Travian Games. “Our developers would spend weeks combing through code trying to find the fault, but they never found anything.” On several occasions, the studio had to implement rollbacks that restored the game to 48-hour-old backups—a frustration point for both Travian and its customers. Optimized to accommodate player needs Equipped with its new tools, Travian feels confident that it can continue improving and expanding its game worlds on AWS. The studio is now working to enhance its browser games. “We know that we can call AWS whenever we have a question, and the team will be there to support us,” says Strathaus. 
“We’re happy to have found a team that will collaborate with us into the future.”

Delivering a Seamless Gaming Experience to 25 Million Players Using AWS with Travian Games

About Travian Games

Founded in 2005, Travian Games is a strategy game studio known for titles including Travian: Legends and Rail Nation. The company, which has a community of 25 million players, makes both turn-based and near-real-time titles.

Learn how Travian Games achieved scalability and reliability by migrating to AWS.

When Travian Games (Travian) wanted to achieve high reliability for its titles, the strategy game studio needed a new solution to support its 25 million registered players. The studio focused on eliminating stability issues to make it simpler for developers to focus on creating new features. As Travian continued to release near-real-time games, reliability would be essential for players to have consistent access to their game worlds.

Solution | Collaborating with the AWS Team to Create a Resilient Infrastructure

When Travian migrated its first game to AWS, the results were immediate. “We have no issues,” says Thoma. “The result is what counts, and it’s running. The players don’t see time-out errors. It’s stable. It’s reliable.” On AWS, Travian uses Amazon Elastic Kubernetes Service (Amazon EKS), a managed service to run Kubernetes in the cloud. Using the managed worker nodes within Amazon EKS, Travian has a deeper level of management over its container deployments than it did before. Additionally, Travian gained greater scalability by using Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity. Now, the studio can react more quickly to changes in player demand.

Migrating to AWS also saved time for Travian developers, who no longer need to spend days or weeks combing the code for errors. Instead, they can develop the code further and add value to the game. Travian also increased reliability and reduced the burden on developers by using Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up and scale databases in the cloud. “Before we migrated to AWS, the database was not corresponding with the web server fast enough,” says Thoma. “Now, our teams use Amazon RDS easily without doing any of the configuration work that used to be necessary.” After migrating to Amazon RDS, Travian collaborated with the AWS team to optimize its spending. AWS recommended the use of next-generation Amazon RDS General Purpose gp3 storage volumes for Rail Nation. Using gp3 storage volumes, Travian reduced the size of its databases by 50 percent while increasing the rate of input/output operations per second.
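The gp3 change is a storage-type modification on the database instance; because gp3 decouples provisioned IOPS from volume size, a smaller volume can still deliver higher I/O. A sketch with a placeholder instance name and illustrative values, not Travian’s actual settings:

import boto3

rds = boto3.client("rds")

# Sketch: move an RDS instance to gp3 storage with explicit IOPS.
# ApplyImmediately avoids waiting for the next maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="railnation-prod-db",  # placeholder name
    StorageType="gp3",
    AllocatedStorage=400,  # GiB; illustrative size
    Iops=12000,            # provisioned independently of volume size on gp3
    ApplyImmediately=True,
)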
Outcome | Engaging Gamers Using AWS

While the increased reliability and new tools have been crucial for Travian, collaborating with the team at AWS has been a major benefit as well. “The most important part of choosing a service provider for me was to find a ‘partner in crime,’ a collaborator who really understands our business and who is there to help us,” says Strathaus. “I’m really happy that we made this move to AWS for Games.”

Founded in Germany in 2005, Travian creates strategy games such as Travian: Legends, Crowfall, and Rail Nation. Its titles are 4X games, which means that players explore, expand, exploit, and exterminate within the game world. “When we’re talking about a game like this, stability is crucial because the games take place in near real time,” says Strathaus.

Benefits of AWS

Improved game reliability
Scaled migration scope
Liberated developers from code reviews
Unlocked cost-saving opportunities
Optimized to accommodate player needs

AWS Services Used

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service to run Kubernetes in the AWS Cloud and on-premises data centers.
Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 600 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
AWS for Games aligns purpose-built game development capabilities—including AWS services, AWS solutions, and AWS Partners—to help developers build, run, and grow their games."
Delivering Engaging Games at Scale Using AWS with Whatwapp _ Case Study _ AWS.txt,"Opportunity | Using AWS to Create Standardized Gaming Infrastructure for Whatwapp

Whatwapp was founded in Milan in 2013 by a small team of university students who wanted to reinvent classic cultural card games as video games. A decade later, the app had 29 million downloads, with averages of 900,000 monthly and 300,000 daily users. As it grew, Whatwapp needed to improve scalability and backend management for its games. “At the beginning, we explored different technologies, people were coming and going, and we were changing very quickly,” says Ricardo Gonzalez, technical lead at Whatwapp. The company needed a solution to more easily share and manage knowledge, such as database and authentication, and features, such as leaderboards and player-to-player challenge matchmaking. Implementing new features took up too much valuable engineering time, and difficulties maintaining compatibility among game clients led to ever-increasing technical debt and updates that threatened to harm user retention. To solve these problems, Whatwapp looked to standardize its game infrastructure. “We’re now trying to put down common standards among games, with best practices and a common core, automating as much as possible,” says Gonzalez.

Whatwapp looked to AWS in its effort to standardize its backend operations, avoid constant rewriting, and maintain compatibility with older versions. “We already had an AWS account, so migrating our games to AWS was the best choice for us,” says Gonzalez. One of the services Whatwapp was already using was Amazon Elastic Kubernetes Service (Amazon EKS), a managed Kubernetes service, for its backend operations. To manage backend game operations, Whatwapp elected to host the Nakama solution on its own Kubernetes clusters using Amazon EKS.
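Hosting a game server such as Nakama on Amazon EKS typically means running it on a managed node group that can scale with player load. A sketch with placeholder cluster, subnet, and role identifiers, not Whatwapp’s actual deployment:

import boto3

eks = boto3.client("eks")

# Sketch: a managed node group that could host Nakama pods and scale
# between 2 and 12 nodes as player demand changes.
eks.create_nodegroup(
    clusterName="game-backend",        # placeholder cluster
    nodegroupName="nakama-nodes",
    scalingConfig={"minSize": 2, "maxSize": 12, "desiredSize": 3},
    subnets=["subnet-0abc1234", "subnet-0def5678"],  # placeholder subnets
    instanceTypes=["m5.large"],
    nodeRole="arn:aws:iam::123456789012:role/eks-node-role",  # placeholder role
)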
Solution | Accommodating 40,000 Simultaneous Players Using Nakama on Amazon EKS

In 2022, Whatwapp conducted a smooth migration with limited disruptions to its live games when it migrated its backend operations to Nakama, running on its own Kubernetes clusters using Amazon EKS. By pairing its own use of AWS services with Nakama, Whatwapp now has a scalable game server that can accommodate 40,000 simultaneous players and gains visibility, time savings, and feature improvements. “Nakama was the game service provider that had all the features that we needed out of the box,” says Giovanni Piumatti, technical lead at Whatwapp. “Our games were already live, and we had a large number of active users. It also let us run code in JavaScript, which allowed us to start from our existing codebase, and that made the migration a lot easier.”

Figure 1: Whatwapp Architecture Diagram

Managing Nakama on Amazon EKS gives Whatwapp greater visibility, meaning the company can alleviate gaming bottlenecks and identify underperforming code. “Now we can see bottlenecks and improve our code. We know how to improve our code base to get the best out of both Nakama and AWS,” says Gonzalez. Now, sharing features among games takes approximately one-third of the time that it used to take. Developers no longer need to rewrite code for each individual technical stack or push out critical updates to players. Time saved can be spent creating new features to engage players and drive retention. Because Whatwapp’s games are social multiplayer games, matchmaking—connecting individuals’ and teams’ experience at comparable challenge levels—is particularly critical to user experience and, ultimately, retention. Whatwapp developed its own asynchronous matchmaking feature, which it manages using Nakama. Whatwapp also runs a number of other social and competitive APIs on Nakama, including logins, authentication, chat, near-real-time parties, tournaments, and leaderboards.

Behind the Nakama solution running on Amazon EKS, Whatwapp also uses a suite of AWS services to run its internal operations and improve the gaming experience for its players. For cost-effective storage, Whatwapp uses Amazon Simple Storage Service (Amazon S3), an object storage service offering scalability, data availability, security, and performance. For data ingestion, Whatwapp migrated to Amazon Kinesis Data Streams, a serverless streaming data service that makes it simple to capture, process, and store data streams at virtually any scale (see the ingestion sketch below). Whatwapp uses Amazon CloudFront—a content delivery network service built for high performance, security, and developer convenience—to deliver content for its games.
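On the ingestion side, writing gameplay events to Kinesis Data Streams is a single API call per record. The stream name and event shape below are illustrative; partitioning by player ID keeps each player’s events ordered within a shard.

import json
import boto3

kinesis = boto3.client("kinesis")

# Sketch: publish one gameplay event to a stream for downstream analytics.
event = {"player_id": "p-42", "action": "match_won", "game": "cards"}  # illustrative event
kinesis.put_record(
    StreamName="game-events",                  # placeholder stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["player_id"],           # preserves per-player ordering
)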
Gaming company Whatwapp needed to standardize its infrastructure to save engineering time, support player retention, and avoid ever-increasing technical debt. The company wanted to streamline its backend infrastructure to provide a consistent, optimized player experience for its users. But rewriting feature implementations to share among its games was time-consuming and led to inconsistencies, complexity, and incompatibility. Since its inception, Whatwapp had been using solutions from Amazon Web Services (AWS) for its internal operations. So it decided to migrate its games’ backend solution and unify implementations on AWS through Nakama, an open-source distributed social and near-real-time server for games and apps provided by Heroic Labs, an AWS Partner.

Outcome | Attracting Players and New Talent with Improved User Experience and Faster Delivery

Whatwapp is now focused on using Nakama to perfect its original games, building consistency across versions and laying the groundwork for innovation and expansion in the future. Better social and competitive game features make competitions more compelling, and modernized infrastructure makes it easier for Whatwapp’s engineers to create and share features. Most importantly, the improvements are passed along to players. “Using AWS for our new infrastructure, we deliver content to players faster, without forcing them to download any updates,” says Piumatti. “They can use it almost as quickly as we can deploy it.” Developing its infrastructure on AWS has the added benefit of making Whatwapp more attractive to new DevOps talent, who prefer to work with updated, agile technology.

Delivering Engaging Games at Scale Using AWS with Whatwapp

About Whatwapp

Founded by university students in 2013, Whatwapp is a gaming company that provides social video-game versions of classic cultural games. As of 2023, Whatwapp averages 900,000 monthly active users, playing as individuals and clubs.

Learn how gaming company Whatwapp achieved scalability, availability, and control of its data using AWS solutions.

Benefits of AWS

900,000 monthly and 300,000 daily users
66% reduction in time to share game features
Increased visibility into game and code performance

AWS Services Used

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service to run Kubernetes in the AWS Cloud and on-premises data centers.
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store data streams at any scale.
Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience."
Delivering Innovative Visual Search Capabilities Using AWS with Syte _ Syte Case Study _ AWS.txt,"About Syte

Syte drives ecommerce performance for fashion, jewelry, and home decor retailers with intuitive search experiences powered by visual artificial intelligence. Its solutions include visual search, artificial intelligence product tagging, and personalized recommendations.

Syte’s ML models are the foundation of its customer offerings.
To host its models, the startup adopted Amazon SageMaker, a service to build, train, and deploy ML models for any use case with fully managed infrastructure, tools, and workflows. Using these models, Syte can automatically extract data from an image or its customers’ product catalogs to support various services. “At Syte, our innovation is in data science,” says Yair Green, vice president of research and development at Syte. “We use Amazon SageMaker to serve and run our ML models. We can build more and more algorithms that we can use for different products.” For example, Syte’s camera search feature can analyze an image uploaded by a shopper and display products similar to the ones in the picture (a sketch of this request flow follows this section). The startup also uses ML models to display dynamic product recommendations based on predictive AI models, and its discovery icon helps shoppers explore similar items if their desired product is out of stock.

Solution | Boosting Customer Conversion Rates by 177% with Innovative Capabilities on AWS

Since optimizing on AWS, Syte has seen a 200 percent increase in traffic and has boosted its revenue. The startup is continuing to deliver innovative search capabilities to its customers, driving powerful ecommerce results. “On average, our customers are seeing average order value increases of 11.5 percent, average conversion rate increases of 259 percent, and average revenue per user increases of 300 percent for shoppers exposed to Syte solutions,” says Gina Yuter, partnership manager at Syte. Using its visual discovery solution, Syte helped Signet Jewelers, a major luxury jewelry retailer in the United Kingdom, increase its conversion rate by 580 percent and average revenue per user by 584.5 percent for website shoppers exposed to the product recommendations. The startup also increased conversions for furniture retailer Coleman Furniture by a factor of 7.1 and helped fashion company Tally Weijl increase average revenue per user by 375 percent.

AWS Services Used

Amazon SageMaker is built on Amazon’s two decades of experience developing real-world ML applications, including product recommendations, personalization, intelligent shopping, robotics, and voice-assisted devices.
Amazon OpenSearch Service makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more.
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 600 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
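A camera-search flow like the one described usually starts by sending the shopper’s image to a hosted model endpoint. Below is a sketch against a hypothetical SageMaker endpoint name; it illustrates the pattern, not Syte’s actual API.

import boto3

runtime = boto3.client("sagemaker-runtime")

# The endpoint name is a placeholder for a visual model served on SageMaker;
# the image bytes would come from the shopper's camera upload.
with open("shopper_photo.jpg", "rb") as f:
    payload = f.read()

response = runtime.invoke_endpoint(
    EndpointName="visual-embedding-endpoint",  # placeholder endpoint
    ContentType="application/x-image",
    Body=payload,
)
embedding = response["Body"].read()  # e.g., a feature vector for similarity search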
Syte also relies on Amazon OpenSearch Service as the core database for its visual search data to reduce complexity, response times, and cost. Using this fully managed service, Syte can support complex search queries in 10 different languages and deliver faster results for users. “Before adopting Amazon OpenSearch Service, we had to manage the search database by ourselves,” says Green. “Now, we do not need to worry about maintenance, upgrades, or backups. Using Amazon OpenSearch Service saves us a lot of time and effort.”
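As a rough illustration of the kind of multilingual query Amazon OpenSearch Service can serve, a search using the opensearch-py client might look like the sketch below. The domain endpoint, index name, and per-language field layout are assumptions for illustration, not Syte's actual schema.

```python
# Illustrative multilingual product search against Amazon OpenSearch Service.
# The endpoint, index name, and per-language fields are assumptions.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "search-mydomain.eu-west-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

query = {
    "query": {
        "multi_match": {
            "query": "red leather handbag",
            # Match per-language analyzed fields, for example English and German.
            "fields": ["title.en", "title.de", "description.en", "description.de"],
        }
    },
    "size": 10,
}

results = client.search(index="products", body=query)
for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```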
Since optimizing on AWS, Syte has seen a 200 percent increase in traffic and has boosted its revenue. The startup is continuing to deliver innovative search capabilities to its customers, driving powerful ecommerce results. “On average, our customers are seeing average order value increases of 11.5 percent, average conversion rate increases of 259 percent, and average revenue per user increases of 300 percent for shoppers exposed to Syte solutions,” says Yuter. Using its visual discovery solution, Syte helped Signet Jewelers, a major luxury jewelry retailer in the United Kingdom, increase its conversion rate by 580 percent and average revenue per user by 584.5 percent for website shoppers exposed to the product recommendations. The startup also increased conversions for furniture retailer Coleman Furniture by a factor of 7.1 and helped fashion company Tally Weijl increase average revenue per user by 375 percent.

Outcome | Continuing to Build on AWS and Deliver Advanced Search Services to Retailers
Now that Syte has migrated its technology stack to AWS, it plans to expand its footprint. The startup has become an independent software vendor and has completed its listing for AWS Marketplace, where customers can find, test, buy, and deploy software that runs on AWS. Syte has also become an AWS Retail Competency Partner, an AWS Partner recognized for providing innovative technology offerings that accelerate retailers’ modernization and cloud journeys. As Syte continues to grow, it plans to use AWS services and resources to enhance its visual search capabilities and support its customers. “In the AWS community, we all want to help and advance our projects,” says Yuter. “We have felt very supported.”

About Syte
Syte drives ecommerce performance for fashion, jewelry, and home decor retailers with intuitive search experiences powered by visual artificial intelligence. Its solutions include visual search, artificial intelligence product tagging, and personalized recommendations.

Benefits of AWS: 42% reduction in cost per transaction; 200% increase in traffic; 177% average increase in customers’ conversion rates; improves scalability without growing head count.

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 600 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
Amazon SageMaker is built on Amazon’s two decades of experience developing real-world ML applications, including product recommendations, personalization, intelligent shopping, robotics, and voice-assisted devices.
Amazon OpenSearch Service makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more.
Amazon Elastic Kubernetes Service (Amazon EKS) automatically manages the availability and scalability of the Kubernetes control plane nodes responsible for scheduling containers, managing application availability, storing cluster data, and other key tasks."
Delivering Travel Deals across 110 Markets Using Amazon CloudFront with Skyscanner _ Case Study _ AWS.txt,"Delivering Travel Deals across 110 Markets Using Amazon CloudFront with Skyscanner

Learn how Skyscanner in the travel industry scales to three billion monthly API requests using Amazon CloudFront.

Overview
As a global leader in travel, Skyscanner Ltd. (Skyscanner) made the strategic decision to operate in one cloud environment as a means to future-proof its environment and identify opportunities for cost savings. Because the company serves 100 million people each month through its travel marketplace, fault tolerance was a high priority for Skyscanner while consolidating its technology stack. Skyscanner had already migrated its front-facing applications from its data center to Amazon Web Services (AWS) in 2017. Based on its experience, the company wanted to standardize its content delivery network (CDN) on AWS. So, the Skyscanner team adopted Amazon CloudFront, which securely delivers dynamic and static content with low latency and high transfer speeds. The Skyscanner team also built a serverless image handler that compresses static content using Amazon CloudFront, helping the company achieve 50 percent cost savings across its total CDN usage.

Opportunity | Using Amazon CloudFront to Optimize the Technology Stack for Skyscanner
As Skyscanner has grown to serve over 110 market domains, the company wanted to support engineering efficiency and productivity while optimizing its cloud spend. Although Skyscanner had invested in AWS technologies, it used a fully managed CDN solution from another provider. “One of the major challenges of this project was that we were untangling almost a decade’s worth of root configurations that our team had not implemented,” says Stuart Ross, senior engineering manager at Skyscanner.

Another challenge that the Skyscanner team faced was migrating its CDN to AWS without degrading the customer experience. On any given day, Skyscanner can receive up to 1.5 billion API requests, representing about 24 TB of data. With such high demand, it was essential to avoid global incidents and downtime.

Solution | Configuring a Serverless Image Handler and Multiregion Deployment Using AWS CDK
Skyscanner engaged the AWS team to create a proof of concept (POC) for Amazon CloudFront. “The AWS team was amazing,” says Andrew Aylett, senior software engineer at Skyscanner. “We had the opportunity to talk to subject-matter experts to determine which AWS services would be the best fit for our road map.” During the 3-month POC phase, the Skyscanner team built customized configurations, including a serverless image-management handler that automatically compresses static images in the most cost-effective format. “That aspect was previously managed by our CDN provider, and we wanted Amazon CloudFront to have the same capabilities,” says Rory McCann, senior software engineer at Skyscanner.

To set up these configurations, Skyscanner used the AWS Cloud Development Kit (AWS CDK), giving its team the ability to define its cloud application resources using familiar programming languages. “AWS CDK was key to this project,” says Aylett.
“Our teams could write code rather than writing infrastructure.” Skyscanner sourced code for its configurations from the AWS Solutions Library, which provides vetted solutions and guidance for business and technical use cases. By making these resources available to its engineering teams, Skyscanner configured Amazon CloudFront with 1,000 lines of code—a significant reduction from its previous solution, which had over 26,000 lines.
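The case study does not share Skyscanner's CDK code, but defining a CloudFront distribution in AWS CDK v2 with Python takes only a few lines, which helps explain the drop in configuration size. The following is a generic sketch with a placeholder stack name and origin, not Skyscanner's actual configuration:

```python
# Generic AWS CDK (v2, Python) sketch: a CloudFront distribution in front of
# an HTTP origin. Stack name and origin domain are placeholders.
import aws_cdk as cdk
from aws_cdk import aws_cloudfront as cloudfront, aws_cloudfront_origins as origins
from constructs import Construct

class CdnStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # One distribution, one origin; real configurations add behaviors,
        # custom cache policies, TLS certificates, and logging.
        cloudfront.Distribution(
            self,
            "SiteDistribution",
            default_behavior=cloudfront.BehaviorOptions(
                origin=origins.HttpOrigin("origin.example.com"),  # placeholder origin
                viewer_protocol_policy=cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
                cache_policy=cloudfront.CachePolicy.CACHING_OPTIMIZED,
            ),
        )

app = cdk.App()
CdnStack(app, "CdnStack")
app.synth()
```

Because the distribution is expressed as code, changes go through normal code review and can be rolled out incrementally, which matches the migration approach described next.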
After completing the POC, the Skyscanner team migrated its front-facing applications and website to Amazon CloudFront in increments, starting with its less-trafficked market domains. “It built up our confidence to start pushing the rest of our traffic from our consumer-facing sites to Amazon CloudFront,” says Aylett. The migration took a total of 3 months to complete, during which the Skyscanner team experienced zero global downtime. Since then, the Skyscanner team has been able to scale its serverless image handler to three billion monthly API requests while maintaining an average cache-hit rate of 99.99 percent. And by running its image handler on a serverless architecture, the Skyscanner team reduced its CDN costs by 50 percent.

Skyscanner also configured Amazon CloudFront for multiregion deployment, increasing its fault tolerance. “Our team can sleep at night knowing that if something happened, there would be another AWS Region where we could automatically direct our web traffic,” says Ross. Protecting its front-facing applications and website from distributed denial-of-service (DDoS) attacks was a priority, too, so the Skyscanner team implemented AWS Shield, a managed DDoS protection service that safeguards applications running on AWS. The team activated AWS Shield Advanced so that it has near-real-time visibility into DDoS events and 24/7 support from the AWS Shield Response Team.

Outcome | Future-Proofing Its Architecture for Blue-Green Deployments
To continue innovating, the Skyscanner team plans to adopt a blue-green deployment strategy, which will help its team reduce deployment risk and quickly roll back changes by creating two identical, independent environments for routing web traffic. The Skyscanner team can accelerate its efforts toward this goal with a streamlined, standardized stack on AWS. “The migration to Amazon CloudFront has simplified the management of our infrastructure footprint,” says Ross. “There are far fewer moving parts, and it’s largely driven by AWS-managed services, which is great.”

About Skyscanner Ltd.
Skyscanner is a global leader in travel that connects over 100 million travelers each month with more than 1,200 trusted travel partners so that travelers can find the best flight, hotel, or car-hire options. Founded in 2003, Skyscanner has offices worldwide, in Europe, Asia-Pacific, and North America, where traveler-first innovations are developed and powered by data and insights. The company is committed to helping shape a more responsible future for travel in collaboration with its partners and by making use of the latest technology so that every traveler can explore the world effortlessly for generations to come.

Benefits of AWS: 50% cost savings for CDN usage; 99.99% average cache-hit rate for images; 3 billion monthly API requests handled; 26,000 lines of code reduced to 1,000 lines; zero downtime experienced globally.

AWS Services Used
Amazon CloudFront is a content delivery network service built for high performance, security, and developer convenience.
AWS Shield is a managed DDoS protection service that safeguards applications running on AWS.
AWS Cloud Development Kit (AWS CDK) accelerates cloud development using common programming languages to model your applications.
AWS Solutions Library provides vetted solutions and guidance for business and technical use cases."
Democratize Access to HPC for Computer-Aided Materials Design Using Amazon EC2 Spot Instances with Good Chemistry _ Good Chemistry Case Study _ AWS.txt,"Democratize Access to HPC for Computer-Aided Materials Design Using Amazon EC2 Spot Instances with Good Chemistry

Learn how Good Chemistry is helping scientists run HPC workloads at scale with QEMIST Cloud on AWS.

Overview
Per- and polyfluoroalkyl substances (PFAS), often called forever chemicals, pose a significant risk to human health and the environment. The remediation of PFAS pollution is a huge global challenge, estimated to cost billions of dollars and involve years of research. But now, Good Chemistry has developed a powerful solution to accelerate the process and further the development of a circular economy. QEMIST Cloud facilitates high-throughput, high-accuracy computational chemistry simulations for billions of chemical combinations, powered by Amazon Web Services (AWS) infrastructure. Using this solution, Good Chemistry is driving the development of economical ways to remove PFAS from the world’s water supply, helping solve one of the most pressing environmental challenges that humans currently face.
“The accurate understanding of chemical reactions is the key to finding the best solution to break PFAS apart and remove them from the environment,” says Arman Zaribafiyan, founder of Good Chemistry. “We can now interrogate chemical reactions at a tremendous volume because of the unprecedented scale of the cloud and accuracy of our algorithms.”

Opportunity | Using AWS to Achieve Massive Scale for Workloads at Low Cost
Finding affordable, scalable ways to break the chemical bonds in PFAS is a major priority for scientists around the world. These artificial chemicals are found in everything from nonstick cookware to firefighting equipment but are known to cause significant health problems, including harm to the reproductive and immune systems and an increased risk of cancer. “Because PFAS are not biodegradable, they accumulate in the environment and find their way into underground water reservoirs,” says Zaribafiyan. “In the United States alone, more than 200 million people have PFAS in their drinking water. That’s two-thirds of the population.”

Founded in 2021, Good Chemistry has a mission to create a more sustainable, circular economy by solving tough material science problems, like the removal of PFAS from the environment. Its product, QEMIST Cloud, uses high performance computing (HPC) clusters on AWS to push the boundaries of what is possible with quantum chemistry simulations. Using these simulations, scientists can accelerate the discovery and development of new materials. “The number of potential synthesizable molecules dwarfs the number of particles in the observable universe,” says Zaribafiyan. “Our mission is to use modern computing on the cloud to search uncharted chemical space and bring new materials and new drugs to market faster.”

Solution | Scaling Past One Million Cores and Democratizing Access to Powerful Supercomputer Capabilities on AWS
To speed up chemistry simulations in the cloud, Good Chemistry joined forces with the AWS HPC team and Intel as part of the Amazon Global Impact Computing team’s initiative on Digital Technologies for a Circular Economy. Together, the teams developed highly scalable infrastructure for QEMIST Cloud powered by AWS services like Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, which run hyperscale workloads at significant cost savings. Through this engagement, Good Chemistry massively increased the scaling capabilities of QEMIST Cloud to run a chemistry simulation using more than one million CPU cores.

Using its highly scalable AWS infrastructure, Good Chemistry accurately calculated the bond-breaking energy for PFOA, one of the largest and most notorious PFAS molecules, in 37 hours, with only 4 hours at the one-million-core peak. Had the company tried to run these simulations sequentially, the process would have taken several years. “We dynamically scaled QEMIST Cloud to one million cores, and by the next day, we were able to create a new solution that was out of reach before,” says Zaribafiyan. “All it took was the on-demand scalability of the cloud. It’s a game changer for HPC in material science and chemistry.”
On AWS, Good Chemistry can run high-throughput, high-accuracy HPC workloads at scale. QEMIST Cloud’s infrastructure is containerized and uses Amazon Elastic Kubernetes Service (Amazon EKS) to start, run, and scale Kubernetes clusters, each of which runs chemistry algorithms. Using Karpenter, an open-source node provisioning solution, each HPC cluster can scale on multiple instance types across Availability Zones, providing optimal scale and availability. “Using this approach, we can take advantage of all Availability Zones in an AWS Region and circumvent any scaling issues that Kubernetes might encounter,” says Rudi Plesch, head of software development at Good Chemistry. “Periodically, we rebalance the clusters to make sure that none of them run out of work.” The immediate results of the simulation are then stored on Amazon Aurora, a relational database management system built for the cloud with full MySQL and PostgreSQL compatibility.
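The case study does not include QEMIST Cloud's code, but the pattern it describes, splitting a large simulation into many independent subproblems, fanning them out to elastic workers, and aggregating the partial results, can be sketched in a few lines of Python. Here, local processes stand in for Kubernetes pods, and the fragment computation is a dummy placeholder rather than real quantum chemistry:

```python
# Conceptual sketch of the fan-out/aggregate pattern behind an elastic HPC run:
# split a large problem into independent fragments, compute them in parallel,
# and combine the partial results. Local processes stand in for cluster
# workers; compute_fragment_energy is a dummy stand-in for a chemistry kernel.
from concurrent.futures import ProcessPoolExecutor

def compute_fragment_energy(fragment_id: int) -> float:
    # Placeholder for an expensive, independent simulation task.
    return 0.001 * fragment_id

def run_simulation(num_fragments: int) -> float:
    with ProcessPoolExecutor() as pool:
        energies = pool.map(compute_fragment_energy, range(num_fragments))
    return sum(energies)  # aggregate the partial results

if __name__ == "__main__":
    print(f"total energy estimate: {run_simulation(1000):.3f}")
```

The cloud version of this pattern replaces the local process pool with thousands of Spot-backed nodes, which is what lets a run burst to a million cores and then scale back to zero.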
With QEMIST Cloud, Good Chemistry has democratized access to supercomputer capabilities for research organizations, regardless of size or resources. “You don’t have to spend millions of dollars in infrastructure to get computing capability at this scale,” says Philip Ifrah, head of product at Good Chemistry. “Our solution on AWS orchestrates millions of computing resources on demand to perform experiments that push the boundaries of what’s possible.”

Outcome | Applying Cloud-Native HPC Technology to Accelerate New Use Cases
On AWS, Good Chemistry empowers researchers worldwide to simulate chemical combinations and drive sustainable innovations. This project marks an essential step forward for the remediation of PFAS from the environment and will likely play a major role in the discovery of new pathways for PFAS destruction. “Through this PFAS project, we demonstrated that we could run very high-accuracy calculations on AWS,” says Takeshi Yamazaki, director of research and development at Good Chemistry. “We are creating lots of high-quality data that will, in turn, help us offer differentiated machine learning models for material discovery.”

Good Chemistry is already expanding QEMIST Cloud to support more industries, like pharmaceuticals, advanced chemicals, energy, and automotive. Use cases in progress, like crystal structure prediction, virtual screening, and reaction pathway prediction, will significantly reduce the cost, time, and risk associated with new drug development. Other use cases will lead to the development of better batteries, more effective carbon capture, and better solar panels. Good Chemistry is also one of the few AWS Partners selected for the third cohort of the AWS Clean Energy Accelerator (CEA), where it will work with leading energy organizations to solve pressing clean energy and decarbonization challenges. “Right now, we’ve only scratched the surface,” says Ifrah. “We’re excited to extend our capabilities in computational chemistry, machine learning, and quantum computing to bring many new use cases to life.”

About Good Chemistry
Good Chemistry has a mission to make the world healthier, cleaner, and more sustainable using QEMIST Cloud, a cloud-native solution that accelerates materials design by facilitating high-throughput, high-accuracy computational chemistry simulations.

Benefits of AWS: scales to one million virtual CPU cores; democratizes access to supercomputer capabilities; accelerates design and discovery of new materials and drugs; runs high-throughput, high-accuracy workloads at scale; maintains high availability of compute resources.

Architecture diagram: QEMIST Cloud architecture.

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances let you take advantage of unused EC2 capacity in the AWS Cloud.
Amazon Aurora provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services.
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service to run Kubernetes in the AWS Cloud and on-premises data centers."
Democratize computer vision defect detection for manufacturing quality using no-code machine learning with Amazon SageMaker Canvas _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog
Democratize computer vision defect detection for manufacturing quality using no-code machine learning with Amazon SageMaker Canvas
by Brajendra Singh, Davide Gallitelli, and Danny Smith | on 30 JUN 2023 | in Advanced (300), Amazon SageMaker, Amazon SageMaker Canvas, Artificial Intelligence

Cost of poor quality is top of mind for manufacturers. Quality defects increase scrap and rework costs, decrease throughput, and can impact customers and company reputation. Quality inspection on the production line is crucial for maintaining quality standards. In many cases, human visual inspection is used to assess the quality and detect defects, which can limit the throughput of the line due to the limitations of human inspectors.

The advent of machine learning (ML) and artificial intelligence (AI) brings additional visual inspection capabilities using computer vision (CV) ML models. Complementing human inspection with CV-based ML can reduce detection errors, speed up production, reduce the cost of quality, and positively impact customers. Building CV ML models typically requires expertise in data science and coding, which are often rare resources in manufacturing organizations. Now, quality engineers and others on the shop floor can build and evaluate these models using no-code ML services, which can accelerate exploration and adoption of these models more broadly in manufacturing operations.

Amazon SageMaker Canvas is a visual interface that enables quality, process, and production engineers to generate accurate ML predictions on their own—without requiring any ML experience or having to write a single line of code. You can use SageMaker Canvas to create single-label image classification models for identifying common manufacturing defects using your own image datasets. In this post, you will learn how to use SageMaker Canvas to build a single-label image classification model to identify defects in manufactured magnetic tiles based on their image.
Solution overview
This post assumes the viewpoint of a quality engineer exploring CV ML inspection, and you will work with sample data of magnetic tile images to build an image classification ML model to predict defects in the tiles for the quality check. The dataset contains more than 1,200 images of magnetic tiles, which have defects such as blowhole, break, crack, fray, and uneven surface. The following images provide an example of single-label defect classification, with a cracked tile on the left and a tile free of defects on the right. In a real-world example, you can collect such images from the finished products in the production line.

In this post, you use SageMaker Canvas to build a single-label image classification model that will predict and classify defects for a given magnetic tile image. SageMaker Canvas can import image data from a local disk file or Amazon Simple Storage Service (Amazon S3). For this post, multiple folders have been created (one per defect type, such as blowhole, break, or crack) in an S3 bucket, and the magnetic tile images are uploaded to their respective folders. The folder called Free contains defect-free images.

There are four steps involved in building the ML model using SageMaker Canvas:
1. Import the dataset of the images.
2. Build and train the model.
3. Analyze the model insights, such as accuracy.
4. Make predictions.

Prerequisites
Before starting, you need to set up and launch SageMaker Canvas. This setup is performed by an IT administrator and involves three steps:
1. Set up an Amazon SageMaker domain.
2. Set up the users.
3. Set up permissions to use specific features in SageMaker Canvas.

Refer to Getting started with using Amazon SageMaker Canvas and Setting Up and Managing Amazon SageMaker Canvas (for IT Administrators) to configure SageMaker Canvas for your organization. When SageMaker Canvas is set up, the user can navigate to the SageMaker console, choose Canvas in the navigation pane, and choose Open Canvas to launch SageMaker Canvas. The SageMaker Canvas application is launched in a new browser window. After the SageMaker Canvas application is launched, you start the steps of building the ML model.

Import the dataset
Importing the dataset is the first step when building an ML model with SageMaker Canvas:
1. In the SageMaker Canvas application, choose Datasets in the navigation pane.
2. On the Create menu, choose Image.
3. For Dataset name, enter a name, such as Magnetic-Tiles-Dataset.
4. Choose Create to create the dataset.
5. After the dataset is created, you need to import images into the dataset. On the Import page, choose Amazon S3 (the magnetic tile images are in an S3 bucket). You also have the choice to upload the images from your local computer.
6. Select the folder in the S3 bucket where the magnetic tile images are stored and choose Import Data.

SageMaker Canvas starts importing the images into the dataset. When the import is complete, you can see the image dataset created with 1,266 images. You can choose the dataset to check the details, such as a preview of the images and their label for the defect type. Because the images were organized in folders and each folder was named with the defect type, SageMaker Canvas automatically completed the labeling of the images based on the folder names. As an alternative, you can import unlabeled images, add labels, and perform labeling of the individual images at a later point in time. You can also modify the labels of the existing labeled images.
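If you stage the images yourself, organizing the S3 prefixes by label before the import is a one-time scripting task. The following is a hypothetical boto3 sketch; the bucket name and local folder layout are assumptions for illustration, not part of the post's sample data:

```python
# Hypothetical helper for staging the labeled image folders SageMaker Canvas
# imports: one S3 prefix per defect type, so Canvas can derive labels from
# the folder names. Bucket name and local layout are assumptions.
import pathlib

import boto3

s3 = boto3.client("s3")
BUCKET = "my-canvas-datasets"  # hypothetical bucket
LABELS = ["Blowhole", "Break", "Crack", "Fray", "Uneven", "Free"]

for label in LABELS:
    for image in pathlib.Path("magnetic-tiles", label).glob("*.jpg"):
        s3.upload_file(str(image), BUCKET, f"magnetic-tiles/{label}/{image.name}")
```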
The image import is complete and you now have an image dataset created in SageMaker Canvas. You can move to the next step to build an ML model to predict defects in the magnetic tiles.

Build and train the model
You train the model using the imported dataset:
1. Choose the dataset (Magnetic-Tiles-Dataset) and choose Create a model.
2. For Model name, enter a name, such as Magnetic-Tiles-Defect-Model.
3. Select Image analysis for the problem type and choose Create to configure the model build.

On the model's Build tab, you can see various details about the dataset, such as the label distribution, the count of labeled vs. unlabeled images, and the model type, which is single-label image prediction in this case. If you have imported unlabeled images or you want to modify or correct the labels of certain images, you can choose Edit dataset to modify the labels.

You can build the model in two ways: Quick build and Standard build. The Quick build option prioritizes speed over accuracy. It trains the model in 15–30 minutes. The model can be used for prediction but it can't be shared. It's a good option to quickly check the feasibility and accuracy of training a model with a given dataset. The Standard build chooses accuracy over speed, and model training can take between 2–4 hours. For this post, you train the model using the Standard build option. Choose Standard build on the Build tab to start training the model. The model training starts instantly. You can see the expected build time and training progress on the Analyze tab. Wait until the model training is complete, then you can analyze the model's performance for accuracy.

Analyze the model
In this case, it took less than an hour to complete the model training. When the model training is complete, you can check the model's accuracy on the Analyze tab to determine whether the model can accurately predict defects. You see the overall model accuracy is 97.7% in this case. You can also check the model accuracy for each individual label or defect type, for instance 100% for Fray and Uneven but approximately 95% for Blowhole. This level of accuracy is encouraging, so we can continue the evaluation.

To better understand and trust the model, enable Heatmap to see the areas of interest in the image that the model uses to differentiate the labels. It's based on the class activation map (CAM) technique. You can use the heatmap to identify patterns from your incorrectly predicted images, which can help improve the quality of your model.

On the Scoring tab, you can check precision and recall for the model for each of the labels (or class or defect type). Precision and recall are evaluation metrics used to measure the performance of binary and multiclass classification models. Precision tells how good the model is at predicting a specific class (defect type, in this example). Recall tells how many times the model was able to detect a specific class. Model analysis helps you understand the accuracy of the model before you use it for prediction.
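To make the precision and recall definitions concrete, here is a small worked example with made-up counts for a single defect class:

```python
# Worked example of per-class precision and recall, using made-up counts for
# the "Crack" label: of 40 images the model labeled Crack, 38 really were
# cracks (precision), and it found 38 of the 42 actual cracks (recall).
true_positives = 38   # predicted Crack, actually Crack
false_positives = 2   # predicted Crack, actually another class
false_negatives = 4   # actual Crack missed by the model

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
print(f"precision={precision:.2%} recall={recall:.2%}")
# precision=95.00% recall=90.48%
```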
Make predictions
After the model analysis, you can now make predictions using this model to identify defects in the magnetic tiles. On the Predict tab, you can choose Single prediction or Batch prediction. In a single prediction, you import a single image from your local computer or S3 bucket to make a prediction about the defect. In batch prediction, you can make predictions for multiple images that are stored in a SageMaker Canvas dataset. You can create a separate dataset in SageMaker Canvas with the test or inference images for the batch prediction. For this post, we use both single and batch prediction.

For single prediction, on the Predict tab, choose Single prediction, then choose Import image to upload the test or inference image from your local computer. After the image is imported, the model makes a prediction about the defect. For the first inference, it might take a few minutes because the model is loading for the first time, but after the model is loaded, it makes instant predictions about the images. You can see the image and the confidence level of the prediction for each label type. For instance, in this case, the magnetic tile image is predicted to have an uneven surface defect (the Uneven label) and the model is 94% confident about it. Similarly, you can use other images or a dataset of images to make predictions about the defect.

For the batch prediction, we create a dataset of unlabeled images called Magnetic-Tiles-Test-Dataset by uploading 12 test images from your local computer. On the Predict tab, choose Batch prediction and choose Select dataset. Select the Magnetic-Tiles-Test-Dataset dataset and choose Generate predictions. It will take some time to generate the predictions for all the images. When the status is Ready, choose the dataset link to see the predictions. You can see predictions for all the images with confidence levels, and you can choose any of the individual images to see image-level prediction details. You can download the predictions in CSV or .zip file format to work offline. You can also verify the predicted labels and add them to your training dataset. To verify the predicted labels, choose Verify prediction. In the prediction dataset, you can update the labels of individual images if you don't find a predicted label correct. When you have updated the labels as required, choose Add to trained dataset to merge the images into your training dataset (in this example, Magnetic-Tiles-Dataset). This updates the training dataset, which includes both your existing training images and the new images with predicted labels. You can train a new model version with the updated dataset and potentially improve the model's performance. The new model version won't be an incremental training, but a new training from scratch with the updated dataset. This helps keep the model refreshed with new sources of data.

Clean up
After you have completed your work with SageMaker Canvas, choose Log out to close the session and avoid any further cost. When you log out, your work such as datasets and models remains saved, and you can launch a SageMaker Canvas session again to continue the work later. SageMaker Canvas creates an asynchronous SageMaker endpoint for generating the predictions. To delete the endpoint, endpoint configuration, and model created by SageMaker Canvas, refer to Delete Endpoints and Resources.

Conclusion
In this post, you learned how to use SageMaker Canvas to build an image classification model to predict defects in manufactured products, to complement and improve the visual inspection quality process. You can use SageMaker Canvas with different image datasets from your manufacturing environment to build models for use cases like predictive maintenance, package inspection, worker safety, goods tracking, and more.
SageMaker Canvas gives you the ability to use ML to generate predictions without needing to write any code, accelerating the evaluation and adoption of CV ML capabilities. To get started and learn more about SageMaker Canvas, refer to the following resources:
- Amazon SageMaker Canvas Developer Guide
- Announcing Amazon SageMaker Canvas – a Visual, No Code Machine Learning Capability for Business Analysts

About the authors
Brajendra Singh is a solution architect at Amazon Web Services working with enterprise customers. He has a strong developer background and is a keen enthusiast for data and machine learning solutions.
Danny Smith is Principal, ML Strategist for Automotive and Manufacturing Industries, serving as a strategic advisor for customers. His career focus has been on helping key decision-makers leverage data, technology, and mathematics to make better decisions, from the board room to the shop floor. Lately most of his conversations are on democratizing machine learning and generative AI.
Davide Gallitelli is a Specialist Solutions Architect for AI/ML in the EMEA region. He is based in Brussels and works closely with customers throughout Benelux. He has been a developer since he was very young, starting to code at the age of 7. He started learning AI/ML at university, and has fallen in love with it since then."
Deploy a serverless ML inference endpoint of large language models using FastAPI AWS Lambda and AWS CDK _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog
Deploy a serverless ML inference endpoint of large language models using FastAPI, AWS Lambda, and AWS CDK
by Tingyi Li and Demir Catovic | on 23 JUN 2023 | in Advanced (300), Amazon SageMaker, Artificial Intelligence, AWS Lambda, Generative AI, Technical How-to

For data scientists, moving machine learning (ML) models from proof of concept to production often presents a significant challenge. One of the main challenges can be deploying a well-performing, locally trained model to the cloud for inference and use in other applications. It can be cumbersome to manage the process, but with the right tool, you can significantly reduce the required effort.

Amazon SageMaker inference, which was made generally available in April 2022, makes it easy for you to deploy ML models into production to make predictions at scale, providing a broad selection of ML infrastructure and model deployment options to help meet all kinds of ML inference needs. You can use SageMaker Serverless Inference endpoints for workloads that have idle periods between traffic spurts and can tolerate cold starts. The endpoints scale out automatically based on traffic and take away the undifferentiated heavy lifting of selecting and managing servers. Additionally, you can use AWS Lambda directly to expose your models and deploy your ML applications using your preferred open-source framework, which can prove to be more flexible and cost-effective.

FastAPI is a modern, high-performance web framework for building APIs with Python. It stands out when it comes to developing serverless applications with RESTful microservices and use cases requiring ML inference at scale across multiple industries.
Its ease of use and built-in functionalities like automatic API documentation make it a popular choice among ML engineers for deploying high-performance inference APIs. You can define and organize your routes using out-of-the-box functionalities from FastAPI to scale out and handle growing business logic as needed, test locally and host it on Lambda, then expose it through a single API gateway, which allows you to bring an open-source web framework to Lambda without any heavy lifting or refactoring your code.

This post shows you how to easily deploy and run serverless ML inference by exposing your ML model as an endpoint using FastAPI, Docker, Lambda, and Amazon API Gateway. We also show you how to automate the deployment using the AWS Cloud Development Kit (AWS CDK).

Solution overview
The following diagram shows the architecture of the solution we deploy in this post.

Prerequisites
You must have the following prerequisites:
- Python3 installed, along with virtualenv for creating and managing virtual environments in Python
- aws-cdk v2 installed on your system in order to be able to use the AWS CDK CLI
- Docker installed and running on your local machine

Test if all the necessary software is installed. The AWS Command Line Interface (AWS CLI) is needed. Log in to your account and choose the Region where you want to deploy the solution.

Use the following code to check your Python version:

python3 --version

Check if virtualenv is installed for creating and managing virtual environments in Python. Strictly speaking, this is not a hard requirement, but it will make your life easier and helps follow along with this post more easily:

python3 -m virtualenv --version

Check if cdk is installed. This will be used to deploy our solution:

cdk --version

Check if Docker is installed. Our solution will make your model accessible through a Docker image to Lambda. To build this image locally, we need Docker:

docker --version

Make sure Docker is up and running with the following code:

docker ps

How to structure your FastAPI project using AWS CDK
We use the following directory structure for our project (ignoring some boilerplate AWS CDK code that is immaterial in the context of this post):

```
fastapi_model_serving
│
└───.venv
│
└───fastapi_model_serving
│   │   __init__.py
│   │   fastapi_model_serving_stack.py
│   │
│   └───model_endpoint
│       └───docker
│       │      Dockerfile
│       │      serving_api.tar.gz
│       │
│       └───runtime
│            └───serving_api
│                    requirements.txt
│                    serving_api.py
│                └───custom_lambda_utils
│                     └───model_artifacts
│                            ...
│                     └───scripts
│                            inference.py
│
└───templates
│   └───api
│   │     api.py
│   └───dummy
│         dummy.py
│
│   app.py
│   cdk.json
│   README.md
│   requirements.txt
│   init-lambda-code.sh
```

The directory follows the recommended structure of AWS CDK projects for Python. The most important part of this repository is the fastapi_model_serving directory. It contains the code that will define the AWS CDK stack and the resources that are going to be used for model serving.
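Before we look at the subdirectories, it may help to see the general shape of a FastAPI app exposed as a Lambda handler. The following is a generic sketch of that pattern using the Mangum ASGI adapter; it illustrates the approach only and is not the repository's actual serving_api.py:

```python
# Generic sketch of a FastAPI app exposed as a Lambda handler via the Mangum
# adapter. Illustrative only; the repository's serving_api.py defines its own
# routes and loads real model artifacts.
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()

@app.get("/")
def root() -> dict:
    return {"message": "hello world"}

@app.get("/question")
def question(question: str, context: str) -> dict:
    # Placeholder: a real implementation would run model inference here.
    return {"answer": f"(model output for {question!r} given {context!r})"}

# Lambda entry point: Mangum translates API Gateway events into ASGI requests.
handler = Mangum(app)
```

With this shape, the same app can be run locally (for example, with uvicorn) for testing and then packaged into the Docker image for Lambda unchanged.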
The fastapi_model_serving directory contains the model_endpoint subdirectory, which contains all the assets necessary to make up our serverless endpoint, namely the Dockerfile to build the Docker image that Lambda will use, the Lambda function code that uses FastAPI to handle inference requests and route them to the correct endpoint, and the model artifacts of the model that we want to deploy.

model_endpoint contains the following:
- docker – This subdirectory contains the following:
  - Dockerfile – This is used to build the image for the Lambda function with all the artifacts (Lambda function code, model artifacts, and so on) in the right place so that they can be used without issues.
  - serving_api.tar.gz – This is a tarball that contains all the assets from the runtime folder that are necessary for building the Docker image. We discuss how to create the .tar.gz file later in this post.
- runtime – This subdirectory contains the following:
  - serving_api – The code for the Lambda function and its dependencies specified in the requirements.txt file.
  - custom_lambda_utils – This includes an inference script that loads the necessary model artifacts so that the model can be passed to the serving_api that will then expose it as an endpoint.

Additionally, we have the templates directory, which provides a template of folder structures and files where you can define your customized code and APIs following the sample we went through earlier. The templates directory contains dummy code that you can use to create new Lambda functions:
- dummy – Contains the code that implements the structure of an ordinary Lambda function using the Python runtime
- api – Contains the code that implements a Lambda function that wraps a FastAPI endpoint around an existing API gateway

Deploy the solution
By default, the code is deployed inside the eu-west-1 Region. If you want to change the Region, you can change the DEPLOYMENT_REGION context variable in the cdk.json file. Keep in mind, however, that the solution tries to deploy a Lambda function on top of the arm64 architecture, and that this feature might not be available in all Regions. In this case, you need to change the architecture parameter in the fastapi_model_serving_stack.py file, as well as the first line of the Dockerfile inside the docker directory, to host this solution on the x86 architecture.

To deploy the solution, complete the following steps:

1. Run the following command to clone the GitHub repository:

git clone https://github.com/aws-samples/lambda-serverless-inference-fastapi

Because we want to showcase that the solution can work with model artifacts that you train locally, we include a sample model artifact of a pretrained DistilBERT model from the Hugging Face model hub for a question answering task in the serving_api.tar.gz file. The download time can take around 3–5 minutes.

2. Now, let's set up the environment. Download the pretrained model that will be deployed from the Hugging Face model hub into the ./model_endpoint/runtime/serving_api/custom_lambda_utils/model_artifacts directory. This step also creates a virtual environment and installs all dependencies that are needed. You only need to run this command once:

make prep

This command can take around 5 minutes (depending on your internet bandwidth) because it needs to download the model artifacts.

3. Package the model artifacts inside a .tar.gz archive that will be used inside the Docker image that is built in the AWS CDK stack.
You need to run this code whenever you make changes to the model artifacts or the API itself to always have the most up-to-date version of your serving endpoint packaged:

make package_model

The artifacts are all in place. Now we can deploy the AWS CDK stack to your AWS account.

4. Run cdk bootstrap if it's your first time deploying an AWS CDK app into an environment (account + Region combination):

make cdk_bootstrap

This stack includes resources that are needed for the toolkit's operation. For example, the stack includes an Amazon Simple Storage Service (Amazon S3) bucket that is used to store templates and assets during the deployment process. Because we're building Docker images locally in this AWS CDK deployment, we need to ensure that the Docker daemon is running before we can deploy this stack via the AWS CDK CLI.

5. To check whether or not the Docker daemon is running on your system, use the following command:

docker ps

If you don't get an error message, you should be ready to deploy the solution.

6. Deploy the solution with the following command:

make deploy

This step can take around 5–10 minutes due to building and pushing the Docker image.

Troubleshooting
If you're a Mac user, you may encounter an error when logging in to Amazon Elastic Container Registry (Amazon ECR) with the Docker login, such as Error saving credentials ... not implemented. For example:

exited with error code 1: Error saving credentials: error storing credentials - err: exit status 1,...dial unix backend.sock: connect: connection refused

Before you can use Lambda on top of Docker containers inside the AWS CDK, you may need to change the ~/.docker/config.json file. More specifically, you might have to change the credsStore parameter in ~/.docker/config.json to osxkeychain. That solves Amazon ECR login issues on a Mac.

Run real-time inference
After your AWS CloudFormation stack is deployed successfully, go to the Outputs tab for your stack on the AWS CloudFormation console and open the endpoint URL. Now our model is accessible via the endpoint URL and we're ready to run real-time inference.

Navigate to the URL to see if you can see the "hello world" message, and add /docs to the address to see if you can see the interactive Swagger UI page successfully. There might be some cold start time, so you may need to wait or refresh a few times.

After you log in to the landing page of the FastAPI Swagger UI, you can run inference via the root / or via /question. From /, you can run the API and get the "hello world" message. From /question, you can run the API and run ML inference on the model we deployed for a question answering case. For example, we use the question "What is the color of my car now?" and the context "My car used to be blue but I painted red." When you choose Execute, based on the given context, the model will answer the question with a response, as shown in the following screenshot. In the response body, you can see the answer with the confidence score from the model. You can also experiment with other examples or embed the API in your existing application.

Alternatively, you can run the inference via code.
Here is one example written in Python, using the requests library (replace the placeholders with your own API Gateway ID and Region):

import requests

url = "https://<api-id>.execute-api.<region>.amazonaws.com/prod/question?question=\"What is the color of my car now?\"&context=\"My car used to be blue but I painted red\""

response = requests.request("GET", url)
print(response.text)

The code outputs a string similar to the following:

'{"score":0.6947233080863953,"start":38,"end":41,"answer":"red"}'

If you are interested in knowing more about deploying generative AI and large language models on AWS, check out the following:
- Deploy Serverless Generative AI on AWS Lambda with OpenLLaMa
- Deploy large language models on AWS Inferentia2 using large model inference containers

Clean up
Inside the root directory of your repository, run the following code to clean up your resources:

make destroy

Conclusion
In this post, we introduced how you can use Lambda to deploy your trained ML model using your preferred web application framework, such as FastAPI. We provided a detailed code repository that you can deploy, and you retain the flexibility of switching to whichever trained model artifacts you process. The performance can depend on how you implement and deploy the model. You are welcome to try it out yourself, and we're excited to hear your feedback!

About the Authors
Tingyi Li is an Enterprise Solutions Architect from AWS based out of Stockholm, Sweden, supporting the Nordics customers. She enjoys helping customers with the architecture, design, and development of cloud-optimized infrastructure solutions. She is specialized in AI and Machine Learning and is interested in empowering customers with intelligence in their AI/ML applications. In her spare time, she is also a part-time illustrator who writes novels and plays the piano.
Demir Catovic is a Machine Learning Engineer from AWS based in Zurich, Switzerland. He engages with customers and helps them implement scalable and fully functional ML applications. He is passionate about building and productionizing machine learning applications for customers and is always keen to explore new trends and cutting-edge technologies in the AI/ML world."
Deploy Falcon-40B with large model inference DLCs on Amazon SageMaker _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog
Deploy Falcon-40B with large model inference DLCs on Amazon SageMaker
by James Park, Abhi Shivaditya, Evandro Franco, Frank Liu, Qing Lan, and Robert Van Dusen | on 13 JUN 2023 | in Advanced (300), Amazon SageMaker, Artificial Intelligence

Last week, Technology Innovation Institute (TII) launched TII Falcon LLM, an open-source foundational large language model (LLM). Trained on 1 trillion tokens with Amazon SageMaker, Falcon boasts top-notch performance (#1 on the Hugging Face leaderboard at the time of writing) while being comparatively lightweight and less expensive to host than other LLMs such as llama-65B. In this post, we demonstrate how to deploy Falcon for applications like language understanding and automated writing assistance using large model inference deep learning containers on SageMaker.
The Falcon has landed on SageMaker
TII is the applied research organization within Abu Dhabi's Advanced Technology Research Council; its team of scientists, researchers, and engineers is dedicated to the discovery of transformative technologies and development of scientific breakthroughs that will future-proof our society. Earlier this year, TII set out to train a state-of-the-art, open-source LLM and used the infrastructure, tooling, and expertise of SageMaker to get the job done (to learn more about how this model was trained on SageMaker, refer to Technology Innovation Institute trains the state-of-the-art Falcon LLM 40B foundation model on Amazon SageMaker). The result of this effort is TII Falcon LLM. Trained on 1 trillion tokens, Falcon boasts top-notch performance against the Eleuther AI Language Model Evaluation Harness and is currently #1 on the Hugging Face leaderboard for accuracy. The model is available in two different sizes—Falcon-40B and Falcon-7B—and can be used for state-of-the-art performance in applications such as language understanding, conversational experiences, and automated writing assistance. This post will help you get started with deploying Falcon on SageMaker for high-accuracy inference in these types of domains.

SageMaker large model inference DLCs simplify LLM hosting
Hosting LLMs such as Falcon-40B and Falcon-7B can be challenging. Larger models are often more accurate because they include billions of parameters, but their size can also result in slower inference latency or worse throughput. Hosting an LLM can require more GPU memory and optimized kernels to achieve acceptable performance. To further complicate things, although smaller models such as Falcon-7B can generally fit on a single GPU, such as the NVIDIA A10G GPU that powers AWS G5 instance types, larger models like Falcon-40B cannot. When this happens, strategies such as tensor parallelism must be used to shard the larger model into multiple pieces and take advantage of the memory of multiple GPUs. Legacy hosting solutions used for smaller models typically don't offer this type of functionality, adding to the difficulty.

SageMaker large model inference (LMI) deep learning containers (DLCs) can help. LMI DLCs are a complete end-to-end solution for hosting LLMs like Falcon-40B. At the front end, they include a high-performance model server (DJL Serving) designed for large model inference with features such as token streaming and automatic model replication within an instance to increase throughput. On the backend, LMI DLCs also include several high-performance model parallel engines, such as DeepSpeed and FasterTransformer, that can shard and manage model parameters across multiple GPUs. These engines also include optimized kernels for popular transformer models, which can accelerate inference by up to three times. With LMI DLCs, you simply need to create a configuration file to get started with LLM hosting on SageMaker. To learn more about SageMaker LMI DLCs, refer to Model parallelism and large model inference and our list of available images. You can also check out our previous post about hosting Bloom-175B on SageMaker using LMI DLCs.

Solution overview
This post walks you through how to host Falcon-40B using DeepSpeed on SageMaker using LMI DLCs. Falcon-40B requires that we use multiple A10 GPUs, whereas Falcon-7B only requires a single GPU. We have also prepared examples you can reference to host Falcon-40B and Falcon-7B using both DeepSpeed and Accelerate.
You can find our code examples on GitHub. This example can be run in SageMaker notebook instances or Amazon SageMaker Studio notebooks.

For hosting Falcon-40B using LMI and DeepSpeed, we need to use an ml.g5.24xlarge instance. These instances provide 4x NVIDIA A10G GPUs with a combined 96 GiB of GPU memory. In addition, the host provides 96 vCPUs and 384 GiB of host memory. The LMI container will help address much of the undifferentiated heavy lifting associated with hosting LLMs, including downloading the model and partitioning the model artifact so that its comprising parameters can be spread across multiple GPUs.

Quotas for SageMaker machine learning (ML) instances can vary between accounts. If you receive an error indicating you've exceeded your quota for g5.24xlarge instances while following this post, you can increase the limit through the Service Quotas console.

Notebook walkthrough
To begin, we start by installing and importing the necessary dependencies for our example. We use the Boto3 SDK as well as the SageMaker SDK. Note that we use Amazon Simple Storage Service (Amazon S3) to store the model artifacts that we need for SageMaker and LMI to use, so we set up an S3 prefix variable accordingly. See the following code:

import sagemaker
import jinja2
from sagemaker import image_uris
import boto3
import os
import time
import json
from pathlib import Path
from sagemaker.utils import name_from_base

role = sagemaker.get_execution_role()  # execution role for the endpoint
sess = sagemaker.session.Session()  # SageMaker session for interacting with different AWS APIs
bucket = sess.default_bucket()  # bucket to house artifacts
model_bucket = sess.default_bucket()  # bucket to house artifacts
s3_code_prefix_deepspeed = "hf-large-model-djl-/code_falcon40b/deepspeed"  # folder within bucket where code artifact will go

region = sess._region_name
account_id = sess.account_id()

s3_client = boto3.client("s3")
sm_client = boto3.client("sagemaker")
smr_client = boto3.client("sagemaker-runtime")

jinja_env = jinja2.Environment()

We then create a local folder for our workspace to store our model artifacts:

!mkdir -p code_falcon40b_deepspeed

We first create a serving.properties configuration file in the local directory we created. This serving.properties file indicates to the LMI container and the front-end DJL Serving library which model parallelization and inference optimization engine we want to use. You can find the configuration options for both DeepSpeed and Hugging Face Accelerate in Configurations and settings. Here, note that we set the option.model_id parameter to define which Hugging Face model to pull from. SageMaker makes working with Hugging Face models simple, and this one line is all you need. In addition, we set option.tensor_parallel_degree to a value of 4 because we have four GPUs on our ml.g5.24xlarge instance. This parameter defines how many partitions of the model to create and distribute. Note that if we had used a larger instance with eight GPUs, such as ml.g5.48xlarge, and still set a value of 4, then LMI would automatically create two replicas of the model (two replicas spread across four GPUs each).
See the following code:

%%writefile ./code_falcon40b_deepspeed/serving.properties
engine=Python
# to deploy falcon-40b-instruct, set the model_id value to 'tiiuae/falcon-40b-instruct'
option.model_id=tiiuae/falcon-40b
option.tensor_parallel_degree=4
# option.s3url = {{s3url}}

You can also swap out tiiuae/falcon-40b with tiiuae/falcon-40b-instruct if it suits your needs better. We also include a requirements.txt file that you can specify to install packages that you require:

%%writefile ./code_falcon40b_deepspeed/requirements.txt
einops
torch==2.0.1

The last thing we need is the model.py file that will be used with your model:

%%writefile ./code_falcon40b_deepspeed/model.py
from djl_python import Input, Output
import os
import torch
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
from typing import Any, Dict, Tuple
import warnings

predictor = None


def get_model(properties):
    model_name = properties["model_id"]
    local_rank = int(os.getenv("LOCAL_RANK", "0"))
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        low_cpu_mem_usage=True,
        trust_remote_code=True,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    generator = pipeline(
        task="text-generation", model=model, tokenizer=tokenizer, device_map="auto"
    )
    return generator


def handle(inputs: Input) -> None:
    global predictor
    if not predictor:
        predictor = get_model(inputs.get_properties())
    if inputs.is_empty():
        # Model server makes an empty call to warm up the model on startup
        return None
    data = inputs.get_as_json()
    text = data["text"]
    text_length = data["text_length"]
    outputs = predictor(text, do_sample=True, min_length=text_length, max_length=text_length)
    result = {"outputs": outputs}
    return Output().add_as_json(result)

That's it! At this point, we have created all the artifacts you will need to deploy Falcon-40B with DeepSpeed. We package the directory into a *.tar.gz file and upload it to Amazon S3. Note that the actual model has not been downloaded or packaged into this file. The LMI container will download the model for you from Hugging Face directly. You also have the option to target an S3 bucket if you would like your own copy of the model in a location that will be more performant to download. LMI also includes optimization for downloading from Amazon S3 with high performance.
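The packaging step itself is a one-liner; here is a minimal sketch of it (the exact cell in the sample notebook may differ slightly):

# Bundle serving.properties, requirements.txt, and model.py into a tarball for upload.
!tar czvf model.tar.gz code_falcon40b_deepspeed/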
See the following code:

s3_code_artifact_deepspeed = sess.upload_data("model.tar.gz", bucket, s3_code_prefix_deepspeed)
print(f"S3 code or model tar for DeepSpeed uploaded to ---> {s3_code_artifact_deepspeed}")

All that is left to do at this point is to define the container we want to use and create a model object:

inference_image_uri = (
    f"763104351884.dkr.ecr.{region}.amazonaws.com/djl-inference:0.22.1-deepspeed0.8.3-cu118"
)
model_name = name_from_base(f"falcon40b-model-ds")
create_model_response = sm_client.create_model(
    ModelName=model_name,
    ExecutionRoleArn=role,
    PrimaryContainer={"Image": inference_image_uri, "ModelDataUrl": s3_code_artifact_deepspeed},
)
model_arn = create_model_response["ModelArn"]

Then we create an endpoint configuration and create the endpoint:

endpoint_config_name = f"{model_name}-config"
endpoint_name = f"{model_name}-endpoint"
endpoint_config_response = sm_client.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "variant1",
            "ModelName": model_name,
            "InstanceType": "ml.g5.24xlarge",
            "InitialInstanceCount": 1,
            "ModelDataDownloadTimeoutInSeconds": 3600,
            "ContainerStartupHealthCheckTimeoutInSeconds": 3600,
            # "VolumeSizeInGB": 512
        },
    ],
)
endpoint_config_response
create_endpoint_response = sm_client.create_endpoint(
    EndpointName=f"{endpoint_name}", EndpointConfigName=endpoint_config_name
)
print(f"Created Endpoint: {create_endpoint_response['EndpointArn']}")

Configuration items to keep in mind for successful hosting
An important consideration for large model hosting is ensuring there is adequate time for the model download from Hugging Face. In our tests, Falcon-40B took about 90 minutes to download onto the instance. The key configurations that allow for this are ContainerStartupHealthCheckTimeoutInSeconds and ModelDataDownloadTimeoutInSeconds. Make sure the SageMaker endpoint configuration has a value of 3600 for each of these. Alternatively, it is much faster to download from Amazon S3 instead of the original model hub: the LMI containers, which are specially designed for LLMs, use the s5cmd utility for S3 downloads, which cuts the model download time to around 10 minutes. You can monitor the status of the endpoint by calling DescribeEndpoint, which will tell you when everything is complete.
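For example, a minimal polling sketch using the boto3 waiter, reusing the sm_client and endpoint_name variables defined above (the sample notebook may poll differently):

# Block until the endpoint reaches the InService state, then print its status.
waiter = sm_client.get_waiter("endpoint_in_service")
waiter.wait(EndpointName=endpoint_name)
print(sm_client.describe_endpoint(EndpointName=endpoint_name)["EndpointStatus"])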
Your endpoint is now ready to respond to inference requests! Because LMI handles the model partitioning and orchestration for you, each request will be processed using all four GPUs available on our ml.g5.24xlarge instance. This allows us to host LLMs and increase performance by scaling GPU accelerators horizontally. See the following code:

response_model = smr_client.invoke_endpoint(
    EndpointName=endpoint_name,
    Body=json.dumps({"text": "What is the purpose of life?", "text_length": 150}),
    ContentType="application/json",
)
response_model["Body"].read().decode("utf8")

If you are done and would like to delete the endpoint configuration, endpoint, and model object, you can run the following commands:

sm_client.delete_endpoint(EndpointName=endpoint_name)
sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
sm_client.delete_model(ModelName=model_name)

The code referenced in this post can be found in the complete notebook on GitHub.

Conclusion
SageMaker Hosting and the LMI DLCs make it easy for you to host LLMs like Falcon-40B. They take on the undifferentiated heavy lifting of orchestrating what is required to host models across multiple GPUs and provide configurable options to suit your needs. In addition, using Hugging Face models becomes very straightforward, with built-in support for these models. In this post, we showed how you can use SageMaker to host the Falcon-40B model using DeepSpeed. In addition, we provided examples on GitHub for hosting Falcon-40B using Accelerate, as well as the smaller Falcon-7B models. We encourage you to give this a try on SageMaker with LMI and get hands-on with the best-performing publicly available LLM at the time of writing!

About the authors
James Park is a Solutions Architect at Amazon Web Services. He works with Amazon.com to design, build, and deploy technology solutions on AWS, and has a particular interest in AI and machine learning. In his spare time he enjoys seeking out new cultures, new experiences, and staying up to date with the latest technology trends. You can find him on LinkedIn.
Abhi Shivaditya is a Senior Solutions Architect at AWS, working with strategic global enterprise organizations to facilitate the adoption of AWS services in areas such as artificial intelligence, distributed computing, networking, and storage. His expertise lies in deep learning in the domains of natural language processing (NLP) and computer vision. Abhi assists customers in deploying high-performance machine learning models efficiently within the AWS ecosystem.
Robert Van Dusen is a Senior Product Manager with Amazon SageMaker. He leads deep learning model optimization for applications such as large model inference.
Evandro Franco is an AI/ML Specialist Solutions Architect working on Amazon Web Services. He helps AWS customers overcome business challenges related to AI/ML on top of AWS. He has more than 15 years of experience working with technology, from software development and infrastructure to serverless and machine learning.
Qing Lan is a Software Development Engineer in AWS. He has been working on several challenging products in Amazon, including high-performance ML inference solutions and a high-performance logging system. Qing's team successfully launched the first billion-parameter model in Amazon Advertising with very low latency requirements. Qing has in-depth knowledge of infrastructure optimization and deep learning acceleration.
Frank Liu is a Software Engineer for AWS Deep Learning. He focuses on building innovative deep learning tools for software engineers and scientists. In his spare time, he enjoys hiking with friends and family." Deploying and benchmarking YOLOv8 on GPU-based edge devices using AWS IoT Greengrass _ The Internet of Things on AWS Official Blog.txt,"The Internet of Things on AWS – Official Blog
Deploying and benchmarking YOLOv8 on GPU-based edge devices using AWS IoT Greengrass
by Romil Shah and Kevin Song | on 29 JUN 2023 | in Amazon Machine Learning, Artificial Intelligence, AWS IoT Greengrass, Technical How-to

Introduction
Customers in the manufacturing, logistics, and energy sectors often have stringent requirements for running machine learning (ML) models at the edge. Some of these requirements include low-latency processing, poor or no connectivity to the internet, and data security.
For these customers, running ML processes at the edge offers many advantages over running them in the cloud, as the data can be processed quickly, locally, and privately. For deep-learning-based ML models, GPU-based edge devices can substantially accelerate inference at the edge, and AWS IoT Greengrass can help with managing edge devices and deploying ML models to them. In this post, we demonstrate how to deploy and run YOLOv8 models, distributed under the GPLv3 license, from Ultralytics on NVIDIA-based edge devices. In particular, we are using Seeed Studio's reComputer J4012, based on the NVIDIA Jetson Orin™ NX 16GB module, for testing and running benchmarks with YOLOv8 models compiled with various ML libraries such as PyTorch and TensorRT. We will showcase the performance of these different YOLOv8 model formats on the reComputer J4012. AWS IoT Greengrass components provide an efficient way to deploy models and inference code to edge devices. The inference is invoked using MQTT messages, and the inference output is obtained by subscribing to MQTT topics. For customers interested in hosting YOLOv8 in the cloud, we have a blog demonstrating how to host YOLOv8 on Amazon SageMaker endpoints.

Solution overview
The following diagram shows the overall AWS architecture of the solution. Seeed Studio's reComputer J4012 is provisioned as an AWS IoT Thing using AWS IoT Core and connected to a camera. A developer can build and publish the com.aws.yolov8.inference Greengrass component from their environment to AWS IoT Core. Once the component is published, it can be deployed to the identified edge device, and the messaging for the component will be managed through MQTT, using the AWS IoT console. Once published, the edge device will run inference and publish the outputs back to AWS IoT Core using MQTT.

Prerequisites
An AWS account with permissions for AWS IoT Core, AWS IoT Greengrass, and Amazon Simple Storage Service (S3)
A Seeed Studio reComputer J4012 edge device
(optional) Edge device connected to a camera or RTSP stream

Walkthrough
Step 1: Set up the edge device
Here, we describe the steps to correctly configure the reComputer J4012 edge device: installing the necessary library dependencies, setting the device to maximum power mode, and configuring the device with AWS IoT Greengrass. Currently, the reComputer J4012 comes pre-installed with JetPack 5.1 and CUDA 11.4, and by default, the JetPack 5.1 system on the reComputer J4012 is not configured to run in maximum power mode. In Steps 1.1 and 1.2, we will install the necessary dependencies and switch the device into maximum power mode. Finally, in Step 1.3, we will provision the device in AWS IoT Greengrass, so the edge device can securely connect to AWS IoT Core and communicate with other AWS services.

Step 1.1: Install dependencies
From the terminal on the edge device, clone the GitHub repo using the following command:
$ git clone https://github.com/aws-samples/deploy-yolov8-on-edge-using-aws-iot-greengrass
Move to the utils directory and run the install_dependencies.sh script as shown below:
$ cd deploy-yolov8-on-edge-using-aws-iot-greengrass/utils/
$ chmod u+x install_dependencies.sh
$ ./install_dependencies.sh

Step 1.2: Set up the edge device in max power mode
From the terminal of the edge device, run the following commands to switch to max power mode:
$ sudo nvpmodel -m 0
$ sudo jetson_clocks
To apply the above changes, restart the device by typing 'yes' when prompted after executing the above commands.
Step 1.3: Set up the edge device with AWS IoT Greengrass
For automatic provisioning of the device, run the following commands from the reComputer J4012 terminal:
$ cd deploy-yolov8-on-edge-using-aws-iot-greengrass/utils/
$ chmod u+x provisioning.sh
$ ./provisioning.sh
(optional) For manual provisioning of the device, follow the procedures described in the AWS public documentation. This documentation walks through processes such as device registration, authentication and security setup, secure communication configuration, IoT Thing creation, and policy and permission setup.
When prompted for an IoT Thing and IoT Thing Group, enter unique names for your devices. Otherwise, they will be named with default values (GreengrassThing and GreengrassThingGroup). Once configured, these items will be visible in the AWS IoT Core console as shown in the figures below.

Step 2: Download/Convert models on the edge device
Here, we will focus on 3 major categories of YOLOv8 PyTorch models: Detection, Segmentation, and Classification. Each model task further subdivides into 5 types based on performance and complexity, ranging from 'Nano' (low latency, low accuracy) to 'Extra Large' (high latency, high accuracy) based on the sizes of the models, as summarized in the table below.

Model Types     Detection    Segmentation    Classification
Nano            yolov8n      yolov8n-seg     yolov8n-cls
Small           yolov8s      yolov8s-seg     yolov8s-cls
Medium          yolov8m      yolov8m-seg     yolov8m-cls
Large           yolov8l      yolov8l-seg     yolov8l-cls
Extra Large     yolov8x      yolov8x-seg     yolov8x-cls

We will demonstrate how to download the default PyTorch models on the edge device and convert them to the ONNX and TensorRT frameworks.

Step 2.1: Download PyTorch base models
From the reComputer J4012 terminal, replace edge/device/path/to/models with the path where you would like to download the models, and run the following commands to configure the environment:
$ echo 'export PATH="/home/$USER/.local/bin:$PATH"' >> ~/.bashrc
$ source ~/.bashrc
$ cd {edge/device/path/to/models}
$ MODEL_HEIGHT=480
$ MODEL_WIDTH=640
Run the following commands on the reComputer J4012 terminal to download the PyTorch base models:
$ yolo export model=[yolov8n.pt OR yolov8n-seg.pt OR yolov8n-cls.pt] imgsz=$MODEL_HEIGHT,$MODEL_WIDTH

Step 2.2: Convert models to ONNX and TensorRT
Convert PyTorch models to ONNX models using the following commands:
$ yolo export model=[yolov8n.pt OR yolov8n-seg.pt OR yolov8n-cls.pt] format=onnx imgsz=$MODEL_HEIGHT,$MODEL_WIDTH
Convert ONNX models to TensorRT models using the following commands:
[Convert YOLOv8 ONNX Models to TensorRT Models]
$ echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/targets/aarch64-linux/lib' >> ~/.bashrc
$ echo 'alias trtexec="/usr/src/tensorrt/bin/trtexec"' >> ~/.bashrc
$ source ~/.bashrc
$ trtexec --onnx={absolute/path/edge/device/path/to/models}/yolov8n.onnx --saveEngine={absolute/path/edge/device/path/to/models}/yolov8n.trt

Step 3: Set up a local machine or EC2 instance and run inference on the edge device
Here, we will demonstrate how to use the Greengrass Development Kit (GDK) to build the component on a local machine, publish it to AWS IoT Core, deploy it to the edge device, and run inference using the AWS IoT console. The component is responsible for loading the ML model, running inference, and publishing the output to AWS IoT Core using MQTT. For the inference component to be deployed on the edge device, the inference code needs to be converted into a Greengrass component. This can be done on a local machine or an Amazon Elastic Compute Cloud (EC2) instance configured with AWS credentials and IAM policies granting permissions to Amazon Simple Storage Service (S3).

Step 3.1: Build/Publish/Deploy the component to the edge device from a local machine or EC2 instance
From the local machine or EC2 instance terminal, clone the GitHub repository and configure the environment:
$ git clone https://github.com/aws-samples/deploy-yolov8-on-edge-using-aws-iot-greengrass
$ export AWS_ACCOUNT_NUM="ADD_ACCOUNT_NUMBER"
$ export AWS_REGION="ADD_REGION"
$ export DEV_IOT_THING="NAME_OF_IOT_THING"
$ export DEV_IOT_THING_GROUP="NAME_OF_IOT_THING_GROUP"
Open recipe.json under the components/com.aws.yolov8.inference directory, and modify the items in Configuration. Here, model_loc is the location of the model on the edge device defined in Step 2.1:
"Configuration": {
    "event_topic": "inference/input",
    "output_topic": "inference/output",
    "camera_id": "0",
    "model_loc": "edge/device/path/to/models/yolov8n.pt" OR "edge/device/path/to/models/yolov8n.trt"
}
Install the GDK on the local machine or EC2 instance by running the following commands in the terminal:
$ python3 -m pip install -U git+https://github.com/aws-greengrass/aws-greengrass-gdk-cli.git@v1.2.0
$ [For Linux] apt-get install jq
$ [For MacOS] brew install jq
Build, publish, and deploy the component automatically by running the deploy-gdk-build.sh script in the utils directory on the local machine or EC2 instance:
$ cd utils/
$ chmod u+x deploy-gdk-build.sh
$ ./deploy-gdk-build.sh

Step 3.2: Run inference using AWS IoT Core
Here, we will demonstrate how to use the AWS IoT Core console to run the models and retrieve outputs. The selection of the model has to be made in recipe.json on your local machine or EC2 instance and re-deployed using the deploy-gdk-build.sh script. Once the inference starts, the edge device will identify the model framework and run the workload accordingly. The output generated on the edge device is pushed to the cloud using MQTT and can be viewed when subscribed to the topic. The figure below shows the inference timestamp, model type, runtime, frames per second, and model format.
To view MQTT messages in the AWS console, do the following:
In the AWS IoT Core console, in the left menu, under Test, choose MQTT test client.
In the Subscribe to a topic tab, enter the topic inference/output and then choose Subscribe.
In the Publish to a topic tab, enter the topic inference/input and then enter the below JSON as the Message Payload. Modify the status to start, pause, or stop for starting/pausing/stopping inference:
{ "status": "start" }
Once the inference starts, you can see the output returning to the console.
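If you prefer to trigger inference programmatically rather than from the console, a minimal sketch using boto3's IoT data plane client would look like the following; the topic name comes from the recipe.json above, while the region and credential setup are assumptions for illustration, not part of the sample repository:

import json
import boto3

# Publish the start command to the component's input topic (assumes configured AWS credentials).
iot_data = boto3.client("iot-data", region_name="us-east-1")
iot_data.publish(
    topic="inference/input",
    qos=1,
    payload=json.dumps({"status": "start"}),
)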
Benchmarking YOLOv8 on the Seeed Studio reComputer J4012
We compared ML runtimes of different YOLOv8 models on the reComputer J4012, and the results are summarized below. The models were run on a test video, and latency metrics were obtained for different model formats and input shapes. Interestingly, PyTorch model runtimes didn't change much across different model input sizes, while TensorRT showed marked improvement in runtime with reduced input shape. The reason PyTorch runtimes change so little is that the PyTorch model does not actually run at the reduced input shape; instead, the input image is resized and padded to match the model's native 640×640 input shape, so the model does roughly the same amount of work regardless of input size. In fact, PyTorch models showed slightly worse latency when the model input shape was decreased, due to the extra padding. When compiling to TensorRT, the model input shape is already taken into account, which removes the padding; hence TensorRT models perform better with reduced input shapes. Depending on the input size and model type, TensorRT-compiled models performed better than their PyTorch counterparts.

The following table summarizes the latency benchmarks (pre-processing, inference, and post-processing) for different input shapes using PyTorch and TensorRT models running Detection and Segmentation. The results show the runtime in milliseconds for different model formats and input shapes. For results on raw inference runtimes, please refer to the benchmark results published in Seeed Studio's blog post.

Model Input [H x W]    Detection – YOLOv8n (ms)    Segmentation – YOLOv8n-seg (ms)
                       PyTorch    TensorRT         PyTorch    TensorRT
[640 x 640]            27.54      25.65            32.05      29.25
[480 x 640]            23.16      19.86            24.65      23.07
[320 x 320]            29.77      8.68             34.28      10.83
[224 x 224]            29.45      5.73             31.73      7.43

Cleaning up
While unused Greengrass components and deployments do not add to the overall cost, it is good practice to turn off the inference code on the edge device, as described above, using MQTT messages. The GitHub repository also provides an automated script to cancel the deployment. The same script also helps to delete any unused deployments and components, as shown below:
From the local machine or EC2 instance, configure the environment variables again using the same values used in Step 3.1:
$ export AWS_ACCOUNT_NUM="ADD_ACCOUNT_NUMBER"
$ export AWS_REGION="ADD_REGION"
$ export DEV_IOT_THING="NAME_OF_IOT_THING"
$ export DEV_IOT_THING_GROUP="NAME_OF_IOT_THING_GROUP"
From the local machine or EC2 instance, go to the utils directory and run the cleanup_gg.py script:
$ cd utils/
$ python3 cleanup_gg.py

Conclusion
In this post, we demonstrated how to deploy YOLOv8 models to Seeed Studio's reComputer J4012 device and run inferences using AWS IoT Greengrass components. In addition, we benchmarked the performance of the reComputer J4012 device with various model configurations, such as model size, type, and image size. We demonstrated the near real-time performance of the models when running at the edge, which allows you to monitor and track what's happening inside your facilities. We also shared how AWS IoT Greengrass alleviates many pain points around managing IoT edge devices, deploying ML models, and running inference at the edge. For any inquiries around how our team at AWS Professional Services can help with configuring and deploying computer vision models at the edge, please visit our website.

About Seeed Studio
We would first like to acknowledge our partners at Seeed Studio for providing us with the AWS Greengrass certified reComputer J4012 device for testing.
Seeed Studio is an AWS Partner and has been serving the global developer community since 2008 by providing open technology and agile manufacturing services, with the mission to make hardware more accessible and lower the threshold for hardware innovation. Seeed Studio is NVIDIA's Elite Partner and offers a one-stop experience to simplify embedded solution integration, including custom image flashing service, fleet management, and hardware customization. Seeed Studio speeds time to market for customers by handling integration, manufacturing, fulfillment, and distribution. Learn more about their NVIDIA Jetson ecosystem.

Romil Shah is a Sr. Data Scientist at AWS Professional Services. Romil has more than six years of industry experience in computer vision, machine learning, and IoT edge devices. He is involved in helping customers optimize and deploy their machine learning workloads for edge devices.
Kevin Song is a Data Scientist at AWS Professional Services. He holds a PhD in Biophysics and has more than five years of industry experience in building computer vision and machine learning solutions." Deputy Case Study _ Amazon Web Services.txt,"Amazon Aurora Helps Deputy Improve Performance by 30% and Expand Customer Base to Large Organisations (2023)
Customer Stories / Software & Internet

Overview
Deputy provides cloud-based workforce management and scheduling solutions that enable companies to schedule complex shift work. With Amazon Aurora, Deputy took advantage of high throughput and variable scaling to speed query processing times, improve reliability, and boost performance by nearly 30 percent to better support its enterprise customers.
Opportunity | Scheduling Millions of Shift Workers on Deputy's Platform
Deputy is a cloud-based workforce scheduling platform designed to automate the complex calculations required to optimally schedule shift work. More than 330,000 workplaces and 1.4 million shift workers around the world rely on Deputy software to automate scheduling and facilitate workforce management.

Born in the cloud, Deputy's workforce scheduling platform has been powered by Amazon Web Services (AWS) since the very beginning. The original platform was built using Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3) with self-managed MySQL databases on Amazon EC2. As the company grew, it committed to minimizing operational burdens for its engineers, which meant moving to the fully managed Amazon Relational Database Service (Amazon RDS). "Amazon RDS allowed us to focus on our product, while leaning on Amazon scaling up and down to effortlessly serve our fast-growing, largest customers," said Deepesh Banerji, chief product officer at Deputy.

While Amazon RDS freed Deputy's engineering team from infrastructure management, the company had its sights set on other scaling solutions that would supercharge its growth by resonating with large-scale enterprises with even more complex scheduling needs. In particular, the team wanted a solution capable of handling Deputy's read-heavy application without replication lag. The company also wanted to be sure the platform could handle the larger volumes of data and queries coming from its growing stable of midmarket and enterprise customers. After working with the AWS team on a proof of concept, Deputy chose to move its workforce scheduling platform to Amazon Aurora and run on Aurora MySQL version 3, which is wire-compatible with MySQL 8.0.

Solution | Delivering High Performance for Massive Clusters
Amazon Aurora is a fully managed relational database that delivers faster queries, decreased latency, high performance, and reliability. Its high throughput rate makes it particularly well suited for computationally heavy workloads like Deputy's. "Our data stores are massive—each cluster has up to 10,000 databases, and each database can have as many as 200 tables," explained Rajini Carpenter, vice president of engineering at Deputy. "That's close to 2 million tables in a single cluster, and just watching how Amazon Aurora handles that is amazing."

Amazon Aurora also natively integrates with other critical components of Deputy's infrastructure. For example, Deputy uses Amazon OpenSearch Service for data-powered business insights and has built a data pipeline using Amazon Kinesis Data Firehose and AWS Lambda to load streaming data into OpenSearch clusters. In addition, Deputy offers a touch-free facial-analysis feature with biometric validation for employees to clock in and out, built using Amazon Rekognition. "We've received a tremendous amount of support from AWS to fuel us to go upmarket and serve larger, more complex businesses," said Qamal Kosim-Satyaputra, senior director of engineering at Deputy. "We wouldn't be here without their support."

Outcome | Boosting Performance by 30% and Improving Reliability
Since moving to Amazon Aurora, Deputy has seen an improvement in query speed and latency of up to 30 percent, and the massive migration itself was implemented in 8 weeks. The platform is also more reliable, with faster failovers and the ability to easily recover lost data. "The reliability improvements have been extremely helpful in our day-to-day operations," said Deepika Rao, engineering manager at Deputy. "In situations where our customers accidentally delete their records, we've been able to backtrack and spin up a new cluster in a matter of minutes, rather than having to restore them manually from terabytes of data." Kosim-Satyaputra added, "Since we can lean on Amazon Aurora for scaling and maintaining our databases, we can focus on building world-class software."

Benefits of AWS
Up to 30% improvement in query speed and latency
Rapid deployment: implemented a massive migration in 8 weeks
Data recovery: can recover deleted records in minutes

About Deputy
Deputy is on a mission to Simplify Shift Work™ for millions of shift workers and businesses globally. The company streamlines scheduling, timesheets, tasks, and communication for business owners and their workers. Deputy is available on AWS Marketplace.

AWS Services Used
Amazon Aurora provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To learn more, visit aws.amazon.com/rds/aurora.
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud." Design considerations for cost-effective video surveillance platforms with AWS IoT for Smart Homes _ The Internet of Things on AWS Official Blog.txt,"The Internet of Things on AWS – Official Blog
Design considerations for cost-effective video surveillance platforms with AWS IoT for Smart Homes
by Thorben Sanktjohanser | on 14 JUL 2023 | in Amazon API Gateway, Amazon Cognito, Amazon DynamoDB, Amazon Kinesis, AWS IoT Core, Intermediate (200), Internet of Things, Kinesis Video Streams, Technical How-to

Introduction
Designing and developing a cost-efficient, cloud-connected video platform for surveillance cameras and smart home devices requires developers to architect and integrate a streaming service capable of ingesting, storing, and processing unstructured media data at scale. The infrastructure behind such a platform needs to handle large volumes of predicted data load along with the flexibility to support sudden, non-forecasted demand spikes. From buffering and latency to dropped connections and data storage issues, video streaming from smart home devices can be fraught with difficulties. Therefore, one of the key objectives for a smart camera solution must be the flexibility and scalability to support millions of devices, trillions of messages, and petabytes of data. Serverless computing eliminates the need for provisioning servers, enables automatic scaling and cost optimization by charging only for actual usage, and provides built-in fault tolerance and high availability. Serverless architectures promote agility, reduce operational complexity, and accelerate time-to-market for businesses.

Considerations
To deliver a smart camera solution capable of providing a scalable, reliable, and efficient video streaming service, you need to consider the costs associated with managing the servers, storage, and network hardware responsible for providing high-bandwidth and low-latency network performance. Procuring, installing, and maintaining the hardware can lower your staff's focus on creating differentiated applications and delivering a better user experience. Amazon Kinesis Video Streams is a fully managed AWS service that enables you to securely stream media for storage, analytics, and playback without provisioning servers. You do not have to build, operate, or scale any WebRTC (Web Real-Time Communication) related cloud infrastructure, such as signaling servers or media relay servers, to securely stream media across applications and devices.
This makes it an ideal service to combine with AWS IoT for connected products. HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH) are two streaming protocols used to deliver pre-recorded, on-demand, and live video content from a server. WebRTC is an open-source project and set of technologies that enables real-time, low-latency, peer-to-peer communication directly between web browsers or mobile applications. With Amazon Kinesis Video Streams, you can choose from two options to provide live video streaming: play back videos from streams with HLS and DASH, or use low-latency, two-way media streaming with WebRTC.

The option to stream from HLS and DASH incurs data transfer charges from the Kinesis Video Streams service to the internet. The Kinesis Video Streams service charges you per GB for data ingested and data consumed. There is no additional fee for data transferred from the internet to AWS. As of December 1, 2021, data transferred out to the internet is free for the first 100 GB of each month; an additional fee per GB applies to data transferred beyond that. Further cost improvements can be achieved by lowering data rates using compression, or with dynamic bitrate and frame rate adjustments of a video stream. In a 24×7 streaming scenario, I recommend lowering the bitrate to an acceptable minimum, because the bitrate used in your product is a major contributing factor to the overall Kinesis Video Streams service cost. Amazon Kinesis Video Streams supports different video codecs, such as H.264 (Advanced Video Coding, or AVC) and H.265 (High Efficiency Video Coding, or HEVC). You can read more about the differences and their trade-offs in this blog post. Consider the overall video and audio quality, the effective bitrate, the resulting data volume, and the capabilities of your hardware when selecting a codec for your product.

The data egress costs scale with the number of cameras and users on your platform when streaming live from HLS and DASH. Data egress can be avoided when using Kinesis Video Streams with WebRTC and peer-to-peer connections. Kinesis Video Streams with WebRTC uses a signaling channel to exchange connection information between peers. Afterwards, the peers connect directly with each other for live streaming, instead of sending or receiving data from the AWS Cloud. Charges occur for each signaling channel active in a given month and for the number of signaling messages sent and received. There are no charges for streaming video content directly, peer-to-peer, without a relay server. In cases where direct connections are not feasible due to restrictive network conditions, a relay server (TURN) provided by Kinesis Video Streams is used. This server relays the media traffic between peers to ensure connectivity. Relaying media traffic via the TURN server is charged in streaming minutes, with an additional fee per GB for data transferred out after the first 100 GB.

Architecture Overview
Figure 1. Surveillance camera platform architectural diagram.
Because Kinesis Video Streams is fully managed, you do not have to build, operate, or scale any WebRTC-related cloud infrastructure such as signaling or media relay servers; you simply use the Kinesis Video Streams with WebRTC SDK on the camera and the client. Until now, I have discussed how you can stream video from a smart camera to a client over a peer-to-peer connection and shared considerations on costs.
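To make the bitrate guidance concrete, here is a rough, illustrative calculation of monthly data volume per camera; the 1 Mbps figure is an assumed example bitrate, not a recommendation:

# Estimate monthly streamed data volume for a single 24x7 camera (illustrative only).
bitrate_mbps = 1.0                        # assumed average video bitrate in megabits per second
seconds_per_month = 60 * 60 * 24 * 30     # approximate seconds in a month
mb_per_second = bitrate_mbps / 8          # megabits per second -> megabytes per second
gb_per_month = mb_per_second * seconds_per_month / 1000
print(f"~{gb_per_month:.0f} GB per camera per month")  # prints ~324 GB

Halving the bitrate halves this volume, which is why bitrate is the first lever to pull for a 24×7 product.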
Another part of this architecture is the administration and control of the smart camera itself: provisioning, configuration, security, and maintenance to ensure the smart device functions properly. You can onboard your smart cameras to AWS by using AWS IoT Core to implement a secure connection between the device and AWS to manage them. The service includes a device gateway and a message broker. The communication from the camera to AWS IoT Core is based on MQTT, a lightweight publish-subscribe network protocol. The recommended way of securing the management connection between smart home devices and the AWS Cloud is by using X.509 certificates. The certificates allow you to authorize cameras to access services on AWS. AWS IoT Core can generate and register an individual certificate for each device at scale. In this architecture, the fleet provisioning by claim method is used. A bootstrap certificate is saved to the camera, which will be automatically exchanged for a unique device certificate upon provisioning. During the provisioning process, an AWS Lambda function reads a database table that holds information, such as a serial number, for all of the manufactured surveillance cameras to verify the cameras accessing the services. In this architecture, the serverless key-value database service Amazon DynamoDB is used to verify identities and to store user and device data. DynamoDB integrates seamlessly with AWS IoT services, delivering consistent, single-digit-millisecond latency at any scale and enabling real-time processing and analysis of IoT data.

For communication on the client side, you can implement the serverless authenticate-and-authorize pattern to control access to your backend services. Amazon Cognito provides a user directory storing users' profile attributes, such as usernames, email addresses, and phone numbers. The client receives access tokens from Cognito to verify users and to authorize access to backend services and surveillance cameras. Amazon API Gateway handles the verification of access tokens by providing a REST API that integrates with Amazon Cognito. This authorizes authenticated users to proxy requests from the client to the backend services with Amazon API Gateway. The backend services receiving and returning requests in this architecture are built with AWS Lambda, which allows you to run code on demand. You can use a Lambda function to read from the manufacturer database to verify devices and to bind user accounts to cameras. Lambda will request session credentials on demand with AWS Identity and Access Management (IAM) to access the signaling channel of the camera on Kinesis Video Streams. With generated credentials, you can isolate clients from each other.
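To make the provisioning check concrete, here is a minimal sketch of a Lambda pre-provisioning hook that verifies a camera's serial number against a DynamoDB table; the table name, key schema, and event shape are illustrative assumptions, not the sample repository's actual code:

import boto3

dynamodb = boto3.resource("dynamodb")
# Hypothetical table of manufactured devices, keyed by serial number.
devices = dynamodb.Table("ManufacturedCameras")

def handler(event, context):
    # Serial number reported by the camera during fleet provisioning (assumed payload shape).
    serial = event.get("parameters", {}).get("SerialNumber", "")
    item = devices.get_item(Key={"serialNumber": serial}).get("Item")
    # Allow provisioning to continue only for devices found in the manufacturer table.
    return {"allowProvisioning": item is not None}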
Walkthrough
You will incur costs when deploying the Amazon Kinesis Video Streams Serverless Surveillance Platform in your account. When you are finished examining the example, follow the steps in the Cleanup section to delete the infrastructure and stop incurring charges. Have a look at the README file in the repository to understand the building blocks of the platform example in detail.

You can use AWS Cloud9 to deploy the code sample. Cloud9 provides a cloud-based platform for developers to write, debug, and collaborate on code using a web browser, making it convenient and accessible from anywhere. The code sample was tested using Cloud9, which reduces the need for local setup and configuration.

Step 1: Create a Cloud9 environment
Open Cloud9 in the AWS Management Console
Click on Create environment
Name your environment surveillance-camera-ide
Click on Create and wait until the environment is created
Choose surveillance-camera-ide and Open in Cloud9
Open a terminal in Cloud9
Clone the Amazon Kinesis Video Streams Serverless Surveillance Platform repository:
git clone https://github.com/aws-samples/amazon-kinesis-video-streams-serverless-surveillance-platform.git

Step 2: Deploy the surveillance camera platform
Copy the Cloud9 ID from the address bar in your browser, i.e., <region>.console.aws.amazon.com/cloud9/ide/59f5e14c6cdb4fbb95f61f107b5ad86d
Install the infrastructure from the root directory with the Cloud9 ID as follows:
cd infrastructure
sh ./install-infrastructure.sh 59f5e14c6cdb4fbb95f61f107b5ad86d
Deploy the camera mock from the root directory as follows (the deployment of the camera takes up to 10 minutes):
cd camera
sh ./install-mock.sh
Deploy the web client from the root directory as follows:
cd web-client
yarn install --silent
yarn start
Open https://59f5e14c6cdb4fbb95f61f107b5ad86d.vfs.cloud9.<region>.amazonaws.com (substituting your Cloud9 ID and AWS Region)
(Alternatively) Click on Preview in the top bar in Cloud9, select Preview Running Application, and select Pop Out Into New Window in the preview window

Step 3: Log in and bind the camera mock to your account
Copy the Username and Password and select Login
Enter the credentials and select a new password
Set up a software MFA in the Cognito hosted UI
Enter the provided Serial number and Secret and select Submit
Once the camera mock provision status is true, select BCM2835-00000000b211cf11 in the table. Refresh the page to request a status update or if an error occurs.
You will see the test stream from the camera mock as below.
Figure 2. Web client sample stream from camera mock

Cleanup
Remove the infrastructure, camera mock, and Cloud9 environment:
Remove the infrastructure from the root directory within Cloud9 as follows:
cd infrastructure
sh ./uninstall-infrastructure.sh
Remove the camera mock from the root directory within Cloud9 as follows:
cd camera
sh ./uninstall-mock.sh
Navigate to Cloud9 in the AWS Management Console, choose surveillance-camera-ide, and click Delete

Conclusion
The architecture covered above shows an approach to building a cloud-connected surveillance camera. With these considerations in mind, you can determine a pricing model and build a cost-efficient, cloud-connected video surveillance platform with AWS IoT. Follow the next steps and read the following resources to provide your consumers with state-of-the-art functionality and use cases:
Integrate real-time alerts on the live video stream with Amazon Rekognition. Follow this blog post.
Provide your own machine learning models to cameras performing inference without a connection to the cloud. Read more about it here.
Stream and process data from video streams locally with a machine learning appliance like AWS Panorama. Read this blog post to see how other customers leverage IoT services.
Build a machine learning pipeline to save images from your Kinesis Video Streams stream to S3 for further processing. See this blog post to implement this feature.

About the author
Thorben Sanktjohanser is a Solutions Architect at Amazon Web Services supporting small- and medium-sized businesses on their cloud journey with his expertise.
Thorben has an Information Systems and Management background and has gathered knowledge across different business verticals, which he uses to innovate together with his customers on modern data strategies and migrations. He is passionate about IoT and building smart home devices. Almost every part of his home is automated, from light bulbs and blinds to vacuum cleaning and mopping." Designing a hybrid AI_ML data access strategy with Amazon SageMaker _ AWS Architecture Blog.txt,"AWS Architecture Blog
Designing a hybrid AI/ML data access strategy with Amazon SageMaker
by Franklin Aguinaldo, Ananta Khanal, Sid Misra, and Tony Chen | on 10 JUL 2023 | in Amazon Elastic File System (EFS), Amazon File Cache, Amazon FSx for Lustre, Amazon SageMaker, Architecture, AWS DataSync, AWS Direct Connect, AWS Storage Gateway

Over time, many enterprises have built an on-premises cluster of servers, accumulating data, and then procuring more servers and storage. They often begin their ML journey by experimenting locally on their laptops. Investment in artificial intelligence (AI) is at a different stage in every business organization. Some remain completely on-premises, others are hybrid (both on-premises and cloud), and the remaining have moved completely into the cloud for their AI and machine learning (ML) workloads. These enterprises are also researching or have started using the cloud to augment their on-premises systems, for several reasons. As technology improves, both the size and quantity of data increase over time. The amount of data captured and the number of datapoints continue to expand, which presents a challenge to manage on-premises. Many enterprises are distributed, with offices in different geographic regions, continents, and time zones. While it is possible to increase the on-premises footprint and network pipes, there are still hidden costs to consider for maintenance and upkeep. These organizations are looking to the cloud to shift some of that effort, enabling them to burst and use the rich AI and ML features in the cloud.

Defining a hybrid data access strategy
Moving ML workloads into the cloud calls for a robust hybrid data strategy describing how and when you will connect your on-premises data stores to the cloud. For most, it makes sense to make the cloud the source of truth, while still permitting your teams to use and curate datasets on-premises. Defining the cloud as the source of truth for your datasets means the primary copy will be in the cloud and any dataset generated will be stored in the same location in the cloud. This ensures that requests for data are served from the primary copy and any derived copies.

A hybrid data access strategy should address the following:
Understand your current and future storage footprint for ML on-premises. Create a map of your ML workloads, along with performance and access requirements for testing and training.
Define connectivity across on-premises locations and the cloud. This includes east-west and north-south traffic to support interconnectivity between sites, along with the bandwidth and throughput required for the data movement workload.
Define your single source of truth (SSOT)[1] and where the ML datasets will primarily live. Consider how dated, new, hot, and cold data will be stored.
Define your storage performance requirements, mapping them to the appropriate cloud storage services. This will give you the ability to take advantage of cloud-native ML with Amazon SageMaker.

Hybrid data access strategy architecture
To help address these challenges, we worked on outlining an end-to-end system architecture in Figure 1 that defines: 1) connectivity between on-premises data centers and AWS Regions; 2) mappings for on-premises data to the cloud; and 3) aligning Amazon SageMaker to appropriate storage, based on ML requirements.

Figure 1. AI/ML hybrid data access strategy reference architecture

Let's explore this architecture step by step.
1. On-premises connectivity to the AWS Cloud runs through AWS Direct Connect for high transfer speeds.
2. AWS DataSync is used for migrating large datasets into Amazon Simple Storage Service (Amazon S3). The AWS DataSync agent is installed on-premises.
3. On-premises network file system (NFS) or server message block (SMB) data is bridged to the cloud through Amazon S3 File Gateway, using either a virtual machine (VM) or hardware appliance. AWS Storage Gateway uploads data into Amazon S3 and caches it on-premises.
4. Amazon S3 is the source of truth for ML assets stored on the cloud.
5. Download S3 data for experimentation to Amazon SageMaker Studio.
6. Amazon SageMaker notebook instances can access data through S3, Amazon FSx for Lustre, and Amazon Elastic File System. Use Amazon File Cache for high-speed caching for access to on-premises data, and Amazon FSx for NetApp ONTAP for cloud bursting.
7. SageMaker training jobs can use data in Amazon S3, EFS, and FSx for Lustre. S3 data is accessed via File, Fast File, or Pipe mode, and pre-loaded or lazy-loaded when using FSx for Lustre as training job input. Any existing data on EFS can also be made available to training jobs.
8. Leverage Amazon S3 Glacier for archiving data and reducing storage costs.

ML workloads using Amazon SageMaker
Let's go deeper into how SageMaker can help you with your ML workloads. To start mapping ML workloads to the cloud, consider which AWS storage services work with Amazon SageMaker. Amazon S3 typically serves as the central storage location for both structured and unstructured data that is used for ML. This includes raw data coming from upstream applications, and also curated datasets that are organized and stored as part of a Feature Store. In the initial phases of development, a SageMaker Studio user will leverage S3 APIs to download data from S3 to their private home directory. This home directory is backed by a SageMaker-managed EFS file system. Studio users then point their notebook code (also stored in the home directory) to the local dataset and begin their development tasks.

To scale up and automate model training, SageMaker users can launch training jobs that run outside of the SageMaker Studio notebook environment. There are several options for making data available to a SageMaker training job; a short code sketch follows the list below.
Amazon S3. Users can specify the S3 location of the training dataset. When using S3 as a data source, there are three input modes to choose from:
File mode. This is the default input mode, where SageMaker copies the data from S3 to the training instance storage. This storage is either a SageMaker-provisioned Amazon Elastic Block Store (Amazon EBS) volume or an NVMe SSD that is included with specific instance types. Training only starts after the dataset has been downloaded to the storage, and there must be enough storage space to fit the entire dataset.
Fast file mode. Fast file mode exposes S3 objects as a POSIX file system on the training instance. Dataset files are streamed from S3 on demand, as the training script reads them. This means that training can start sooner and requires less disk space. Fast file mode also does not require changes to the training code.
Pipe mode. Pipe input also streams data in S3 as the training script reads it, but requires code changes. Pipe input mode is largely replaced by the newer and easier-to-use fast file mode.
FSx for Lustre. Users can specify an FSx for Lustre file system, which SageMaker will mount to the training instance and run the training code. When the FSx for Lustre file system is linked to an S3 bucket, the data can be lazily loaded from S3 during the first training job. Subsequent training jobs on the same dataset can then access it with low latency. Users can also choose to pre-load the file system with S3 data using hsm_restore commands.
Amazon EFS. Users can specify an EFS file system that already contains their training data. SageMaker will mount the file system on the training instance and run the training code.
Find out how to Choose the best data source for your SageMaker training job.
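As a concrete illustration of fast file mode, here is a minimal sketch using the SageMaker Python SDK; the image URI, role, and S3 path are placeholders, not values from this reference architecture:

from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

# Hypothetical training job definition; fill in your own image, role, and instance type.
estimator = Estimator(
    image_uri="<training-image-uri>",
    role="<execution-role-arn>",
    instance_count=1,
    instance_type="ml.m5.2xlarge",
)

# Fast file mode streams objects on demand instead of copying the whole dataset first.
train_input = TrainingInput(
    s3_data="s3://<bucket>/training-data/",
    input_mode="FastFile",
)
estimator.fit({"train": train_input})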
Conclusion
With this reference architecture, you can develop and deliver ML workloads that run either on-premises or in the cloud. Your enterprise can continue using its on-premises storage and compute for particular ML workloads, while also taking advantage of the cloud, using Amazon SageMaker. The scale available on the cloud allows your enterprise to conduct experiments without worrying about capacity. Start defining your hybrid data strategy on AWS today!

Additional resources:
Choose the best data source for your Amazon SageMaker training job
Hybrid Machine Learning Whitepaper
Access Training data with Amazon SageMaker
Learn more about how to migrate data into the AWS Cloud
Learn more about different AWS storage offerings

[1] The practice of aggregating data from many sources to a single source or location.

Franklin Aguinaldo
Franklin is a Senior Solutions Architect at Amazon Web Services. He has over 20 years of experience in development and architecture. Franklin is an App Modernization SME and an expert on Serverless and Containers.
Ananta Khanal
Ananta Khanal is a Solutions Architect focused on cloud storage solutions at AWS. He has worked in IT for over 15 years and held various roles in different companies. He is passionate about cloud technology, infrastructure management, IT strategy, and data management.
Sid Misra
Sid Misra is a Senior Product Manager on the Amazon File Storage team. Sid has 15+ years of experience leading product and engineering teams focused on enterprise software, machine learning, computer vision, and wireless communications.
Tony Chen
Tony Chen is a Machine Learning Solutions Architect at Amazon Web Services, helping customers design scalable and robust machine learning capabilities in the cloud. As a former data scientist and data engineer, he leverages his experience to help tackle some of the most challenging problems organizations face with operationalizing machine learning.
" Developing a Pioneering Multicancer Early Detection Test _ GRAIL Case Study _ AWS.txt,"GRAIL Develops a Pioneering Multicancer Early Detection Test Using AWS (2022)
Customer Stories / Life Sciences
Learn how biotechnology company GRAIL used Amazon EC2 and 60 other scalable AWS services to pioneer new technologies for early cancer detection.
The enrollment ended in July 2022, and screenings are scheduled to continue annually for participants for 3 years. The NHS might eventually roll out the Galleri test to an additional one million people and has a long-term goal of detecting 75 percent of cancers while they are less advanced.

Solution | Achieving Scalability, Cost Savings, and Security Using AWS

Launched in 2021, the Galleri test takes genetic data from a single blood draw and screens for a cancer signal by analyzing DNA methylation patterns. The team uses AWS to support the commercial scaling of the infrastructure to meet high demand and to fuel the software that runs its labs. The infrastructure uses over 60 AWS services.

Headquartered in Menlo Park, California, GRAIL is a healthcare company working on innovative cancer-detection technologies. Because GRAIL deals with sensitive health-related information, having a strong networking and security program is imperative. To make sure that its data is secure and complies with data privacy laws, GRAIL uses Amazon Virtual Private Cloud (Amazon VPC), which lets organizations define and launch AWS instances in a logically isolated virtual network, with guardrails in place to control access to sensitive data. "AWS provides really good infrastructure and capabilities that we use for data protection and encryption at rest and in transit," says Satnam Alag, senior vice president for software development and chief security officer of GRAIL. "We're making use of the controls on AWS to restrict access to our sensitive data." GRAIL expands into different AWS Regions and scales globally while meeting data residency requirements by using the 87 Availability Zones on AWS.

For the compute resources to run Galleri tests at scale, GRAIL uses Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. "One of the biggest values of using AWS is that we can concentrate up the stack without needing to worry about scale associated with storage or compute," says Alag. To cost-efficiently run its computational workloads, the company uses Amazon EC2 Spot Instances, which let users take advantage of unused Amazon EC2 capacity. For its databases, GRAIL uses Reserved DB Instances for Amazon Aurora, which provide a significant discount compared to On-Demand database instance pricing.

The earlier cancer is diagnosed, the higher the chance of successful treatment and survival.
In the United States today, around 70 percent of all cancer-related deaths are from cancers with no recommended screening. GRAIL's mission is to detect cancers earlier, when they have a higher probability of being cured. Its pioneering Galleri test analyzes a single blood draw to detect multiple types of cancer, most of which cannot be detected with current screening paradigms. It also predicts with high accuracy where the cancer originated in those diagnosed with cancer. "No one knew if an assay would be able to detect multiple cancers at the same time through a blood test," says Alag. "With Galleri, we met success and results complementary to traditional standard-of-care screening."

To address its storage needs, GRAIL uses Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. The company has achieved cost savings using Amazon S3 Intelligent-Tiering, which automates storage cost savings by migrating data when access patterns change. "We transitioned most of our data to S3 Intelligent-Tiering, which led to 40 percent savings per gigabyte of storage cost," says Ignatova.
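As a hedged illustration of the kind of tiering policy described above (not GRAIL's actual configuration; the bucket name is a placeholder), a lifecycle rule that moves objects into S3 Intelligent-Tiering takes only a few lines of boto3:

import boto3

s3 = boto3.client("s3")

# Transition every object in the bucket to the Intelligent-Tiering storage
# class, so that S3 moves data between access tiers as patterns change.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-genomics-data",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)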
Outcome | Improving Testing Over Time Using AWS

The GRAIL team developed Reflow to manage its bioinformatics workloads on AWS. The Reflow language helps bioinformaticians compose existing tools, packaged in Docker images, using ordinary programming constructs. The Reflow runtime is deployed in Amazon Elastic Kubernetes Service (Amazon EKS) clusters, a managed service to run Kubernetes in the AWS cloud and in on-premises data centers. It evaluates Reflow programs and parallelizes workloads onto Spot Instances, further reducing costs. It also improved performance through incremental data processing and memoization of results. "We are constantly looking for opportunities to optimize our architecture and to get the boost of using AWS services that we haven't used before, changing our architecture to take advantage of those," says Alag.

Adding Galleri to the five US-recommended cancer screenings could potentially reduce 5-year cancer mortality by 39 percent in those intercepted. GRAIL is working on more clinical trials to add more data proving the efficacy of the Galleri test, and it is looking for ways to further improve the performance and cost of the test as it scales to a larger population. "We wouldn't have been able to scale, perform the huge number of computations, and store the large amounts of data that we deal with daily as easily without AWS infrastructure," says Alag. "Using AWS will be key for us as we scale the system across the world."

" Dexatek Optimizes Its IoT Platform and Boosts Spend on Innovation by 30 with AWS _ Dexatek Case Study _ AWS.txt,"

Dexatek Optimizes Its IoT Platform and Boosts Spend on Innovation by 30% with AWS

Customer Stories / Hi Tech, Electronics & Semiconductor | 2023

Dexatek Technology, based in Taiwan, gives electronic consumer products smart capabilities using its IoT solutions. To optimize its IoT platform for processing data from smart devices, Dexatek migrated to AWS IoT Core and AWS Lambda, along with the Amazon DynamoDB database service.

Benefits of AWS

- 30% more available resources for innovation
- 10x increase in processing performance
- Coding and testing times lowered from months to under 5 days
- Automated encryption and authentication for greater security

About Dexatek Technology

Dexatek Technology, headquartered in New Taipei City, designs, manufactures, and promotes Internet of Things (IoT) consumer electronic products. Founded in 2003, the company provides solutions for a range of smart appliances, covering home security, wellbeing, and more.

Opportunity | Making Smart Devices Easier to Scale and Less Management Intensive

Dexatek Technology helps consumer electronics companies incorporate smart technology into products like light switches, thermostats, and air-conditioning units. It equips businesses with IoT capabilities so that customers can remotely monitor and control their devices, such as adjusting the temperature of their homes or scheduling when their lights come on. The company is taking advantage of the growing market for smart home products, which is expected to attract $173 billion in consumer spending worldwide by 2025.

To drive growth in this expanding market, Dexatek wanted to optimize the Amazon Web Services (AWS) infrastructure that supported the processing of smart-device data. The infrastructure was based on a combination of Amazon Elastic Compute Cloud (Amazon EC2) instances and Amazon Simple Storage Service (Amazon S3) to handle the transfer of information to and from devices via the MQTT protocol.

Dexatek hoped to create a more scalable IoT platform that reduced management time while maintaining a high level of security. Jerry Chen, chief executive officer at Dexatek Technology, explains, "We had to scale our instances manually and schedule regular maintenance to update servers as well as the security certification for our MQTT connections. We wanted to eliminate these administrative activities so we could focus on development and growing the company."

With optimization as its goal, Dexatek looked at moving from Amazon EC2 to the serverless AWS Lambda service. In addition, it began investigating AWS IoT Core, which lets you connect billions of IoT devices and route trillions of messages to AWS services without managing infrastructure, to join, manage, and scale its smart device connections without having to think about security. Says Chen, "We decided to engage with AWS Solutions Architects to make sure we proceeded correctly. We wanted them to double-check everything we did to avoid any delays in the optimization process."

Solution | Freeing Up Resources for Innovation with AWS IoT Core

Working closely with AWS, Dexatek successfully migrated to AWS Lambda with AWS IoT Core to securely connect smart devices, and to Amazon DynamoDB, a fully managed, serverless, key-value NoSQL database, to easily store and query device data. The strong working relationship with AWS saved the Dexatek team a lot of work. "We completed development, including all APIs and basic testing, in under three months instead of the six to eight months expected for a project like this," says Chen.
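The case study stays at the level of architecture, but to make the data path concrete, here is a hedged sketch (not Dexatek's actual code; the table name and payload fields are invented for illustration) of a Lambda function that an AWS IoT Core rule could invoke to store a device reading in DynamoDB:

import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("DeviceReadings")  # hypothetical table name

def handler(event, context):
    # An AWS IoT Core rule can deliver the device's MQTT message payload
    # directly as the Lambda event; these fields are illustrative.
    table.put_item(
        Item={
            "device_id": event["device_id"],
            "timestamp": event["timestamp"],
            "temperature": event.get("temperature"),
            "status": event.get("status", "ok"),
        }
    )
    return {"statusCode": 200, "body": json.dumps("stored")}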
By optimizing its platform with AWS IoT Core and going serverless, Dexatek has tightened the security of device connections through mutual authentication and end-to-end encryption. "I think the overall stability of the platform is also greater," adds Chen, "which means I can go to bed at night and not think about problems such as a server causing the platform to go down."

In addition to being simpler to administer, the platform scales automatically as more IoT connections are added, and data travels between the platform and devices 10 times faster. "With AWS IoT Core, we can drive growth without worrying about platform workloads and offer businesses a level of performance that exceeds many of our competitors," comments Chen.

Dexatek can also onboard businesses more quickly, launching IoT platform demos for new customers in less than a week, a process that previously could take three months. This is because AWS IoT Core makes coding easier and testing periods shorter. Chen explains, "We give the engineers a heads-up on what we need them to do, and after three to five days, they're saying, 'It's done.'"

With development finished, Dexatek Technology is completing a final technical review before fully adopting the AWS IoT Core-based serverless architecture. Chen expects it to significantly reduce the amount of infrastructure management that IT personnel will need to perform. "We estimate that by moving to AWS IoT Core along with AWS Lambda, we can shift 30 percent more IT resources to product development," he says.

Outcome | Creating Opportunities for New Markets with AWS

Thanks to this optimization, the company has been able to dedicate 30 percent more resources to innovation and speed up coding and testing times while scaling platform performance tenfold. With AWS, Dexatek can continue pursuing expansion, using the platform's scalability to seize new business opportunities. As a first step, the company has launched its Dexatek IoT Core solution on AWS Marketplace to offer businesses an out-of-the-box solution, complete with mobile apps, that gives their products smart capabilities.

The ability to easily expand the IoT platform is helping Dexatek focus on new markets. Chen is already looking to a near future where the company goes beyond smart homes. "We have the expertise and the capabilities to support the transfer of IoT data from devices and sensors on cars just as well as in the home, which means fleet management could be an area of interest for the future," he says.

The company is currently experimenting with Amazon SageMaker to help train machine learning models and with AWS IoT Greengrass to leverage pre-built software components that would speed up delivery of its IoT device software. "If your goals are to reduce costs and make IoT devices smarter, then AWS has what you need," Chen concludes.
" Directing ML-powered Operational Insights from Amazon DevOps Guru to your Datadog event stream _ AWS DevOps Blog.txt,"

AWS DevOps Blog

Directing ML-powered Operational Insights from Amazon DevOps Guru to your Datadog event stream

by Bineesh Ravindran and David Ernst | on 13 JUL 2023 | in Amazon DevOps Guru, Amazon Machine Learning, Artificial Intelligence, AWS CLI, DevOps, Integration & Automation, Technical How-to

Amazon DevOps Guru is a fully managed AIOps service that uses machine learning (ML) to quickly identify when applications are behaving outside of their normal operating patterns and generates insights from its findings. These insights can be used to alert on-call teams to react to anomalies in business-critical workloads. If you already use Datadog to automate infrastructure monitoring, application performance monitoring, and log management for real-time observability of your entire technology stack, then this blog is for you. You might already be using the Datadog Events interface to search, analyze, and filter events from many different sources in one place. Datadog Events are records of notable changes relevant to managing and troubleshooting IT operations, such as code deployments, service health, configuration changes, and monitoring alerts.

Whenever DevOps Guru detects operational events in your AWS environment that could lead to outages, it generates insights and recommendations. These insights and recommendations are then pushed to a user-specific Datadog endpoint using the Datadog events API. You can then create dashboards, incidents, and alarms, or take corrective automated actions based on these insights and recommendations in Datadog.

Datadog collects and unifies all of the data streaming from these complex environments, with a one-click integration for pulling in metrics and tags from over 90 AWS services. Companies can deploy the Datadog Agent directly on their hosts and compute instances to collect metrics with greater granularity, down to one-second resolution. And with Datadog's out-of-the-box integration dashboards, companies get not only a high-level view into the health of their infrastructure and applications but also deeper visibility into individual services such as AWS Lambda and Amazon EKS.

This blog post shows you how to use Amazon DevOps Guru with Datadog to get real-time insights and recommendations on your AWS infrastructure. We will demonstrate how an insight generated by Amazon DevOps Guru for an anomaly can automatically be pushed to Datadog's event stream, which can then be used to create dashboards, alarms, and alerts and to take corrective actions.

Solution Overview

When an Amazon DevOps Guru insight is created, an Amazon EventBridge rule captures the insight as an event and routes it to an AWS Lambda function target. The Lambda function interacts with Datadog through a REST API to push the corresponding DevOps Guru events captured by Amazon EventBridge. The EventBridge rule can be customized to capture all DevOps Guru insights or narrowed down to specific insights.
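The connector deployed later in this post is a Java application; purely as a hedged illustration of the shape of that Lambda target, here is a minimal Python sketch that forwards an EventBridge-delivered DevOps Guru event to the Datadog events API (the secret name and key field match the setup steps below, and the field mapping is deliberately simplified):

import json
import boto3
import urllib.request

def handler(event, context):
    # Retrieve the Datadog API key stored in AWS Secrets Manager.
    secrets = boto3.client("secretsmanager")
    secret = json.loads(
        secrets.get_secret_value(SecretId="DatadogSecretManager")["SecretString"]
    )

    detail = event.get("detail", {})
    payload = {
        "title": f"DevOps Guru: {event.get('detail-type', 'insight event')}",
        "text": json.dumps(detail),  # simplified; map individual fields as needed
        "tags": ["source:amazon-devops-guru"],
        "alert_type": "warning",
    }

    req = urllib.request.Request(
        "https://api.datadoghq.com/api/v1/events",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "DD-API-KEY": secret["ApiKey"],  # key name assumed from the secret below
        },
    )
    urllib.request.urlopen(req)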
In this blog, we capture all DevOps Guru insights and perform actions in Datadog for the following DevOps Guru events:

- DevOps Guru New Insight Open
- DevOps Guru New Anomaly Association
- DevOps Guru Insight Severity Upgraded
- DevOps Guru New Recommendation Created
- DevOps Guru Insight Closed

Figure 1: Amazon DevOps Guru integration with Datadog using Amazon EventBridge and AWS Lambda.

Solution Implementation Steps

Prerequisites

Before you deploy the solution, complete the following steps.

1. Datadog account setup: We will connect your AWS account with Datadog. If you do not have a Datadog account, you can request a free trial developer instance through Datadog.

2. Datadog credentials: Gather the Datadog keys that will be used to connect with AWS. Follow the steps below to create an API key and an application key.

To add a Datadog API key or client token:
- Navigate to Organization Settings, then click API Keys or Client Tokens.
- Click the New Key or New Client Token button, depending on which you're creating.
- Enter a name for your key or token.
- Click Create API key or Create Client Token.
- Note down the newly generated API key value. We will need this in later steps.

Figure 2: Create new API key.

To add a Datadog application key, navigate to Organization Settings > Application Keys. If you have permission to create application keys, click New Key. Note down the newly generated application key. We will need this in later steps.

3. Add the application key and API key to AWS Secrets Manager: Secrets Manager enables you to replace hardcoded credentials in your code, including passwords, with an API call to Secrets Manager that retrieves the secret programmatically. This helps ensure the secret can't be compromised by someone examining your code, because the secret no longer exists in the code. Follow the steps below to create a new secret in AWS Secrets Manager (a scripted alternative using boto3 is sketched after this prerequisites list).
- Open the Secrets Manager console at https://console.aws.amazon.com/secretsmanager/
- Choose Store a new secret.
- On the Choose secret type page, for Secret type, choose Other type of secret, and enter your secret in Key/value pairs.

Figure 3: Create new secret in Secrets Manager.

- Click Next, enter "DatadogSecretManager" as the secret name, and then choose Review and Finish.

Figure 4: Configure secret in Secrets Manager.

4. Enable DevOps Guru for your applications by following these steps, or follow this blog to deploy a sample serverless application that can be used to generate DevOps Guru insights for anomalies detected in the application.

5. AWS Cloud9 is recommended for creating an environment, as the AWS Serverless Application Model (SAM) CLI and the AWS Command Line Interface (CLI) are pre-installed and can be accessed from a bash terminal.

6. Install and set up the SAM CLI.

7. Download and set up Java. The version should match the runtime that you define in the SAM template.yaml serverless function configuration: install the Java SE Development Kit 11.

8. Maven: install Maven.
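As referenced in step 3 above, if you would rather script the secret creation than click through the console, a minimal boto3 sketch follows; the key names ApiKey and ApplicationKey are a choice made for this illustration, and whatever names you store must match what your connector code reads:

import json
import boto3

secrets = boto3.client("secretsmanager")

# Store both Datadog keys under the secret name used throughout this post.
secrets.create_secret(
    Name="DatadogSecretManager",
    SecretString=json.dumps(
        {
            "ApiKey": "<your-datadog-api-key>",          # placeholder
            "ApplicationKey": "<your-datadog-app-key>",  # placeholder
        }
    ),
)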
Option 1: Deploy the Datadog Connector App from the AWS Serverless Application Repository

The DevOps Guru Datadog Connector application is available in the AWS Serverless Application Repository, a managed repository for serverless applications. The application is packaged with an AWS Serverless Application Model (SAM) template, a definition of the AWS resources used, and a link to the source code. Follow the steps below to quickly deploy this serverless application in your AWS account.

- Log in to the AWS Management Console of the account to which you plan to deploy this solution.
- Go to the DevOps Guru Datadog Connector application in the AWS Serverless Application Repository and click Deploy.
- On the Lambda application deployment screen, enter the Datadog application name.

Figure 5: DevOps Guru Datadog connector.

Figure 6: Serverless application DevOps Guru Datadog connector.

After successful deployment, the AWS Lambda Application page will display the "Create complete" status for the serverlessrepo-DevOps-Guru-Datadog-Connector application. The CloudFormation template creates four resources:

- The Lambda function, which has the logic to integrate with Datadog
- The EventBridge rule for the DevOps Guru insights
- The Lambda permission
- The IAM role

Now skip Option 2 and follow the steps in the "Test the Solution" section to trigger some DevOps Guru insights and recommendations and validate that the events are created and updated in Datadog.

Option 2: Build and deploy the sample Datadog Connector App using the AWS SAM Command Line Interface

As shown above, you can deploy the sample serverless application directly from the Serverless Application Repository with one-click deployment. Alternatively, you can clone the GitHub source repository and deploy using the SAM CLI from your terminal. The Serverless Application Model Command Line Interface (SAM CLI) is an extension of the AWS CLI that adds functionality for building and testing serverless applications. The CLI provides commands that enable you to verify that AWS SAM template files are written according to the specification, invoke Lambda functions locally, step-through debug Lambda functions, package and deploy serverless applications to the AWS Cloud, and so on. For details about how to use the AWS SAM CLI, including the full AWS SAM CLI command reference, see AWS SAM reference – AWS Serverless Application Model.

Before you proceed, make sure you have completed the prerequisites section at the beginning, which sets up the AWS SAM CLI, Maven, and Java on your local terminal. You also need to install and set up Docker to run your functions in an Amazon Linux environment that matches Lambda.

1. Clone the source code from the GitHub repo:

git clone https://github.com/aws-samples/amazon-devops-guru-connector-datadog.git

2. Build the sample application using the SAM CLI:

$ cd DatadogFunctions
$ sam build
Building codeuri: $\amazon-devops-guru-connector-datadog\DatadogFunctions\Functions runtime: java11 metadata: {} architecture: x86_64 functions: Functions
Running JavaMavenWorkflow:CopySource
Running JavaMavenWorkflow:MavenBuild
Running JavaMavenWorkflow:MavenCopyDependency
Running JavaMavenWorkflow:MavenCopyArtifacts

Build Succeeded

Built Artifacts  : .aws-sam\build
Built Template   : .aws-sam\build\template.yaml

Commands you can use next
=========================
[*] Validate SAM template: sam validate
[*] Invoke Function: sam local invoke
[*] Test Function in the Cloud: sam sync --stack-name {{stack-name}} --watch
[*] Deploy: sam deploy --guided

This command builds the source of your application by installing the dependencies defined in Functions/pom.xml, creates a deployment package, and saves it in the .aws-sam/build folder.

3. Deploy the sample application using the SAM CLI:
$ sam deploy --guided

This command packages and deploys your application to AWS, with a series of prompts that you should respond to as follows:

- Stack Name: The name of the stack to deploy to CloudFormation. This should be unique to your account and Region; a good starting point is something matching your project name.
- AWS Region: The AWS Region you want to deploy your application to.
- Confirm changes before deploy: If set to yes, any change sets will be shown to you before execution for manual review. If set to no, the AWS SAM CLI will automatically deploy application changes.
- Allow SAM CLI IAM role creation: Many AWS SAM templates, including this example, create the AWS IAM roles required for the included AWS Lambda function(s) to access AWS services. By default, these are scoped down to the minimum required permissions. To deploy an AWS CloudFormation stack that creates or modifies IAM roles, the CAPABILITY_IAM value for capabilities must be provided. If permission isn't provided through this prompt, you must explicitly pass --capabilities CAPABILITY_IAM to the sam deploy command to deploy this example.
- Disable rollback [Y/N]: If set to Y, preserves the state of previously provisioned resources when an operation fails.
- Save arguments to configuration file (samconfig.toml): If set to yes, your choices will be saved to a configuration file inside the project, so that in the future you can just rerun sam deploy without parameters to deploy changes to your application.

After you enter your parameters, you should see something like the following if you chose Y to view and confirm change sets. Proceed by entering 'Y' to deploy the resources.

Initiating deployment
=====================
Uploading to sam-app-datadog/0c2b93e71210af97a8c57710d0463c8b.template  1797 / 1797  (100.00%)

Waiting for changeset to be created..

CloudFormation stack changeset
---------------------------------------------------------------------------------------------
Operation    LogicalResourceId                ResourceType              Replacement
---------------------------------------------------------------------------------------------
+ Add        FunctionsDevOpsGuruPermission    AWS::Lambda::Permission   N/A
+ Add        FunctionsDevOpsGuru              AWS::Events::Rule         N/A
+ Add        FunctionsRole                    AWS::IAM::Role            N/A
+ Add        Functions                        AWS::Lambda::Function     N/A
---------------------------------------------------------------------------------------------

Changeset created successfully. arn:aws:cloudformation:us-east-1:867001007349:changeSet/samcli-deploy1680640852/bdc3039b-cdb7-4d7a-a3a0-ed9372f3cf9a

Previewing CloudFormation changeset before deployment
======================================================
Deploy this changeset?
[y/N]: y

2023-04-04 15:41:06 - Waiting for stack create/update to complete

CloudFormation events from stack operations (refresh every 5.0 seconds)
---------------------------------------------------------------------------------------------
ResourceStatus       ResourceType              LogicalResourceId              ResourceStatusReason
---------------------------------------------------------------------------------------------
CREATE_IN_PROGRESS   AWS::IAM::Role            FunctionsRole                  -
CREATE_IN_PROGRESS   AWS::IAM::Role            FunctionsRole                  Resource creation Initiated
CREATE_COMPLETE      AWS::IAM::Role            FunctionsRole                  -
CREATE_IN_PROGRESS   AWS::Lambda::Function     Functions                      -
CREATE_IN_PROGRESS   AWS::Lambda::Function     Functions                      Resource creation Initiated
CREATE_COMPLETE      AWS::Lambda::Function     Functions                      -
CREATE_IN_PROGRESS   AWS::Events::Rule         FunctionsDevOpsGuru            -
CREATE_IN_PROGRESS   AWS::Events::Rule         FunctionsDevOpsGuru            Resource creation Initiated
CREATE_COMPLETE      AWS::Events::Rule         FunctionsDevOpsGuru            -
CREATE_IN_PROGRESS   AWS::Lambda::Permission   FunctionsDevOpsGuruPermission  -
CREATE_IN_PROGRESS   AWS::Lambda::Permission   FunctionsDevOpsGuruPermission  Resource creation Initiated
CREATE_COMPLETE      AWS::Lambda::Permission   FunctionsDevOpsGuruPermission  -
CREATE_COMPLETE      AWS::CloudFormation::Stack sam-app-datadog               -
---------------------------------------------------------------------------------------------

Successfully created/updated stack - sam-app-datadog in us-east-1

Once the deployment succeeds, you should see the successful creation of your resources. You can also find your Lambda function, IAM role, and EventBridge rule in the CloudFormation stack output values.

You can also choose to test and debug your function locally with sample events using the SAM CLI local functionality. Test a single function by invoking it directly with a test event. An event is a JSON document that represents the input that the function receives from the event source. See Invoking Lambda functions locally – AWS Serverless Application Model for more details.

$ sam local invoke Functions -e event/event.json

Once you are done with the above steps, move on to the "Test the Solution" section below to trigger some DevOps Guru insights and validate that the events are created and pushed to Datadog.

Test the Solution

To test the solution, we will simulate a DevOps Guru insight. You can also simulate an insight by following the steps in this blog. After an anomaly is detected in the application, DevOps Guru creates an insight as shown below.

Figure 7: DevOps Guru insight for DynamoDB.

For the DevOps Guru insight shown above, a corresponding event is automatically created and pushed to Datadog, as shown below. In addition to the event's creation, any new anomalies and recommendations from DevOps Guru are also associated with the events.

Figure 8: DevOps Guru insight pushed to Datadog event stream.

Cleaning Up

To delete the sample application that you created, open a new terminal in your Cloud9 environment and run the AWS CLI command below, passing the stack name you provided in the deploy step:

aws cloudformation delete-stack --stack-name <stack-name>

Alternatively, you can use the AWS CloudFormation console to delete the stack.
Conclusion

This article highlights how Amazon DevOps Guru monitors resources within a specific Region of your AWS account, automatically detecting operational issues, predicting potential resource exhaustion, identifying probable causes, and recommending remediation actions. It describes a bespoke solution that integrates DevOps Guru insights with Datadog, enhancing management and oversight of AWS services. This solution helps customers who use Datadog to bolster operational efficiency, delivering customized insights, real-time alerts, and management capabilities directly from DevOps Guru in a unified interface so they can swiftly restore services and systems. To start gaining operational insights on your AWS infrastructure with Datadog, head over to the Amazon DevOps Guru documentation page.

About the authors:

Bineesh Ravindran is a Solutions Architect at Amazon Web Services (AWS) who is passionate about technology and loves to help customers solve problems. Bineesh has over 20 years of experience in designing and implementing enterprise applications. He works with AWS partners and customers to provide architectural guidance for building scalable architectures and to execute strategies that drive adoption of AWS services. When he's not working, he enjoys biking, aquascaping, and playing badminton.

David Ernst is a Sr. Specialist Solution Architect – DevOps, with 20+ years of experience in designing and implementing software solutions for various industries. David is an automation enthusiast and works with AWS customers to design, deploy, and manage their AWS workloads and architectures.

TAGS: AI/ML, AIOps, Amazon DevOps Guru, AWS Serverless Application Model (SAM), DevOps, Observability

" DTN Case Study _ HPC _ AWS.txt,"

DTN Doubles Weather Forecasting Performance Using Amazon EC2 Hpc6a Instances

2022

About DTN

DTN is a global data, analytics, and technology company that delivers unparalleled operational intelligence to help businesses prosper and organizations improve service delivery in agriculture, energy, and other weather-dependent industries.

Benefits of AWS

- Increased high-resolution model frequency from two to four runs per day
- Rendered 1 hour of forecast data in under 1 minute in a test scenario
- Supports faster results and more timely insights to customers

Helping Critical Organizations Make Data-Driven Decisions

Organizations in weather-sensitive industries need highly accurate and near-real-time weather intelligence to make adept business decisions. Many companies in these industries rely on information from DTN. To deliver high-level operational intelligence for weather-dependent industries, DTN deploys a suite of proprietary and supplementary weather data and models that produce sophisticated, high-resolution outputs and require continual processing of vast amounts of data from inputs across the globe. This complexity has historically limited how often forecast engines can update. To optimize its solutions for customers worldwide, DTN sought innovative ways to efficiently increase the frequency and accuracy of its weather forecasting models.

DTN specializes in the analysis and delivery of timely weather, agricultural, energy, and commodity market information. While most global weather forecasting organizations run models twice daily, DTN wanted to increase the frequency of forecast modeling to provide customers with intelligence that better reflects how changing weather could impact their operations. "In weather forecasting, we need highly elastic and scalable HPC systems to analyze huge amounts of data globally," says Doug Chenevert, director of the forecast platform at DTN. "Because weather changes rapidly, a system that can ingest data quickly and run our models frequently is critical for delivering near-real-time insights." DTN chose AWS for the capacity, flexibility, and maturity of its high-performance computing (HPC) capabilities and services. "Ideally, we want to render high-resolution global forecasts hourly," says Chenevert. "That kind of output is uncharted territory for weather forecasting, but we're getting closer by using AWS."
DTN began testing the HPC capabilities of Amazon Web Services (AWS) by running data processing and modeling workloads on Amazon Elastic Compute Cloud (Amazon EC2), a service that provides secure, resizable compute capacity in the cloud. As a proof of concept, DTN used historical data from Hurricane Laura, a category 4 hurricane that made landfall in Louisiana in August 2020. Using HPC on AWS, the company could reliably, accurately, and consistently double the frequency with which it could generate high-resolution weather forecasts. With faster model output, DTN can generate more timely and valuable insights for organizations that depend on them for safe and sustainable operations. For example, DTN weather data feeds Storm Impact Analytics, a machine learning application that helps electric utilities more accurately predict the power outages a given weather event might create. "We go beyond the data to give our customers timely, actionable insights for specific storms," says Chenevert. "We help them understand how to prepare for potential outages, estimate time to restore power, and plan for restoration response efficiently."

Achieving Agile HPC and Improving Performance in the Cloud

DTN engaged the AWS team in fall 2020 to explore how to efficiently increase the frequency of forecast outputs. Starting with existing data from Hurricane Laura as a benchmark, DTN developed and tested HPC infrastructures alongside the AWS team over 18 months to optimize the throughput potential of its forecast models. "We found a lot of value in collaborating with the AWS team," says Brent Shaw, chief weather architect and director of core content services at DTN. "As our engineers optimized our weather science workflows, AWS provided support in optimizing the HPC infrastructure. These changes led to improvements across our weather modeling technology stack."

Since DTN's successful proof of concept, the company has moved most of its weather data infrastructure to AWS. "The entire global forecasting solution currently runs on AWS," says Chenevert. This infrastructure supports a massive amount of data input, storage, and processing; the company estimates that it processes petabytes of data per day. Running tightly coupled HPC workloads presents a challenge, with intensive parallel processes running across many instances that must communicate with each other at high speeds. "Weather is the original big data problem," says Shaw. "Each part needs to know what's happening in the other parts of the system as it's happening." DTN runs these workloads in the cloud using Elastic Fabric Adapter (EFA), a network interface for Amazon EC2 instances that lets customers run applications requiring high levels of internode communication at scale.
"Working on AWS brings agility to HPC," says Shaw. "We can go from idea to production rapidly and scale in a way that's beneficial to us and our customers." Part of that agility is the result of using Amazon FSx for Lustre, which provides fully managed shared storage built on the world's most popular high-performance file system, and Amazon Simple Storage Service (Amazon S3), an object storage service that offers industry-leading scalability, data availability, security, and performance. DTN uses these services to store the data that it pulls in from around the world and to make it highly available to other parts of its technology infrastructure.

With the combination of AWS services and technical collaboration, DTN has been able to innovate more quickly, improve insights during rapidly evolving weather events, and offer the best operational intelligence possible for its customers. In January 2022 DTN began using Amazon EC2 Hpc6a Instances, which are designed specifically for compute-intensive HPC workloads and deliver up to 65 percent better price performance than comparable compute-optimized, x86-based instances, and effectively doubled its high-resolution global weather modeling capacity to four runs daily. The company needed a flexible and powerful management tool to increase throughput for its range of HPC workloads, such as simultaneously running atmospheric and oceanic wave-modeling spaces as well as handling rapid-refresh updates. It adopted AWS ParallelCluster, an open-source cluster management tool that makes it easier to deploy and manage HPC clusters on AWS.

Further testing with the Amazon EC2 Hpc6a Instances has shown the potential to compress rendering time to under 1 hour. "Our team celebrated when a test configuration showed that we could run our global model and generate 1 hour of forecast data in less than 1 minute on AWS," says Chenevert.
Delivering More Timely Weather Forecasts Using AWS

DTN has a long history of innovation and continues to develop infrastructures that deliver improved, more timely intelligence for customers. The company is currently exploring the artificial intelligence (AI) features of AWS while making further improvements to its forecast model processing. By collaborating with AWS and using its services, DTN has made improvements that further differentiate it from other data providers. "We view the accomplishments we've made to our global forecast engine on AWS as groundbreaking," says Chenevert. "It is truly innovative and extremely beneficial to the weather-dependent organizations that we serve."

" e-banner Streamlines Its Contact Center Operations and Facilitates a Fully Remote Workforce with Amazon Connect _ e-banner Case Study _ AWS.txt,"

e-banner Streamlines Its Contact Center Operations and Facilitates a Fully Remote Workforce with Amazon Connect

Customer Stories / Retail & Wholesale | 2023

e-banner, one of Hong Kong's largest print production companies, transformed its contact center with Amazon Connect to reduce costs, improve uptime, and empower its customer service team to work remotely.

AWS Services Used: Amazon Connect, Amazon EC2, Amazon RDS

Benefits of AWS

- 100% of customer service staff empowered to work from home
- 40% cost savings
- 80% less time to update IVR call flow
- Zero downtime, ensuring a reliable contact center experience

Opportunity | Overcoming the Challenges of an On-Premises Contact Center Solution

e-banner, one of Hong Kong's largest digital printing companies, is committed to providing quick, convenient, high-quality services to its customers. To make the digital printing process even easier, the company offers real-time quotations and a self-service order platform, as well as access to order history and status, 24/7 via its website.

To ensure a seamless online shopping experience, e-banner operates a contact center where customers can get help with any enquiries or issues they encounter. Its contact center solution was hosted on premises, which required customer service staff to work on site. The 2020 global pandemic rendered this approach unsustainable: the contact center had to be shut down entirely, leaving customers' inquiries unattended.

e-banner's on-premises contact center solution had a range of additional challenges. Maintaining the legacy system required the assistance of a third-party provider, and some features took several months to update. Furthermore, the business faced an annual increase in maintenance costs of 15–20 percent. In addition, e-banner's existing interactive voice response (IVR) system, used to automate simple customer requests by phone, was tedious to customize. As a result, the business had to allocate 30 customer service team members to attend to basic customer requests that could easily have been automated with the right IVR system in place. Team members also had to spend 30 minutes to an hour searching for customer information in e-banner's customer relationship management (CRM) software, which impacted the overall customer experience.

Kenny Lui, head of operations at e-banner, explains, "We sought a cloud-based solution that would empower more than 30 customer service team members to work from home during the pandemic without compromising our reputation for responsiveness and quality customer service."
During a period of internal research, e-banner discovered Amazon Web Services (AWS) and learned that a fully remote contact center team was easily achievable with Amazon Connect, which lets you set up a contact center in minutes and scale it to support millions of customers.

Solution | Implementing a Customized Cloud-Based Contact Center

AWS worked closely with AWS Partner Megazone Cloud to transform e-banner's contact center into a modern cloud-based platform. They met with e-banner's leadership team to assess the company's needs; e-banner's top priority was to ensure customer satisfaction through uninterrupted service. "AWS and Megazone Cloud collaborated to present the full suite of features offered by Amazon Connect to our leadership team and provided a demonstration. Their dedicated support in the process of migrating our contact center to the new platform gave us the confidence and trust to proceed with the implementation," says Kenny.

AWS and Megazone Cloud implemented a customized Amazon Connect solution within one month of the initial engagement, automating basic customer service requests with a personalized IVR. The Amazon Connect IVR is fully customizable, flexible, and user friendly, which allows staff to easily modify scripts and design the most effective flows. Kenny says, "With Amazon Connect, we no longer rely on third-party vendors to design our IVR flow, which used to take weeks to implement. Instead, we can make changes to our IVR in just a few hours." e-banner estimates it now saves 80 percent of the time it previously spent on IVR work.

Not only has the implementation of Amazon Connect saved e-banner a significant amount of time, it has also led to a significant reduction in maintenance and upgrade costs. Kenny adds, "Our initial cost concerns were alleviated with Amazon Connect's pay-as-you-go pricing model, which ultimately resulted in 40 percent cost savings for e-banner."

Furthermore, e-banner integrated its CRM and enterprise resource planning (ERP) software with Amazon Connect, streamlining operations for greater efficiency. Consequently, call agents can effortlessly access and retrieve real-time information on the customers they are assisting from a single platform, resulting in further time savings and increased responsiveness.
Outcome | Delivering a Seamless Customer Service Experience with Remote Staff

With Amazon Connect, e-banner gained a stable, reliable contact center system for seamless customer service during the pandemic and beyond. Now the company's customer service staff can work remotely, and the business has the flexibility to onboard new agents from anywhere.

e-banner's management team is also leveraging Amazon Connect's performance monitoring features to gain valuable insights and collect data on essential customer service metrics, including call times and agent productivity. This data guides the company's efforts to continually enhance its customer service. Additionally, e-banner can perform artificial intelligence-based sentiment analysis on calls across multiple languages, providing even more valuable insight into its customers. Through sentiment analysis, management can identify the specific issues that customers most often express negative feedback about and subsequently train agents to communicate better and resolve those issues more effectively.

e-banner looks forward to extending Amazon Connect to its sister company, e-print. The business also intends to adopt Amazon Connect's omni-channel contact center capabilities, which will allow customers to connect with its contact center team via WhatsApp, Facebook, and more.
Kenny concludes, "Amazon Connect's ability to scale and its pay-as-you-go model make it an ideal choice for businesses of all sizes. The seamless management of backend technical issues by the AWS account team also ensures that we can focus on delivering the best possible customer experience."

About e-banner

e-banner is a Hong Kong-based digital printing company that specializes in a variety of printing services, including large-format printing, display stands, event backdrops, outdoor banners, and more. The company has been in operation for over a decade and has served hundreds of clients in Hong Kong and the Asia Pacific region.

About Megazone Cloud

As a leading AWS Premier Consulting Partner in APAC, Megazone Cloud has earned the trust of over 5,000 customers, ranging from startups to large enterprises. Apart from delivering cloud contact center support, Megazone Cloud also offers expertise in artificial intelligence, machine learning, serverless architecture, cloud-based media streaming, and AI chatbot technologies.

" Effectively solve distributed training convergence issues with Amazon SageMaker Hyperband Automatic Model Tuning _ AWS Machine Learning Blog.txt,"

AWS Machine Learning Blog

Effectively solve distributed training convergence issues with Amazon SageMaker Hyperband Automatic Model Tuning

by Uri Rosenberg | on 13 JUL 2023 | in Amazon SageMaker, Best Practices, Expert (400)

Recent years have shown amazing growth in deep learning neural networks (DNNs). This growth can be seen in more accurate models and even new possibilities opened by generative AI: large language models (LLMs) that synthesize natural language, text-to-image generators, and more. These increased capabilities of DNNs come with the cost of massive models that require significant computational resources to train. Distributed training addresses this problem with two techniques: data parallelism and model parallelism. Data parallelism is used to scale the training process over multiple nodes and workers, and model parallelism splits a model and fits it over a designated infrastructure. Amazon SageMaker distributed training jobs enable you, with one click (or one API call), to set up a distributed compute cluster, train a model, save the result to Amazon Simple Storage Service (Amazon S3), and shut down the cluster when complete. Furthermore, SageMaker has continuously innovated in the distributed training space by launching features like heterogeneous clusters and distributed training libraries for data parallelism and model parallelism.

Efficient training in a distributed environment requires adjusting hyperparameters. A common example of good practice when training on multiple GPUs is to multiply the batch (or mini-batch) size by the GPU count in order to keep the same batch size per GPU. However, adjusting hyperparameters often impacts model convergence. Therefore, distributed training needs to balance three factors: distribution, hyperparameters, and model accuracy.

In this post, we explore the effect of distributed training on convergence and how to use Amazon SageMaker Automatic Model Tuning to fine-tune model hyperparameters for distributed training using data parallelism. The source code mentioned in this post can be found in the GitHub repository (an m5.xlarge instance is recommended).

Scale out training from a single to distributed environment

Data parallelism is a way to scale the training process to multiple compute resources and achieve faster training time. With data parallelism, data is partitioned among the compute nodes, and each node computes the gradients based on its partition and updates the model. These updates can be done using one or multiple parameter servers in an asynchronous, one-to-many, or all-to-all fashion. Another way is to use an AllReduce algorithm. For example, in the ring-allreduce algorithm, each node communicates with only two of its neighboring nodes, thereby reducing the overall data transfers. To learn more about parameter servers and ring-allreduce, see Launching TensorFlow distributed training easily with Horovod or Parameter Servers in Amazon SageMaker. With regards to data partitioning, if there are n compute nodes, then each node should get a subset of the data, approximately 1/n in size.
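To make the ring-allreduce idea concrete, the following toy simulation (illustrative only; real implementations such as Horovod or NCCL exchange these chunks between processes in parallel) shows each worker ending up with the element-wise sum of all gradients while only ever passing one chunk per step to its next neighbor on the ring:

import numpy as np

def ring_allreduce(grads):
    # Each worker's gradient is split into n chunks; workers sit on a ring.
    n = len(grads)
    chunks = [list(np.array_split(np.asarray(g, dtype=float), n)) for g in grads]

    # Reduce-scatter: after n-1 steps, worker i holds the full sum of chunk (i+1) % n.
    for step in range(n - 1):
        sends = [(i, (i - step) % n, chunks[i][(i - step) % n].copy()) for i in range(n)]
        for i, idx, data in sends:
            chunks[(i + 1) % n][idx] += data

    # All-gather: pass the completed chunks around the ring for n-1 more steps.
    for step in range(n - 1):
        sends = [(i, (i + 1 - step) % n, chunks[i][(i + 1 - step) % n].copy()) for i in range(n)]
        for i, idx, data in sends:
            chunks[(i + 1) % n][idx] = data

    return [np.concatenate(c) for c in chunks]

# Three workers, each holding a different gradient vector of length 6.
grads = [np.arange(6.0) * (w + 1) for w in range(3)]
result = ring_allreduce(grads)
assert all(np.allclose(r, sum(grads)) for r in result)  # every worker has the sum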
To demonstrate the effect of scaling out training on model convergence, we run two simple experiments:

- Train an image classification model using a fully connected DNN with ReLU activation functions, using the MXNet and Gluon frameworks. For training data, we used the MNIST dataset of handwritten digits. We used the source provided in the SageMaker example repository.
- Train a binary classification model using the SageMaker built-in XGBoost algorithm. We used the direct marketing dataset to predict bank customers who are likely to respond to a specific offer. The source code and steps to reproduce the experiment can be found in the GitHub repo.

Each model was trained twice: on a single instance and distributed over multiple instances. For the DNN distributed training, in order to fully utilize the distributed processors, we multiplied the mini-batch size by the number of instances (four). The following table summarizes the setup and results (values shown as single instance / distributed).

Problem type               Image classification            Binary classification
Model                      DNN                             XGBoost
Instance                   ml.c4.xlarge                    ml.m5.2xlarge
Dataset                    MNIST (labeled images)          Direct Marketing (tabular, numeric and vectorized categories)
Validation metric          Accuracy                        AUC
Epochs/Rounds              20                              150
Number of instances        1 / 4                           1 / 3
Distribution type          N/A / Parameter server          N/A / AllReduce
Training time (minutes)    8 / 3                           3 / 1
Final validation score     0.97 / 0.11                     0.78 / 0.63

For both models, the training time was reduced almost linearly by the distribution factor. However, model convergence suffered a significant drop. This behavior is consistent across the two different models, the different compute instances, the different distribution methods, and the different data types. So, why did distributing the training process affect model accuracy?

There are a number of theories that try to explain this effect:

- When tensor updates are big in size, traffic between workers and the parameter server can get congested. Therefore, asynchronous parameter servers suffer significantly worse convergence due to delays in weight updates [1].
- Increasing batch size can lead to overfitting and poor generalization, thereby reducing the validation accuracy [2].
- When asynchronously updating model parameters, some DNNs might not be using the most recently updated model weights; therefore, they will be calculating gradients based on weights that are a few iterations behind. This leads to weight staleness [3] and can be caused by a number of reasons.
- Some hyperparameters are model or optimizer specific. For example, the XGBoost official documentation says that the exact value for the tree_mode hyperparameter doesn't support distributed training, because XGBoost employs row-splitting data distribution whereas the exact tree method works on a sorted column format.
- Some researchers proposed that configuring a larger mini-batch may lead to gradients with less stochasticity. This can happen when the loss function contains local minima and saddle points and no change is made to step size, leading to optimization getting stuck in such local minima or saddle points [4].

Optimize for distributed training

Hyperparameter optimization (HPO) is the process of searching for and selecting a set of hyperparameters that are optimal for a learning algorithm. SageMaker Automatic Model Tuning (AMT) provides HPO as a managed service by running multiple training jobs on the provided dataset. SageMaker AMT searches the ranges of hyperparameters that you specify and returns the best values, as measured by a metric that you choose.
You can use SageMaker AMT with the built-in algorithms or use your custom algorithms and containers. However, optimizing for distributed training differs from common HPO because, instead of launching a single instance per training job, each job actually launches a cluster of instances. This means a greater impact on cost (especially if you consider costly GPU-accelerated instances, which are typical for DNNs). In addition to AMT limits, you could possibly hit SageMaker account limits for the number of concurrent training instances. Finally, launching clusters can introduce operational overhead due to longer start times.

SageMaker AMT has specific features to address these issues. Hyperband with early stopping ensures that well-performing hyperparameter configurations are fine-tuned and those that underperform are automatically stopped. This enables efficient use of training time and reduces unnecessary costs. Also, SageMaker AMT fully supports the use of Amazon EC2 Spot Instances, which can reduce the cost of training by up to 90 percent compared with On-Demand Instances. With regard to long start times, SageMaker AMT automatically reuses training instances within each tuning job, thereby reducing the average startup time of each training job by 20 times. Additionally, you should follow AMT best practices, such as choosing the relevant hyperparameters, their appropriate ranges and scales, and the best number of concurrent training jobs, and setting a random seed to reproduce results.

In the next section, we see these features in action as we configure, run, and analyze an AMT job using the XGBoost example we discussed earlier.

Configure, run, and analyze a tuning job

As mentioned earlier, the source code can be found on the GitHub repo. In Steps 1–5, we download and prepare the data, create the xgb3 estimator (the distributed XGBoost estimator is set to use three instances), run the training jobs, and observe the results. In this section, we describe how to set up the tuning job for that estimator, assuming you already went through Steps 1–5.

A tuning job computes optimal hyperparameters for the training jobs it launches by using a metric to evaluate performance. You can configure your own metric, which SageMaker will parse based on a regex you configure and emit to stdout, or use the metrics of SageMaker built-in algorithms. In this example, we use the built-in XGBoost objective metric, so we don't need to configure a regex. To optimize for model convergence, we optimize based on the validation AUC metric:

objective_metric_name="validation:auc"

We tune seven hyperparameters:

num_round – Number of rounds for boosting during the training.
eta – Step size shrinkage used in updates to prevent overfitting.
alpha – L1 regularization term on weights.
min_child_weight – Minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with the sum of instance weight less than min_child_weight, the building process gives up further partitioning.
max_depth – Maximum depth of a tree.
colsample_bylevel – Subsample ratio of columns for each split, in each level. This subsampling takes place once for every new depth level reached in a tree.
colsample_bytree – Subsample ratio of columns when constructing each tree. For every tree constructed, the subsampling occurs once.

To learn more about XGBoost hyperparameters, see XGBoost Hyperparameters.
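For readers who skipped the notebook, here is a rough sketch of how a distributed estimator like xgb3 can be constructed with the SageMaker Python SDK. Only the three-instance count and instance type are taken from the post; the container version, bucket path, and hyperparameter values are illustrative assumptions, not the repository's exact code.

import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.image_uris import retrieve

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# Built-in XGBoost container for the current region (version is illustrative).
image_uri = retrieve("xgboost", session.boto_region_name, version="1.5-1")

# instance_count=3 turns each training job into a three-node cluster, which
# is the distribution factor used in the post's experiment.
xgb3 = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=3,
    instance_type="ml.m5.2xlarge",
    output_path="s3://my-bucket/xgb-distributed/output",  # illustrative path
    sagemaker_session=session,
)
xgb3.set_hyperparameters(objective="binary:logistic", num_round=150)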
The following code shows the seven hyperparameters and their ranges, with the required imports from the SageMaker SDK:

from sagemaker.tuner import (
    ContinuousParameter,
    HyperbandStrategyConfig,
    HyperparameterTuner,
    IntegerParameter,
    StrategyConfig,
)

hyperparameter_ranges = {
    "num_round": IntegerParameter(100, 200),
    "eta": ContinuousParameter(0, 1),
    "min_child_weight": ContinuousParameter(1, 10),
    "alpha": ContinuousParameter(0, 2),
    "max_depth": IntegerParameter(1, 10),
    "colsample_bylevel": ContinuousParameter(0, 1),
    "colsample_bytree": ContinuousParameter(0, 1),
}

Next, we provide the configuration for the Hyperband strategy and the tuner object configuration using the SageMaker SDK. HyperbandStrategyConfig can take two parameters: max_resource (optional), the maximum number of iterations to be used by a training job to achieve the objective, and min_resource, the minimum number of iterations to be used by a training job before stopping the training. We use HyperbandStrategyConfig to configure StrategyConfig, which is later used by the tuning job definition. See the following code:

hsc = HyperbandStrategyConfig(max_resource=30, min_resource=1)
sc = StrategyConfig(hyperband_strategy_config=hsc)

Now we create a HyperparameterTuner object, to which we pass the following information:

The XGBoost estimator, set to run with three instances
The objective metric name and definition
Our hyperparameter ranges
Tuning resource configurations, such as the total number of training jobs to run and how many training jobs can run in parallel
Hyperband settings (the strategy and configuration we configured in the last step)
Early stopping (early_stopping_type) set to Off

Why do we set early stopping to Off? Training jobs can be stopped early when they are unlikely to improve the objective metric of the hyperparameter tuning job. This can help reduce compute time and avoid overfitting your model. However, Hyperband uses an advanced built-in mechanism to apply early stopping. Therefore, the parameter early_stopping_type must be set to Off when using the Hyperband internal early stopping feature. See the following code:

tuner = HyperparameterTuner(
    xgb3,
    objective_metric_name,
    hyperparameter_ranges,
    max_jobs=30,
    max_parallel_jobs=4,
    strategy="Hyperband",
    early_stopping_type="Off",
    strategy_config=sc,
)

Finally, we start the automatic model tuning job by calling the fit method. If you want to launch the job in an asynchronous fashion, set wait to False. See the following code:

tuner.fit(
    {"train": s3_input_train, "validation": s3_input_validation},
    include_cls_metadata=False,
    wait=True,
)

You can follow the job's progress and summary on the SageMaker console. In the navigation pane, under Training, choose Hyperparameter tuning jobs, then choose the relevant tuning job. The console shows the tuning job with details on the training jobs' status and performance.

When the tuning job is complete, we can review the results. In the notebook example, we show how to extract results using the SageMaker SDK. First, we examine how the tuning job improved model convergence. You can attach the HyperparameterTuner object using the job name and call the describe method. The method returns a dictionary containing tuning job metadata and results. In the following code, we retrieve the value of the best-performing training job, as measured by our objective metric (validation AUC):

tuner = HyperparameterTuner.attach(tuning_job_name=tuning_job_name)
tuner.describe()["BestTrainingJob"]["FinalHyperParameterTuningJobObjectiveMetric"]["Value"]

The result is 0.78 AUC on the validation set. That's a significant improvement over the initial 0.63!
Next, let's see how fast our training jobs ran. For that, we use the HyperparameterTuningJobAnalytics class in the SDK to fetch results about the tuning job and read them into a Pandas data frame for analysis and visualization:

tuner_analytics = sagemaker.HyperparameterTuningJobAnalytics(tuning_job_name)
full_df = tuner_analytics.dataframe()
full_df.sort_values(by=["FinalObjectiveValue"], ascending=False).head()

Let's see the average time a training job took with the Hyperband strategy:

full_df["TrainingElapsedTimeSeconds"].mean()

The average training job took approximately 1 minute. This is consistent with the Hyperband strategy mechanism, which stops underperforming training jobs early. In terms of cost, the tuning job charged us for a total of 30 minutes of training time. Without Hyperband early stopping, the total billable training duration was expected to be 90 minutes (30 jobs * 1 minute per job * 3 instances per job). That is a threefold improvement in cost savings! Finally, we see that the tuning job ran 30 training jobs and took a total of 12 minutes. That is almost 50 percent less than the expected time (30 jobs / 4 jobs in parallel * 3 minutes per job).

Conclusion

In this post, we described some observed convergence issues when training models in distributed environments. We saw that SageMaker AMT using Hyperband addressed the main concerns that optimizing data-parallel distributed training introduced: convergence (which improved by more than 10%), operational efficiency (the tuning job took 50% less time than a sequential, non-optimized job would have taken), and cost-efficiency (30 vs. the 90 billable minutes of training job time). The following table summarizes our results:

Improvement metric                                      No tuning/naive tuning   SageMaker Hyperband AMT   Measured improvement
Model quality (measured by validation AUC)              0.63                     0.78                      15%
Cost (measured by billable training minutes)            90                       30                        66%
Operational efficiency (measured by total run time)     24                       12                        50%

In order to fine-tune with regard to scaling (cluster size), you can repeat the tuning job with multiple cluster configurations and compare the results to find the optimal hyperparameters that satisfy speed and model accuracy. We included the steps to achieve this in the last section of the notebook.

References

[1] Lian, Xiangru, et al. "Asynchronous decentralized parallel stochastic gradient descent." International Conference on Machine Learning. PMLR, 2018.
[2] Keskar, Nitish Shirish, et al. "On large-batch training for deep learning: Generalization gap and sharp minima." arXiv preprint arXiv:1609.04836 (2016).
[3] Dai, Wei, et al. "Toward understanding the impact of staleness in distributed machine learning." arXiv preprint arXiv:1810.03264 (2018).
[4] Dauphin, Yann N., et al. "Identifying and attacking the saddle point problem in high-dimensional non-convex optimization." Advances in Neural Information Processing Systems 27 (2014).

About the Author

Uri Rosenberg is the AI & ML Specialist Technical Manager for Europe, the Middle East, and Africa. Based out of Israel, Uri works to empower enterprise customers to design, build, and operate ML workloads at scale. In his spare time, he enjoys cycling, hiking, and complaining about data preparation.
TAGS: AI/ML, Amazon SageMaker" Effortlessly Summarize Phone Conversations with Amazon Chime SDK Call Analytics_ Step-by-Step Guide _ Business Productivity.txt,"Business Productivity

Effortlessly Summarize Phone Conversations with Amazon Chime SDK Call Analytics: Step-by-Step Guide
by Jillian Munro, Court Schuett, and Takeshi Kobayashi | on 26 JUN 2023 | in Amazon Chime SDK, Amazon DynamoDB, Amazon EventBridge, Amazon SageMaker, Amazon Simple Storage Service (S3), Amazon Transcribe, AWS Lambda, Business Productivity, Customer Solutions, Kinesis Data Streams, Technical How-to

Introduction

The Amazon Chime SDK Call Analytics Real-Time Summarizer is a solution that provides real-time summarization of phone conversations held through an Amazon Chime SDK Voice Connector. The demo uses Amazon Chime SDK call analytics to obtain conversation transcripts, which are then used to generate a summary of the conversation with Amazon SageMaker. In this blog post, we discuss how to use Amazon Chime SDK Call Analytics to capture conversation transcriptions and how to use a SageMaker endpoint to generate a summary of the conversation as soon as the phone conversation is completed. The solution is versatile and can be used in various scenarios.

Use Cases

Legal Services: Law firms often deal with a high volume of phone calls, and it can be time-consuming for lawyers and legal professionals to manually review and summarize each call. With Amazon Chime SDK Call Analytics, the automatic summarization feature can quickly generate transcripts and summaries of client consultations, court proceedings, or legal negotiations. This enables lawyers to focus on analyzing the content and key points of the calls rather than spending valuable time transcribing them.

Call Centers: Within call centers, customer support representatives can use the Amazon Chime SDK Call Analytics real-time summarizer to analyze support calls as they occur, producing a report of the call within seconds. A summary of the phone call, including a transcript, can be generated for both the representative and the customer.

Healthcare: In the healthcare industry, providers who use telehealth solutions can take advantage of the Amazon Chime SDK Call Analytics Real-Time Summarizer to record SOAP notes for patients during the call.

Financial Services: Financial institutions, including banks, insurance companies, and investment firms, handle numerous client interactions over the phone. Automatic call summarization can assist in compliance monitoring by analyzing and summarizing these calls and flagging potential regulatory or compliance issues. It helps ensure adherence to industry regulations and maintain a high standard of customer service.

Overview

Amazon Chime SDK Call Analytics is a collection of machine learning (ML)-driven capabilities that enable customers to record, transcribe, and analyze their communication sessions in real time.
Amazon Chime SDK Call Analytics has different configuration options, such as Amazon Transcribe or Amazon Transcribe Call Analytics, to create call transcripts, detect and redact PII, and generate call summaries and insights from sentiment (non-talk, talk speed, loudness, interruptions, and voice tone). Amazon Chime SDK Call Analytics can record calls and call metadata to Amazon Simple Storage Service (Amazon S3), as well as send real-time alerts via Amazon EventBridge on matched rules.

This demo offers a webpage that displays real-time transcriptions of phone conversations between agents and customers. Once the conversation is completed, the summarization of the conversation is generated and displayed in the upper section of the page.

Technical Walkthrough

[Architecture diagram of the Amazon Chime SDK Call Analytics Real-Time Summarizer solution]

Getting the Phone System Set Up

The Amazon Chime SDK Voice Connector is a service that operates on a pay-as-you-go basis and facilitates Session Initiation Protocol (SIP) trunking for your current phone system. To simplify the setup of the phone system in this demo, an Asterisk PBX web server is deployed on an EC2 instance. The Amazon Chime SDK Voice Connector is also deployed and assigned a phone number. Any incoming calls to this number are directed to the Asterisk PBX web server.

Capturing Transcripts

To generate a summary quickly, it is necessary to capture real-time transcriptions using Transcribe through Amazon Chime SDK Call Analytics. To achieve this, we take the output of the Amazon Chime SDK Call Analytics media insights pipeline and write the transcriptions to an Amazon DynamoDB table. This is accomplished by processing the output of the Amazon Kinesis Data Stream with an AWS Lambda function:

try {
  // Persist each transcript fragment, keyed by call (transactionId) and time.
  const putCommand = new PutItemCommand({
    TableName: process.env.TRANSCRIBE_TABLE,
    Item: {
      transactionId: { S: metadata.transactionId },
      timestamp: { N: epochTime },
      channelId: { S: postData.TranscriptEvent.ChannelId },
      startTime: { N: postData.TranscriptEvent.StartTime.toString() },
      endTime: { N: postData.TranscriptEvent.EndTime.toString() },
      transcript: {
        S: postData.TranscriptEvent.Alternatives[0].Transcript,
      },
    },
  });
  await dynamoDBClient.send(putCommand);
} catch (error) {
  console.error('Failed to insert record into DynamoDB:', error);
}

Simultaneously, we record this data to a WebSocket API through Amazon API Gateway, allowing for near real-time delivery to the client for the duration of the call.

Post-Call Summarization Processing

Upon completion of the call, a notification event is transmitted to EventBridge. Upon receipt of this event, we:

Query the DynamoDB table
Parse the results
Create a prompt
Send the prompt to our SageMaker endpoint
Send the response to our WebSocket API

Because we have been capturing the transcription results in real time, the process of reading, parsing, and making a request to SageMaker completes rapidly. This enables us to generate a summary of the call within seconds, rather than minutes. A sketch of these steps follows.
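The sample repository implements this handler in TypeScript; the following is a rough Python/boto3 sketch of the same post-call steps. The table name, key schema, prompt format, and endpoint name are illustrative assumptions, and the final WebSocket delivery step is omitted.

import json

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
sagemaker_runtime = boto3.client("sagemaker-runtime")

def summarize_call(transaction_id: str) -> dict:
    # 1. Query all transcript fragments for this call, oldest first.
    table = dynamodb.Table("TranscribeTable")  # illustrative table name
    items = table.query(
        KeyConditionExpression=Key("transactionId").eq(transaction_id),
        ScanIndexForward=True,  # sort ascending by the timestamp sort key
    )["Items"]

    # 2. Parse the results into a single transcript string.
    transcript = " ".join(item["transcript"] for item in items)

    # 3. Create a prompt for the summarization model.
    prompt = f"Summarize the following phone call:\n\n{transcript}\n\nSummary:"

    # 4. Send the prompt to the SageMaker endpoint hosting the model.
    response = sagemaker_runtime.invoke_endpoint(
        EndpointName="call-summarizer-endpoint",  # illustrative endpoint name
        ContentType="application/json",
        Body=json.dumps({"prompt": prompt, "max_tokens": 200}),
    )
    return json.loads(response["Body"].read())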
Prerequisites

To implement the solution outlined in this blog post, you will need the following:

yarn – https://yarnpkg.com/getting-started/install
Docker Desktop – https://www.docker.com/products/docker-desktop/
An AWS account
A basic understanding of telephony
Access to Amazon SageMaker foundation models (requesting access could take a few days)
A subscription to the Cohere Generate Model – Command-Light in AWS Marketplace

Deploy

We have provided a sample on GitHub that is easy to deploy and test in your own environment. Once you have confirmed that all prerequisites are met, you can clone the repository to your local environment and run 'yarn launch' from the command line to get started. Upon successful deployment, the output will provide you with the DistributionUrl and PhoneNumber information. Alternatively, you can find this information on the CloudFormation page in the AWS Console. This information will be required for testing.

Testing

To test this demo, go to the CloudFront distribution webpage. If 'Endpoint Status' shows as 'Endpoint disabled', click 'Start Endpoint' to enable the SageMaker endpoint. This process may take a few minutes to complete. Once 'Endpoint Status' shows as 'InService', you are ready to begin testing.

Attention: This deployment includes a SageMaker endpoint, which incurs additional charges while it is running. We recommend stopping the SageMaker endpoint with the 'Stop Endpoint' button once you are finished experimenting, to avoid unexpected charges. See Amazon SageMaker Pricing for relevant costs.

Dial the provided phone number; when the call is answered, a WAV file plays, simulating the response of a sample agent.

Clean up

Once you have finished experimenting with the solution, you can clean up your resources by running 'yarn cdk destroy'. This removes all resources that were created during the deployment of the solution.

Conclusion

This blog post provides a detailed explanation of the deployment steps required to run the Amazon Chime SDK Call Analytics Real-Time Summarizer, as well as the technical implementation of this simple solution. The Amazon Chime SDK Call Analytics Real-Time Summarizer provides an instant summary of phone conversations, opening up new possibilities for post-conversation reporting and analysis. We recommend using this solution as a starting point for your projects and taking further steps to provide feature differentiation for your service.

Learn More

Amazon Chime SDK in the AWS Console
Amazon Chime SDK launches call analytics
GitHub: amazon-chime-sdk-call-analytics-real-time-summarizer
Using Amazon Chime SDK call analytics
Using the call analytics workflows
Blog: Amazon Chime SDK Call Analytics: Real-Time Voice Tone Analysis and Speaker Search

TAGS: amazon chime voice connector, Amazon Machine Learning, Amazon Transcribe Call Analytics, SIP trunking

Jillian Munro is a Program Manager for the Amazon Chime SDK. Jillian is focused on Amazon Chime SDK education and awareness.

Court Schuett is the Lead Evangelist for the Amazon Chime SDK, with a background in telephony, who now loves to build things that build things. Court is focused on teaching developers and non-developers alike how to build with AWS.

Takeshi Kobayashi is a Senior Chime Specialist Solutions Architect at AWS, based in Seattle. He is passionate about building web media applications with AWS services.
" Empowering Customers to Take an Active Role in the Energy Transition Using AWS Serverless Services with Iberdrola _ Case Study _ AWS.txt,"Customer Stories / Energy - Power & Utilities (2023)

Benefits: 10–30% projected reduction in smart device energy consumption; reduces customers' carbon footprint through energy consumption optimization; scalability achieved to connect millions of devices.

During the prototyping comparison, Iberdrola specifically looked for scalability because the company anticipates needing to manage millions of devices as more customers use the platform over time. To store all the data coming from its ASA platform, Iberdrola uses Amazon DynamoDB, a fast, flexible NoSQL database service for single-digit millisecond performance at virtually any scale.

Another use case for the ASA platform is adjusting energy consumption based on fluctuating energy prices. With recommendation models trained and deployed using Amazon SageMaker, the ASA platform can, for example, heat water in a customer's water tank when energy is cheapest instead of heating it on demand during peak hours. "It's not simple for customers to optimize energy consumption because they need to understand their devices' energy needs as well as changing energy prices, but we're developing different variables in the platform to handle complex energy optimization," says Carlos Pascual, head of connected energy services at Iberdrola. The company uses AWS Lambda to run code for the platform without provisioning or managing infrastructure, helping Iberdrola run the solution efficiently, scale as needed, and reduce its own carbon footprint. Iberdrola also increases efficiency using serverless and scalable AWS Step Functions, a visual workflow service that helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning (ML) pipelines.

Outcome | Expanding the ASA Platform Using Scalable AWS Services

Iberdrola will launch the commercial product that customers can use to manage the ASA platform for their own devices in Spain in 2023. The company plans to make the product available in the rest of its geographies as quickly as possible, using the scalability of the product and the global footprint of AWS to achieve additional cost efficiencies. Iberdrola's platform will support residential customers first, but the company plans to support businesses in the future to help manage resources like buildings and fleets. "Scalability is key," says Pascual. "When we made the prototype using AWS services, we tested to see if we could connect to millions of devices because that's the volume we anticipate in the next few years."

Amazon SageMaker is built on Amazon's two decades of experience developing real-world ML applications, including product recommendations, personalization, intelligent shopping, robotics, and voice-assisted devices.
About Iberdrola

Based in Spain, Iberdrola is a global electric utility company that connects 40 million customers around the world in countries like Portugal, Italy, France, the United Kingdom, the United States, Brazil, Mexico, and Australia. Iberdrola is one of the largest global producers of renewable energy and manages businesses for network distribution and retail in the energy industry.

Additional benefits: provides flexibility services and introduces more renewable energy to the grid; adjusts customer energy consumption based on energy price or source.

Opportunity | Developing a Smart Devices Monitoring Platform to Help Customers Save Energy

Iberdrola's main goal is to help customers save energy and reduce their carbon footprint with the ASA platform. It focuses on large devices that can be flexibly managed to achieve the greatest impact on energy consumption. For example, when solar panels produce a large amount of energy in the middle of the day, Iberdrola's ASA platform can intelligently increase household energy consumption to perform tasks like charging an electric vehicle instead of routing that energy to the grid, which is less cost effective for customers than consuming the energy they produce. The platform can also increase consumption when the energy source is renewable, delaying nonurgent energy consumption until the greenest hours of the day. This advanced control can also benefit the grid, providing flexibility services and introducing more renewable energy into the system. "Our solution is crucial, especially as the industry is trying to reduce the dependence on fossil fuels," says Pascual.

Solution | Using AWS Lambda to Support a Projected 10–30% Reduction in Smart Device Energy Consumption for Iberdrola Customers

Iberdrola Innovation Middle East, the global digital solutions development company of Iberdrola, is in charge of the algorithms, artificial intelligence, machine learning, and logic rules that help the platform make meaningful recommendations and automated actions to minimize energy, cost, and emissions. For this part of the platform, Iberdrola Innovation Middle East uses Amazon SageMaker—fully managed infrastructure, tools, and workflows for building, training, and deploying machine learning models for virtually any use case. "Our solution optimally reschedules loads, such as electrical vehicle chargers, heat pumps, or water heaters, without needing additional hardware in our clients' homes," says Santiago Bañales, managing director at Iberdrola Innovation Middle East. "It's a 100 percent cloud solution." Iberdrola anticipates that its ASA platform will help customers reduce energy consumption by a projected 10–30 percent depending on the devices involved. This outcome will lower costs for customers and reduce power consumption across the grid.

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
"Using AWS services, our energy management platform helps every customer figure out how to consume energy more efficiently and more sustainably," says Pascual.

Global energy company Iberdrola facilitates its customers' electrification journey with a portfolio of Smart Solutions and understands the sustainable impact of managing energy consumption for devices like electric vehicle chargers, heat pumps, solar panels, and water heaters. To further support sustainability, Iberdrola wanted to develop a scalable, high-performing, and cost-efficient platform for consumers.

In 2021, the company looked to Amazon Web Services (AWS) to build the prototype for its Advanced Smart Assistant (ASA) platform using services like AWS Lambda, a serverless, event-driven compute service for running code without provisioning or managing servers. The ASA platform connects to any product in Iberdrola's Smart Solutions portfolio and controls it autonomously to reduce a customer's energy bills and carbon footprint while maintaining comfort. The ASA platform also offers advanced insights and recommendations to help customers progress in the energy transition and in their efficiency.

Iberdrola had a vision to create the ASA platform to empower customers to connect remotely to smart home devices, monitor them, and determine whether they need to take action to improve power consumption. To evaluate how the platform would perform and scale using AWS services, Iberdrola conducted a proof of concept for comparison. In the first phase, Iberdrola worked alongside the AWS prototyping team in late 2021, collaborating with teams in multiple countries to connect test devices. When the proof-of-concept testing using AWS services was successful, Iberdrola moved on to the second phase a few months later to do industrial development for the ASA platform. "When we compared AWS to our first tests, it was clear that using AWS was going to be much better, more scalable, and more cost effective," says Pascual. "It wasn't a difficult decision."

Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale." ENGIE Rapidly Migrates Assets and Accounts Easing Divestiture Using AWS _ Engie Case Study _ AWS.txt,"ENGIE Rapidly Migrates Assets and Accounts, Easing Divestiture Using AWS

Customer Stories / Energy - Power & Utilities (2022)

Benefits: migrated millions of dollars' worth of workloads from 70 AWS accounts in 8 months; transferred 95% of workloads in 2 months; experienced virtually no downtime or service interruptions to its production environment; centralized its financial operations to reduce compute costs; saved millions of dollars annually by maintaining its AWS Savings Plan.

To facilitate its divestiture, ENGIE engaged AWS Enterprise Support, a consultative guidance service whose main focus is helping customers achieve their outcomes and find success in the cloud. Using AWS Enterprise Support, ENGIE received access to a dedicated technical account manager, who verifies technical procedures, advises on automation opportunities, and coordinates efforts between ENGIE and AWS.
Through this collaboration, ENGIE aligned the scheduling of its new project with the AWS Enterprise Support team in case it needed technical support along the way. "AWS Enterprise Support helps us sleep better at night," says Frédéric Poncin, head of cloud center of excellence at ENGIE. "We know that if something happens, we can call them, and they will respond."

Learn how ENGIE, a global reference for low-carbon energy and services, seamlessly transferred IT assets using AWS Cloud Operations.

Opportunity | Preparing for a Large-Scale Divestiture

Headquartered in La Défense, France, ENGIE's purpose is to accelerate the transition toward a carbon-neutral economy through reduced energy consumption and environmentally friendly solutions. This purpose brings together the company, its 170,000 employees, its clients, and its shareholders, and builds on its key areas of business—gas, renewable energy, and services—to offer competitive solutions. Globally, the group generated €57.9 billion in 2021.

To support its purpose, ENGIE decided to form a separate division that would absorb the majority of its services-led activities. In July 2021, the company created EQUANS, a global multitechnical services leader. EQUANS employs 74,000 people in 17 countries and generates an annual turnover of over €12 billion. This divestiture meant that the company needed to efficiently migrate thousands of workloads to a separate and secure environment without impacting its production. ENGIE had already widely adopted Amazon Web Services (AWS), and at the time there were several large-scale, ongoing cloud migration projects that the company wanted to avoid impacting. To simplify the management of its workloads, ENGIE uses AWS Organizations, which gives companies the ability to centrally manage and govern their environments as they scale their AWS resources. In 8 months, the energy group completed a complex divestiture by migrating workloads from 70 AWS accounts, including multiple production systems, with minimal effort compared with a traditional data center migration project.

AWS Organizations lets you create new AWS accounts at no additional charge. With accounts in an organization, you can easily allocate resources, group accounts, and apply governance policies to accounts or groups. AWS Service Catalog lets you centrally manage deployed IT services, applications, resources, and metadata to achieve consistent governance of your infrastructure as code (IaC) templates.

ENGIE duplicated this setup for EQUANS and, with its baseline environment configured with security, networking, governance, and identity and access management, ENGIE could securely transfer existing accounts to a new, separate environment. First, ENGIE manually reassigned a small batch of its accounts using AWS Organizations to see if that would have an effect on its operations. "It was a new approach," says Frédéric Poncin. "We did not have to migrate workloads. We did not have to migrate data. We just reassigned the ownership of our AWS accounts to the new organization and fixed a few technical dependencies." Throughout the project, ENGIE experienced virtually no downtime or service interruptions to its production workloads.
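The case study doesn't publish ENGIE's transfer tooling; the following is a minimal boto3 sketch of the account hand-off pattern described here, in which ownership of a member account is reassigned to a new organization rather than migrating any workloads. The account ID and the split of credentials across the three sessions are illustrative assumptions.

import boto3

# Each call must run with the right credentials: the source organization's
# management account, the destination's management account, and the member
# account itself. Named profiles stand in for those principals here.
source_mgmt = boto3.Session(profile_name="source-mgmt").client("organizations")
dest_mgmt = boto3.Session(profile_name="dest-mgmt").client("organizations")
member = boto3.Session(profile_name="member").client("organizations")

def transfer_account(account_id: str) -> None:
    # The source organization releases the account, making it standalone.
    source_mgmt.remove_account_from_organization(AccountId=account_id)

    # The destination organization invites the now-standalone account...
    handshake = dest_mgmt.invite_account_to_organization(
        Target={"Id": account_id, "Type": "ACCOUNT"}
    )["Handshake"]

    # ...and the member account accepts the invitation to complete the move.
    member.accept_handshake(HandshakeId=handshake["Id"])

transfer_account("111122223333")  # placeholder account ID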
Solution | Transferring IT Assets Seamlessly Using AWS Cloud Operations

ENGIE was already operating a secure, multi-account AWS environment with an account factory based on AWS best practices for AWS Organizations, AWS Service Catalog, and AWS Cloud Operations, which helps businesses operate securely and safely in the cloud at scale. Under this model, the company can support its local IT teams in adopting a cloud-first approach, addressing business needs, and centralizing its financial operations to reduce compute costs and align with its security standards.

In November 2021, ENGIE accelerated the project by automating the transfer of its assets using AWS Organizations. By automating this task, the company could complete an AWS account transfer in minutes. Within 2 months, ENGIE migrated over 95 percent of its accounts while keeping its IT team free to focus on other projects. In total, ENGIE migrated several million dollars' worth of workloads across 70 AWS accounts in 8 months and avoided a costly and risky workload migration project that would have required a large-scale mobilization of its IT teams. "It was a smooth ride," says Frédéric Poncin. "We removed the burden from our IT team that was already loaded with other tasks and divestiture activities."

ENGIE also worked alongside AWS Enterprise Support to maintain the benefits of using Savings Plans, a flexible pricing model offering lower prices compared with On-Demand Pricing in exchange for a specific usage commitment over a 1- or 3-year period. As a longtime user of AWS, ENGIE had committed to an AWS Savings Plan years prior, which has helped it save millions of dollars each year. "We had questions about whether we could keep our commitment and cost savings as we split part of our organization," says Frédéric Poncin. "By collaborating with AWS Enterprise Support, we could reassign part of our long-term commitment to the new organization, which brings in significant cost savings for both ENGIE and EQUANS."

Outcome | Supporting a Greener Future on AWS

Since completing this project, EQUANS has been handed over to a new team, and it is operated autonomously. As a result, ENGIE can allocate its resources toward its ambitious net-zero carbon strategy, which it plans to fulfill by 2045. This decarbonization strategy includes increasing its renewable hydrogen capacity to 4 GW and its overall renewable energy capacity to 80 GW by 2030. As the company moves closer to achieving its goals, it will continue to rely on AWS for scalable and cost-effective cloud services.

About ENGIE

ENGIE is a global reference in low-carbon energy and services. The group is committed to accelerating the transition toward a carbon-neutral world through reduced energy consumption and more environmentally friendly solutions.

AWS Enterprise Support provides you with concierge-like service where the main focus is helping you achieve your outcomes and find success in the cloud.
"Our multi-account strategy using AWS Organizations has been key to our success when facing both acquisitions and divestitures," says Frédéric Poncin. "This strategy has given us the agility that we need to accelerate our organizational transformation."

ENGIE announced the sale of its EQUANS division in 2021. This announcement was a major step forward in support of the group's strategic plan to focus on accelerating investment in its core activities, notably in energy renewables, and to achieve net-zero carbon emissions by 2045. However, creating this autonomous entity required ENGIE, which had been running on AWS since 2017, to transfer thousands of virtual machines and AWS-managed services into a separate environment without impacting its operations. Originally, the company had started working on AWS to modernize its IT systems, and it had adopted AWS Organizations and AWS Service Catalog, which helps organizations create and manage catalogs of IT services that are approved for use on AWS. These services gave its teams more flexibility in their resource management. "Using AWS Organizations and a multi-account strategy, our IT teams can deploy and operate workloads at a local level in a controlled environment," says Frédéric Poncin. "We quickly grew from two AWS accounts to five hundred AWS accounts under this model."

AWS Cloud Operations provides a model and tools for a secure and efficient way to operate in the cloud. You can transform your organization, modernize and migrate your applications, and accelerate innovation with AWS." Enhancing customer experience using Amazon CloudFront with Zalando _ Case Study _ AWS.txt,"Zalando Enhances Customer Experience Using Amazon CloudFront

Customer Stories / Retail & Wholesale (2022)

Benefits: 5 billion images delivered per day; 100,000 transactions handled per second, on average; 99.5% cache hit ratios achieved; 3x reduction in requests to nonoptimized images; increased developer visibility and control.

Zalando, a leading fashion, beauty, and lifestyle-focused online platform based in Berlin, Germany, was looking to optimize its services in the face of rapid growth. Zalando connects customers to brands and products across 25 European markets and serves more than 49 million active customers. A key component of Zalando's online customer experience is the use of rich media content across its web and app properties. The solution Zalando had in place to manage, transform, and deliver images was not providing enough developer visibility or control—both vital factors in supporting continued growth and a differentiated customer experience.

CloudFront Functions is ideal for high-scale, latency-sensitive operations like HTTP header manipulations, URL rewrites/redirects, and cache-key normalizations. These types of short-running, lightweight operations support traffic that is often unpredictable and spiky.

Zalando migrated quickly and flexibly. By working alongside the Enterprise Support, Service Specialists, and Service teams at AWS, Zalando planned the migration timeline in a way that avoided overlaps with customer campaigns and market events. Zalando's migration to CloudFront started in August 2020 and lasted 4 months, pausing in preparation for Cyber Week, a busy time of year for online retailers. The first phases of the migration started with small groups of customers so that the company could detect any migration improvement opportunities without significantly affecting Zalando customers.
Zalando migrated over 20 websites and apps during this process, for a combined 26.93 PB of data. The peak traffic handled by CloudFront has regularly exceeded 100,000 requests per second.

In August 2020, Zalando decided to migrate its media management and delivery solution to Amazon Web Services (AWS) using Amazon CloudFront, a content delivery network service built for high performance, security, and developer convenience. Zalando used CloudFront to improve scalability, provide enhanced online shopping experiences, and improve developer observability.

Outcome | Driving Future Customer Engagement

Zalando migrated to CloudFront to improve the media management and delivery architectures that drive the shopper experience so that it could provide better services for its customers. Support from the AWS team meant Zalando could conduct a smooth migration, resulting in substantial benefits. "The business benefits of using Amazon CloudFront are the operational flexibility as well as the ability to monitor the health of the solution and experiment and reverse changes quickly," says Przemek Czarnecki, vice president of software engineering at Zalando. "We can react to incidents in near real time without waiting for support to be called in. This operational flexibility is a big, big benefit for us."

Initially, Zalando decided to use Lambda@Edge, a feature of CloudFront that lets customers run code closer to the users of their applications to improve performance and reduce latency. Zalando used Lambda@Edge to run image-width normalization and to rewrite URLs based on the viewer device type. Following the release of CloudFront Functions, a complementary edge compute runtime environment deployed within CloudFront edge locations and built for short-running, latency-sensitive JavaScript code, Zalando switched to CloudFront Functions to further reduce costs and optimize the performance of its solution. Through the direct relationship between Zalando and the CloudFront service team, Zalando customized the behavior of its website and mobile apps. With prelaunch hands-on access to CloudFront Functions, the Zalando development team further optimized the image-delivery solution. "I was very happy to be supported on multiple levels during multiple stages," says Emil Varga, lead software engineer at Zalando. "Starting very early, when we were investigating proofs of concept, there was regular communication. We were sending code to check for validity and for hurdles in our way."

Zalando wants to continue to innovate the management and manipulation of rich media content using AWS. It is planning to encourage customer engagement by building an interactive ecommerce solution using AWS Elemental MediaConvert, a file-based video transcoding service with broadcast-grade features that can process video files and clips to prepare on-demand content for distribution or archiving.

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience, securely delivering content with low latency and high transfer speeds.
About Zalando

Focused on fashion and lifestyle, Zalando is an online retailer based in Berlin, Germany. Founded in 2008, it connects customers, brands, and partners across 25 European countries.

Opportunity | Increasing Developer Ownership to Support Growth

To address these challenges, the team at Zalando decided to build its new media management solution using Amazon CloudFront. "We looked at Amazon CloudFront as an extension of our existing AWS product portfolio," says Przemek Czarnecki. "Migrating to AWS simplified the way that we develop and integrate products." Zalando used CloudFront for its programmability and flexibility, both essential to scale operations and match increases in customer demand. After the migration, Zalando has been achieving cache hit ratios of 99.5 percent, and its new image-delivery solution serves around five billion images daily. CloudFront and CloudFront Functions were fully implemented prior to Cyber Week 2021. "I was responsible for engineering aspects of Cyber Week in 2021, and there was not a single issue related to Amazon CloudFront," says Czarnecki. With around 250 million online orders in 2021, the scale and efficiency of Zalando's solution on CloudFront played a key role in delivering an excellent customer experience. Zalando has implemented further optimizations, leading to a three times reduction in requests to nonoptimized images on the home screens of both the company's mobile and web applications. Teams across Zalando have switched to using the pipeline built on CloudFront for other types of content due to its enhanced performance and flexibility of usage.

Solution | Migrating to the AWS Edge

In May 2021, Zalando began to use CloudFront Functions in production. "The big change with CloudFront Functions is smooth configuration," says Varga. "It scales on demand and makes it simpler to deploy and reliably revert tasks on an operational level and for everyday development." As the company began to roll out the new solution across its web properties, Zalando quickly overcame obstacles. "When adjustments were needed, we were able to roll back very quickly, making changes before real downtime could occur, which was key," says Varga. Today, Zalando uses both CloudFront Functions and Lambda@Edge for different use cases. Having multiple layers of edge compute provides more flexibility, visibility, and control for its developers and a better overall experience for customers.

Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance and reduces latency. With Lambda@Edge, you don't have to provision or manage infrastructure in multiple locations around the world.
This helps Zalando react with agility and better serve both customers and the business.

Zalando, an online fashion and lifestyle business based in Germany, migrated its media management solution to Amazon CloudFront and increased developer control, leading to an improved customer experience. As a result of significant growth, Zalando had outgrown its previous image management solution, which offered limited flexibility in the configuration capabilities available to Zalando's engineering and product teams. Additionally, operational insights were sparse, creating a lack of visibility into how efficiently the service was functioning and what optimizations could be made. This impacted Zalando's ability to adapt and optimize its digital storefronts, and the lack of detailed reporting around image transformation presented challenges in delivering a consistent customer experience during peak seasonal events." EPAM Systems.txt,"The Maestro platform and its companion app, Maestro Databased (MD), comprise a modern solution designed for effective hybrid and multi-cloud infrastructure management, monitoring, analytics, FinOps enablement, and other business-critical operations. They are designed for use in large enterprises, where top performance is needed from every component. Using AWS, Maestro performance improved by 10 percent and its price-to-performance ratio improved by 40 percent, according to internal tests comparing the AWS infrastructure to its previous setup. This means that a customer using Maestro can now manage its cloud environments more effectively and efficiently.

While migrating to AWS, the EPAM team explored the best ways to structure the Maestro technology. It ran tests on resource-intensive processes, such as simulated month-end procedures, and was impressed by the results it experienced using AWS. "This was a good stress test," says Anton Isaiev, lead systems engineer at EPAM, who was engaged in the migration project. "The task needs the highest CPU and memory capacity because the database uses the in-memory cache for requested data. The new platform we built on AWS performed well. We couldn't overload it, it just scaled and scaled. And it performed about 10 percent faster than our on-premises systems."

One of MCC's enterprise customers needed more processing power for its implementation of Maestro, and EPAM's Cloud Native Research and Development Center was invited to participate in the renovation of the platform's entity installed on the customer's side. The EPAM team decided to migrate Maestro to a solution based on Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity, and AWS Graviton. The migration was one of the first cases of AWS Graviton being used at an organization level. "The results of the tests and proof of concept (PoC) looked great," says Isaiev.
"We migrated applications and cloud-native solutions to AWS Graviton to take advantage of performance and scalability benefits."

EPAM Gains 40% Price-Performance Improvement for a Cloud Management App With AWS Graviton

Customer Stories / Financial Services / EMEA (2023)

Benefits: 40% price-performance improvement; 10% performance improvement; ability to scale; compliance with regional and industry standards.

Opportunity | Choosing AWS Graviton for Scalability and Faster Performance

EPAM was tasked by Maestro Cloud Control with migrating its Maestro hybrid cloud management platform to AWS Graviton within a pre-existing enterprise infrastructure. The aim of the project was to reduce Maestro's ongoing R&D cost and improve its performance. This was achieved by improving processing speed by 10 percent while using fewer resources to reach that higher level of performance. Maestro Cloud Control (MCC) uses EPAM's software engineering expertise to create and evolve the Maestro platform. The platform provides automated control over virtual resource creation, updates, monitoring, analytics, billing, charge-back, compliance, security threat detection, and usage optimization recommendations.

To meet user needs and set the product up for future growth, EPAM redeveloped Maestro into a cloud-native application on Amazon Web Services (AWS). The resulting platform relies significantly on AWS Graviton, processors designed by AWS to deliver the best price performance for cloud workloads running in Amazon EC2, which provides secure and resizable compute capacity for virtually any workload.

Solution | Gaining Value and a 10% Speed Boost

The Maestro platform also interacts with various applications deployed by its customers using APIs (application programming interfaces). To enable users to interact with the platform with predictable performance, the Maestro migration team selected Amazon API Gateway, a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale, and built the platform's cloud-native user interface on AWS Lambda, a serverless, event-driven compute service. Both of these services provide customers with a high-quality experience for accessing their unique applications. The team also uses Amazon EC2 R6g instances, which are ideal for running memory-intensive workloads, to make the applications based on its databases work faster. This means that the customers' technical managers, DevOps teams, and engineers experience speedy performance for all data-management tasks.

Maestro's move to AWS made sense for the customer, both in terms of capabilities and operational benefits. "The main advantage is the ratio of price-to-value," says Isaiev. "That was a real winner, with an improvement of about 40 percent over our previous setup.
The project team now has the necessary capacity to handle large workloads, and extra resources to increase its staff productivity—all for a reasonable price. And the implementation works beautifully. There is a wide range of specialized instances to meet its needs, and the amount of compute power it can use can be scaled without limit."

Outcome | Using AWS to Solve Its Customers' Problems

The cloud-native Maestro on AWS performs faster than its previous on-premises version and delivers greater value to the business. "The customer gets much more for the same price, using AWS Graviton," says Isaiev. "It has faster performance for all compute tasks and greater access to resources. It'd cost so much more to try to replicate this in an on-premises system."

About EPAM

EPAM provides digital transformation and product engineering services to help businesses plan, build, and run their IT systems. Headquartered in the US, the company operates globally in more than 50 countries. EPAM's more than 59,000 staff help businesses reimagine themselves with an eye to today's challenges and the digital future. Its software engineering heritage combined with its strategic business and innovation consulting generated $4.82 billion in revenues in 2022, according to its annual report." Esade Business School Increases Graduates Employability Using AWS Education Programs _ Case Study _ AWS.txt,"Esade Business School Increases Graduates' Employability Using AWS Education Programs

Customer Stories / Education

The Esade Business School bolstered student employability by incorporating AWS Academy into its business curriculum to teach the fundamentals of building IT infrastructure on AWS.

Benefits: teaches IT fundamentals to students building IT infrastructure on AWS; improves student employability; strengthens curriculum vitae; helps identify and develop talent with critical skills for implementing cloud initiatives.

Whether their intended role is sales, management, or business development, students need a basic understanding of the cloud. Because the cloud is used widely in all industries, employers now expect all new employees, not just those trained in technical fields, to be cloud-savvy.
Based in Barcelona, Spain, Esade has over 12,000 students and over 400 faculty at its business school, law school, and language center. The Esade Business School consistently ranks as one of the top business schools in the world. As a leader in business education, it started its MSc in Business Analytics to help students understand how big data and data analytics are used in the marketing, retail, and finance industries.

Opportunity | Identify and Develop Cloud Skills

By offering its students the opportunity to learn more about Amazon Web Services (AWS) and cloud computing, the Esade Business School bolstered student employability and fulfilled the industry need for technical education. As a prestigious international business school, Esade realized that it needed to include technical education as part of its curriculum. Requiring the AWS Academy Cloud Architecting course as part of its curriculum and offering students a chance to get certified as an AWS Certified Solutions Architect–Associate helped Esade stay on the cutting edge of education. As a result, students have stronger curricula vitae and increased employment options in businesses with cloud computing platforms and business development.

Solution | Leading the Way in Technical Education for Business

Esade Business School's MSc in Business Analytics accepts approximately 130–140 students each year. About 70 percent of these students achieve their AWS Certified Solutions Architect–Associate certification, which focuses on the design of cost- and performance-optimized solutions and demonstrates a strong understanding of the AWS Well-Architected Framework. Most graduates go on to work in cloud computing, including at AWS and other teams and organizations that use AWS, and about two-thirds work in business development roles. As part of the course, students can opt to take the AWS Certified Solutions Architect–Associate certification exam. To boost the take-up rate, students who pass the certification exam receive the maximum score for the final exam in the course, which counts for 60 percent of the overall course grade.
That's why, when the Esade Business School began offering a master of science (MSc) in Business Analytics in 2018, it worked with AWS Education Programs to offer students the opportunity to earn their certification as an AWS Certified Solutions Architect–Associate. This credential helps organizations identify talent with critical skills for implementing cloud initiatives and gives graduates an advantage when it comes to postgraduate employability. As part of the MSc requirements, Esade Business School students take the AWS Academy Cloud Architecting course. In the course, students learn the fundamentals of building IT infrastructure on AWS through lectures, hands-on labs, and project work. The course incorporates AWS content, like whitepapers, to explain cloud infrastructure fundamentals. It also uses case studies to illustrate how major corporations achieved positive business outcomes when they deployed cloud infrastructure.

Outcome | Preparing the Next Generation of Cloud Talent

Earning AWS Training and Certification credentials gives Esade Business School graduates an advantage when it comes to postgraduate employability. "Beyond technical knowledge, the AWS course taught me that there are opportunities in sales, management, and business development as well," says Javier Poveda-Panter, a data science consultant at AWS and former Esade Business School student. "We learned how to help our customers integrate cloud features and generate value in the long run."

AWS Services Used: AWS Academy, which empowers higher education institutions to prepare students for industry-recognized certifications and careers in the cloud, and AWS Training and Certification, where students learn from AWS experts and build their future in the AWS Cloud.

Establishing the Nations Largest Mileage-Based User Fee Program Using Amazon Connect with the Virginia DMV _ Case Study _ AWS.txt,"Establishing the Nation's Largest Mileage-Based User Fee Program Using Amazon Connect with the Virginia DMV

Fuel-tax revenue is critical to maintaining the roads that get us from point A to point B, but in 2019, as overall vehicle fuel efficiency increased and more drivers purchased electric and hybrid vehicles, the amount of taxes paid at the gas pump declined. To address reduced revenue, the Virginia State Legislature passed a bill in 2020 creating a highway use fee for fuel-efficient and electric vehicle owners and directed the Virginia Department of Motor Vehicles (Virginia DMV) to create a per-mile fee program as a payment option. To create this program, the Virginia DMV turned to Emovis, a company providing a usage-based mobility solution and contact center powered by Amazon Web Services (AWS). The Virginia DMV implemented the Mileage Choice Program using Emovis's solution in only 6 months and initially expected to enroll a few thousand drivers. Over the next 6 months, the Virginia DMV used Emovis's solution to enroll over 10,000 drivers. The Virginia DMV also maintained staff productivity with Emovis running the new contact center, and the Mileage Choice Program became the largest road-usage charging program in the United States.

Opportunity | Using Amazon Connect to Power the Mileage Choice Program for the Virginia DMV

According to a 2019 report by the Virginia secretary of transportation, by 2030 the use of electric, hybrid, and other fuel-efficient vehicles will amount to a loss of $250 million in fuel-tax revenue. This revenue constitutes 25 percent of the Virginia state budget for transportation financing and infrastructure projects. The Virginia DMV was tasked with implementing a road-usage charging program for fuel-efficient vehicles to make it possible for customers to pay their highway use fee per mile instead of all at once at the time of vehicle registration, an option that often results in cost savings for customers. The Virginia DMV quickly needed a way to enroll constituents in this new program. After requesting proposals for a mileage-based highway-usage solution and contact center, the Virginia DMV analyzed the bids and chose to work with Emovis. The company had already implemented road-usage charging programs in Utah and Oregon and could show how a permanent solution could be put in place.
Solution | Connecting to Customers Using Amazon Connect

Emovis provides and manages a contact center solution powered by Amazon Connect, which provides superior customer service at a lower cost with an easy-to-use cloud contact center that can scale to support millions of customers. "We saw two major advantages in using Amazon Connect," says Tom Krueger, vice president of operations at Emovis. "It easily integrated with our solution, and it improved the expandability of the solution." Under the strict timeline of implementation, the Virginia DMV had a baseline goal of enrolling 2,000 drivers during the first year. This matched the numbers Emovis had seen when implementing its solution in Utah, though Virginia's pool of eligible individuals is larger because it includes fuel-efficient vehicles that are not electric. From July 2022 to January 2023, the Virginia DMV saw over 10,000 individuals enrolled, exceeding expectations and becoming the largest road-usage charging program in the nation. Using Amazon Connect, Emovis could use agents from across the United States to support customers during times of heavy enrollment. The state has almost two million eligible vehicles, and the program will continue to roll out initial eligibility to residents through July 2023, with enrollment remaining available going forward. "We went far and above our goal very early after enrollment began. We were really pleased that there was a positive response from our residents when they signed up for the program, which is supported by Amazon Connect," says Scott Cummings, assistant commissioner for finance at the Virginia DMV.

The Mileage Choice Program is offered as a pay-per-use option, an alternative to paying a flat cost at the time of vehicle registration. Customers are eligible to sign up when renewing their vehicle registration; after they enroll through Emovis, they either receive a device to plug in to their vehicles or have the data taken from in-car telematics. "Through Emovis's work with our IT team, the implementation was seamless," says Cummings. The Virginia DMV implemented the solution in 6 months, successfully meeting the statutory deadline.

Outcome | Continuing Enrollment for the Mileage Choice Program Using Amazon Connect

The long-term goal of this solution is to help the Virginia DMV address declining fuel-tax revenues, which make up 25 percent of the state's funding to maintain roads, bridges, and tunnels and to improve transportation infrastructure. By using Amazon Connect, the Virginia DMV does not have to handle any manual processes or crunch numbers outside the system. Customers are able to apply for the new program during registration using the Emovis solution powered by Amazon Connect. "We have a lot of room to grow the program," says Cummings. "The work we're doing with Emovis is a great step in the right direction." The option to enroll in the Mileage Choice Program opens for individuals at the time of vehicle registration renewal, so the first wave of enrollment is still progressing and will be complete at the end of June 2023. Nearly two million vehicles are eligible to enroll in the program, and the Virginia DMV wants to focus on enrollment as a path forward. The solution using Amazon Connect can scale up to this continued influx of new customers. The Virginia DMV is looking to create a process for new cars to be directly enrolled in the program when they are sold. At the same time, Emovis is designing a postcontact survey through Amazon Connect to support customers better and gain insights into customer satisfaction with the program. "Our role working with the Virginia DMV is to make sure customers are satisfied in their interactions with our support team," says Krueger.
"Amazon Connect is a key tool in helping us achieve customer satisfaction."

With this solution, Virginia DMV staff did not have to figure out how to program devices to capture mileage or account for device inventory. "There's very little staff time that the Virginia DMV needs to devote to this program," says Cummings. "Emovis interacts with customers, collects miles, and sends invoices. Emovis is doing all the heavy lifting, and that's a great benefit to us." Because Emovis—using Amazon Connect—handles the customer service for the Mileage Choice Program, the Virginia DMV can continue operations as normal, with no added burden to Virginia DMV personnel. Amazon Connect facilitates interactions with customers and can scale to accommodate thousands of agents.

Benefits included 10,000 participants enrolled in 6 months, 6 months to implement the Mileage Choice Program, improved staff productivity, and scalability in a call center that can support thousands of agents.

About the Virginia Department of Motor Vehicles

The Virginia Department of Motor Vehicles registers and titles motor vehicles and licenses drivers in the Commonwealth of Virginia.

AWS Services Used: Amazon Connect, which provides superior customer service at a lower cost with an easy-to-use cloud contact center.

Evolving ADPs Single Global Experience in MyADP and ADP Mobile Using AWS Lambda _ Case Study _ AWS.txt,"Evolving ADP's Single Global Experience in MyADP and ADP Mobile Using AWS Lambda

Learn how ADP, a global technology company in human resources, evolved a global UX using AWS serverless technologies. Automatic Data Processing (ADP) wanted to modernize its flagship desktop and mobile solutions, MyADP and ADP Mobile, so that its over 17 million users had a seamless user experience (UX). The company, which provides human capital management (HCM) and enterprise payroll services, strives to build innovative products. Low latency and a high-quality UX are a must for the enterprise.
Opportunity | Using AWS to Create a Global User Experience for 17 Million People

ADP processes payments for one in six American workers, and the company is expanding globally. To meet quality and latency goals, the company is committed to consolidating, standardizing, and modernizing its application, which is used by over 17 million people and more than 470,000 companies. Although ADP Mobile and MyADP are used as the delivery mechanism for all ADP services, the company wanted to present a more consistent brand to customers with a unified global experience for common pillars like payroll, benefits, retirement, and taxes.

ADP pursued a novel approach to unify its global UX and improve latency, cost, and performance. "The serverless model looked like a good way to handle higher traffic and be active across multiple regions," says Anderson Buzo, chief architect at ADP. "And with serverless architecture, the cost is based on what we actually use, not what we deploy." The company began migrating its flagship application to Amazon Web Services (AWS) in 2019 to take advantage of the benefits that come from a robust computing network. Now the application runs entirely on AWS, and clients are enjoying improved quality, lower latency, and a seamless UX. The migration to a serverless model on AWS has also accelerated the pace of innovation because ADP teams no longer have to spend time on infrastructure management.

Solution | Unlocking Resilience Through Offline Architecture and AWS Services

After migrating to AWS, ADP adopted AWS AppSync to bolster the reliability of the application and offer a better experience with offline-first design. By designing an offline-first architecture, the team is developing a solution that pushes ADP Mobile and MyADP data to user devices as new data becomes available. This approach makes the application more resilient to faults and gives users access to recently updated data even if their network connection is slow.

ADP used AWS tools to resolve challenges within its application. The company required a solution that could scale seamlessly to accommodate the rush of workers who clock in during a 90-second window around the beginning of each hour. However, ADP's prior system took 60 seconds to scale as traffic doubled. Engineers worked quickly to develop a proof of concept using AWS Fargate, a serverless, pay-as-you-go compute solution that scaled rapidly. ADP uses AWS Fargate in tandem with Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service for containerized applications. "We're using AWS because we want to be a product development team and not an infrastructure management team," says Devi Ramachandran, senior director of DevOps at ADP.
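ADP's actual scaling configuration isn't published in the case study, but the hourly clock-in burst it describes is the textbook case for target-tracking auto scaling on an ECS service running on Fargate. A minimal sketch using boto3, under the assumption of a target-tracking policy on CPU; the cluster and service names are placeholders:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Placeholder names; substitute your own ECS cluster and service.
resource_id = "service/payroll-cluster/clock-in-api"

# Register the service's desired task count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=4,
    MaxCapacity=200,
)

# Track average CPU and scale out quickly so capacity arrives
# within a short burst window like the 90 seconds described above.
autoscaling.put_scaling_policy(
    PolicyName="clock-in-burst-cpu",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 50.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 30,
        "ScaleInCooldown": 300,
    },
)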
As part of the application modernization, ADP started to build a new generation of microservices in AWS Lambda, a serverless, event-driven compute service, and further increased resiliency by deploying in multiple Availability Zones. After the migration, the team began optimizing costs. "Today, we are using AWS solutions like a Ferrari, but we're paying the price of a regular car because of our serverless architecture," says Ramachandran. In addition to saving money, ADP has increased staff productivity. Before using AWS, product developers had to coordinate and align with multiple internal teams to troubleshoot issues with databases and other resources. After migrating to managed services on AWS, development teams own their resources fully, and the company now spends much less time on support and maintenance.

ADP also had to innovate to create a single experience for disparate systems of record without introducing error. "The speed at which pay statements open up should be the same speed at which benefits enrollment opens, but these are two different sources of content on two different sets of infrastructure," says Ramachandran. "That's been our challenge from the beginning, and migrating our systems to AWS made everything simpler." ADP had to simplify the ADP Mobile and MyADP application programming interface (API) access that is provided by those different infrastructures. To streamline data aggregation on the backend, the company used AWS AppSync, which creates serverless GraphQL and Pub/Sub APIs that simplify application development through a single endpoint to securely query, update, or publish data. Using AWS AppSync, ADP can bring together data from the various backends and sources into a single endpoint.
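The value of the AppSync approach is that clients call one GraphQL endpoint while resolvers fan out to the separate systems of record. A hypothetical client-side sketch in Python using only the standard library; the endpoint URL, API-key auth mode, and schema fields are illustrative assumptions, not ADP's schema:

import json
import urllib.request

# Hypothetical AppSync endpoint and API key.
APPSYNC_URL = "https://example1234.appsync-api.us-east-1.amazonaws.com/graphql"
API_KEY = "da2-examplekey"

# One query; AppSync resolvers aggregate the fields from
# different backends (for example, payroll and benefits systems).
query = """
query Dashboard($employeeId: ID!) {
  payStatements(employeeId: $employeeId) { period netPay }
  benefitsEnrollment(employeeId: $employeeId) { plan status }
}
"""

payload = {"query": query, "variables": {"employeeId": "12345"}}
request = urllib.request.Request(
    APPSYNC_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "x-api-key": API_KEY},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))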
Outcome | Moving Toward Global Deployments on AWS

The application users—the employees of ADP client companies—are benefiting from ADP innovations, which include intelligent self-service and chatbot functionality in some regions. The increased flexibility that ADP now offers means that the application maintains a 4.5 rating from users on mobile application marketplaces. With a new, unified user experience, time to market has been reduced, and the company can onboard new clients more quickly. ADP has also accelerated feature delivery substantially, and its teams are happy to be able to focus on what they do best. "Using AWS solutions, the talent on our team is doing actual product engineering work instead of worrying about infrastructure," says Ramachandran. After three years, all of the application's critical systems have been migrated to the cloud. "We are a total AWS shop right now," says Ramachandran. Serverless architecture has opened new possibilities for innovation. The team is now focused on global deployments so that improvements developed in one region will automatically deploy globally. "When we build a feature in the United States or Europe, we can simply bring it to the app, and everybody can have it," says Buzo. "On AWS, we can build a global app."

Benefits included scaling for bursts of traffic to eliminate throttling and errors, a 4.5+ app store rating maintained, reduced latency with latency-based routing, improved resiliency through multi-Region architecture, portability, and a unified global UX.

About ADP

Automatic Data Processing (ADP) provides payroll, human resources, and tax services to businesses around the world. The company processes the payroll of one in six American employees.

AWS Services Used: AWS Lambda, a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers, triggered from over 200 AWS services and software as a service (SaaS) applications, with pay-for-what-you-use pricing; AWS AppSync; AWS Fargate; and Amazon Elastic Container Service (Amazon ECS).

Expanding Opportunities Using Amazon WorkSpaces with The Chicago Lighthouse _ Case Study _ AWS.txt,"Expanding Opportunities Using Amazon WorkSpaces with The Chicago Lighthouse

Learn how The Chicago Lighthouse, a nonprofit organization, pivoted to remote work using AWS. The Chicago Lighthouse (The Lighthouse) serves and advocates for the blind and visually impaired, disabled, and veteran communities. To help make its operations self-sustaining, The Lighthouse has developed several social enterprises—all of which serve the dual purpose of generating revenues and creating employment opportunities for its clients—in customer service, digital accessibility consulting, manufacturing, and shipping. When the COVID-19 pandemic forced workplace closures in March 2020, The Lighthouse had an urgent need to keep the organization and its programs operating without interruption. Using Amazon Web Services (AWS), The Lighthouse pivoted to a work-from-home model in a matter of days, keeping customers satisfied and mission-critical revenues flowing in. Perhaps most importantly, it allowed employees, particularly those with visual and other disabilities, to continue working.

About The Chicago Lighthouse

Since 1906, The Chicago Lighthouse has been a leader in comprehensive vision care, education, social services, assistive technologies, and employment opportunities that improve the quality of life for patients, clients, workers, and their families. Today, in 2023, the agency provides 40 programs and services that help more than 50,000 people every year. Its clients access vision rehabilitation, education, assistive technology consulting, and other opportunities that improve their quality of life and empower them to live as confidently and independently as possible.

Opportunity | Using Amazon WorkSpaces to Transition to Remote Work for The Chicago Lighthouse
Among the organization's social enterprises are 12 customer contact centers, handling calls from a number of healthcare and government clients. These businesses generate just over 60 percent of The Lighthouse's total annual revenue. Until 2020, it was a completely in-person work environment so that The Lighthouse could provide employees with the adaptive technologies that they needed to accommodate visual and other impairments. But as the COVID-19 pandemic began making its way through the United States, Esmeil Naqeeb, network security engineer at The Lighthouse, saw the writing on the wall. "We knew lockdowns were coming," says Naqeeb, "so we started looking for solutions."

Simply shutting down the call centers was not an option. The Lighthouse serves several large organizations in Illinois, such as the Illinois Tollway Authority, University of Illinois Health System, and Cook County Health Systems, so interruptions in service could mean harmful impacts on healthcare and infrastructure around the state. The Lighthouse was also committed to caring for its employees. "Everyone needed to continue receiving a paycheck, paying their bills, and feeding their families," says Janet Szlyk, president and CEO of The Chicago Lighthouse. "Additionally, our customer service business provides revenues that support our organization's social services. It was critical they remain open."

The company's first idea was to physically deliver computers to workers' homes, but this would have been prohibitively time consuming and could have potentially compromised sensitive data. In the search for a better idea, The Lighthouse discovered Amazon WorkSpaces, a managed, secure Desktop-as-a-Service (DaaS) solution that provides the right virtual workspace for varied worker types, especially hybrid and remote workers. With Amazon WorkSpaces, you can provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe.

Solution | Keeping Workers Employed Using Amazon VPC

To find a way to keep The Lighthouse operational, Naqeeb first created a pilot workstation at his own home. When his home pilot worked, he and his IT team of six tested it in one of the contact centers. It worked well, and Naqeeb proposed an organization-wide solution. On March 17, Naqeeb and the IT team began transitioning employees to remote work. Four days later, on March 21, 70 employees were up and running. Over the next 4 days, the team transitioned another 50 employees to Amazon WorkSpaces. By March 24, 1 week after beginning the transition, 120 employees, many with disabilities, were working remotely, which was enough to continue the call centers' seamless operations. "Esmeil Naqeeb is our hero," Szlyk says. Amazon WorkSpaces customers can get tech support, but The Lighthouse needed very little assistance during the transition. "It worked flawlessly," says Naqeeb.
Using Amazon WorkSpaces had immediate impacts across multiple departments. Aaron Baar, senior director of advancement at The Lighthouse, says, "People said how great it was to keep working, to maintain a sense of normalcy and routine in what were not normal times." Workers in the IT department could access the active directory and keep managing users' accounts and other on-premises network resources while working from home. In the customer care centers, which employ 119 people who are blind, visually impaired, or otherwise disabled, the results were especially remarkable. Several call center employees with visual impairments use ZoomText, an adaptive program that enlarges a computer screen and reads webpages. Licensing each computer individually would have been expensive and cumbersome, but using AWS greatly simplified the process.

The Lighthouse uses Amazon Redshift as its cloud data warehouse. Amazon Redshift uses SQL to analyze structured and semistructured data across data warehouses, operational databases, and data lakes, so The Lighthouse can run complex queries and scale analytics on call center data without managing infrastructure. The Lighthouse also uses Amazon QuickSight, a service that powers data-driven organizations with unified business intelligence (BI) at hyperscale, to power millions of weekly dashboard views so that all users can meet analytic needs from the same data sources and make better decisions. The Lighthouse runs these services on Amazon Virtual Private Cloud (Amazon VPC), a logically isolated virtual network that gives customers full control over their networking environment, resource placement, connectivity, and security.

Architecture Diagram

The case study's architecture diagrams show the network flow for an Amazon WorkSpaces user connecting to the service via the public internet from outside the corporate firewall, and a granular look at the AWS Cloud and how The Lighthouse's on-premises infrastructure connects to it.
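The case study doesn't show The Lighthouse's tooling, but the WorkSpaces provisioning described earlier comes down to a single API call. A minimal boto3 sketch, with the directory, bundle, and user values as placeholders:

import boto3

workspaces = boto3.client("workspaces")

# Placeholder identifiers; look up real values with
# describe_workspace_directories() and describe_workspace_bundles().
response = workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-examp1e123",
            "UserName": "call.center.agent",
            "BundleId": "wsb-examplebundle",
            "WorkspaceProperties": {
                # Auto-stop idle desktops to control cost.
                "RunningMode": "AUTO_STOP",
                "RunningModeAutoStopTimeoutInMinutes": 60,
            },
        }
    ]
)
print(response["PendingRequests"], response["FailedRequests"])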
Outcome | Empowering Happy, Independent Workers Using AWS

In solving the challenges presented by the COVID-19 pandemic, The Lighthouse also ended up finding new solutions to support long-term accessibility and inclusion in its workforce. Commuting in and around the third largest US city can be challenging in the best of times, even without added complications due to weather, transportation, or accessibility. Offering work-from-home options turned out to be a tremendous boon—not just for workers but also for the business itself. "It's our new normal," Szlyk says. "We're a hybrid organization now."

By migrating some of its operations to AWS, The Lighthouse kept revenues flowing, served customers, cared for clients, and enhanced and expanded its operations. Its social enterprises even saw significant growth, including a 20 percent increase in call volume, a 50 percent increase in clientele, and a 26 percent increase in revenue. Perhaps most importantly, workers at The Lighthouse are happier and more productive than ever. "We hear all the time about how much they love remote work, and they still feel a sense of closeness to their teams," says Szlyk. "They have meaningful, challenging jobs that don't require commuting, so they're less likely to leave. By using AWS, we have cut employee attrition by 50 percent in our customer care centers. It's a win-win situation."

People with disabilities still face barriers to employment, no matter how talented and dedicated they are, so The Lighthouse continues to advocate for inclusive workplaces. Because Amazon WorkSpaces worked so well for its workers with visual impairments, The Lighthouse is currently working toward remote accessibility solutions for users who are completely sightless. "Remote work creates opportunities for more people with disabilities to work from home," Szlyk says. "This is significant because 60 percent of working-age adults with disabilities are not employed. Using these AWS solutions helps us open doors for more accessible, inclusive employment."

Benefits included a 50 percent reduction in employee attrition, a 20 percent increase in call volume, a 50 percent increase in the client roster, and a 26 percent increase in revenue.

Exploring Generative AI in conversational experiences_ An Introduction with Amazon Lex Langchain and SageMaker Jumpstart _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Exploring Generative AI in conversational experiences: An Introduction with Amazon Lex, Langchain, and SageMaker Jumpstart

by Marcelo Silva, Kanjana Chandren, Justin Leto, Mahesh Biradar, Ryan Gomes, and Victor Rojo | on 08 JUN 2023 | in Amazon Lex, Amazon SageMaker, Amazon SageMaker JumpStart, Artificial Intelligence, Generative AI, Technical How-to

Customers expect quick and efficient service from businesses in today's fast-paced world. But providing excellent customer service can be significantly challenging when the volume of inquiries outpaces the human resources employed to address them. However, businesses can meet this challenge while providing personalized and efficient customer service with the advancements in generative artificial intelligence (generative AI) powered by large language models (LLMs).

Generative AI chatbots have gained notoriety for their ability to imitate human intellect. However, unlike task-oriented bots, these bots use LLMs for text analysis and content generation. LLMs are based on the Transformer architecture, a deep learning neural network introduced in June 2017 that can be trained on a massive corpus of unlabeled text. This approach creates a more human-like conversation experience and accommodates several topics.

As of this writing, companies of all sizes want to use this technology but need help figuring out where to start. If you are looking to get started with generative AI and the use of LLMs in conversational AI, this post is for you. We have included a sample project to quickly deploy an Amazon Lex bot that consumes a pre-trained open-source LLM. The code also includes the starting point to implement a custom memory manager. This mechanism allows an LLM to recall previous interactions to keep the conversation's context and pace. Finally, it's essential to highlight the importance of experimenting with fine-tuning prompts and LLM randomness and determinism parameters to obtain consistent results.

Solution overview

The solution integrates an Amazon Lex bot with a popular open-source LLM from Amazon SageMaker JumpStart, accessible through an Amazon SageMaker endpoint. We also use LangChain, a popular framework that simplifies LLM-powered applications. Finally, we use a QnABot to provide a user interface for our chatbot.

First, we start by describing each component in the preceding diagram:

- JumpStart offers pre-trained open-source models for various problem types.
  This enables you to begin machine learning (ML) quickly. It includes the FLAN-T5-XL model, an LLM deployed into a deep learning container. It performs well on various natural language processing (NLP) tasks, including text generation.
- A SageMaker real-time inference endpoint enables fast, scalable deployment of ML models for predicting events. With the ability to integrate with Lambda functions, the endpoint allows for building custom applications.
- The AWS Lambda function uses the requests from the Amazon Lex bot or the QnABot to prepare the payload to invoke the SageMaker endpoint using LangChain. LangChain is a framework that lets developers create applications powered by LLMs.
- The Amazon Lex V2 bot has the built-in AMAZON.FallbackIntent intent type. It is triggered when a user's input doesn't match any intents in the bot.
- The QnABot is an open-source AWS solution to provide a user interface to Amazon Lex bots. We configured it with a Lambda hook function for a CustomNoMatches item, and it triggers the Lambda function when QnABot can't find an answer. We assume you have already deployed it and included the steps to configure it in the following sections.

The solution is described at a high level in the following sequence diagram.

Major tasks performed by the solution

In this section, we look at the major tasks performed in our solution. This solution's entire project source code is available for your reference in this GitHub repository.

Handling chatbot fallbacks

The Lambda function handles the "don't know" answers via AMAZON.FallbackIntent in Amazon Lex V2 and the CustomNoMatches item in QnABot. When triggered, this function looks at the request for a session and the fallback intent. If there is a match, it hands off the request to a Lex V2 dispatcher; otherwise, the QnABot dispatcher uses the request. See the following code:

def dispatch_lexv2(request):
    """Summary
    Args:
        request (dict): Lambda event containing a user's input chat message
            and context (historical conversation).
        Uses the LexV2 sessions API to manage past inputs:
        https://docs.aws.amazon.com/lexv2/latest/dg/using-sessions.html

    Returns:
        dict: Description
    """
    lexv2_dispatcher = LexV2SMLangchainDispatcher(request)
    return lexv2_dispatcher.dispatch_intent()


def dispatch_QnABot(request):
    """Summary
    Args:
        request (dict): Lambda event containing a user's input chat message
            and context (historical conversation).

    Returns:
        dict: Dict formatted as documented to be a lambda hook for a
            "don't know" answer for the QnABot on AWS Solution; see
            https://docs.aws.amazon.com/solutions/latest/QnABot-on-aws/specifying-lambda-hook-functions.html
    """
    request['res']['message'] = "Hi! This is your Custom Python Hook speaking!"
    qna_intent_dispatcher = QnASMLangchainDispatcher(request)
    return qna_intent_dispatcher.dispatch_intent()


def lambda_handler(event, context):
    print(event)
    if 'sessionState' in event:
        if 'intent' in event['sessionState']:
            if 'name' in event['sessionState']['intent']:
                if event['sessionState']['intent']['name'] == 'FallbackIntent':
                    return dispatch_lexv2(event)
    else:
        return dispatch_QnABot(event)
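The dispatchers above ultimately call an LLM object (sm_flant5_llm in the ConversationChain snippet later in this post) whose construction isn't shown here; the full version is in the companion GitHub repository. As a hedged sketch of how such an object is typically built with LangChain's SagemakerEndpoint wrapper, where the endpoint name is hypothetical and the payload keys follow JumpStart's FLAN-T5 text2text convention (verify both against your deployment):

import json

from langchain.llms.sagemaker_endpoint import SagemakerEndpoint, LLMContentHandler


class ContentHandler(LLMContentHandler):
    """Translate between LangChain prompts and the JumpStart FLAN-T5 payload."""
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # "text_inputs" is the JumpStart text2text input key (assumption).
        return json.dumps({"text_inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["generated_texts"][0]


sm_flant5_llm = SagemakerEndpoint(
    endpoint_name="jumpstart-flan-t5-xl-endpoint",  # hypothetical name
    region_name="us-east-1",
    content_handler=ContentHandler(),
    model_kwargs={"max_length": 150},  # add temperature/top_p as needed
)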
Providing memory to our LLM

To preserve the LLM memory in a multi-turn conversation, the Lambda function includes a LangChain custom memory class mechanism that uses the Amazon Lex V2 Sessions API to keep track of the session attributes with the ongoing multi-turn conversation messages and to provide context to the conversational model via previous interactions. See the following code:

# Imports needed for this snippet (not shown in the original post)
import json
from typing import Any, Dict, List

from langchain.schema import BaseMemory
from pydantic import BaseModel


class LexConversationalMemory(BaseMemory, BaseModel):
    """Langchain Custom Memory class that uses Lex Conversation history

    Attributes:
        history (dict): Dict storing conversation history that acts as the
            Langchain memory
        lex_conv_context (str): LexV2 sessions API that serves as input for
            convo history; memory is loaded from here
        memory_key (str): key for the chat history Langchain memory variable
    """
    history = {}
    memory_key = "chat_history"  # pass into prompt with key
    lex_conv_context = ""

    def clear(self):
        """Clear chat history"""
        self.history = {}

    @property
    def memory_variables(self) -> List[str]:
        """Load memory variables

        Returns:
            List[str]: List of keys containing Langchain memory
        """
        return [self.memory_key]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        """Load memory from lex into current Langchain session memory

        Args:
            inputs (Dict[str, Any]): User input for current Langchain session

        Returns:
            Dict[str, str]: Langchain memory object
        """
        input_text = inputs[list(inputs.keys())[0]]

        ccontext = json.loads(self.lex_conv_context)
        memory = {
            self.memory_key: ccontext[self.memory_key] + input_text + "\nAI: ",
        }
        return memory

The following is the sample code we created for introducing the custom memory class in a LangChain ConversationChain:

# Create a conversation chain using the prompt,
# llm hosted in Sagemaker, and custom memory class
self.chain = ConversationChain(
    llm=sm_flant5_llm,
    prompt=prompt,
    memory=LexConversationalMemory(lex_conv_context=lex_conv_history),
    verbose=True
)

Prompt definition

A prompt for an LLM is a question or statement that sets the tone for the generated response. Prompts function as a form of context that helps direct the model toward generating relevant responses. See the following code:

# define prompt
prompt_template = """The following is a friendly conversation between a human and an AI. The AI is
talkative and provides lots of specific details from its context. If the AI does not know
the answer to a question, it truthfully says it does not know. You are provided with information
about entities the Human mentions, if relevant.

Chat History:
{chat_history}

Conversation:
Human: {input}
AI:"""
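To connect the pieces, the prompt string becomes a LangChain PromptTemplate whose input variables match the memory key (chat_history) and the user input. A brief illustrative sketch of wiring and invoking the chain, reusing the names from the snippets above:

from langchain.chains import ConversationChain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["chat_history", "input"],
    template=prompt_template,  # the template string defined above
)

# lex_conv_history would be the JSON-encoded history read from the
# Lex V2 session attributes, as handled by LexConversationalMemory.
chain = ConversationChain(
    llm=sm_flant5_llm,
    prompt=prompt,
    memory=LexConversationalMemory(lex_conv_context=lex_conv_history),
    verbose=True,
)

print(chain.predict(input="What are your business hours?"))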
Using an Amazon Lex V2 session for LLM memory support

Amazon Lex V2 initiates a session when a user interacts with a bot. A session persists over time unless manually stopped or timed out. A session stores metadata and application-specific data known as session attributes. Amazon Lex updates client applications when the Lambda function adds or changes session attributes. The QnABot includes an interface to set and get session attributes on top of Amazon Lex V2. In our code, we used this mechanism to build a custom memory class in LangChain to keep track of the conversation history and enable the LLM to recall short-term and long-term interactions. See the following code:

class LexV2SMLangchainDispatcher():

    def __init__(self, intent_request):
        # See lex bot input format to lambda:
        # https://docs.aws.amazon.com/lex/latest/dg/lambda-input-response-format.html
        self.intent_request = intent_request
        self.localeId = self.intent_request['bot']['localeId']
        self.input_transcript = self.intent_request['inputTranscript']  # user input
        self.session_attributes = utils.get_session_attributes(
            self.intent_request)
        self.fulfillment_state = "Fulfilled"
        self.text = ""  # response from endpoint
        self.message = {'contentType': 'PlainText', 'content': self.text}


class QnABotSMLangchainDispatcher():

    def __init__(self, intent_request):
        # QnABot Session attributes
        self.intent_request = intent_request
        self.input_transcript = self.intent_request['req']['question']
        self.intent_name = self.intent_request['req']['intentname']
        self.session_attributes = self.intent_request['req']['session']
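The Lex V2 dispatcher calls utils.get_session_attributes, which the post doesn't show; the real implementation is in the companion repository. A plausible minimal version, based on the documented Lex V2 event shape where attributes live under sessionState.sessionAttributes:

def get_session_attributes(intent_request: dict) -> dict:
    """Return the session attributes from a Lex V2 Lambda event,
    or an empty dict if none have been set yet."""
    session_state = intent_request.get("sessionState", {})
    return session_state.get("sessionAttributes", {}) or {}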
Prerequisites

To get started with the deployment, you need to fulfill the following prerequisites:

- Access to the AWS Management Console via a user who can launch AWS CloudFormation stacks
- Familiarity navigating the Lambda and Amazon Lex consoles

Deploy the solution

To deploy the solution, proceed with the following steps:

1. Choose Launch Stack to launch the solution in the us-east-1 Region.
2. For Stack name, enter a unique stack name.
3. For HFModel, we use the Hugging Face Flan-T5-XL model available on JumpStart.
4. For HFTask, enter text2text.
5. Keep S3BucketName as is. These are used to find Amazon Simple Storage Service (Amazon S3) assets needed to deploy the solution and may change as updates to this post are published.
6. Acknowledge the capabilities.
7. Choose Create stack.

There should be four successfully created stacks.

Configure the Amazon Lex V2 bot

There is nothing to do with the Amazon Lex V2 bot. Our CloudFormation template already did the heavy lifting.

Configure the QnABot

We assume you already have an existing QnABot deployed in your environment. But if you need help, follow these instructions to deploy it.

1. On the AWS CloudFormation console, navigate to the main stack that you deployed.
2. On the Outputs tab, make a note of the LambdaHookFunctionArn because you need to insert it in the QnABot later.
3. Log in to the QnABot Designer User Interface (UI) as an administrator.
4. In the Questions UI, add a new question and enter the following values: ID – CustomNoMatches; Question – no_hits; Answer – any default answer for "don't know".
5. Choose Advanced and go to the Lambda Hook section.
6. Enter the Amazon Resource Name (ARN) of the Lambda function you noted previously.
7. Scroll down to the bottom of the section and choose Create. You get a window with a success message, and your question is now visible on the Questions page.

Test the solution

Let's proceed with testing the solution. First, it's worth mentioning that we deployed the FLAN-T5-XL model provided by JumpStart without any fine-tuning. This may have some unpredictability, resulting in slight variations in responses.

Test with an Amazon Lex V2 bot

This section helps you test the Amazon Lex V2 bot integration with the Lambda function that calls the LLM deployed in the SageMaker endpoint.

1. On the Amazon Lex console, navigate to the bot entitled Sagemaker-Jumpstart-Flan-LLM-Fallback-Bot. This bot has been configured to call the Lambda function that invokes the SageMaker endpoint hosting the LLM as a fallback intent when no other intents are matched.
2. Choose Intents in the navigation pane.
3. On the top right, a message reads, "English (US) has not built changes." Choose Build and wait for it to complete; you get a success message.
4. Choose Test. A chat window appears where you can interact with the model.

We recommend exploring the built-in integrations between Amazon Lex bots and Amazon Connect, and also messaging platforms (Facebook, Slack, Twilio SMS) or third-party contact centers using Amazon Chime SDK and Genesys Cloud, for example.

Test with a QnABot instance

This section tests the QnABot on AWS integration with the Lambda function that calls the LLM deployed in the SageMaker endpoint.

1. Open the tools menu in the top left corner.
2. Choose QnABot Client.
3. Choose Sign In as Admin.
4. Enter any question in the user interface.
5. Evaluate the response.

Clean up

To avoid incurring future charges, delete the resources created by our solution by following these steps:

1. On the AWS CloudFormation console, select the stack named SagemakerFlanLLMStack (or the custom name you set to the stack) and choose Delete.
2. If you deployed the QnABot instance for your tests, select the QnABot stack and choose Delete.

Conclusion

In this post, we explored the addition of open-domain capabilities to a task-oriented bot that routes the user requests to an open-source large language model. We encourage you to:

- Save the conversation history to an external persistence mechanism. For example, you can save the conversation history to Amazon DynamoDB or an S3 bucket and retrieve it in the Lambda function hook. In this way, you don't need to rely on the internal non-persistent session attributes management offered by Amazon Lex.
- Experiment with summarization. In multiturn conversations, it's helpful to generate a summary that you can use in your prompts to add context and limit the usage of conversation history. This helps to prune the bot session size and keep the Lambda function memory consumption low.
- Experiment with prompt variations. Modify the original prompt description to match your experimentation purposes.
- Adapt the language model for optimal results. You can do this by fine-tuning the advanced LLM parameters such as randomness (temperature) and determinism (top_p) according to your applications. We demonstrated a sample integration using a pre-trained model with sample values, but have fun adjusting the values for your use cases.

In our next post, we plan to help you discover how to fine-tune pre-trained LLM-powered chatbots with your own data. Are you experimenting with LLM chatbots on AWS? Tell us more in the comments!

Resources and references

- Companion source code for this post
- Amazon Lex V2 Developer Guide
- AWS Solutions Library: QnABot on AWS
- Text2Text Generation with FLAN T5 models
- LangChain – Building applications with LLMs
- Amazon SageMaker Examples with Jumpstart Foundation Models
- Amazon Bedrock – The easiest way to build and scale generative AI applications with foundation models
- Quickly build high-accuracy Generative AI applications on enterprise data using Amazon Kendra, LangChain, and large language models

About the Authors

Marcelo Silva is an experienced tech professional who excels in designing, developing, and implementing cutting-edge products. Starting off his career at Cisco, Marcelo worked on various high-profile projects including deployments of the first ever carrier routing system and the successful rollout of ASR9000.
His expertise extends to cloud technology, analytics, and product management, having served as senior manager for several companies like Cisco, Cape Networks, and AWS before joining GenAI. Currently working as a Conversational AI/GenAI Product Manager, Marcelo continues to excel in delivering innovative solutions across industries.

Victor Rojo is a highly experienced technologist who is passionate about the latest in AI, ML, and software development. With his expertise, he played a pivotal role in bringing Amazon Alexa to the US and Mexico markets while spearheading the successful launch of Amazon Textract and AWS Contact Center Intelligence (CCI) to AWS Partners. As the current Principal Tech Leader for the Conversational AI Competency Partners program, Victor is committed to driving innovation and bringing cutting-edge solutions to meet the evolving needs of the industry.

Justin Leto is a Sr. Solutions Architect at Amazon Web Services with a specialization in machine learning. His passion is helping customers harness the power of machine learning and AI to drive business growth. Justin has presented at global AI conferences, including AWS Summits, and lectured at universities. He leads the NYC machine learning and AI meetup. In his spare time, he enjoys offshore sailing and playing jazz. He lives in New York City with his wife and baby daughter.

Ryan Gomes is a Data & ML Engineer with the AWS Professional Services Intelligence Practice. He is passionate about helping customers achieve better outcomes through analytics and machine learning solutions in the cloud. Outside work, he enjoys fitness, cooking, and spending quality time with friends and family.

Mahesh Birardar is a Sr. Solutions Architect at Amazon Web Services with specialization in DevOps and Observability. He enjoys helping customers implement cost-effective architectures that scale. Outside work, he enjoys watching movies and hiking.

Kanjana Chandren is a Solutions Architect at Amazon Web Services (AWS) who is passionate about Machine Learning. She helps customers in designing, implementing and managing their AWS workloads. Outside of work she loves travelling, reading and spending time with family and friends.

Facilitating the Most Live Streamed Super Bowl and Olympics Using AWS Services _ NBCUniversal Case Study _ AWS.txt,"NBCUniversal Facilitates the Most Live Streamed Super Bowl and Olympics Using AWS Services

Learn how NBCUniversal used AWS services to facilitate the most live streamed Super Bowl and Olympics in history.
NBCUniversal, a multinational mass media and entertainment conglomerate, received streaming rights to Super Bowl LVI and the Winter Olympics in 2022. For the first time, these events would be simulcast and live streamed on its streaming platform, Peacock, making this Peacock's largest concurrent streaming event ever. NBCUniversal had to increase and reinforce Peacock's global infrastructure to reliably handle such scale, provide a first-rate viewing experience, and establish Peacock as a major streaming player powering international solutions. NBCUniversal commissioned Amazon Web Services (AWS) and AWS support teams to prepare Peacock for those business-critical events, focusing on ad insertion and content delivery networks (CDNs) to provide what it estimated would be millions of users with a viewing experience free of playback disruptions. On AWS, NBCUniversal live streamed the Super Bowl to a record-breaking 6 million concurrent users and the Olympic Games to 1.5 million on Peacock and direct-to-consumer apps, and dropped its most-streamed movie and original TV series at the same time.

Opportunity | Using AWS Services to Provide High Playback and Picture Quality for Live Streaming

Launched nationally in July 2020, Peacock offers a catalog of entertainment content from NBCU and beyond, with live sports, critically acclaimed series like The Office and Yellowstone, blockbuster movies, breaking news, and more. Offering video on demand and live broadcasting, the streaming service launched with the backing of the Comcast platform, fueled by Sky's technology. For the Super Bowl and the Beijing Olympics, Peacock had to provide a cinematic viewing experience with high playback and a picture quality that kept viewers satisfied. "Our major key performance indicator for live events is playback failures because if customers are watching a live event and their playback fails, they aren't happy," says Chas Mastin, vice president of quality and CDN management at Peacock. NBCUniversal also needed to insert personalized ads on Peacock at scale. "If users enter the live stream, they should be able to scrub back and watch the video, and we should be able to insert ads on the content and deliver an optimal user experience," says Naman Diwaker, director of video software engineering at Peacock.

Having already used AWS, NBCUniversal was familiar with its scalability and global footprint. "We were looking for a long-term relationship, and AWS gave us the confidence that it would be the right fit and would tackle challenges alongside us," says Patrick Miceli, executive vice president and chief technology officer, Direct-to-Consumer, at NBCUniversal. The company engaged AWS Enterprise Support teams, including AWS Infrastructure Event Management (AWS IEM), which offers architecture and scaling guidance and operational support during the preparation and running of planned events.
Solution | Improving Streaming Quality for Millions of Concurrent Viewers Using Amazon CloudFront and Amazon EC2

To achieve that, NBCUniversal and AWS had daily calls starting in May as they tested and iterated using AWS services. Peacock began using AWS Elemental MediaTailor, a channel assembly and personalized ad-insertion service for video providers to create linear over-the-top (OTT) channels using existing video content and monetize those channels with personalized advertising. "The idea was to consolidate all our ad insertions using a single solution," says Diwaker. "We slowly tested everything at the scale of millions of concurrent users."

Included in Peacock's group of CDNs was Amazon CloudFront, a CDN service built for high performance, security, and developer convenience. Besides being economically efficient, Amazon CloudFront offers a global edge network that delivers content to end users with lower latency. "CDNs with large footprints, like Amazon CloudFront, are key because, by using them, we perform better on edge networks to provide customers high-quality video," says Mastin. "We used Amazon CloudFront and AWS Elemental MediaTailor to optimize our core video key performance indicators and resolve performance issues like bottlenecks. Amazon CloudFront was one of the best of our CDNs."

In early December 2021, months into testing, the Peacock team uncovered scalability issues with AWS Elemental MediaTailor but quickly resolved them by engaging AWS Elemental Media Event Management (AWS Elemental MEM), a consultative support program designed to improve the operational reliability of business-critical video events. "Using AWS, we don't just get a solution that either works or doesn't—we can iterate together and improve quickly if we find issues," says David Bohunek, senior vice president, Playback Services, at NBCUniversal. Peacock deploys its encoding and packaging software on Amazon Elastic Compute Cloud (Amazon EC2), which offers the broadest and deepest compute platform to help companies best match the needs of their workload. Peacock content is encoded, packaged, and sent to AWS, where the CDNs take the content and deliver it to viewers. Peacock and the AWS team chose the right type and size of Amazon EC2 instances to scale during the Super Bowl and Olympics.

Outcome | Personalizing Ads for NBCUniversal Customers Using AWS Elemental MediaTailor

NBCUniversal and AWS began collaborating in May 2021, and in February 2022, Peacock broke every record for customer gains and engagement due to the Super Bowl, the Olympics, and its release of a movie and new drama series. The Beijing Olympics were the most-streamed Olympic Games ever, at 1.5 million viewers.
The Super Bowl was the most-streamed Super Bowl in history, with Peacock and other direct-to-consumer apps supporting 6 million concurrent users at peak traffic. Also, on February 13, Peacock dropped Bel-Air, which became its most-streamed original series, reaching 8 million accounts as of May 2022. Heading into Valentine’s Day weekend, Peacock, in partnership with Universal Pictures, launched Marry Me, the platform’s most-watched movie to date. The streaming service ended the first quarter with over 28 million monthly active accounts, 13 million paid subscriptions, and more than 60 million monthly active users.

NBCUniversal Facilitates the Most Live Streamed Super Bowl and Olympics Using AWS Services
Learn how NBCUniversal used AWS services to facilitate the most live streamed Super Bowl and Olympics in history. Results included 6 million concurrent live stream Super Bowl views on Peacock and direct-to-consumer apps, 1.5 million concurrent live stream views for the Olympics (a record high), 13 million paid subscriptions, increased scalability and reliability of content delivery, and 1 week to drop the biggest-ever load of TV and film content.

About NBCUniversal
A subsidiary of Comcast Corporation, NBCUniversal is a media and entertainment company that develops, produces, and markets entertainment and news to a global audience.

Opportunity | Using AWS Services to Provide High Playback and Picture Quality for Live Streaming
NBCUniversal also needed to insert personalized ads on Peacock at scale. “If users enter the live stream, they should be able to scrub back and watch the video, and we should be able to insert ads on the content and deliver an optimal user experience,” says Naman Diwaker, director of video software engineering at Peacock. For the Super Bowl and the Beijing Olympics, Peacock had to provide a cinematic viewing experience with high playback and picture quality that kept viewers satisfied. “Our major key performance indicator for live events is playback failures because if customers are watching a live event and their playback fails, they aren’t happy,” says Chas Mastin, vice president of quality and CDN management at Peacock." FanCode Case Study - Amazon Web Services (AWS).txt,"FanCode Grows 40x in 3 Years by Delivering High-Quality Live Streams on AWS
From 2019 to 2021, FanCode worked with an end-to-end video communication platform hosted on AWS to deliver its live streams. However, changes often took weeks to implement because FanCode had to work through the vendor’s operations team. This limited its agility and flexibility in responding to customer requests and feedback.
Benefits of AWS: simplified distribution of live streams to a broad range of video playback devices, including web players, smartphones, and connected TVs; launched FanCode within 3 months instead of 8 months; can deploy new channels within 15 minutes to test new features for its video player; cloud-based media services that deliver secure live streams.

About FanCode
FanCode is a sports content aggregator incubated by Dream Sports, an India-based sports technology company. The platform provides live streaming services for sporting events, the latest athlete- and team-related content and statistics, as well as an ecommerce marketplace for sports merchandise. Since its founding in 2019, FanCode has grown from 2 million users in its first year to over 80 million in India in 2022.

Tapping the cloud to scale computing capacity
To efficiently and cost-effectively support surges in the number of viewers during live streams, FanCode decided to build its infrastructure on the cloud. It chose Amazon Web Services (AWS) as its preferred cloud provider because Dream Sports has had a good experience with AWS. The AWS Cloud has provided FanCode with the scalability and low latency it needs to ensure consistent, high-quality live streams for all its users. FanCode deployed Amazon Elastic Compute Cloud (Amazon EC2) for secure and scalable compute capacity and Amazon Aurora for a fully managed relational database that provides high performance and availability. It also uses Amazon ElastiCache and Amazon CloudFront to minimize latency and shorten live stream loading times for viewers. Additionally, with AWS’s pay-as-you-go pricing, under which it pays only for the services consumed, FanCode estimates that being on the AWS Cloud saves it 15 percent per month on operational costs compared to an on-premises infrastructure. In 2019, FanCode streamed about 350 sporting events with near-zero downtime. During a major cricket event, the West Indies tour of India in 2022, FanCode was able to scale its infrastructure to support up to 6 million concurrent viewers without suffering any downtime or latency issues, thanks to the AWS Cloud.

AWS Services Used: Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and a choice of the latest processor, storage, networking, operating system, and purchase model. Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open-source databases. Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source-compatible in-memory data stores in the cloud. Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience. To learn more, visit aws.amazon.com/media.

In 2021, FanCode decided to deploy AWS Media Services and move away from its previous end-to-end video platform. Using AWS Elemental MediaLive to encode and stream live videos, FanCode’s developers now deploy new channels within 15 minutes to test new features for its video player.
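Spinning up a test channel that quickly is scriptable. As a minimal sketch in Python with boto3 (not FanCode's actual setup; the names, CIDR, and region are illustrative, and the channel's encoder settings are elided):

import boto3

medialive = boto3.client("medialive", region_name="ap-south-1")

# Security group controlling which encoders may push to the input
sg = medialive.create_input_security_group(
    WhitelistRules=[{"Cidr": "203.0.113.0/24"}]  # example CIDR
)

# An RTMP push input that a venue encoder streams into
live_input = medialive.create_input(
    Name="matchday-input",
    Type="RTMP_PUSH",
    InputSecurityGroups=[sg["SecurityGroup"]["Id"]],
    Destinations=[{"StreamName": "live/primary"}],
)

# A channel would then attach this input and reuse a stored EncoderSettings
# template that defines codecs, bitrates, and output groups.
print("Input ARN:", live_input["Input"]["Arn"])

Reusing one vetted encoder-settings template across channels is what makes a 15-minute turnaround plausible: only the input endpoints and channel name change per test.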
By tapping the AWS expertise of Dream Sports’ IT team, FanCode was able to launch the platform in just 3 months, well under its planned timeframe. The aggregator estimates that it would have taken up to 8 months to build from scratch on an on-premises infrastructure. FanCode additionally deployed AWS Elemental MediaPackage to prepare and protect live video streams delivered over the internet. The service simplifies the distribution of its live streams to a broad range of video playback devices, including web players, smartphones, and connected TVs.

Unlocking new innovations with the AWS Cloud
“AWS has unlocked many possibilities for the FanCode team. Aside from new features, such as greater personalization for audiences, introducing advertising-based models, and productivity improvements, we plan to increase the number of brand partnerships, increase our merchandise offering, and channel more users to our ecommerce store. On that front, we will be working with Amazon to leverage its last-mile delivery expertise and other best practices. Ultimately, it is about giving users the best possible sports entertainment experience, and we have been able to achieve that with help from AWS,” says Amit Mirchandani, head of engineering at FanCode. FanCode’s developers are also testing new features, including ways to overlay athlete- and team-related data on live streams using machine learning (ML). FanCode also wants to enhance personalization by providing content and product recommendations based on users’ favorite teams and players. On the backend, FanCode plans to expand its microservices stack into Kubernetes, which will help developers spend less time deploying, scaling, and managing applications." FanDuel Migrates to AWS in Less than 3 Weeks Improves the Customer Experience _ AWS.txt,"FanDuel Migrates to AWS in Less than 3 Weeks, Improves the Customer Experience
In 2021, sports gaming company FanDuel Group (FanDuel) faced an obstacle that threatened to halt its year-over-year exponential growth: its third-party video-streaming vendor couldn’t handle the 24/7 live streams that facilitated near-real-time betting for its customers. Expecting continued growth, FanDuel needed to scale without adversely affecting the viewing experience. The company contacted Amazon Web Services (AWS) in December 2021, and by January 2022, it had migrated four live stream linear channels to AWS, where it gained high reliability and scalability. Founded in 2009, FanDuel provides an online sports-betting experience to more than 12 million customers in the United States and Canada. The company has grown exponentially year over year since the US Supreme Court struck down the federal ban on sports gambling in 2018 and states began legalizing it: FanDuel’s annual revenue grew by 81 percent to $896 million in 2020 and by 113 percent to $1.9 billion in 2021. Learn how FanDuel in the gaming industry improved the customer experience using AWS Elemental MediaConnect.
Opportunity | Seeking Reliability on the Cloud
By 2021, the company’s app, FanDuel+, had four live streaming linear channels through which it offered one-time sporting events for 50 million US households to watch and wager on. Without a dedicated engineering team for video streaming, the company relied on third-party off-the-shelf products for video encoding and distribution to customers. The company sought to improve the viewer experience with lower latency, greater reliability, and scalability.

Solution | Migrating Quickly for High Performance and Reliability
The company was drawn to the performance, global reliability, and availability of AWS solutions. “By migrating to AWS, we had the flexibility to experiment with lower latency video streaming that enhances our customer experience,” says Eric Girard, senior manager of video architecture at FanDuel. “AWS engineers and architects provided support along the way for architecture, configuration, engineering, and operational activities to help train us and deploy this new infrastructure.” Although many internal stakeholders at FanDuel anticipated the project would take 6 months, the team migrated its four channels to AWS in less than 3 weeks. “The cross-functional relationships across our teams meant that I could set up accounts in 1 day and start architecting and building the solution in less than 4 days,” says Girard. “AWS was instrumental in helping us launch and engineer the solution. With close collaboration between our teams and AWS, we can operate more efficiently.” FanDuel built its first channel in 10 days, then replicated it for the remaining three channels. Next came rigorous failover testing of the two redundant signal paths described below.
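In the MediaLive API, that redundancy is expressed as automatic input failover: a channel attaches a primary and a secondary input and switches when a failover condition trips. A minimal Python sketch of the pattern (the input IDs are placeholders, the exact settings shape should be checked against the current MediaLive API, and the rest of the channel definition is elided):

input_attachments = [
    {
        "InputId": "PRIMARY_INPUT_ID",
        "InputAttachmentName": "primary-path",
        "AutomaticInputFailoverSettings": {
            "SecondaryInputId": "SECONDARY_INPUT_ID",
            "InputPreference": "PRIMARY_INPUT_PREFERRED",
            # Fail over if the primary loses video for 2 seconds
            "FailoverConditions": [
                {
                    "FailoverConditionSettings": {
                        "InputLossSettings": {"InputLossThresholdMsec": 2000}
                    }
                }
            ],
        },
    }
]
# Passed as InputAttachments=input_attachments to medialive.create_channel(...)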
FanDuel uses AWS Elemental MediaConnect, a high-quality, highly reliable transport service for live video, to transmit its video signals over the public internet to AWS from its headquarters in Los Angeles. FanDuel has created two redundant paths by which its video streams pass to AWS Elemental MediaLive, a broadcast-grade live video processing service that creates high-quality video streams for delivery to broadcast televisions and internet-connected multiscreen devices. “We created the AWS Elemental MediaLive input for failover between those two paths,” says Girard. “We can turn off one of those paths if we need to—or, if one of them breaks, the video stream will stay on air.” Once the video streams are input into AWS Elemental MediaLive, HTTP Live Streaming (HLS) outputs go to AWS Elemental MediaPackage, which prepares and protects video for delivery over the internet to connected devices. “We like AWS Elemental MediaPackage because of its capability and functionality, such as the restart, rewind, and record features,” says Girard. “We can also use its digital rights management to protect our content.” From AWS Elemental MediaPackage, the video goes to Amazon CloudFront, a content delivery network service built for high performance, scalability, security, and developer convenience, and then to FanDuel’s application.

On AWS, FanDuel has improved the customer experience and security. “We haven’t had a single outage since migrating to AWS,” says Girard. “The reliability that we can provide to our customers has improved tremendously.” The company monitors video input and processing using AWS Media Services Application Mapper, which automatically provisions the services necessary to visualize media services, their relationships, and the near-real-time status of linear video services. “Our operations teams can make their workflows more efficient so we can introduce not just monitoring but also automation and orchestration,” says Girard. Using Amazon CloudWatch, which collects and visualizes real-time logs, metrics, and event data in automated dashboards, FanDuel monitors its AWS Elemental services and CloudFront for suspicious logins and to verify that engineers adhere to multifactor authentication policies.
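One concrete form such monitoring can take, as a hedged sketch rather than FanDuel's actual dashboards, is a CloudWatch alarm on a MediaLive channel's ActiveAlerts metric; the channel ID and SNS topic ARN below are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

cloudwatch.put_metric_alarm(
    AlarmName="medialive-channel-1-active-alerts",
    Namespace="AWS/MediaLive",
    MetricName="ActiveAlerts",
    Dimensions=[
        {"Name": "ChannelId", "Value": "1234567"},
        {"Name": "Pipeline", "Value": "0"},
    ],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    # Page the operations team when the channel reports any active alert
    AlarmActions=["arn:aws:sns:us-west-2:111122223333:ops-alerts"],
)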
On AWS, FanDuel can support cross-functional teams that previously never interacted with one another. Its FanDuel+ department can now collaborate with other teams that have been using AWS since 2014. “On AWS, we broke down internal siloes,” says Girard. Since the migration, FanDuel+ has grown its organization to include dedicated product, commercial, and engineering teams. “AWS convinced a lot of internal stakeholders that we could scale, and we have,” says Girard.

Outcome | Preparing for Continued Exponential Growth
FanDuel plans to live stream other sports on its linear channels. It has signed contracts with more content partners, and because it can scale on AWS, the company intends to add more than 5,000 hours of content in 2023. FanDuel plans to migrate all its one-time television channels and video-on-demand services to AWS. In September 2022, FanDuel TV launched; it is the first linear/digital network dedicated to sports-wagering content. On AWS, FanDuel not only rapidly enhanced the customer experience but also set itself up for continued growth and improvement. “AWS doesn’t just deliver reliability, it also supports us in scaling and using new technology,” says Girard. “We have flexibility to innovate and improve long term.” Results included migrating four channels in less than 3 weeks, building the first channel on AWS in 10 days, no outages experienced on live streams, an improved customer experience, and increased insight into video-streaming processes.

About FanDuel Group
Founded in 2009, FanDuel Group is a sports gaming company owned by Flutter Entertainment. With offices in Los Angeles, New York, and Atlanta, it offers an online sports-betting experience to more than 12 million customers in the United States and Canada.

AWS Services Used: AWS Elemental MediaConnect is a high-quality transport service for live video that delivers the reliability and security of satellite and fiber-optic connections combined with the flexibility, agility, and economics of IP-based networks. AWS Elemental MediaLive is a broadcast-grade live video processing service that creates high-quality streams for delivery to broadcast TVs and internet-connected devices. AWS Elemental MediaPackage can take a single video input from an encoder, package it in multiple streaming formats, and automatically scale outputs in response to audience demand." Fantom Case Study - Amazon Web Services (AWS).txt,"Fantom’s Blockchain Platform Raises the Bar for Transaction Verifications
Fantom runs an open-source, public blockchain platform that provides ledger services to individuals and enterprises seeking greater security, traceability, and veracity across decentralized applications in business and government settings. Fantom turned to Amazon Web Services (AWS) to build a stable, secure, and fast platform to better serve a wide range of private users and capture new enterprise users in the financial and public sectors. With Amazon Elastic Compute Cloud (Amazon EC2), Fantom optimized its platform’s speed and security and was recognized as one of the fastest blockchain platforms in April 2021. Benefits of AWS: 99.9% platform uptime; verified blockchain transactions within 1 second each.

With AWS, Fantom can now pursue its business goals with the assurance that its software infrastructure is robust enough to meet the needs of a wider pool of users. With secure, resizable compute capacity from Amazon EC2, Fantom offers fast, traceable multi-chain support so its business users can do more. Users can build securities exchanges using Fantom’s patented distributed ledger technologies for smart contracts, and accurately track shipments and identify potential counterfeit goods.

About Fantom
Fantom is a public, decentralized blockchain platform servicing a wide range of decentralized applications in business and government settings. Fantom developed one of the first open-source public blockchain platforms that runs asynchronously to complete each transaction in one second. Today, Fantom has a network of more than 100 partners and investors, 8,800 smart contracts deployed on its platform, and a market capitalization of USD 1 billion.

AWS Services Used: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block-storage service designed for use with Amazon EC2 for both throughput- and transaction-intensive workloads at any scale. MySQL is the world’s most popular open-source relational database, and Amazon RDS makes it easy to set up, operate, and scale MySQL deployments in the cloud. PostgreSQL has become the preferred open-source relational database for many enterprise developers and startups, powering leading business and mobile applications. To learn more, visit aws.amazon.com/financial-services.
Setting a Solid Infrastructural Foundation
Offering a highly stable and efficient platform, Fantom is actively expanding its services to new enterprise users in sectors including financial services, healthcare, and logistics. According to Quan Nguyen, chief technology officer at Fantom, the company uses Amazon EC2, Amazon Elastic Block Store (EBS), Amazon RDS for PostgreSQL, and Amazon RDS for MySQL to optimize its platform and development environment, serving hundreds of developers and thousands of users with low network latency and 99.9 percent uptime. “Compared to other cloud providers and services, we’ve found AWS Cloud, and Amazon EC2 in particular, to be the most reliable, stable, and secure. We’ve actively recommended Amazon EC2 to our members since the platform first launched in December 2019,” says Nguyen.

Business Growth, Full Speed Ahead
Since running on AWS in 2018, Fantom has grown its network and ecosystem from a few partners and investors to more than a hundred. The number of smart contracts—programs that carry out a specific set of instructions, which cannot be changed once in force—deployed on its platform has increased from 0 to 8,800, while its market capitalization expanded from USD 40 million to 1 billion. With the AWS Cloud, Fantom has achieved 400 times growth in the number of daily transactions, with 3 times faster peer-to-peer synchronization for sub-second transaction verification speeds and a better user experience. Michael Kong, chief executive officer and chief information officer at Fantom, adds, “We’re planning to enhance our platform with more AWS services to further improve platform nodes, create better monitoring capabilities, and provide new business analytics and recommendations to customers.”" Fatshark Delivers Warhammer 40K_ Darktide Fully on AWS for Millions of Players _ Case Study _ AWS.txt,"Fatshark Delivers Warhammer 40K: Darktide Fully on AWS for Millions of Players
Learn how Fatshark built its new game on the cloud using AWS for Games solutions. Fatshark, a Swedish video game developer, wanted to build its most complex game yet: Warhammer 40,000: Darktide. To build on the success of the studio’s Warhammer: Vermintide series, the combat-focused cooperative multiplayer game must offer ultralow latency to over 100,000 concurrent players. “If players join, they need a server, they need to talk to all their friends, and they need to get to all their characters,” says Andrew Claridge, lead backend developer at Fatshark. Fatshark chose to meet those needs by developing Darktide on Amazon Web Services (AWS). “Because we’ve built on AWS before, we know that the game backend, the communication features, and the gameplay servers can scale simultaneously to the level that we need,” says Claridge. Fatshark used services such as Amazon GameLift, a dedicated game server hosting solution, to achieve its desired levels of elasticity, scalability, and cost optimization, which helped prepare the studio to launch Darktide globally.

Opportunity | Migrating to a Serverless Infrastructure with Amazon GameLift
Founded in 2007, Fatshark is a Stockholm-based studio with two fully supported online cooperative multiplayer games. Both take place within the Warhammer universe from Games Workshop. “We are quite fanatical about the Warhammer universe,” says Claridge. The team was excited to add a new chapter to the franchise, and Fatshark knew that it was time to use a new approach. “We want to make all this cool stuff, but we don’t particularly want to host it,” says Claridge. The search for backend services to support a high-quality game experience led Fatshark to AWS. Claridge says, “If we migrate to AWS, there are so many solutions available that we can use to improve the quality of services to our players.” The team started the migration to AWS in early 2020, and Amazon GameLift FleetIQ was a key part of the journey. Amazon GameLift FleetIQ optimizes the use of low-cost Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, which let customers take advantage of unused Amazon EC2 capacity in the AWS Cloud, to deliver inexpensive, resilient game hosting. After the core was up and running, Fatshark started using a range of other services in a serverless development environment. Efficiency was an overriding concern on the project, and Fatshark saved time and effort by using Amazon DynamoDB, a fast, flexible NoSQL database service that delivers single-digit-millisecond performance at virtually any scale.
“We don’t have to worry about things like database scaling using Amazon DynamoDB,” says Claridge. “It just works.” Fatshark has also accelerated development by using infrastructure as code. Claridge says, “Using infrastructure as code means that we can easily and cost-efficiently stand up developer environments that are one-to-one clones of production environments.” The time saved on building developer environments has given the team more freedom to test new features.
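The case study doesn't name Fatshark's IaC tooling, so as one hedged illustration in Python: an AWS CDK (v2) stack that declares a player-data DynamoDB table, where the stack, table, and key names are invented for the example. Deploying the same stack twice under different names yields the one-to-one clone Claridge describes:

from aws_cdk import App, Stack, aws_dynamodb as dynamodb
from constructs import Construct

class GameBackendStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # On-demand billing means no capacity planning, matching the
        # "it just works" experience of a serverless-first backend.
        dynamodb.Table(
            self, "PlayerCharacters",
            partition_key=dynamodb.Attribute(
                name="player_id", type=dynamodb.AttributeType.STRING
            ),
            sort_key=dynamodb.Attribute(
                name="character_id", type=dynamodb.AttributeType.STRING
            ),
            billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
        )

app = App()
GameBackendStack(app, "darktide-dev")   # developer clone of production
GameBackendStack(app, "darktide-prod")
app.synth()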
Fatshark developed the game backend entirely on AWS with a team of only eight people. After migrating the backend logic for Darktide to the cloud, Fatshark used a host of AWS services within AWS for Games, a purpose-built game development offering. “The fact that there are so many AWS solutions gives us the confidence to keep building, because we know that we’re not walking into a trap,” says Claridge. “There will almost certainly be something that solves our use case.” Fatshark has also accelerated the development process by attracting talent familiar with using AWS. After running the backend of previous games on managed services, Fatshark wanted to have more control of features for Darktide. To achieve that goal, the team started using high-level AWS services and chose lower-level services when that seemed optimal. “We have a philosophy of almost entirely starting with serverless technology because it lets our smaller team innovate like a larger studio, and then we drop down when we want more control over the environment,” says Claridge. “Using AWS, we can start quickly and take on complexity when we need it but not when we don’t.” That strategic approach helps Fatshark maximize the impact of its talent pool.

Solution | Improving Global Gaming Performance Using AWS
In gaming, usage tends to spike very quickly. “Our peaks and troughs are highly compact,” says Claridge. “In just a matter of hours, we go from quite chill to a lot of people playing during the evening.” Moreover, Fatshark is especially well known for its rhythmic approach to melee combat. As players engage artificial intelligence enemies, their parries and redoubts fall into a familiar pattern. One service that Fatshark uses to deliver seamless gaming is AWS Global Accelerator, a networking service that optimizes the user path to applications to keep packet loss, jitter, and latency consistently low. When groups of friends distributed across several continents set up a Darktide game together, Fatshark uses AWS Global Accelerator to eliminate lag spikes. Claridge says, “Using AWS Global Accelerator, our servers aren’t on fire trying to catch up because people are pinging around all over the place.” The result is a high-quality gaming experience that scales to meet spikes in demand. On AWS, Fatshark has optimized infrastructure costs, adapted the gaming experience to rapid scaling, and provided ultralow-latency gaming.

About Fatshark
Fatshark, a Swedish video game developer, creates high-quality PC and console games. The studio has 200 employees and two titles: Warhammer: Vermintide and Warhammer 40,000: Darktide.

AWS Services Used: AWS Global Accelerator is a networking service that helps you improve the availability, performance, and security of your public applications. Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale; DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data import and export tools. Amazon GameLift deploys and manages dedicated game servers hosted in the cloud, on-premises, or through hybrid deployments, providing a low-latency and low-cost solution that scales with fluctuating player demand. Amazon GameLift FleetIQ optimizes the use of low-cost Amazon EC2 Spot Instances for cloud-based game hosting; with GameLift FleetIQ, you can work directly with your hosting resources in Amazon EC2 and Amazon EC2 Auto Scaling while taking advantage of GameLift optimizations to deliver inexpensive, resilient game hosting for your players.
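As a hedged Python sketch of what standing up FleetIQ-managed capacity can look like with boto3 (not Fatshark's production configuration; the role ARN, launch template, region, and instance mix are placeholders):

import boto3

gamelift = boto3.client("gamelift", region_name="eu-north-1")

gamelift.create_game_server_group(
    GameServerGroupName="darktide-servers",
    RoleArn="arn:aws:iam::111122223333:role/GameLiftFleetIQRole",
    MinSize=1,
    MaxSize=500,
    LaunchTemplate={"LaunchTemplateName": "darktide-server", "Version": "1"},
    # FleetIQ picks the most viable Spot pool among these instance types
    InstanceDefinitions=[
        {"InstanceType": "c5.xlarge"},
        {"InstanceType": "c5a.xlarge"},
        {"InstanceType": "c6i.xlarge"},
    ],
    BalancingStrategy="SPOT_PREFERRED",  # fall back to On-Demand if needed
)

Listing several interchangeable instance types is the design point: it gives FleetIQ room to route game servers away from Spot pools that are likely to be interrupted.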
Outcome | Focusing on Features to Enhance Player Experience
Fatshark is confident in the game that it has built and is eager to see gamers enjoy the new title. “Using AWS, we have a very powerful infrastructure for our game, and we can focus on writing features,” says Claridge. Now the team aims to keep improving the gaming experience. Claridge says, “Given the smooth experience that we’ve had on AWS so far, we’re looking for new ways to use features and create awesome things for our players.”" Finch Computing Reduces Inference Costs by 80 Using AWS Inferentia for Language Translation _ Case Study _ AWS.txt,"Finch Computing Reduces Inference Costs by 80% Using AWS Inferentia for Language Translation

Opportunity | Seeking Scalability and Cost Optimization for ML Models
Finch Computing develops natural language processing (NLP) technology to provide customers with the ability to uncover insights from huge volumes of text data, and it was looking to fulfill customers’ requests to support additional languages. Finch had built its own neural translation models using deep learning algorithms with a heavy compute requirement that depended on GPUs. The company was looking for a solution that would scale to support global data feeds and give it the ability to iterate on new language models quickly without taking on prohibitive costs. Results included an 80% decrease in computing costs, 3 additional languages supported because of the cost savings, faster time to market for new products, additional customers attracted by using the service, and optimized throughput and response times for customers.

The strategy involved the deployment of Docker containers to Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service that makes it simple for organizations to deploy, manage, and scale containerized applications. The solution incorporated AWS Deep Learning AMIs (DLAMI), preconfigured environments for building deep learning applications quickly. Finch plugged the AWS Inferentia AMIs into its DevOps pipeline and updated its infrastructure-as-code templates to use AWS Inferentia to run customized containers on Amazon ECS. “Once we had our DevOps pipeline running on Amazon EC2 Inf1 Instances and Amazon ECS, we were able to rapidly deploy more deep learning models,” says Franz Weckesser, chief architect at Finch. In fact, Finch built a model to support the Ukrainian language in just 2 days. Within a few months, Finch deployed three additional ML models—supporting NLP in German, French, and Spanish—and improved the performance of its existing Dutch model.
About Finch Computing
Finch Computing is a natural language processing company that uses machine learning to help customers gain near-real-time insights from text. With offices in Reston, Virginia, and Dayton, Ohio, Finch—a combination of the words “find” and “search”—serves media companies and data aggregators, US intelligence and government organizations, and financial services companies. Its products center around NLP, a subset of artificial intelligence that trains models to understand the nuances of human language, including deciphering tone and intent. Its product Finch for Text uses dense, parallel machine learning (ML) computations that rely on high-performance, accelerated computing so that it can deliver near-real-time insights to customers about their informational assets. For example, its entity disambiguation feature provides customers with the ability to interpret the correct meaning of a word that has multiple meanings or spellings. Since its inception, Finch had been using solutions from Amazon Web Services (AWS). The company began looking at AWS Inferentia, a high-performance machine learning inference accelerator purpose-built by AWS, to accelerate deep learning workloads. By creating a compute infrastructure centered on AWS Inferentia, Finch reduced its costs by more than 80 percent compared with GPUs while maintaining its throughput and response times for its customers. With a powerful compute infrastructure in place, Finch has accelerated its time to market, expanded its NLP to support three additional languages, and attracted new customers.

Finch expanded its capabilities to support Dutch, which sparked the idea that it needed to scale further to include French, German, Spanish, and other languages. This decision was valuable not only because Finch’s clients had a lot of content in those languages but also because models that could support additional languages could attract new customers. Finch needed to find a way to process a significant amount of additional data without affecting throughput or response times, critical factors for its clients, or increasing deployment costs.

Solution | Building a Solution Using AWS Inferentia
At AWS re:Invent 2021, a yearly conference hosted by AWS for the global cloud computing community, Finch representatives learned about AWS Inferentia–based instances in Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload. AWS introduced Finch to AWS Partner Slalom, a consulting firm focused on strategy, technology, and business transformation. For 2 months after AWS re:Invent, Slalom and Finch team members worked on building a cost-effective solution. “In addition to getting guidance from the AWS team, we connected with Slalom, which helped us optimize our workloads and accelerate this project,” says Scott Lightner, Finch’s founder and chief technology officer. Together, Finch and Slalom built a solution that optimized the use of AWS Inferentia–based Amazon EC2 Inf1 Instances, which deliver high-performance ML inference at a low cost in the cloud. “Given the cost of GPUs, we simply couldn’t have offered our customers additional languages while keeping our product profitable,” says Lightner. “Amazon EC2 Inf1 Instances changed that equation for us.”

The company’s proprietary deep learning translation models were running on PyTorch on AWS, an open-source deep learning framework that makes it simple to develop ML models and deploy them to production. Finch used Docker to containerize and deploy its PyTorch models. Finch migrated these compute-heavy models from GPU-based instances to Amazon EC2 Inf1 Instances powered by AWS Inferentia. Amazon EC2 Inf1 Instances were built to accelerate a diverse set of models—ranging from computer vision to NLP. The team could build a solution that mixed model sizes and maintained the same throughput as it had when it used GPUs, but at a significantly lower cost. “Using AWS Inferentia, we are able to get the throughput and performance needed at a price point that our customers can afford,” Lightner says.
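The case study doesn't show Finch's code, but the standard Inf1 workflow compiles a trained PyTorch model with the torch-neuron SDK so it runs on the Inferentia chips. A minimal sketch, with the model file and input shape as placeholders:

import torch
import torch_neuron  # registers the torch.neuron namespace (Inf1 SDK)

model = torch.jit.load("translation_model.pt")  # placeholder trained model
model.eval()

# Trace with a representative input shape; operators that can't run on
# Inferentia are automatically partitioned back to the CPU.
example = torch.zeros(1, 128, dtype=torch.long)  # e.g., a batch of token IDs
neuron_model = torch.neuron.trace(model, example_inputs=[example])

# The compiled artifact is what the Amazon ECS containers would load and serve.
neuron_model.save("translation_model_neuron.pt")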
Using Amazon EC2 Inf1 Instances, the company improved the speed of developing these new products while reducing its inference costs by more than 80 percent. The addition of the new models attracted customers interested in gaining insights from the additional languages and received positive feedback from existing customers. “There are always challenges in making wholesale changes to the infrastructure,” says Lightner. “But we were able to quickly overcome them with the perseverance of our team with help from Slalom and AWS. The end result made it worthwhile.”

Outcome | Migrating Additional Applications to AWS Inferentia
Finch is looking to continue migrating more models to AWS Inferentia. These models include Sentiment Assignment, which identifies a piece of content as positive, negative, or neutral, and a new feature called Relationship Extraction, a compute-intensive application that discovers relationships between entities mentioned in text. And Finch continues to add new languages, with plans for Arabic, Chinese, and Russian next. “Our experience working on AWS Inferentia has been great,” says Lightner. “It’s been excellent having a cloud provider that works alongside us and helps us scale as our business grows.”

AWS Services Used: Amazon EC2 offers the broadest and deepest compute platform, with over 500 instances and a choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload. Amazon ECS is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications. The AWS Deep Learning AMIs (DLAMI) provide machine learning practitioners and researchers with the infrastructure and tools to accelerate deep learning in the cloud, at any scale. AWS Inferentia is Amazon’s first custom silicon designed to accelerate deep learning workloads and is part of a long-term strategy to deliver on this vision." Fine-tune GPT-J using an Amazon SageMaker Hugging Face estimator and the model parallel library _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Fine-tune GPT-J using an Amazon SageMaker Hugging Face estimator and the model parallel library
by Zmnako Awrahman, Anastasia Pachni Tsitiridou, Dhawalkumar Patel, Rahul Huilgol, Roop Bains, and Wioletta Stobieniecka | on 12 JUN 2023 | in Amazon SageMaker, Best Practices, Generative AI, PyTorch on AWS, Technical How-to

GPT-J is an open-source 6-billion-parameter model released by Eleuther AI. The model is trained on the Pile and can perform various tasks in language processing. It can support a wide variety of use cases, including text classification, token classification, text generation, question answering, entity extraction, summarization, sentiment analysis, and many more.
GPT-J is a transformer model trained using Ben Wang’s Mesh Transformer JAX. In this post, we present a guide and best practices on training large language models (LLMs) using the Amazon SageMaker distributed model parallel library to reduce training time and cost. You will learn how to train a 6-billion-parameter GPT-J model on SageMaker with ease. Finally, we share the main features of SageMaker distributed model parallelism that help with speeding up training time.

Transformer neural networks
A transformer neural network is a popular deep learning architecture used to solve sequence-to-sequence tasks. It uses attention as the learning mechanism to achieve close to human-level performance. Some of the other useful properties of the architecture compared to previous generations of natural language processing (NLP) models include the ability to distribute, scale, and pre-train. Transformers-based models can be applied across different use cases when dealing with text data, such as search, chatbots, and many more. Transformers use the concept of pre-training to gain intelligence from large datasets. Pre-trained transformers can be used as is or fine-tuned on your datasets, which can be much smaller and specific to your business.

Hugging Face on SageMaker
Hugging Face is a company developing some of the most popular open-source libraries providing state-of-the-art NLP technology based on transformer architectures. The Hugging Face transformers, tokenizers, and datasets libraries provide APIs and tools to download and predict using pre-trained models in multiple languages. SageMaker enables you to train, fine-tune, and run inference using Hugging Face models directly from its Hugging Face Model Hub using the Hugging Face estimator in the SageMaker SDK. The integration makes it easier to customize Hugging Face models on domain-specific use cases. Behind the scenes, the SageMaker SDK uses AWS Deep Learning Containers (DLCs), which are a set of prebuilt Docker images for training and serving models offered by SageMaker. The DLCs are developed through a collaboration between AWS and Hugging Face. The integration also offers integration between the Hugging Face transformers SDK and SageMaker distributed training libraries, enabling you to scale your training jobs on a cluster of GPUs.
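To make the estimator concrete, here is a minimal sketch of launching a fine-tuning job with the SageMaker Python SDK. The script name, role ARN, S3 path, and hyperparameters are placeholders, and the framework versions must be a pair with a published Hugging Face DLC:

from sagemaker.huggingface import HuggingFace

estimator = HuggingFace(
    entry_point="train.py",            # your training script
    source_dir="./scripts",
    instance_type="ml.p4d.24xlarge",
    instance_count=1,
    role="arn:aws:iam::111122223333:role/SageMakerRole",
    transformers_version="4.17",       # must match an available DLC
    pytorch_version="1.10",
    py_version="py38",
    hyperparameters={"model_name_or_path": "EleutherAI/gpt-j-6B", "epochs": 1},
)

# Channels appear inside the container as SM_CHANNEL_* directories
estimator.fit({"train": "s3://my-bucket/gpt-j/train"})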
Overview of the SageMaker distributed model parallel library
Model parallelism is a distributed training strategy that partitions the deep learning model over numerous devices, within or across instances. Deep learning (DL) models with more layers and parameters perform better in complex tasks like computer vision and NLP. However, the maximum model size that can be stored in the memory of a single GPU is limited. GPU memory constraints can be bottlenecks while training DL models in the following ways:

- They limit the size of the model that can be trained, because a model’s memory footprint scales proportionately to the number of parameters
- They reduce GPU utilization and training efficiency by limiting the per-GPU batch size during training

SageMaker includes the distributed model parallel library to help distribute and train DL models effectively across many compute nodes, overcoming the restrictions associated with training a model on a single GPU. Furthermore, the library allows you to obtain the most optimal distributed training utilizing EFA-supported devices, which improves inter-node communication performance with low latency, high throughput, and OS bypass.

Because large models such as GPT-J, with billions of parameters, have a GPU memory footprint that exceeds a single chip, it becomes essential to partition them across multiple GPUs. The SageMaker model parallel (SMP) library enables automatic partitioning of models across multiple GPUs. With SageMaker model parallelism, SageMaker runs an initial profiling job on your behalf to analyze the compute and memory requirements of the model. This information is then used to decide how the model is partitioned across GPUs, in order to maximize an objective, such as maximizing speed or minimizing memory footprint. It also supports optional pipeline run scheduling in order to maximize the overall utilization of available GPUs. The propagation of activations during the forward pass and gradients during the backward pass requires sequential computation, which limits the amount of GPU utilization. SageMaker overcomes the sequential computation constraint utilizing the pipeline run schedule by splitting mini-batches into micro-batches to be processed in parallel on different GPUs. SageMaker model parallelism supports two modes of pipeline runs:

- Simple pipeline – This mode finishes the forward pass for each micro-batch before starting the backward pass.
- Interleaved pipeline – In this mode, the backward run of the micro-batches is prioritized whenever possible. This allows for quicker release of the memory used for activations, thereby using memory more efficiently.

Tensor parallelism
Individual layers, or nn.Modules, are divided across devices using tensor parallelism so they can run concurrently. The simplest example of how the library divides a model with four layers to achieve two-way tensor parallelism ("tensor_parallel_degree": 2) is shown in the following figure. Each model replica’s layers are bisected (divided in half) and distributed between two GPUs. The degree of data parallelism is eight in this example because the model parallel configuration additionally includes "pipeline_parallel_degree": 1 and "ddp": True. The library manages communication among the replicas of the tensor-distributed model. The benefit of this feature is that you may choose which layers or which subset of layers you want to apply tensor parallelism to. To dive deep into tensor parallelism and other memory-saving features for PyTorch, and to learn how to set up a combination of pipeline and tensor parallelism, see Extended Features of the SageMaker Model Parallel Library for PyTorch.
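The figure's arithmetic generalizes: the tensor, pipeline, and data parallel degrees must multiply to the total number of GPUs in the job (the world size). A quick check of the example above, assuming two 8-GPU instances:

gpus_per_instance = 8          # e.g., ml.p4d.24xlarge
instance_count = 2
world_size = gpus_per_instance * instance_count   # 16 GPUs

tensor_parallel_degree = 2
pipeline_parallel_degree = 1

data_parallel_degree = world_size // (
    tensor_parallel_degree * pipeline_parallel_degree
)
print(data_parallel_degree)    # 8, matching the figure's description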
SageMaker sharded data parallelism
Sharded data parallelism is a memory-saving distributed training technique that splits the training state of a model (model parameters, gradients, and optimizer states) across GPUs in a data parallel group. When scaling up your training job to a large GPU cluster, you can reduce the per-GPU memory footprint of the model by sharding the training state over multiple GPUs. This returns two benefits: you can fit larger models, which would otherwise run out of memory with standard data parallelism, or you can increase the batch size using the freed-up GPU memory. The standard data parallelism technique replicates the training states across the GPUs in the data parallel group and performs gradient aggregation based on the AllReduce operation. In effect, sharded data parallelism introduces a trade-off between the communication overhead and GPU memory efficiency. Using sharded data parallelism increases the communication cost, but the memory footprint per GPU (excluding the memory usage due to activations) is divided by the sharded data parallelism degree, so larger models can fit in a GPU cluster. SageMaker implements sharded data parallelism through the MiCS implementation. For more information, see Near-linear scaling of gigantic-model training on AWS. Refer to Sharded Data Parallelism for further details on how to apply sharded data parallelism to your training jobs.

Use the SageMaker model parallel library
The SageMaker model parallel library comes with the SageMaker Python SDK. You need to install the SageMaker Python SDK to use the library, and it’s already installed on SageMaker notebook kernels. To make your PyTorch training script utilize the capabilities of the SMP library, you need to make the following changes:

- Start by importing and initializing the smp library using the smp.init() call.
- Once it’s initialized, wrap your model with the smp.DistributedModel wrapper and use the returned DistributedModel object instead of the user model.
- For your optimizer state, use the smp.DistributedOptimizer wrapper around your model optimizer, enabling smp to save and load the optimizer state.
- Abstract the forward and backward pass logic as a separate function and add an smp.step decorator to the function. Essentially, the forward pass and back-propagation need to run inside the function with the smp.step decorator placed over it. This allows smp to split the tensor input to the function into a number of microbatches specified while launching the training job.
- Move the input tensors to the GPU used by the current process using the torch.cuda.set_device API followed by the .to() API call.
- Finally, for back-propagation, replace torch.Tensor.backward and torch.autograd.backward with model.backward.

See the following code (a fragment: model_config and optimizer are defined elsewhere in the training script):

import torch.nn.functional as F
import smdistributed.modelparallel.torch as smp
from transformers import AutoModelForCausalLM

@smp.step
def train_step(model, data, target):
    output = model(data)
    loss = F.nll_loss(output, target, reduction="mean")
    model.backward(loss)  # replaces loss.backward(); smp manages the backward pass
    return output, loss

with smp.tensor_parallelism():
    model = AutoModelForCausalLM.from_config(model_config)
model = smp.DistributedModel(model)
optimizer = smp.DistributedOptimizer(optimizer)

The SageMaker model parallel library’s tensor parallelism offers out-of-the-box support for the following Hugging Face Transformer models:

- GPT-2, BERT, and RoBERTa (available in the SMP library v1.7.0 and later)
- GPT-J (available in the SMP library v1.8.0 and later)
- GPT-Neo (available in the SMP library v1.10.0 and later)
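These script changes pair with a launcher-side configuration. As a sketch, reusing the Hugging Face estimator from earlier, SMP is switched on through the distribution argument; the exact parameter set varies by SMP version, so verify the keys against the library documentation:

from sagemaker.huggingface import HuggingFace

smp_options = {
    "enabled": True,
    "parameters": {
        "pipeline_parallel_degree": 1,
        "tensor_parallel_degree": 2,   # the figure's tensor degree
        "ddp": True,
        "microbatches": 4,             # smp.step inputs are split this many ways
    },
}
mpi_options = {"enabled": True, "processes_per_host": 8}  # one per GPU on ml.p4d.24xlarge

estimator = HuggingFace(
    entry_point="train_gptj_smp.py",   # the script containing the smp.step code above
    instance_type="ml.p4d.24xlarge",
    instance_count=1,
    role="arn:aws:iam::111122223333:role/SageMakerRole",
    transformers_version="4.17",
    pytorch_version="1.10",
    py_version="py38",
    distribution={"smdistributed": {"modelparallel": smp_options}, "mpi": mpi_options},
)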
Best practices for performance tuning with the SMP library
When training large models, consider the following steps so that your model fits in GPU memory with a reasonable batch size:

- It’s recommended to use instances with higher GPU memory and high-bandwidth interconnect for performance, such as p4d and p4de instances.
- Optimizer state sharding can be enabled in most cases, and will be helpful when you have more than one copy of the model (data parallelism enabled). You can turn on optimizer state sharding by setting "shard_optimizer_state": True in the modelparallel configuration.
- Use activation checkpointing, a technique to reduce memory usage by clearing activations of certain layers and recomputing them during a backward pass of selected modules in the model.
- Use activation offloading, an additional feature that can further reduce memory usage. To use activation offloading, set "offload_activations": True in the modelparallel configuration. Use it when activation checkpointing and pipeline parallelism are turned on and the number of microbatches is greater than one.
- Enable tensor parallelism and increase the parallelism degree, where the degree is a power of 2. Typically, for performance reasons, tensor parallelism is restricted to within a node.

We have run many experiments to optimize training and tuning GPT-J on SageMaker with the SMP library. We have managed to reduce GPT-J training time for an epoch on SageMaker from 58 minutes to less than 10 minutes—six times faster training time per epoch. Initialization plus model and dataset download from Amazon Simple Storage Service (Amazon S3) took less than a minute, tracing and auto-partitioning with GPU as the tracing device took less than 1 minute, and training an epoch took 8 minutes using tensor parallelism on one ml.p4d.24xlarge instance, FP16 precision, and a SageMaker Hugging Face estimator. To reduce training time as a best practice, when training GPT-J on SageMaker, we recommend the following:

- Store your pretrained model on Amazon S3
- Use FP16 precision
- Use GPU as a tracing device
- Use auto-partitioning, activation checkpointing, and optimizer state sharding (auto_partition: True, shard_optimizer_state: True)
- Use tensor parallelism
- Use a SageMaker training instance with multiple GPUs, such as ml.p3.16xlarge, ml.p3dn.24xlarge, ml.g5.48xlarge, ml.p4d.24xlarge, or ml.p4de.24xlarge
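Activation checkpointing is applied inside the training script. A sketch, assuming the SMP PyTorch API's set_activation_checkpointing helper and Hugging Face's module naming for GPT-J (transformer.h is the block list); confirm both against your library versions:

import smdistributed.modelparallel.torch as smp

smp.init()
model = smp.DistributedModel(model)

# Recompute each transformer block's activations during the backward pass
# instead of holding them all in GPU memory.
for block in model.get_module().transformer.h:   # HF GPT-J block list (assumed naming)
    smp.set_activation_checkpointing(block)

Pairing this with "offload_activations": True in the modelparallel configuration moves the remaining checkpointed activations to CPU memory, as recommended above.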
GPT-J model training and tuning on SageMaker with the SMP library
A working step-by-step code sample is available on the Amazon SageMaker Examples public repository. Navigate to the training/distributed_training/pytorch/model_parallel/gpt-j folder. Select the gpt-j folder and open the train_gptj_smp_tensor_parallel_notebook.ipynb Jupyter notebook for the tensor parallelism example and train_gptj_smp_notebook.ipynb for the pipeline parallelism example. You can find a code walkthrough in our Generative AI on Amazon SageMaker workshop. This notebook walks you through how to use the tensor parallelism features provided by the SageMaker model parallelism library. You’ll learn how to run FP16 training of the GPT-J model with tensor parallelism and pipeline parallelism on the GLUE sst2 dataset.

Summary
The SageMaker model parallel library offers several functionalities. You can reduce cost and speed up training LLMs on SageMaker. You can also learn and run sample codes for BERT, GPT-2, and GPT-J on the Amazon SageMaker Examples public repository. To learn more about AWS best practices for training LLMs using the SMP library, refer to the following resources: SageMaker Distributed Model Parallelism Best Practices, and Training large language models on Amazon SageMaker: Best practices. To learn how one of our customers achieved low-latency GPT-J inference on SageMaker, refer to How Mantium achieves low-latency GPT-J inference with DeepSpeed on Amazon SageMaker. If you’re looking to accelerate time-to-market of your LLMs and reduce your costs, SageMaker can help. Let us know what you build!

About the Authors
Zmnako Awrahman, PhD, is a Practice Manager, ML SME, and Machine Learning Technical Field Community (TFC) member at the Global Competency Center, Amazon Web Services. He helps customers leverage the power of the cloud to extract value from their data with data analytics and machine learning.
Roop Bains is a Senior Machine Learning Solutions Architect at AWS. He is passionate about helping customers innovate and achieve their business objectives using artificial intelligence and machine learning. He helps customers train, optimize, and deploy deep learning models.
Anastasia Pachni Tsitiridou is a Solutions Architect at AWS. Anastasia lives in Amsterdam and supports software businesses across the Benelux region in their cloud journey. Prior to joining AWS, she studied electrical and computer engineering with a specialization in computer vision. What she enjoys most nowadays is working with very large language models.
Dhawal Patel is a Principal Machine Learning Architect at AWS. He has worked with organizations ranging from large enterprises to mid-sized startups on problems related to distributed computing and artificial intelligence. He focuses on deep learning, including NLP and computer vision domains. He helps customers achieve high-performance model inference on SageMaker.
Wioletta Stobieniecka is a Data Scientist at AWS Professional Services. Throughout her professional career, she has delivered multiple analytics-driven projects for different industries such as banking, insurance, telco, and the public sector. Her knowledge of advanced statistical methods and machine learning is well combined with business acumen. She brings recent AI advancements to create value for customers.
Rahul Huilgol is a Senior Software Development Engineer in Distributed Deep Learning at Amazon Web Services." Firework Games case study.txt,"Firework Games Delivers a Smooth In-Game User Experience and Saves Manpower Costs with AWS
Firework Games is a Hong Kong-based startup established in 2021 that develops blockchain games for the metaverse. In November 2022, the company launched its first game, Spark Era, a massively multiplayer online role-playing game (MMORPG). To achieve a smooth in-game experience for its players, Firework Games built Spark Era on Amazon Web Services (AWS). As Spark Era is a highly competitive battle royale game, Firework Games needs to deliver consistently low latencies to its players for a fun and fair gaming experience. Find out how being on the AWS Cloud lets Firework Games keep latency low for players across the world.

To ingest and process the game data for its machine learning models, Firework Games deployed a combination of Amazon Relational Database Service for MySQL (Amazon RDS for MySQL), Amazon ElastiCache, Amazon Aurora, and AWS Glue. The company estimates that Spark Era handles 114 TB of data per hour on average. Using Amazon CloudFront, a low-latency content delivery network, players were able to download the game content within 10 minutes; during testing on its previous on-premises servers, this took double the time. On AWS, Firework Games reduced its average latency from 300 ms to 160 ms when connecting users from Korea to AWS servers located in the US. This was 46 percent lower than on its previous on-premises servers, while also saving up to 30 percent in costs.
"The AWS team and AWS Enterprise Support has been very helpful in supporting our development of Spark Era and ensuring the best player experience. They guided us on which AWS Regions meet our needs, and advised us on the Amazon EC2 instances needed for our setup. The AWS team also worked with us on a globally scalable design, from latency reduction through region selection, Amazon CloudFront and AWS Global Accelerator implementation, and a Transmission Control Protocol (TCP)-based autoscaling strategy to optimize compute resource usage," said Moses Ip, chief executive officer, Firework Games. "In summary, AWS helped us make the most of our resources, which is vital for us as a startup."

As Spark Era is a highly competitive battle royale game, Firework Games needs to deliver consistently low latencies to its players for a fun and fair gaming experience. The company's goal was to achieve low latency, scalability, and time and cost savings for Spark Era.

Benefits:
• 40% improvement in latency for all its gamers globally
• 30% reduction in manpower costs
• US$30,000 saved from migrating from on-premises to the AWS Cloud
• 20% faster in developing the game
• 300% increase in download speeds for new players

Solution Overview

With the AWS Cloud, Spark Era allows up to 50 players to participate simultaneously per match. On the day of the game's launch, it was able to support 10 million game downloads concurrently, and up to 1 million users logged into its servers with near-zero lag or latency. More importantly, Firework Games can deliver a level playing field for players globally with AWS Availability Zones and AWS Regions, keeping latencies within the range of 100-160 ms for global users.

The company accelerated game development by 80 percent by deploying AWS Deep Learning AMIs. These provide pre-configured environments for Firework Games, allowing it to move straight to development instead of having to set up a deep learning framework and pipelines, which typically takes up to 3 months.

Further Opportunity to Innovate and Evolve

Looking ahead, Firework Games plans to integrate Amazon Polly and Amazon Transcribe into Spark Era, with players interacting with NPCs using their voice instead of having to click on a given list of options, building a more immersive gaming experience.
Additionally, using artificial intelligence (AI) to generate unique non-player characters (NPCs), Spark Era aims to deliver a personalized gameplay experience for every player. The NPCs are tailored based on players' prior activities, such as their in-game gear choices and interactions with other players. As such, Spark Era has to ingest and process large amounts of data for its AI-based features. With these in mind, Firework Games turned to AWS for a cloud infrastructure that can deliver on performance, scalability, and cost-effectiveness.

Delivering an Unmatched User Experience

With Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling and Amazon Elastic Graphics, which lets the company attach low-cost graphics acceleration to a wide range of EC2 instances, Firework Games easily scales its compute capacity for player traffic. Since launch, Spark Era has hosted up to 20,000 players concurrently, with near-zero downtime. Amazon EC2 Auto Scaling, which automatically adds or removes EC2 instances according to conditions the company defines, has also saved its developers about 40 hours in manual infrastructure maintenance and scaling.
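As a rough illustration of this kind of automatic scaling, the sketch below attaches a target-tracking policy to a hypothetical Auto Scaling group of game servers. The group name and target metric are assumptions for illustration; the case study describes a TCP-connection-based strategy, whereas this example tracks average CPU utilization, one of the predefined metrics EC2 Auto Scaling supports out of the box.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# A minimal sketch: a target-tracking scaling policy on a hypothetical
# Auto Scaling group of Spark Era game servers. Name and target value
# are illustrative, not taken from Firework Games' actual setup.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="spark-era-game-servers",   # hypothetical group
    PolicyName="scale-on-player-load",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,  # add/remove instances to keep average CPU near 60%
    },
)
```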
About Firework Games

Firework Games is a Hong Kong-based game development company that uses cutting-edge technologies to create limitless unique player experiences. The studio focuses on immersive and portable applications that allow users to play games while also bringing innovation into the gaming industry. Its first game, Spark Era, is a massively multiplayer online role-playing game and global metaverse game set in an interstellar environment."

FLSmidth Case Study.txt,"FLSmidth Reduces Simulation Time from Months to Days on AWS

With nearly 12,000 employees in 60 countries, FLSmidth is a global leader in the mining and cement industry. Critical to its operations is cement calcination, a thermochemical process in which limestone is converted into lime and carbon dioxide. To iterate on and improve cement calcination, FLSmidth needs to run a series of simulations. However, the company found that running such simulations on its legacy on-premises system was time and cost intensive.

Since its founding in 1882, innovation has always been at the core of multinational engineering company FLSmidth. Though the company continues to develop sophisticated engineering solutions to lift up the mining and cement industries, the times also demand steady advancements in digital technology. With this in mind, FLSmidth is pursuing sustainable, technology-driven productivity under MissionZero, an initiative to achieve zero emissions and zero waste in cement production and mining by 2030. "With MissionZero, we seek to accelerate the use of technology and knowledge to enable our customers to produce cement and process minerals with zero environmental impact," says Thomas Schulz, CEO of FLSmidth. One way FLSmidth is honoring its MissionZero initiative is by using the physics-based engineering software package Barracuda Virtual Reactor from AWS Partner CPFD Software (CPFD). Powered by high-performance computing (HPC) on Amazon Web Services (AWS), Barracuda Virtual Reactor enables FLSmidth to more efficiently run simulations that are critical to optimizing its cement technologies.

Speeding Up Mission-Critical Simulations

"We would regularly run simulations that took 1–2 weeks to complete for a single design analysis," says Sam Zakrzewski, a fluid dynamics specialist at FLSmidth. "Comparing five design alternatives would take 5–10 weeks on a fairly high-end engineering workstation if we were to run them serially." Ideally, FLSmidth engineers preferred to compare as many design iterations as they could through physics-based simulations before identifying and implementing the final design. To simulate multiple design scenarios simultaneously, the company needed to invest in additional hardware. But simply adding compute capacity to its legacy system would be cost inefficient, as FLSmidth would still have to pay for the added infrastructure even when not in use.

Tapping into Vast Compute Capacity in the Cloud

To deliver the powerful, elastic, and cost-effective compute capacity required to run sophisticated simulations concurrently, the company recognized that it needed a cloud solution. FLSmidth and CPFD consulted AWS on the appropriate HPC cloud services. Amazon Elastic Compute Cloud (Amazon EC2), a service that provides secure, resizable compute capacity in the cloud, emerged as an obvious choice. For this workload, CPFD chose Amazon EC2 P3 Instances with NVIDIA Tesla V100 GPUs, because Virtual Reactor could harness the compute capabilities of NVIDIA GPUs. The other HPC services involved were Amazon FSx for Lustre, a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads, and NICE DCV, a high-performance remote display protocol that provides a secure way to deliver remote desktops and application streaming from any cloud or data center to any device.

FLSmidth and CPFD also used AWS ParallelCluster, an AWS-supported open-source cluster management tool that makes it simple to deploy and manage HPC clusters on AWS, to integrate other HPC services into the architecture. Once the cluster was up and running, FLSmidth was soon able to run multiple workloads concurrently. For one project, FLSmidth ran five simulations over a single weekend, a feat that just months prior would have taken over 40 days to complete sequentially using limited on-premises capacity. The p3.8xlarge Amazon EC2 instance enabled the simulations to run on four NVIDIA Tesla V100 GPUs. Switching to the NVIDIA GPUs alone resulted in a time reduction of nearly 4 times over FLSmidth's legacy on-premises compute capability.
Iterating and Innovating Its Way to Zero Emissions

Because Amazon EC2 is available across 24 Regions and 77 Availability Zones, FLSmidth's engineers have local access to the AWS-powered Barracuda Virtual Reactor across the company's various global teams. "Using Virtual Reactor, we've explored a wider range of possibilities than we ever could have considered using physical testing for scaling up to industrial size," says Rüdiger Zollondz, vice president of innovation and R&D at FLSmidth. "AWS gave us speed, scalability, and flexibility in our simulations."

By using CPFD's Barracuda Virtual Reactor powered by cloud compute capacity from AWS, FLSmidth has brought together leaders in cement technology, advanced industrial fluid-particle simulations, GPU computing, and cloud computing to drive positive change. "The digitalization technology enables us to optimize the energy efficiency and emissions of our cement technologies as well as minimize our overall carbon footprint," says Zollondz. AWS, like FLSmidth, has a perpetual impulse to improve and innovate. As FLSmidth continues to iterate on its cement technologies and edge closer to fulfilling its MissionZero initiative, AWS will continue to step up its support by releasing new features and services. Already the teams at CPFD and FLSmidth are eager to try the newly available Amazon EC2 P4d Instances, which use NVIDIA A100 Tensor Core GPUs.

Benefits of AWS
• Reduced simulation project time frames from months to days
• Tapped virtually unlimited compute capacity
• Gained on-demand access to the latest NVIDIA GPU technology
• Enabled broader R&D exploration into bold environmental solutions

About FLSmidth
Present in more than 60 countries, FLSmidth delivers sustainable productivity to the global mining and cement industries around the world."

FLYING WHALES Case Study.txt,"FLYING WHALES Runs CFD on AWS to Quickly Launch Environmentally Friendly Cargo Transport Airships

FLYING WHALES is a French startup that is developing a 60-ton payload cargo airship for the heavy lift and outsize cargo market. The project was born out of France's ambition to provide efficient, environmentally friendly transportation for collecting wood in remote areas. "We have one of the biggest forested areas in Europe, but these areas are on mountains that are very difficult to access," says Guillaume Martinat, lead aerodynamics engineer for FLYING WHALES. "This is why we need to create an airship that can load and unload cargo without landing, in hovering flight."

FLYING WHALES is using its ability to scale quickly to complete more work than before. Because of the wide variety of AWS instance types available, the company can perform complex simulations that were not possible in an on-premises environment.
For example, some ground-effect calculations that are critical to sizing the airship would have required the company to block its entire on-premises cluster for weeks. Now, those calculations can be performed quickly, without having to delay other activities. "There were some studies we couldn't do because we lacked the compute resources," says Martinat. "Now, we can do everything we want to. It's not just a matter of being faster on AWS—it's a matter of having the ability to get the job done. Furthermore, by selecting high-memory hardware among the large range of available instance types, we are now able to remotely generate finer/heavier meshes than we could on-premises, for better CFD accuracy."

Moving an HPC Platform to AWS

Initially, the company relied on an in-house high-performance computing (HPC) cluster to perform the CFD analysis. However, the cluster only had 200 cores, and the company didn't have the scalability or flexibility it needed to support the workloads. FLYING WHALES also needed to ensure its IT environment was cost-effective and ready for a 2021 model delivery. "As a startup, we were lacking the resources to meet that deadline on our own," says Martinat.

Turning Around CFD Workflows 15 Times Faster

FLYING WHALES leveraged AWS expertise to accelerate the HPC solution's adoption time. Running its HPC environment on AWS, FLYING WHALES can turn around CFD workflows faster than before. "We can run CFD workflow jobs 15 times faster on AWS thanks to the computing power and inter-node network performance we get using the Amazon EC2 C5n.18xlarge instances and EFA," says Martinat. "As a result, we can complete jobs in days instead of the months it used to take."

Additionally, the on-demand availability of resources helps FLYING WHALES engineers perform many computations simultaneously, instead of performing each job sequentially. As a result, engineers can spend more time analyzing data and creating intellectual property instead of managing infrastructure. With these capabilities, along with the direct support from AWS, FLYING WHALES will be able to deliver its first airship in 2024, as planned.
To design its airship, FLYING WHALES runs complex Computational Fluid Dynamics (CFD), a tool to numerically simulate the flow of any fluid, and structural analysis simulations, which require large amounts of compute capacity. The company cannot perform physical testing because the airship is too large, and testing would prove too expensive and take too much time. Instead, engineers need data to size the airship and define workloads for every flight phase. CFD gives engineers this much-needed data without having to manufacture any parts, enabling a much faster design process. However, each computation requires about 600 cores, and it takes approximately 400 computations to generate one model, requiring significant computational resources.

FLYING WHALES chose to move its HPC environment to the cloud, running its CFD workloads on AWS. "We evaluated several cloud providers, and AWS provided the best performance for us," says Martinat. Specifically, FLYING WHALES chose to run on Amazon Elastic Compute Cloud (Amazon EC2) C5n.18xlarge instances, which support Elastic Fabric Adapter (EFA) as the Amazon EC2 instance network interface. The C5n instances provide the power and scalability FLYING WHALES needs for its CFD workloads. The company also uses AWS ParallelCluster to simplify the deployment and management of an HPC cluster for running CFD simulations on AWS. Now, using NICE DCV, FLYING WHALES can securely stream applications while dramatically decreasing data transfer costs, so engineers can inspect solutions without ever having to download them locally. FLYING WHALES also took advantage of the financial and technical assistance, including AWS credits, support plan credits, and training, provided through the AWS Activate program. "The credits and technical support from AWS helped us get off the ground faster than we could have on our own," says Martinat.

Rapid Scaling to Support 600-Core Computational Models

FLYING WHALES is relying on AWS to scale its HPC environment quickly to support 600-core computational models, each 6 TB in size. "We have almost unlimited compute capacity on AWS, which gives us a level of scalability nearly equivalent to the power of a national supercomputer," says Martinat. "If we need 6,000 cores, we can use all those cores, which means we can do all our computation at the same time, whenever we need to." Also, the company's engineers don't have to wait in job queues to perform simulations, which saves dozens of hours each week.

FLYING WHALES provisions C5n instances using Amazon EC2 Spot Instances, spare Amazon EC2 capacity available at up to a 90 percent discount. With Spot Instances, FLYING WHALES was able to lower the cost of its HPC clusters by 64 percent.
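For illustration, a single CFD node of this kind could be requested as in the sketch below. In practice AWS ParallelCluster provisions and manages these nodes for FLYING WHALES; the AMI, subnet, security group, and placement group here are placeholders, not values from the case study.

```python
import boto3

ec2 = boto3.client("ec2")

# A minimal sketch: launch one c5n.18xlarge as a Spot Instance with an
# Elastic Fabric Adapter attached for low-latency inter-node MPI traffic.
ec2.run_instances(
    MinCount=1,
    MaxCount=1,
    ImageId="ami-0123456789abcdef0",               # placeholder HPC AMI
    InstanceType="c5n.18xlarge",
    InstanceMarketOptions={"MarketType": "spot"},  # Spot pricing, up to ~90% off
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "InterfaceType": "efa",                    # enable Elastic Fabric Adapter
        "SubnetId": "subnet-0123456789abcdef0",    # placeholder
        "Groups": ["sg-0123456789abcdef0"],        # placeholder
    }],
    Placement={"GroupName": "cfd-cluster"},        # hypothetical placement group
)
```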
More Flexibility for Engineers

With the flexibility of AWS ParallelCluster, the company's engineers can get HPC jobs up and running in 15 minutes, instead of taking months to acquire, configure, and manage servers. "We can tailor our instances to fit CFD job sizes by using AWS ParallelCluster," says Martinat. As an example, if the company doesn't need large compute capacity, engineers can select an instance type that might be less expensive and then scale it up when necessary. "We get flexibility and cost savings by using this solution. This was key for us as a startup with limited resources," says Martinat.

Thanks to the scalability and flexibility of AWS, FLYING WHALES can now focus on its core business: designing innovative cargo airships. "For our company, the strength of AWS is that it helps us scale and customize our HPC cluster so we always have an environment that performs well and responds to our CFD workloads," says Martinat. "This will not only enable us to launch our product on time, but it will also help us grow our company."

Benefits of AWS
• Runs CFD workflow jobs 15x faster
• Completes CFD jobs in days instead of months
• Scales HPC environment to support 600-core computational models
• Expects to launch first airship on schedule

About FLYING WHALES
FLYING WHALES, founded in France in 2012, is developing a cargo airship for the heavy lift and outsize cargo market. The company's environmentally friendly airships can transport up to 60 metric tons of goods at altitudes close to 3,000 meters and in difficult-to-reach areas."

Fujita Health University Case Study _ Amazon Web Services.txt,"Fujita Health University Aims to Improve Continuity of Patient Care and Deliver Higher Quality Healthcare with Patient Records on AWS

Transitioning to Patient-Centric Care Supported by Cloud Technology

The Fast Healthcare Interoperability Resources (FHIR) standard was instituted in 2012 to provide a standardized format for healthcare information exchange. FHIR allows healthcare providers to build interoperable records systems that facilitate faster and more accurate care with a full picture of patients' medical history.

Until recently, handwritten medical notes were the norm among medical practitioners in Japan. Even with the proliferation of electronic medical records, inputting these notes into proprietary EMR systems took away time that could otherwise be spent on patient interaction. The university aimed to change this with a digital PHR system. Furthermore, building a scalable PHR system would allow the university to store large volumes of images, including X-rays, in a central location, and to deploy compute-heavy artificial intelligence (AI) models to support diagnoses.
Fujita Health University is the largest private health university in Japan and is recognized for its cutting-edge research and advances in medicine. It has four teaching hospitals, with about 13,500 surgeries carried out at its largest hospital annually. To improve quality and continuity of care for its patients, Fujita Health University decided to build a personal health records (PHR) system according to FHIR standards.

Similar to electronic health records (EHR), PHR stores patient data from multiple clinical providers in an inter-organizational system. However, as medicine becomes more personalized and patient-centric, many organizations are adopting systems dominated by PHR, which, unlike EHR, are controlled and managed by patients rather than the medical institutions where they seek treatment.

Ensuring Compliance with Three Japanese Ministries

Fujita Health University had weekly meetings with AWS engineers to ensure its PHR system was securely set up. "The process went smoothly because AWS already had an FHIR-compliant framework in place," says Nobuyuki Kobayashi, head of IT at Fujita Health University. In addition, the university worked with a third-party auditor to ensure all processes, particularly the transfer of on-premises medical data to the cloud, were performed according to security best practices.

Expected benefits of the system include higher record reliability, reduced risk of diagnostic or other medical errors, and doctors being able to spend more time with patients rather than on administrative work. Fujita Health University will have access to API-driven software applications that can be deployed for drug discovery and the development of targeted medical devices and supplements. Integrated at-home health tracking devices and omnichannel communications are among the innovations being developed by other medical institutions using FHIR systems to create a safer and more convenient healthcare experience.

Fujita Health University takes advantage of Amazon Cognito for user access control and AWS WAF – Web Application Firewall to protect its patient records against common web exploits. It relies on Amazon Elastic Container Service (Amazon ECS) as a fully managed container orchestration tool and AWS Fargate as a serverless compute engine for deploying containerized applications. The marketing team is also exploring the construction of a data lake on AWS to streamline and personalize customer communications.
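To show what interacting with such a system might look like, here is a minimal sketch of a client writing a FHIR Patient resource to a FHIR-compliant REST endpoint. The endpoint URL, token, and patient data are placeholders; in a setup like the university's, the bearer token would be issued by Amazon Cognito, and this is not the university's actual code.

```python
import json
import urllib.request

FHIR_ENDPOINT = "https://example.execute-api.ap-northeast-1.amazonaws.com/dev"  # placeholder
ACCESS_TOKEN = "<cognito-issued-jwt>"                                           # placeholder

# A minimal FHIR R4 Patient resource; values are illustrative.
patient = {
    "resourceType": "Patient",
    "name": [{"family": "Yamada", "given": ["Taro"]}],
    "birthDate": "1980-01-01",
}

req = urllib.request.Request(
    url=f"{FHIR_ENDPOINT}/Patient",
    data=json.dumps(patient).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/fhir+json",
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    # A successful create returns 201 and the stored resource with its server ID.
    print(resp.status, resp.read().decode("utf-8"))
```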
Security was the leading requirement for a digital PHR system, to ensure data privacy and compliance with government regulations. The university chose to work with Amazon Web Services (AWS) because AWS provides a toolkit and guidelines on designing medical information systems that are compliant with three Japanese ministries: the Ministry of Health, Labour and Welfare; the Ministry of Internal Affairs; and the Ministry of Economy, Trade and Industry. FHIR Works on AWS, an AWS Solutions Implementation with an open-source software toolkit, facilitates the design of health data exchange interfaces and can be used to create a FHIR interface over existing healthcare applications and data.

Improving DR and Migrating Existing EMR

Furthermore, the university has bolstered disaster recovery (DR) with its cloud-based PHR system. Fujita Health University is situated on a major fault line in Japan, so having data on the cloud, protected from the threat of natural disasters, made sense for business continuity. Additionally, the university is now conducting a proof of concept to move its EMR system, which currently stores information from its clinicians' paper charts, from on-premises servers to the AWS Cloud.

Benefiting from Data-Driven Models and APIs

By building its PHR system on AWS, Fujita Health University has opened the door to Internet of Things (IoT) and other modern technology applications that rely on application programming interfaces (APIs). Kobayashi says, "We want to make our data work for us and our patients, empowering them to live a healthier life. Cloud solutions are more flexible for working with IoT, AI, and API-based solutions."

Preparing to Scale Records System to One Million Patients

Currently, Fujita Health University is trialing the PHR system with 6,000 staff members before rolling it out to the public. By 2023, patients visiting its teaching hospitals for annual health checks will be able to enter their data into the digital PHR system for the first time. The university anticipates adding one million patient records into the PHR system on AWS within three to four years after deployment.

Kobayashi concludes, "We learned a lot working with AWS engineers and business development teams on the architecture of our FHIR-compliant system. We also appreciate how AWS collaborated with our internal teams and external IT vendors and auditors throughout the project, which is not something that happens often in this industry. Everyone is rowing in the same direction, which gives us confidence for the next step in migrating our EMR to AWS."

Benefits
• Complies with FHIR standards and guidelines issued by 3 Japanese government ministries
• Stores high volumes of compute-heavy medical images
• Reduces potential for diagnostic or other medical errors
• Helps doctors to spend more time with patients
• Facilitates innovation with IoT, AI, and API-driven solutions

About Fujita Health University
Fujita Health University is the largest private health university in Japan, with four teaching hospitals. Its largest hospital performs 13,500 surgeries each year. Fujita Health University is a cutting-edge research institution and is committed to advanced medicine to benefit its patients and students.

To learn more, visit aws.amazon.com/health."

Game Studio Small Impact Games Runs Successful Alpha and Beta Tests Using Amazon GameLift _ Case Study _ AWS.txt,"Game Studio Small Impact Games Runs Successful Alpha and Beta Tests Using Amazon GameLift

SIG was founded in 2012 and is based in Leicester, England. It is working on developing tactile, first-person, looter-shooter games and has been involved with 22 different gaming projects. The gaming studio's focus is on making player-centric games with a small team of 12 developers.
Small Impact Games (SIG), a small, independent computer video game development company, wanted to launch Alpha and Beta testing for its new game, Marauders. However, SIG believed that the scale of these tests would go far beyond that of any game it had previously created and supported. The company wanted a solution that would let it retain primary control over its infrastructure. Because of the performance and scalability these tests required, and the large number of concurrent users expected worldwide, the company decided to use a suite of Amazon Web Services (AWS) solutions to support the game. Now SIG has access to the bandwidth it requires while maintaining the control it wants.

Opportunity | Searching for a Scalable, Reliable Infrastructure Solution for Small Impact Games

When SIG began working on Marauders, a first-person multiplayer game, the studio expected that demand from participation rates would overwhelm the game's testing. At the start of the Marauders testing period, SIG wanted to prepare for fluctuations in traffic by investing in a highly scalable infrastructure solution that would be simple to manage and deliver an optimal gaming experience. To meet these goals, the SIG team decided to use Amazon GameLift, a solution for dedicated game server hosting that deploys, operates, and scales cloud servers for multiplayer games. "We went all in using Amazon GameLift for Marauders," says James Rowbotham, lead developer at SIG. "GameLift is a service that gave us the ability to do specifically what we wanted to do." SIG's service upgrade ultimately proved to be a wise choice because the tests far exceeded expectations; the closed Alpha test logged 3,000 concurrent users, and the closed Beta test logged around 7,000 concurrent users.

Although SIG frequently hit its predefined bandwidth limits during the testing phases of Marauders, it was able to expand quickly as needed. Given the company's size, launching Marauders would not have been as successful without the flexibility afforded by AWS. "You hear stories about games where the whole system will just bottom out because of capacity, but we've never seen that or been close to it," says Rowbotham. "Even in our busiest points, we were matchmaking in 30 seconds, and everyone was having a great time. Scaling was never a concern using AWS." On AWS, SIG can monitor new players joining its game worldwide and can quickly deploy infrastructure when necessary. "Not only were we spreading out horizontally, but we were also dealing with vertical capacity issues, which were painless to resolve," says Rowbotham.
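As an illustration of the matchmaking flow described above, the sketch below requests a FlexMatch match for one player through the GameLift API and polls the resulting ticket. The configuration name and player attributes are hypothetical; Marauders' actual matchmaking rules live in its own GameLift configuration.

```python
import boto3

gamelift = boto3.client("gamelift")

# A minimal sketch: submit one player to a hypothetical matchmaking
# configuration and read back the ticket status.
response = gamelift.start_matchmaking(
    ConfigurationName="marauders-matchmaking",   # hypothetical configuration
    Players=[{
        "PlayerId": "player-123",
        "PlayerAttributes": {"skill": {"N": 42}},  # illustrative attribute
    }],
)

ticket_id = response["MatchmakingTicket"]["TicketId"]

# In a real client this would be polled (or delivered via event notification)
# until the ticket reaches COMPLETED, at which point connection info is attached.
status = gamelift.describe_matchmaking(TicketIds=[ticket_id])
print(status["TicketList"][0]["Status"])
```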
Solution | Using Amazon GameLift to Scale Testing to a Global Fan Base

Starting in July 2020, a core team of three lead developers transformed SIG's game development environment in 16 months, adopting several fully managed AWS services, including Amazon GameLift and AWS AppSync, which creates serverless GraphQL and Pub/Sub APIs that simplify application development through a single endpoint to securely query, update, or publish data. The scalability, elasticity, and control offered by these services worked perfectly for the small team working on Marauders, and the gaming studio scaled its infrastructure to support over 7,000 concurrent players during one of its tests. Moreover, SIG gained the ability to control its infrastructure in house so that it did not depend on a third party to keep its system running. "As we dug deeper into AWS, we gained more knowledge, and we retained control. Using AWS, we can be fully autonomous," says Mitchell Small, managing director at SIG.

In addition to retaining control over its infrastructure, SIG was able to improve the performance of Marauders while using AWS. The company wanted to add a persistent gear aspect to the game so that players could keep the gear they collected across different game sessions. The technical demands and capacity this feature required led the SIG team to adopt Amazon DynamoDB, a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at virtually any scale. SIG also wanted to use the data it collected to improve the game for players. For this purpose, the SIG team chose Amazon QuickSight, which empowers everyone in an organization to understand data by asking questions in natural language, exploring interactive dashboards, or automatically looking for patterns and outliers powered by machine learning.
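A persistent-gear store of this kind maps naturally onto a DynamoDB table keyed on player ID. The sketch below is an assumption-laden illustration, not SIG's implementation; the table name, key schema, and item shape are all hypothetical.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
gear_table = dynamodb.Table("MaraudersPlayerGear")  # hypothetical table name

# At the end of a match, persist the player's current inventory as one item.
gear_table.put_item(Item={
    "player_id": "player-123",                 # hypothetical partition key
    "gear": ["rusted-rifle", "scrap-armor"],   # illustrative loot
    "updated_at": "2022-10-01T12:00:00Z",
})

# At the start of the next session, read the inventory back.
item = gear_table.get_item(Key={"player_id": "player-123"}).get("Item")
print(item["gear"] if item else "no gear yet")
```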
Outcome | Becoming a Larger Player in the Gaming Market Using AWS

The success of the Marauders Alpha and Beta tests—marked by the sale of more than 80,000 copies of the game as of September 2022—has positioned SIG to become a significant player and successful developer in its market. Marauders has been featured on the home page of Team17, a video game developer and SIG's publisher. Additionally, as of September 2022, Marauders was a top Wishlist item on Steam, a popular video game digital distribution service and storefront, and the game's Discord channel had grown to over 36,000 members.

SIG's immediate goal is to focus on the early-access release of Marauders. The company is all in on AWS following the success of the Marauders Alpha and Beta tests. Going forward, SIG sees the potential for more growth with the flexibility and speed that using AWS provides. The company wants to use more events and tournaments to publicize its games, and it believes AWS is the solution to make that happen. "I'm so glad that we ended up fully embracing AWS. It gives you so much for such little work, which is perfect for us," says Rowbotham.

Benefits
• ≤ 30 second matchmaking during peak times
• 7,000 concurrent users
• Increased flexibility
• Maintained control over the development environment
• AWS AppSync to deploy regional infrastructure

About Small Impact Games
Small Impact Games is a small, independent computer video game development company that primarily creates tactile, first-person, looter-shooter games."

Games24x7.txt,"Games24x7 Accelerates Machine Learning Lifecycle with Cloud-Native Data Science Tools on AWS

Games24x7 is India's leading multigame platform, with offerings such as RummyCircle, My11Circle—India's second-largest fantasy games platform—and U Games, a portfolio of casual games. The company leverages hyper-personalization and data science to provide superior user experiences. Games24x7 sought to modernize its machine learning (ML) pipeline using cloud-native tools, improving data science productivity with Amazon SageMaker Studio and Amazon EMR, reducing overhead, and automating ML processes for faster iterations.

About Games24x7
Games24x7 is an India-headquartered online gaming company with a portfolio that spans skill games and casual games. Founded by New York University–trained economists in 2006, the company is backed by marquee international investors. It specializes in using behavioral science, technology, and artificial intelligence to provide an exceptional game-playing experience across its platforms.

Opportunity | Solving for Bottlenecks that Delay Solution Delivery

Games24x7 believes that data science is the future of mainstream gaming. Hyper-personalization—driven by data, analytics, and machine learning (ML)—is at the core of Games24x7's business. As Games24x7 has grown, so has the number of business use cases for its ML models. Scaling was becoming increasingly tedious for its team of data scientists, and post-production activities such as ML model monitoring were growing cumbersome. Tridib Mukherjee, vice president & head, AI & Data Science at Games24x7, explains, "The volume of data that we handle involves a lot of infrastructure configuration and frequent scaling up. Our pipelines often timed out when we were processing heavy loads and had to be restarted, which was a productivity drain."

System bottlenecks also often prolonged hypothesis testing, which typically comprises 80 percent of data scientists' workloads. "We're experimentation-oriented problem solvers, not ML engineers, and we need to try many iterations before finalizing a model," says Mukherjee. Under the previous system, it could take weeks to formulate and test analytics hypotheses. In a highly competitive industry such as gaming, this was simply too long.

Cost was also a growing concern. Without a background in ML engineering, data scientists typically overprovisioned virtual servers running on Amazon Web Services (AWS). The business sought to increase data science efficiency by leveraging cloud-native automation tools for faster iterations at scale.
Solution | Adopting MLOps for Increased Automation and Productivity

Games24x7 had been using Amazon SageMaker for ML model training and Amazon EMR as a big data framework. The company consulted its AWS account team, then began optimizing its ML pipeline by leveraging more cloud-native capabilities and serverless delivery models. With support from its AWS team, Games24x7 modernized its ML models, following MLOps best practices and automating key training, production, and post-production processes.

As a first step in the process, the company adopted Amazon SageMaker Studio, a fully managed development environment that allows data scientists to quickly move through the ML model lifecycle. The environment automates post-production monitoring of ML models, and data scientists can scale individual jobs separately. Following that, teams enhanced data workflows with AWS Step Functions, a visual workflow service, to create ML workflows.

Next, Games24x7 switched to Amazon EMR Serverless to automate infrastructure management. Data scientists no longer need to overprovision instances for experimentation or shut down instances when they're done. This has led to significant time and cost savings. "The rate of iteration is about 10 times faster than before, which allows us to consistently deliver projects on time or even ahead of schedule," Mukherjee says.
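To illustrate what removing that provisioning work looks like in practice, here is a minimal sketch of submitting a Spark job to an EMR Serverless application through the API. The application ID, IAM role, and script path are placeholders, not values from Games24x7's environment.

```python
import boto3

emr = boto3.client("emr-serverless")

# A minimal sketch: submit a Spark job to a pre-created EMR Serverless
# application. Capacity is provisioned and released by the service, so no
# cluster sizing or manual shutdown is needed.
response = emr.start_job_run(
    applicationId="00f1234567890abc",  # placeholder application ID
    executionRoleArn="arn:aws:iam::123456789012:role/EmrServerlessJobRole",  # placeholder
    jobDriver={
        "sparkSubmit": {
            "entryPoint": "s3://my-bucket/jobs/feature_engineering.py",  # placeholder script
            "sparkSubmitParameters": "--conf spark.executor.memory=4g",
        }
    },
)
print(response["jobRunId"])
```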
Data scientists now enjoy higher autonomy thanks to reduced interdependencies between their team and those responsible for infrastructure and engineering. The company is using Amazon SageMaker as a fully managed development environment, Amazon EMR as a big data platform, and AWS Step Functions with Amazon SageMaker Pipelines to orchestrate its ML pipelines. With the support of AWS, Games24x7 automated post-production tasks such as ML monitoring to increase productivity and empower its data scientists to solve more business problems, faster.

With Amazon SageMaker Pipelines, Games24x7 has greater visibility into its ML pipeline and models. It uses the model registry in Amazon SageMaker to store all model metadata and evaluation metrics, which data scientists use to track models and share progress among team members. Collaboration has improved, and it's much easier for one team member to pick up where another left off in developing and testing models.
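A pipeline of this shape can be sketched with the SageMaker Python SDK as below. This is an assumption-heavy illustration using the SDK's classic pipeline constructs, not Games24x7's code; the image URI, role, S3 paths, and model package group name are all placeholders.

```python
from sagemaker.estimator import Estimator
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep
from sagemaker.workflow.step_collections import RegisterModel

# A minimal sketch: train a model, then register it in the SageMaker model
# registry so metadata and evaluation metrics are tracked in one place.
estimator = Estimator(
    image_uri="<training-image-uri>",          # placeholder
    role="<sagemaker-execution-role>",         # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",      # placeholder
)

train_step = TrainingStep(name="TrainModel", estimator=estimator)

register_step = RegisterModel(
    name="RegisterModel",
    estimator=estimator,
    model_data=train_step.properties.ModelArtifacts.S3ModelArtifacts,
    content_types=["text/csv"],
    response_types=["text/csv"],
    inference_instances=["ml.m5.xlarge"],
    transform_instances=["ml.m5.xlarge"],
    model_package_group_name="user-models",    # hypothetical registry group
)

pipeline = Pipeline(name="UserModelPipeline", steps=[train_step, register_step])
# pipeline.upsert(role_arn="<sagemaker-execution-role>"); pipeline.start()
```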
Outcome | Accelerating Iteration while Lowering Costs of Analyses

Since beginning the MLOps project on AWS, Games24x7 has driven a threefold increase in productivity. Previously, a team of eight data scientists and analysts could complete four projects within a year, with each project containing 15–100 individual models that influence factors such as user game choice. The Games24x7 team has grown to 30, and its expertise and efficiency have scaled dramatically: the company can now complete 50 projects a year.

These efforts to streamline and boost ML model deployment have paid dividends, with user retention increasing by 20 percent, and long-term attribution and revenue increasing by 10 percent. Games24x7 projects a significant indirect impact on long-term revenue thanks to its MLOps project.

Games24x7 prides itself on providing a responsible gaming platform. The company tracks its users' journeys and temporarily blocks players who start to become disruptive or fail to take breaks from marathon gaming sessions. It has deployed other data science use cases such as hyper-personalization, which offers a 360-degree view of each user's activities.

For Mukherjee, the greatest benefit of the modernization project with AWS has been productizing its ML models. By fully leveraging the rich feature set within AWS analytics and ML tools, Games24x7 has reduced model iteration time, improved productivity, and lowered analytics costs. "AI and ML are truly at the core of our internal operations and user-facing platform," Mukherjee explains. "This couldn't have happened without the ability to scale up our development efforts seamlessly on the AWS Cloud."

Looking ahead, Games24x7 is considering how it could reuse or reposition already-developed models. The gaming industry is highly dynamic, and models are becoming irrelevant at an increasingly faster rate. Users come and go, but attrition rates are highest after the first platform trial. Games24x7 views post-production modeling activities as extremely important, to automate the identification of user drift and introduce features that cater to the profile of users who are starting to veer from the platform.

Support from AWS has been instrumental in upskilling Games24x7's teams and introducing the tools to fit the company's dynamic use cases. "AWS has helped us ensure we're using our resources optimally and following MLOps best practices. That's been key to our productivity acceleration," Mukherjee adds. "We've improved the quality of outcomes from our ML models as a result of our modernization efforts on AWS, and we can manage our overall data science ecosystem more efficiently," he concludes.

Benefits
• 10x faster iteration cycle
• 3x higher productivity
• 20% increase in user retention
• Optimizes architecture with AWS support

To learn more, visit aws.amazon.com/solutions/analytics."

Ganit Transforms Fast Fashion Apparel Retail with Intelligent Demand Forecasting on AWS _ AWS Partner Network (APN) Blog.txt,"AWS Partner Network (APN) Blog

Ganit Transforms Fast Fashion Apparel Retail with Intelligent Demand Forecasting on AWS
by Gaurav H Kankaria, Vaishnavi B, and Sriram Kuravi | on 28 JUN 2023 | in Amazon Forecast, Artificial Intelligence, AWS Partner Network, Case Study, Customer Solutions, Industries, Intermediate (200), Retail, Thought Leadership

By Gaurav H Kankaria, Head of Strategic Partnerships and Engagement Manager – Ganit
By Vaishnavi B, Apprentice Leader – Ganit
By Sriram Kuravi, Sr. Partner Management Solution Architect – AWS

Gauging market demand for the apparel retail industry is challenging. The success of stock keeping units (SKUs) sold in this market depends on customer preference (fitting, feel, regional acceptance) and the latest trends, which can change frequently. Because of this, large amounts of stock remain unsold, impacting retailers' working capital in the short term (3-6 months) and eventually leading to large liquidations of leftover stock, reducing the company's overall profitability.

Ganit is an AWS Advanced Tier Services Partner with the Retail Competency that provides intelligent solutions at the intersection of hypothesis-based analytics, discovery-driven artificial intelligence (AI), and new-data insights. Over the years, Ganit has successfully deployed inventory management systems with intelligent demand forecasting at the core of its solutions. These systems have helped many clients optimize their inventory, leading to efficient working capital deployment and improvements in topline and bottom-line numbers.

In this post, we will discuss how Ganit helped an apparel retailer design an intelligent demand forecasting engine by addressing key business problems such as inventory stockouts, overstocking, and excess stock liquidation. We'll detail the approach to addressing these challenges and designing an efficient demand forecast and allocation engine using Amazon Forecast.

Customer Challenges

Ganit's customer is an apparel retailer selling more than ~1,500 unique SKUs at any point across its chain of stores. Demand patterns for its SKUs vary significantly across stores due to the company's diverse geographical presence within the country. A single apparel center of excellence (CoE) team carries out procurement and replenishment through a central warehouse (lead time to store varies between 1-7 days) for all SKUs. Two key challenges faced by the customer in running its operations are:

1. Decisions on what and how much to procure (procure-to-sell model) for all seasonal/fast fashion SKUs are made by subject matter experts (SMEs), which is subjective and leads to ~40% of all procured SKUs being liquidated in stock clearance sales 6 months after purchase, impacting overall profit margins.
2. Regular selling SKUs (like white T-shirts, socks, and inner garments) are replenished from the warehouse (procure-to-replenish model), leading to improper inventory allocation across stores and causing over- and under-stock events regularly.

These challenges negatively impact multiple key performance indicators (KPIs) like inventory turns, working capital, stockouts, overstocking, and procurement costs. They also lead to an increase in product damages that impact top and bottom line figures.
Solution Overview

To address the challenges faced by the customer, Ganit recommended a two-part solution covering initial stock allocation and stock replenishment:

1. An item attribute-based demand forecasting method was chosen for the fast fashion SKUs, as these SKUs didn't have any historical data for modelling. Item attributes like color, size, type, and price range were selected as model levels for demand forecasting.
2. Automated intelligent demand forecasting and an inventory optimization approach were used to address the inventory allocation issue. The demand forecasting engine was designed to use historical and external demand drivers (promotion, weather), and the inventory optimization engine was designed to accommodate varying demand, lead time, and supply chain constraints like minimum order quantity and service unit factors.

Figure 1 – Overall approach to building the automated replenishment system.

Attribute-Based Demand Forecasting

To study the demand pattern of fast fashion SKUs, historical sales were time-adjusted from the first day of sales through 183 days of sales (see Figure 2) using a Jupyter notebook on Amazon SageMaker.

Figure 2 – Standardizing data based on first sales date for target time series forecasting.

Analyzing the data, Ganit observed that SKUs followed an exponential decay pattern of sales at the overall org level, with fluctuating demand at the granular level (see Figure 3).

Figure 3 – Overall sales pattern across stores.

Based on the distribution of the demand observed, three models were chosen:

- Gamma distribution (GLM)
- Two-parameter exponential curve
- Three-parameter exponential curve

These models were built using the custom model feature on Amazon SageMaker. The Weighted Absolute Percentage Error (WAPE) metric was used to measure the accuracy of the models.

Figure 4 – Statistical models chosen for fit on historical time-adjusted sales data.

The three-parameter model had the best model-fit accuracy among the models chosen. This was due to the decay parameter in the model, which makes the decay faster initially and then slows it down (similar to what was observed in the sales trend). Model-fit results at lower hierarchy levels are shown in Figure 5. For simplicity in understanding, SKUs were classified into ABC segments based on their saliency.

Figure 5 – Model fit output for the three-parameter exponential model.
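As an illustration, a three-parameter exponential fit and the WAPE calculation can be sketched as follows. This is a minimal reconstruction on synthetic data, not Ganit's production code, and all coefficients are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def three_param_exp(t, a, b, c):
    # Sales decay quickly at first, then level off toward the floor c;
    # b is the decay parameter discussed above.
    return a * np.exp(-b * t) + c

t = np.arange(1, 184)  # days 1..183 since first sale, per the time adjustment
# Synthetic "actuals": a decaying trend plus noise, for demonstration only.
sales = 120 * np.exp(-0.03 * t) + 8 + np.random.default_rng(0).normal(0, 3, t.size)

params, _ = curve_fit(three_param_exp, t, sales, p0=(100.0, 0.05, 5.0))
fitted = three_param_exp(t, *params)

# WAPE: sum of absolute errors divided by sum of absolute actuals.
wape = np.abs(sales - fitted).sum() / np.abs(sales).sum()
print(f"Fitted (a, b, c): {params}, WAPE: {wape:.2%}")
```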
Using the outputs from the three-parameter model, a decision board was designed using Amazon QuickSight. This decision board provided guidance to the business on procuring SKUs and distributing them across stores based on their attributes. With this decision board, the decision-maker can:

- Get an estimate of what quantity to procure overall, in accordance with the budget allocated for procuring a new fast fashion SKU.
- Efficiently allocate those procured SKUs based on probability of success, shelf space available, and similar factors.

Figure 6 – Decision board for fast fashion SKU procurement and initial allocation.

For regular SKUs, the auto-replenishment model has two engines: an intelligent demand forecasting model and an inventory management system.

Demand Forecasting Engine

Amazon Forecast was chosen to build the intelligent forecasting model for the auto-replenishment system. This model was designed to predict demand at the Store-SKU-Week level for a rolling six weeks. The datasets used were:

- Historical Target Time Series (TTS) data, used to learn sales trends and seasonality.
- Regressor Time Series (RTS) data, which includes factors like promotion, liquidation, stockouts, and holidays, so the model learns the impact on demand of events that occurred in the past.
- Store-Item Metadata, used to capture synergies like halo and cannibalization effects between SKUs. The halo effect occurs when the purchase of one SKU positively correlates with the purchase of another; that is, when two SKUs are frequently bought together. The cannibalization effect is when the purchase of one SKU negatively impacts the demand for another SKU.

TTS, RTS, and Store-Item Metadata were fed as inputs to Amazon Forecast. Ganit tried and tested multiple modelling techniques—namely exponential smoothening (ETS), ARIMA and its variations, Prophet, CNN-QR, and DeepAR+ (the AutoML feature was also used). The CNN-QR model produced the best acceptable results and was chosen as the forecasting model.

During the model design, three forecasts were generated at the p40, p50, and p60 quantiles, with p50 being the base quantile that has an equal probability of over- and under-forecast. The selection of quantiles was based on SKU classification (SKUs were classified into fast- and slow-moving based on days of inventory). p60 was chosen for fast-moving SKUs, as the business impact of customer loss was significantly higher than that of holding extra inventory, and p50 was chosen for slow-moving SKUs. Once the forecast export was complete, the files were combined to yield the consolidated forecast file. Using historical estimates, Ganit ran the forecast file through its bias-corrector mechanism to adjust for bias and select the right quantile for each store-SKU combination.

Inventory Management System

There are two key elements required to build an efficient inventory management system: safety stock (SS) and reorder point (ROP). Ganit incorporated the forecasted demand and its variability in calculating the SS and ROP for an efficient stock replenishment system and proper allocation of SKUs across stores.

Safety stock (SS) = Minimum display quantity required at store + Demand variability
Reorder point (ROP) = SS + Rate of sale (RoS) × (Warehouse-to-store lead time + Purchase time)

Automated alerts and transfer orders from warehouse to stores were raised when net inventory at a store (stock on hand at the store + stock in transit + stock allocated to the store) fell below the reorder point. The automated inventory management system helped the client eliminate manual intervention in its procurement team, thereby minimizing stockout conditions caused by manpower shortages.
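The sketch below works through these formulas with illustrative numbers. Demand variability is approximated as a service-level multiple of the forecast-error standard deviation; the post does not specify how variability is quantified, so that and all input values are assumptions.

```python
def reorder_point(min_display_qty, demand_sigma, rate_of_sale,
                  lead_time_days, purchase_time_days, z=1.65):
    # SS = minimum display quantity + demand variability (here z * sigma,
    # an assumed ~95% service level). ROP = SS + RoS * (lead + purchase time).
    safety_stock = min_display_qty + z * demand_sigma
    rop = safety_stock + rate_of_sale * (lead_time_days + purchase_time_days)
    return safety_stock, rop

ss, rop = reorder_point(min_display_qty=6, demand_sigma=4.0,
                        rate_of_sale=3.0, lead_time_days=5, purchase_time_days=2)

# Net inventory = stock on hand + stock in transit + stock allocated to the store.
net_inventory = 20 + 6 + 0
if net_inventory < rop:
    print(f"Raise transfer order: net inventory {net_inventory} < ROP {rop:.0f}")
else:
    print(f"No action: net inventory {net_inventory} >= ROP {rop:.0f}")
```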
Production System Development

A robust technical architecture for the production system was designed and implemented following AWS Well-Architected best practices, enabling a sustainable, scalable, and cost-effective tool.

Figure 7 – Architecture of the automated replenishment system for regular SKUs.

Historical demand and regressor time series data were stored in Amazon Redshift, an optimized data warehouse with massive data processing speed for near-instantaneous retrieval. The latest regressor-related information was loaded into Amazon Simple Storage Service (Amazon S3) by business users to keep the data repository for forecast model development up to date. Amazon SageMaker was used to identify the hypothesis list and perform the analysis required to understand the correlation between the regressors and demand. Amazon S3 also served as the transformed data layer, holding cleaned and processed data ready for analytical consumption and storing the forecast outputs from Amazon Forecast. Amazon Forecast was used to test and run different models (ARIMA, Prophet, ETS, BSTS, DeepAR+, and CNN-QR) to improve accuracy levels. AWS Glue was used to run the bias correction mechanism and perform the reorder point calculation with near real-time stock inputs from the data warehouse. Amazon QuickSight was used to estimate the procurement quantity based on the budget provided by the user and to allocate SKUs across stores. The end-to-end process ran in the AWS ecosystem, secured through features such as AWS Identity and Access Management (IAM) access policies, security groups, virtual private cloud (VPC), row-level security for certain users, and data encryption using AWS Key Management Service (AWS KMS).

Business Impact

For fast fashion SKUs, Ganit observed that cost-per-invoice for procurement fell by ~15%, improving the division's working capital. Efficient allocation of SKUs increased revenue by ~3% and reduced damage to goods (shrinkage loss) by ~18%, improving both the top and bottom line of the business unit. For regular SKUs, Ganit defined the baseline as a weighted average of the same weekday over the previous four weeks (the client had no earlier forecasting model) and estimated a ~12-percentage-point improvement in forecast accuracy (from 71% to 83%). The automated replenishment system reduced inventory on hand by ~2 days (improving working capital), reduced stockouts by ~3%, and increased the top line by ~1.4%.

Conclusion

A machine learning-based procurement and auto-replenishment system helped Ganit's client unlock value in its existing value chain. Given current market dynamics and competition, companies need to work towards unleashing the true capabilities of data and AI/ML. To give your supply chain operations an edge using the power of ML and data analytics, Ganit recommends applying Amazon Forecast and Amazon SageMaker to unlock additional value from your existing system. To learn more about Ganit and its solutions, reach out to info@ganitinc.com.

Ganit – AWS Partner Spotlight

Ganit is an AWS Partner that provides intelligent solutions at the intersection of hypothesis-based analytics, discovery-driven AI, and new-data insights. Contact Ganit | Partner Overview" Generating 100000 Images Daily Using Amazon ECS _ Scenario Case Study _ AWS.txt,"How Scenario Produces 100,000 Images Daily Using Generative AI on AWS (2023)

Scenario Incorporated is a generative artificial intelligence company that accelerates time to market for game developers by harnessing artificial intelligence to create style-aligned images and assets in minutes. Game development company Scenario Incorporated (Scenario) wanted to reduce time to market for game studios by using generative artificial intelligence (AI) to create style-consistent assets, but it had to deliver fast to meet industry demand for its offering. Studios need to generate many assets and variations based on their artwork, and Scenario aims to assist artists by putting AI to work on these noncreative tasks.
Scenario was founded to revolutionize the way in-game and marketing assets are produced for studios. Without the assistance of AI, game artists spend valuable time on repetitive tasks to mass-produce assets for their games, time that could be spent creating more original visuals that attract players and make games more engaging. "It's super time consuming for game artists to generate assets, edit them, send them for approval, and go back and forth with their colleagues," says Marie Gerard, head of growth at Scenario. "That's not the core of what an artist in the gaming industry wants to do."

Because AI is not inherently creative, Scenario needed its solution to be simple for customers to interact with. "As a game studio, you bring your own art to Scenario, and our solution accelerates the development process by generating style-aligned images," says Hervé Nivon, cofounder and chief technology officer of Scenario. "The challenge is scalability—when customers generate images, they're not willing to wait for minutes." Scenario has to deliver images in seconds so that customers can train their models and generate the game assets that suit their aesthetic.

To get its product up and running quickly, Scenario committed to going all in on Amazon Web Services (AWS). The company used Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service, to build its generative AI offering. Using Scenario's API-first offering, studios can generate hundreds of usable characters, props, and landscapes for their games in minutes from team workspaces or directly within their games.

Scenario built its solution exceptionally fast. "We wrote the first line of code on October 13, 2022," says Nivon. "We built the beta in 2 months with only three engineers, and Scenario generated over one million images in its first 2 weeks." The company used a host of AWS services to accelerate its development process. It chose Amazon API Gateway, a fully managed service to create, publish, and secure APIs at nearly any scale, to act as the "front door" for its applications, and it implemented a continuous integration and continuous deployment process on AWS Cloud Development Kit (AWS CDK), a tool that accelerates cloud development by using common programming languages to model applications. "Without AWS CDK, Scenario wouldn't have been possible. All the infrastructure is deployed through it, so we are doing almost nothing manually," says Nivon.
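Scenario's stack definitions aren't public; as a minimal sketch of the infrastructure-as-code approach described above, a CDK app (shown in Python, one of the languages CDK supports) that models a containerized service might look like this. Names, sizes, and the container image are illustrative:

# Hypothetical sketch: a CDK v2 stack modeling a load-balanced container service.
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2, aws_ecs as ecs, aws_ecs_patterns as ecs_patterns
from constructs import Construct

class ImageServiceStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        vpc = ec2.Vpc(self, "Vpc", max_azs=2)
        cluster = ecs.Cluster(self, "Cluster", vpc=vpc)
        # One construct provisions the load balancer, ECS service, and task definition.
        ecs_patterns.ApplicationLoadBalancedFargateService(
            self, "ImageService",
            cluster=cluster,
            cpu=1024,
            memory_limit_mib=2048,
            desired_count=2,
            task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
                image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
            ),
        )

app = App()
ImageServiceStack(app, "ImageServiceStack")
app.synth()

Running cdk deploy against an app like this creates or updates the whole service with no manual provisioning, which is the "almost nothing manually" workflow Nivon describes.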
"With only three engineers, we built the cloud backend, the infrastructure, and the native mobile app," says Nivon. "The strategy was to use AWS services that reduce the development workload and are simple to maintain, while meeting low-latency and availability requirements." In keeping with that strategy, Scenario uses Amazon ECS to run the containers that its image-generation application uses, and it uses AWS Batch, which efficiently runs hundreds of thousands of batch and machine learning computing jobs while optimizing compute resources, to train its machine learning models.
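A hypothetical sketch of queueing a training job on AWS Batch with the AWS SDK for Python follows; the job queue and job definition names are invented, since Scenario's actual pipeline is not public:

# Hypothetical sketch: submit a model-training job to an existing AWS Batch queue.
import boto3

batch = boto3.client("batch")
response = batch.submit_job(
    jobName="train-style-model-001",        # invented names for illustration
    jobQueue="gpu-training-queue",
    jobDefinition="style-model-training:1",
    containerOverrides={"command": ["python", "train.py", "--epochs", "10"]},
)
print("Submitted Batch job:", response["jobId"])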
After launching its beta in December of 2022, Scenario scaled to over 40 countries in 3 months. "We haven't had any downtime since our launch, even though we've been growing so quickly," says Nivon. "Our company has served and generated millions of images with only three people, proving a new use case for generative AI with little time and effort." As of March 2023, Scenario provides customers with approximately 100,000 images each day.

Scenario expects that its tools will have a lasting impact on the game industry. If artists no longer have to devote time to marketing and other repetitive tasks related to asset generation, they can focus on producing more rich, detailed, and original content. "Michelangelo had assistants, and so do the fine artists of today," says Gerard. "Scenario gives game artists an AI assistant so that they can focus on creative work." Similarly, if game developers can easily generate game assets, they can spend more time creating engaging storylines. "Scenario is empowering creatives to waste less time on repetitive tasks and devote themselves to the game they're developing," says Gerard.

Based on its first 3 months on the market, Scenario hopes to soon be a household name in the gaming industry. "We just launched our mobile app and acquired companies doing texture generation and art pixelization, which will be built into Scenario," says Nivon. "We're also working on 3D-image generation, and we're not constrained by the infrastructure, so we have plenty to work on." Scenario plans to remain all in on AWS as it continues to grow. "The culture of AWS is really part of our DNA," says Nivon. "We are hiring for leadership principles, customer obsession, and a bias for action. Working with that culture in mind is simple, and those values have greatly helped us achieve our goals."

Key results: built a generative AI offering in 2 months; scaled to 40 countries in 3 months; provides millions of images with three engineers; accelerated time to market for game studios; liberated game artists from noncreative tasks.

AWS services used: Amazon API Gateway, a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale; Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service that simplifies deployment, management, and scaling of containerized applications; AWS Batch, which lets developers, scientists, and engineers efficiently run hundreds of thousands of batch and ML computing jobs while optimizing compute resources; and AWS CDK." Generative AI for Telcos_ taking customer experience and productivity to the next level _ AWS for Industries.txt,"AWS for Industries

Generative AI for Telcos: taking customer experience and productivity to the next level

by Chris Featherstone | on 16 JUN 2023 | in Amazon CodeWhisperer, Amazon SageMaker JumpStart, Generative AI, Industries, Telecommunications

According to a recent Gartner® CEO survey, The Pause and Pivot Year, what is the "top new technology that CEOs believe will significantly impact their industry over the next three years"? You guessed it: artificial intelligence. "21% of CEOs say AI is the top disruptive technology." i

Telcos are not alone in recognizing the immense power of artificial intelligence (AI); virtually all business leaders are eager to harness its potential. There are several exciting variants, but the one that has captured everyone's attention recently is generative AI, a type of AI that can create new content and ideas, including conversations, stories, images, videos, and music. This technology promises to enhance customer experiences, boost employee productivity, streamline operations, and much more. Mark Raskino, VP analyst at Gartner, said generative AI will "profoundly impact business and operating models." ii Telcos (and everyone else) are racing to invest in this transformative capability to avoid being left behind. However, realizing the full potential of generative AI requires the right infrastructure, expertise, and support. In this post, we explore some of the most promising use cases for Telcos and explain how AWS can help you innovate with generative AI.

"Fear of missing out [FOMO] is a powerful driver of technology markets. AI is reaching the tipping point where CEOs who are not yet invested become concerned that they are missing something competitively important." Mark Raskino, VP Analyst, Gartner iii

Generative AI represents the next evolution in AI

Generative AI represents the next evolution in AI, seamlessly empowering Telcos to create diverse types of content, such as text, images, audio, and synthetic data. This capability is a significant time-saver and productivity booster, providing accurate and up-to-date information that fills skills gaps and enables Telco employees to focus on other crucial tasks. Here are some compelling use cases for generative AI in the Telco industry (a sketch of the first follows the list):

Customer support – Instantly providing accurate and personalized responses to customer queries through chatbots and virtual assistants.
Network performance – Identifying potential network issues, suggesting troubleshooting steps, and automating maintenance tasks.
Marketing – Predicting customer preferences, generating targeted content, and offering smart product recommendations.
Software development – Automating software development with text/voice to code, filling skills gaps, and empowering non-coding specialists.
Sales – Improving productivity and sales with B2B offer generation and sales toolkits.
Operations – Producing insights to help optimize operating costs and reduce revenue leakage through cross-platform correlation and analysis.
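As a sketch of the customer-support use case, one chatbot turn against the Amazon Bedrock runtime API might look like the following. This assumes Bedrock access in your account; the model ID and request body shown follow one provider's text-completion format and vary by model:

# Hypothetical sketch: a single customer-support turn via Amazon Bedrock.
import boto3
import json

bedrock = boto3.client("bedrock-runtime")  # requires Bedrock access in your account
body = json.dumps({
    "prompt": "\n\nHuman: A customer asks why their mobile data is slow after "
              "reaching 50 GB of usage. Draft a short, friendly reply.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})
response = bedrock.invoke_model(modelId="anthropic.claude-v2", body=body)
print(json.loads(response["body"].read())["completion"])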
The benefits of adopting generative AI are clear: more innovation, more efficient services, more productive employees, and, ultimately, happier customers. All of these factors contribute to a significant competitive advantage. However, we are still in the early days. Customers have told us there are a few big things standing in their way today. First, they need a straightforward way to find and access high-performing foundation models (FMs) that give outstanding results and are best suited to their purposes. Second, customers want integration into applications to be seamless, without having to manage huge clusters of infrastructure or incur large costs. Finally, customers want it to be easy to take the base FM and build differentiated apps using their own data (a little data or a lot). Since the data customers want to use for customization is incredibly valuable IP, they need it to stay completely protected, secure, and private during that process, and they want control over how their data is shared and used. And whatever customers are trying to do with FMs, running them, building them, or customizing them, they need the most performant, cost-effective infrastructure that is purpose-built for machine learning (ML). Fortunately, Telcos can overcome these challenges and achieve dramatic savings and productivity gains. This is where AWS comes in.

How AWS supports Telcos in exploring the potential of generative AI:

Choosing the right foundation model. Amazon Bedrock is a managed service that provides access to generative AI models from leading AI startups like AI21 Labs, Anthropic, and Stability AI, as well as Amazon's own Titan models, enabling Telcos to select the best model for their use case. All models are available through APIs, which makes it easy to build generative AI capabilities into customer and third-party applications. Amazon SageMaker JumpStart offers FMs not available in Amazon Bedrock, such as Cohere and LightOn, as well as open-source models such as Flan-T5, GPT-J, and BLOOM.

Saving time and money on foundation model training. Amazon Elastic Compute Cloud (Amazon EC2) Trn1 instances powered by AWS Trainium are purpose-built for high-performance deep learning (DL) training of generative AI models. They reduce the time required to train models from months to weeks, or even days, while also lowering costs: Telcos can save up to 50% on training costs versus other EC2 instances.

Improving productivity and reducing deployment costs. When deploying generative AI models at scale, most costs are associated with running the models and doing inference. Telco customers can cost-effectively crunch massive amounts of data with Amazon EC2 Inf2 instances powered by AWS Inferentia2. Inf2 instances are optimized for large-scale generative AI applications with models containing hundreds of billions of parameters, and they deliver up to 4x higher throughput and up to 10x lower latency than Inf1 instances.
Building applications faster and more securely. Amazon CodeWhisperer radically improves developer productivity by making coding seamless. The AI coding companion uses a foundation model to generate code suggestions in real time, based on developers' natural-language comments and prior code in an integrated development environment. It also has built-in security scanning (powered by automated reasoning) for finding and suggesting remediations for hard-to-detect vulnerabilities.

Are you prepared to unleash the full potential of generative AI? At AWS, our mission is to empower every developer with AI/ML capabilities, and we have a long-standing history of collaborating with Telcos on a wide range of AI initiatives. We continually develop purpose-built ML services and trained models to address everyday use cases, such as automatic object recognition, voice-to-text transcription, recommendation generation, fraud detection, chatbots, and automated call centers. Moreover, we understand the importance of tailoring these services to Telco-specific needs: we pay meticulous attention to the unique characteristics of Telco data and customer behaviors, ensuring seamless and secure integration with other Telco-specific data sources like the network. We invite you to explore how AWS can accelerate your innovation, streamline cost management, and keep you ahead of the competition in a Telco-focused generative AI workshop. Register here to learn more.

i Gartner, 2023 CEO Survey — The Pause and Pivot Year, Mark Raskino, Stephen Smith, Kristin Moyer, Gabriela Vogel, 17 April 2023
ii Gartner Press Release, Gartner Survey Finds CEOs Cite AI as the Top Disruptive Technology Impacting Industries, May 17, 2023
iii Gartner Press Release, Gartner Survey Finds CEOs Cite AI as the Top Disruptive Technology Impacting Industries, May 17, 2023
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Chris Featherstone is an AI and data expert who helps organizations improve their business processes and workflows through innovative technology solutions. At AWS, Chris specializes in data architectures, chatbots, virtual assistants, and all things artificial intelligence and machine learning, specifically for communication service providers and telecommunications customers. With over 26 years of experience, Chris has worked with dozens of enterprise clients to build custom AI, machine learning, and automated conversational interfaces tailored to their needs. His work focuses on optimizing data governance and usage, automating manual tasks, personalizing user experiences, and enabling smarter decision-making through data-driven insights and AI/ML. Chris is passionate about the possibilities of AI and its potential to transform businesses, and has delivered data and AI solutions that drive real impact for organizations. You will find him speaking at re:Invent as well as other industry conferences. In his spare time, you'll find Chris and his family in the mountains of Montana, where they reside.
" Generative AI with Large Language Models New Hands-on Course by DeepLearning.AI and AWS _ AWS News Blog.txt,"AWS News Blog

Generative AI with Large Language Models — New Hands-on Course by DeepLearning.AI and AWS

by Antje Barth | on 28 JUN 2023 | in Announcements, Artificial Intelligence, Generative AI, Launch, News

Generative AI has taken the world by storm, and we're starting to see the next wave of widespread adoption of AI, with the potential for every customer experience and application to be reinvented with generative AI. Generative AI lets you create new content and ideas, including conversations, stories, images, videos, and music. It is powered by very large machine learning models that are pre-trained on vast amounts of data, commonly referred to as foundation models (FMs).

A subset of FMs called large language models (LLMs) are trained on trillions of words across many natural-language tasks. These LLMs can understand, learn, and generate text that's nearly indistinguishable from text produced by humans. Not only that, LLMs can also engage in interactive conversations, answer questions, summarize dialogs and documents, and provide recommendations. They can power applications across many tasks and industries, including creative writing for marketing, summarizing documents for legal, market research for financial, simulating clinical trials for healthcare, and code writing for software development.

Companies are moving rapidly to integrate generative AI into their products and services. This increases the demand for data scientists and engineers who understand generative AI and how to apply LLMs to solve business use cases. That's why I'm excited to announce that DeepLearning.AI and AWS are jointly launching a new hands-on course, Generative AI with large language models, on Coursera's education platform that prepares data scientists and engineers to become experts in selecting, training, fine-tuning, and deploying LLMs for real-world applications.

DeepLearning.AI was founded in 2017 by machine learning and education pioneer Andrew Ng with the mission to grow and connect the global AI community by delivering world-class AI education. DeepLearning.AI teamed up with generative AI specialists from AWS, including Chris Fregly, Shelbee Eigenbrode, Mike Chambers, and me, to develop and deliver this course for data scientists and engineers who want to learn how to build generative AI applications with LLMs. We developed the content under the guidance of Andrew Ng and with input from various industry experts and applied scientists at Amazon, AWS, and Hugging Face.

Course Highlights

This is the first comprehensive Coursera course focused on LLMs that details the typical generative AI project lifecycle, including scoping the problem, choosing an LLM, adapting the LLM to your domain, optimizing the model for deployment, and integrating into business applications. The course not only focuses on the practical aspects of generative AI but also highlights the science behind LLMs and why they're effective. The on-demand course is broken down into three weeks of content with approximately 16 hours of videos, quizzes, labs, and extra readings.
The hands-on labs, hosted by AWS Partner Vocareum, let you apply the techniques directly in an AWS environment provided with the course, including all the resources needed to work with the LLMs and explore their effectiveness. In just three weeks, the course prepares you to use generative AI for business and real-world applications. Let's have a quick look at each week's content.

Week 1 – Generative AI use cases, project lifecycle, and model pre-training. In week 1, you will examine the transformer architecture that powers many LLMs, see how these models are trained, and consider the compute resources required to develop them. You will also explore how to guide model output at inference time using prompt engineering and by specifying generative configuration settings. In the first hands-on lab, you'll construct and compare different prompts for a given generative task: summarizing conversations between multiple people, such as support conversations between you and your customers. You'll explore prompt engineering techniques, try different generative configuration parameters, and experiment with various sampling strategies to gain intuition on how to improve the generated model responses.

Week 2 – Fine-tuning, parameter-efficient fine-tuning (PEFT), and model evaluation. In week 2, you will explore options for adapting pre-trained models to specific tasks and datasets through a process called fine-tuning. A variant of fine-tuning, called parameter-efficient fine-tuning (PEFT), lets you fine-tune very large models using much smaller resources, often a single GPU (see the sketch after the week summaries). You will also learn about the metrics used to evaluate and compare the performance of LLMs. In the second lab, you'll get hands-on with PEFT and compare the results to the prompt engineering from the first lab. This side-by-side comparison will help you gain intuition into the qualitative and quantitative impact of different techniques for adapting an LLM to your domain-specific datasets and use cases.

Week 3 – Fine-tuning with reinforcement learning from human feedback (RLHF), retrieval-augmented generation (RAG), and LangChain. In week 3, you will make the LLM responses more humanlike and align them with human preferences using a technique called reinforcement learning from human feedback (RLHF). RLHF is key to improving the model's honesty, harmlessness, and helpfulness. You will also explore techniques such as retrieval-augmented generation (RAG) and libraries such as LangChain that allow the LLM to integrate with custom data sources and APIs to further improve the model's responses. In the final lab, you'll get hands-on with RLHF: you'll fine-tune the LLM using a reward model and a reinforcement-learning algorithm called proximal policy optimization (PPO) to increase the harmlessness of your model responses, and you'll evaluate the model's harmlessness before and after the RLHF process to gain intuition into the impact of RLHF on aligning an LLM with human values and preferences.
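As a taste of what the PEFT material covers, a minimal LoRA setup with the Hugging Face peft library might look like this. The model choice and hyperparameters are illustrative, not necessarily those used in the course labs:

# Illustrative LoRA (a PEFT technique) setup; only the small adapter matrices train.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,            # rank of the low-rank update matrices
    lora_alpha=32,   # scaling factor for the updates
    lora_dropout=0.05,
)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # typically well under 1% of the full model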
Enroll Today

Generative AI with large language models is an on-demand, three-week course for data scientists and engineers who want to learn how to build generative AI applications with LLMs. Enroll for Generative AI with large language models today.

— Antje

Antje Barth is a Principal Developer Advocate for AI and ML at AWS. She is co-author of the O'Reilly book Data Science on AWS. Antje frequently speaks at AI/ML conferences, events, and meetups around the world. She also co-founded the Düsseldorf chapter of Women in Big Data." Genpact Delivers Innovative Services to Customers Faster by Running Critical Applications on AWS _ Case Study _ AWS.txt,"Genpact Delivers Innovative Services to Customers Faster by Running Critical Applications on AWS (2023)

Genpact collaborates with AWS Professional Services to securely migrate its infrastructure to the cloud, delivering solutions to global customers faster and more efficiently.

About Genpact: Genpact is a global professional services firm that transforms its clients' businesses and shapes their futures. The company is guided by its real-world experience redesigning and running thousands of processes for hundreds of global companies. With deep industry and functional expertise, Genpact runs digitally enabled operations and applies its Data-Tech-AI services to design, build, and transform businesses. It serves 800 clients across the globe in industries including financial services, consumer goods, retail, healthcare, manufacturing, and technology. "We partner with our clients to identify their key challenges and create innovative solutions based on process, data, technology and AI expertise to help them overcome those challenges and deliver transformation at scale," says Mohan Kumar, cloud engineering lead at Genpact.

Opportunity | Improving Pace of Innovation and Provisioning: To gain agility and flexibility, Genpact decided to migrate its business-critical applications to the cloud. "If we wanted to test new applications, we would typically spend 12–16 weeks to procure and provision new servers," says Santhosh Srihari, cloud & operations lead at Genpact.

Solution | Migrating 45 Business-Critical Applications: To increase innovation agility, Genpact engaged Amazon Web Services (AWS) Professional Services and migrated its application environment to AWS. The company established a global AWS Landing Zone, with an exclusive zone for its business in China, allowing it to set up a multi-account, scalable, and secure AWS environment. "Our custom AWS Landing Zone has helped Genpact ensure resource deployments are in sync with global regions and that new account organization units are able to automatically deploy resources on demand," notes Srihari.
In collaboration with AWS Professional Services, Genpact embarked on an Experience-Based Acceleration (EBA) program, a step-by-step transformation methodology that expedites the AWS Cloud migration journey by empowering internal teams. "EBA was a highly collaborative experience, mobilizing teams to work towards a common goal by breaking down silos and removing blockers to accelerate and scale cloud adoption," says Kumar.

Genpact migrated 45 business-critical applications, including customer-facing applications and core services such as Active Directory, from its on-premises data centers to AWS. In total, the company shut down over 1,300 physical servers and decommissioned 14 data centers. It also optimized operational costs, largely as a result of decommissioning those data centers. "We've eliminated hardware refresh and maintenance costs, as well as data center power and cooling costs," Srihari explains.

With the agility gained from migrating to AWS, Genpact has significantly reduced deployment times for new applications. "Previously, it would take at least 12 weeks to procure and provision servers to deploy an application. Now, we can provision on demand," Srihari says. The company can also quickly set up sandbox environments for developers to test new features and applications before moving them to production. With accelerated testing and deployment times, Genpact can deliver solutions to customers faster and thus differentiate its business from competing professional services providers.

Genpact uses over 30 AWS services to support a wide range of business applications. It leverages AWS Service Catalog to govern infrastructure-as-code templates, AWS Config to deploy a compliance-as-code framework, and Amazon API Gateway to create application programming interfaces (APIs) at scale. "We've improved our security posture with the ability to manage security from a central location on AWS, deploying rules that are specific to our technology and blocking malicious events," Srihari says.
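As a small illustration of compliance-as-code in the spirit described above, registering an AWS-managed Config rule with the AWS SDK for Python might look like this. The specific rule is our example, not necessarily one Genpact deploys:

# Hypothetical sketch: enable an AWS-managed Config rule programmatically,
# so compliance checks live in code rather than in manual console steps.
import boto3

config = boto3.client("config")
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-ssl-requests-only",
        "Description": "Flag S3 buckets that allow requests without SSL.",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_SSL_REQUESTS_ONLY",  # AWS managed rule
        },
    }
)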
To bolster security, Genpact implemented AWS Identity and Access Management (IAM), defining detailed roles for functional teams in its global organization. Furthermore, Genpact's AWS infrastructure yields proactive security insights the company uses to thwart potential threats. Should an issue occur, engineers can perform a root cause analysis to understand the error and avoid a recurrence.

Outcome | Delivering Solutions Faster with On-Demand Deployment: Genpact is currently implementing a cloud-based contact center on Amazon Connect and AWS serverless technologies. "We're looking to further modernize our business applications, and AWS Professional Services is helping us do that," says Kumar. He concludes, "With AWS, we have made our infrastructure more agile, resilient, automated and flexible to support dynamic business demand and drive collaborative innovation."

Key results: 45 applications migrated; 1,300+ servers migrated; 14 data centers decommissioned; 12 weeks saved on infrastructure setup. AWS services used: AWS IAM (specify who or what can access services and resources in AWS, centrally manage fine-grained permissions, and analyze access to refine permissions); AWS Service Catalog (create and manage catalogs of IT services approved for use on AWS); AWS Config (assess, audit, and evaluate the configurations of AWS resources); Amazon API Gateway (create, publish, maintain, monitor, and secure APIs at any scale). To learn more, visit aws.amazon.com/solutions/cloud-operations." Geo.me Reduces Customers Annual Geospatial Costs by up to 90 Using Amazon Location Service _ Geo.me Case Study _ AWS.txt,"Geo.me Reduces Customers' Annual Geospatial Costs by up to 90% Using Amazon Location Service (2023)

Learn how Geo.me, a software company, optimized costs for customers using Amazon Location Service.

About Geo.me: Geo.me is a software company that specializes in handling location data for large enterprises. Its solutions gather, analyze, and deliver location data to its customers using smartphone apps, navigational systems, and mobile devices. Founded in 2008, Geo.me delivers location-based applications that provide geospatial web services like routing, geofencing, tracking, placing points of interest, and storing geolocation data for enterprises in the B2B sector. It does this in two ways. First, it builds digital mapping applications that take an asset's or customer's location, including route, and render it onto a map to show where the customer or asset is located at any given time; this includes truck routing, asset tracking, and locating specialized refueling stations and residential addresses on a map. Second, the company provides the capability of storing geocoded records for future geospatial calculations, assessments, and analysis.

Opportunity | Supporting Customers' Geospatial Information Needs: Each month, Geo.me serves around 120 million API calls. Handling millions of geolocation records requires a system that can store geocoded records and use geospatial capabilities like routing, tracking, and locating points of interest to improve delivery times by optimizing vehicle routing. Geo.me needed a new location data solution to better serve its global customers in the retail, logistics, transportation, and insurance industries. Its existing location data service provider prohibited the storing of geocoded data and was too expensive for some customers, while Geo.me was dealing with millions of geocoded records that it wanted to store or cache. The company needed a backend system capable of storing these location records in a secure, private, and cost-effective way while performing geospatial calculations. Additionally, its existing solution could not handle truck routing, so the company sought a global solution, important to much of its customer base, that would avoid the need for different regional truck routing providers.
Solution | Opening Industry Opportunities and Optimizing Costs through Enhanced Location Data Storage: Geo.me started using Amazon Web Services (AWS) solutions in 2008, and as an AWS Partner since 2014 it had the opportunity to be an early adopter of Amazon Location Service in 2021. Because the service includes routing, tracking, geofencing, stored geocodes, and other managed location data services that Geo.me offers to its customer base, Geo.me did not need to create its own solutions. Such efficiencies aligned with the company philosophy of using recognizable, managed services, a philosophy that makes the best use of Geo.me's resources and has earned the company credibility with its customers. "We decided very early on in our evolution that we would always stand on the biggest shoulders we could," says Stuart Grant, cofounder and director of Geo.me.

Because Amazon Location Service incorporates data from HERE Technologies and Esri and integrates seamlessly with other AWS services, Geo.me gained access to mapping, geocoding, geofencing, asset tracking, and routing data on a global scale, and it could accelerate application development by using AWS capabilities beyond Amazon Location Service to meet its customers' needs.

As for geocoding, "Amazon Location Service offered better terms of use than our existing solution, thus reducing annual geocoding costs for our customers by more than 90 percent while also removing onerous compliance processes from their workflows," says Grant. Amazon Location Service transactional geocoding is a tenth of the cost of other providers, and customers can save even more by combining it with stored geocodes for frequently accessed addresses.

Geo.me has also helped European transportation customers plan and optimize delivery routes so that trucks can avoid roads that are narrow, unpaved, or otherwise unsuitable for heavy traffic. Using Amazon Location APIs, Geo.me clients can optimize routing to avoid roads where trucks are not allowed due to bridge heights and other regulations.
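As an illustrative sketch, a truck-aware route request through the Amazon Location Service API might look like this; the route calculator name and vehicle dimensions are hypothetical:

# Hypothetical sketch: request a truck route that respects vehicle constraints.
import boto3

location = boto3.client("location")
route = location.calculate_route(
    CalculatorName="trucking-route-calculator",  # invented resource name
    DeparturePosition=[4.8945, 52.3667],         # [longitude, latitude], Amsterdam
    DestinationPosition=[13.4050, 52.5200],      # Berlin
    TravelMode="Truck",
    TruckModeOptions={
        "Dimensions": {"Height": 4.0, "Length": 16.5, "Width": 2.55, "Unit": "Meters"},
        "Weight": {"Total": 38000, "Unit": "Kilograms"},
    },
)
summary = route["Summary"]
print(summary["Distance"], summary["DistanceUnit"], summary["DurationSeconds"], "s")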
By using Geo.me's solution to plan reliable routes, customers can more efficiently meet their sustainability targets; for example, customers could identify usage opportunities for the 24 percent of European intracountry truck journeys that run with empty vehicles.

Outcome | Adding Mapping Capabilities: Using Amazon Location Service, Geo.me has saved time that the team can now spend on product innovation, such as adding sophisticated heuristic algorithms to optimize route planning. "Because Amazon Location Service provides building blocks like geocoding or routing, which are core to any geospatial service, we can now shift the focus to what we do with the data we collect," says Grant. "We can now analyze that data and look at how more efficient heavy road transportation routes can be generated." Looking forward, Geo.me is actively exploring how to use mapping capabilities with Amazon Location Service to visualize and optimize the data its customers collect. For example, insurance customers can geolocate risks and then analyze the concentration of those risks, or use geofencing capabilities to analyze historical situations where an insured asset enters and exits permitted, high-risk, or low-risk areas, adjusting fees based on the data they collect. "Now that Amazon Location Service is starting to provide out-of-the-box building blocks to do things like location data storage, the focus can shift to what customers can do with that data," says Grant. "There's a huge amount of analytical capability that Amazon Location Service has the potential to unlock."

Key results: reduced customers' annual geocoding costs by 90%; expanded company market opportunities; enhanced customer sustainability goals; increased company scalability." Gileads Journey from Migration to Innovation on AWS _ Case Study _ AWS.txt,"Gilead's Journey from Migration to Innovation on AWS (2023)

Learn how Gilead, a leading global biopharmaceutical organization, built a data mesh architecture on AWS to accelerate innovation and drug commercialization.

About Gilead: Gilead Sciences Inc. is a biopharmaceutical company that has pursued and achieved breakthroughs in medicine for more than 3 decades, committed to advancing innovative medicines to prevent and treat life-threatening diseases, including HIV, viral hepatitis, and cancer. For the past 35 years, Gilead has focused on bold advances in biopharmaceutical innovation, setting high standards for research into HIV, viral hepatitis, cancer, and other diseases.

Opportunity | Using AWS to Host and Manage 50 PB of Data: Gilead Sciences Inc. (Gilead) wanted to modernize its data infrastructure and use cloud innovation to improve its operational performance. With thousands of virtual machines running hundreds of regulated applications in on-premises data centers, the company was challenged to balance governance and agility. "We wanted to support our business stakeholders to innovate faster and discover drugs with higher efficacy," says Marc Berson, chief information officer (CIO) of Gilead. The company also wanted to increase its operational resilience for data recovery and backup in the event of a disaster without substantial capital investment, and to automate GxP compliance to further streamline its processes. "We have aspirations to bring more than 10 transformative therapies to patients by 2030 and strategic priorities to expand internal and external innovation," says Murali Vridhachalam, head of cloud, data, and analytics at Gilead. Seamless access to trusted data was essential to these strategic priorities, and the company realized it needed to move away from traditional monolithic data management approaches and apply modern engineering practices and organizational models to quickly generate insights and respond to changing business needs.

Solution | Implementing a Data Mesh Architecture on AWS: Gilead chose Amazon Web Services (AWS) as its preferred cloud provider for its innovation, willingness to invest in co-innovation, and strong industry capabilities, and in 2020 it began migrating 70 percent of its workloads, including its critical workloads, from its data centers to the cloud to streamline and democratize data access. Sustainability and cost efficiency were other important considerations: after thoroughly reviewing its infrastructure in 2020, the company decided to accelerate its cloud migration to reduce the carbon footprint of its data systems. "Migrating our data analytics to the cloud also meant that we could avoid large capital expenditure in bringing our data centers up to higher standards of resilience," says Berson. "Today, we manage over 50 PB of data on AWS." Gilead also adopted SAP HANA on AWS as part of its enterprise resource planning transformation.

The underlying architecture uses several major AWS services. Gilead uses Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance, to store and retrieve data at scale; Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud; and Amazon Redshift, which uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, to get from data to insights faster.

Storing data is only part of the challenge, however. Gilead adopted a data mesh approach to improve agility, accelerate insight generation, and increase its return on investment. A simplified user interface helped business units easily find data products in the catalog, inspect their quality, and get access to the data through a federated query engine. On the producer side, four platform APIs reduced the friction for data producers to register their data products on the mesh, building a self-serve infrastructure; these included observability and data quality APIs that record data quality on a scorecard as part of the data catalog. Today, the mesh hosts hundreds of data products in the catalog, providing useful descriptions, row-level and column-level access, and cross-lines-of-business coordination. The platform's data stewards govern quality by looking at scorecards. "Now, we have business, technical, and observability metadata, along with service-level objectives and quality in our catalog," says Murali.
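Gilead's platform APIs are proprietary, but as a generic illustration of the consumer side, running a governed query against a cataloged data product through a federated query engine, something like Amazon Athena could stand in (the database, table, and bucket names here are invented):

# Illustrative only: query a cataloged data product via a federated SQL engine.
import boto3

athena = boto3.client("athena")
response = athena.start_query_execution(
    QueryString="SELECT study_id, COUNT(*) AS events "
                "FROM trial_events GROUP BY study_id",
    QueryExecutionContext={"Database": "clinical_data_product"},  # invented names
    ResultConfiguration={"OutputLocation": "s3://example-results-bucket/athena/"},
)
print("Query started:", response["QueryExecutionId"])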
"The data mesh platform has decentralized data ownership—we don't have to chase subject matter experts to go find information about the data because we have that in a catalog," adds Murali.

Outside of the data mesh, Gilead has built several other solutions to break down data silos and creatively approach innovation. These include the enterprise semantic search application Morpheus, which increases search result accuracy while reducing data search times by over 50 percent, and a Gilead data marketplace with massive data transfer speeds, built on AWS Data Exchange, which makes it simple to find, subscribe to, and use third-party data in the cloud. "We have a 38 PB observational dataset that previously took 36 hours for data transfer," says Murali. "Now it takes 6 minutes."

Outcome | Deriving Value from Data Analytics Using AWS: After 1 year in this new phase of optimization, Gilead has seen operational and financial improvements across capital expenditure avoidance, software asset consolidation, cycle-time improvements, and compliance automation. Three years into its cloud transformation, Gilead has big plans for the future. "The primary reason that we chose AWS was its passion for innovative transformation. With AWS, we have developed an enterprise data solution to create better access to and analysis of data across the organization using a data mesh approach," says Berson. "We had discussions on transforming the way clinical trials are performed and changing the way molecules are discovered." Armed with its new cloud foundation on AWS, the company feels confident in its ability to deliver lifesaving treatments faster.

Key results: 70% of data center footprint migrated to the cloud; 50 PB of data managed on the cloud; enhanced agility to deliver innovation; unlocked operating model transformation; increased sustainability and automated compliance." Global Unichip Corporation Case Study.txt,"GUC Enlists AWS Partner proteanTecs to Increase ASIC Reliability and Quality at Scale (2021)

About Global Unichip Corporation: Headquartered in Taiwan, Global Unichip Corporation (GUC) helps system and semiconductor companies design and develop application-specific integrated circuits (ASICs), or microchips. Its parent company, Taiwan Semiconductor Manufacturing Company (TSMC), is a global semiconductor foundry.

Each generation of ASICs has a more complex design and uses more advanced semiconductor processes, making it harder to reach quality targets. But these ASICs become components in data center systems, where uptime and system reliability are critical. To tackle that challenge, GUC engaged Amazon Web Services (AWS) Select Technology Partner proteanTecs, which uses deep data and machine learning to predict failures in electronics. Its software solution can monitor ASIC performance, even as ASICs operate in the field, with zero downtime or disruption to the system.
Growing in Scale and Complexity: Every time GUC releases a new generation of ASICs, the design and processes become more complex. "We've multiplied the number of transistors, the chip complexity, and the processing power many times, and with the recent revolution in advanced packaging technology, we can now assemble many different dies together in one heterogeneous integrated circuit package," explains Igor Elkanovich, chief technology officer at GUC. Big functional circuits are fabricated using several silicon dies. "There is a dense interconnect between the dies in order to provide high bandwidth and performance to our customers," says Elkanovich. "They demand reliability because most of the ASICs go to mission-critical applications, like data center applications that grow exponentially. And once they grow, the effect of every failure worsens. We want to develop the most complex designs while increasing reliability. And this is a challenge we address with proteanTecs."

GUC focuses on the design, interface intellectual property (IP) development, and management of ASIC manufacturing by its key shareholder, TSMC. The large-scale global semiconductor foundry manufactured 10,761 different products using 272 distinct technologies for 499 different customers in 2019. "We adopt a new semiconductor process, a new assembly technology, and new interfaces before the customer comes to us with their projects," says Elkanovich. "We work very closely with TSMC so that while its technology is still in development, we are already starting to adopt it and develop IP in parallel. By the time TSMC technology is available for the customer, the IP is silicon proven and a part of GUC's development flow."

Running High-Performance Computing Workloads on Amazon EC2 Spot Instances: "To quickly provide GUC feedback on a very large amount of data, proteanTecs uses AWS to achieve the scalability and flexibility it needs to support high-performance computing workloads that run millions of simulations each day," says Yuval Bonen, cofounder and vice president of software at proteanTecs. Through the AWS-powered proteanTecs analytics platform, GUC customers can closely monitor their ASICs to proactively detect and repair silicon failures. proteanTecs also uses Amazon Relational Database Service (Amazon RDS) to store application metadata; Amazon RDS provides cost-efficient, resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups, which saves the company's DevOps team a lot of time.

proteanTecs runs its high-performance computing workloads on Intel Xeon processor–powered Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, and its Kubernetes container orchestration system also runs on Amazon EC2 instances. Whenever proteanTecs sees a burst in workload, its Kubernetes cluster triggers a request to increase the number of Spot Instances so that it can process the workload with ease. Using Spot Instances (spare EC2 capacity available at up to a 90% discount compared to On-Demand prices) reduces the company's compute costs by approximately 60 percent.
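As a simplified sketch of requesting burst capacity on Spot (proteanTecs' actual scaling is driven by its Kubernetes cluster; the AMI and instance details here are placeholders):

# Hypothetical sketch: launch burst capacity as one-time Spot Instances.
import boto3

ec2 = boto3.client("ec2")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="c5.4xlarge",
    MinCount=1,
    MaxCount=10,                      # scale out for the simulation backlog
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
print(len(response["Instances"]), "Spot Instances requested")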
Facilitating Quality and Reliability of ASICs Using AWS Partner proteanTecs: GUC engaged proteanTecs to combine data derived from Universal Chip Telemetry technology embedded in the ASICs with predictive artificial intelligence and data analytics, using the proteanTecs cloud system on AWS, to track and repair silicon defects before they cause system failure. By taking these measures, GUC and proteanTecs can increase the quality and reliability of GUC's ASICs.

Since data privacy is important to GUC, proteanTecs provides GUC an Amazon Virtual Private Cloud (Amazon VPC), a logically isolated virtual network, which it runs on its own system using AWS. Any connection to the proteanTecs solution uses a virtual private network, a secure closed channel that reduces risk and prevents proteanTecs and GUC from seeing each other's data.

Building Additional Lines to Future Reliability: GUC and proteanTecs first collaborated on GUC's high-bandwidth memory interface IP for 2.5D die-to-die interconnects. In the typical design, the ASIC uses several high-bandwidth memory components with tens of thousands of lines connecting them. During normal ASIC operation, proteanTecs collects data from the Universal Chip Telemetry embedded in the ASIC and analyzes that data to assess the signal integrity of lines in the field. When proteanTecs detects a quality degradation in a line that may lead to future defects, the system replaces it with a preinstalled redundant line during the next maintenance cycle. This extends the ASIC's lifecycle, prevents system failure, and avoids costly replacements of failing systems in customers' data center applications, all with no downtime or disruption to the customers' normal operation.
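proteanTecs' telemetry analytics and prediction models are proprietary; purely as a toy illustration of the repair flow just described, flagging lines whose health degrades past a threshold so they can be swapped to redundant lines at the next maintenance cycle, consider:

# Toy illustration only; thresholds and scores are invented, not proteanTecs' method.
def plan_repairs(line_health: dict, threshold: float = 0.8) -> list:
    """Return interconnect lines whose health score fell below threshold;
    these are scheduled for a redundant-line swap at the next maintenance cycle."""
    return [line for line, health in line_health.items() if health < threshold]

telemetry = {"lane_0001": 0.97, "lane_0002": 0.74, "lane_0003": 0.91}
print(plan_repairs(telemetry))  # ['lane_0002'] -> swap to its redundant line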
GUC previously monitored its ASICs only during the manufacturing process, but by using proteanTecs it can maintain that visibility and repairability in the field. “We previously had little visibility into what happened in the ASICs,” says Elkanovich. “Once we added the proteanTecs solution, we got a totally different view. Now we observe and repair physical effects that we weren’t able to discover before.”

Even as customers’ data center applications grow and ASICs become more complex, GUC will continue to offer predictive ASIC monitoring using the solution offered by AWS Partner proteanTecs. “Some people think that with growing complexity, the reliability will inevitably be compromised,” says Elkanovich. “Our purpose is the opposite. Our goal is to bring our customers more scalability at an even better level of reliability.”

GUC Enlists AWS Partner proteanTecs to Increase ASIC Reliability and Quality at Scale
• Monitors and repairs ASICs in the field during normal system operation
• Achieves ASIC reliability and quality at scale

About Global Unichip Corporation

Headquartered in Taiwan, Global Unichip Corporation (GUC) helps system and semiconductor companies design and develop application-specific integrated circuits (ASICs), or microchips. Its parent company, Taiwan Semiconductor Manufacturing Company, is a global semiconductor foundry." Glossika case study.txt,"With an eye to expansion, Glossika is constantly innovating and developing new features that improve the learning process. One potential future feature would use machine learning models in Amazon SageMaker to analyze audio files hosted on Glossika. This analysis would generate two colored lines above the text of a given sentence: one color showing the intonation of the native speaker’s recording and another showing the intonation of a user’s uploaded recording. This information would let users see where their rhythm and intonation diverge from the native speaker’s, helping them improve their pronunciation and more objectively assess how natural they sound in any target language.

Michael Campbell, chief executive officer and founder of Glossika, elaborates, “Our algorithm considers several factors, such as how recently a student learned certain information and how well they’re retaining it. If they just learned a structure yesterday or are struggling to replicate the sentence independently, students will see that structure more frequently.”

While Glossika works with professional translators and voice actors to produce content, its Viva project crowdsources this information from users around the world who record and document their native languages in Glossika’s database. Participants who upload recordings of their language not only help preserve lesser-known languages and dialects but also earn a share of the subscription revenue generated from learners studying that language.

From Glossika’s experience, approximately 50,000 total sentence repetitions are necessary to “graduate” from the program, and it takes a typical learner approximately 300 hours to achieve this.
Glossika Builds Language-Learning Platform on AWS to Serve Users in 148 Countries

Glossika built its language-learning platform on AWS to ensure low latency for users in 148 countries and access to on-demand compute and storage as it expands.

Glossika is an education technology company whose application uses common sentence structures to train people to understand and speak more than 60 languages. Serving customers in 148 countries, Glossika curates content to match users’ preferences and ability, making learning more efficient. Benefits of building on AWS include:
• Serves customers in 148 countries with low-latency content delivery
• Stores textual and audio data of more than 350,000 sentences and translations
• Stores over 25 million user-uploaded audio recordings
• Scales infrastructure to accommodate a growing user base
• Saves human and financial resources with automation on the cloud
• Manages global incoming traffic to maintain high uptime

According to the Foreign Service Institute, where employees working in the US foreign affairs community receive training, it takes 600–2,200 class hours to learn a foreign language. These figures vary based on the complexity of the language relative to English and are also affected by a given learner’s ability, past experience, and exposure to the target language. Most teachers, however, tend to apply a one-size-fits-all approach in designing curriculum, which may not suit every student’s learning needs.

Curating Content to Match User Preferences

Glossika is an online learning site and mobile app that uses adaptive learning algorithms to offer customized content based on a student’s language proficiency level, learning progress, and interests. Campbell says, “We focus on efficiency and aim to streamline the learning process as much as possible. We use adaptive learning algorithms to determine when previously learned content should be reviewed. This ensures that more study time is spent practicing things users struggle with and less on repeating things they’ve already mastered.”

Glossika started out producing language-learning books and transitioned to an online model in 2017, followed by a mobile app in 2022. Upon going digital, Glossika chose Amazon Web Services (AWS) to build its IT infrastructure. Sheena Chen, chief operations officer at Glossika, says, “As a company striving for worldwide product adoption, we need cloud technology that can scale with us. AWS makes it easy to purchase the infrastructure we need right now and to adjust as we expand. AWS is also feature-rich and highly configurable, with an intuitive user console, all of which facilitates our growth as a startup in a sustainable way.”

Preserving Less-Spoken Dialects with Viva Project

In addition to scaling its business to add more users in more countries, Glossika recently launched a beta version of its Viva project to expand Glossika’s content offerings and preserve endangered and “minorized” languages such as Gaelic and Hakka.

Ensuring Uptime and Low Latency for Global Customers

To serve its global customers, Glossika relies on Amazon CloudFront as a content delivery network. The company is headquartered in Taiwan, but most of its customers live in the United States or Europe. To manage incoming traffic from around the world, its engineers built a high-availability architecture using Elastic Load Balancing. Campbell says, “The stability of AWS services is excellent. With active paying users in 148 countries, this is especially important for us. With AWS, we’re confident that our users have a reliable experience on our app no matter where they’re located.”

Teaching through 6,000 Core Sentence Structures

Glossika currently serves customers in 148 countries who are learning over 60 different languages. Its courses guide users through a massive database of sentences in their target languages. Sentences gradually increase in difficulty and are accompanied by recordings from native speakers. Users can adjust each course to their preferences. For instance, someone interested in learning Japanese to read manga comics could choose not to practice sentences about working in an office.

Glossika’s application works by organizing sentences according to the type of grammar and syntax they contain and how well a user is retaining them. At present, Glossika has uploaded about 6,000 unique sentences per language for learners to practice, and the company expects that figure to double in the next 2–3 years. Glossika uses Amazon Relational Database Service (Amazon RDS) with autoscaling enabled to store more than 350,000 sentences and their translations, Amazon Simple Storage Service (Amazon S3) to store over 25 million user-uploaded audio recordings cost-effectively, and Amazon ElastiCache for Redis as a low-latency caching service. Its adaptive learning algorithms run on Amazon Elastic Compute Cloud (Amazon EC2) instances. A sketch of a cache-aside lookup against this storage layer follows.
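The article names the storage and caching services but not how they fit together. A minimal cache-aside sketch in Python shows one plausible arrangement; the Redis endpoint, key scheme, and fetch_sentences_from_rds() helper are hypothetical stand-ins, not Glossika code:

import json
import redis

# Hypothetical ElastiCache for Redis endpoint.
cache = redis.Redis(host="sentence-cache.example.amazonaws.com", port=6379)

def fetch_sentences_from_rds(language: str, level: int) -> list:
    # Placeholder for a SQL query against the Amazon RDS sentence store.
    raise NotImplementedError

def get_sentences(language: str, level: int) -> list:
    """Serve sentences from Redis when possible, else fall back to RDS."""
    key = f"sentences:{language}:{level}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: low-latency path
    sentences = fetch_sentences_from_rds(language, level)
    cache.setex(key, 3600, json.dumps(sentences))  # cache for one hour
    return sentences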
Glossika’s users continue to offer praise for the application as well. One learner comments, “Nothing else combines comprehensible natural audio input and active language production with nearly the same amount of practice as Glossika; every session is challenging, but always comprehensible and rewarding.”

Glossika has big plans for its global business and will continue to rely on AWS as its cloud provider in the next phase of its journey. Chen concludes, “AWS is reliable and easy to use. Because AWS has efficiently taken care of server- and security-related issues, our engineers have been able to focus completely on product development since day one. We look forward to growing our business further on AWS.”" GoDaddy Case Study _ AWS.txt,"GoDaddy Centralizes Security Findings and Gains Insights Using AWS Security Hub

As a global leader in domain registration and web hosting, GoDaddy sought to embed best practices in its development and operational processes as it migrated to the cloud. The company was looking for a way to streamline the time-consuming processes of parsing and normalizing data from multiple security tools into a common format for search, analytics, and response and remediation.

As it began to migrate its on-premises resources to the cloud using Amazon Web Services (AWS), GoDaddy saw an opportunity to reimagine its security processes. It incorporated AWS Security Hub, a cloud security posture management service that performs security best practice checks, aggregates alerts, and facilitates automated remediation. Using Security Hub, GoDaddy manages security from a serverless, customizable, centralized location that has increased visibility and coverage while saving GoDaddy significant overhead and maintenance costs.

Aggregating Security Findings Using AWS Security Hub

When Security Hub became available in late 2018, GoDaddy incorporated it as a single source of truth for security findings on AWS. GoDaddy uses multiple in-house and third-party automated on-demand tools that scan its workloads for security misconfigurations and report the findings to Security Hub. Each team has its own set of AWS accounts and uses Security Hub to view security findings on those accounts. GoDaddy uses its own central ticketing tool and Security Hub to create problem tickets for the corresponding application teams, who receive alerts about the findings on their accounts. “We are running a large set of security tools, and using AWS Security Hub gives us a way to import results of these tools into a central place,” says Aarushi Goel, GoDaddy’s Application Security manager. “Our users no longer have to go to 10 different places to get findings. They just go to their account’s Security Hub and have findings from all the tools listed for them.” In addition, GoDaddy has automated the process of closing tickets upon remediation using AWS Lambda, a serverless, event-driven compute service that lets users run code for virtually any type of application or backend service without provisioning or managing servers. A sketch of that automation follows.
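GoDaddy's ticketing integration is internal, so its details aren't published, but Security Hub findings are delivered to Amazon EventBridge in a documented format, and a Lambda handler over those events could close tickets for resolved findings. A minimal sketch, with close_ticket() as a hypothetical wrapper around an internal ticketing API:

def close_ticket(finding_id: str) -> None:
    # Placeholder: call the internal ticketing system's API here.
    print(f"closing ticket for finding {finding_id}")

def handler(event, context):
    """Lambda handler for 'Security Hub Findings - Imported' EventBridge events."""
    for finding in event["detail"]["findings"]:
        # ASFF findings carry a workflow status; RESOLVED means remediated.
        if finding.get("Workflow", {}).get("Status") == "RESOLVED":
            close_ticket(finding["Id"])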
About GoDaddy

Founded in 1997, GoDaddy serves more than 21 million customers as a global leader in domain registration and web hosting. Headquartered in Tempe, Arizona, GoDaddy provides the tools that everyday entrepreneurs need to succeed online and in person. Benefits of its move to AWS include:
• Centralized and streamlined security findings
• Alleviated maintenance and overhead by automating processes
• Created customized dashboards for users
• Saved cost by not paying for downtime between scans
• Reduced mean time to remediate with continual vulnerability scanning

Using AWS, GoDaddy has been able to automate and streamline its security processes: running scans, reporting findings in Security Hub, and making findings available to users in its central ticketing system. Scans run every few hours with much better coverage than under the previous system, when scanning might have occurred only monthly. Automation saves time for GoDaddy’s developers as well as for customers, and the company saves money because it doesn’t pay for unused resources between scans. Application builders use Security Hub for a high-level view of their accounts and to remediate critical findings. “Using AWS serverless solutions, we don’t have to manage the infrastructure, including databases, to store security findings for all the accounts, so it’s very efficient for us,” says Goel.

Expanding Security Management Using AWS Services

GoDaddy’s use of Security Hub has been so successful that it has begun to extend its use alongside CirrusScan to scan legacy workloads. The process helps reduce coverage, latency, and consistency gaps between GoDaddy’s on-premises processes and those that use AWS. The company also plans to incorporate Amazon Inspector, an automated vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure. AWS rearchitected Amazon Inspector in November of 2021 so that it automates vulnerability management and delivers near-real-time findings, which reduces the delay between the introduction of a potential vulnerability and its remediation. “Our security program on AWS is far more mature and streamlined than our legacy on-premises infrastructure,” Goel says. “Using AWS Security Hub in conjunction with our in-house tools, we have come a long way in managing security risks since we migrated to AWS.”

Customizing Security Tools

GoDaddy built CirrusScan as a containerized solution using Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service that makes it simple for companies to deploy, manage, and scale containerized applications. To look for security vulnerabilities in the targeted accounts, CirrusScan uses third-party, open-source, and its own customized scanners. The scans run as independent Amazon ECS tasks using AWS Fargate, a serverless, pay-as-you-go compute engine that lets companies focus on building applications without managing servers. “AWS Security Hub made it straightforward for us to bring in our in-house-developed, customized tools,” says Goel.

Diagram 1: CirrusScan Overview
Diagram 2: CirrusScan Detailed Architecture

A sketch of launching one such scan task on AWS Fargate follows.
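The case study doesn't include CirrusScan's code, but launching a scanner as an independent Fargate task maps onto a standard ECS API call. A minimal sketch; the cluster, task definition, container name, and subnet below are hypothetical:

import boto3

ecs = boto3.client("ecs")

def run_scan(target_account: str) -> str:
    """Launch one scanner container as a standalone Fargate task."""
    response = ecs.run_task(
        cluster="cirrusscan",                    # hypothetical cluster name
        launchType="FARGATE",
        taskDefinition="vulnerability-scanner",  # hypothetical task definition
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # hypothetical subnet
                "assignPublicIp": "DISABLED",
            }
        },
        overrides={
            "containerOverrides": [{
                "name": "scanner",  # hypothetical container name
                "environment": [
                    {"name": "TARGET_ACCOUNT", "value": target_account}
                ],
            }]
        },
    )
    return response["tasks"][0]["taskArn"]

Because Fargate bills per task, nothing runs (and nothing is paid for) between scans, which matches the cost behavior the article describes.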
Initially, the company did all of its processing on premises, running a number of security tools that each provided findings that users had to access individually instead of from a central dashboard. In March of 2018, GoDaddy began to migrate a large part of its infrastructure to AWS and searched for scalable open-source or commercial tools that it could use to scan its accounts for security-related issues and centralize its findings. Unable to find a solution that met all of its criteria at that time, the company developed its own framework, CirrusScan, designed to run in conjunction with the AWS services GoDaddy was already using. However, CirrusScan did not include a convenient way to display findings from a central dashboard; pairing it with Security Hub closed that gap.

The security tooling in development pipelines notifies GoDaddy developers about security risks early in the application lifecycle, avoiding the deployment of insecure code in production. “As a result, our exposure is reduced, and we can do a lot more with a lot fewer people than we could before,” says Scott Bailey, GoDaddy senior software engineer for application security. In addition, GoDaddy discovers potential problems earlier in the development process, before they can impact production. This reduced latency also helps GoDaddy address issues proactively and at convenient times rather than respond to emergencies. “AWS Security Hub is there, it’s reliable, and it just works,” Bailey adds. “We can plug stuff into it from anywhere in any particular individual AWS account and then pull data out into the central account when we need to use it somewhere else. And we don’t have to worry about maintaining it or backing it up.”

When it hit a roadblock in development or needed general guidance, GoDaddy benefited from the online documentation available for Security Hub as well as quick, personalized assistance from AWS Support. The AWS Support team has facilitated GoDaddy’s understanding of best practices for using AWS, always considering the company’s particular requirements so that the team can better support GoDaddy’s objectives. “We don’t have to go through a series of escalations before we speak to an engineer,” Goel says. “AWS customer support has been above and beyond.”" Greenway Health Scales to Hundreds of Terabytes of Data Using Amazon DocumentDB (with MongoDB compatibility) _ Greenway Health Case Study _ AWS.txt,"Greenway Health Scales to Hundreds of Terabytes of Data Using Amazon DocumentDB

Greenway Health LLC provides electronic health record (EHR) solutions to over 50,000 healthcare organizations. The company, one of the oldest in its field, offers both software and services to support medical practices.

Opportunity | Developing a Highly Scalable and Secure Solution Using Amazon DocumentDB
When Greenway started its cloud journey, it had two existing EHR solutions with separate data processing and analytics workflows. Greenway wanted to build common ground between those two disparate datasets and turned to Amazon DocumentDB (with MongoDB compatibility), a fully managed native JSON document database that makes it easy and cost-effective to operate critical document workloads at virtually any scale, to centralize and normalize Greenway’s EHR data. By streamlining its technical infrastructure, Greenway built a solution that scales seamlessly to process hundreds of terabytes of data and makes it simpler for healthcare providers to focus on serving patients. “We didn’t want to deal with scaling and managing a MongoDB engine ourselves, so we used Amazon DocumentDB,” says Philip Nick, senior director of production engineering at Greenway.

Because Greenway provides EHR systems, privacy is of the utmost importance. Greenway is committed to delivering highly secure services, and its clients demand full HIPAA and 21st Century Cures Act compliance from all Greenway solutions. The need to protect patient data meant that every phase of the cloud migration had to be secure. However, Greenway was driven by more than a desire to meet regulations: it wanted to apply industry best practices to bring insights to its clients. The company opted to use AWS services to meet its complex project requirements. “AWS had the strongest offering for a number of services we were seeking and was the most willing to collaborate with us on our projects,” says Nick.

Solution | Using Amazon DocumentDB to Deliver Unified EHR Systems and Easily Use Other AWS Services

In 2021, Greenway started using Amazon DocumentDB to build both a rapid enterprise data hub solution and a change data capture engine. Greenway collaborated extensively with AWS Professional Services, a global team of AWS-certified experts that supplements customers’ teams with specialized skills and experience, to define the architecture of its new solution and identify the optimal tools for each step of its complex project. “We saw AWS absolutely step up to collaborate with us on this project and picked AWS because of this collaboration,” says Michael Macaluso, vice president of product management at Greenway. By planning out each element of the effort in tandem with dedicated AWS team members, Greenway accelerated the development process by 6–12 months. The project was complex, and Greenway experimented with several iterations until it found a set of solutions that met its performance requirements. The company succeeded by loading the raw data from Amazon S3 into Amazon DocumentDB, which acts as a mirror database of all its clients’ systems. When clients update their EHRs, the change is reflected in Amazon DocumentDB, and the data is dumped back into the unified model using Amazon S3 data lakes. “At full scale, with all the regulatory reporting functionality, we will be pulling forward nearly 100 TB of data using Amazon DocumentDB,” says Nick. A sketch of the mirroring step follows.
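The article doesn't show the mirroring code, but because Amazon DocumentDB is MongoDB compatible, an upsert with a standard MongoDB driver illustrates the idea. A minimal sketch in Python; the cluster endpoint, credentials, and collection names are hypothetical:

from pymongo import MongoClient

# Hypothetical Amazon DocumentDB cluster endpoint and credentials.
client = MongoClient(
    "mongodb://user:password@docdb-cluster.example.amazonaws.com:27017",
    tls=True,
    tlsCAFile="global-bundle.pem",  # Amazon DocumentDB CA certificate bundle
    retryWrites=False,              # DocumentDB does not support retryable writes
)
mirror = client["ehr_mirror"]["patient_records"]

def mirror_record(record: dict) -> None:
    """Upsert a changed EHR record so the mirror reflects the client's system."""
    mirror.replace_one({"_id": record["_id"]}, record, upsert=True)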
Nick adds, “Greenway is also benefiting from 99.999999999 percent, or ‘eleven nines,’ of durability and lower storage costs.” Benefits of the solution include:
• Scaled to hundreds of terabytes of data
• Migrated 20 years of patient data
• Developed a highly secure solution
• Unified EHR systems
• Accelerated system development by 6–12 months

Overview

Greenway Health LLC (Greenway), one of the first companies to offer electronic health record (EHR) solutions to medical providers, was seeking to unify its data processing and storage in the cloud. Greenway’s products had streamlined reporting for its customers, but using them was a manual and time-consuming process. Its two EHR solutions, Intergy and Prime Suite, have separate reporting mechanisms that required significant staff resources. Greenway wanted to build a powerful enterprise data hub that would serve as the foundation for all its solutions and reduce the complexity of development. Scalability was a crucial business requirement, and it quickly became clear that a cloud solution would deliver the best value to clients. “We have a large number of practices across the United States that rely on our services, so we needed something that would scale up seamlessly,” says Macaluso.

After laying out the solution architecture, Greenway used several AWS services to implement its project. The company built a change data capture engine on Amazon Simple Storage Service (Amazon S3), an object storage service, and migrated 20 years of historic patient data to the solution. It then transformed the data to feed its regulatory reporting engines. “For us, it was very helpful to choose solutions off the shelf,” says Nick.

Outcome | Powering a New Generation of Innovative Services on AWS

Greenway’s new unified data solution has made it simple for the company to focus on developing new offerings for its clients. Moving forward, Greenway will use AWS infrastructure to provide a central place for providers and vendors to share and interoperate with healthcare data. “With our new data solution on Amazon DocumentDB, we can now provide solutions and services to our clients at a speed that is unusually fast for the healthcare industry,” says Macaluso.

Greenway is excited to deliver its new unified data solution to clients. A seamless EHR experience makes it easier for healthcare organizations to center their resources on the provision of high-quality care to patients. The scale and durability that the company achieved using Amazon DocumentDB will have a real impact on customers. “Having this solution available makes things easier for our clients,” says Nick. “By using a cloud-based data solution, Greenway makes it easier for clients to adopt the company’s EHR software without requiring them to invest in their own data centers.”
The company needed a cloud-based solution that streamlined data reporting, sharing, and analyses and could replace the on-premises data centers at its medical institutions. Greenway was also committed to creating a secure and compliant solution that would meet stringent health-data regulations. It turned to Amazon Web Services (AWS) to unify its data offering. “Our goal was to capture, transform, and use the data from operational settings in a cloud-based environment powered by AWS to provide a launching point of new services for our clients,” says Macaluso." GSR Scales Fast on AWS to Become One of the Largest Crypto Market Makers _ Amazon S3.txt,"GSR launched several projects to improve its infrastructure with the help of dedicated AWS account managers and technical teams. The goal was to ensure scalability and fast network connections. “There was a lot of trading happening, and a lot of new liquidity coming into these markets,” says Matteo Cerutti, head of trading platform at GSR. “There was just this massive influx of interest into the sector. I think that started this next wave of the market.”

The addition of three new AWS Availability Zones allows GSR to provide faster regional connections to exchanges. “For trading at very high speed, that kind of connectivity is very useful,” says Cerutti. For services in the New York area, GSR uses an AWS Local Zone, which places AWS compute, storage, and database services close to large population centers. This means that GSR can run applications with single-digit-millisecond latency.

Automated trading models help GSR rapidly manage transactions, enabling it to handle more than 1.1 million trades a day and, at times, over 100 million daily orders. GSR supports these models by using Amazon Simple Storage Service (Amazon S3), an object storage service that lets it retrieve any amount of data from anywhere, and AWS Batch to run batch computing jobs at any scale. It manages data using Amazon Aurora, which is designed for high performance and availability at global scale with full MySQL and PostgreSQL compatibility. A sketch of a batch job submission follows.
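AWS Batch job submission is a small, well-defined API call, so a sketch can illustrate how a market-data job might be queued; the queue name, job definition, and environment variables below are hypothetical, not GSR's actual setup:

import boto3

batch = boto3.client("batch")

def submit_backtest(instrument: str, trade_date: str) -> str:
    """Queue one batch computing job for a given instrument and date."""
    response = batch.submit_job(
        jobName=f"backtest-{instrument}-{trade_date}",
        jobQueue="research-queue",             # hypothetical job queue
        jobDefinition="market-data-backtest",  # hypothetical job definition
        containerOverrides={
            "environment": [
                {"name": "INSTRUMENT", "value": instrument},
                {"name": "TRADE_DATE", "value": trade_date},
            ]
        },
    )
    return response["jobId"]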
Global crypto market maker GSR provides cryptocurrency exchanges, token issuers, financial institutions, and investors with critical liquidity services that buy and sell digital assets at scale. When the COVID-19 pandemic disrupted economies around the world and led many governments to respond with large financial stimulus programs, GSR saw fast-rising demand for cryptocurrency trading. It turned to Amazon Web Services (AWS) to increase the speed and scalability of its systems. Using AWS, GSR gained the elasticity it needed to serve its growing customer base, helping it expand the business and increase the size of its workforce by a factor of 5. It added support for more than 1,400 trading instruments and gained the ability to manage daily trading volumes that have at times reached values of over $5 billion. Benefits include:
• Rapidly generates automated trading models using large volumes of market data
• Supports more than 1.1 million daily trades, about 13 trades every second
• Enables rapid business growth
• Reduces costs using reserved capacity through AWS Reserved Instances

GSR has roots in traditional finance, with many of its executive team members coming from the likes of Goldman Sachs, Citadel, and Two Sigma. In addition to providing liquidity services, it also manages trade derivatives, supports over-the-counter trading, and creates custom-made trading algorithms. Cryptocurrency values can rise or fall fast, so trading depends on liquidity: the ability to buy or sell quickly before prices change. GSR, founded in 2013, has a global footprint and provides cryptocurrency token issuers, exchanges, financial institutions, and investors with that liquidity.

Support for More than 1.1 Million Daily Trades: 13 Every Second

Using these services, GSR’s research team can access the data it needs to analyze trading results. The trading team then develops automated strategies to monetize trading signals. “The main trading that we do is done programmatically on exchanges,” says Cerutti. “So, if you go to a crypto trading platform today, and you see the order book going 100 miles an hour, and then you put in your bid for Bitcoin or Ethereum or another crypto asset, we’re there to sell it to you and we’re also there to buy it from you. We’re both sides of that order book. And we’re doing that at scale.”

Using AWS, GSR has grown rapidly as it expands its market capabilities. In May 2021, it had around 60 employees. Today, it has 300. The elasticity it has gained by using AWS has helped it scale up services fast to meet rising customer demand, paving the way for the company to expand in size. “It’s definitely been challenging when you scale that quickly,” says Cerutti. “But using AWS, we can integrate the elasticity into our day-to-day, and that makes it a lot easier.” GSR also optimizes costs by working with AWS solutions architects to use AWS Reserved Instances, which provide discounts of up to 75 percent compared to buying capacity on demand.
A Need to Meet the Rapid Increase in Trading Demands

During the COVID-19 pandemic, GSR saw a rapid rise in demand for cryptocurrency trading as many governments created large financial stimulus programs and large investors put more money into digital assets. The company needed to update its IT systems quickly to meet that rise in demand. It had already been using AWS, so it looked for opportunities to expand its use. Building on GSR’s existing foundation on AWS, the provider was a natural choice. “AWS is very reliable,” Cerutti says. “I think it’s very hard to beat.”

Improving System Speed, Support, and Scalability on AWS

With its expanded use of AWS and its ability to automate trading models for fast transactions, GSR can support more daily trades and types of transactions. It now trades more than 1,400 trading instruments (for example, Bitcoin to Ethereum or Ethereum to USD), which let customers conduct transactions in many different currency combinations. At the market’s height in 2021, it handled more than 1.1 million daily trades, about 13 trades every second.

Although the crypto market has since seen a downturn, Cerutti believes GSR has a promising future thanks to its scalable, responsive IT foundation built using AWS. “We don’t want to scale back any infrastructure over the next few months, even if the market is quiet,” he says. “We expect that it’s only going in one direction in the long term, now that we have that foundation built.”

About GSR

GSR has 9 years of deep crypto market expertise as an ecosystem partner and active, multi-stage investor. It sources and provides spot and non-linear liquidity in digital assets for token issuers, institutional investors, and leading cryptocurrency exchanges. Its trading technology is connected to 60 trading venues, and GSR employs 300 people around the globe." Helen of Troy Case Study _ Consumer Packaged Goods _ AWS.txt,"
Getting the Most Out of Temperature Data with the Braun Family Care™ App Built Using AWS IoT Core with Helen of Troy

Global consumer products company Helen of Troy has years of experience producing quality physical products across many well-recognized and widely trusted brands, including Braun1, Vicks1, and Honeywell2. To stay on the cutting edge and best meet customer needs, the company saw an opportunity to add a digital experience to the physical products within its Beauty and Wellness division.

Opportunity | Using AWS Professional Services to Build a Digital Solution that Improves the Customer Experience for the Life of the Product for Helen of Troy

Founded in 1968 as a beauty products company, Helen of Troy has grown to offer durables from its Beauty and Wellness and Home and Outdoor divisions. In the fall of 2020, the company’s Beauty and Wellness team set out to develop the connected devices framework (CDF) and the first connected experience: the Braun Family Care app for the Braun ThermoScan®3 7 Connect thermometer. Helen of Troy’s goals were to help families get value out of temperature data with features like age-appropriate recommendations for fever care and to use a cloud infrastructure to continue updating the software and enhancing features for the life of the product. The framework needed to be scalable so that Helen of Troy could quickly and simply launch more connected experiences across its brands. Helen of Troy compared other cloud providers and chose AWS because of its expertise and flexibility to meet immediate needs, with room to expand for future initiatives. With experience using AWS services for IoT projects since 2018, Helen of Troy trusted AWS to deliver a quality solution that maintains high security for sensitive health data.

Solution | Analyzing Data to Improve Products and Increase Innovation Using AWS IoT Core

Connected devices also bring value by providing feedback to help improve a product. Customers expect products like thermometers to be accurate, and the perception of accuracy often determines whether a customer has a positive experience. By analyzing data collected using AWS IoT Core, Helen of Troy plans to identify readings that a customer might perceive as inaccurate, such as an outlier reading when a customer takes several readings in the same day. Helen of Troy will also be able to see when a customer enters a temperature manually in the app and can compare the value to readings received through the CDF. “Using data collected from our solution built on AWS, we can see if the customer experience is degrading and proactively fix issues before a user complains,” says Jim Gorsich, associate director of engineering at Helen of Troy.

Helen of Troy Reference Architecture (diagram)

Outcome | Expanding the Framework Using AWS IoT Core to More Products and Locales

The IoT infrastructure also facilitates the agile rollout of new products and scaling up of required services, which is important because Helen of Troy can use its worldwide presence to provide intelligent healthcare, wellness, and home comfort products. With support from AWS Professional Services, Helen of Troy is working toward releasing the Braun Family Care app in the European Union, which requires an application to the Medical Device Regulation and compliance with the General Data Protection Regulation.
“We’ve benefited greatly from the expertise of AWS Professional Services while pursuing compliance with the General Data Protection Regulation,” says Gorsich. “That team has been invaluable and is leveling up our knowledge in a very tricky field.” Helen of Troy can also release software updates remotely to continue improving the experience after a customer takes a product home.

As part of its international expansion plan, Helen of Troy intends to launch the Braun Family Care app and Braun ThermoScan 7 Connect thermometer in the European Union next. Helen of Troy plans to expand the CDF framework to more products, using the foundation that it built alongside AWS Professional Services to make the apps simpler and faster to implement. “As we create more connected products, release updates to continue improving software, and continue to grow the user base, we expect to see more cost savings,” says Gorsich. “We know that going fully cloud based from the start will pay dividends in the end.”

Motivated by a vision to help customers track and use data collected from smart devices, Helen of Troy looked to Amazon Web Services (AWS) for assistance designing and implementing the CDF, its Internet of Things (IoT) solution. Helen of Troy and its customers both benefit from this innovative solution: customers are more engaged with products and can use advanced features, and Helen of Troy receives feedback in near real time for troubleshooting and product-improvement initiatives. Benefits include:
• Provides value to customers with a connected device app that makes it simpler to track and understand thermometer data
• Built a scalable framework that can expand to additional locales and products
• Analyzes real customer data to improve products
• Uses data analysis to drive innovation and guide feature development

With a cloud infrastructure, Helen of Troy collects and delivers insights to customers in near real time, helping them understand the data to make informed decisions. For example, the Braun Family Care app serves as a centralized place for all household members to track temperature data, regardless of who took a reading. To access this data, Helen of Troy uses several AWS services, such as Amazon Simple Storage Service (Amazon S3), object storage built to retrieve any amount of data from anywhere. “Using AWS IoT Core and the CDF, we didn’t need to build, stitch together, and manage as much ourselves,” says Uwe Meding, senior IoT architect at Helen of Troy. “Reducing the development time and complexity required to build and maintain IoT-scale systems was really important for us.”

About Helen of Troy

Global consumer products company Helen of Troy began in 1968 as a family business for beauty products and has grown to offer durables in various industries. Its Beauty and Wellness division provides beauty, healthcare, and home comfort products. To continually innovate and improve the customer experience, Helen of Troy collects data from Bluetooth-capable customer devices using AWS IoT Core, which organizations use to easily and securely connect billions of IoT devices to the cloud. A sketch of this ingestion step follows.
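The article doesn't publish the CDF's ingestion code, but publishing a reading to an AWS IoT Core MQTT topic is the canonical entry point for this kind of telemetry. A minimal sketch using the IoT data plane API; the topic name and payload fields are hypothetical:

import json
import boto3

iot_data = boto3.client("iot-data")

def publish_reading(device_id: str, member_id: str, temperature_c: float) -> None:
    """Publish one thermometer reading to an AWS IoT Core topic."""
    iot_data.publish(
        topic=f"familycare/{device_id}/readings",  # hypothetical topic
        qos=1,  # at-least-once delivery
        payload=json.dumps({
            "deviceId": device_id,
            "memberId": member_id,
            "temperatureC": temperature_c,
        }),
    )

From there, AWS IoT rules can route readings to storage such as Amazon S3 for the near-real-time analysis the article describes.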
From the beginning, Helen of Troy engaged AWS Professional Services, which organizations use to achieve desired business outcomes when using AWS, to help design the digital solution and provide guidance around regulatory compliance. “The AWS Professional Services team had expertise, which gave us the confidence to complete the project the right way the first time,” says Edwin De Leon, director of engineering at Helen of Troy. “There was a sense of collaboration from the start.” In early 2022, Helen of Troy launched the Braun Family Care app in the United States.

Helen of Troy uses data analysis to determine which features of smart devices are most useful to customers, helping the company invest resources effectively. The CDF that the company built on AWS facilitates innovation by providing data to guide advanced feature development. For example, because the company collects temperature data across the United States, Helen of Troy is looking to notify parents if illnesses are increasing in a geographic area, which could affect behavior and reduce disease transmission. “Using data in the app, we can answer questions that we have but can’t design consumer product testing around,” says Rich Thrush, vice president of design and innovation at Helen of Troy. “Insights from our solution built using AWS IoT Core drive the new products that we create, how we think about innovation, and the decisions that we make in the near term.”

1 Certain trademarks used under license from The Procter & Gamble Company or its affiliates. 2 Honeywell is a trademark of Honeywell International Inc., used under license by Helen of Troy Limited. 3 ThermoScan is a registered trademark of Helen of Troy Limited and/or its affiliates." Help Customers Reduce Data Query Time by 70 and Improve Business Insights Capabilities with Amazon OpenSearch Service _ Deputy Case Study _ AWS.txt,"Deputy Helps Customers Reduce Data Query Time by 70% and Improve Business Insights Capabilities with Amazon OpenSearch Service

Deputy uses AWS to drive 70 percent faster data request times for customers, scale to support hundreds of millions of data points, save time by eliminating management and maintenance, and lower costs. Deputy, based in Australia, provides software that automates scheduling and facilitates workforce management for global customers.

Opportunity | Improving Application Performance with Amazon OpenSearch Service

Deputy initially evaluated several database solutions alongside Amazon OpenSearch Service, performing a query use-case analysis to compare performance, and found the service to be the fastest and most flexible. Plus, as a fully managed service, it lets the business focus on its applications instead of scaling infrastructure.

Once it made its decision, Deputy began using a single index with routing keys and filters to achieve a multi-tenant architecture within Amazon OpenSearch Service. The company also built a data pipeline based on Amazon Kinesis Data Streams and AWS Lambda. Deputy’s AWS-based data pipeline provides the ability to quickly scale up or down based on demand at specific times, supporting hundreds of millions of data points for each customer.
“We documented our access patterns and ensured we had a query to match each pattern across different services. We then ran the queries manually to record the timings,” says Jack Marchant, technical lead at Deputy. “We already use a lot of AWS services and were also planning to build a data pipeline on AWS, so it made sense to integrate everything using AWS,” Marchant adds.

Deputy is also able to develop new software features due to the improved data efficiency of Business Insights. “Amazon OpenSearch Service provides more flexibility in terms of data retrieval and eliminates the performance bottleneck. As a result, we’ve been able to release new features on top of the application,” says Marchant. “With better performance, we can view multiple weeks of data at once in a summarized format, aggregated to the week, within a six-month timeframe. This allows our customers to analyze trends and compare week or month totals, something that was not possible before Amazon OpenSearch Service.”

Deputy also anticipates cost savings from implementing Amazon OpenSearch Service, a fully managed service that makes it easy to perform interactive log analytics, real-time application monitoring, and website search. “Amazon OpenSearch Service was around three times less expensive than the other solutions we evaluated, and we’ve removed the need for an engineer to maintain our infrastructure on a daily basis,” says Caesar Li, senior product manager at Deputy. He continues, “Amazon OpenSearch Service has helped us increase the performance of Business Insights. We look forward to leveraging AWS to help maximize what our customers get out of their data.” Adds Rajini Carpenter, Deputy’s vice president of engineering, “This is just the beginning. We have so many other use cases to solve with Amazon OpenSearch Service, especially unlocking predictive analytics and ML capabilities for scheduling.”

Solution | Driving 70% Faster Data Request Times

The application writes data to an Amazon Kinesis data stream, triggering an AWS Lambda function that pushes each record into the correct cluster for the customer. This helps Deputy further improve performance via more efficient batch processing and simple scaling depending on traffic volume. A sketch of this indexing step follows.
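Deputy's exact function isn't published, but the described flow maps directly onto a Kinesis-triggered Lambda that indexes into a shared OpenSearch index with a tenant routing key. A minimal sketch using the opensearch-py client with SigV4 signing; the domain endpoint, index name, and record fields are hypothetical:

import base64
import json

import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, "us-east-1")

client = OpenSearch(
    hosts=[{"host": "search-example.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=auth,
    use_ssl=True,
    connection_class=RequestsHttpConnection,
)

def handler(event, context):
    """Index each Kinesis record into the shared multi-tenant index."""
    for record in event["Records"]:
        doc = json.loads(base64.b64decode(record["kinesis"]["data"]))
        client.index(
            index="business-insights",  # hypothetical shared index
            id=doc["id"],
            body=doc,
            routing=doc["tenant_id"],   # routing key keeps tenants together
        )

Routing each tenant's documents by a tenant ID places them on the same shard, so tenant-scoped queries that filter on that key touch fewer shards.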
By relying on Amazon OpenSearch Service for data querying, Deputy is experiencing 70 percent faster overall request times for data-powered Business Insights. “For some of our larger customers, data queries that took minutes to complete now take just seconds using Amazon OpenSearch Service, so they’re not sitting there waiting for the screen to load,” says Marchant. With reduced request times, Deputy customers can quickly analyze business performance by checking updated metrics across multiple stores or regions.

Outcome | Saving Time and Money by Eliminating Management and Maintenance

Because Amazon OpenSearch Service is fully managed, Deputy has eliminated the time it previously spent managing and maintaining the Business Insights application environment. “Our engineers are no longer resizing MySQL instance clusters just to cope with slow queries or new demands,” says Marchant. Benefits include:
• Drives 70% faster data request times for customers
• Scales to support hundreds of millions of data points
• Saves time by eliminating management and maintenance
• Reduces operational costs
• Develops new software features due to improved data efficiency

More than 320,000 workplaces and 1.3 million shift workers in over 100 countries use Deputy software to automate scheduling and facilitate workforce management. Many of these customers, including Fortune 500 companies, use Deputy’s Business Insights Dashboard to access analytical data about their organization. “Business Insights uses historical information to forecast projected future sales, allowing customers to make smarter, data-driven scheduling decisions,” says Li. The tool integrates point-of-sale data with wage and shift data, with up to 4,000 monthly active users depending on these combined data sets to streamline their scheduling. However, the solution’s MySQL-based database struggled to scale as the business experienced rapid growth and data sets expanded to millions of records, resulting in customers reporting delays in page load times.
“Our customers experienced slow page loading times for their data, sometimes waiting 2 minutes for queries to complete,” says Marchant. This delay was unacceptable for customers requiring fast data snapshots. “Our customers benefit from viewing all their data in one place so they can reduce labor-intensive, manual scheduling processes. They need to visualize their weekly data in under 30 seconds to update their employee work schedules,” Marchant says.

Deputy often had to manually intervene by vertically scaling MySQL clusters to support increased data volumes, but it needed an efficient solution to address the problem. “We wanted to identify a new database solution that could scale easily and query data in real time. We also needed to improve page load times and help our customers take in more data,” says Li. Having launched its business on Amazon Web Services (AWS), Deputy was interested in expanding its AWS environment by implementing Amazon OpenSearch Service.

About Deputy

Deputy is on a mission to Simplify Shift Work™ for millions of workers and businesses worldwide. The company streamlines scheduling, timesheets, tasks, and communication for business owners and their workers, providing millions of shift workers with more flexibility and control over their schedules. To learn more, visit aws.amazon.com/opensearch-service/, or read about Deputy demand forecasting at https://www.deputy.com/features/demand-forecasting." Helping Customers Modernize Their Cloud Infrastructure Using the AWS Well-Architected Framework with Comprinno _ Comprinno Technologies Case Study _ AWS.txt,"Helping Customers Modernize Their Cloud Infrastructure Using the AWS Well-Architected Framework with Comprinno

Professional services startup Comprinno Technologies (Comprinno) excels in cloud orchestration and management, but the company wanted to grow its business by gaining sales experience and providing a more standardized process for customers. To achieve this goal, Comprinno looked to Amazon Web Services (AWS) and the AWS Well-Architected Framework, which helps cloud architects learn, measure, and build using architectural best practices. By adopting the AWS Well-Architected Framework during the sales process, Comprinno provides a standardized experience for customers, identifies and resolves blind spots, and builds trust that leads to business growth.

About Comprinno Technologies

Comprinno is a cloud consulting and professional services startup based in India. Its software-as-a-service brand, Tevico, provides artificial intelligence to automate processes for its customers’ AWS solutions.

Opportunity | Using the AWS Well-Architected Framework to Standardize the Customer Experience for Comprinno

Established by a team of technical experts, Comprinno recognized that it needed business expertise to improve and standardize its sales process. Previously, its solutions architects independently determined the direction of presale conversations. This strategy was effective because of the company’s experienced staff, but it didn’t provide a consistent customer journey. The company also sought standardization to reach a wider audience instead of exclusively building and selling custom solutions. In 2019, Comprinno became an AWS Software Partner, a path for organizations that develop software to run on or alongside AWS services. As part of the process, the company performed an AWS Well-Architected review to evaluate its Tevico solution. After going through the review, Comprinno knew the framework would be a good tool for its customers as well.

Comprinno works primarily with customers in the startup sector. Although its customers span multiple industries, they often have shared needs and a common goal of cloud infrastructure modernization. Comprinno needs standardization to be fast and accurate so that it can quickly reach a mutual understanding with its customers about next steps. To accomplish that goal, Comprinno developed content explaining principles from the AWS Well-Architected Framework and made it available in Tevico.
In April 2021, Comprinno was accepted into the AWS Well-Architected Partner Program, which helps organizations establish good architectural habits, reduce risks, and build robust applications. Comprinno underwent extensive training and boot camps to be equipped to provide exceptional AWS Well-Architected reviews for its clients. "As a startup, Comprinno benefits from the experienced framework that AWS offers to facilitate building our business better," says Prasad Puranik, CEO of Comprinno. "This framework has been helpful in enhancing our business maturity by teaching us how to build sales, customer relationships, and technical solutions."

Solution | Providing Direction, Building Trust, and Generating More than 50% of Its Revenue with Loyal Repeat Business Using the AWS Well-Architected Framework

Multiple users can contribute information about customer needs in Tevico, which facilitates collaboration and engagement. When customers engage with Tevico, Comprinno can better understand their business needs and build better solutions, translating those needs into actionable technical requirements. For example, when an ecommerce company approaches Comprinno with the goal of increasing conversions, Comprinno uses Tevico and the structure of the AWS Well-Architected Framework to capture data about the company's existing infrastructure, amount of traffic, and other business requirements. Then, Comprinno offers consultations to identify technical changes that the customer can make to scale in the cloud, refactor code, and adopt additional AWS services if needed. "Using the AWS Well-Architected Framework, our customers get a well-ordered and structured way of understanding their solution implementation," says Puranik. "This structured approach is a big win both for the customer, because they are guided in the right direction, and for us, because we have a path to accomplish the customer's goals." Using this structure to provide value to customers, Comprinno increased its number of launched opportunities in 2022 by two and a half times, with a 71 percent conversion rate of qualified opportunities into launched opportunities.

Known for working in varied and highly regulated industries, like financial technology and healthcare, Comprinno has a lot of expertise to offer its customers. Using Tevico and the AWS Well-Architected Framework, Comprinno can clearly present best practices and identify blind spots that the customer might have. "Because we work with customers across multiple industries and have seen a wide range of setups, we can share lessons and help customers identify their blind spots faster using the AWS Well-Architected Framework," says Puranik. For example, Comprinno can use the framework to present security best practices alongside compelling case studies about what happens if best practices aren't followed.

Comprinno also uses the AWS Well-Architected Framework to build trust, which has helped the company retain customers and increase business. An estimated 55–60 percent of its revenue comes from existing customers through value-added and managed-services contracts, which feature an annual AWS Well-Architected review. "The AWS Well-Architected Framework acts like an icebreaker and helps our customers see the efficacy of our solutions architects, how thoughtful their suggestions are, and how insightful the conversation is," says Puranik. "Those components become the cornerstone of building trust and show why a customer would want to work with Comprinno for subsequent engagements." One of Comprinno's customers, a large company in the wearable and hearable technology industry in India, has continued the relationship after the success of its initial project. "We did an AWS Well-Architected review with the customer and helped them optimize costs," says Puranik. "Now, we are engaged with them for application modernization to further reduce costs by redesigning their existing architecture."
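The case study doesn't describe how Tevico is implemented, but the AWS Well-Architected Tool that underpins reviews like these is scriptable. As a minimal, hypothetical sketch (the workload name is a placeholder), a partner could pull high-risk improvement items from a review with boto3:

# Minimal sketch: reading improvement items from an AWS Well-Architected review.
# The workload name prefix below is a hypothetical placeholder.
import boto3

wa = boto3.client("wellarchitected")

# Find a workload that was previously defined in the Well-Architected Tool.
workloads = wa.list_workloads(WorkloadNamePrefix="example-workload")
workload_id = workloads["WorkloadSummaries"][0]["WorkloadId"]

# List improvement items from the standard Well-Architected lens review.
improvements = wa.list_lens_review_improvements(
    WorkloadId=workload_id,
    LensAlias="wellarchitected",
)
for item in improvements["ImprovementSummaries"]:
    print(item["QuestionTitle"], "->", item.get("Risk"))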
Outcome | Expanding to Use the AWS Well-Architected Framework for Additional Business Sectors

Comprinno strives to continue learning and investing deeper in all the pillars of the AWS Well-Architected Framework. The company also plans to expand its reach to bring in more revenue from small and medium-sized businesses while retaining its influence in the startup sector. To cater to small and medium-sized businesses, Comprinno plans to develop more packaged solutions that are quick to deploy. "Without the AWS Well-Architected Framework, we wouldn't have been as successful," says Puranik. "We learned that you need to be good at solving problems and good at doing business. The teams at AWS have provided us with good business guidance over the past several years."

About Comprinno Technologies

Founded in 2013, Comprinno is a cloud consulting and professional services company based in India that serves over 500 customers. Its software-as-a-service brand, Tevico, provides an artificial intelligence layer on top of AWS solutions to help customers automate processes, detect anomalies, and repair issues automatically."

Helping Doctors Treat Pediatric Cancer Using AWS Serverless Services _ Nationwide Childrens Hospital Case Study _ AWS.txt,"Helping Doctors Treat Pediatric Cancer Using AWS Serverless Services with Nationwide Children's Hospital

The Steve and Cindy Rasmussen Institute for Genomic Medicine at Nationwide Children's Hospital is analyzing critical genomics data for pediatric cancer patients at scale using AWS serverless solutions. "Using AWS serverless solutions, we can focus not on the upkeep of technology but on the output of the science," says Grant Lammi, cloud development manager at the institute. Using these solutions, the institute is turning cancer samples from pediatric patients into valuable data.
In spring 2021, the Steve and Cindy Rasmussen Institute for Genomic Medicine (IGM) at Nationwide Children's Hospital (NCH) entered into an agreement with the National Cancer Institute and the Children's Oncology Group to perform molecular characterization for all children living with cancer in the United States. For the pediatric teaching hospital, the project would be a major undertaking: to perform this advanced genomics testing, it would need to process massive amounts of data in a highly secure and scalable environment. As part of its ongoing journey to the cloud on Amazon Web Services (AWS), NCH looked to adopt serverless solutions to handle these genomics testing pipelines. Key results include:

Faster analyses of cancer samples
Saves time through automation
Protects sensitive patient data
Scales to analyze genomics data for pediatric cancer patients
Facilitates 24/7 analysis to deliver critical data to doctors

Opportunity | Using AWS Serverless Services to Analyze Cancer Samples for Nationwide Children's Hospital

Based in Columbus, Ohio, NCH is one of the largest pediatric hospitals in the United States. The IGM at NCH specializes in genomics data generation and analysis, using blood and cancer samples to help physicians better treat pediatric patients. The IGM handles 6–7 PB of genomics data, which increases by 1–2 PB every year, and it migrated from an on-premises environment to the AWS Cloud in 2017. "We couldn't keep up with our goals by doing everything on premises," says Lammi. "We needed a solution where we could have more elastic compute and storage, so we migrated to the cloud."

By 2021, the IGM had already begun using AWS Step Functions, a visual workflow service, so when the National Cancer Institute and the Children's Oncology Group approached it that year, the institute was in a strong position to handle the compute-intensive molecular-characterization project. "We would need to sequence the genomes of essentially all kids with cancer in the United States to see if they qualified for clinical trials that could treat them," says Lammi. "On AWS, we were able to scale from our internal research protocol to handle cases from all over the country in about 12 months."

Solution | Saving More Time with Automated Genomics Pipelines on AWS

NCH automated complex analyses of cancer samples using a wide variety of AWS serverless services, using AWS Step Functions to model its laboratory procedures and automate pipeline-based, step-by-step processes. The IGM runs multiple analysis jobs concurrently using AWS Batch, which enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch and machine learning computing jobs on AWS. The hospital uses Amazon EventBridge, a serverless event bus, to emit events throughout the workflow and track the progress of each cancer sample as it travels through primary, secondary, and tertiary analyses. "Because the sequencing workflows are activated by Amazon EventBridge events, they're all automated," says Lammi. "There's no manual intervention needed beyond kicking things off in the lab."
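The institute's event schema isn't published in the case study; the following is a minimal, hypothetical sketch of how a pipeline stage could emit a sample-progress event to Amazon EventBridge with boto3. The source name, detail type, and payload fields are illustrative.

# Minimal sketch: emitting a workflow-progress event to Amazon EventBridge.
# The source, detail type, and payload fields are hypothetical.
import json
import boto3

events = boto3.client("events")

events.put_events(
    Entries=[
        {
            "Source": "igm.sequencing",  # hypothetical event source
            "DetailType": "SampleAnalysisStageChanged",
            "Detail": json.dumps({
                "sampleId": "SAMPLE-0001",
                "stage": "secondary",  # primary | secondary | tertiary
                "status": "STARTED",
            }),
        }
    ]
)

An EventBridge rule matching events like these could then start the next stage of the workflow or record progress, which is how an event-driven pipeline avoids manual hand-offs.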
The workflow's tracking data is then stored in Amazon DynamoDB, a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at virtually any scale. After the samples are run through the sequencing workflows, an expert interprets the results and prepares two reports: one that provides deidentified results to the researchers at the National Cancer Institute and one that helps doctors determine the best course of treatment for their patients. Researchers from the National Cancer Institute access this information through a dedicated bucket on Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. The hospital uses AWS Identity and Access Management (AWS IAM), a service that securely manages identities and access to AWS services and resources, to protect patient data throughout the pipeline and prevent unauthorized users from accessing sensitive health information.

Using AWS serverless services, the IGM has saved significant time through automation. "We've automatically bought ourselves extra time," says Lammi. "By the time we get the data out of the lab and synced up, we're looking at maybe 1 day to process the genome and for the results to be ready for review." Because it no longer needs to manage each step in the sequencing workflow manually, the hospital has reduced the risk of human error and can analyze cancer samples 24 hours per day. The IGM can now focus its time on writing scientific software to improve patient outcomes rather than managing infrastructure. Because the hospital analyzes cancer samples at a faster pace, it can deliver important data to physicians and help pediatric patients get the care that they need. "There are actual kids who need the results that we are generating, and they need them as quickly as possible," says Lammi. "The faster that we can get the report into a doctor's hands, the better off the kid will be. That's what drives everything for us."
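The IGM runs analysis jobs concurrently on AWS Batch, as noted above. Its job queues and definitions aren't public, so this fan-out sketch with boto3 is hypothetical in its names and parameters.

# Minimal sketch: submitting concurrent per-sample analysis jobs to AWS Batch.
# The job queue, job definition, and sample IDs are hypothetical.
import boto3

batch = boto3.client("batch")

sample_ids = ["SAMPLE-0001", "SAMPLE-0002", "SAMPLE-0003"]
for sample_id in sample_ids:
    batch.submit_job(
        jobName=f"genome-analysis-{sample_id}",
        jobQueue="genomics-queue",
        jobDefinition="genome-analysis:1",
        containerOverrides={
            "environment": [{"name": "SAMPLE_ID", "value": sample_id}]
        },
    )

Batch queues the jobs and scales the underlying compute, so the number of samples in flight doesn't need to be known in advance.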
Outcome | Improving the Treatment and Diagnosis of Children with Cancer

Using serverless solutions from AWS, the IGM can move much faster than traditional hospital developers. It can quickly analyze cancer samples from pediatric patients to recommend treatment, supporting stronger patient outcomes. Additionally, the IGM scales without worrying about compute capacity; it is now confident that it can handle its workflows regardless of how many tests it needs to run. In the future, NCH plans on expanding the solution to other programs, such as the diagnosis and treatment of epilepsy and rare genetic diseases. "Using AWS serverless solutions, we can focus not on the upkeep of technology but on the output of the science," says Lammi. "We can focus on improving the lives of kids everywhere."

About Nationwide Children's Hospital

Nationwide Children's Hospital, an academic pediatric medical center, is one of the largest pediatric hospitals in the United States. It brings advanced clinical genomics capabilities to patients to help select the best care pathways."

Helping Fintech Startup Snoop Deploy Quickly and Scale Using Amazon ECS with AWS Fargate _ Case Study _ AWS.txt,"Helping Fintech Startup Snoop Deploy Quickly and Scale Using Amazon ECS with AWS Fargate

Snoop, a cloud-native fintech startup, wanted to harness the United Kingdom's system of open banking and develop an app to help users control their finances. To achieve this, the company had to scale up rapidly, from zero to millions of daily open banking transactions, with uninterrupted availability. The small team of cofounders looked to Amazon Web Services (AWS) to provide the infrastructure needed to bring their vision to life. Snoop uses Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service that facilitates deploying, managing, and scaling containerized applications. Using Amazon ECS with AWS Fargate, a serverless, pay-as-you-go compute engine, Snoop gives users hyperpersonalized insights in seconds. Using AWS, Snoop can deploy containerized apps quickly, scale efficiently, and spend more time focusing on its mission of helping customers cut the cost of living. Key results include:

Zero to one billion transactions in 2 years
Savings of £1500 per year potential for customers
Scaled significantly with optimized costs
Enhanced staff productivity
Reduced overhead

Opportunity | Using AWS to Take Insights a Step Further for Snoop

Founded in 2019 and launched in April 2020, Snoop saw an opportunity in open banking in the United Kingdom. When open banking started in 2018, the country's largest banks began sharing data in a secure, standardized form.
In response, Snoop created its own cloud-based app that uses open banking data to empower users. Customers can access their accounts in one place and receive additional insights into their account activities. Going all in on AWS, Snoop built its architecture to easily scale to a billion banking transactions and grow rapidly while maintaining the security and performance users expect. "We've found that, on average, if customers take the actions we propose, they can save up to £1500 per year," says Jem Walters, chief technology officer at Snoop. Snoop offers users privacy and security as well as performance and availability. "Making sure the solution performs as we grow is key to building trust and building a powerful brand," Walters adds.

Turning insights into a useful app takes time, expertise, and compute power. Born in the cloud, Snoop was a startup that had to work without the large teams and budgets that established companies enjoy. With lean resources, the cofounders looked to AWS. They knew from prior experience that AWS had solutions for hastening the time to market of scalable apps. And using AWS Activate, Snoop accessed tools, resources, content, and expert support to accelerate the startup. "It was a straightforward decision to use AWS," says Walters. "We're really pleased that using its services supported us in building Snoop the way that we wanted."

Solution | Building an App that Scales from Zero to One Billion Transactions in 2 Years

Snoop offers customizable features, like the social media–style "Snoop Feed," emails, event-driven alerts, and more. When customers join Snoop, they give their name, email, and phone number, along with secure access to their accounts through Open Banking APIs. Snoop gathers over 300 data points from their transactions, and then its artificial intelligence and machine learning engines kick in. Snoop's recurring payments engine shows customers where their money goes. Its recommendation engine offers timely content to help them make better financial decisions. For example, the app might tell a user they're autopaying for a subscription they'd forgotten all about, or a user might learn that they have better options for car insurance plans.

Snoop uses Amazon ECS with AWS Fargate to build resilient applications without having to manage its own infrastructure. This includes AWS Fargate Spot, which can run interruption-tolerant Amazon ECS tasks at savings of up to 70–90 percent off on-demand pricing. "All of our Amazon ECS instances use AWS Fargate, which takes off a huge piece of overhead. As a fast-scaling startup, that's exactly what we need," says Jamie West, senior DevSecOps engineer at Snoop.
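Snoop's task definitions aren't shown in the case study; a minimal sketch of launching one containerized task on Fargate with boto3 follows. The cluster, task definition, and subnet are hypothetical placeholders.

# Minimal sketch: running a containerized task on Amazon ECS with AWS Fargate.
# The cluster, task definition, and subnet ID are hypothetical.
import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="snoop-example-cluster",
    launchType="FARGATE",  # interruption-tolerant work could instead use a
                           # capacityProviderStrategy with FARGATE_SPOT
    taskDefinition="insights-engine:1",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
)

Because Fargate provisions the compute for each task, there are no container instances to patch or resize, which is the overhead West refers to.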
Using AWS solutions, Snoop can handle the massive task of interface and traffic management, making it possible for a few engineers to accomplish a lot. Rather than creating a monolithic application, Snoop's developers can treat software applications as independent parts, streamlining their tasks. Using AWS Cloud Map, a cloud resource discovery service, Snoop constantly checks the dynamic environment to keep resource locations up to date. Snoop builds resilience and scalability into the program using AWS Lambda, a serverless, event-driven compute service used to run code for virtually any type of application or backend service without provisioning or managing infrastructure. Snoop uses AWS Lambda for asynchronous integrations, in which the function code hands off to AWS Lambda, which places the user request in a queue and returns a successful response; a separate process reads events from the queue and sends them to the function. Snoop uses Amazon API Gateway, a service that makes it simple for developers to create, publish, monitor, and secure APIs at virtually any scale, as the "front door" of its applications. Tying it all together is AWS App Mesh, which provides application-level networking so services can communicate across multiple types of compute infrastructure.

"Performance is everything, and when something isn't right, we fix it, and fix it fast," says Andy Makings, head of DevSecOps at Snoop. This mindset makes it easier for Snoop to get processes in place from the start. Snoop's engineers can talk in near real time with AWS Startups, a service that helps companies get started, connect with other founders, and find resources to grow, to get quick assistance. "We've had some great support from the AWS Startups team along the way," says Walters.

Outcome | Putting Autoscaling to Work for Customers

Starting from zero in 2020 when it launched, Snoop has now had well over one million downloads, with 150,000–200,000 active monthly users. Using Amazon ECS with AWS Fargate to provision, manage, and orchestrate containers serverlessly means Snoop can continue to put customers first. "We have an ambitious and exciting growth and product development road map ahead of us," says Walters, "and AWS will be at the heart of everything we do." The company's innovation and customer service have already earned recognition. In 2021, the Banking Tech Awards declared Snoop the year's Best Open Banking Solution. More recently, Snoop won a "Rising Star" award from the AWS Software Startup Awards for being an early-stage startup that has demonstrated innovative tech solutions to support customers.

About Snoop

With an ambition to make everyone better off, Snoop is a fintech firm that helps people cut their bills, pay off debt, grow their savings, and save where they spend, all without changing banks."

Helping Patients Access Personalized Healthcare from Anywhere Using Amazon Chime SDK with Salesforce _ Salesforce Case Study _ AWS.txt,"Helping Patients Access Personalized Healthcare from Anywhere Using Amazon Chime SDK with Salesforce

Salesforce and AWS have many joint customers who use Salesforce to manage customer relationships and AWS for compute, storage, database, and other managed-service solutions. In June 2021, Salesforce and AWS announced plans to launch a series of new intelligent applications that combine AWS and Salesforce Customer 360. Joint customers can seamlessly deploy AWS voice, video, and artificial intelligence services natively within Salesforce business applications in a scalable way.
Salesforce wanted to help its life sciences customers improve healthcare access for patients, lower costs for services, and provide a connected, equitable experience. It also wanted to help healthcare teams garner a 360-degree view of patients to provide meaningful insights into health outcomes. Using Amazon Web Services (AWS), Salesforce built Salesforce Health Cloud: Virtual Care on AWS, which simplifies virtual appointments for patients and healthcare providers. The turnkey solution is built on Amazon Chime SDK, which provides embedded intelligent near-real-time communication capabilities. Using Amazon Chime SDK and other managed services from AWS, Salesforce built a scalable, agile telehealth solution that saves time for doctors and patients, provides more-personalized care, and helps remove barriers to healthcare.

Opportunity | Using AWS to Build a Telehealth Solution for Salesforce

In October 2022, Salesforce launched its first such application: Virtual Care. Virtual Care is built using AWS and functions within Salesforce Health Cloud, which serves as a centralized platform for clinical and nonclinical patient data. Salesforce wanted to deliver this more efficient care remotely at scale so that physicians could broadly improve health outcomes. The aim of Virtual Care was to remove friction from the healthcare experience by helping patients to overcome difficulties such as transportation, location, mobility, or limited appointment availability. "Our Virtual Care solution is a critical part of our vision to achieve whole-patient value and provide equitable care to patients and members," says Divya Daftari, senior director of product at Salesforce.

Solution | Improving Patient Engagement through Managed Solutions
Using AWS, Salesforce circumvented the heavy lifting that would have been required to build and maintain a video-calling solution from scratch. Patients self-schedule virtual appointments, coordinate previsit activities, and conduct virtual visits in a HIPAA-compliant environment. A patient's appointment request gets routed to Amazon Chime SDK, which lets builders add real-time voice, video, and messaging powered by machine learning into their applications. Clinicians then review a patient's intake form and correlate the patient to a Virtual Care session using Amazon Chime SDK messaging, which connects providers and patients with secure, scalable messaging in their web and mobile applications. The Amazon Chime SDK control plane sends event notifications through a default event bus to Amazon EventBridge, a serverless event bus that helps organizations receive, filter, transform, route, and deliver events. Healthcare professionals deliver care over the internet in near real time, which has significantly reduced no-shows for appointments. "Using Amazon Chime SDK, we don't have to worry about the mechanics of the video call," Daftari says. "We can focus on features and functions that help differentiate our product in the marketplace, while also significantly improving our speed to launch."

Salesforce further supports accessibility by embedding closed-captioning of video calls using Amazon Chime SDK live transcription. Amazon Chime SDK sends live audio streams to Amazon Transcribe, an automatic speech recognition service that makes it easy to add speech-to-text capabilities to any application, which automatically converts speech to text. Salesforce Health Cloud customers can use the live transcription capability to display subtitles, create meeting transcripts, or analyze content. Virtual Care goes a step further by incorporating Amazon Transcribe Medical, an automatic speech recognition service that makes it simple to add medical speech-to-text capabilities to voice applications.

The solution also builds in protections in the case of event delivery failure. Using Amazon EventBridge, Salesforce customers route events to a variety of targets, such as Amazon Simple Queue Service (Amazon SQS), which provides fully managed message queuing for microservices, distributed systems, and serverless applications. To monitor the Amazon SQS queue depth and send alerts when it exceeds the configured threshold, Salesforce Health Cloud uses Amazon CloudWatch, which collects and visualizes near-real-time logs, metrics, and event data in automated dashboards. An Amazon CloudWatch alarm initiates email notifications to stakeholders using Amazon Simple Notification Service (Amazon SNS), a fully managed service for application-to-application and application-to-person messaging. "It is critical that video visits are secure, responsive, and reliable," says Daftari. "Using AWS helps us provide all this in a performant and scalable way."
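Salesforce's integration code isn't public, but the meeting primitive it builds on is exposed through the Amazon Chime SDK meetings API. A minimal, hypothetical sketch of creating a session and adding one attendee with boto3 (the region, external meeting ID, and user ID are placeholders):

# Minimal sketch: creating a meeting and one attendee with Amazon Chime SDK.
# The region, external meeting ID, and user ID are hypothetical placeholders.
import uuid
import boto3

chime = boto3.client("chime-sdk-meetings", region_name="us-east-1")

meeting = chime.create_meeting(
    ClientRequestToken=str(uuid.uuid4()),
    MediaRegion="us-east-1",
    ExternalMeetingId="virtual-care-session-0001",
)

attendee = chime.create_attendee(
    MeetingId=meeting["Meeting"]["MeetingId"],
    ExternalUserId="patient-0001",
)

# The Meeting and Attendee payloads are returned to the client application,
# which joins the audio/video session using the Chime SDK client library.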
Outcome | Expanding Intelligent Features of Virtual Care

Beyond traditional use cases, Salesforce is adding capabilities in medication-therapy management, connectivity for care coordinators, and other approaches for patient engagement. The company is developing a new feature that will expand its support of Virtual Care sessions to multiple participants, instead of just clinician and patient, facilitating care-team coordination with multiple parties in a single meeting. The Virtual Care solution also serves as a model to optimize the use of Amazon Chime SDK in other Salesforce Industry Clouds: Salesforce plans to support remote sales and services sessions in a variety of industries, including automotive, manufacturing, retail, and wealth management. "Through AWS, we have trusted, scalable, performant services," Daftari says. "Using the technology has helped us innovate for our joint customers." In sum, the solution saves time for doctors and patients, provides more-personalized healthcare, removes barriers to healthcare, expands accessibility through live closed-captioning, and reduces appointment no-shows.

About Salesforce

Salesforce is one of the world's leading customer relationship management companies. It provides centralized management of the customer experience for the marketing, sales, commerce, service, and IT teams of more than 150,000 companies."

High-quality human feedback for your generative AI applications from Amazon SageMaker Ground Truth Plus _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

High-quality human feedback for your generative AI applications from Amazon SageMaker Ground Truth Plus

by Jesse Manders, Alex Williams, Jonathan Buck, Erran Li, Romi Datta, and Sarah Gao | on 30 MAY 2023 | in Amazon SageMaker, Amazon SageMaker Ground Truth, Artificial Intelligence, Foundational (100), Generative AI

Amazon SageMaker Ground Truth Plus helps you prepare high-quality training datasets by removing the undifferentiated heavy lifting associated with building data labeling applications and managing the labeling workforce. All you do is share data along with labeling requirements, and Ground Truth Plus sets up and manages your data labeling workflow based on these requirements. From there, an expert workforce that is trained on a variety of machine learning (ML) tasks labels your data. You don't even need deep ML expertise or knowledge of workflow design and quality management to use Ground Truth Plus. Now, Ground Truth Plus is serving customers who need data labeling and human feedback for fine-tuning foundation models for generative AI applications. In this post, you will learn about recent advancements in human feedback for generative AI available through SageMaker Ground Truth Plus. This includes new workflows and user interfaces (UIs) available for preparing demonstration datasets used in supervised fine-tuning, gathering high-quality human feedback to make preference datasets for aligning generative AI foundation models with human preferences, as well as customizing models to application builders' requirements for style, substance, and voice.

Challenges of getting started with generative AI

Generative AI applications around the world incorporate both single-mode and multi-modal foundation models to solve for many different use cases. Common among them are chatbots, image generators, and video generators. Large language models (LLMs) are being used in chatbots for creative pursuits, academic and personal assistants, business intelligence tools, and productivity tools. You can use text-to-image models to generate abstract or realistic AI art and marketing assets. Text-to-video models are being used to generate videos for art projects, highly engaging advertisements, video game development, and even film development. Two of the most important problems to solve for both model producers who create foundation models and application builders who use existing generative foundation models to build their own tools and applications are:

Fine-tuning these foundation models to be able to perform specific tasks
Aligning them with human preferences to ensure they output helpful, accurate, and harmless information

Foundation models are typically pre-trained on large corpora of unlabeled data, and therefore don't perform well following natural language instructions.
For an LLM, that means the model may be able to parse and generate language in general, but it may not be able to answer questions coherently or summarize text to a user's required quality. For example, when a user requests a summary of a text in a prompt, a model that hasn't been fine-tuned to summarize text may just recite the prompt text back to the user or respond with something irrelevant. If a user asks a question about a topic, the response from the model could just be a recitation of the question. For multi-modal models, such as text-to-image or text-to-video models, the models may output content unrelated to the prompt. For example, if a corporate graphic designer prompts a text-to-image model to create a new logo or an image for an advertisement, the model may not generate a relevant graphic if it has only a general concept of what an image is and what its elements are. In some cases, a model may output a harmful image or video, risking user confidence or company reputation.

Even if models are fine-tuned to perform specific tasks, they may not be aligned with human preferences with respect to the meaning, style, or substance of their output content. In an LLM, this could manifest as inaccurate or even harmful content being generated by the model. For example, a model that isn't aligned with human preferences through fine-tuning may output dangerous, unethical, or even illegal instructions when prompted by a user, because no care has been taken to constrain the content the model generates to be accurate, relevant, and useful. This misalignment can be a problem for companies that rely on generative AI models for their applications, such as chatbots and multimedia creation. For multi-modal models, it may take the form of toxic, dangerous, or abusive images or video being generated. This is a risk when prompts are input without the intention of generating sensitive content, and also when the model producer or application builder did not intend to allow the model to generate that kind of content, but it was generated anyway. To solve the issues of task-specific capability and alignment with human preferences, model producers and application builders must fine-tune the models using human-directed demonstrations and human feedback on model outputs.

Data and training types

There are several types of fine-tuning methods, using different types of labeled data, that are categorized as instruction tuning – teaching a model how to follow instructions. Among them are supervised fine-tuning (SFT) using demonstration data, and reinforcement learning from human feedback (RLHF) using preference data.

Demonstration data for supervised fine-tuning

To fine-tune foundation models to perform specific tasks such as answering questions or summarizing text with high quality, the models undergo SFT with demonstration data. The purpose of demonstration data is to guide the model by providing it with labeled examples (demonstrations) of tasks completed by humans. For example, to teach an LLM how to answer questions, a human annotator creates a labeled dataset of human-generated question and answer pairs to demonstrate how a question and answer interaction works linguistically and what the content means semantically. This kind of SFT trains the model to recognize the patterns of behavior demonstrated by the humans in the training data.
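The delivery format of a Ground Truth Plus dataset is agreed on per project and isn't specified in this post. Purely as an illustration, demonstration data for SFT is commonly represented as prompt-completion pairs, one JSON object per line:

# Purely illustrative: demonstration records as prompt-completion pairs in
# JSON Lines. This is a generic shape, not the Ground Truth Plus schema.
import json

demonstration_records = [
    {
        "prompt": "Summarize the following passage: ...",
        "completion": "The passage describes ...",
    },
    {
        "prompt": "Question: What does the passage say about enzymes?",
        "completion": "It explains that enzymes catalyze biochemical reactions.",
    },
]

with open("demonstrations.jsonl", "w") as f:
    for record in demonstration_records:
        f.write(json.dumps(record) + "\n")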
Model producers need to do this type of fine-tuning to show that their models are capable of performing such tasks for downstream adopters. Application builders who use existing foundation models for their generative AI applications may need to fine-tune those models on these tasks with industry-specific or company-specific demonstration data to improve the relevancy and accuracy of their applications.

Preference data for instruction tuning such as RLHF

To further align foundation models with human preferences, model producers, and especially application builders, need to generate preference datasets to perform instruction tuning. Preference data in the context of instruction tuning is labeled data that captures human feedback with respect to a set of options output by a generative foundation model. It typically involves rating or ranking several inferences from a foundation model, or comparing two inferences pairwise, according to some specific attribute. For LLMs, these attributes may be helpfulness, accuracy, and harmlessness. For text-to-image models, they may be aesthetic quality or text-image alignment. This preference data based on human feedback can then be used in various instruction tuning methods, including RLHF, to further fine-tune a model to align with human preferences. Instruction tuning using preference data plays a crucial role in enhancing the personalization and effectiveness of foundation models. It is a key step in building custom applications on top of pre-trained foundation models and a powerful method to ensure models are generating helpful, accurate, and harmless content.

A common example of instruction tuning is to instruct a chatbot to generate three responses to a query and have a human read and rank all three according to some specified dimension, such as toxicity, factual accuracy, or readability. For example, a company may use a chatbot for its marketing department and want to make sure that content is aligned to its brand message, doesn't exhibit biases, and is clearly readable. The company would prompt the chatbot during instruction tuning to produce three examples and have its internal experts select the ones that most align to its goal. Over time, the company builds a dataset used to teach the model, through reinforcement learning, what style of content humans prefer. This enables the chatbot application to output more relevant, readable, and safe content.

SageMaker Ground Truth Plus

Ground Truth Plus helps you address both challenges: generating demonstration datasets with task-specific capabilities, and gathering preference datasets from human feedback to align models with human preferences. You can request projects for LLMs and multi-modal models such as text-to-image and text-to-video. For LLMs, key demonstration datasets include generating questions and answers (Q&A), text summarization, text generation, and text reworking for the purposes of content moderation, style change, or length change. Key LLM preference datasets include ranking and classifying text outputs. For multi-modal models, key task types include captioning images or videos as well as logging timestamps of events in videos. Therefore, Ground Truth Plus can help both model producers and application builders on their generative AI journey.
In this post, we dive deeper into the human annotator and feedback journey for four cases covering both demonstration data and preference data for both LLMs and multi-modal models: question and answer pair generation and text ranking for LLMs, and image captioning and video captioning for multi-modal models.

Large language models

In this section, we discuss question and answer pairs and text ranking for LLMs, along with customizations you may want for your use case.

Question and answer pairs

The following screenshot shows a labeling UI in which a human annotator reads a text passage and generates both questions and answers in the process of building a Q&A demonstration dataset. Let's walk through a tour of the UI in the annotator's shoes. On the left side of the UI, the job requester's specific instructions are presented to the annotator. In this case, the annotator is supposed to read the passage of text presented in the center of the UI and create questions and answers based on the text. On the right side, the questions and answers that the annotator has written are shown. The text passage as well as the type, length, and number of questions and answers can all be customized by the job requester during project setup with the Ground Truth Plus team. In this case, the annotator has created a question that requires understanding the whole text passage to answer, marked with a References entire passage check box. The other two questions and answers are based on specific parts of the text passage, as shown by the annotator highlights with color-coded matching. Optionally, you may want to request that questions and answers are generated without a provided text passage, and provide other guidelines for human annotators; this is also supported by Ground Truth Plus. After the questions and answers are submitted, they can flow to an optional quality control loop workflow where other human reviewers confirm that the customer-defined distribution and types of questions and answers have been created. If there is a mismatch between the customer requirements and what the human annotator has produced, the work gets funneled back to a human for rework before being exported as part of the dataset delivered to the customer. When the dataset is delivered back to you, it's ready to incorporate into your supervised fine-tuning workflow at your discretion.

Text ranking

The following screenshot shows a UI for ranking the outputs from an LLM based on a prompt. You can simply write the instructions for the human reviewer and bring prompts and pre-generated responses to the Ground Truth Plus project team to start the job. In this case, we have requested that a human reviewer rank three responses per prompt from an LLM on the dimension of writing clarity (readability). Again, the left pane shows the instructions given to the reviewer by the job requester. In the center, the prompt is at the top of the page, and the three pre-generated responses are the main body for ease of use. On the right side of the UI, the human reviewer ranks them in order of most to least clear writing. Customers wanting to generate this type of preference dataset include application builders interested in building human-like chatbots, who therefore want to customize the instructions for their own use. The length of the prompt, number of responses, and ranking dimension can all be customized. For example, you may want to rank five responses in order of most to least factually accurate, biased, or toxic, or even rank and classify multiple dimensions simultaneously. These customizations are supported in Ground Truth Plus.
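Again purely as an illustration (the post doesn't publish the output schema), a single text-ranking judgment like the one captured in this UI could be serialized as one prompt with ranked responses:

# Purely illustrative: one preference record pairing a prompt with several
# responses and a human ranking (1 = best). Not the Ground Truth Plus schema.
import json

preference_record = {
    "prompt": "Explain photosynthesis to a ten-year-old.",
    "dimension": "writing_clarity",
    "responses": [
        {"text": "Plants use sunlight to make their own food ...", "rank": 1},
        {"text": "Photosynthesis is a light-driven biochemical pathway ...", "rank": 3},
        {"text": "Plants eat light, and the sun feeds them sugar ...", "rank": 2},
    ],
}

print(json.dumps(preference_record, indent=2))

Records like these are what reward-model training or other preference-based tuning methods consume downstream.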
Multi-modal models

In this section, we discuss image and video captioning for training multi-modal models such as text-to-image and text-to-video models, as well as customizations you may want to make for your particular use case.

Image captioning

The following screenshot shows a labeling UI for image captioning. You can request a project with image captioning to gather data to train a text-to-image model or an image-to-text model. In this case, we have requested data to train a text-to-image model and have set specific requirements on the caption in terms of length and detail. The UI is designed to walk the human annotators through the cognitive process of generating rich captions by providing a mental framework through assistive and descriptive tools. We have found that providing this mental framework for annotators results in more descriptive and accurate captions than simply providing an editable text box alone. The first step in the framework is for the human annotator to identify key objects in the image. When the annotator chooses an object in the image, a color-coded dot appears on the object. In this case, the annotator has chosen both the dog and the cat, creating two editable fields on the right side of the UI wherein the annotator enters the names of the objects (cat and dog) along with a detailed description of each object. Next, the annotator is guided to identify all the relationships between the objects in the image. In this case, the cat is relaxing next to the dog. Next, the annotator is asked to identify specific attributes of the image, such as the setting, background, or environment. Finally, in the caption input text box, the annotator is instructed to combine everything they wrote in the objects, relationships, and image setting fields into a single, complete descriptive caption of the image. Optionally, you can configure this image caption to be passed through a human-based quality check loop with specific instructions to ensure that the caption meets the requirements. If an issue is identified, such as a missing key object, that caption can be sent back for a human to correct before it is exported as part of the training dataset.

Video captioning

The following screenshot shows a video captioning UI to generate rich video captions with timestamp tags. You can request a video caption project to gather data to build text-to-video or video-to-text models. In this labeling UI, we have built a similar mental framework to ensure high-quality captions are written. The human annotator can control the video on the left side and create descriptions and timestamps for each activity shown in the video on the right side with the UI elements. Similar to the image captioning UI, there is also a place for the annotator to write a detailed description of the video setting, background, and environment. Finally, the annotator is instructed to combine all the elements into a coherent video caption. Similar to the image caption case, the video captions may optionally flow through a human-based quality control workflow to determine whether your requirements are met. If there is an issue with the video captions, they are sent for rework by the human annotator workforce.
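As with the text tasks, the caption delivery format is project-specific; hypothetically, the guided fields described above (objects, relationships, setting, final caption) might be captured together in one annotation record:

# Purely illustrative: one guided image-caption annotation combining the
# objects, relationships, setting, and final caption fields described above.
annotation = {
    "objects": [
        {"name": "dog", "description": "a golden retriever lying on a rug"},
        {"name": "cat", "description": "a gray tabby curled up beside it"},
    ],
    "relationships": ["the cat is relaxing next to the dog"],
    "setting": "a sunlit living room with a wooden floor",
    "caption": "A gray tabby cat relaxes next to a golden retriever "
               "on a rug in a sunlit living room.",
}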
Conclusion

Ground Truth Plus can help you prepare high-quality datasets to fine-tune foundation models for generative AI tasks, from answering questions to generating images and videos. It also allows skilled human workforces to review model outputs to ensure that they are aligned with human preferences. Additionally, it enables application builders to customize models using their industry or company data to ensure their application represents their preferred voice and style. These are the first of many innovations in Ground Truth Plus, and more are in development. Stay tuned for future posts. Interested in starting a project to build or improve your generative AI models and applications? Get started with Ground Truth Plus by connecting with our team today.

About the authors

Jesse Manders is a Senior Product Manager in the AWS AI/ML human in the loop services team. He works at the intersection of AI and human interaction with the goal of creating and improving AI/ML products and services to meet our needs. Previously, Jesse held leadership roles in engineering at Apple and Lumileds, and was a senior scientist in a Silicon Valley startup. He has an M.S. and Ph.D. from the University of Florida, and an MBA from the University of California, Berkeley, Haas School of Business.

Romi Datta is a Senior Manager of Product Management in the Amazon SageMaker team responsible for Human in the Loop services. He has been in AWS for over 4 years, holding several product management leadership roles in SageMaker, S3, and IoT. Prior to AWS he worked in various product management, engineering, and operational leadership roles at IBM, Texas Instruments, and Nvidia. He has an M.S. and Ph.D. in Electrical and Computer Engineering from the University of Texas at Austin, and an MBA from the University of Chicago Booth School of Business.

Jonathan Buck is a Software Engineer at Amazon Web Services working at the intersection of machine learning and distributed systems. His work involves productionizing machine learning models and developing novel software applications powered by machine learning to put the latest capabilities in the hands of customers.

Alex Williams is an applied scientist in the human-in-the-loop science team at AWS AI, where he conducts interactive systems research at the intersection of human-computer interaction (HCI) and machine learning. Before joining Amazon, he was a professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee, where he co-directed the People, Agents, Interactions, and Systems (PAIRS) research laboratory. He has also held research positions at Microsoft Research, Mozilla Research, and the University of Oxford. He regularly publishes his work at premier publication venues for HCI, such as CHI, CSCW, and UIST. He holds a PhD from the University of Waterloo.

Sarah Gao is a Software Development Manager in Amazon SageMaker Human In the Loop (HIL) responsible for building the ML-based labeling platform. Sarah has been in AWS for over 4 years, holding several software management leadership roles in EC2 security and SageMaker. Prior to AWS she worked in various engineering management roles at Oracle and Sun Microsystems.

Erran Li is the applied science manager at human-in-the-loop services, AWS AI, Amazon. His research interests are 3D deep learning, and vision and language representation learning. Previously he was a senior scientist at Alexa AI, the head of machine learning at Scale AI, and the chief scientist at Pony.ai.
Before that, he was with the perception team at Uber ATG and the machine learning platform team at Uber, working on machine learning for autonomous driving, machine learning systems, and strategic AI initiatives. He started his career at Bell Labs and was an adjunct professor at Columbia University. He co-taught tutorials at ICML'17 and ICCV'19, and co-organized several workshops at NeurIPS, ICML, CVPR, and ICCV on machine learning for autonomous driving, 3D vision and robotics, machine learning systems, and adversarial machine learning. He has a PhD in computer science from Cornell University. He is an ACM Fellow and IEEE Fellow."

Highlight text as its being spoken using Amazon Polly _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Highlight text as it's being spoken using Amazon Polly

by Varad Varadarajan | on 05 JUL 2023 | in Amazon Polly, Amazon Translate, Artificial Intelligence, Intermediate (200), Technical How-to

Amazon Polly is a service that turns text into lifelike speech. It enables the development of a whole class of applications that can convert text into speech in multiple languages. This service can be used by chatbots, audio books, and other text-to-speech applications in conjunction with other AWS AI or machine learning (ML) services. For example, Amazon Lex and Amazon Polly can be combined to create a chatbot that engages in a two-way conversation with a user and performs certain tasks based on the user's commands. Amazon Transcribe, Amazon Translate, and Amazon Polly can be combined to transcribe speech to text in the source language, translate it to a different language, and speak it.

In this post, we present an approach for highlighting text as it's being spoken using Amazon Polly. This solution can be used in many text-to-speech applications to do the following:

Add visual capabilities to audio in books, websites, and blogs
Increase comprehension when customers are trying to understand the text rapidly as it's being spoken

Our solution gives the client (the browser, in this example) the ability to know what text (word or sentence) is being spoken by Amazon Polly at any instant. This enables the client to dynamically highlight the text as it's being spoken. Such a capability is useful for providing a visual aid to speech for the use cases mentioned previously. Our solution can be extended to perform additional tasks besides highlighting text. For example, the browser can show images, play music, or perform other animations on the front end as the text is being spoken. This capability is useful for creating dynamic audio books, educational content, and richer text-to-speech applications.

Solution overview

At its core, the solution uses Amazon Polly to convert a string of text into speech. The text can be input from the browser or through an API call to the endpoint exposed by our solution. The speech generated by Amazon Polly is stored as an audio file (MP3 format) in an Amazon Simple Storage Service (Amazon S3) bucket. However, using the audio file alone, the browser can't find what parts of the text are being spoken at any instant because we don't have granular information on when each word is spoken. Amazon Polly provides a way to obtain this using speech marks.
Speech marks are stored in a text file that shows the time (measured in milliseconds from the start of the audio) when each word or sentence is spoken. Amazon Polly returns speech mark objects in a line-delimited JSON stream. A speech mark object contains the following fields:

Time – The timestamp in milliseconds from the beginning of the corresponding audio stream
Type – The type of speech mark (sentence, word, viseme, or SSML)
Start – The offset in bytes (not characters) of the start of the object in the input text (not including viseme marks)
End – The offset in bytes (not characters) of the object's end in the input text (not including viseme marks)
Value – This varies depending on the type of speech mark: for SSML, the SSML tag; for viseme, the viseme name; for word or sentence, a substring of the input text as delimited by the start and end fields

For example, the sentence "Mary had a little lamb" can give you the following speech marks file if you use SpeechMarkTypes = ["word", "sentence"] in the API call to obtain the speech marks:

{"time":0,"type":"sentence","start":0,"end":23,"value":"Mary had a little lamb."}
{"time":6,"type":"word","start":0,"end":4,"value":"Mary"}
{"time":373,"type":"word","start":5,"end":8,"value":"had"}
{"time":604,"type":"word","start":9,"end":10,"value":"a"}
{"time":643,"type":"word","start":11,"end":17,"value":"little"}
{"time":882,"type":"word","start":18,"end":22,"value":"lamb"}

The word "had" (on line 3) begins 373 milliseconds after the audio stream begins, starts at byte 5, and ends at byte 8 of the input text.

Architecture overview

The architecture of the solution is presented in the following diagram.

Figure: Highlight text as it's spoken, using Amazon Polly

Our website for the solution is stored on Amazon S3 as static files (JavaScript, HTML), which are hosted in Amazon CloudFront (1) and served to the end-user's browser (2). When the user enters text in the browser through a simple HTML form, it's processed by JavaScript in the browser. This calls an API (3) through Amazon API Gateway to invoke an AWS Lambda function (4). The Lambda function calls Amazon Polly (5) to generate the speech (audio) and speech marks (JSON) files. Two calls are made to Amazon Polly, using JavaScript async functions, to fetch the audio and speech marks files. The output of these calls is stored in Amazon S3 (6a). To avoid multiple users overwriting each other's files in the S3 bucket, the files are stored in a folder with a timestamp, which minimizes the chance of collisions; for a production release, we can employ more robust approaches to segregate users' files based on user ID, timestamp, and other unique characteristics. The Lambda function creates pre-signed URLs for the speech and speech marks files and returns them to the browser in the form of an array (7, 8, 9). When the browser sends the text to the API endpoint (3), it gets back two pre-signed URLs, one for the audio file and one for the speech marks file, in one synchronous invocation (9). A JavaScript function in the browser fetches the speech marks file and the audio from their URL handles (10) and sets up the audio player to play the audio (the HTML audio tag is used for this purpose). When the user clicks the play button, the browser parses the speech marks retrieved in the earlier step to create a series of timed events using timeouts. The events invoke a callback function, another JavaScript function, that highlights the spoken text in the browser. Simultaneously, the JavaScript function streams the audio file from its URL handle. The result is that the events run at the appropriate times to highlight the text as it's spoken while the audio is being played; the use of JavaScript timeouts provides the synchronization of the audio with the highlighted text.
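Before diving into the solution's Lambda code, here is a reference sketch of the two Amazon Polly calls expressed in Python with Boto3 (the post points to more Boto3 examples later). The voice and input text are examples.

# Minimal sketch: fetching audio and speech marks from Amazon Polly with Boto3.
import boto3

polly = boto3.client("polly")
text = "Mary had a little lamb."

# Call 1: the MP3 audio stream.
audio = polly.synthesize_speech(Text=text, VoiceId="Joanna", OutputFormat="mp3")

# Call 2: word- and sentence-level speech marks, returned as line-delimited JSON.
marks = polly.synthesize_speech(
    Text=text,
    VoiceId="Joanna",
    OutputFormat="json",
    SpeechMarkTypes=["word", "sentence"],
)

with open("speech.mp3", "wb") as f:
    f.write(audio["AudioStream"].read())
print(marks["AudioStream"].read().decode("utf-8"))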
When the user clicks the play button, the browser parses the speech marks retrieved in the earlier step to create a series of timed events using timeouts. The events invoke a callback function, which is another JavaScript function used to highlight the spoken text in the browser. Simultaneously, the JavaScript function streams the audio file from its URL handle. The result is that the events are run at the appropriate times to highlight the text as it’s spoken while the audio is being played. The use of JavaScript timeouts provides the synchronization of the audio with the highlighted text. Prerequisites To run this solution, you need an AWS account with an AWS Identity and Access Management (IAM) user who has permission to use Amazon CloudFront, Amazon API Gateway, Amazon Polly, Amazon S3, AWS Lambda, and AWS Step Functions. Use Lambda to generate speech and speech marks The following code invokes the Amazon Polly synthesize_speech function two times, to fetch the audio file and the speech marks file. They’re run as asynchronous functions and coordinated to return the result at the same time using promises.

// Run both Amazon Polly calls concurrently and wait for both to finish
const p1 = new Promise(doSynthesizeSpeechmarks);
const p2 = new Promise(doSynthesizeSpeech);
var result;
await Promise.all([p1, p2])
  .then((values) => {
    // Return the array of pre-signed URLs
    console.log('Values:', values);
    result = { "output": values };
  })
  .catch((err) => {
    console.log("Error:" + err);
    result = err;
  });

On the JavaScript side, the text highlighting is done by highlighter(start, finish, word) and the timed events are set by setTimers():

// Highlight the word or sentence between the start and finish offsets
function highlighter(start, finish, word) {
  let textarea = document.getElementById("postText");
  //console.log(start + "," + finish + "," + word);
  textarea.focus();
  textarea.setSelectionRange(start, finish);
}

// Read through the speech marks file and set a timer for every word
function setTimers() {
  let speechmarksStr = sessionStorage.getItem("speechmarks");
  console.log(speechmarksStr);
  let speechmarks = speechmarksStr.split("\n");
  for (let i = 0; i < speechmarks.length; i++) {
    //console.log(i + ":" + speechmarks[i]);
    if (speechmarks[i].length == 0) {
      continue;
    }
    const smjson = JSON.parse(speechmarks[i]);
    const t = smjson["time"];
    const s = smjson["start"];
    const f = smjson["end"];
    const word = smjson["value"];
    setTimeout(highlighter, t, s, f, word);
  }
}

Alternative approaches Instead of the previous approach, you can consider a few alternatives: Create both the speech marks and audio files inside a Step Functions state machine. The state machine can use a parallel branch to invoke two different Lambda functions: one to generate the speech and another to generate the speech marks. The code for this can be found in the using-step-functions subfolder in the GitHub repo. Invoke Amazon Polly asynchronously to generate the audio and speech marks. This approach can be used if the text content is large or the user doesn’t need a real-time response. For more details about creating long audio files, refer to Creating Long Audio Files. Have Amazon Polly create the presigned URL directly using the generate_presigned_url call on the Amazon Polly client in Boto3. If you go with this approach, Amazon Polly generates the audio and speech marks anew every time. In our current approach, we store these files in Amazon S3.
Although these stored files aren’t accessible from the browser in our version of the code, you can modify the code to play previously generated audio files by fetching them from Amazon S3 (instead of regenerating the audio for the text again using Amazon Polly). We have more code examples for accessing Amazon Polly with Python in the AWS Code Library. Create the solution The entire solution is available from our GitHub repo. To create this solution in your account, follow the instructions in the README.md file. The solution includes an AWS CloudFormation template to provision your resources. Cleanup To clean up the resources created in this demo, perform the following steps: Delete the S3 buckets created to store the CloudFormation template (Bucket A), the source code (Bucket B), and the website (pth-cf-text-highlighter-website-[Suffix]). Delete the CloudFormation stack pth-cf. Delete the S3 bucket containing the speech files (pth-speech-[Suffix]). This bucket was created by the CloudFormation template to store the audio and speech marks files generated by Amazon Polly. Summary In this post, we showed an example of a solution that can highlight text as it’s being spoken using Amazon Polly. It was developed using the Amazon Polly speech marks feature, which provides markers for where each word or sentence begins in an audio file. The solution is available as a CloudFormation template. It can be deployed as is to any web application that performs text-to-speech conversion. This would be useful for adding visual capabilities to audio in books, avatars with lip-sync capabilities (using viseme speech marks), websites, and blogs, and for aiding people with hearing impairments. It can be extended to perform additional tasks besides highlighting text. For example, the browser can show images, play music, and perform other animations on the front end while the text is being spoken. This capability can be useful for creating dynamic audio books, educational content, and richer text-to-speech applications. We welcome you to try out this solution and learn more about the relevant AWS services from the following links. You can extend the functionality for your specific needs. Amazon API Gateway Amazon CloudFront AWS Lambda Amazon Polly Amazon S3 About the Author Varad G Varadarajan is a Trusted Advisor and Field CTO for Digital Native Businesses (DNB) customers at AWS. He helps them architect and build innovative solutions at scale using AWS products and services. Varad’s areas of interest are IT strategy consulting, architecture, and product management. Outside of work, Varad enjoys creative writing, watching movies with family and friends, and traveling." Host ML models on Amazon SageMaker using Triton_ ONNX Models _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog Host ML models on Amazon SageMaker using Triton: ONNX Models by Abhi Shivaditya, Dhawalkumar Patel, James Park, and Rupinder Grewal | on 09 JUN 2023 | in Advanced (300), Amazon SageMaker, Artificial Intelligence ONNX (Open Neural Network Exchange) is an open-source standard for representing deep learning models that is widely supported by many providers.
ONNX provides tools for optimizing and quantizing models to reduce the memory and compute needed to run machine learning (ML) models. One of the biggest benefits of ONNX is that it provides a standardized format for representing and exchanging ML models between different frameworks and tools. This allows developers to train their models in one framework and deploy them in another without the need for extensive model conversion or retraining. For these reasons, ONNX has gained significant importance in the ML community. In this post, we showcase how to deploy ONNX-based models for multi-model endpoints (MMEs) that use GPUs. This is a continuation of the post Run multiple deep learning models on GPU with Amazon SageMaker multi-model endpoints, where we showed how to deploy PyTorch and TensorRT versions of ResNet50 models on Nvidia’s Triton Inference Server. In this post, we use the same ResNet50 model in ONNX format along with an additional natural language processing (NLP) example model in ONNX format to show how it can be deployed on Triton. Furthermore, we benchmark the ResNet50 model and see the performance benefits that ONNX provides when compared to PyTorch and TensorRT versions of the same model, using the same input. ONNX Runtime ONNX Runtime is a runtime engine for ML inference designed to optimize the performance of models across multiple hardware platforms, including CPUs and GPUs. It allows the use of ML frameworks like PyTorch and TensorFlow. It facilitates performance tuning to run models cost-efficiently on the target hardware and has support for features like quantization and hardware acceleration, making it an ideal choice for deploying efficient, high-performance ML applications. For examples of how ONNX models can be optimized for Nvidia GPUs with TensorRT, refer to TensorRT Optimization (ORT-TRT) and ONNX Runtime with TensorRT optimization. The Amazon SageMaker Triton container flow is depicted in the following diagram. Users can send an HTTPS request with the input payload for real-time inference behind a SageMaker endpoint. The user can specify a TargetModel header that contains the name of the model that the request is destined to invoke. Internally, the SageMaker Triton container implements an HTTP server with the same contracts as mentioned in How Containers Serve Requests. It has support for dynamic batching and supports all the backends that Triton provides. Based on the configuration, the ONNX runtime is invoked and the request is processed on CPU or GPU as predefined in the model configuration provided by the user. Solution overview To use the ONNX backend, complete the following steps: Compile the model to ONNX format. Configure the model. Create the SageMaker endpoint. Prerequisites Ensure that you have access to an AWS account with sufficient AWS Identity and Access Management (IAM) permissions to create a notebook, access an Amazon Simple Storage Service (Amazon S3) bucket, and deploy models to SageMaker endpoints. See Create execution role for more information. Compile the model to ONNX format The transformers library provides a convenient method to compile the PyTorch model to ONNX format.
The following code achieves the transformations for the NLP model:

onnx_inputs, onnx_outputs = transformers.onnx.export(
    preprocessor=tokenizer,
    model=model,
    config=onnx_config,
    opset=12,
    output=save_path,
)

Exporting models (either PyTorch or TensorFlow) is easily achieved through the conversion tool provided as part of the Hugging Face transformers repository. The following is what happens under the hood: Allocate the model from transformers (PyTorch or TensorFlow). Forward dummy inputs through the model. This way, ONNX can record the set of operations run. The transformers library inherently takes care of dynamic axes when exporting the model. Save the graph along with the network parameters. A similar mechanism is followed for the computer vision use case from the torchvision model zoo:

torch.onnx.export(
    resnet50,
    dummy_input,
    args.save,
    export_params=True,
    opset_version=11,
    do_constant_folding=True,
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch_size"}, "output": {0: "batch_size"}},
)

Configure the model In this section, we configure the computer vision and NLP models. We show how to prepare pre-trained ResNet50 and RoBERTa large models for deployment on a SageMaker MME by utilizing Triton Inference Server model configurations. The ResNet50 notebook is available on GitHub. The RoBERTa notebook is also available on GitHub. For ResNet50, we use the Docker approach to create an environment that already has all the dependencies required to build our ONNX model and generate the model artifacts needed for this exercise. This approach makes it much easier to share dependencies and create the exact environment that is needed to accomplish this task. The first step is to create the ONNX model package per the directory structure specified in ONNX Models. Our aim is to use the minimal model repository for an ONNX model contained in a single file as follows:

/Model_name
├── 1
│   └── model.onnx
└── config.pbtxt

Next, we create the model configuration file that describes the inputs, outputs, and backend configurations for the Triton Server to pick up and invoke the appropriate kernels for ONNX. This file is known as config.pbtxt and is shown in the following code for the RoBERTa use case. Note that the BATCH dimension is omitted from the config.pbtxt. However, when sending the data to the model, we include the batch dimension. The following code also shows how you can add this feature with model configuration files to set dynamic batching with a preferred batch size of 5 for the actual inference. With the current settings, the model instance is invoked instantly when the preferred batch size of 5 is met or the delay time of 100 microseconds has elapsed since the first request reached the dynamic batcher.
name: ""nlp-onnx"" platform: ""onnxruntime_onnx"" backend: ""onnxruntime"" max_batch_size: 32 input { name: ""input_ids"" data_type: TYPE_INT64 dims: [512] } input { name: ""attention_mask"" data_type: TYPE_INT64 dims: [512] } output { name: ""last_hidden_state"" data_type: TYPE_FP32 dims: [-1, 768] } output { name: ""1550"" data_type: TYPE_FP32 dims: [768] } instance_group { count: 1 kind: KIND_GPU } dynamic_batching { max_queue_delay_microseconds: 100 preferred_batch_size:5 } The following is the similar configuration file for the computer vision use case: name: ""resenet_onnx"" platform: ""onnxruntime_onnx"" max_batch_size : 128 input [ { name: ""input"" data_type: TYPE_FP32 format: FORMAT_NCHW dims: [ 3, 224, 224 ] } ] output [ { name: ""output"" data_type: TYPE_FP32 dims: [ 1000 ] } ] Create the SageMaker endpoint We use the Boto3 APIs to create the SageMaker endpoint. For this post, we show the steps for the RoBERTA notebook, but these are common steps and will be the same for the ResNet50 model as well. Create a SageMaker model We now create a SageMaker model . We use the Amazon Elastic Container Registry (Amazon ECR) image and the model artifact from the previous step to create the SageMaker model. Create the container To create the container, we pull the appropriate image from Amazon ECR for Triton Server. SageMaker allows us to customize and inject various environment variables. Some of the key features are the ability to set the BATCH_SIZE ; we can set this per model in the config.pbtxt file, or we can define a default value here. For models that can benefit from larger shared memory size, we can set those values under SHM variables. To enable logging, set the log verbose level to true . We use the following code to create the model to use in our endpoint: mme_triton_image_uri = ( f""{account_id_map[region]}.dkr.ecr.{region}.{base}"" + ""/sagemaker-tritonserver:22.12-py3"" ) container = { ""Image"": mme_triton_image_uri, ""ModelDataUrl"": mme_path, ""Mode"": ""MultiModel"", ""Environment"": { ""SAGEMAKER_TRITON_SHM_DEFAULT_BYTE_SIZE"": ""16777216000"", # ""16777216"", #""16777216000"", ""SAGEMAKER_TRITON_SHM_GROWTH_BYTE_SIZE"": ""10485760"", }, } from sagemaker.utils import name_from_base model_name = name_from_base(f""flan-xxl-fastertransformer"") print(model_name) create_model_response = sm_client.create_model( ModelName=model_name, ExecutionRoleArn=role, PrimaryContainer={ ""Image"": inference_image_uri, ""ModelDataUrl"": s3_code_artifact }, ) model_arn = create_model_response[""ModelArn""] print(f""Created Model: {model_arn}"") Create a SageMaker endpoint You can use any instances with multiple GPUs for testing. In this post, we use a g4dn.4xlarge instance. We don’t set the VolumeSizeInGB parameters because this instance comes with local instance storage. The VolumeSizeInGB parameter is applicable to GPU instances supporting the Amazon Elastic Block Store (Amazon EBS) volume attachment. We can leave the model download timeout and container startup health check at the default values. For more details, refer to CreateEndpointConfig . 
endpoint_config_response = sm_client.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": "ml.g4dn.4xlarge",
            "InitialInstanceCount": 1,
            # "VolumeSizeInGB": 200,
            # "ModelDataDownloadTimeoutInSeconds": 600,
            # "ContainerStartupHealthCheckTimeoutInSeconds": 600,
        },
    ],
)

Lastly, we create a SageMaker endpoint:

create_endpoint_response = sm_client.create_endpoint(
    EndpointName=f"{endpoint_name}",
    EndpointConfigName=endpoint_config_name,
)

Invoke the model endpoint The model expects tokenized text, so we pass in the input_ids and attention_mask to the model as part of the payload. The following code shows how to create the tensors:

tokenizer("This is a sample", padding="max_length", max_length=max_seq_len)

We now create the appropriate payload by ensuring the data type matches what we configured in the config.pbtxt. This also gives us the tensors with the batch dimension included, which is what Triton expects. We use the JSON format to invoke the model. Triton also provides a native binary invocation method for the model.

response = runtime_sm_client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/octet-stream",
    Body=json.dumps(payload),
    TargetModel=f"{tar_file_name}",
    # TargetModel=f"roberta-large-v0.tar.gz",
)

Note the TargetModel parameter in the preceding code. We send the name of the model to be invoked as a request header because this is a multi-model endpoint; therefore, we can invoke multiple models at runtime on an already deployed inference endpoint by changing this parameter. This shows the power of multi-model endpoints! To output the response, we can use the following code:

import numpy as np

resp_bin = response["Body"].read().decode("utf8")
# Keys are: "outputs":[{"name":"1550","datatype":"FP32","shape":[1,768],"data":[0.0013,0,3433...]}]
for data in json.loads(resp_bin)["outputs"]:
    shape_1 = list(data["shape"])
    dat_1 = np.array(data["data"])
    dat_1.resize(shape_1)
    print(f"Data outputs received back: Shape: {dat_1.shape}")

ONNX for performance tuning The ONNX backend uses C++ arena memory allocation. Arena allocation is a C++-only feature that helps you optimize your memory usage and improve performance. Memory allocation and deallocation constitute a significant fraction of CPU time spent in protocol buffers code. By default, new object creation performs heap allocations for each object, each of its sub-objects, and several field types, such as strings. These allocations occur in bulk when parsing a message and when building new messages in memory, and associated deallocations happen when messages and their sub-object trees are freed. Arena-based allocation has been designed to reduce this performance cost. With arena allocation, new objects are allocated out of a large piece of pre-allocated memory called the arena. Objects can all be freed at once by discarding the entire arena, ideally without running destructors of any contained object (though an arena can still maintain a destructor list when required). This makes object allocation faster by reducing it to a simple pointer increment, and makes deallocation almost free. Arena allocation also provides greater cache efficiency: when messages are parsed, they are more likely to be allocated in continuous memory, which makes traversing messages more likely to hit hot cache lines.
The downside of arena-based allocation is that the C++ heap memory will be over-allocated and stay allocated even after the objects are deallocated. This might lead to out-of-memory errors or high CPU memory usage. To achieve the best of both worlds, we use the following configurations provided by Triton and ONNX: arena_extend_strategy – This parameter refers to the strategy used to grow the memory arena with regard to the size of the model. We recommend setting the value to 1 (= kSameAsRequested), which is not the default value. The reasoning is as follows: the drawback of the default arena extend strategy (kNextPowerOfTwo) is that it might allocate more memory than needed, which could be a waste. As the name suggests, kNextPowerOfTwo (the default) extends the arena by a power of 2, whereas kSameAsRequested extends by a size that is the same as the allocation request each time. kSameAsRequested is suited for advanced configurations where you know the expected memory usage in advance. In our testing, because we know the size of our models is a constant value, we can safely choose kSameAsRequested. gpu_mem_limit – We set the value to the CUDA memory limit. To use all possible memory, pass in the maximum size_t. It defaults to SIZE_MAX if nothing is specified. We recommend keeping it at the default. enable_cpu_mem_arena – This enables the memory arena on the CPU. The arena may pre-allocate memory for future usage. Set this option to false if you don’t want it. The default is true. If you disable the arena, heap memory allocation will take time, so inference latency will increase. In our testing, we left it at the default. enable_mem_pattern – This parameter refers to the internal memory allocation strategy based on input shapes. If the shapes are constant, we can enable this parameter to generate a memory pattern for the future and save some allocation time, making it faster. Use 1 to enable the memory pattern and 0 to disable it. It’s recommended to set this to 1 when the input features are expected to be the same. The default value is 1. do_copy_in_default_stream – In the context of the CUDA execution provider in ONNX, a compute stream is a sequence of CUDA operations that are run asynchronously on the GPU. The ONNX runtime schedules operations in different streams based on their dependencies, which helps minimize the idle time of the GPU and achieve better performance. We recommend using the default setting of 1, which uses the same stream for copying and compute; however, you can use 0 to use separate streams for copying and compute, which might result in the device pipelining the two activities. In our testing of the ResNet50 model, we used both 0 and 1 but couldn’t find any appreciable difference between the two in terms of performance and memory consumption of the GPU device. Graph optimization – The ONNX backend for Triton supports several parameters that help fine-tune the model size as well as the runtime performance of the deployed model. When the model is converted to the ONNX representation (the first box in the following diagram, at the IR stage), the ONNX runtime provides graph optimizations at three levels: basic, extended, and layout optimizations.
You can activate all levels of graph optimizations by adding the following parameters in the model configuration file: optimization { graph: { level: 1 } } cudnn_conv_algo_search – Because we’re using CUDA-based Nvidia GPUs in our testing, for our computer vision use case with the ResNet50 model, we can use the CUDA execution provider-based optimization at the fourth layer in the following diagram with the cudnn_conv_algo_search parameter. The default option is exhaustive (0), but when we changed this configuration to 1 – HEURISTIC, we saw the model latency in steady state reduce to 160 milliseconds. The reason is that the ONNX runtime invokes the lighter-weight cudnnGetConvolutionForwardAlgorithm_v7 forward pass and therefore reduces latency while retaining adequate performance. Run mode – The next step is selecting the correct execution_mode at layer 5 in the following diagram. This parameter controls whether you want to run operators in your graph sequentially or in parallel. When your model has many branches in its graph, setting this option to ExecutionMode.ORT_PARALLEL (1) will usually give you better performance. The default mode is sequential, so you can enable parallel execution to suit your needs. parameters { key: "execution_mode" value: { string_value: "1" } } For a deeper understanding of the opportunities for performance tuning in ONNX, refer to the following figure. Source: https://static.linaro.org/connect/san19/presentations/san19-211.pdf Benchmark numbers and performance tuning By turning on the graph optimizations, cudnn_conv_algo_search, and parallel run mode parameters in our testing of the ResNet50 model, we saw the cold start time of the ONNX model graph reduce from 4.4 seconds to 1.61 seconds. An example of a complete model configuration file is provided in the ONNX configuration section of the following notebook. The testing benchmark results are as follows: PyTorch – 176 milliseconds, cold start 6 seconds TensorRT – 174 milliseconds, cold start 4.5 seconds ONNX – 168 milliseconds, cold start 4.4 seconds The following graphs visualize these metrics. Furthermore, in our testing of computer vision use cases, we suggest sending the request payload in binary format using the HTTP client provided by Triton, because it significantly improves model invoke latency. Other parameters that SageMaker exposes for ONNX on Triton are as follows: Dynamic batching – Dynamic batching is a feature of Triton that allows inference requests to be combined by the server, so that a batch is created dynamically. Creating a batch of requests typically results in increased throughput. The dynamic batcher should be used for stateless models. The dynamically created batches are distributed to all model instances configured for the model. Maximum batch size – The max_batch_size property indicates the maximum batch size that the model supports for the types of batching that can be exploited by Triton. If the model’s batch dimension is the first dimension, and all inputs and outputs to the model have this batch dimension, then Triton can use its dynamic batcher or sequence batcher to automatically use batching with the model. In this case, max_batch_size should be set to a value greater than or equal to 1, which indicates the maximum batch size that Triton should use with the model.
Default max batch size – The default-max-batch-size value is used for max_batch_size during autocomplete when no other value is found. The onnxruntime backend will set the max_batch_size of the model to this default value if autocomplete has determined the model is capable of batching requests and max_batch_size is 0 in the model configuration or max_batch_size is omitted from the model configuration. If max_batch_size is more than 1 and no scheduler is provided, the dynamic batch scheduler will be used. The default max batch size is 4. Clean up Ensure that you delete the model, model configuration, and model endpoint after running the notebook. The steps to do this are provided at the end of the sample notebook in the GitHub repo. Conclusion In this post, we dove deep into the ONNX backend that Triton Inference Server supports on SageMaker. This backend provides GPU acceleration for your ONNX models. There are many options to consider to get the best performance for inference, such as batch sizes, data input formats, and other factors that can be tuned to meet your needs. SageMaker allows you to use this capability with single-model and multi-model endpoints. MMEs allow a better balance of performance and cost savings. To get started with MME support for GPU, see Host multiple models in one container behind one endpoint. We invite you to try Triton Inference Server containers in SageMaker, and share your feedback and questions in the comments. About the authors Abhi Shivaditya is a Senior Solutions Architect at AWS, working with strategic global enterprise organizations to facilitate the adoption of AWS services in areas such as artificial intelligence, distributed computing, networking, and storage. His expertise lies in deep learning in the domains of natural language processing (NLP) and computer vision. Abhi assists customers in deploying high-performance machine learning models efficiently within the AWS ecosystem. James Park is a Solutions Architect at Amazon Web Services. He works with Amazon.com to design, build, and deploy technology solutions on AWS, and has a particular interest in AI and machine learning. In his spare time, he enjoys seeking out new cultures and new experiences, and staying up to date with the latest technology trends. You can find him on LinkedIn. Rupinder Grewal is a Sr. AI/ML Specialist Solutions Architect with AWS. He currently focuses on model serving and MLOps on SageMaker. Prior to this role, he worked as a Machine Learning Engineer building and hosting models. Outside of work, he enjoys playing tennis and biking on mountain trails. Dhawal Patel is a Principal Machine Learning Architect at AWS. He has worked with organizations ranging from large enterprises to mid-sized startups on problems related to distributed computing and artificial intelligence. He focuses on deep learning, including the NLP and computer vision domains. He helps customers achieve high-performance model inference on SageMaker.
" How AWS is helping thredUP revolutionize the resale model for brands _ AWS for Industries.txt,"AWS for Industries How AWS is helping thredUP revolutionize the resale model for brands by Madeline Steiner | on 06 JUN 2023 | in Amazon EC2, Amazon QuickSight, Amazon RDS, Amazon SageMaker, Amazon Simple Storage Service (S3), Auto Scaling, AWS Cost Explorer, Industries, Retail Like global landfills, the fashion industry waste problem is growing by the second. Retailers are struggling to address an enormous (and pressing) concern: what happens to their products after point-of-sale and what are the environmental implications? In the United States, companies spend an estimated $50 billion on product returns. These returned goods are responsible for massive landfill waste and 27 million tons of carbon dioxide emissions annually. This is part of what’s called a linear economy, where we take materials from the Earth, make products from them, and eventually throw them away as waste. For example, research shows that clothes in the US “are only worn for around a quarter of the global average and some garments are only worn between seven and ten times.” After little wear, “these huge volumes of clothes are landfilled or incinerated each year.” This wastes not just the materials, but also the energy, water, nutrients, land, and other resources used to produce the textiles and garments. On the flip side of this is what’s called a circular economy. According to the Ellen MacArthur Foundation, the circular economy is based on three principles driven by design: eliminate waste and pollution, circulate products and materials (at their highest value), and regenerate nature. Some examples of the circular economy in retail include resale, repairing, reusing, remanufacturing, recycling, rental, subscription, and more. With growing support of this model and the concept of resale, more retailers are discovering the benefits of sustainably driven design and production. Whether retailers are driven by customer demands, reputation risk, or they’re just trying to get ahead of looming regulation, resale is a positive path forward for retailers to achieve their sustainability goals. Some added benefits of resale include: acquiring new, eco-conscious customers or consumers that can access a brand at a discounted rate, controlling the resale experience for their brand, and driving additional sales. If resale is so great for businesses, why isn’t every retailer embracing it? Unfortunately, building an in-house resale channel from scratch is complicated and expensive. Not all companies have the resources for complex initiatives like reverse logistics, authentication, and data collection, preventing them from making resale implementation a reality. Fortunately for retailers, this is where thredUP comes in. Reimagining resale thredUP is one of the largest online resale platforms that is transforming resale by making it easy to buy and sell secondhand clothing. Since its inception in 2009, thredUP has leveraged technology and data to build a thriving marketplace that connects buyers and sellers of gently used apparel, shoes, and accessories.
Now, thredUP is taking things a step further, offering Resale-as-a-Service (RaaS) for some of the world’s leading brands and retailers that want to provide their customers with a sustainable, eco-friendly, and cost-effective way to shop. According to The Recommerce 100, a comprehensive review of branded resale programs, there are 139 brands with resale shops, a 3.4x growth from 2021 to 2022, with 260,000 total resale shop listings. If all 260,000 resale shop listings in The Recommerce 100 sold, it would be the equivalent of 29,000 trees planted, 400 homes powered annually, and $11.4 million estimated total revenue. Brands’ adoption of resale showing 3.4x growth from 2021 to 2022 In its 2023 Resale Report, thredUP reported that 86 percent of retail execs say their customers are already participating in resale. With 58 percent of retail executives saying offering resale is becoming table stakes for retailers, it’s safe to say resale is grabbing the attention of higher-ups in the retail industry. That number is only set to increase. In the U.S., the secondhand market is expected to nearly double by 2027, to $70 billion, while the global secondhand market is predicted to grow to $350 billion by 2027. Built for brands, powered by AWS Amazon Web Services (AWS) powers thredUP’s RaaS offering and is helping thredUP revolutionize the resale business model for brands. Let’s look at the key features and benefits of thredUP’s RaaS offering and how AWS is helping brands deliver a seamless resale experience to thredUP’s customers. From its start as a secondhand marketplace in 2009, thredUP selected AWS as its cloud provider due to scalability, cost-efficiency, security, reliability, and access to modern advanced technologies. AWS services like Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS), and Amazon Simple Storage Service (Amazon S3) form the foundation of thredUP.com’s infrastructure. Inventory Management thredUP’s RaaS uses Amazon SageMaker to manage and optimize inventory mix, ensuring that brands have the right products at the right time. thredUP has collected secondhand apparel sales data across 55,000 brands for more than a decade. thredUP unlocks the power of that data to the benefit of resale buyers and sellers by making better decisions on pricing, inventory mix, and merchandising. Nine years ago, a thredUP engineer was able to programmatically provide the probability that a given item would sell in the next 30 days using AWS Artificial Intelligence and Machine Learning (AI/ML) services. thredUP was able to implement this model in a month without the need for data scientists or ML engineers. Pricing Optimization Using machine learning algorithms to automatically price products based on market demand, thredUP’s RaaS enables brands to maximize their profits while offering competitive prices to customers. thredUP handles millions of used products and reprices hundreds of thousands of items daily. On any given day, new product arrivals are added, and millions of emails and push notifications are sent, all using Amazon Managed Streaming for Apache Kafka (Amazon MSK). With this much activity on different platforms and RaaS resale sites, thredUP greatly relies on Amazon MSK to help things run smoothly. Repricing runs on an event-driven architecture: Amazon MSK is also foundational for cross-listing secondhand products on multiple resale websites and repricing as many as 100,000 items in one hour (a simplified sketch of such a flow follows below).
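To make the event-driven repricing idea concrete, here is a minimal sketch of a producer and consumer using the kafka-python client. The broker address, topic name, payload fields, and the update_listing helper are all hypothetical illustrations, not thredUP’s implementation:

import json
from kafka import KafkaConsumer, KafkaProducer

BROKER = "my-msk-broker:9092"  # assumption: your MSK bootstrap address

producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Publish a repricing event when a pricing model produces a new price
producer.send("reprice-events", {"item_id": "sku-123", "new_price": 14.99})
producer.flush()

consumer = KafkaConsumer(
    "reprice-events",
    bootstrap_servers=BROKER,
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for event in consumer:
    # A downstream consumer updates the listing on each resale site
    update_listing(event.value["item_id"], event.value["new_price"])  # hypothetical helper

The design point is decoupling: the pricing model only publishes events, and any number of consumers (one per resale site, for example) can apply the new prices independently.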
Analytics and Insights thredUP’s RaaS employs Amazon QuickSight to supply brands with near real-time analytics and insights into their resale performance, enabling them to make data-driven decisions and optimize their operations. Amazon QuickSight offers usage-based pricing and gives thredUP the ability to provision access for brands programmatically and embed the dashboards and reports into web applications (a short embedding sketch appears at the end of this post). Security thredUP’s RaaS clients require a high level of security and data protection from thredUP, and AWS is able to deliver on this with a wide range of robust security features, such as firewalls, encryption, and identity and access management. AWS has certifications with various industry standards, such as HIPAA, PCI DSS, and SOC 2, which help thredUP assure brands that its RaaS services meet the necessary security requirements and are independently audited and certified by recognized industry organizations. Having a high level of compliance certification significantly speeds up the sales and vendor onboarding processes. Scale thredUP can scale its infrastructure and resources up or down based on demand using AWS Auto Scaling. Just like with typical ecommerce, sales are critical for resale. Sales generate revenue, attract and retain customers, build a strong brand, grow market share, and enable growth. Cost Efficiency thredUP is able to optimize costs with flexible usage-based pricing models for the resources it needs, only when it needs them. AWS Cost Explorer helps ensure efficiency for thredUP and the brands it works with. As a specific example, thredUP recently migrated from a self-managed Kubernetes cluster to Amazon Elastic Kubernetes Service (Amazon EKS) / Amazon Elastic Container Registry (Amazon ECR) because custom configuration became too complex to maintain internally and caused unplanned downtimes during upgrades. After the migration, thredUP was able to keep the infrastructure team small, supporting 80+ Kubernetes deployments and 20+ tools. The time spent on patching decreased by 80 percent, downtime related to unsuccessful patching was eliminated, security posture improved by outsourcing security hardening, and CIS Kubernetes Benchmarking was enabled. thredUP also enjoyed an instance cost reduction of around 20 percent by switching to Graviton instances. While consumers do care about the planet, most can’t seem to shake the habit of wanting more clothes more frequently thanks to a history of fast fashion. thredUP believes secondhand is a way for consumers to satisfy constant newness while being mindful of their environmental impact. In fact, in thredUP’s 2023 Resale Report, 64 percent of Gen Z and Millennials say they look for an item secondhand before purchasing it new. By leveraging the power of AWS, thredUP is helping brands tap into the fast-growing resale market and provide their customers with a sustainable, affordable, and convenient shopping experience. With thredUP’s RaaS, brands can easily integrate resale into their existing business models, reduce their environmental impact, and drive customer loyalty and engagement. As the demand for sustainable and ethical fashion continues to grow, thredUP’s RaaS is poised to become a game-changer for the retail industry. Interested in how AWS tools and technologies can help revolutionize your business? Learn more about AWS for retail or contact an AWS Representative.
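To illustrate the programmatic QuickSight embedding mentioned in the Analytics and Insights section, the following is a minimal sketch using the QuickSight generate_embed_url_for_registered_user API in Boto3. The account ID, user ARN, and dashboard ID are placeholders, and this is not thredUP’s code:

import boto3

quicksight = boto3.client("quicksight")

response = quicksight.generate_embed_url_for_registered_user(
    AwsAccountId="111122223333",  # placeholder account ID
    UserArn="arn:aws:quicksight:us-east-1:111122223333:user/default/brand-analyst",
    ExperienceConfiguration={
        "Dashboard": {"InitialDashboardId": "resale-performance-dashboard"}  # placeholder
    },
    SessionLifetimeInMinutes=60,
)

# The returned URL can be embedded in a brand-facing web application
embed_url = response["EmbedUrl"]

The URL is short-lived and scoped to the registered user, which is what makes per-brand provisioning of embedded dashboards practical.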
Further Reading ● How immersive commerce can drive your sustainability goals while making your merch look fabulous ● Reduce food waste to improve sustainability and financial results in retail with Amazon Forecast ● AWS customers create sustainable solutions to impact climate change ● Green Is the New Black: How the Apparel Industry Is Embracing Circularity TAGS: ESG, sustainability Madeline Steiner Madeline Steiner leads Amazon Web Services’ Retail & CPG worldwide strategy and thought leadership for ESG (Environmental, Social, and Governance) Solutions. In partnership with the AWS Retail and CPG leadership teams, Madeline works to shape and deliver go-to-market strategies and innovative partner solutions for consumer enterprises looking for guidance on how to integrate environmental and social initiatives into their business operations. Madeline has 8+ years of experience in retail and retail technology, including 5 years of merchandising and fashion product development roles at Gap, Inc., and 3 years in customer success at Trendalytics, a consumer intelligence platform for data-driven product decisions." How BrainPad fosters internal knowledge sharing with Amazon Kendra _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog How BrainPad fosters internal knowledge sharing with Amazon Kendra by Dr. Naoki Okada | on 13 JUN 2023 | in Amazon Kendra, Artificial Intelligence, AWS Lambda, Customer Solutions This is a guest post by Dr. Naoki Okada, Lead Data Scientist at BrainPad Inc. Founded in 2004, BrainPad Inc. is a pioneering partner in the field of data utilization, helping companies create business and improve their management through the use of data. To date, BrainPad has helped more than 1,300 companies, primarily industry leaders. BrainPad has the advantage of providing a one-stop service, from formulating a data utilization strategy to proof of concept and implementation. BrainPad’s unique style is to work together with clients to solve problems on the ground, such as data that isn’t being collected due to a siloed organizational structure or data that exists but isn’t organized. This post discusses how to structure internal knowledge sharing using Amazon Kendra and AWS Lambda and how Amazon Kendra solves the obstacles around knowledge sharing that many companies face. We summarize BrainPad’s efforts in four key areas: What are the knowledge sharing problems that many companies face? Why did we choose Amazon Kendra? How did we implement the knowledge sharing system? Even if a tool is useful, it is meaningless if it is not used. How did we overcome the barrier to adoption? Knowledge sharing problems that many companies face Many companies achieve their results by dividing their work into different areas. Each of these activities generates new ideas every day. This knowledge is accumulated on an individual basis. If this knowledge can be shared among people and organizations, synergies in related work can be created, and the efficiency and quality of work will increase dramatically. This is the power of knowledge sharing. However, there are many common barriers to knowledge sharing: Few people are proactively involved, and the process can’t be sustained for long due to busy schedules.
Knowledge is scattered across multiple media, such as internal wikis and PDFs, making it difficult to find the information you need. No one enters knowledge into the knowledge consolidation system. The system will not be widely used because of its poor searchability. Our company faced a similar situation. The fundamental problem with knowledge sharing is that although most employees have a strong need to obtain knowledge, they have little motivation to share their own knowledge at a cost. Changing employee behavior for the sole purpose of knowledge sharing is not easy. In addition, each employee or department has its own preferred method of accumulating knowledge, and trying to force unification won’t lead to motivation or performance in knowledge sharing. This is a headache for management, who want to consolidate knowledge, while those in the field want to keep knowledge in a decentralized way. At our company, Amazon Kendra is the cloud service that has solved these problems. Why we chose Amazon Kendra Amazon Kendra is a cloud service that allows us to search for internal information from a common interface. In other words, it is a search engine that specializes in internal information. In this section, we discuss the three key reasons why we chose Amazon Kendra. Easy aggregation of knowledge As mentioned in the previous section, knowledge, even when it exists, tends to be scattered across multiple media. In our case, it was scattered across our internal wiki and various document files. Amazon Kendra provides powerful connectors for this situation. We can easily import documents from a variety of media, including groupware, wikis, Microsoft PowerPoint files, PDFs, and more, without any hassle. This means that employees don’t have to change the way they store knowledge in order to share it. Although knowledge aggregation can be achieved temporarily, it’s very costly to maintain; the ability to automate this was a very desirable factor for us. Great searchability There are a lot of groupware and wikis out there that excel at information input. However, they often have weaknesses in information output (searchability). This is especially true for Japanese search. For example, in English, word-level matching provides a reasonable level of searchability. In Japanese, however, word extraction is more difficult, and there are cases where matching is done by splitting text into units of a fixed number of characters. If a search for “Tokyo-to (東京都)” is split into two-character units, it matches both “Tokyo (東京)” and “Kyoto (京都),” making it difficult to find the knowledge you are looking for. Amazon Kendra offers great searchability through machine learning. In addition to traditional keyword searches such as “technology trends,” natural language searches such as “I want information on new technology initiatives” can greatly enhance the user experience. The ability to search appropriately for collected information is the second reason we chose Amazon Kendra. Low cost of ownership IT tools that specialize in knowledge aggregation and retrieval are called enterprise search systems. One problem with implementing these systems is the cost. For an organization with several hundred employees, operating costs can exceed 10 million yen per year. This is not a cheap way to start a knowledge sharing initiative. Amazon Kendra is offered at a much lower cost than most enterprise search systems. As mentioned earlier, knowledge sharing initiatives are not easy to implement.
We wanted to start small, and Amazon Kendra’s low cost of ownership was a key factor in our decision. In addition, Amazon Kendra’s ease of implementation and flexibility are also great advantages for us. The next section summarizes an example of our implementation. How we implemented the knowledge sharing system Implementation doesn’t require an elaborate development process; it can be done without code by following the Amazon Kendra processing flow. Here are five key points in the implementation process: Data source (accumulating knowledge) – Each department and employee of our company frequently held internal study sessions, and through these activities, knowledge was accumulated in multiple media, such as wikis and various types of storage. At that time, it was easy to review the information from the study sessions later. However, in order to extract knowledge about a specific area or technology, it was necessary to review each medium in detail, which was not very convenient. Connectors (aggregating knowledge) – With the connector functionality in Amazon Kendra, we were able to link knowledge scattered throughout the company into Amazon Kendra and achieve cross-sectional searchability. In addition, the connector is loaded through a restricted account, allowing for a security-conscious implementation. Search engine (finding information) – Because Amazon Kendra has a search page for usability testing, we were able to quickly test the usability of the search engine immediately after loading documents to see what kind of knowledge could be found. This was very helpful in forming a concrete picture of the launch. Search UI (search page for users) – Amazon Kendra has a feature called Experience Builder that exposes the search screen to users. This feature can be implemented with no code, which was very helpful in getting feedback during the test deployment. In addition to Experience Builder, Amazon Kendra also supports Python and React.js API implementations, so we can eventually provide customized search pages to our employees to improve their experience. Analytics (monitoring usage trends) – An enterprise search system is only valuable if a lot of people are using it. Amazon Kendra has the ability to monitor how many searches are being performed and for what terms. We use this feature to track usage trends. We also have some Q&A related to our implementation: What were some of the challenges in gathering internal knowledge? We had to start by collecting the knowledge that each department and employee had, which was not necessarily stored in a place that could be directly connected to Amazon Kendra. How did we benefit from Amazon Kendra? We had tried to share knowledge many times in the past, but had often failed. The reasons were information aggregation, searchability, operational costs, and implementation costs. Amazon Kendra has features that solve these problems, and we successfully launched it within about 3 months of conception. Now we can use Amazon Kendra to tap the collective knowledge of the entire organization to solve tasks that previously relied on the knowledge of specific individuals or departments. How did you evaluate the searchability of the system, and what did you do to improve it? First, we had many employees interact with the system and gathered feedback. One problem that arose at the beginning of the implementation was that some of the indexed information had little value as knowledge. This was because some of the data sources contained information from internal blog posts, for example.
We are continually working to improve the user experience by selecting the right data sources. As mentioned earlier, by using Amazon Kendra, we were able to overcome many implementation hurdles at minimal cost. However, the biggest challenge with this type of tool is the adoption barrier that comes after implementation. The next section provides an example of how we overcame this hurdle. How we overcame the barrier to adoption Have you ever seen a tool that you spent a lot of effort, time, and money implementing become obsolete without widespread use? No matter how good the functionality is at solving problems, it will not be effective if people are not using it. One of the initiatives we took with the launch of Amazon Kendra was to provide a chatbot. In other words, when you ask a question in a chat tool, you get a response with the appropriate knowledge. Because all of our telecommuting employees use a chat tool on a daily basis, a chatbot is a much better fit than having them open a new search screen in their browsers. To implement this chatbot, we use Lambda, a service that allows us to run serverless, event-driven programs. Specifically, the following workflow is implemented: A user posts a question to the chatbot with a mention. The chatbot issues an event to Lambda. A Lambda function detects the event and searches Amazon Kendra for the question. The Lambda function posts the search results to the chat tool. The user views the search results. This process takes only a few seconds and provides a high-quality user experience for knowledge discovery. The majority of employees were exposed to the knowledge sharing mechanism through the chatbot, and there is no doubt that the chatbot contributed to the diffusion of the mechanism. And because there are some areas that can’t be covered by the chatbot alone, we have also asked employees to use the customized search screen in conjunction with the chatbot for an even better user experience. Conclusion In this post, we presented a case study of Amazon Kendra for knowledge sharing and an example of a chatbot implementation using Lambda to propagate the mechanism. We look forward to seeing Amazon Kendra take another leap forward as large-scale language models continue to evolve. If you are interested in trying out Amazon Kendra, check out Enhancing enterprise search with Amazon Kendra. BrainPad can also help you with internal knowledge sharing and document utilization using generative AI. Please contact us for more information. About the Author Dr. Naoki Okada is a Lead Data Scientist at BrainPad Inc. With his cross-functional experience in business, analytics, and engineering, he supports a wide range of clients, from building up DX organizations to leveraging data in unexplored areas." How Earth.com and Provectus implemented their MLOps Infrastructure with Amazon SageMaker _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog How Earth.com and Provectus implemented their MLOps Infrastructure with Amazon SageMaker by Marat Adayev, Dmitrii Evstiukhin, and James Burdon | on 27 JUN 2023 | in Advanced (300), Amazon SageMaker, Customer Solutions This blog post is co-written with Marat Adayev and Dmitrii Evstiukhin from Provectus.
When machine learning (ML) models are deployed into production and employed to drive business decisions, the challenge often lies in the operation and management of multiple models. Machine Learning Operations (MLOps) provides the technical solution to this issue, assisting organizations in managing, monitoring, deploying, and governing their models on a centralized platform. At-scale, real-time image recognition is a complex technical problem that also requires the implementation of MLOps. By enabling effective management of the ML lifecycle, MLOps can help account for the changes in data, models, and concepts that the development of real-time image recognition applications involves. One such application is EarthSnap, an AI-powered image recognition application that enables users to identify all types of plants and animals, using the camera on their smartphone. EarthSnap was developed by Earth.com, a leading online platform for enthusiasts who are passionate about the environment, nature, and science. Earth.com’s leadership team recognized the vast potential of EarthSnap and set out to create an application that utilizes the latest deep learning (DL) architectures for computer vision (CV). However, they faced challenges in managing and scaling their ML system, which consisted of various siloed ML and infrastructure components that had to be maintained manually. They needed a cloud platform and a strategic partner with proven expertise in delivering production-ready AI/ML solutions to quickly bring EarthSnap to the market. That is where Provectus, an AWS Premier Consulting Partner with competencies in Machine Learning, Data & Analytics, and DevOps, stepped in. This post explains how Provectus and Earth.com were able to enhance the AI-powered image recognition capabilities of EarthSnap, reduce engineering heavy lifting, and minimize administrative costs by implementing end-to-end ML pipelines, delivered as part of a managed MLOps platform and managed AI services. Challenges faced in the initial approach The executive team at Earth.com was eager to accelerate the launch of EarthSnap. They swiftly began to work on AI/ML capabilities by building image recognition models using Amazon SageMaker. The following diagram shows the initial image recognition ML workflow, run manually and sequentially. The models developed by Earth.com lived across various notebooks. They required the manual, sequential execution of a series of complex notebooks to process the data and retrain the model. Endpoints had to be deployed manually as well. Earth.com didn’t have an in-house ML engineering team, which made it hard to add new datasets featuring new species, release and improve new models, and scale their disjointed ML system. The ML components for data ingestion, preprocessing, and model training were available as disjointed Python scripts and notebooks, which required a lot of manual heavy lifting on the part of engineers. The initial solution also required the support of a technical third party to release new models swiftly and efficiently. First iteration of the solution Provectus served as a valuable collaborator for Earth.com, playing a crucial role in augmenting the AI-driven image recognition features of EarthSnap. The application’s workflows were automated by implementing end-to-end ML pipelines, which were delivered as part of Provectus’s managed MLOps platform and supported through managed AI services.
A series of project discovery sessions were initiated by Provectus to examine EarthSnap's existing codebase and inventory the notebook scripts, with the goal of reproducing the existing model results. After the model results had been restored, the scattered components of the ML workflow were merged into an automated ML pipeline using Amazon SageMaker Pipelines, a purpose-built CI/CD service for ML. The final pipeline includes the following components (a simplified pipeline skeleton is sketched after this section):
Data QA & versioning – This step, run as a SageMaker Processing job, ingests the source data from Amazon Simple Storage Service (Amazon S3) and prepares the metadata for the next step, containing only valid images (URI and label) that are filtered according to internal rules. It also persists a manifest file to Amazon S3, including all necessary information to recreate that dataset version.
Data preprocessing – This includes multiple steps wrapped as SageMaker Processing jobs and run sequentially. The steps preprocess the images, convert them to RecordIO format, split the images into datasets (full, train, test, and validation), and prepare the images to be consumed by SageMaker training jobs.
Hyperparameter tuning – A SageMaker hyperparameter tuning job takes as input a subset of the training and validation set and runs a series of small training jobs under the hood to determine the best parameters for the full training job.
Full training – A SageMaker training job step launches the training job on the entire dataset, given the best parameters from the hyperparameter tuning step.
Model evaluation – A SageMaker Processing job step runs after the final model has been trained. This step produces an expanded report containing the model's metrics.
Model creation – The SageMaker ModelCreate step wraps the model into the SageMaker model package and pushes it to the SageMaker model registry.
All steps run automatically after the pipeline has been started. The pipeline can be run via any of the following methods:
Automatically using AWS CodeBuild, after new changes are pushed to a primary branch and a new version of the pipeline is upserted (CI)
Automatically using Amazon API Gateway, which can be triggered with a certain API call
Manually in Amazon SageMaker Studio
After the pipeline run (launched using one of the preceding methods), a trained model is produced that is ready to be deployed as a SageMaker endpoint. The model must first be approved by the PM or engineer in the model registry; then the model is automatically rolled out to the stage environment using Amazon EventBridge and tested internally. After the model is confirmed to be working as expected, it's deployed to the production environment (CD). The Provectus solution for EarthSnap can be summarized in the following steps:
Start with fully automated, end-to-end ML pipelines to make it easier for Earth.com to release new models
Build on top of the pipelines to deliver a robust ML infrastructure for the MLOps platform, featuring all components for streamlining AI/ML
Support the solution by providing managed AI services (including ML infrastructure provisioning, maintenance, and cost monitoring and optimization)
Bring EarthSnap to its desired state (mobile application and backend) through a series of engagements, including AI/ML work, data and database operations, and DevOps
After the foundational infrastructure and processes were established, the model was trained and retrained on a larger dataset.
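To make the pipeline structure concrete, here is a minimal, generic skeleton of a SageMaker Pipelines definition in which a processing step feeds a tuning step. It is a sketch under stated assumptions, not the actual Provectus pipeline: the image URI, role, script name, metric regex, and instance types are placeholders, and exact step wiring varies by SageMaker SDK version.

from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.processing import ProcessingOutput, ScriptProcessor
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep, TuningStep

role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder
image_uri = "<training-image-uri>"                     # placeholder

# Data QA / preprocessing as a Processing step.
processor = ScriptProcessor(
    image_uri=image_uri, command=["python3"], role=role,
    instance_type="ml.m5.xlarge", instance_count=1,
)
preprocess = ProcessingStep(
    name="Preprocess",
    processor=processor,
    code="preprocess.py",  # hypothetical preprocessing script
    outputs=[ProcessingOutput(output_name="train", source="/opt/ml/processing/train")],
)

# Hyperparameter tuning on the preprocessed data.
estimator = Estimator(
    image_uri=image_uri, role=role,
    instance_type="ml.p3.2xlarge", instance_count=1,
)
tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="validation:accuracy",
    hyperparameter_ranges={"learning_rate": ContinuousParameter(1e-5, 1e-2)},
    metric_definitions=[{"Name": "validation:accuracy", "Regex": "accuracy=([0-9\\.]+)"}],
    max_jobs=4, max_parallel_jobs=2,
)
tune = TuningStep(
    name="Tune",
    tuner=tuner,
    inputs={"train": TrainingInput(
        preprocess.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri)},
)

# Full training, evaluation (ProcessingStep), and model registration would
# follow as further steps in a real pipeline.
pipeline = Pipeline(name="ImageRecognitionPipeline", steps=[preprocess, tune])
pipeline.upsert(role_arn=role)  # create or update the pipeline definition (CI)
pipeline.start()                # launch a run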
At this point, however, the team encountered an additional issue when attempting to expand the model with even larger datasets. We needed to find a way to restructure the solution architecture, making it more sophisticated and capable of scaling effectively. The following diagram shows the EarthSnap AI/ML architecture. The AI/ML architecture for EarthSnap is designed around a series of AWS services:
The SageMaker pipeline runs using one of the methods mentioned above (CodeBuild, API, or manual), trains the model, and produces artifacts and metrics. As a result, the new version of the model is pushed to the SageMaker model registry.
The model is then reviewed by an internal team (PM/engineer) in the model registry and approved or rejected based on the metrics provided.
Once the model is approved, the model version is automatically deployed to the stage environment using Amazon EventBridge, which tracks model status changes.
The model is deployed to the production environment if it passes all tests in the stage environment.
Final solution To accommodate all necessary sets of labels, the solution for EarthSnap's model required substantial modifications, because incorporating all species within a single model proved to be both costly and inefficient. The plant category was selected first for implementation. A thorough examination of plant data was conducted to organize it into subsets based on shared internal characteristics. The solution for the plant model was redesigned by implementing a multi-model parent/child architecture. This was achieved by training child models on grouped subsets of plant data and training the parent model on a set of data samples from each subcategory. The child models were employed for accurate classification within the internally grouped species, while the parent model was utilized to categorize input plant images into subgroups. This design necessitated distinct training processes for each model, leading to the creation of separate ML pipelines. With this new design, along with the previously established ML/MLOps foundation, the EarthSnap application was able to encompass all essential plant species, resulting in improved efficiency concerning model maintenance and retraining. The following diagram illustrates the logical scheme of parent/child model relations. Upon completing the redesign, the ultimate challenge was to guarantee that the AI solution powering EarthSnap could manage the substantial load generated by a broad user base. Fortunately, the managed AI onboarding process encompasses all essential automation, monitoring, and procedures for transitioning the solution into a production-ready state, eliminating the need for any further capital investment. Results Despite the pressing requirement to develop and implement the AI-driven image recognition features of EarthSnap within a few months, Provectus managed to meet all project requirements within the designated time frame. In just 3 months, Provectus modernized and productionized the ML solution for EarthSnap, ensuring that their mobile application was ready for public release. The modernized infrastructure for ML and MLOps allowed Earth.com to reduce engineering heavy lifting and minimize the administrative costs associated with maintenance and support of EarthSnap. By streamlining processes and implementing best practices for CI/CD and DevOps, Provectus ensured that EarthSnap could achieve better performance while improving its adaptability, resilience, and security.
With a focus on innovation and efficiency, we enabled EarthSnap to function flawlessly, while providing a seamless and user-friendly experience for all users. As part of its managed AI services, Provectus was able to reduce the infrastructure management overhead, establish well-defined SLAs and processes, ensure 24/7 coverage and support, and increase overall infrastructure stability, including production workloads and critical releases. We initiated a series of enhancements to deliver a managed MLOps platform and augment ML engineering. Specifically, it now takes Earth.com minutes, instead of several days, to release new ML models for their AI-powered image recognition application. With assistance from Provectus, Earth.com was able to release its EarthSnap application on the Apple App Store and Google Play ahead of schedule. The early release signified the importance of Provectus's comprehensive work for the client. "I'm incredibly excited to work with Provectus. Words can't describe how great I feel about handing over control of the technical side of business to Provectus. It is a huge relief knowing that I don't have to worry about anything other than developing the business side." – Eric Ralls, Founder and CEO of EarthSnap. The next steps of our cooperation will include adding advanced monitoring components to pipelines, enhancing model retraining, and introducing a human-in-the-loop step. Conclusion The Provectus team hopes that Earth.com will continue to modernize EarthSnap with us. We look forward to powering the company's future expansion, further popularizing natural phenomena, and doing our part to protect our planet. To learn more about the Provectus ML infrastructure and MLOps, visit Machine Learning Infrastructure and watch the webinar for more practical advice. You can also learn more about Provectus managed AI services at Managed AI Services. If you're interested in building a robust infrastructure for ML and MLOps in your organization, apply for the ML Acceleration Program to get started. Provectus helps companies in healthcare and life sciences, retail and CPG, media and entertainment, and manufacturing achieve their objectives through AI. Provectus is an AWS Machine Learning Competency Partner and AI-first transformation consultancy and solutions provider helping design, architect, migrate, or build cloud-native applications on AWS. Contact Provectus | Partner Overview About the Authors Marat Adayev is an ML Solutions Architect at Provectus. Dmitrii Evstiukhin is the Director of Managed Services at Provectus. James Burdon is a Senior Startups Solutions Architect at AWS." How Forethought saves over 66 in costs for generative AI models using Amazon SageMaker _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog How Forethought saves over 66% in costs for generative AI models using Amazon SageMaker by Jad Chamoun, Salina Wu, Dhawalkumar Patel, James Park, and Sunil Padmanabhan | on 13 JUN 2023 | in Amazon SageMaker, Artificial Intelligence, Customer Solutions, Generative AI This post is co-written with Jad Chamoun, Director of Engineering at Forethought Technologies, Inc. and Salina Wu, Senior ML Engineer at Forethought Technologies, Inc.
Forethought is a leading generative AI suite for customer service. At the core of its suite is the innovative SupportGPT™ technology, which uses machine learning to transform the customer support lifecycle—increasing deflection, improving CSAT, and boosting agent productivity. SupportGPT™ leverages state-of-the-art Information Retrieval (IR) systems and large language models (LLMs) to power over 30 million customer interactions annually. SupportGPT's primary use case is enhancing the quality and efficiency of customer support interactions and operations. By using state-of-the-art IR systems powered by embeddings and ranking models, SupportGPT can quickly search for relevant information, delivering accurate and concise answers to customer queries. Forethought uses per-customer fine-tuned models to detect customer intents in order to solve customer interactions. The integration of large language models helps humanize the interaction with automated agents, creating a more engaging and satisfying support experience. SupportGPT also assists customer support agents by offering autocomplete suggestions and crafting appropriate responses to customer tickets that align with the company's voice, based on previous replies. By using advanced language models, agents can address customers' concerns faster and more accurately, resulting in higher customer satisfaction. Additionally, SupportGPT's architecture enables detecting gaps in support knowledge bases, which helps agents provide more accurate information to customers. Once these gaps are identified, SupportGPT can automatically generate articles and other content to fill these knowledge voids, ensuring the support knowledge base remains customer-centric and up to date. In this post, we share how Forethought uses Amazon SageMaker multi-model endpoints in generative AI use cases to save over 66% in cost. Infrastructure challenges To help bring these capabilities to market, Forethought efficiently scales its ML workloads and provides hyper-personalized solutions tailored to each customer's specific use case. This hyper-personalization is achieved through fine-tuning embedding models and classifiers on customer data, ensuring accurate information retrieval results and domain knowledge that caters to each client's unique needs. The customized autocomplete models are also fine-tuned on customer data to further enhance the accuracy and relevance of the responses generated. One of the significant challenges in AI processing is the efficient utilization of hardware resources such as GPUs. To tackle this challenge, Forethought uses SageMaker multi-model endpoints (MMEs) to run multiple AI models on a single inference endpoint and scale. Because the hyper-personalization of models requires unique models to be trained and deployed, the number of models scales linearly with the number of clients, which can become costly. To achieve the right balance of performance for real-time inference and cost, Forethought chose to use SageMaker MMEs, which support GPU acceleration. SageMaker MMEs enable Forethought to deliver high-performance, scalable, and cost-effective solutions with subsecond latency, addressing multiple customer support scenarios at scale. SageMaker and Forethought SageMaker is a fully managed service that provides developers and data scientists the ability to build, train, and deploy ML models quickly. SageMaker MMEs provide a scalable and cost-effective solution for deploying a large number of models for real-time inference.
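For readers unfamiliar with MMEs, here is a minimal sketch of how a GPU-capable multi-model endpoint can be created with the low-level boto3 API. This is illustrative only, not Forethought's actual setup; the names, image URI, and S3 prefix are placeholders. The key detail is Mode="MultiModel", which points the serving container at an S3 prefix instead of a single model artifact.

import boto3

sm = boto3.client("sagemaker")

# One container serves every model artifact stored under the S3 prefix.
sm.create_model(
    ModelName="demo-mme-model",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    PrimaryContainer={
        "Image": "<serving-image-uri>",            # placeholder (e.g., a Triton image)
        "Mode": "MultiModel",
        "ModelDataUrl": "s3://my-bucket/models/",  # prefix holding many model.tar.gz files
    },
)

sm.create_endpoint_config(
    EndpointConfigName="demo-mme-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "demo-mme-model",
        "InstanceType": "ml.g4dn.xlarge",
        "InitialInstanceCount": 1,
    }],
)

sm.create_endpoint(EndpointName="demo-mme", EndpointConfigName="demo-mme-config")

# Adding another model later is just an S3 upload under the same prefix:
#   aws s3 cp new-model.tar.gz s3://my-bucket/models/new-model.tar.gz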
MMEs use a shared serving container and a fleet of resources that can use accelerated instances such as GPUs to host all of your models. This reduces hosting costs by maximizing endpoint utilization compared to using single-model endpoints. It also reduces deployment overhead because SageMaker manages loading and unloading models in memory and scaling them based on the endpoint's traffic patterns. In addition, all SageMaker real-time endpoints benefit from built-in capabilities to manage and monitor models, such as shadow variants, auto scaling, and native integration with Amazon CloudWatch (for more information, refer to CloudWatch Metrics for Multi-Model Endpoint Deployments). As Forethought grew to host hundreds of models that also required GPU resources, we saw an opportunity to create a more cost-effective, reliable, and manageable architecture through SageMaker MMEs. Prior to migrating to SageMaker MMEs, our models were deployed on Kubernetes on Amazon Elastic Kubernetes Service (Amazon EKS). Although Amazon EKS provided management capabilities, it was immediately apparent that we were managing infrastructure that wasn't specifically tailored for inference. We had to manage model inference on Amazon EKS ourselves, which was a burden on engineering efficiency. For example, in order to share expensive GPU resources between multiple models, we were responsible for allocating rigid memory fractions to models at deployment time. We wanted to address the following key problems with our existing infrastructure:
High cost – To ensure that each model had enough resources, we would be very conservative in how many models to fit per instance. This resulted in much higher costs for model hosting than necessary.
Low reliability – Despite being conservative in our memory allocation, not all models have the same requirements, and occasionally some models would throw out of memory (OOM) errors.
Inefficient management – We had to manage different deployment manifests for each type of model (such as classifiers, embeddings, and autocomplete), which was time-consuming and error-prone. We also had to maintain the logic to determine the memory allocation for different model types.
Ultimately, we needed an inference platform to take on the heavy lifting of managing our models at runtime to improve the cost, reliability, and management of serving our models. SageMaker MMEs allowed us to address these needs. Through their smart and dynamic model loading and unloading, and their scaling capabilities, SageMaker MMEs provided a significantly less expensive and more reliable solution for hosting our models. We are now able to fit many more models per instance and don't have to worry about OOM errors because SageMaker MMEs handle loading and unloading models dynamically. In addition, deployments are now as simple as calling Boto3 SageMaker APIs and attaching the proper auto scaling policies. The following diagram illustrates our legacy architecture. To begin our migration to SageMaker MMEs, we identified the best use cases for MMEs and which of our models would benefit the most from this change.
MMEs are best used for the following:
Models that are expected to have low latency but can withstand a cold start time (when a model is first loaded in)
Models that are called often and consistently
Models that need partial GPU resources
Models that share common requirements and inference logic
We identified our embeddings models and autocomplete language models as the best candidates for our migration. To organize these models under MMEs, we would create one MME per model type, or task: one for our embeddings models, and another for our autocomplete language models. We already had an API layer on top of our models for model management and inference. Our task at hand was to rework how this API deployed and handled inference on models under the hood with SageMaker, with minimal changes to how clients and product teams interacted with the API. We also needed to package our models and custom inference logic to be compatible with NVIDIA Triton Inference Server using SageMaker MMEs. The following diagram illustrates our new architecture. Custom inference logic Before migrating to SageMaker, Forethought's custom inference code (preprocessing and postprocessing) ran in the API layer when a model was invoked. The objective was to transfer this functionality to the model itself to clarify the separation of responsibilities, modularize and simplify the code, and reduce the load on the API. Embeddings Forethought's embedding models consist of two PyTorch model artifacts, and the inference request determines which model to call. Each model requires preprocessed text as input. The main challenges were integrating a preprocessing step and accommodating two model artifacts per model definition. To address the need for multiple steps in the inference logic, Forethought developed a Triton ensemble model with two steps: a Python backend preprocessing process and a PyTorch backend model call. Ensemble models allow for defining and ordering steps in the inference logic, with each step represented by a Triton model of any backend type. To ensure compatibility with the Triton PyTorch backend, the existing model artifacts were converted to TorchScript format. Separate Triton models were created for each model definition, and Forethought's API layer was responsible for determining the appropriate TargetModel to invoke based on the incoming request. Autocomplete The autocomplete models (sequence to sequence) presented a distinct set of requirements. Specifically, we needed to enable the capability to loop through multiple model calls and cache substantial inputs for each call, all while maintaining low latency. Additionally, these models necessitated both preprocessing and postprocessing steps. To address these requirements and achieve the desired flexibility, Forethought developed autocomplete MME models utilizing the Triton Python backend, which offers the advantage of writing the model as Python code.
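As an illustration of the interface the Triton Python backend expects, the following is a minimal, generic model.py skeleton. It is not Forethought's production code; the tensor names and the echo logic are placeholders standing in for real preprocessing, model calls, and postprocessing.

# model.py — minimal Triton Python backend skeleton (illustrative only)
import json

import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # args["model_config"] is the JSON config from config.pbtxt; heavyweight
        # resources (tokenizers, weights) are loaded once here.
        self.model_config = json.loads(args["model_config"])

    def execute(self, requests):
        # Triton batches requests; return exactly one response per request.
        responses = []
        for request in requests:
            text = pb_utils.get_input_tensor_by_name(request, "INPUT_TEXT")
            # Placeholder "inference": report the length of each input string.
            # A real model would run preprocessing, the model call, and
            # postprocessing here.
            lengths = np.array(
                [[len(t)] for t in text.as_numpy().reshape(-1)], dtype=np.int32
            )
            out = pb_utils.Tensor("OUTPUT_LEN", lengths)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses

    def finalize(self):
        # Called once when the model is unloaded from the instance.
        pass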
Benchmarking After the Triton model shapes were determined, we deployed models to staging endpoints and conducted resource and performance benchmarking. Our main goal was to determine the latency for cold start vs. in-memory models, and how latency was affected by request size and concurrency. We also wanted to know how many models could fit on each instance, how many models would cause the instances to scale up with our auto scaling policy, and how quickly the scale-up would happen. In keeping with the instance types we were already using, we did our benchmarking with ml.g4dn.xlarge and ml.g4dn.2xlarge instances. Results The following table summarizes our results.

Request Size | Cold Start Latency | Cached Inference Latency | Concurrent Latency (5 requests)
Small (30 tokens) | 12.7 seconds | 0.03 seconds | 0.12 seconds
Medium (250 tokens) | 12.7 seconds | 0.05 seconds | 0.12 seconds
Large (550 tokens) | 12.7 seconds | 0.13 seconds | 0.12 seconds

Noticeably, the latency for cold start requests is significantly higher than the latency for cached inference requests. This is because the model needs to be loaded from disk or Amazon Simple Storage Service (Amazon S3) when a cold start request is made. The latency for concurrent requests is also higher than the latency for single requests. This is because the model needs to be shared between concurrent requests, which can lead to contention. The following table compares the latency of the legacy models and the SageMaker models.

Request Size | Legacy Models | SageMaker Models
Small (30 tokens) | 0.74 seconds | 0.24 seconds
Medium (250 tokens) | 0.74 seconds | 0.24 seconds
Large (550 tokens) | 0.80 seconds | 0.32 seconds

Overall, the SageMaker models are a better choice for hosting autocomplete models than the legacy models. They offer lower latency, scalability, reliability, and security. Resource usage In our quest to determine the optimal number of models that could fit on each instance, we conducted a series of tests. Our experiment involved loading models into our endpoints using an ml.g4dn.xlarge instance type, without any auto scaling policy. These particular instances offer 15.5 GB of GPU memory, and we aimed to achieve approximately 80% GPU memory usage per instance. Considering the size of each encoder model artifact, we managed to find the optimal number of Triton encoders to load on an instance to reach our targeted GPU memory usage. Furthermore, given that each of our embeddings models corresponds to two Triton encoder models, we were able to house a set number of embeddings models per instance. As a result, we calculated the total number of instances required to serve all our embeddings models. This experimentation has been crucial in optimizing our resource usage and enhancing the efficiency of our models. We conducted similar benchmarking for our autocomplete models. These models were around 292.0 MB each. As we tested how many models would fit on a single ml.g4dn.xlarge instance, we noticed that we were able to fit only four models before our instance started unloading models, despite the models having a small size. Our main concerns were:
The cause of the CPU memory utilization spiking
The cause of models getting unloaded when we tried to load in one more model, instead of just the least recently used (LRU) model
We were able to pinpoint the root cause of the memory utilization spike: initializing our CUDA runtime environment in our Python model, which was necessary to move our models and data on and off the GPU device. CUDA loads many external dependencies into CPU memory when the runtime is initialized. Because the Triton PyTorch backend handles and abstracts away moving data on and off the GPU device, we didn't run into this issue for our embedding models. To address this, we tried using ml.g4dn.2xlarge instances, which had the same amount of GPU memory but twice as much CPU memory. In addition, we added several minor optimizations in our Python backend code, including deleting tensors after use, emptying the cache, disabling gradients, and garbage collecting.
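These optimizations follow a common PyTorch pattern. The following is a minimal, generic illustration (not Forethought's actual backend code):

import gc

import torch


def predict(model: torch.nn.Module, batch: torch.Tensor) -> torch.Tensor:
    # Disable gradient tracking: inference doesn't need autograd bookkeeping.
    with torch.no_grad():
        inputs = batch.to("cuda")  # assumes a CUDA device is available
        outputs = model(inputs).cpu()

    # Delete GPU tensors as soon as they're no longer needed...
    del inputs
    # ...return cached blocks to the CUDA allocator...
    torch.cuda.empty_cache()
    # ...and prompt Python to collect anything still holding GPU memory.
    gc.collect()

    return outputs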
With the larger instance type, we were able to fit 10 models per instance, and the CPU and GPU memory utilization became much more aligned. The following diagram illustrates this architecture. Auto scaling We attached auto scaling policies to both our embeddings and autocomplete MMEs. Our policy for our embeddings endpoint targeted 80% average GPU memory utilization using custom metrics. Our autocomplete models saw a pattern of high traffic during business hours and minimal traffic overnight. Because of this, we created an auto scaling policy based on InvocationsPerInstance so that we could scale according to the traffic patterns, saving on cost without sacrificing reliability. Based on our resource usage benchmarking, we configured our scaling policies with a target of 225 InvocationsPerInstance. Deploy logic and pipeline Creating an MME on SageMaker is straightforward and similar to creating any other endpoint on SageMaker. After the endpoint is created, adding additional models to the endpoint is as simple as moving the model artifact to the S3 path that the endpoint targets; at this point, we can make inference requests to our new model. We defined logic that would take in model metadata, format the endpoint name deterministically based on the metadata, and check whether the endpoint existed. If it didn't, we created the endpoint and added the Triton model artifact to the S3 path for the endpoint (also deterministically formatted). For example, if the model metadata indicated that it is an autocomplete model, it would create an endpoint for autocomplete models and an associated S3 path for autocomplete model artifacts. If the endpoint existed, we would copy the model artifact to the S3 path. Now that we had our model shapes for our MME models and the functionality for deploying our models to MMEs, we needed a way to automate the deployment. Our users must specify which model they want to deploy; we handle packaging and deployment of the model. The custom inference code packaged with the model is versioned and pushed to Amazon S3; in the packaging step, we pull the inference code according to the version specified (or the latest version) and use YAML files that indicate the file structures of the Triton models. One requirement for us was that all of our MME models would be loaded into memory to avoid any cold start latency during production inference requests to load in models. To achieve this, we provision enough resources to fit all our models (according to the preceding benchmarking) and call every model in our MME at an hourly cadence. The following diagram illustrates the model deployment pipeline. The following diagram illustrates the model warm-up pipeline. Model invocation Our existing API layer provides an abstraction for callers to make inference on all of our ML models. This meant we only had to add functionality to the API layer to call the SageMaker MME with the correct target model depending on the inference request, without any changes to the calling code. The SageMaker inference code takes the inference request, formats the Triton inputs defined in our Triton models, and invokes the MMEs using Boto3.
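The essential detail in invoking an MME is the TargetModel parameter, which names the artifact under the endpoint's S3 prefix. Here is a minimal boto3 sketch with placeholder endpoint and artifact names:

import json

import boto3

smr = boto3.client("sagemaker-runtime")

response = smr.invoke_endpoint(
    EndpointName="demo-mme",                       # placeholder endpoint name
    TargetModel="customer-123-embeddings.tar.gz",  # artifact under the MME's S3 prefix
    ContentType="application/json",
    Body=json.dumps({"text": "How do I reset my password?"}),
)

result = json.loads(response["Body"].read())
print(result)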
Cost benefits Forethought made significant strides in reducing model hosting costs and mitigating model OOM errors, thanks to the migration to SageMaker MMEs. Before this change, our models ran on ml.g4dn.xlarge instances in Amazon EKS. With the transition to MMEs, we discovered that each instance could house 12 embeddings models while achieving 80% GPU memory utilization. This led to a significant decline in our monthly expenses. To put it in perspective, we realized a cost saving of up to 80%. Moreover, to manage higher traffic, we considered scaling up the replicas. Assuming a scenario where we employ three replicas, we found that our cost savings would still be substantial even under these conditions, hovering around 43%. The journey with SageMaker MMEs has proven financially beneficial, reducing our expenses while ensuring optimal model performance. Previously, our autocomplete language models were deployed in Amazon EKS, necessitating a varying number of ml.g4dn.xlarge instances based on the memory allocation per model. This resulted in a considerable monthly cost. However, with our recent migration to SageMaker MMEs, we've been able to reduce these costs substantially. We now host all our models on ml.g4dn.2xlarge instances, giving us the ability to pack models more efficiently. This has significantly trimmed our monthly expenses, and we've now realized cost savings in the 66–74% range. This move has demonstrated how efficient resource utilization can lead to significant financial savings using SageMaker MMEs. Conclusion In this post, we reviewed how Forethought uses SageMaker multi-model endpoints to decrease cost for real-time inference. SageMaker takes on the undifferentiated heavy lifting, so Forethought can increase engineering efficiency. It also allows Forethought to dramatically lower the cost of real-time inference while maintaining the performance needed for business-critical operations. By doing so, Forethought is able to provide a differentiated offering for its customers using hyper-personalized models. Use SageMaker MMEs to host your models at scale and reduce hosting costs by improving endpoint utilization. They also reduce deployment overhead because Amazon SageMaker manages loading models in memory and scaling them based on the traffic patterns to your endpoint. You can find code samples for hosting multiple models using SageMaker MMEs on GitHub. About the Authors Jad Chamoun is a Director of Core Engineering at Forethought. His team focuses on platform engineering covering Data Engineering, Machine Learning Infrastructure, and Cloud Infrastructure. You can find him on LinkedIn. Salina Wu is a Sr. Machine Learning Infrastructure Engineer at Forethought.ai. She works closely with the Machine Learning team to build and maintain their end-to-end training, serving, and data infrastructures. She is particularly motivated by introducing new ways to improve efficiency and reduce cost across the ML space. When not at work, Salina enjoys surfing, pottery, and being in nature. James Park is a Solutions Architect at Amazon Web Services. He works with Amazon.com to design, build, and deploy technology solutions on AWS, and has a particular interest in AI and machine learning. In his spare time he enjoys seeking out new cultures, new experiences, and staying up to date with the latest technology trends. You can find him on LinkedIn. Sunil Padmanabhan is a Startup Solutions Architect at AWS. As a former startup founder and CTO, he is passionate about machine learning and focuses on helping startups leverage AI/ML for their business outcomes and design and deploy ML/AI solutions at scale. Dhawal Patel is a Principal Machine Learning Architect at AWS. He has worked with organizations ranging from large enterprises to mid-sized startups on problems related to distributed computing and artificial intelligence.
He focuses on deep learning, including the NLP and computer vision domains. He helps customers achieve high-performance model inference on SageMaker." How Generative AI will transform manufacturing _ AWS for Industries.txt,"AWS for Industries How Generative AI will transform manufacturing by Scot Wlodarczak | on 20 JUN 2023 | in *Post Types, Amazon Machine Learning, Amazon SageMaker, Artificial Intelligence, Generative AI, Industries, Manufacturing, Thought Leadership Introduction Artificial intelligence (AI) and machine learning (ML) have been a focus for Amazon for decades, and we've worked to democratize ML and make it accessible to everyone who wants to use it, including more than 100,000 customers of all sizes and industries. This includes manufacturing companies who are looking beyond AI/ML to generative AI at the prospect of delivering even more exciting results. Generative AI is a type of AI that can create new content and ideas, including conversations, stories, images, videos, and music. It is powered by large models that are pre-trained on vast amounts of data, commonly referred to as foundation models (FMs). With generative AI, manufacturers have the potential to reinvent their businesses and disrupt their industry. The potential of generative AI is incredibly exciting. But we are still in the very early days. Companies have been working on FMs for years, but how can manufacturers take advantage of what is out there today to transform their business, and where should they start? A study by IDC titled The State of Manufacturing and Generative AI Adoption in Manufacturing Organizations¹ revealed that for manufacturers, the top business areas where survey respondents felt generative AI could make the most impact in the next 18 months were manufacturing (production) and product development and design, followed by sales and supply chain. In this blog we will focus on generative AI's potential to create radical new product designs, drive unprecedented levels of manufacturing productivity, and optimize supply chain applications. Innovate with Generative AI in Product Engineering The first area we will explore is product engineering. AI and ML are already being used alongside high-performance computing to enhance the design of discrete product components and ultimately offer new and innovative designs that humans don't typically ideate. These technologies provide manufacturers with a way to more quickly and effectively explore various design options to find the most efficient solutions with minimized cost, mass, materials, engineering design time, and even production time. One example is from Autodesk – a leader in 3D design, engineering, and entertainment software. They have been producing software for the architecture, construction, engineering, manufacturing, and media and entertainment industries since 1982. To speed and streamline development, Autodesk has been steadily expanding its use of Amazon Web Services (AWS) and decreasing its data center footprint.
Autodesk offers generative design capabilities – a generative AI-like service – in its Fusion 360 software to help product designers create innovative new designs within parameters specified by the user, including materials, manufacturing constraints, safety factors, and other variables. At the Hannover Messe tradeshow in Germany in April 2023, Autodesk gave a presentation on a mobility start-up that improved its processes for creating new mobility solutions to shorten lead times while rapidly exploring new mobility design concepts and controlling engineering and manufacturing costs. The start-up adopted Autodesk Fusion 360, which leverages Amazon SageMaker to enable AI-enhanced generative design and additive manufacturing. It was able to reduce the time-to-market for new designs from 3.5 years to 6 months, an 86% improvement. Beyond extensive design potential, with generative AI, engineers can analyze large data sets in an effort to help improve safety, create simulation datasets, explore how a part might be manufactured or machined faster, and bring their products to market more quickly. These data sets could become the source information, or FMs, upon which a manufacturer's generative AI strategy can be built. This allows the data to remain private and secure, while also allowing manufacturers to reap the benefits of this technology. In April 2023, AWS announced Amazon Bedrock, a new managed service that makes FMs from AI21 Labs, Anthropic, Stability AI, and Amazon accessible via an API. Amazon Bedrock is the easiest way for customers to build and scale generative AI-based applications using FMs, democratizing access for all builders. One of the most important capabilities of Amazon Bedrock is how easy it is to customize a model. Customers simply point Bedrock at labeled examples in Amazon Simple Storage Service (Amazon S3), and the service can fine-tune the model for a particular task without having to annotate large volumes of data (as few as 20 examples are enough). Imagine a content marketing manager who works at a leading fashion retailer and needs to develop fresh, targeted ad and campaign copy for an upcoming new line of handbags. To do this, they provide Bedrock a few labeled examples of their best performing taglines from past campaigns, along with the associated product descriptions. Bedrock makes a separate copy of the base foundational model that is accessible only to the customer and trains this private copy of the model. After training, Bedrock will automatically start generating effective social media, display ad, and web copy for the new handbags. None of the customer's data is used to train the original base models. Customers can configure their Amazon Virtual Private Cloud (Amazon VPC) settings to access Bedrock APIs and provide model fine-tuning data in a secure manner, and all data is encrypted. Customer data is always encrypted in transit (TLS 1.2) and at rest through service-managed keys. Optimize Production with Generative AI Manufacturers are often hesitant to adopt and implement new technology in production environments due to the high risk of production loss and the associated costs. In factory production, it is early days for generative AI use cases, but we are certainly hearing from factory leaders already about how generative AI might help optimize overall equipment effectiveness (OEE).
As generative AI needs large amounts of data to create FMs, manufacturers have a unique industry challenge of gaining access to their factory data and moving it into the cloud to begin their generative AI journey. Step one for many manufacturers is adopting an industrial data strategy. Data is the foundation of any digital transformation effort, and having an industrial data strategy is critical to enable business teams to easily and effectively leverage that data to address a variety of use cases across an organization. Why? Manufacturers have often struggled with disconnected and siloed data sources that were not designed to work together, making it challenging to gain economical, secure, structured, and easy access to high-quality datasets for FMs. AWS addresses many of these challenges with Industrial Data Fabric solutions. Companies like Georgia-Pacific (GP) have used AI and ML for years to optimize quality in paper production, for example. GP improved profits and maximized plant resources by using AWS data analysis technologies to predict how fast converting lines should run to avoid paper tearing in production. But how can generative AI help manufacturers with production? In conversations with business and production leaders, one issue that pops up again and again is that attrition continues to erode the knowledge and experience on their factory floors. Experienced workers are retiring, and their decades of knowledge are often lost with them. These are the kind of workers who can hear when a machine bearing needs grease, or feel when a machine is vibrating excessively and not running properly. The challenge is how to equip less experienced operators with the knowledge required to keep complex production operations running efficiently, and how to maximize production, quality, and machine availability. Manufacturers that are willing to digitize and capture historical machine maintenance data, repair data, equipment manuals, production data, and potentially even other manufacturers' data can use that information to augment an effective FM and influence real change. As an example, take a machine that continues to break down, causing unplanned downtime. What if production engineers could use generative AI to query possible failure causes, and get high-probability suggestions on equipment input adjustments, maintenance required, or even spare parts to purchase that would mitigate downtime? In the absence of experienced engineers and operators, generative AI holds real promise in production environments to maximize OEE. Optimize Supply Chains with Generative AI AWS offers multiple services to address supply chain use cases. AWS Supply Chain is an application that helps businesses increase supply chain visibility to make faster, more informed decisions that mitigate risks, save costs, and improve customer experiences. AWS Supply Chain automatically combines and analyzes data across multiple supply chain systems so businesses can observe their operations in real time, find trends more quickly, and generate more accurate demand forecasts that ensure adequate inventory to meet customer expectations. Based on nearly 30 years of Amazon.com logistics network experience, AWS Supply Chain improves supply chain resiliency by providing a unified data lake, machine learning-powered insights, recommended actions, and in-application collaboration capabilities.
Given the uncertainty in supply chains due to the pandemic, regional conflicts, raw material shortages, and even natural disasters, manufacturers' supply chains continue to be an area of concern, if not outright angst. The sourcing function is fertile ground where generative AI could add value. Let's say a manufacturer runs out of custom machined components and is looking for alternate vendors to deliver some custom machining work. Generative AI could be used to identify alternate vendors with the proper capabilities to provide the specialty work required. Another application might be substituting generative AI, where possible, for routine human interactions – getting questions answered that formerly would have taken hours or days of gathering the right data and then making sense of it. Generative AI could also serve as a supply chain control tower by proactively assessing risk related to shipping challenges, natural disasters, strikes, or other geopolitical events. This would allow the supply chain function to properly allocate scarce resources to mitigate disruptions. Conclusion We are clearly at the beginning of a new and exciting foray into generative AI, and I've just scratched the surface of some potential applications in the manufacturing industry – from product design to production and supply chain. AWS announced some exciting new offerings in recent months:
Amazon Bedrock, the easiest way for customers to build and scale generative AI-based applications using FMs, democratizing access for all builders
Amazon Titan FMs, which allow customers to innovate responsibly with high-performing foundation models (FMs) from Amazon
New, network-optimized Amazon EC2 Trn1n instances, which offer 1,600 Gbps of network bandwidth and are designed to deliver 20% higher performance over Trn1 for large, network-intensive models
Amazon EC2 Inf2 instances powered by AWS Inferentia2, which are optimized specifically for large-scale generative AI applications with models containing hundreds of billions of parameters
Amazon CodeWhisperer, an AI coding companion that uses an FM under the hood to radically improve developer productivity by generating code suggestions in real time based on developers' comments in natural language and prior code in their Integrated Development Environment (IDE)
We are excited about what our customers will build with generative AI on AWS. Start exploring our services and finding out where generative AI could benefit your organization. Our mission is to make it possible for developers of all skill levels and for organizations of all sizes to innovate using generative AI. This is just the beginning of what we believe will be the next wave of ML, powering new possibilities in manufacturing. ¹ IDC, The State of Manufacturing and Generative AI Adoption in Manufacturing Organizations, 1Q23, r:# EUR250654623, May 2023 TAGS: AWS for Industrial, Industrial, Manufacturing Scot Wlodarczak Scot joined AWS in July 2018, where he now manages the manufacturing industry marketing efforts. Scot worked previously at Cisco and Rockwell Automation, where he held roles as Industrial Marketing Manager and Regional Marketing Leader. Scot has focused on marketing to industrial customers on their digital transformation journey, and on bridging the gap between IT and operations. He has experience in automation across a wide range of industries. Scot holds a Mechanical Engineering degree from SUNY Buffalo and an MBA from Colorado University. He lives in Colorado.
" How Imperva uses Amazon Athena for machine learning botnets detection _ AWS Big Data Blog.txt,"AWS Big Data Blog How Imperva uses Amazon Athena for machine learning botnets detection by Ori Nakar and Yonatan Dolan | on 12 MAY 2021 | in Amazon Athena, Amazon SageMaker, Analytics, Artificial Intelligence This is a guest post by Ori Nakar, Principal Engineer at Imperva. In their own words, "Imperva is a large cyber security company and an AWS Partner Network (APN) Advanced Technology Partner, who protects web applications and data assets. Imperva protects over 6,200 enterprises worldwide and many of them use Imperva Web Application Firewall (WAF) solutions to secure their public websites and other web assets." In this post, we explain how Imperva used Amazon Athena, Amazon SageMaker, and Amazon QuickSight to develop a machine learning (ML) clustering algorithm that can efficiently detect botnets attacking your infrastructure. Athena is an interactive query service that makes it easy to analyze data in Amazon Simple Storage Service (Amazon S3) using standard SQL. Athena is serverless, easy to use, and makes it easy for anyone with SQL skills to quickly analyze large-scale datasets in multiple Regions. Imperva Cloud WAF protects hundreds of thousands of websites and blocks billions of security events every day. Security events are correlated online into security narratives, and an innovative offline process enables you to detect botnets. Events, narratives, and many other security data types are stored in Imperva's Threat Research multi-Region data lake. Botnets and data flow Botnets are internet-connected devices that perform repetitive tasks, such as Distributed Denial of Service (DDoS) attacks. In many cases, these consumer devices are infected with malicious malware that is controlled by an external entity, often without the owner's knowledge. Imperva botnet detection allows you to enhance your website's security, get detailed information on botnet attacks, and come up with ways to mitigate their impact. The following is a visualization of a botnet attack map. Each botnet can be composed of tens to thousands of IPs, one or more source locations, and one or more target locations, performing an attack such as DDoS, vulnerability scanning, and others. The following diagram illustrates Imperva's flow to detect botnets. The remainder of this post dives into the process of developing the botnet detection capability and describes the AWS services Imperva uses to enable and accelerate it. Botnet detection development process Imperva's development process has three main steps: query, detect, and evaluate. The following diagram summarizes these steps. Query Imperva stores the narrative data in Imperva's Threat Research data lake. Data is continuously added as objects to Amazon S3 and stored in multiple Regions due to regulation and data locality requirements. For more information about querying data stored in multiple Regions using Athena, see Running SQL on Amazon Athena to Analyze Big Data Quickly and Across Regions. One of the tables in the data lake is the narratives table, which has the following columns:

Column | Description
narrative_id | ID of a detected narrative.
ip | Each narrative has one or more IPs.
site_id | ID of the attacked site. A narrative has a single attacked site.
The following screenshot is a sample of the data being queried. Finding correlations between attacking IPs of the same website generates our initial dataset, which allows us to home in on those that are botnets. The following query in Athena generates that initial list. The query first finds the narratives and sites per IP and stores them in arrays. Next, the query finds all the pairs using a SELF JOIN (L for left, R for right). For each IP pair, it calculates the number of common narratives and the number of commonly attacked sites; the join condition keeps only pairs with at least one common narrative. See the following code:

-------------------- STEP 1 --------------------
WITH nar_ips AS (
  SELECT ip,
         ARRAY_AGG(narrative_id) AS ids,
         ARRAY_AGG(site_id) AS sites
  FROM narratives
  GROUP BY 1)
-------------------- STEP 2 --------------------
SELECT l.ip AS ip_1,
       r.ip AS ip_2,
       CARDINALITY(ARRAY_INTERSECT(l.ids, r.ids)) AS narratives,
       CARDINALITY(ARRAY_INTERSECT(l.sites, r.sites)) AS sites
FROM nar_ips AS l
INNER JOIN nar_ips AS r
  ON l.ip < r.ip AND ARRAYS_OVERLAP(l.ids, r.ids)

The following screenshot shows a query result of IP pairs that attacked the same websites and the number of attacks that they performed together. Imperva uses Create Table as Select (CTAS) to store the query results in Amazon S3 using a CSV file format that the SageMaker training job uses in the next step. Use the following query:

CREATE TABLE [temp_table_name]
WITH (format='TEXTFILE',
      bucketed_by=ARRAY['ip_1'],
      bucket_count=5,
      external_location='s3://my-bucket/my-temp-location',
      field_delimiter=',')
AS [SQL]

The TEXTFILE format saves the data compressed as gzip, and the bucketing information controls the number of objects and therefore their sizes. Athena CTAS supports multiple types of data formats, and it's recommended to evaluate which file format is best suited for your use case. The following screenshot shows objects created in the S3 data lake by Athena. Detect: Botnets clustering The next step in Imperva's process is to cluster the IP pairs from the previous step into botnets. This includes steps for input, model training, and output. Input The first step is to calculate the distance between each IP pair in a narrative. This raises a couple of implementation options. The first is to use Athena, with either built-in analytic functions such as cosine_similarity or a custom UDF, to perform the calculation. For Imperva's needs, we decided to use SageMaker and implement the distance calculation in Python. For other implementations, you should experiment with your data and decide which big data processing method to use. The following diagram shows some of the characteristics of each method. Each language has different capabilities. For example, Java and Python are much more flexible than SQL, but make the pipeline more complex in terms of development and maintenance. The volume of data consumed and processed by SageMaker directly impacts the time it takes to complete the model training. Model training and output We use the SageMaker Python SDK to create a training job, which is used for the model training. The jobs are created and monitored using simple Python code. When running the training job, you can choose which remote instance type best fits the needs of the job, and use Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances to save costs. Imperva used the Python Scikit-learn base image, which includes all the libraries required; more libraries can be installed if needed.
Logs from the remote instance are captured for monitoring, and when the job is complete, the output is saved to Amazon S3. See the following code:

from sagemaker.sklearn import SKLearn

estimator = SKLearn(entry_point="my_script.py",
                    use_spot_instances=True,
                    hyperparameters={"epsilon": 0.1, "min_samples": 10},
                    instance_type="ml.m4.xlarge")
estimator.fit(inputs={"train": "s3://my_bucket/my_folder"})

The following code shows the details of the script running in the remote instance that was launched. The distance function takes the shared-narrative and shared-site counts for an IP pair and returns a distance between 0 and 1:

def distance(narratives: int, sites: int) -> float:
    return 1 - (1 / sites) - (1 / narratives)

SageMaker copies the data from Amazon S3 and runs the calculation of distance based on all IP pairs. The following code goes over the files and records; the column names follow the order produced by the CTAS query above:

import pandas as pd

distances_arr = []
for file_name in file_names:
    chunks = pd.read_csv(file_name, header=None, chunksize=100_000,
                         names=["ip_1", "ip_2", "narratives", "sites"])
    for df in chunks:
        for _, row in df.iterrows():
            distances_arr.append(distance(row["narratives"], row["sites"]))

The output of that calculation is transformed into a sparse distance matrix, which is fed into a DBSCAN algorithm that detects clusters. DBSCAN is one of the most common clustering algorithms. DBSCAN runs on a given set of points; it groups together points that are closely packed together. See the following code:

from sklearn.cluster import DBSCAN

model = DBSCAN(eps=0.1, min_samples=10, metric="precomputed")
result = model.fit_predict(dist_mat)

When the clustering results are ready, SageMaker writes the results to Amazon S3. The table is created by copying the output of SageMaker to a new table partition in Amazon S3. The results are IP clusters, and a working pipeline is established. The following screenshot shows an example of the clustering algorithm results.
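The post doesn't show how the computed distances become the dist_mat that DBSCAN consumes; one common approach is a SciPy sparse matrix over integer IP indices. The following is a minimal sketch with toy data, an assumption about the implementation rather than Imperva's actual code:

import numpy as np
from scipy.sparse import csr_matrix
from sklearn.cluster import DBSCAN

# Toy pairwise data: (ip_1 index, ip_2 index, distance).
pairs = [(0, 1, 0.05), (1, 2, 0.08), (3, 4, 0.02)]
n_ips = 5

rows = [p[0] for p in pairs] + [p[1] for p in pairs]
cols = [p[1] for p in pairs] + [p[0] for p in pairs]
vals = [p[2] for p in pairs] * 2  # symmetric: d(a, b) == d(b, a)

# Pairs that never attacked together are simply absent; DBSCAN with a
# precomputed sparse matrix treats missing entries as "farther than eps".
dist_mat = csr_matrix((vals, (rows, cols)), shape=(n_ips, n_ips))

model = DBSCAN(eps=0.1, min_samples=2, metric="precomputed")
labels = model.fit_predict(dist_mat)
print(labels)  # cluster ID per IP; -1 marks noise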
The pipeline allows for the evaluation and experimentation phase to begin. This is often the more time-consuming phase, needed to ensure optimal results are achieved. Evaluate: Run various experiments and compare between them The IP clusters (which Imperva refers to as botnets) that were found are written back to a dedicated table in the data lake. You can run the botnet detection process with different parameters within SageMaker. The following are some examples of parameters that you can alter:
Adjust query parameters such as IP hits, site hits, and more
Change the distance function being used
Adjust hyperparameters such as the DBSCAN epsilon and minimum samples
Change the clustering algorithm being used (for example, OPTICS)
After you complete several experiments, the following step is to compare them. Imperva accomplishes this by using Athena to query the results for a set of experiments and joining the detected botnet IP data with various additional tables in the data lake. The following example code walks through joining the detected botnet IP data with newer narratives data:

WITH narratives_ips AS (
  SELECT experiment, botnet, ip, narrative_id
  FROM botnets
  INNER JOIN narratives USING (validation_day, ip))
SELECT experiment, botnet, narrative_id, COUNT(*) AS ips
FROM narratives_ips
GROUP BY 1, 2, 3

For each detected botnet, Imperva finds the relevant narratives and checks if those IPs continue to jointly attack as a group. Visualizing results from multiple experiments allows you to quickly glean their level of effectiveness. Imperva uses QuickSight connected to Athena to query and visualize the experiments table. In the following analysis example, for each experiment, the following information is reviewed:
Number of botnets
Total number of narratives
Average number of IPs in a narrative—this means that the same IPs continued to attack as a group, as predicted
The data is visualized using a pivot table in QuickSight, and additional conditional formatting allows for an easy comparison between experiments. To further analyze the results, it was hypothesized that the number of tools used by the botnet might provide additional insights. These tools could be custom-built code or common libraries such as PhantomJS used in malicious ways. The tool information is added to the pivot table, with the ability to drill down to each experiment to view how many tools were used by each botnet. The tool hypothesis is just one example of the analyses available. It's also possible to drill down further and view the sum of narratives by tool as a donut chart. This visualization can help you quickly see the distribution of tools in a specific experiment. You can perform such analysis on any other field, table, or data source. Imperva uses this method to analyze, compare, and fine-tune experiments in order to improve results. Summary Thousands of customers use the Imperva Web Application Firewall to defend their applications from hacking and denial of service attacks. The most common source of these attacks is botnets: large networks of computers across the internet. To improve Imperva's ability to identify, isolate, and stop these attacks, we developed a simple pipeline that allows us to quickly collect and store network traffic in Amazon S3 and analyze it using Athena to identify patterns. We used SageMaker to quickly experiment with different clustering and ML algorithms that help detect patterns in botnet activity. You can generalize this flow to other ML development pipelines, and use any part of it in a model development process. The following diagram illustrates the generalized process. Running many experiments quickly and easily helps achieve business objectives faster. Running experiments on large volumes of data often requires a lot of time and can be rather expensive. An AWS-based processing pipeline eliminates these challenges by utilizing various AWS services:
Athena to quickly and cost-effectively analyze large amounts of data
SageMaker to experiment with different ML algorithms in a scalable and cost-effective manner
QuickSight to visualize and dive deep into the data in order to extract critical insights that help you fine-tune your ML models
This blog post is based on a demo at re:Invent 2020 by the authors. You can watch that presentation on YouTube. About the Authors Ori Nakar is Principal Engineer at Imperva's Threat Research Group. His main interests are web application and database security, data science, and big data infrastructure. Yonatan Dolan is a Business Development Manager at Amazon Web Services. He is located in Israel and helps customers harness AWS analytical services to leverage data, gain insights, and derive value.
" How KYTC Transformed the States Customer Experience for 4.1 Million Drivers Using Amazon Connect _ Case Study _ AWS.txt,"How KYTC Transformed the State’s Customer Experience for 4.1 Million Drivers Using Amazon Connect

2023

Overview

The Kentucky Transportation Cabinet (KYTC) needed to modernize its contact center solution to better serve the 4.1 million drivers in Kentucky. The previous solution for KYTC was unreliable, with high third-party costs, so KYTC chose to use Amazon Web Services (AWS) to gain stability and build a successful solution. By using Amazon Connect, a service with capabilities to set up a contact center in minutes that can scale to support millions of customers, KYTC improved its customer experience and reduced employee training time, modernizing its contact center in 6 weeks.

6 weeks to modernize its contact center solution
Less than 2 minutes average call time to assist customers, reduced from 3–4 minutes
2 weeks of employee training time, reduced from 4 weeks
Reduced customer hold and waiting time
900,000 chatbot interactions per month

About the Kentucky Transportation Cabinet

The Kentucky Transportation Cabinet oversees the state’s highway, byway, and roadway maintenance, road safety mechanics, and motor vehicle regulation and licensing. The agency serves 4.1 million drivers in Kentucky, providing customer service for vehicle licensing and taxes. The Division of Customer Service under the Department of Vehicle Regulation in KYTC is the sole point of contact for all incoming customers with questions and issues to resolve. Its contact center assists with a wide array of customer inquiries through voice calls, from licensing and taxes to titles for motor vehicles.

Opportunity | How KYTC Used Amazon Connect to Modernize Its Contact Center

It became critical for KYTC to assess its customer service organization when it began facing significant challenges with its previous contact center solution. The voice server of the previous on-premises solution needed to be restarted twice a day during peak volumes, leading to 30 minutes of downtime each time. In addition to the downtime issue, the ticketing portion of the service was stable but required high-cost third-party consulting during the cloud-migration process. This was a significant expense for KYTC, but it knew it needed to make a change to modernize its contact center solution.

KYTC chose to migrate from its previous cloud provider to AWS and to use Amazon Connect because of the opportunity for innovation, as well as the scalability and pay-as-you-go pricing, which freed KYTC from paying heavy licensing fees or for third-party assistance. After planning the design of what it wanted its new system to be capable of, KYTC worked to create it alongside AWS Professional Services, a global team of experts who work with customers to realize desired business outcomes. “The AWS Professional Services team could jump in from our preplanning and build out our current solution, which was amazing,” says Tony Momenpour, system consultant with the Division of Customer Service at KYTC. The modernization of the contact center solution for KYTC took 6 weeks, significantly faster than its previous solution migration.
Solution | Reducing Customer Hold Time and Employee Training Time Using Amazon Connect

KYTC agents use a new desktop when interacting with customers, which has positively impacted training time and the agent experience. This is the Amazon Connect Agent Workspace, empowering agents with a unified experience, including guided step-by-step actions. Whenever customers call in to KYTC, if their questions cannot be answered by the chatbot, they start with a tier-one agent. These agents can answer questions for customers or send them to specialists (tier-two agents). KYTC agents use a machine learning (ML)-based service, Amazon Connect Wisdom, that delivers the information agents need to solve issues in near real time and grants access to 45 wikis that house the information customers might need. If a customer is connected to a tier-two agent, a profile is immediately created using Amazon Connect Customer Profiles (Customer Profiles) so that agents can deliver faster, more personalized customer service. Putting these tools in its agents’ hands has improved employee retention for KYTC. The agency has also reduced the training time for new agents from 4 weeks to 2 weeks because Amazon Connect is simple to use.

KYTC has improved both the customer and the agent experience in its contact center using Amazon Connect. “We can assist more customers in less time,” says Mike Miller, director of the Division of Customer Service at KYTC. “This upgrade brings more modern functionality for customers and customer service professionals.” The agency has reduced the duration of calls with customers because it can address their needs quicker. Prior to the AWS solution, KYTC averaged 3–4 minutes per call, and with the modernized contact center, it averages less than 2 minutes. With between 30,000 and 40,000 calls on average per month, this saves significant time for both agents and customers.

Another new feature implemented within the contact center solution is the phone callback queue. When customers have been on hold for 2 minutes, they are put into the callback queue, meaning they don’t have to wait on hold for 30–60 minutes. Instead, they will get a call when an agent is available. KYTC agents also use Amazon Connect Cases to track, collaborate on, and resolve customer issues quickly. Using this feature, agents can more efficiently manage customer issues requiring multiple interactions and follow-up tasks. KYTC now has more insight into the analytics of its customer calls and chats using Amazon Connect Contact Lens, offering near-real-time conversational analytics and quality management powered by ML. “We can run near-real-time reports without the fear of crashing the contact center like we had under the old solution,” says Miller. “Managers are very appreciative of having near-real-time access to metrics instead of needing to wait a day.” KYTC uses Amazon Connect Agent Workspace to integrate all the new capabilities of its call center in one place for its agents.

By using Amazon Connect, KYTC added chatbot functionality for customers to self-service their issues before needing to call in. The agency has an average of 900,000 chatbot interactions a month, and of those, only around 1,000 end up needing to be passed to a representative. KYTC also implemented a question-and-answer bot that sends customers a text message to direct them to the agency that they need to contact, which ultimately saves time for KYTC agents. “The question-and-answer bot is a really big feature of our AWS solution,” says Toni Woolums, resource management analyst with the Department of Vehicle Registration at KYTC. “Our new chatbot feature is a big enhancement for customers as well. We were blown away by the number of chat interactions in the new solution.”
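The case study does not name the technology behind the chatbot. Chat experiences in Amazon Connect are commonly built with Amazon Lex, and as a rough, hypothetical sketch of that pattern, sending a customer utterance to a Lex V2 bot with boto3 looks like the following (all identifiers are placeholders, not KYTC’s):

import boto3

lex = boto3.client("lexv2-runtime")

# All IDs below are hypothetical placeholders
response = lex.recognize_text(
    botId="ABCDEFGHIJ",
    botAliasId="TSTALIASID",
    localeId="en_US",
    sessionId="caller-12345",
    text="How do I renew my vehicle registration?",
)

# Print the bot's reply messages; if the bot cannot answer,
# the contact flow can escalate to a tier-one agent
for message in response.get("messages", []):
    print(message["content"])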
Outcome | Innovating Using Amazon Connect for Continual Improvement

KYTC plans to continue innovating its contact center solution using AWS and features of Amazon Connect. The agency is working alongside the AWS team to discover new and current features that fit its use case and enhance its contact center service for its customers. “The difference between what we had before and what we have now is like night and day,” says Ron Parritt, assistant director of the customer service center at KYTC. “Using AWS, we’re helping our customers more than before, which is great, because we are a customer service. I can’t say enough good things about AWS.”

AWS Services Used

Amazon Connect: With Amazon Connect, you can set up a contact center in minutes that can scale to support millions of customers.
Amazon Connect Customer Profiles: Equips contact center agents with a more unified view of a customer’s profile with the most up-to-date information, to provide more personalized customer service.
Amazon Connect Wisdom: Delivers agents the information they need, reducing the time spent searching for answers.
Amazon Connect Agent Workspace: A single, intuitive application that provides your agents with all of the tools and step-by-step guidance they need to resolve issues efficiently, improve customer experiences, and onboard faster.
" How Marubeni is optimizing market decisions using AWS machine learning and analytics _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

How Marubeni is optimizing market decisions using AWS machine learning and analytics

by Hernan Figueroa, Pedram Jahangiri, Lino Brescia, Narcisse Zekpa, and Sarah Childers | on 08 MAR 2023 | in Amazon Athena, Amazon SageMaker, AWS Lambda, AWS Step Functions, Customer Solutions, Energy

This post is co-authored with Hernan Figueroa, Sr. Manager Data Science at Marubeni Power International.

Marubeni Power International Inc (MPII) owns and invests in power business platforms in the Americas. An important vertical for MPII is asset management for renewable energy and energy storage assets, which are critical to reduce the carbon intensity of our power infrastructure. Working with renewable power assets requires predictive and responsive digital solutions, because renewable energy generation and electricity market conditions are continuously changing. MPII is using a machine learning (ML) bid optimization engine to inform upstream decision-making processes in power asset management and trading. This solution helps market analysts design and perform data-driven bidding strategies optimized for power asset profitability.

In this post, you will learn how Marubeni is optimizing market decisions by using the broad set of AWS analytics and ML services to build a robust and cost-effective Power Bid Optimization solution.

Solution overview

Electricity markets enable trading power and energy to balance power supply and demand in the electric grid and to cover different electric grid reliability needs. Market participants, such as MPII asset operators, are constantly bidding power and energy quantities into these electricity markets to obtain profits from their power assets. A market participant can submit bids to different markets simultaneously to increase the profitability of an asset, but it needs to consider asset power limits and response speeds as well as other asset operational constraints and the interoperability of those markets.

MPII’s bid optimization engine solution uses ML models to generate optimal bids for participation in different markets. The most common bids are day-ahead energy bids, which should be submitted 1 day in advance of the actual trading day, and real-time energy bids, which should be submitted 75 minutes before the trading hour. The solution orchestrates the dynamic bidding and operation of a power asset and requires using optimization and predictive capabilities available in its ML models.

The Power Bid Optimization solution includes multiple components that play specific roles. Let’s walk through the components involved and their respective business functions.

Data collection and ingestion

The data collection and ingestion layer connects to all upstream data sources and loads the data into the data lake. Electricity market bidding requires at least four types of input:

Electricity demand forecasts
Weather forecasts
Market price history
Power price forecasts

These data sources are accessed exclusively through APIs. Therefore, the ingestion components need to be able to manage authentication, data sourcing in pull mode, data preprocessing, and data storage. Because the data is being fetched hourly, a mechanism is also required to orchestrate and schedule ingestion jobs.
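The post does not include the ingestion code, but as a rough illustration of one such hourly job, here is a minimal sketch of a scheduled AWS Lambda handler; the source endpoint and bucket names are hypothetical stand-ins:

import urllib.request
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# Hypothetical upstream endpoint and destination bucket
FORECAST_API = "https://example.com/api/demand-forecast"
BUCKET = "mpii-example-market-data"

def handler(event, context):
    # Pull the latest hourly demand forecast from the upstream API
    with urllib.request.urlopen(FORECAST_API) as resp:
        payload = resp.read()

    # Partition objects by hour so downstream consolidation can find
    # all sources belonging to the same time window
    hour = datetime.now(timezone.utc).strftime("%Y/%m/%d/%H")
    s3.put_object(
        Bucket=BUCKET,
        Key=f"demand-forecast/{hour}/data.json",
        Body=payload,
    )
    return {"stored_hour": hour}

In this sketch, a schedule (for example, an hourly Amazon EventBridge rule) would invoke the function, and a similar function would exist per data source.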
Data preparation

As with most ML use cases, data preparation plays a critical role. Data comes from disparate sources in a number of formats. Before it’s ready to be consumed for ML model training, it must go through some of the following steps:

Consolidate hourly datasets based on time of arrival; a complete dataset must include all sources.
Augment the quality of the data by using techniques such as standardization, normalization, or interpolation.

At the end of this process, the curated data is staged and made available for further consumption.

Model training and deployment

The next step consists of training and deploying a model capable of predicting optimal market bids for buying and selling energy. To minimize the risk of underperformance, Marubeni used the ensemble modeling technique. Ensemble modeling consists of combining multiple ML models to enhance prediction performance. Marubeni ensembles the outputs of external and internal prediction models with a weighted average to take advantage of the strengths of all models. Marubeni’s internal models are based on Long Short-Term Memory (LSTM) architectures, which are well documented and easy to implement and customize in TensorFlow. Amazon SageMaker supports TensorFlow deployments and many other ML environments. The external model is proprietary, and its description cannot be included in this post.

In Marubeni’s use case, the bidding models perform numerical optimization to maximize the revenue using a modified version of the objective functions used in the publication Opportunities for Energy Storage in CAISO. SageMaker enables Marubeni to run ML and numerical optimization algorithms in a single environment. This is critical, because during the internal model training, the output of the numerical optimization is used as part of the prediction loss function. For more information on how to address numerical optimization use cases, refer to Solving numerical optimization problems like scheduling, routing, and allocation with Amazon SageMaker Processing.

We then deploy those models through inference endpoints. As fresh data is ingested periodically, the models need to be retrained because they become stale over time. The architecture section later in this post provides more details on the models’ lifecycle.

Power bid data generation

On an hourly basis, the solution predicts the optimal quantities and prices at which power should be offered on the market—also called bids. Quantities are measured in MW and prices are measured in $/MW. Bids are generated for multiple combinations of predicted and perceived market conditions. The following table shows an example of the final bid curve output for operating hour 17 at an illustrative trading node near Marubeni’s Los Angeles office.

Date      | Hour | Market    | Location        | MW   | Price
11/7/2022 | 17   | RT Energy | LCIENEGA_6_N001 | 0    | $0
11/7/2022 | 17   | RT Energy | LCIENEGA_6_N001 | 1.65 | $80.79
11/7/2022 | 17   | RT Energy | LCIENEGA_6_N001 | 5.15 | $105.34
11/7/2022 | 17   | RT Energy | LCIENEGA_6_N001 | 8    | $230.15

This example represents our willingness to bid 1.65 MW of power if the power price is at least $80.79, 5.15 MW if the power price is at least $105.34, and 8 MW if the power price is at least $230.15. Independent system operators (ISOs) oversee electricity markets in the US and are responsible for awarding and rejecting bids to maintain electric grid reliability in the most economical way.
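As a small illustration (not from the original post), the following sketch shows how quantities can be read off the example bid curve above for a given market clearing price:

# Hypothetical bid curve from the table above: (price floor, MW offered)
bid_curve = [(0.0, 0.0), (80.79, 1.65), (105.34, 5.15), (230.15, 8.0)]

def offered_mw(clearing_price: float) -> float:
    """Return the MW quantity offered at a given clearing price.

    The asset offers the largest quantity whose price floor is at or
    below the clearing price, mirroring how the bid curve is read.
    """
    mw = 0.0
    for price_floor, quantity in bid_curve:
        if clearing_price >= price_floor:
            mw = quantity
    return mw

print(offered_mw(100.0))  # -> 1.65
print(offered_mw(250.0))  # -> 8.0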
California Independent System Operator (CAISO) operates electricity markets in California and publishes market results every hour prior to the next bidding window. By cross-referencing current market conditions with their equivalent on the curve, analysts are able to infer optimal revenue. The Power Bid Optimization solution updates future bids using new incoming market information and new model predictive outputs.

AWS architecture overview

The solution architecture illustrated in the following figure implements all the layers presented earlier. It uses the following AWS services as part of the solution:

Amazon Simple Storage Service (Amazon S3) to store the following data:
- Pricing, weather, and load forecast data from various sources
- Consolidated and augmented data ready to be used for model training
- Output bid curves refreshed hourly
Amazon SageMaker to train, test, and deploy models to serve optimized bids through inference endpoints.
AWS Step Functions to orchestrate both the data and ML pipelines. We use two state machines:
- One state machine to orchestrate data collection and ensure that all sources have been ingested
- One state machine to orchestrate the ML pipeline as well as the optimized bidding generation workflow
AWS Lambda to implement ingestion, preprocessing, and postprocessing functionality:
- Three functions to ingest input data feeds, with one function per source
- One function to consolidate and prepare the data for training
- One function that generates the price forecast by calling the model’s endpoint deployed within SageMaker
Amazon Athena to provide developers and business analysts SQL access to the generated data for analysis and troubleshooting.
Amazon EventBridge to trigger the data ingestion and ML pipeline on a schedule and in response to events.

In the following sections, we discuss the workflow in more detail.

Data collection and preparation

Every hour, the data preparation Step Functions state machine is invoked. It calls each of the data ingestion Lambda functions in parallel and waits for all four to complete. The data collection functions call their respective source API and retrieve data for the past hour. Each function then stores the received data into its respective S3 bucket. These functions share a common implementation baseline that provides building blocks for standard data manipulation such as normalization or indexation. To achieve this, we use Lambda layers and AWS Chalice, as described in Using AWS Lambda Layers with AWS Chalice. This ensures all developers are using the same base libraries to build new data preparation logic and speeds up implementation.

After all four sources have been ingested and stored, the state machine triggers the data preparation Lambda function. Power price, weather, and load forecast data is received in JSON and character-delimited files. Each record in each file carries a timestamp that is used to consolidate data feeds into one dataset covering a time frame of 1 hour. This construct provides a fully event-driven workflow: training data preparation is initiated as soon as all the expected data is ingested.

ML pipeline

After data preparation, the new datasets are stored in Amazon S3. An EventBridge rule triggers the ML pipeline through a Step Functions state machine.
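As a rough illustration of wiring such a trigger (the post does not show this code), a scheduled EventBridge rule pointing at a Step Functions state machine can be created with boto3; the names and ARNs below are hypothetical:

import boto3

events = boto3.client("events")

# Hypothetical ARNs
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:ml-pipeline"
EVENTS_ROLE_ARN = "arn:aws:iam::123456789012:role/eventbridge-stepfunctions"

# Fire once per hour
events.put_rule(Name="hourly-ml-pipeline", ScheduleExpression="rate(1 hour)")

# Point the rule at the Step Functions state machine; the role must
# allow states:StartExecution on the target state machine
events.put_targets(
    Rule="hourly-ml-pipeline",
    Targets=[{
        "Id": "ml-pipeline",
        "Arn": STATE_MACHINE_ARN,
        "RoleArn": EVENTS_ROLE_ARN,
    }],
)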
The state machine drives two processes:

Check whether the bid curve generation model is current
Automatically trigger model retraining when performance degrades or models are older than a certain number of days

If the age of the currently deployed model is older than the latest dataset by a certain threshold—say 7 days—the Step Functions state machine kicks off the SageMaker pipeline that trains, tests, and deploys a new inference endpoint. If the models are still up to date, the workflow skips the ML pipeline and moves on to the bid generation step. Regardless of the state of the model, a new bid curve is generated upon delivery of a new hourly dataset. The following diagram illustrates this workflow.

By default, the StartPipelineExecution action is asynchronous. We can have the state machine wait for the end of the pipeline before invoking the bids generation step by using the “wait for callback” option.

To reduce cost and time to market in building a pilot solution, Marubeni used Amazon SageMaker Serverless Inference. This ensures that the underlying infrastructure used for training and deployment incurs charges only when needed. This also makes the process of building the pipeline easier because developers no longer need to manage the infrastructure. It is a great option for workloads that have idle periods between traffic spurts. As the solution matures and transitions into production, Marubeni will review the design and adopt a configuration more suited for predictable and steady usage.

Bids generation and data querying

The bids generation Lambda function periodically invokes the inference endpoint to generate hourly predictions and stores the output into Amazon S3. Developers and business analysts can then explore the data using Athena and Microsoft Power BI for visualization. The data can also be made available via API to downstream business applications. In the pilot phase, operators visually consult the bid curve to support their power transaction activities on markets. However, Marubeni is considering automating this process in the future, and this solution provides the necessary foundations to do so.

Conclusion

This solution enabled Marubeni to fully automate their data processing and ingestion pipelines as well as reduce their predictive and optimization models’ deployment time from hours to minutes. Bid curves are now automatically generated and kept up to date as market conditions change. They also realized an 80% cost reduction when switching from a provisioned inference endpoint to a serverless endpoint.

MPII’s forecasting solution is one of the recent digital transformation initiatives Marubeni Corporation is launching in the power sector. MPII plans to build additional digital solutions to support new power business platforms. MPII can rely on AWS services to support their digital transformation strategy across many use cases.

“We can focus on managing the value chain for new business platforms, knowing that AWS is managing the underlying digital infrastructure of our solutions.” – Hernan Figueroa, Sr. Manager Data Science at Marubeni Power International

For more information on how AWS is helping energy organizations in their digital transformation and sustainability initiatives, refer to AWS Energy. Marubeni Power International is a subsidiary of Marubeni Corporation. Marubeni Corporation is a major Japanese trading and investment business conglomerate.
Marubeni Power International’s mission is to develop new business platforms, assess new energy trends and technologies, and manage Marubeni’s power portfolio in the Americas. If you would like to know more about Marubeni Power, check out https://www.marubeni-power.com/.

About the Authors

Hernan Figueroa leads the digital transformation initiatives at Marubeni Power International. His team applies data science and digital technologies to support Marubeni Power growth strategies. Before joining Marubeni, Hernan was a Data Scientist at Columbia University. He holds a Ph.D. in Electrical Engineering and a B.S. in Computer Engineering.

Lino Brescia is a Principal Account Executive based in NYC. He has over 25 years of technology experience and joined AWS in 2018. He manages global enterprise customers as they transform their business with AWS cloud services and perform large-scale migrations.

Narcisse Zekpa is a Sr. Solutions Architect based in Boston. He helps customers in the Northeast U.S. accelerate their business transformation through innovative and scalable solutions on the AWS Cloud. When Narcisse is not building, he enjoys spending time with his family, traveling, cooking, playing basketball, and running.

Pedram Jahangiri is an Enterprise Solution Architect with AWS, with a PhD in Electrical Engineering. He has 10+ years of experience in the energy and IT industry. Pedram has many years of hands-on experience in all aspects of Advanced Analytics for building quantitative and large-scale solutions for enterprises by leveraging cloud technologies.

Sarah Childers is an Account Manager based in Washington DC. She is a former science educator turned cloud enthusiast focused on supporting customers through their cloud journey. Sarah enjoys working alongside a motivated team that encourages diversified ideas to best equip customers with the most innovative and comprehensive solutions.

TAGS: Amazon SageMaker, AWS Lambda, machine-learning, serverless, sustainability
" How Technology Leaders Can Prepare for Generative AI _ AWS Cloud Enterprise Strategy Blog.txt,"AWS Cloud Enterprise Strategy Blog

How Technology Leaders Can Prepare for Generative AI

by Phil Le-Brun | on 24 MAY 2023 | in Artificial Intelligence, Generative AI, Thought Leadership

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. —Roy Amara, Amara’s law

I’m fascinated by the technological tipping points in history that have ignited the public’s imagination—the first TV broadcast, manned space flight, or video conference. Each of these events made a previously esoteric technology or concept tangible. As Amara implies in his “law,” these events are preceded by false starts and inflated expectations. When (if) a tipping point is reached, it is usually accompanied by decades of unseen work described by the S-curve of innovation. Think of past promises of virtual worlds becoming commonplace. While expectations have exceeded reality, organisations and leaders that have curiously leaned in to learn, grounding themselves in real-world business problems like customer demand for more immersive customer experiences, are better prepared for when virtual worlds become mainstream.
The most glaring current example of such an emerging technology is generative AI. To the public, generative AI has seemingly appeared from nowhere. But if you dig deeper, you’ll note that the ideas underlying generative AI solutions trace their lineage back to inventions such as the Mark I perceptron in 1958 and neural networks in the late twentieth century. Advancements in statistical techniques, the vast growth of publicly available data, and the power of the cloud have all been instrumental in making generative AI possible.

You’ve likely come across two terms associated with generative AI. Foundation Models (FMs) are machine learning (ML) models trained on massive quantities of structured and unstructured data, which can be fine-tuned or adapted for more specific tasks. Large Language Models (LLMs) are a subset of FMs focused on understanding and generating human-like text. These models are ideal for needs such as translation, answering questions, summarising information, and creating or identifying images.

AWS and Generative AI

AWS has been investing in and using FMs for several years in areas such as search on Amazon.com and delivering conversational experiences with Alexa. You’ve probably seen the announcements from AWS on generative AI, so I won’t repeat them here. With all the hype and marketing that can surround new technologies, having a clear executive understanding of the “what” and “why” is foundational. Since the launch of Amazon SageMaker in 2017, there has been a continual stream of ML and AI services broadening the reach of these tools to technologists and non-technologists alike. AWS’s mission has been to expand access, given the profound implications of these technologies. The recent announcements continue this mission with a more open approach to delivering the capabilities organisations need. For example, the approach with Amazon Bedrock will provide wide access to pre-trained models that can be customised with your own data, allow data to be kept private, and leverage the power of the cloud to deliver capabilities securely and at scale. Companies don’t have to think about model hosting, training, or monitoring and can instead focus on the outcomes they are driving towards. Amazon Bedrock addresses the simple fact that one solution – or one model – is unlikely to solve every business problem you face. Nor will the costly contribution of confidential data to public models, as some organisations have already learned.

While generative AI is neither a silver bullet nor “just a better search engine,” it is clearly now on everyone’s radar. The potential is huge. Imagine pharmaceutical companies accelerating the design of gene therapies, borrowers having rich conversational experiences with mortgage providers that quickly approve their loans, or everyone everywhere gaining opportunities through broadening access to ongoing knowledge and educational pathways. I’m a nearly competent hobbyist coder and look forward to improving my skills with real-time suggestions from generative AI-powered coding assistants.

So as a Chief Information Officer, Chief Technology Officer, or Chief Data Officer, what should you be thinking about, and how can you prepare? Here are a few topics we believe are important.

Get Focused on Your Cloud Journey

Do you remember those TV programmes you used to watch as children, the ones that warned: “Don’t try this at home”?
I’d give a variant of this warning with generative AI: “Don’t try this without the cloud.” You want your teams focused on problem-solving and innovation, not on managing the underlying complexity and cost of enabling infrastructure and licenses. The cloud is the enabler for generative AI, making available cost-effective data lakes, sustainably provisioned GPUs and compute, high-speed networking, and consumption-based costing. Coupled with compute instances powered by AWS Trainium and AWS Inferentia chipsets to optimise model training and inferences, the cloud can provide lower costs, better performance, and an improved carbon footprint versus on-premises solutions, if the latter is even a realistic alternative.

Get Your Data Foundations Right—Now

The boldest house built on dodgy foundations will not last. The same is true in the world of ML. With generative AI, quality trumps the quantity of business data available. While it’s common to talk about technology debt, we need to acknowledge that many organisations have unwittingly accumulated analogous debt with data. This typically stems from a lack of data quality, fragmented or siloed data sources, a lack of data literacy, inadequate upfront considerations of how data should be integrated into products, and a culture that talks about data but doesn’t use it day-to-day. Now is the time to implement these fundamentals (many of which I’ve discussed in my previous blog post, including how critical the leaders of data in an organisation are). After all, the bulk of time spent bringing ML to life is still associated with activities such as data wrangling and labelling.

Think Beyond the Technology

The world of generative AI is incredibly exciting, but technology rarely operates in a vacuum. Face the law of unintended consequences. Start by considering your stance on ethics, transparency, data attribution, security, and privacy with AI. How can you ensure the technology is used accurately, fairly, and appropriately? Resources exist, as do great readings like Michael Kearns’s book The Ethical Algorithm, but these alone are insufficient. It’s a great opportunity to actually do something! For example, prioritise diversity of skills and worldviews and ensure those engaged in creating and using models represent the diversity of your customers; this helps ensure relevance and the early identification of potential biases. Train on these considerations; bake them into your governance and compliance frameworks and even into your vendor selection processes to select partners who share the same values as you.

Upskill Yourself and Your People

AI simultaneously evokes excitement and concern. It opens a world of knowledge, innovation, and efficiency but leaves many wondering about the implications for their job security. The continued emergence of AI as a profoundly impactful tool requires considering which skills might be needed less in the future and which will be in demand. Consider the technical skills required and how to infuse them into your organisation. Programmes like Machine Learning University can help, but it’s important to think bigger. Skills such as critical thinking and problem-solving will become even more vital. We ultimately want people, assisted by AI, to solve real business challenges and critically assess and question inferences from ML models. This is particularly important with generative AI models that distil data rather than provide considered answers.
Make the space to practice these skills by incrementally and consistently eliminating low-value work—perhaps even by using ML! Upskilling goes beyond individuals developing their skills. According to Tom Davenport’s research, 35 percent of Chief Data Officers have found that running data and AI-enabled initiatives are powerful change tools. Hunkering down in data silos in an attempt to deliver value alone has given way to running cross-organisational initiatives. This functional approach helps broaden data advocacy and excitement about what might be possible.

Start Considering Use Cases

I love the saying, “Fall in love with the problem, not the solution.” It reminds us that while technology is a brilliant enabler, it is just one more set of tools we can apply to real-world problems. What time-consuming, difficult, or impossible problems could generative AI help solve? Where do you have data to help in this process? Think big about the opportunities, but start small with problems that cause day-to-day irritations, what we call “paper cuts.” Can these annoyances be automated away, freeing up organisational time while improving comprehension of AI? For instance, developers can use Amazon CodeWhisperer to gain an understanding of generative AI’s power in assisting productivity improvements while making suggestions for using unfamiliar APIs, coding more securely, and more. Internal benchmarks show a remarkable 57 percent improvement in productivity while increasing the success rate of completing tasks. What a fantastic, immediate opportunity to be a productivity hero in your organisation!

Last, be excited but stay grounded. We’re at an inflexion point with LLMs. Sometimes it feels like the more we learn about AI, the less we know. Approach generative AI with an open, curious mind, but avoid the hype. Critically appraise what you read, and don’t believe there will be a singular best model to adopt. The best approach, and one I’m glad to see AWS has embraced with Amazon Bedrock, is to recognise that different FMs will serve different needs. It democratises access for all builders, allowing commercial and open-source FMs to be adopted. Those already experienced in AI will know this and recognise that the AWS cloud, which provides multiple models, offers a better approach than betting on a single model.

Phil

Further Reading

Announcing New Tools for Building with Generative AI on AWS, Swami Sivasubramanian
A guide to making your AI vision a reality, Tom Godden
Activating ML in the Enterprise: An Interview with Michelle Lee, VP of Amazon Machine Learning Solutions Labs, Phil Le-Brun
Machine Learning University
Prioritising Business Value Creation from Data, Phil Le-Brun

TAGS: Artificial Intelligence, Machine Learning

Phil Le-Brun

Phil Le-Brun is an Enterprise Strategist and Evangelist at Amazon Web Services (AWS). In this role, Phil works with enterprise executives to share experiences and strategies for how the cloud can help them increase speed and agility while devoting more of their resources to their customers. Prior to joining AWS, Phil held multiple senior technology leadership roles at McDonald’s Corporation. Phil has a BEng in Electronic and Electrical Engineering, a Masters in Business Administration, and an MSc in Systems Thinking in Practice.
" Idealo Case Study.txt,"idealo Doubles Click-Through Rate through Personalized Recommendation Engine Developed Using Amazon SageMaker

2023

The German price-comparison site idealo built a machine learning pipeline on AWS that facilitated the ability of its data scientists to deliver models that drive improvements in key marketing metrics. idealo offers 500 million products to users in six European countries. Its Machine Learning Engineering team used Amazon SageMaker and AWS Lambda as tools to help the team experiment fast, automate manual processes, and get models into production quickly. Its user-recommendation model increased click-through rates by 111 percent and session rates by 151 percent, and it enhanced the overall customer experience.

111% rise in click-through rates
151% increase in session rates
154% conversion rate increase in email campaign
Cut time to production for ML models in half

About Company

Based in Germany, idealo is an online price comparison service that operates in six European countries. The website has over 76 million monthly visits, as customers compare prices for over 500 million products offered from about 50,000 vendors. idealo is a subsidiary of Axel Springer SE.

Opportunity | Using ML to Attract Customers Online

The Machine Learning Engineering (MLE) team of idealo wanted to create a scalable, customizable product recommendation engine to support the company’s marketing efforts. Targeted product recommendations help to increase online traffic, attract merchants, and inform consumers’ purchasing decisions. To build a streamlined, agile machine learning (ML) pipeline to support powerful data-driven recommendation tools, the team turned to solutions from Amazon Web Services (AWS).

With 2.5 million daily page views and over 76 million monthly visits, idealo offers an online portal for customers in six countries across Europe to compare prices for over 500 million products from about 50,000 vendors. User traffic drives revenue from advertisers, who closely track certain key performance indicators (KPIs) in the highly competitive retail industry. These KPIs include click-through rates, a measure of how often a customer visits a website to make a purchase, and session rates, the amount of time a user spends on a website. “We wanted to improve what we were already doing as a company and explore other business opportunities,” says Luiz Davi, ML product manager at idealo. “The goal was to build a central offering for the whole company for product recommendations and user-based personalized recommendations.”

In 2021, idealo decided to go all in on the cloud. It wanted to remove the operational risks of its aging on-premises data center, improve the scalability and reliability of the idealo solution for its customers, and boost KPIs. The MLE team was an early adopter of AWS services within idealo, identifying several use cases that it wanted to explore to enhance customer-relationship management (CRM). The team decided to build a small prototype in the cloud that could drive immediate value, and then iterate through A/B testing to evaluate the impact of the ML model and use the insights to steer business decision-making.

Solution | Building an ML Pipeline on AWS that Delivers Personalized Recommendations at Scale

The team uses solutions from AWS to alleviate much of the manual work involved with the orchestration of data so that it can experiment fast, iterate on models in development, and push useful ML models into production twice as fast as it previously could. It built a pipeline using Amazon SageMaker, which developers use to build, train, and deploy ML models for nearly any use case with fully managed infrastructure, tools, and workflows. “Using Amazon SageMaker really speeds up the whole iteration cycle,” says Arjun Roy, idealo ML engineer. “When I think of innovation, I think about playing around with the data and trying different models. And as an extension to that, the pipelines are very flexible.” For example, ML engineers could run one of their models in one-sixteenth of the time by using a technique called parallelizing: the team spun up 16 compute instances on AWS to speed the process of running the model. “If we had to run the servers and host the applications ourselves, that would require much, much more time,” says Davi. “Now, we can be agile and try different approaches as we go.” Furthermore, idealo allocates costs granularly to certain workloads using the cost transparency of AWS services.
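The case study does not include code, but as a rough sketch of what this kind of parallelization can look like with the SageMaker Python SDK, a training job can be fanned out over 16 instances by setting instance_count. The entry point, role, framework choice, and bucket below are hypothetical stand-ins, not details from idealo:

from sagemaker.sklearn import SKLearn

# Hypothetical training job spread across 16 instances; idealo's
# actual framework, script, and data locations are not public
estimator = SKLearn(
    entry_point="train_recommender.py",  # hypothetical script
    role="arn:aws:iam::123456789012:role/sagemaker-execution",
    framework_version="1.2-1",
    instance_type="ml.m5.xlarge",
    instance_count=16,  # parallelize the workload across 16 instances
)
estimator.fit({"train": "s3://example-bucket/recommender/training-data/"})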
In early 2022, the team developed an ML model that provides complementary recommendations: items that correspond to a purchase, such as a case for a purchased mobile phone. In 3 months, the MLE team released the initial model into production. “That first model showed an impressive improvement from our past benchmark,” says Davi. “That opened multiple doors inside the company so that we could move forward and try more.” The team quickly built upon its success with another model that recommends similar products, which are items that are comparable to a purchased item. The team then created an even more sophisticated model, using data about complementary and similar purchases to deliver personalized recommendations to customers. idealo promotes items of interest to customers based on information collected automatically—with permission—about their shopping history.
The team delivers additional functionality to the CRM team through the use of AWS Lambda, a serverless, event-driven compute service that lets organizations run code for virtually any type of application or backend service without provisioning or managing servers. Through AWS Lambda functions, customized bargains are automatically generated as part of the CRM team’s monthly email campaign. “We have automated the process so that we don’t have to do manual work to keep it running,” says David Rosin, idealo ML engineer. “We set it up once, and ideally, it runs every month.” Customers who receive the emails see bargains that have been automatically selected specifically for them. “Using the MLE team implementation versus our old top-sellers’ logic, we achieved a conversion rate increase of 154 percent,” says Felix Gehlhaar, idealo’s CRM manager, who closely collaborated with the MLE team. “This is exciting for us.”

Outcome | Recommending Products in Near Real Time

ML engineers released models into production and significantly improved the effectiveness of the company’s CRM campaigns. Click-through rates have doubled, session rates have increased by 151 percent, and personalized recommendations are enhancing the customer experience. The automated user-recommendation engine has generated success for the idealo website, which has seen a 111 percent rise in click-through rates. “We’ve made a huge leap,” says Davi. “We can see the impact that it’s generating for our internal users. And then we see that impact on the website. People are deciding to buy specific items because they found what they wanted.”

As the entire company continues its migration to AWS, internal idealo teams share data and collaborate more effectively. “One of our CRM managers told us that the ability to share information makes his life much simpler,” says Davi. Throughout 2023, the MLE team plans to explore using near-real-time data to continue to improve KPIs by driving recommendations, a process that builds upon its strong ML pipeline. “There’s a lot to build,” says Davi. “We have never tried something like this before, but we see great potential as we advance this initiative. Using AWS, we create products that support us as a company moving forward.”

AWS Services Used

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
Amazon SageMaker: Build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
" IDEMIA Case Study _ Security and Compilance _ AWS.txt,"Making America's Neighborhoods Safer with IDEMIA Cloud-Based Fingerprinting Software

2022

Technology and identity-security company IDEMIA is a biometrics industry leader known for forensic analysis software that enables law enforcement agencies to scan and identify fingerprints at scale. To expand their market range and serve more customers, IDEMIA needed to adapt their enterprise application into a lightweight, cloud-based software as a service (SaaS) solution, which would offer a subscription cost model that small agencies could deploy. IDEMIA leveraged the Amazon Web Services (AWS) Cloud and the AWS Go to Market team to bring their new solution, STORM ABIS, to life.
About IDEMIA

Trusted by hundreds of governments and thousands of enterprises in over 180 countries, IDEMIA is a global leader in providing identity-related security services. IDEMIA’s technologies enable our clients to credentialize, authenticate and analyze identities for frictionless access control, connectivity, identity, payments, public security, and travel—at scale and in total security.

Choosing AWS for compliance and scalability

Transforming a compliance-driven, on-premise suite into a SaaS solution posed technical challenges, but Jerry O’Brien, IDEMIA’s Chief Product Manager, knew that AWS was the answer. “Many smaller jurisdictions would never have been able to afford our original product,” explains O’Brien. “But with the AWS Cloud, we saw that we could automate delivery, implementation and offer a subscription price model providing predictable year-to-year budgeting.”

STORM ABIS needed to adhere to strict security and compliance regulations from local, state, and federal agencies.
AWS offered IDEMIA the security configurations they required, along with access to a team of cloud experts that could help build the solution from scratch. With a compliant and secure foundation to build on, IDEMIA and AWS worked together to design a cloud-first application that was made by examiners, for examiners.

The product was created with collaboration from AWS in two strategic areas. AWS provided strategic advisory services with a dedicated team of business and technical professionals from the AWS Service Creation and AWS Professional Services teams. The final product is a multi-tenant solution, backed by Amazon Elastic Compute Cloud (Amazon EC2) instances, that can be deployed within weeks or faster if an agency already has a mature cloud environment. To make the solution customizable, the IDEMIA team also created a features toggle, providing agencies the option to turn certain product features on or off depending on their needs. AWS and IDEMIA finished building STORM ABIS in 2021. And with an agile, scalable, and cost-effective end product in hand, it was time to go to market.

Marketing and launching the new product

With STORM ABIS ready for general availability, Randy Jones, AWS Independent Software Vendor (ISV) Acceleration Manager, worked alongside IDEMIA to implement a marketing and sales strategy targeting mid-market law enforcement agencies across the United States. Beyond providing funding and expertise to market the product, Jones and his team also helped support the product launch at IDEMIA’s annual user conference. “We had multiple members of AWS’s Justice and Public Safety team attend the conference, which was crucial to connect with customers,” explains Jones. Jeremy Slavish of AWS’s Justice and Public Safety team had procured and used IDEMIA solutions in a previous role and worked closely with many attendees to determine how STORM would meet their unique ABIS needs.

Christopher Coleman, Senior Director of Marketing at IDEMIA, adds that AWS not only offered knowledge and networking support during product development but also supplemented IDEMIA’s marketing and sales staff. “AWS expanded our reach beyond large federal and state agencies to reach those critical Tier two, three, and four jurisdictions,” Coleman says. “And they also provided extra support for sales and marketing to help us foster relationships in those smaller cities and counties. That was critical because we simply didn’t have the bandwidth to tackle that on our own.”

Keeping communities safe with agile, cloud-based solutions

For IDEMIA, the build-market-sell approach was a huge success—only weeks after launching the software, IDEMIA made their first sale. The first deployment of STORM ABIS will launch in Washington County in Oregon in Spring of 2022.

For small agencies, STORM ABIS is about more than cost savings. It’s about giving every law enforcement officer the tools they need to solve crimes faster—which ultimately acts as a preventative measure to keep communities safe. “It’s about speed,” says Coleman. “If you can run prints right then and there, locally, you can solve crimes faster. You can solve problems faster, you can be proactive, you can catch repeat offenders—and your community will be safer, as a result.” “Before STORM, local jurisdictions had to prioritize which fingerprints they ran because they had limited access to resources and a long backlog,” says O’Brien. “Now, officers have this technology at their fingertips, and it’s cost-effective—so they can run 10, 20, or 100 prints at a time. They can run prints for cold cases and minor crimes that might not otherwise be solved—and get repeat offenders off the streets.”

Benefits of AWS

No hardware, training, or on-boarding required
An out-of-box SaaS-based solution, easily deployable by agencies of any size
Easily scales to add more users, as needed
Accessible anywhere, via any web browser—including from home or directly from a crime scene
Cloud storage automatically backs up data
Continuous updates of algorithms, features, and security patches via cloud-native architecture

AWS Services Used

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Amazon Aurora is a relational database management system (RDBMS) built for the cloud with full MySQL and PostgreSQL compatibility.
Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets and virtual appliances in one or more Availability Zones (AZs).
Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, scalable, high-performance block-storage service designed for Amazon Elastic Compute Cloud (Amazon EC2).
" Illumina Case Study _ Genomics _ AWS.txt,"Illumina Brings Genomics from Samples to Answers Using AWS

About Illumina

Illumina's mission is to unlock the power of the genome to improve human health. An AWS Partner, the company has been a driving force behind technological advancement in genomics, evolving from a sequencing instrument vendor into a complete genomic solutions provider and deploying software solutions on Amazon Web Services (AWS) since 2013. Illumina’s AWS-backed software solutions are lowering barriers to entry and helping researchers generate new discoveries every day, driving drug discovery and more.

Benefits of AWS

Deployed robust portfolio of genomics solutions globally in a secure and compliant environment
Accelerated research and promoted collaboration of customers worldwide to process over 371,000 COVID-19-related samples
Facilitated access to streamlined, unified, customizable samples-to-analysis workflows
Drastically reduced computing and storage costs with Amazon EC2 Spot Instances and Amazon S3 Glacier
In the last decade, genomics has evolved from a specialty research area into a powerful clinical tool that has ushered in a new era of patient-focused healthcare. Genome sequencing and analysis have become simpler, cheaper, and more comprehensive, making it realistic for clinicians to order genetic tests for individual patients and for researchers to examine thousands of samples to draw connections between genetic variation and human disease. While the first human genome took decades to sequence, scientists can now efficiently sequence an entire human genome in under 24 hours. Since its inception, Illumina has reduced the cost of genomics technology at a rate that exceeds Moore’s Law: sequencing a single human genome cost over $100 million in 2001; 20 years later, it can cost as little as $600.

Navigating from Sample to Answer

A complete next-generation genomics workflow starts with sample collection, preparation, and sequencing, but that’s just the beginning. After that comes the heavy bioinformatics lifting, starting with raw read quality control, data preprocessing, and alignment. Scientists can then move into secondary analyses like variant calling and, finally, conduct advanced tertiary analyses based on their interests. These tertiary analyses can include phylogenetic annotation, genotype-phenotype associations, and much more. For researchers and clinicians who aren’t bioinformatics experts, performing each step on a separate platform can quickly become overwhelming.

Illumina streamlines this entire genomics workflow for customers, offering integrated solutions for every step. Starting from the beginning, BaseSpace™ Clarity LIMS (Laboratory Information Management Systems) helps genomics customers track samples and optimize sequencing workflows. Sequencing instruments can upload data directly into the Illumina Connected Analytics (ICA) platform, where users can manage datasets and leverage analytical tools within the platform on AWS. The DRAGEN™ Bio-IT platform provides accurate, ultra-rapid secondary analysis results. At the same time, BaseSpace Correlation Engine integrates individuals’ datasets and queries into a repository of open-access and controlled-access public datasets to enable a wide variety of tertiary analyses. “We’re delivering a complete workflow—from sample preparation to tertiary analysis—in the secure AWS environment that allows all of the information generated before and after sequencing to be aggregated and analyzed,” says Rami Mehio, vice president of software and bioinformatics at Illumina. “That’s powerful for customers who want to track samples over time, cross-reference their data with publicly available databases, and glean insights for faster results.”

Data for these platforms is stored on Amazon Simple Storage Service (Amazon S3), a scalable object storage service. Illumina customers power and dramatically accelerate their analyses with DRAGEN running on Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure, resizable compute capacity in the cloud. While advanced users have the option to customize tools like ICA and DRAGEN to perform niche research, Illumina also offers end-to-end cloud solutions with out-of-the-box functionality for specific uses. These include the TruSight™ Software Suite, a variant analysis software solution for uncovering rare disease insights, and TruSight Oncology 500, a fine-tuned sequencing assay for analyzing tumors and identifying immune-oncology biomarkers.

AWS supports thousands of security standards and compliance certifications, including HIPAA, GDPR, ISO 27001, and ISO 13485, helping customers satisfy compliance requirements throughout their genomics workflows. Illumina offers customers extra peace of mind by offering data management in Amazon Virtual Private Cloud (Amazon VPC), which launches AWS resources in a logically isolated custom virtual network that separates one customer’s data from another’s. “Security is job zero—it’s at the center of everything we do,” says Susan Tousi, Illumina’s chief commercial officer. “At the very foundation, we can count on the AWS Shared Responsibility Model to ensure that our underlying cloud infrastructure maintains enterprise-level security and compliance. By leveraging Amazon EC2 Regions globally, we’re bringing compute to the data, supporting customers in all regions while allowing them to maintain data sovereignty.”
In the last decade, genomics has evolved from a specialty research area into a powerful clinical tool that has ushered in a new era of patient-focused healthcare. Genome sequencing and analysis have become simpler, cheaper, and more comprehensive, making it realistic for clinicians to order genetic tests for individual patients and for researchers to examine thousands of samples to draw connections between genetic variation and human disease. While the first human genome took decades to sequence, scientists can now efficiently sequence an entire human genome in under 24 hours.

Illumina's mission is to unlock the power of the genome to improve human health. An AWS Partner, the company has been a driving force behind technological advancement in genomics, evolving from a sequencing instrument vendor into a complete genomic solutions provider and deploying software solutions on Amazon Web Services (AWS) since 2013. Illumina's AWS-backed software solutions are lowering barriers to entry and helping researchers generate new discoveries every day, driving drug discovery and more.

This global scalability and deployment facilitates meaningful collaboration for both long-term projects and expedient crisis response. Researchers worldwide processed over 371,000 COVID-19-related samples on Illumina's COVID-19 BaseSpace Apps in 2020 and the first half of 2021. "If customers were only able to do this on premises, we would have met serious constraints. Therefore, the cloud was key for powering the global pandemic response on that level," says Tousi.

Reducing Costs by Saving on AWS

Amazon S3 Storage Classes can be customized according to different data needs, making it easy for Illumina to optimize for maximum cost savings. By storing petabytes of infrequently accessed data in Amazon S3 Glacier Deep Archive, Illumina customers save over 90 percent in storage costs. Similarly, DRAGEN runs on Amazon EC2 F1 instances, which offer affordable, accelerated computing that can support the parallel processes Illumina needs. F1 instances offer customizable hardware acceleration with DRAGEN field-programmable gate arrays (FPGAs). To scale DRAGEN across F1 instances, the company used AWS Batch, a fully managed batch processing service that plans, schedules, and executes batch computing workloads.

"AWS provides us options to optimize for speed, flexibility, and cost and cater for the end customer use case and needs," says Mehio. "Some users may want to perform genetic analyses as quickly as possible, whereas some academic users might opt to sacrifice some speed to lower costs and save research dollars. By leveraging different F1 instance types and storage options, our users maintain flexibility and the ability to scale up and down as needed."
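The case study doesn't include code, but the AWS Batch pattern described above (scaling DRAGEN across F1 instances) reduces to submitting jobs against a job queue and job definition. A minimal sketch, assuming hypothetical resource names:

import boto3

batch = boto3.client("batch", region_name="us-east-1")

# The queue and job definition names below are invented placeholders;
# AWS Batch schedules the job onto compute resources (e.g., F1
# instances) registered with the queue's compute environment.
response = batch.submit_job(
    jobName="dragen-sample-0001",
    jobQueue="f1-dragen-queue",
    jobDefinition="dragen-alignment:1",
    containerOverrides={
        "command": ["--fastq", "s3://example-bucket/sample-0001.fastq.gz"],
    },
)
print(response["jobId"])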
Navigating from Sample to Answer

A complete next-generation genomics workflow starts with sample collection, preparation, and sequencing, but that's just the beginning. After that comes the heavy bioinformatics lifting, starting with raw read quality control, data preprocessing, and alignment. Scientists can then move into secondary analyses like variant calling and, finally, conduct advanced tertiary analyses based on their interests. These tertiary analyses can include phylogenetic annotation, genotype-phenotype associations, and much more. For researchers and clinicians who aren't bioinformatics experts, performing each step on a separate platform can quickly become overwhelming.

Illumina streamlines this entire genomics workflow for customers, offering integrated solutions for every step. Starting from the beginning, BaseSpace™ Clarity LIMS (Laboratory Information Management Systems) helps genomics customers track samples and optimize sequencing workflows. Sequencing instruments can upload data directly into the Illumina Connected Analytics (ICA) platform, where users can manage datasets and leverage analytical tools within the platform on AWS. The DRAGEN™ Bio-IT platform provides accurate, ultra-rapid secondary analysis results. At the same time, BaseSpace Correlation Engine integrates individuals' datasets and queries into a repository of open-access and controlled-access public datasets to enable a wide variety of tertiary analyses.

"We're delivering a complete workflow—from sample preparation to tertiary analysis—in the secure AWS environment that allows all of the information generated before and after sequencing to be aggregated and analyzed," says Rami Mehio, vice president of software and bioinformatics at Illumina. "That's powerful for customers who want to track samples over time, cross-reference their data with publicly available databases, and glean insights for faster results."

Since its inception, Illumina has reduced the cost of genomics technology at a rate that exceeds Moore's Law. Sequencing a single human genome cost over $100 million in 2001; 20 years later, it can cost as little as $600. "We want to democratize access to genomics technologies; passing cost savings on to our customers is a huge part of this effort," says Tousi. "Cost should not be a deciding factor for research or clinical applications—people should perform sequencing and analysis purely based on how they anticipate being able to use the data."

Benefits of AWS
- Deployed robust portfolio of genomics solutions globally in secure and compliant environment
- Accelerated research and promoted collaboration of customers worldwide to process over 371,000 COVID-19-related samples
- Facilitated access to streamlined, unified, customizable samples-to-analysis workflows
- Drastically reduced computing and storage costs with Amazon EC2 Spot Instances and Amazon S3 Glacier

Learn More: AWS Healthcare & Life Sciences Virtual Symposium 2021: Illumina

"We rely on the strength of AWS tools as a backbone that allows us to focus on designing genomics-specific algorithms," says Mehio.
"As researchers' and clinicians' needs change, we can easily deploy new features and versions of our products."

Secure Solutions for Scaling Global Genomics

Human genomic data can be associated with highly personal health information, and data breaches are an ever-growing risk for healthcare organizations worldwide. As a result, security is a paramount consideration for Illumina and its customers, many of whom must adhere to increasingly strict data management regulations. "Security is job zero––it's at the center of everything we do," says Tousi. "At the very foundation, we can count on the AWS Shared Responsibility Model to ensure that our underlying cloud infrastructure maintains enterprise-level security and compliance. By leveraging Amazon EC2 Regions globally, we're bringing compute to the data, supporting customers in all regions while allowing them to maintain data sovereignty."

About Illumina
Illumina develops, manufactures, and markets integrated systems for analyzing genetic variation and biological function.

AWS Services Used: Amazon Virtual Private Cloud (Amazon VPC) is a service that lets you launch AWS resources in a logically isolated virtual network that you define.

" Illumina Reduced Carbon Emissions by 89 and Lowered Data Storage Costs Using AWS _ Illumina Case Study _ AWS.txt,"
Illumina Reduced Carbon Emissions by 89% and Lowered Data Storage Costs Using AWS (2023)

Learn how Illumina in the life sciences industry drove sustainability, reduced costs, and optimized data storage using AWS.

Benefits:
- 89% carbon emissions savings compared to on-premises equivalent
- 60% reduction in data storage costs
- 50 PB of data stored in Amazon S3 Intelligent-Tiering, simplifying management
- Transferred data into an Amazon S3 storage class in minutes
- Supports company-wide sustainability goals

AWS Services Used: Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Amazon S3 Intelligent-Tiering is the only cloud storage class that delivers automatic storage cost savings when data access patterns change, without performance impact or operational overhead.

Outcome | Reducing Costs and Optimizing Data Storage Using Amazon S3

"Before S3 Intelligent-Tiering, we were analyzing our bill every month to try to find ways to reduce our data storage costs," says Maynard. Previously, Illumina's teams would use Amazon S3 lifecycle policies to transition data into different Amazon S3 storage classes to cut data storage costs. To streamline this task and optimize its data storage, Illumina decided to adopt the S3 Intelligent-Tiering storage class. By using S3 Intelligent-Tiering, Illumina could allocate its cost savings toward expanding its service and software offering, enhancing the customer experience.

Outcome | Reducing Carbon Emissions by 89% Using AWS Compared to On-Premises

Using the AWS Customer Carbon Footprint Tool, Illumina realized an 89 percent reduction of carbon emissions for its usage in AWS during the 12-month period ending November 2022. During this period, the tool reported 290 metric tons of carbon dioxide equivalent (MTCO2e) for Illumina's usage in AWS compared to an estimated 2,657 MTCO2e if the same workloads were run in an on-premises data center. "Illumina has committed to net-zero emissions by 2050 for our direct operations and across our value chain," says Sharon Vidal, head of corporate social responsibility at Illumina. "As data demands increase, we are thrilled at the opportunity to reduce carbon emissions not only for our environmental footprint but also for our customers on their sustainability journeys."
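The 89 percent figure is consistent with the reported totals: (2,657 − 290) / 2,657 ≈ 0.891, or roughly an 89 percent reduction relative to the estimated on-premises baseline.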
Illumina further optimized its storage footprint by offering customers access to DRAGEN Original Read Archive (ORA) compression technology. DRAGEN (Dynamic Read Analysis for genomics), Illumina's premier secondary analysis solution, provides accurate, comprehensive, and efficient secondary analysis for customers performing genomic analysis. DRAGEN ORA technology reduces the data footprint of a human genome by up to 80 percent, eliminating the burden of data storage for customers. This technology can drastically reduce customers' data storage needs while reducing associated carbon emissions and unlocking additional cost savings.

In 2012, Illumina expanded its line of products to include BaseSpace Sequence Hub—a push-button platform for data management and analysis—where its customers can process, analyze, and store their genomic data securely in the cloud using a basic internet connection. In 2021, Illumina released Illumina Connected Analytics, a secure and flexible bioinformatics platform to drive scientific insights, providing its customers with a scalable and highly configurable platform.

Opportunity | Using Amazon S3 Intelligent-Tiering to Manage a Growing Data Footprint for Illumina

"Typically, our customers keep a copy of the data that they generate through BaseSpace Sequence Hub," says Al Maynard, director of software engineering at Illumina. "Our total data footprint has been climbing very fast because our customers rarely delete genomic data that could be used for future analysis." Because its customers process their analytics on demand, it is a challenge for Illumina to predict when customers will need access to specific data.

Opportunity | Driving Sustainability Using AWS

With its mission to improve human health and a commitment to operate responsibly and sustainably, Illumina used the AWS Customer Carbon Footprint Tool to track the carbon emissions of its AWS usage. This tool uses easy-to-understand data visualizations to provide customers with their historical carbon emissions, evaluate emission trends as their use of AWS evolves, approximate the estimated carbon emissions avoided by using AWS instead of an on-premises data center, and review forecasted emissions based on current use. The forecasted emissions show how a customer's carbon footprint will change as AWS stays on path to powering its operations with 100 percent renewable energy by 2025 and to reaching net-zero carbon by 2040 as part of The Climate Pledge. The AWS Customer Carbon Footprint Tool lets you track, measure, review, and forecast the carbon emissions generated from your AWS usage.

About Illumina
Illumina specializes in genetic sequencing, offering a full range of software, instruments, and services that help its customers advance their genomic research. Illumina's mission is to improve human health by unlocking the power of the genome. A leading developer, manufacturer, and marketer of life science tools and systems for large-scale genetics analysis, Illumina was founded in 1998 and helps its customers analyze genomes, make rapid advancements in life sciences research, and improve human health. Illumina's customers use its genetic-sequencing solutions to accelerate therapeutic and pharmaceutical insights.
Studies conducted by the international analyst firm 451 Research found that moving on-premises workloads to AWS can lower the workload carbon footprint by at least 80 percent, and by up to 96 percent once AWS is powered with 100 percent renewable energy, a target it is on a path to meet by 2025. The infrastructure of AWS is 3.6 times more energy efficient than the median of surveyed US enterprise data centers and up to 5 times more energy efficient than the average in the EU.

As the company expanded its customer base and product line, the amount of genetic data that Illumina securely stored in the cloud grew exponentially—from 1 PB to 100 PB in 8 years. The company's data growth continued to accelerate, and during 2021–2022 alone, Illumina added over 24 PB of data in Amazon Simple Storage Service (Amazon S3), an object storage service built to store and retrieve virtually any amount of data from anywhere. Further, Illumina predicted that its stored data would continue to double every 2 years, prompting the company to explore ways to optimize its data storage, maximize cost savings, and reduce its carbon emissions.

For over 10 years, Illumina has stored data in AWS using Amazon S3. While looking for ways to optimize its data storage using AWS best practices, Illumina began using Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering), which automates storage cost savings by moving data when access patterns change and automatically moving objects that have not been accessed to lower-cost access tiers. This proved to be ideal for Illumina, given its customers' unpredictable data access patterns; many of Illumina's customers frequently access their genomic data during data generation, after which it lies dormant until reanalysis is needed.

After just 3 months of using S3 Intelligent-Tiering, Illumina began to see significant monthly cost savings. For every 1 TB of data, the company saves 60 percent on storage costs. "I think it's the biggest return on investment that we've ever seen," says Maynard. Further, Illumina can provide its customers with near-instant access to thousands of whole genome sequences at a low, competitive cost, helping its customers accelerate their research and development.

Illumina first tested the S3 Intelligent-Tiering storage class in its test environment and then ran a limited pilot with production data in AWS. A few months later, the company decided to transition 50 PB of data from its BaseSpace Sequence Hub to the S3 Intelligent-Tiering storage class, which took only a few minutes to set up. By using S3 Intelligent-Tiering, Illumina streamlined its internal workflows, simplified its data management, and benefited from more-predictable and lower-cost storage pricing, all while experiencing the same performance as the Amazon S3 Standard storage class.
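The case study doesn't show the configuration itself; as a rough sketch of one common way to route a bucket's objects into S3 Intelligent-Tiering with a lifecycle rule via boto3 (the bucket name is a placeholder, not Illumina's):

import boto3

s3 = boto3.client("s3")

# Transition all objects to the S3 Intelligent-Tiering storage class as
# soon as possible (Days=0). New uploads can instead set
# StorageClass="INTELLIGENT_TIERING" directly on put_object.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-sequence-hub-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-to-intelligent-tiering",
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)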
Advancing Analytics and Further Optimizing Data Storage on AWS

Illumina is now in the process of moving its data from research and development and from Illumina Connected Analytics into S3 Intelligent-Tiering so that it can further optimize its data storage and reduce costs. The company is also looking at using Amazon S3 Storage Lens, which delivers organization-wide visibility into object-storage usage and activity trends while making actionable recommendations to improve cost efficiency and apply best practices for data protection. "By using AWS, we can limit how much we have to think about managing our data," says Maynard. "AWS does all the hard work for us, and we get the benefit of extra storage savings and continuous innovation to improve energy efficiency."

" Implement unified text and image search with a CLIP model using Amazon SageMaker and Amazon OpenSearch Service _ AWS Machine Learning Blog.txt,"
AWS Machine Learning Blog

Implement unified text and image search with a CLIP model using Amazon SageMaker and Amazon OpenSearch Service
by Kevin Du and Ananya Roy | on 05 APR 2023 | in Advanced (300), Amazon OpenSearch Service, Amazon SageMaker

The rise of text and semantic search engines has made search easier for consumers of ecommerce and retail businesses. Search engines powered by unified text and image search provide extra flexibility, because you can use both text and images as queries. For example, suppose you have a folder of hundreds of family pictures on your laptop and want to quickly find a picture taken when you and your best friend were in front of your old house's swimming pool. You can use conversational language like "two people stand in front of a swimming pool" as a query in a unified text and image search engine; you don't need the right keywords in image titles to perform the query.

Amazon OpenSearch Service now supports the cosine similarity metric for k-NN indexes. Cosine similarity measures the cosine of the angle between two vectors, where a smaller angle denotes a higher similarity between the vectors. With cosine similarity, you can measure the orientation between two vectors, which makes it a good choice for some specific semantic search applications.

Contrastive Language-Image Pre-Training (CLIP) is a neural network trained on a variety of image and text pairs. The CLIP neural network is able to project both images and text into the same latent space, which means that they can be compared using a similarity measure, such as cosine similarity. You can use CLIP to encode your products' images or descriptions into embeddings, and then store them in an OpenSearch Service k-NN index. Your customers can then query the index to retrieve products that they're interested in.

You can use CLIP with Amazon SageMaker to perform the encoding. Amazon SageMaker Serverless Inference is a purpose-built inference service that makes it easy to deploy and scale machine learning (ML) models. With SageMaker, you can deploy serverless endpoints for dev and test, and then move to real-time inference when you go to production. SageMaker Serverless Inference helps you save cost by scaling infrastructure down to zero during idle times, which is ideal for building a proof of concept (POC) with long idle times between development cycles. You can also use Amazon SageMaker batch transform to get inferences from large datasets.
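Before diving into the walkthrough, here is a minimal sketch (not part of the post's GitHub code) of what comparing two CLIP embeddings with cosine similarity looks like; the random vectors stand in for real 1,024-dimensional RN50 embeddings:

import numpy as np

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|); values near 1.0 mean the
    # vectors point in nearly the same direction in the embedding space
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-ins for embeddings produced by CLIP RN50 text and image encoders
text_embedding = np.random.rand(1024)
image_embedding = np.random.rand(1024)
print(cosine_similarity(text_embedding, image_embedding))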
In this post, we demonstrate how to build a search application using CLIP with SageMaker and OpenSearch Service. The code is open source, and it is hosted on GitHub.

Solution overview

OpenSearch Service provides text-matching and embedding k-NN search. We use embedding k-NN search in this solution. You can use both an image and text as a query to search items from the inventory. Implementing this unified image and text search application consists of two phases:

k-NN reference index – In this phase, you pass a set of corpus documents or product images through a CLIP model to encode them into embeddings. Text and image embeddings are numerical representations of the corpus or images, respectively. You save those embeddings into a k-NN index in OpenSearch Service. The concept underpinning k-NN is that similar data points exist in close proximity in the embedding space. As an example, the text "a red flower," the text "rose," and an image of a red rose are similar, so these text and image embeddings are close to each other in the embedding space.

k-NN index query – This is the inference phase of the application. In this phase, you submit a text or image search query through the deep learning model (CLIP) to encode it as an embedding. Then, you use that embedding to query the reference k-NN index stored in OpenSearch Service. The k-NN index returns similar embeddings from the embedding space. For example, if you pass the text "a red flower," it would return the embeddings of a red rose image as a similar item.

The following figure illustrates the solution architecture. The workflow steps are as follows:
1. Create a SageMaker model from a pretrained CLIP model for batch and real-time inference.
2. Generate embeddings of product images using a SageMaker batch transform job.
3. Use SageMaker Serverless Inference to encode query images and text into embeddings in real time.
4. Use Amazon Simple Storage Service (Amazon S3) to store the raw text (product descriptions), images (product images), and the image embeddings generated by the SageMaker batch transform jobs.
5. Use OpenSearch Service as the search engine to store embeddings and find similar embeddings.
6. Use a query function to orchestrate encoding the query and performing a k-NN search.

We use Amazon SageMaker Studio notebooks (not shown in the diagram) as the integrated development environment (IDE) to develop the solution.

Set up solution resources

To set up the solution, complete the following steps:
1. Create a SageMaker domain and a user profile. For instructions, refer to Step 5 of Onboard to Amazon SageMaker Domain Using Quick setup.
2. Create an OpenSearch Service domain. For instructions, see Creating and managing Amazon OpenSearch Service domains. You can also use an AWS CloudFormation template by following the GitHub instructions to create a domain.

You can connect Studio to Amazon S3 from Amazon Virtual Private Cloud (Amazon VPC) using an interface endpoint in your VPC instead of connecting over the internet. By using an interface VPC endpoint, the communication between your VPC and Studio is conducted entirely and securely within the AWS network. Your Studio notebook can connect to OpenSearch Service over a private VPC to ensure secure communication. OpenSearch Service domains offer encryption of data at rest, which is a security feature that helps prevent unauthorized access to your data, and node-to-node encryption provides an additional layer of security on top of the default features of OpenSearch Service. Amazon S3 automatically applies server-side encryption (SSE-S3) to each new object unless you specify a different encryption option. In the OpenSearch Service domain, you can attach identity-based policies that define who can access a service, which actions they can perform, and, if applicable, the resources on which they can perform those actions.
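The post creates the OpenSearch Service domain through the console or CloudFormation; purely as an illustration, a domain with the security features just described (encryption at rest, node-to-node encryption, HTTPS enforcement) could also be created with boto3 roughly as follows. The domain name and sizing are assumptions:

import boto3

opensearch = boto3.client("opensearch", region_name="us-east-1")

# Placeholder domain name and sizing; the security options mirror the
# features discussed above.
opensearch.create_domain(
    DomainName="clip-search-demo",
    EngineVersion="OpenSearch_1.3",
    ClusterConfig={"InstanceType": "r6g.large.search", "InstanceCount": 1},
    EBSOptions={"EBSEnabled": True, "VolumeType": "gp3", "VolumeSize": 100},
    EncryptionAtRestOptions={"Enabled": True},
    NodeToNodeEncryptionOptions={"Enabled": True},
    DomainEndpointOptions={"EnforceHTTPS": True},
)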
Encode images and text pairs into embeddings

This section discusses how to encode images and text into embeddings. This includes preparing the data, creating a SageMaker model, and performing batch transform using the model.

Data overview and preparation

You can use a SageMaker Studio notebook with a Python 3 (Data Science) kernel to run the sample code. For this post, we use the Amazon Berkeley Objects Dataset, a collection of 147,702 product listings with multilingual metadata and 398,212 unique catalogue images. We only use the item images and item names in US English, and for demo purposes we use approximately 1,600 products. For more details about this dataset, refer to the README. The dataset is hosted in a public S3 bucket. There are 16 files that include product descriptions and metadata of Amazon products in the format of listings/metadata/listings_.json.gz. We use the first metadata file in this demo.

You use pandas, an open-source data analysis and manipulation tool built on top of the Python programming language, to load the metadata, then select products that have US English titles from the data frame. An attribute called main_image_id identifies an image. See the following code:

import pandas as pd  # import added; the original snippet assumes pandas is already loaded

meta = pd.read_json("s3://amazon-berkeley-objects/listings/metadata/listings_0.json.gz", lines=True)

def func_(x):
    us_texts = [item["value"] for item in x if item["language_tag"] == "en_US"]
    return us_texts[0] if us_texts else None

meta = meta.assign(item_name_in_en_us=meta.item_name.apply(func_))
meta = meta[~meta.item_name_in_en_us.isna()][["item_id", "item_name_in_en_us", "main_image_id"]]
print(f"#products with US English title: {len(meta)}")
meta.head()

There are 1,639 products in the data frame. Next, link the item names with the corresponding item images. images/metadata/images.csv.gz contains the image metadata. This file is a gzip-compressed CSV file with the following columns: image_id, height, width, and path. You can read the metadata file and then merge it with the item metadata.
See the following code:

image_meta = pd.read_csv("s3://amazon-berkeley-objects/images/metadata/images.csv.gz")
dataset = meta.merge(image_meta, left_on="main_image_id", right_on="image_id")
dataset.head()

You can use the SageMaker Studio notebook Python 3 kernel's built-in PIL library to view a sample image from the dataset:

from sagemaker.s3 import S3Downloader as s3down
from pathlib import Path
from PIL import Image

def get_image_from_item_id(item_id="B0896LJNLH", return_image=True):
    s3_data_root = "s3://amazon-berkeley-objects/images/small/"
    item_idx = dataset.query(f"item_id == '{item_id}'").index[0]
    s3_path = dataset.iloc[item_idx].path
    local_data_root = "./data/images"
    local_file_name = Path(s3_path).name
    s3down.download(f"{s3_data_root}{s3_path}", local_data_root)
    local_image_path = f"{local_data_root}/{local_file_name}"
    if return_image:
        img = Image.open(local_image_path)
        return img, dataset.iloc[item_idx].item_name_in_en_us
    else:
        return local_image_path, dataset.iloc[item_idx].item_name_in_en_us

image, item_name = get_image_from_item_id()
print(item_name)
image

Model preparation

Next, create a SageMaker model from a pretrained CLIP model. The first step is to download the pretrained model weighting file, put it into a model.tar.gz file, and upload it to an S3 bucket. The path of the pretrained model can be found in the CLIP repo. We use a pretrained ResNet-50 (RN50) model in this demo. See the following code:

%%writefile build_model_tar.sh
#!/bin/bash

MODEL_NAME=RN50.pt
MODEL_NAME_URL=https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt

BUILD_ROOT=/tmp/model_path
S3_PATH=s3:////model.tar.gz

rm -rf $BUILD_ROOT
mkdir $BUILD_ROOT
cd $BUILD_ROOT && curl -o $BUILD_ROOT/$MODEL_NAME $MODEL_NAME_URL
cd $BUILD_ROOT && tar -czvf model.tar.gz .
aws s3 cp $BUILD_ROOT/model.tar.gz $S3_PATH

!bash build_model_tar.sh

You then need to provide an inference entry point script for the CLIP model. CLIP is implemented using PyTorch, an open-source ML framework that accelerates the path from research prototyping to production deployment, so you use the SageMaker PyTorch framework. For information about deploying a PyTorch model with SageMaker, refer to Deploy PyTorch Models. The inference code accepts two environment variables: MODEL_NAME and ENCODE_TYPE. This helps us switch between different CLIP models easily. We use ENCODE_TYPE to specify whether we want to encode an image or a piece of text. Here, you implement the model_fn, input_fn, predict_fn, and output_fn functions to override the default PyTorch inference handler. See the following code:

!mkdir -p code

%%writefile code/clip_inference.py

import io
import json
import logging
import os
import sys

import clip
import torch
from PIL import Image

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(sys.stdout))

MODEL_NAME = os.environ.get("MODEL_NAME", "RN50.pt")
# ENCODE_TYPE could be IMAGE or TEXT
ENCODE_TYPE = os.environ.get("ENCODE_TYPE", "TEXT")

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# defining the model and loading weights into it
def model_fn(model_dir):
    model, preprocess = clip.load(os.path.join(model_dir, MODEL_NAME), device=device)
    return {"model_obj": model, "preprocess_fn": preprocess}

# helper to decode raw image bytes (the original stub returned an undefined variable)
def load_from_bytearray(request_body):
    image_as_bytes = io.BytesIO(request_body)
    return Image.open(image_as_bytes)

# data loading
def input_fn(request_body, request_content_type):
    assert request_content_type in (
        "application/json",
        "application/x-image",
    ), f"{request_content_type} is an unknown type."
    if request_content_type == "application/json":
        data = json.loads(request_body)["inputs"]
    elif request_content_type == "application/x-image":
        data = load_from_bytearray(request_body)
    return data

# inference
def predict_fn(input_object, model):
    model_obj = model["model_obj"]
    # for image preprocessing
    preprocess_fn = model["preprocess_fn"]
    assert ENCODE_TYPE in ("TEXT", "IMAGE"), f"{ENCODE_TYPE} is an unknown encode type."

    # preprocessing
    if ENCODE_TYPE == "TEXT":
        input_ = clip.tokenize(input_object).to(device)
    elif ENCODE_TYPE == "IMAGE":
        input_ = preprocess_fn(input_object).unsqueeze(0).to(device)

    # inference
    with torch.no_grad():
        if ENCODE_TYPE == "TEXT":
            prediction = model_obj.encode_text(input_)
        elif ENCODE_TYPE == "IMAGE":
            prediction = model_obj.encode_image(input_)
    return prediction

# serialize the prediction result into the desired response content type
def output_fn(predictions, content_type):
    assert content_type == "application/json"
    res = predictions.cpu().numpy().tolist()
    return json.dumps(res)

The solution requires additional Python packages during model inference, so you can provide a requirements.txt file to allow SageMaker to install additional packages when hosting models:

%%writefile code/requirements.txt
ftfy
regex
tqdm
git+https://github.com/openai/CLIP.git

You use the PyTorchModel class to create an object containing the Amazon S3 location of the model artifacts and the inference entry point details. You can use the object to create batch transform jobs or deploy the model to an endpoint for online inference. See the following code:

from sagemaker.pytorch import PyTorchModel
from sagemaker import get_execution_role, Session

role = get_execution_role()
shared_params = dict(
    entry_point="clip_inference.py",
    source_dir="code",
    role=role,
    model_data="s3:////model.tar.gz",
    framework_version="1.9.0",
    py_version="py38",
)

clip_image_model = PyTorchModel(
    env={"MODEL_NAME": "RN50.pt", "ENCODE_TYPE": "IMAGE"},
    name="clip-image-model",
    **shared_params
)
clip_text_model = PyTorchModel(
    env={"MODEL_NAME": "RN50.pt", "ENCODE_TYPE": "TEXT"},
    name="clip-text-model",
    **shared_params
)

Batch transform to encode item images into embeddings

Next, we use the CLIP model to encode item images into embeddings and use SageMaker batch transform to run batch inference. Before creating the job, use the following code snippet to copy item images from the Amazon Berkeley Objects Dataset public S3 bucket to your own bucket. The operation takes less than 10 minutes.
from multiprocessing.pool import ThreadPool
import boto3
from tqdm import tqdm
from urllib.parse import urlparse

s3_sample_image_root = "s3:///"
s3_data_root = "s3://amazon-berkeley-objects/images/small/"

client = boto3.client("s3")

def upload_(args):
    client.copy_object(
        CopySource=args["source"],
        Bucket=args["target_bucket"],
        Key=args["target_key"],
    )

arguments = []
for idx, record in dataset.iterrows():
    argument = {}
    argument["source"] = (s3_data_root + record.path)[5:]
    argument["target_bucket"] = urlparse(s3_sample_image_root).netloc
    argument["target_key"] = urlparse(s3_sample_image_root).path[1:] + record.path
    arguments.append(argument)

with ThreadPool(4) as p:
    r = list(tqdm(p.imap(upload_, arguments), total=len(dataset)))

Next, you perform inference on the item images in a batch manner. The SageMaker batch transform job uses the CLIP model to encode all the images stored in the input Amazon S3 location and uploads the output embeddings to an output S3 folder. The job takes around 10 minutes.

batch_input = s3_sample_image_root + "/"
output_path = f"s3:///inference/output"

clip_image_transformer = clip_image_model.transformer(
    instance_count=1,
    instance_type="ml.c5.xlarge",
    strategy="SingleRecord",
    output_path=output_path,
)

clip_image_transformer.transform(
    batch_input,
    data_type="S3Prefix",
    content_type="application/x-image",
    wait=True,
)

Load the embeddings from Amazon S3 into a variable, so you can ingest the data into OpenSearch Service later:

embedding_root_path = "./data/embedding"
s3down.download(output_path, embedding_root_path)

embeddings = []
for idx, record in dataset.iterrows():
    embedding_file = f"{embedding_root_path}/{record.path}.out"
    embeddings.append(json.load(open(embedding_file))[0])

Create an ML-powered unified search engine

This section discusses how to create a search engine that uses k-NN search with embeddings. This includes configuring an OpenSearch Service cluster, ingesting item embeddings, and performing free text and image search queries.

Set up the OpenSearch Service domain using k-NN settings

Earlier, you created an OpenSearch cluster. Now you're going to create an index to store the catalog data and embeddings. You can configure the index settings to enable the k-NN functionality using the following configuration:

index_settings = {
    "settings": {
        "index.knn": True,
        "index.knn.space_type": "cosinesimil"
    },
    "mappings": {
        "properties": {
            "embeddings": {
                "type": "knn_vector",
                "dimension": 1024  # the size of the generated embeddings; 1024 for RN50
            }
        }
    }
}

This example uses the Python Elasticsearch client to communicate with the OpenSearch cluster and create an index to host your data. You can run %pip install elasticsearch in the notebook to install the library.
See the following code:

import boto3
import json
from requests_aws4auth import AWS4Auth
from elasticsearch import Elasticsearch, RequestsHttpConnection

index_name = "clip-index"  # moved to module level so later cells can reference it

def get_es_client(host="", port=443, region=""):
    # host and region are left blank intentionally; fill in your
    # OpenSearch Service domain endpoint and AWS Region
    credentials = boto3.Session().get_credentials()
    awsauth = AWS4Auth(
        credentials.access_key,
        credentials.secret_key,
        region,
        "es",
        session_token=credentials.token,
    )
    es = Elasticsearch(
        hosts=[{"host": host, "port": port}],
        http_auth=awsauth,
        use_ssl=True,
        verify_certs=True,
        connection_class=RequestsHttpConnection,
        timeout=60,  # for connection timeout errors
    )
    return es

es = get_es_client()
es.indices.create(index=index_name, body=json.dumps(index_settings))

Ingest image embedding data into OpenSearch Service

You now loop through your dataset and ingest item data into the cluster. The data ingestion for this practice should finish within 60 seconds. It also runs a simple query to verify that the data has been ingested into the index successfully. See the following code:

# ingest_data_into_es
for idx, record in tqdm(dataset.iterrows(), total=len(dataset)):
    body = record[["item_name_in_en_us"]].to_dict()
    body["embeddings"] = embeddings[idx]
    es.index(index=index_name, id=record.item_id, doc_type="_doc", body=body)

# check that the data is indeed in ES
res = es.search(
    index=index_name,
    body={"query": {"match_all": {}}},
    size=2,
)
assert len(res["hits"]["hits"]) > 0

Perform a real-time query

Now that you have a working OpenSearch Service index that contains embeddings of item images as our inventory, let's look at how to generate embeddings for queries. You need to create two SageMaker endpoints to handle text and image embeddings, respectively. You also create two functions that use the endpoints to encode images and texts. For the encode_name function, you prepend "this is a" to an item name to translate the item name into a sentence-style item description. memory_size_in_mb is set to 6 GB to serve the underlying Transformer and ResNet models. See the following code:

# imports added for completeness; the original snippet assumes these
# SageMaker SDK classes are already in scope
from sagemaker.serverless import ServerlessInferenceConfig
from sagemaker.serializers import JSONSerializer, IdentitySerializer
from sagemaker.deserializers import JSONDeserializer

text_predictor = clip_text_model.deploy(
    instance_type="ml.c5.xlarge",
    initial_instance_count=1,
    serverless_inference_config=ServerlessInferenceConfig(memory_size_in_mb=6144),
    serializer=JSONSerializer(),
    deserializer=JSONDeserializer(),
    wait=True,
)

image_predictor = clip_image_model.deploy(
    instance_type="ml.c5.xlarge",
    initial_instance_count=1,
    serverless_inference_config=ServerlessInferenceConfig(memory_size_in_mb=6144),
    serializer=IdentitySerializer(content_type="application/x-image"),
    deserializer=JSONDeserializer(),
    wait=True,
)

def encode_image(file_name="./data/images/0e9420c6.jpg"):
    with open(file_name, "rb") as f:
        payload = f.read()
    payload = bytearray(payload)
    res = image_predictor.predict(payload)
    return res[0]

def encode_name(item_name):
    res = text_predictor.predict({"inputs": [f"this is a {item_name}"]})
    return res[0]

You can first plot the picture that will be used:

item_image_path, item_name = get_image_from_item_id(item_id="B0896LJNLH", return_image=False)
feature_vector = encode_image(file_name=item_image_path)
print(len(feature_vector))  # the deserialized embedding is a plain list, so len() rather than .shape
Image.open(item_image_path)

Let's look at the results of a simple query.
After retrieving results from OpenSearch Service, you get the list of item names and images from the dataset:

import matplotlib.pyplot as plt  # import added; needed by display_images below

def search_products(embedding, k=3):
    body = {
        "size": k,
        "_source": {
            "exclude": ["embeddings"],
        },
        "query": {
            "knn": {
                "embeddings": {
                    "vector": embedding,
                    "k": k,
                }
            }
        },
    }
    res = es.search(index=index_name, body=body)
    images = []
    for hit in res["hits"]["hits"]:
        id_ = hit["_id"]
        image, item_name = get_image_from_item_id(id_)
        image.name_and_score = f'{hit["_score"]}:{item_name}'
        images.append(image)
    return images

def display_images(
    images,  # a list of PIL images (the original annotation was not valid Python)
    columns=2,
    width=20,
    height=8,
    max_images=15,
    label_wrap_length=50,
    label_font_size=8,
):
    if not images:
        print("No images to display.")
        return
    if len(images) > max_images:
        print(f"Showing {max_images} images of {len(images)}:")
        images = images[0:max_images]
    height = max(height, int(len(images) / columns) * height)
    plt.figure(figsize=(width, height))
    for i, image in enumerate(images):
        plt.subplot(int(len(images) / columns + 1), columns, i + 1)
        plt.imshow(image)
        if hasattr(image, "name_and_score"):
            plt.title(image.name_and_score, fontsize=label_font_size)

images = search_products(feature_vector)
display_images(images)

The first item has a score of 1.0, because the two images are the same. The other items are different types of glasses in the OpenSearch Service index.

You can use text to query the index as well:

feature_vector = encode_name("drinkware glass")
images = search_products(feature_vector)
display_images(images)

You're now able to get three pictures of water glasses from the index. You can find the images and text within the same latent space with the CLIP encoder. Another example of this is to search for the word "pizza" in the index:

feature_vector = encode_name("pizza")
images = search_products(feature_vector)
display_images(images)

Clean up

With a pay-per-use model, Serverless Inference is a cost-effective option for infrequent or unpredictable traffic patterns. If you have a strict service-level agreement (SLA) or can't tolerate cold starts, real-time endpoints are a better choice. Using multi-model or multi-container endpoints provides scalable and cost-effective solutions for deploying large numbers of models. For more information, refer to Amazon SageMaker Pricing.

We suggest deleting the serverless endpoints when they are no longer needed. After finishing this exercise, you can remove the resources with the following steps (you can delete these resources from the AWS Management Console, or using the AWS SDK or SageMaker SDK):
1. Delete the endpoints you created.
2. Optionally, delete the registered models.
3. Optionally, delete the SageMaker execution role.
4. Optionally, empty and delete the S3 bucket.
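A minimal sketch of those cleanup steps using the SageMaker SDK objects created earlier in the post (the bucket name is a placeholder):

# Delete the serverless endpoints and registered models
text_predictor.delete_endpoint()
image_predictor.delete_endpoint()
clip_text_model.delete_model()
clip_image_model.delete_model()

# Optionally, empty and delete the working S3 bucket
import boto3
bucket = boto3.resource("s3").Bucket("example-clip-search-bucket")  # placeholder name
bucket.objects.all().delete()
bucket.delete()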
Summary

In this post, we demonstrated how to create a k-NN search application using SageMaker and OpenSearch Service k-NN index features. We used a pretrained CLIP model from its OpenAI implementation. The OpenSearch Service ingestion implementation of the post is only used for prototyping. If you want to ingest data from Amazon S3 into OpenSearch Service at scale, you can launch an Amazon SageMaker Processing job with the appropriate instance type and instance count. For another scalable embedding ingestion solution, refer to Novartis AG uses Amazon OpenSearch Service K-Nearest Neighbor (KNN) and Amazon SageMaker to power search and recommendation (Part 3/4).

CLIP provides zero-shot capabilities, which makes it possible to adopt a pretrained model directly without using transfer learning to fine-tune it. This simplifies the application of the CLIP model. If you have pairs of product images and descriptive text, you can fine-tune the model with your own data using transfer learning to further improve the model performance. For more information, see Learning Transferable Visual Models From Natural Language Supervision and the CLIP GitHub repository.

About the Authors

Kevin Du is a Senior Data Lab Architect at AWS, dedicated to assisting customers in expediting the development of their machine learning (ML) products and MLOps platforms. With more than a decade of experience building ML-enabled products for both startups and enterprises, his focus is on helping customers streamline the productionalization of their ML solutions. In his free time, Kevin enjoys cooking and watching basketball.

Ananya Roy is a Senior Data Lab Architect specialized in AI and machine learning, based out of Sydney, Australia. She has been working with a diverse range of customers to provide architectural guidance and help them deliver effective AI/ML solutions via Data Lab engagements. Prior to AWS, she worked as a senior data scientist dealing with large-scale ML models across industries like telco, banking, and fintech. Her experience in AI/ML has allowed her to deliver effective solutions for complex business problems, and she is passionate about leveraging cutting-edge technologies to help teams achieve their goals.

" Improve Patient Safety Intelligence Using AWS AI_ML Services _ AWS for Industries.txt,"
AWS for Industries

Improve Patient Safety Intelligence Using AWS AI/ML Services
by Terrell Rohm, Gang Fu, Dr. Iona Maria Thraen, Sara McLaughlin Wynn, Rod Tarrago, and Stephen Andrews | on 19 JUN 2023 | in Artificial Intelligence, Healthcare, Industries, Public Sector

Today, healthcare organizations rely on a combination of automated and manual processes to compose, review, and classify patient safety reports. These reports are entered manually by front-line clinicians into the RL Datix reporting system. This entry includes both discrete data points and a free-text narrative. Although the data collection process may begin with the digital capture of data, once entered, the data generally remains inaccessible throughout the organization in terms of real-time trending and analysis. Each reporter sees only the adverse events they have reported. Unit and file managers are given broader access relevant to their unit or service line authority, but the data often remains in its raw format due to the textual nature of the event descriptions. As a result, patterns across the organization, such as an increase in infections or medication errors, are unit or service line dependent and appear to be isolated events.

The current analysis of these reports is achieved through a combination of built-in reports and graphics (depending on the software), manual data manipulation, and the display of discrete fields. Analysis is siloed to the respective units or authorities, while an organization-wide or region-wide analysis depends on employing multiple patient safety analysts and data specialists. Additional reports may include separate databases and spreadsheets to triangulate around specific issues.
In academic medical centers (AMCs), this process requires dedicated time, people, and resources. AMCs need a technology solution that can automate the analytical processes to free dedicated resources for much-needed patient care improvement initiatives and activities. As a proof of concept (POC), we focused on the automated analysis of medication-related patient safety reports. The proposed solution intends to reduce manual analytical work and inefficiencies in current workflows, reduce time-to-insight, improve the information extracted from daily reports, and uncover patterns across reports and throughout the organization. We collaborated with University of Utah Health on this POC project, using five years of medication-related patient safety reports to fine-tune several generalized and domain-specific language models using Amazon SageMaker. This approach classifies the severity of errors using discrete fields, identifies high-risk medications from text narratives, and visualizes high-risk medication-related events within the corresponding harm levels.

Solution overview

Amazon Comprehend Medical was used to detect high-risk medications, and the results were summarized in a functional, interactive dashboard built upon Amazon QuickSight. The entire data processing pipeline was automated using an event-driven, serverless architecture via AWS Lambda. Because patient safety reports contain private and sensitive information, all of the services used in this solution are HIPAA eligible, and the project was carried out in a HIPAA-compliant landing zone account. In addition, de-identification of the patient safety reports was achieved using the Amazon Comprehend Medical DetectPHI API, which has been demonstrated in this post and reference solution.

To improve the efficiency of the patient safety reporting process, we refined and compared different transformer-based language models from AWS Partner Hugging Face to effectively detect and classify high-risk medications based on the free-text descriptions in the reports (see Table 1). A sample Jupyter notebook was prepared, and it can be shared with academic medical centers for further customization. The architectural diagram in the following figure outlines the potential steps for patient safety professionals to run this solution on AWS.

Figure 1. Architecture diagram of the solution for patient safety intelligence

Additionally, to provide a secure and compliant machine learning (ML) environment, Amazon SageMaker data encryption, network isolation, authentication, and authorization are set as the default. Key features include:
- Encryption of data at rest in an Amazon Simple Storage Service (Amazon S3) bucket is turned on with your own key stored in AWS Key Management Service (AWS KMS). The extra cost for AWS KMS provides better-controlled security, and the same approach was used in this post.
- Encryption of data at rest in Amazon Elastic File System (Amazon EFS), the home folder for notebook instances, is enabled using the default AWS KMS key (aws/elasticfilesystem).
- The Amazon SageMaker Studio environment is launched within a private VPC. With this network isolation, VPC endpoints provide access to other AWS services, including S3 buckets, through AWS PrivateLink.
- AWS Identity and Access Management (IAM) is used for role-based access control, and it can determine which permissions the SageMaker user can have.
- If you want a secure research environment through a locked-down Virtual Desktop Infrastructure (VDI) without screen copying, you can use Amazon AppStream 2.0 or Amazon WorkSpaces to access an Amazon SageMaker domain presigned URL.
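The post's notebook isn't reproduced here, but the medication-extraction step described above maps to a single Amazon Comprehend Medical API call. A minimal sketch with an invented, non-PHI narrative:

import boto3

cm = boto3.client("comprehendmedical", region_name="us-east-1")

# Invented narrative for illustration; real reports contain PHI and must
# be processed in a HIPAA-compliant environment (and can first be
# de-identified with the DetectPHI API, e.g., cm.detect_phi(Text=...)).
narrative = "Patient received ten times the intended insulin dose; a heparin drip was also running."

response = cm.detect_entities_v2(Text=narrative)
medications = [
    entity["Text"]
    for entity in response["Entities"]
    if entity["Category"] == "MEDICATION"
]
print(medications)  # expected to include 'insulin' and 'heparin'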
This solution leverages AWS Analytics and artificial intelligence/machine learning (AI/ML) services for automatic data processing, information extraction, and AI predictions on patient safety reports. High-alert medications, extracted from the standard high-risk medication list compiled by the Institute for Safe Medication Practices (ISMP), have been consolidated into RxNorm concepts. These were used to map the named entities with alternative synonyms extracted by Amazon Comprehend Medical. They were further analyzed and displayed on an Amazon QuickSight dashboard (see the following figure). The dashboard displays multiple visualizations of the data, both independently from discrete fields (such as counts by Safety Event Codes) and from textual fields (counts of High Alert Medications), and also combines data from both discrete and textual sources, as demonstrated by the combination chart. Finally, the capacity to drill down by individual Patient Safety Codes and the corresponding High Alert Medications is provided. Note that cell sizes of five or less have been removed for privacy purposes. This approach could additionally be constructed by location, time of day, or any other discrete data element.

Figure 2. Example dashboard for high-alert medications extracted by Amazon Comprehend Medical

Outcomes

Using the AI approach described above, a comparison analysis of the POC prediction results is found in Table 1. The general results range in precision from .881 to .901, recall from .874 to .899, accuracy from .874 to .899, and F1 score from .873 to .899, depending on the application.

Table 1. AI model prediction results for classifying the level of harm based on free-text descriptions
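The fine-tuning code itself is not included in the post; the sketch below shows roughly how a transformer classifier for harm levels could be fine-tuned with the Hugging Face transformers library. The model choice, label count, and two-example dataset are illustrative assumptions, not the project's actual configuration:

from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumed: a clinical BERT variant and five harm-level classes
model_name = "emilyalsentzer/Bio_ClinicalBERT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=5)

# Tiny invented dataset standing in for de-identified report narratives
data = Dataset.from_dict({
    "text": [
        "Wrong dose of insulin administered",
        "Near miss, order corrected before dispensing",
    ],
    "label": [3, 0],
})

def tokenize(batch):
    # Truncate long narratives to the model's maximum input length
    return tokenizer(batch["text"], truncation=True, padding="max_length")

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./harm-level-classifier",
        num_train_epochs=1,
        per_device_train_batch_size=8,
    ),
    train_dataset=data,
)
trainer.train()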
Conclusion

Given the success of this POC project, we plan to engage with an AWS Partner to build other use case applications and to test a production-ready system that includes complete clinical data. This data can lead to additional metrics, models, and improvements. Furthermore, given the need for manual entry of clinical information into the patient safety reporting system, efforts are underway to integrate electronic health record (EHR) information into the analysis. ML is an effective tool to improve efficiency, reduce time to insight, and unearth potentially hidden information in medication-related patient safety reports. Given these results, it would be valuable to continue to improve outcome scores, expand this effort to other areas of patient safety reporting, and investigate integration with other clinical and demographic data sources.

TAGS: #healthcare, AI/ML, amazon sagemaker, Patient safety, Personalized health

Terrell Rohm
Terrell Rohm is the Director of Quality Data Analytics & Technology for the Chief Quality Office at the University of Utah Health. He has over 20 years' experience working in the private and public sectors in technology and leadership roles. He leads a department providing data analytics, data engineering, and business intelligence services focusing on healthcare quality. He holds an MBA from the Jon M. Huntsman School of Business at Utah State University and a bachelor's degree in computer science from Brigham Young University.

Gang Fu
Gang Fu is a Healthcare Solution Architect at AWS. He holds a PhD in Pharmaceutical Science from the University of Mississippi and has over ten years of technology and biomedical research experience. He is passionate about technology and the impact it can make on healthcare.

Dr. Iona Maria Thraen
Dr. Iona Maria Thraen holds a PhD in Medical Informatics from the College of Medicine, University of Utah; sixty hours of graduate doctoral social work credits from the College of Social Work, University of Utah; thirty hours of graduate training in economics (Fordham University); a master's degree in social work (University of Nebraska); and an undergraduate degree in psychology with a minor in theology (Creighton University). Dr. Thraen currently holds an appointment as adjunct assistant professor in the Department of Biomedical Informatics and adjunct instructor with the Department of Operations and Information Systems, both at the University of Utah. In her role, Dr. Thraen sets the strategic direction for the department to move from Patient Safety 1.0 to Patient Safety 2.0; manages oversight of personnel, budget, and policy setting; leads patient safety initiatives across the organization in collaboration with Value Engineering, System's Quality, and Nursing Quality; teaches patient safety content to Master of Health Administration students; and participates in patient safety research and development. Finally, Dr. Thraen has been involved in numerous research activities resulting in multiple publications, acknowledgements, and grants.

Sara McLaughlin Wynn
Sara McLaughlin Wynn is an Enterprise Account Manager at AWS. She has spent two decades working with higher education institutions in the Western United States and now supports the AWS mission to accelerate the digital transformation of higher education.

Rod Tarrago
Rod Tarrago, MD, is a Principal Business Development Manager at AWS. He leads clinical informatics for academic medicine. Rod brings 15 years of experience as a chief medical information officer. Clinically, he practiced pediatric critical care medicine for 20 years prior to joining AWS.

Stephen Andrews
Stephen Andrews is the Medication Safety Pharmacist for the University of Utah Health, comprising 5 hospitals and 11 community health care centers. He is responsible for developing the vision and associated strategic plan for an ideal safe medication use system. He obtained his Doctor of Pharmacy from the University of Missouri-Kansas City, completed post-graduate residency training at the University of Kansas Health System, and is a Board-Certified Pharmacotherapy Specialist and Board-Certified Professional in Patient Safety. Stephen is passionate about improving the reliability of safe medication use by incorporating evidence-based strategies and solutions.

" Improving Geospatial Processing Faster using Amazon Aurora with Ozius _ Case Study _ AWS.txt,"
Ozius Develops Biome in 4 Months, Offers Spatial Datasets Using Amazon Aurora (2022)

Learn how Ozius, an Australian environmental intelligence enterprise, uses artificial intelligence and Amazon Aurora to generate data on Australia's vegetation.

Benefits:
- 450x faster with processing environmental data
- 8 hours to process data on all of Australia's vegetation
- 10x more data ingested than its previous system
- 4 months to develop Ozius Biome
- 30 million hectares of data requests following beta testing

The high demand for Biome is primarily due to its strong performance. On the company's previous system, it would have taken Biome 150 days to process the environmental data from all of continental Australia. Now, the solution can complete this task within only 8 hours—450 times faster than before. "Amazon Aurora is a game changer," says Scarth.
"It helped us complete our geospatial processing far faster than I could've possibly imagined." Ozius has also increased the volume of data that it processes by a factor of 10, ingesting over a quarter of a billion data points using Aurora.

Ozius has also improved the resolution of its environmental-intelligence products. Now the company can offer a close-up of vegetation within a 20-by-20-meter area, a huge improvement from the 200-by-200-meter-area resolution previously available. By achieving higher resolution, Ozius can reconstruct Australia's vegetation with greater accuracy and fidelity.

Opportunity | Identifying the Need to Process Satellite Data with a Robust Cloud Solution

Environmental intelligence enterprise Ozius strives to deliver advanced analytics to its customers through Ozius Biome (Biome), its proprietary solution that synthesizes environmental data from earth-observation satellites and spaceborne light-detection-and-ranging (lidar) technologies. Because Ozius gathers millions of data points from these satellites, it wanted to find a cloud service that would work alongside its existing PostgreSQL databases with PostGIS, a spatial database extender for PostgreSQL databases, and generate enough compute power to accelerate its processing time.

Before using Aurora, the enterprise relied solely on its on-premises PostgreSQL with PostGIS databases to process the data points that it collected from satellites. On its previous system, it would have taken Ozius around 150 days to process the nearly 170 million data points that it gathered from continental Australia's topography. To reduce the amount of time spent processing data, the Ozius team began searching for a robust database solution.

"We shopped around with several cloud service providers," says Peter Scarth, data science lead and chief technology officer at Ozius. "We chose Amazon Aurora because it is a readily supported, high-quality database solution within a scalable framework." Moreover, the Ozius team received technical support from the AWS team during its implementation and any time that it needed help troubleshooting.
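Because Aurora is PostgreSQL compatible, existing PostGIS spatial SQL carries over largely unchanged. The following is a rough sketch of the kind of per-cell spatial aggregation involved; the connection details, table, and coordinates are invented, not Ozius's actual schema:

import psycopg2  # standard PostgreSQL driver, works against Aurora PostgreSQL

# Placeholder Aurora PostgreSQL connection details
conn = psycopg2.connect(
    host="example-cluster.cluster-abc123.ap-southeast-2.rds.amazonaws.com",
    dbname="biome",
    user="analyst",
    password="example-password",
)

with conn.cursor() as cur:
    # Count lidar returns inside one roughly 20 m x 20 m cell, the sort
    # of aggregation a 20-by-20-meter vegetation product might rely on
    cur.execute("CREATE EXTENSION IF NOT EXISTS postgis;")
    cur.execute(
        """
        SELECT COUNT(*)
        FROM lidar_points AS p
        WHERE ST_Within(
            p.geom,
            ST_MakeEnvelope(147.0000, -35.0000, 147.0002, -34.9998, 4326)
        );
        """
    )
    print(cur.fetchone()[0])

conn.close()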
“Working on AWS gives us a whole lot of opportunities and provides better products to our customers.”

About Ozius
Based in Australia, Ozius is a small enterprise that provides earth-observation analytics and intelligence to both public and private sectors across many industries, including natural capital markets and government, energy, and defense sectors. The company conceptualized Biome in 2021, identifying the need to produce large datasets that would facilitate a highly accurate reconstruction of Australia’s forest and plant canopy using artificial intelligence and lidar technologies. With Biome, its customers can identify carbon-trading opportunities, monitor deforestation, prepare for bushfires, and detect landscape changes. Ozius has experienced an increased demand for this type of intelligence as more companies roll out environmental conservation and net-zero initiatives.

Benefits of AWS
450x faster processing of environmental data
10x more data ingested than its previous system
30 million hectares of data requests following beta testing
4 months to develop Ozius Biome
8 hours to process data on all of Australia’s vegetation

AWS Services Used
Amazon Aurora is a relational database management system (RDBMS) built for the cloud with full MySQL and PostgreSQL compatibility. Aurora gives you the performance and availability of commercial-grade databases at one-tenth the cost.

Opportunity | Identifying the Need to Process Satellite Data with a Robust Cloud Solution
Ozius began exploring different solutions from Amazon Web Services (AWS) and other third-party cloud providers. In July 2021, it decided to adopt Amazon Aurora, a MySQL- and PostgreSQL-compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open-source databases. Compared to its previous system, the company ingests up to 10 times more data points and processes them 450 times faster.

Solution | Improving Performance and Cost Savings Using Amazon Aurora
In July 2021, Ozius worked alongside the AWS team to combine Aurora with its on-premises databases and accelerate the development of Biome. By November 2021, Ozius launched beta testing for Biome, reaching this milestone within a much shorter timeline than the company had originally expected.
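Because Aurora is fully PostgreSQL compatible, PostGIS-style spatial SQL of the kind Ozius ran on premises can carry over largely unchanged. The sketch below is illustrative only; the cluster endpoint, credentials, table, and column names are hypothetical stand-ins, not Ozius’s actual schema. It shows the sort of 20-meter grid aggregation that a 20-by-20-meter vegetation product implies, run against an Aurora PostgreSQL cluster with the standard psycopg2 driver.

import psycopg2  # standard PostgreSQL driver; Aurora PostgreSQL is wire-compatible

# Endpoint, credentials, table, and columns below are placeholders.
conn = psycopg2.connect(
    host="biome.cluster-xxxxxxxxxxxx.ap-southeast-2.rds.amazonaws.com",
    dbname="biome",
    user="analyst",
    password="example-password",
)

# Snap each lidar return to a 20 m grid cell and summarize canopy height
# per cell, restricted to a bounding box in an Australian projected CRS.
GRID_QUERY = """
    SELECT ST_SnapToGrid(geom, 20.0)  AS cell,
           AVG(canopy_height_m)       AS mean_canopy_height_m,
           COUNT(*)                   AS return_count
    FROM lidar_returns
    WHERE geom && ST_MakeEnvelope(%s, %s, %s, %s, 3577)  -- GDA94 / Australian Albers
    GROUP BY cell
"""

with conn, conn.cursor() as cur:
    cur.execute(GRID_QUERY, (1500000, -4000000, 1510000, -3990000))
    for cell, height_m, n in cur.fetchmany(10):
        print(height_m, n)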
“We only spent 4 months developing our Biome solution,” says Alisa Starkey, founder, director, and chief science officer at Ozius. “That timeline for new, national-scale product development is unheard of in our industry.”

Outcome | Opening a World of Possibilities by Launching Biome to the Public
During beta testing, Ozius sold data for approximately 10 million hectares to early bird stakeholders. Because Ozius outperformed its sales goals, it closed its early bird enrollment for Biome in December 2021. “The feedback that we have received has been incredible,” says Starkey. “We’re able to service lots of small queries really quickly, and we’ve received several queries to deliver data across large areas and even whole states.” Since completing this phase of the project, the company has received new sales leads every week, and its customers have placed data-order requests for up to 30 million hectares."

Improving Hiring Diversity and Accelerating App Development on AWS with Branch Insurance _ Case Study _ AWS.txt,"Improving Hiring Diversity and Accelerating App Development on AWS with Branch Insurance

Learn how Branch Insurance accelerated app development using AWS AppSync.

Branch Insurance (Branch) had goals for its internal development teams that were as ambitious as its efforts to provide uniquely simple insurance policies to its customers. The startup wanted to take an all-in approach to serverless architecture using Amazon Web Services (AWS) to make its infrastructure scalable, accelerate developer training, and simplify deployments.

Branch built an API hub using AWS AppSync, which creates serverless GraphQL and Pub/Sub APIs that simplify application development through a single endpoint to securely query, update, or publish data. The company also used a serverless architecture to empower its junior developers and diversify its workforce. As a result, Branch drastically reduced the amount of time and resources that it needed to deploy updates and maintain its technology stack.

Benefits of AWS
28% more Black engineers and 26% more Hispanic or Latino engineers than industry averages
10% more female engineers than the industry average
3% of typical cost for similarly sized startups
4 products launched in just 3 years with a team of fewer than 20 developers
6-month acceleration in app development velocity

Opportunity | Off-Loading Infrastructure Maintenance Work and Diversifying Hiring
Fast-growing insurance technology startup Branch set out to radically simplify the end-user experience for insurance customers by offering bindable prices based on just a couple of simple pieces of information: the customer’s name and address. “One of the things that makes us different is how quickly you can get a rate you can purchase,” says Ivan Herndon, vice president of engineering at Branch.

However, offering this simplicity requires powerful infrastructure to process data quickly and store it efficiently and securely in compliance with regulations. Branch has been a serverless-native company on AWS since its founding in 2017 as a team of two. The startup wanted to use managed services to off-load as much of the infrastructure maintenance work as possible and reduce bespoke backend code to simplify its logic and improve scalability. “AWS has consistently provided better services that we can use to hand off more of the undifferentiated heavy lifting,” says Joe Emison, cofounder and chief technology officer of Branch. “By using AWS, we can focus our valuable time on what differentiates Branch.”

As the startup grew, it also recognized several challenges with the existing job market. The company wanted to avoid the typical cycle of hiring a lot of senior developers because that practice excluded many talented developers from underrepresented groups in the software industry. “It can be difficult to find experienced developers who are willing to learn and adapt to the way your company wants to do things,” says Herndon. To break out of that constrained hiring market, Branch decided to focus on hiring junior developers and upskilling them through an in-house boot camp program based on its specific technology stack.

One of the biggest benefits of building on AWS has been the ability to duplicate environments and run multiple environments on the same configurations for staging, development, and production. “With this setup, we can be much more confident in our ability to test,” says Herndon. “Developers have more time for working with the code because they don’t have to wait for a feature to be scheduled on a single staging environment.” Doing a full deployment on AWS now takes just 10–15 minutes for Branch.
On average, the company deploys 5 times per week, and each time it saves a significant amount of time and resources that translates to increased developer productivity. In all, Branch has accelerated its development cycles by an estimated 6 months. “Using serverless technology on AWS, we’ve replaced what would be an entire team with a system that’s relatively cheap,” says Emison. The company estimates that it spends just 3 percent as much as similarly sized startups.

Meanwhile, as developers come in from the boot camp, Branch creates new environments for them quickly on AWS. Further, new hires are better prepared to use the company’s serverless architecture so that they can more quickly get started building great products. The boot camp has also increased the diversity of Branch’s workforce. One-third of Branch’s engineering team is Black and one-third is Hispanic or Latino, much higher than the industry averages of 5 percent and 7 percent, respectively. In addition, Branch has 10 percent more female engineers than the industry average. “We’re trying to help these new hires acclimate more quickly to our team, but all of the skills we’re teaching are transferrable to other companies,” says Herndon. In that way, it’s also helping create a more diverse talent pool for all companies building in the cloud.

With this shift from hiring experience to nurturing expertise, Branch aimed to improve the diversity of its workforce while easing the onboarding process for new hires. It designed its boot camp curriculum to focus on the AWS services and serverless architecture that its developers use and build on every day. “Building on AWS works very well for us, and it scales seamlessly,” says Herndon. “We don’t have to worry about security compliance because it’s built into AWS services.” In addition, Branch leverages a fully typed architecture, with TypeScript in its frontend code and a typed schema in its AppSync API hub, to create guardrails for its developers. Using JavaScript (TypeScript) in both frontends and backends also makes it much easier for each developer to be a full-stack developer at Branch.
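That single typed GraphQL endpoint is simple to exercise from a client. The sketch below is illustrative only: the user pool app client ID, API URL, and query are hypothetical stand-ins, since Branch’s actual schema is private. The two calls shown, Amazon Cognito’s InitiateAuth followed by a token-authorized POST to an AWS AppSync GraphQL endpoint, are the standard pattern for the kind of login flow the company describes.

import json
import urllib.request
import boto3

COGNITO_CLIENT_ID = "example-app-client-id"  # placeholder app client
APPSYNC_URL = "https://example123.appsync-api.us-east-1.amazonaws.com/graphql"

# 1. Authenticate the member against the Amazon Cognito user pool.
idp = boto3.client("cognito-idp")
auth = idp.initiate_auth(
    ClientId=COGNITO_CLIENT_ID,
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "member@example.com", "PASSWORD": "..."},
)
id_token = auth["AuthenticationResult"]["IdToken"]

# 2. Call the single AppSync GraphQL endpoint; AppSync validates the JWT
#    before any resolver or business logic runs.
payload = {"query": 'query { policy(id: "123") { status premium } }'}
request = urllib.request.Request(
    APPSYNC_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": id_token},
)
print(json.loads(urllib.request.urlopen(request).read()))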
Solution | Using AWS AppSync Accelerated App Development Cycles by 6 Months for Branch
Branch uses AWS AppSync as the foundation for its backend infrastructure and API service. AWS AppSync receives all the requests from the company’s website and mobile app, filters out malicious requests, makes sure each request is properly formatted, and finally initiates the proper business logic. The company also manages the authorization flow using libraries from AWS Amplify, open-source client libraries that developers can use to build cloud-powered mobile and web apps. “Branch’s entire backend, including all business logic and transactional data, runs on AWS AppSync,” says Emison. “By connecting AWS AppSync to AWS Amplify, the amount we have to deal with operations is extremely minimal.”

Branch uses the scalability of Amazon DynamoDB, a key-value and document database that delivers single-digit millisecond performance at virtually any scale, to handle as much traffic as it needs. Meanwhile, the startup stores all member information on Amazon Cognito, which businesses can use to add sign-up, sign-in, and access control to web and mobile apps quickly and easily. Branch has made user authentication effortless by using AWS AppSync to route each user login request to Amazon Cognito. “One of the magical parts of AWS AppSync is how well it connects to Amazon Cognito to automatically respond to authentication requests,” says Emison.

Outcome | Building Products on 'Easy Mode' Using AWS Services
“Building a product on AWS is like doing it on ‘easy mode’ because there’s so much that’s simplified by using managed services,” says Emison. “We just write business logic and interfaces. That’s the great benefit of using AWS.”

In just 3 years, Branch launched four insurance products (home, auto, renters, and umbrella insurance) in 33 US states. And the company did that with fewer than 20 full-time developers. As it continues to grow and hire new developers through its custom boot camp, it plans even more innovative features.

About Branch Insurance
Branch Insurance is an insurance technology startup that provides simple insurance policies and comprehensive bundles to customers in 33 US states. The company was founded in 2017 in Columbus, Ohio.

AWS Services Used
AWS AppSync creates serverless GraphQL and Pub/Sub APIs that simplify application development through a single endpoint to securely query, update, or publish data.
Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data import and export tools.
Amazon Cognito provides an identity store that scales to millions of users, supports social and enterprise identity federation, and offers advanced security features to protect your consumers and business.
AWS Amplify is a complete solution that lets frontend web and mobile developers easily build, ship, and host full-stack applications on AWS, with the flexibility to leverage the breadth of AWS services as use cases evolve. No cloud expertise needed."
Improving Mergers and Acquisitions Using AWS Organizations with Warner Bros. Discovery _ Warner Bros. Discovery Case Study _ AWS.txt,"Improving Mergers and Acquisitions Using AWS Organizations with Warner Bros. Discovery

Learn how Warner Bros. Discovery streamlined the process for mergers and acquisitions (M&A) using AWS Organizations.

Discovery had been working to centralize its account creation to better operate at scale and support its multiple growing business units. As a result, it streamlined the process for any mergers and acquisitions (M&A). In 2022, the company began undergoing its largest merger to date when WarnerMedia and Discovery started to merge into Warner Bros. Discovery (WBD); this process is still ongoing. The main challenge of these kinds of M&As is securely integrating a newly merged or acquired company’s cloud footprint into Discovery’s existing footprint without impacting the day-to-day operations of either business. With a cloud-first approach, keeping the cloud infrastructure accessible, running, secure, and protected is vital.

WBD uses Amazon Web Services (AWS) to centralize account creation as well as automate and secure the M&A process at scale. The company uses AWS Organizations to create new AWS accounts at no additional charge, allocate resources, group accounts, and apply governance policies. Using AWS, WBD decreases time to market, reduces cost, and creates a centralized and automated deployment of new accounts, all while making security a priority.

Benefits of AWS
From days to minutes: reduction in firewall rule deployment time
From 2 months to 2 days: reduction in new account creation time for large M&As
Achieved faster time to market
Prevented costs
View and control cloud spend in a single pane of glass

Before 2019, creating a new account could take up to 2 months. Now that the centralized process is used, with defined features and a controlled process, an account can be configured immediately, and the entire delivery is finished within 2 days. WBD also uses the centralized environment to detect and consolidate duplicate implementations.

The Global Cloud Services team began making AWS Organizations a key part of its process in 2019. “As we learned about the capabilities of AWS Organizations and how organizational units could apply the controls and service control policies in a hierarchy, it really suited our goals,” says Kevin Woods, lead cloud solutions architect at WBD.
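Account vending of this kind maps onto a handful of AWS Organizations APIs. The sequence below is a minimal sketch, not WBD’s tooling; the email address, OU ID, and policy ID are placeholders. It shows the basic create, move, and attach steps through which a new account lands under an organizational unit and inherits its service control policies.

import time
import boto3

org = boto3.client("organizations")

MEDIA_OU_ID = "ou-abcd-11111111"  # placeholder organizational unit
BASELINE_SCP_ID = "p-baseline1"   # placeholder service control policy

# 1. Vend a new member account inside the organization.
request_id = org.create_account(
    Email="new-team@example.com",
    AccountName="streaming-dev",
)["CreateAccountStatus"]["Id"]

# 2. Wait for creation to finish, then file the account under an OU so it
#    inherits that OU's service control policies.
while True:
    status = org.describe_create_account_status(
        CreateAccountRequestId=request_id)["CreateAccountStatus"]
    if status["State"] != "IN_PROGRESS":
        break
    time.sleep(5)
account_id = status["AccountId"]

root_id = org.list_roots()["Roots"][0]["Id"]
org.move_account(AccountId=account_id, SourceParentId=root_id,
                 DestinationParentId=MEDIA_OU_ID)

# 3. Guardrails can also be attached directly to the account.
org.attach_policy(PolicyId=BASELINE_SCP_ID, TargetId=account_id)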
Opportunity | Using AWS Organizations to Improve M&As
Discovery had been improving its M&A process for years and now provides customers with an ever-widening portfolio of television, streaming, and gaming content. One of Discovery’s first large M&As was with Scripps in 2018. Discovery learned and matured through the Scripps integration. After the company merged with WarnerMedia to become WBD, a global media and entertainment leader, it created a centralized governance group (the Global Cloud Services team) to efficiently manage its new and old accounts. The team was created to implement governance at scale, using prior lessons learned to create a robust framework for security and governance tooling. The new control policies helped WBD to be proactive instead of reactive by using security baselines to track security findings, a necessity when scaling up services. The company began to treat governance as a product of its internal teams. “We wanted to be able to grow and use the power of the cloud while making sure our development teams had a secure, governed environment,” says Bianca Lankford, vice president of cloud security at WBD.

To make sure that integration processes are smooth, the Global Cloud Services team preconfigures all accounts to include AWS Enterprise Support, which provides concierge-like service that is focused on achieving outcomes and finding success in the cloud. The team then gets out of the way so that internal development teams can operate independently and expedite innovation. “To encourage self-service, it’s important to have centralized guardrails,” says Lankford. “There is a degree of confidence that teams are operating within a standardized guardrail set.”

Solution | Reducing Costs and Speeding Up Account Creation to 2 Days Using AWS
WBD saves time on web application firewall rule deployment by using AWS Organizations to create a centralized deployment model of AWS Firewall Manager to centrally configure and manage firewall rules across accounts. This process reduces deployment time from days to minutes, which is pivotal for events that require expedited deployment of security tooling.

WBD also uses AWS CloudTrail to monitor and record account activity across its AWS infrastructure, tracking user activity and API usage across all of WBD’s AWS accounts. By using AWS CloudTrail and Amazon GuardDuty, WBD benefits from centralized security tooling while adopting governance controls. This centralized view of user activity and API usage in AWS CloudTrail also reduces costs for the company. “When account creation is centralized, we have the ability to view and control cloud spend in a single pane of glass,” says Lankford. “We’re plugged in, our support team is plugged in, and we can manage costs.”

WBD uses the delegated administration capabilities of AWS Organizations to give its teams the capability to centrally manage security services. It protects cloud infrastructure using Amazon GuardDuty, a threat detection service that monitors AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation. WBD deploys Amazon GuardDuty across all accounts at creation, before they are fully integrated. “Amazon GuardDuty is a default configuration,” says Woods. “I don’t think there’s any area where you would not have it in place.”
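Making GuardDuty a default across an organization can be scripted in two short steps. The sketch below assumes a delegated central security account; the account ID is a placeholder, and WBD’s actual automation is not public.

import boto3

SECURITY_ACCOUNT_ID = "111122223333"  # placeholder security account

# 1. From the organization's management account, delegate GuardDuty
#    administration to the central security account.
mgmt = boto3.client("guardduty")
mgmt.enable_organization_admin_account(AdminAccountId=SECURITY_ACCOUNT_ID)

# 2. From that delegated administrator account, turn on auto-enable so
#    every new member account gets a detector the moment it joins.
admin = boto3.client("guardduty")  # assumes admin-account credentials
detector_id = admin.list_detectors()["DetectorIds"][0]
admin.update_organization_configuration(
    DetectorId=detector_id,
    AutoEnable=True,
)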
The 2022 merger of Discovery and WarnerMedia has been the largest to go through the automated accounts deployment process. The company went from 270 accounts to thousands of accounts. By using AWS Organizations account management APIs, WBD has had the building blocks in place to be flexible during this process. The company is also actively integrating cloud environments from its M&As in a secure way. “It’s about how we use the cloud to keep growing,” says Lankford. “When our development teams have a secure, governed environment, they can work without hindrance and get compelling content into the marketplace and into the homes of our consumers.”

Outcome | Paving the Way for Larger M&As
The improved speed of development and deployment translates to a better time to market. “The development teams creating direct-to-consumer products immediately have a place to go,” says Lankford. “Development teams do not need to wait on their cloud environment. Their code can be deployed immediately.” This means that features get to market faster because content is produced faster. By using AWS Organizations and integrating other AWS services, WBD improved deployment time, which helps the company to scale with new growth.

About Warner Bros. Discovery
Warner Bros. Discovery is a global media and entertainment company based in New York City. The company provides customers with a vast portfolio of content in television, streaming, and gaming.

AWS Services Used
AWS Organizations lets you create new AWS accounts at no additional charge. With accounts in an organization, you can easily allocate resources, group accounts, and apply governance policies to accounts or groups.
Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
AWS CloudTrail monitors and records account activity across your AWS infrastructure, giving you control over storage, analysis, and remediation actions.
AWS Firewall Manager is a security management service that allows you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations."

Improving Operational Efficiency with Predictive Maintenance Using Amazon Monitron _ Baxter Case Study _ AWS.txt,"Baxter Improves Operational Efficiency with Predictive Maintenance Using Amazon Monitron

Learn how Baxter reduced unplanned equipment downtime using Amazon Monitron.

Baxter International Inc. (Baxter), a global medical technology leader, is driven by its mission to save and sustain lives. The company’s network of 70 manufacturing sites worldwide operates 24/7 in a highly complex, dynamic, and regulated environment. Every minute of production is critical, and every minute of downtime avoided is valuable not only to the company but also to its customers and patients. Baxter needed an equipment-monitoring solution that could build resiliency into its operations and reduce unplanned equipment downtime.

Baxter looked to Amazon Web Services (AWS) for a predictive maintenance solution that was simple to deploy, equipment agnostic, cost efficient, and scalable. Using Amazon Monitron, an end-to-end condition monitoring system that uses machine learning (ML) to automatically detect abnormal conditions in industrial equipment and lets users implement predictive maintenance to reduce unplanned downtime, Baxter has significantly improved its operational efficiencies by preventing unplanned equipment downtime and emergency repairs.
Benefits of AWS
Improved operational efficiency and quality by automating inspection tasks
Reduced manual inspection time for technicians
500 machine hours of unplanned downtime prevented in one facility

About Baxter International Inc.
Baxter International Inc. is a global medical technology company that helps facilitate patient care through its portfolio of outpatient, hospital, critical care, kidney, and surgical innovations that are available in over 100 countries. With headquarters in the United States and facilities around the world, Baxter strives to deliver high-quality products to treat patients in hospitals, outpatient offices and facilities, and in patient homes.

Opportunity | Using Amazon Monitron to Reduce Unplanned Equipment Downtime
To avoid supply chain disruptions and maintain quality, Baxter needed to build resiliency into its operations, keeping its facilities up and running without unexpected downtime so that the company could deliver lifesaving products to customers and patients on time and achieve its mission of saving and sustaining lives. Baxter uses a wide range of industrial equipment in its utilities, process, and packaging zones to produce medical devices and pharmaceuticals. With around-the-clock operations and precise requirements during production for factors like temperature and product movement, reliable operations are critical.

A predictive maintenance task force at Baxter reviewed failure predictions and the scheduled maintenance logs and determined that vibration and temperature sensors deployed at scale, combined with ML technology, could be a powerful solution for detecting anomalies that could lead to failures of system components. In 2021, Baxter began a proof-of-concept project to use Amazon Monitron, installing wireless sensors to capture vibration and temperature data. In this initial deployment, the company installed 400 Amazon Monitron sensors in 1 month in one of its largest facilities in the United States.

Because Amazon Monitron automatically detects abnormal machine operating states by analyzing vibration and temperature signals using International Organization for Standardization standards and ML models, Baxter could expand quickly without the need for a team with ML expertise. Baxter technicians can review any issues immediately from the Amazon Monitron app and take action. After the success of the proof-of-concept project, Baxter deployed 2,500 Amazon Monitron sensors at its lighthouse facility and plans to install tens of thousands of sensors in additional plants across the United States, Europe, and Asia. “Amazon Monitron costs one-tenth of what other products on the market cost and doesn’t require Baxter to hire dozens of ML engineers,” says A. K. Karan, global senior director of digital transformation at Baxter. “Amazon Monitron is one of the few solutions on the market that can meet our needs for speed, cost efficiency, and scalability for our global breadth of operations.”

“The time to value has been incredibly quick and has added momentum to Baxter’s digital transformation efforts. Amazon Monitron has given us the actionable data needed to maintain the thousands of manufacturing assets in our facilities, allowing us to predict and preempt unplanned equipment downtime,” says Karan. “This gives us a big advantage in creating reliable and sustainable supply for our customers, which is especially critical given supply chain challenges being felt across the industry.”
Baxter’s previous equipment-monitoring system relied on manual, time-based inspections, which required technicians to walk around the sites to check equipment. The cycle to inspect thousands of manufacturing assets in a facility could take a few weeks. Equipment failure could occur between these inspection cycles and cause unplanned equipment downtime. For some equipment inspections, technicians needed to enter confined spaces, requiring the company to halt operations for safety.

A key motivating factor for switching from a reactive to a predictive maintenance strategy was increasing uptime and reducing maintenance costs by scheduling maintenance rather than responding to emergency repairs. “The power of ML combined with actionable data delivered instantaneously on a mobile app has improved the team’s productivity significantly. This is truly a game changer for us,” says Adam Aldridge, reliability engineering manager.

Solution | Realizing Tangible Value by Saving 500 Machine Hours of Downtime with Amazon Monitron
Since Baxter’s deployment of Amazon Monitron, Baxter has avoided over 500 hours of unplanned machine downtime from over 40 alerts in a short time span at its lighthouse facility. This number of machine hours equates to approximately 7 million units of production, so Baxter can positively affect the lives of about 10,000 patients.

Baxter saw immediate value through the reduction of technicians’ manual inspection time and the ability to rapidly scale to additional facilities. “The speed with which we could deploy the Amazon Monitron devices was incredible,” says Tim Marini, senior director of operations at Baxter. “Sticking on the sensors, downloading the Amazon Monitron application, and getting started happened in minutes.” Part of that value includes a cultural change for technicians to take a proactive rather than reactive approach. “Using Amazon Monitron has helped us change the paradigm from unplanned, unexpected, critical failures to near-real-time monitoring of critical systems,” says Krizay Elenitoba-Johnson, site director at Baxter’s manufacturing facility in Alabama. “We can convert unplanned equipment downtime into planned and well-managed outcomes.”

Outcome | Scaling Globally with a Predictive Maintenance Strategy
Based on the success so far, Baxter plans to scale its use of Amazon Monitron to cover its complete network of 70 manufacturing sites worldwide in a few years. Baxter expects the deployment to continue creating a cultural change at its facilities as it implements a predictive maintenance program and advances the company’s digital transformation to continue using data and insights to improve business processes.

AWS Services Used
Amazon Monitron is an end-to-end condition monitoring system that uses machine learning to automatically detect abnormal conditions in industrial equipment and lets users implement predictive maintenance to reduce unplanned downtime."
Improving Patient Outcomes Using Amazon EC2 DL1 Instances _ Leidos Case Study _ AWS.txt,"Leidos Improves Patient Outcomes Using Amazon EC2 DL1 Instances

Learn how Leidos improved patient outcomes while saving 66 percent on costs to train ML models using Amazon EC2 DL1 Instances.

Leidos, a science and technology solutions leader, builds machine learning (ML) applications that accelerate the ability of public and private health organizations, like the US Department of Veterans Affairs (VA), to get patients the medical care that they need. However, the company’s traditional on-premises infrastructure made it challenging to achieve the performance and cost efficiencies needed by complex ML applications that use large datasets. So Leidos sought advanced compute solutions on Amazon Web Services (AWS) to cost-effectively build ML applications that automate the manual processes of health organizations and help them accelerate diagnosis and treatment of patients.

Benefits of AWS
66% cost savings for model training
60% better price performance
Cut model training time from 8 hours to less than 1 hour for about 2,200 cases a day
Increased speed for claim processing
95–97% precision score, compared with 72% using the hybrid solution

About Leidos
Leidos is a science and technology solutions leader working to address some of the world’s challenges in the defense, intelligence, homeland security, civil, and healthcare markets. It has more than 400 locations in 30 countries.

Opportunity | Using Amazon EC2 DL1 Instances to Cost-Effectively Automate Claims Processing
Leidos provides technology solutions across civil, defense, health, and intelligence sectors. It serves federal health agencies, including the VA and the US Food and Drug Administration (FDA), and commercial organizations, such as hospitals and clinics. QTC, a Leidos subsidiary, is the largest provider of disability and occupational health exam services for veterans, operating 65 US clinics and a network of more than 12,000 private care providers. Processing veterans’ disability claims requires a lot of paperwork: each veteran has to fill out the right disability questionnaire for their claim, which includes prescriptions and medical notes. “Speed and accuracy matter,” says Chetan Paul, vice president of technology and innovation federal health at Leidos. “A delay in processing the claim for a veteran is a delay in getting the right medical care for that veteran.”

Previously, QTC processed claims both manually, using human reviewers, and automatically, using a hybrid environment of virtual machines and Amazon EC2 instances to manage large workloads and datasets. However, that hybrid approach wasn’t fast enough at processing the huge volumes and variety of data involved in claims processing, including images, scientific literature, publications, and text, nor was the price performance optimal for customers’ return on investment.

To improve the speed and cost efficiency of automating claims processing, Leidos became an early adopter of Amazon EC2 DL1 Instances, available on AWS since October 2021. Because Amazon EC2 DL1 Instances feature eight Gaudi accelerators, each with 32 GiB of high-bandwidth memory, they would support Leidos in distributing customers’ training jobs across instances, reducing model training time and cost.

Leidos had extensively used Amazon Elastic Compute Cloud (Amazon EC2), a broad and deep compute solution, and other AWS services that support Amazon EC2. In late 2021, after careful consideration, the company chose to migrate its ML workloads from its on-premises infrastructure to the new Amazon EC2 DL1 Instances. These instances are powered by Gaudi accelerators from Habana Labs, an Intel company and AWS Partner, to deliver low cost-to-train deep learning models for natural-language processing and computer-vision use cases. By migrating its ML development to these instances, Leidos improved performance and decreased compute costs so that its customers could reap greater returns on investment while minimizing manual tasks.

Solution | Using Amazon EC2 DL1 Instances to Cut Model Training Costs for Leidos by 66%
In July 2021, Leidos first piloted the instances in a stand-alone on-premises environment provided by Habana Labs, verifying the instances’ cost-performance ratio and suitability for computer-vision and natural-language processing use cases. In November 2021, the company proposed to develop a pilot using Amazon EC2 DL1 Instances for the VA because the agency was already using AWS as a security-approved Authority to Operate environment. From January to August 2022, Leidos set up the Amazon EC2 DL1 Instances, trained and refined the deep learning models, performed demos, and incorporated feedback from the VA. The setup is expected to go live by the end of 2022, just 1 year after the project started. “For large federal agencies like the VA to move at that speed is significant,” says Paul. “Amazon EC2 DL1 Instances were seamless from both a technology-setup and a development perspective.”

The Leidos team has piloted two use cases on Amazon EC2 DL1 Instances. For the FDA, it developed a pilot to show how a neural network for image processing could be used to analyze chest X-rays of patients with COVID-19 and detect pneumonia early. The second use case took advantage of natural-language processing, using a DistilBERT model, to accelerate claims processing. “With every new technology, we anticipate a steep learning curve,” says Paul. “However, with the extensive user documentation, developer-portal use cases, study guides, and sample code from AWS and Habana Labs, learning was accelerated. Our customer saw that there are plenty of resources and support.”

Now Leidos sees a price-performance ratio of 60 percent and cost savings of 66 percent on model training compared with the on-premises infrastructure, without compromising processing speed or accuracy. The company also reduced model training time from 8 hours to less than 1 hour for about 2,200 cases per day by distributing the training workloads across Amazon EC2 DL1 Instances. “It’s a great benefit to distribute workloads across Amazon EC2 DL1 Instances and aggregate the outcomes,” says Paul. “That scalability is important for our customers that expect their workloads, but not necessarily their workforce, to increase over time.”
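A DistilBERT fine-tuning step on a Gaudi accelerator differs only slightly from standard PyTorch. The sketch below assumes Habana’s SynapseAI PyTorch bridge (habana_frameworks) and the Hugging Face transformers package on a dl1.24xlarge instance; the model choice follows the article, but the labels, batch format, and hyperparameters are placeholders, not Leidos’s pipeline.

import torch
import habana_frameworks.torch.core as htcore  # Habana SynapseAI PyTorch bridge
from transformers import DistilBertForSequenceClassification

device = torch.device("hpu")  # a Gaudi accelerator on a DL1 instance
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

def train_step(batch):
    # batch: dict of tensors from a DataLoader over tokenized claim text
    optimizer.zero_grad()
    outputs = model(
        input_ids=batch["input_ids"].to(device),
        attention_mask=batch["attention_mask"].to(device),
        labels=batch["labels"].to(device),
    )
    outputs.loss.backward()
    htcore.mark_step()  # flush the lazily accumulated graph to the device
    optimizer.step()
    htcore.mark_step()
    return outputs.loss.item()

Scaling out to all eight Gaudi accelerators on an instance, or across several DL1 instances as Leidos describes, would typically layer torch.distributed data parallelism on top of this same step.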
By taking advantage of the distributed computing capabilities of the eight Gaudi accelerators in each Amazon EC2 DL1 instance, and scaling compute by adding Amazon EC2 DL1 Instances as required, Leidos can train models with more data, thus increasing the F1 score, or precision and recall score. On traditional hybrid Amazon EC2 environments, the models had a maximum F1 score of 72 percent. By training on Amazon EC2 DL1 Instances, Leidos increased the F1 score to 95–97 percent. “This makes the reviewers’ lives so much easier,” says Paul. “It eliminates the fatigue and error from a manual review process, and workforce efficiency and productivity jumped: reviewers can process 40 claims in the time that it took to process 1 before. The veterans get to their claims and healthcare much faster.”

Outcome | Applying Amazon EC2 DL1 Instances and ML to Other Use Cases
Leidos plans to use Amazon EC2 DL1 Instances for other use cases, such as electronic health record processing, for the VA, the FDA, and the National Institutes of Health. Amazon EC2 DL1 Instances are well suited to analyzing image data for the FDA’s Center for Devices and Radiological Health and for research on the lungs of patients with COVID-19. “At Leidos, we rank our solutions to our customers using the parameters of speed, scale, security, and usability,” says Paul. “Our solution on Amazon EC2 DL1 Instances checks all the boxes.”

AWS Services Used
Amazon EC2 DL1 Instances, powered by Gaudi accelerators from Habana Labs (an Intel company), deliver low cost-to-train deep learning models for natural-language processing, object detection, and image recognition use cases.
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload."

Improving Search Capabilities and Speed Using Amazon OpenSearch Service with ArenaNet _ ArenaNet Case Study _ AWS.txt,"Improving Search Capabilities and Speed Using Amazon OpenSearch Service with ArenaNet

Learn how online game developer ArenaNet optimized search functionality for players using Amazon OpenSearch Service.

ArenaNet is the developer of the Guild Wars franchise, including one of the most popular massively multiplayer online role-playing games (MMORPGs) in the world, Guild Wars 2. The company sought to optimize the functionality of a unique feature of the game: its direct integration with wiki pages that provide a comprehensive online reference source, written by Guild Wars players. Players were requesting additional features, and ArenaNet wanted a cloud-based data warehouse with the speed and agility to respond to record numbers of users. As its current solution became increasingly expensive to maintain, the company’s small engineering team looked for a more cost-effective managed solution. ArenaNet turned to Amazon Web Services (AWS) and improved the speed and syntax capabilities of its search tools for users while cutting its costs by 50 percent and strengthening the durability of its data warehouse by using Amazon Redshift, which uses SQL to analyze structured and semistructured data across data warehouses, operational databases, and data lakes.

Benefits of AWS
2-second response time for complex search queries
20 million+ uses of improved search functionality
Near-100% game uptime maintained
50% reduction in data warehouse costs

About ArenaNet
Based in Bellevue, Washington, ArenaNet is a video game developer best known for the popular massively multiplayer online role-playing franchise Guild Wars.

Opportunity | Using Amazon OpenSearch Service to Enhance the Player Experience for ArenaNet
Founded in 2000 and acquired by NCSoft in 2002, ArenaNet released the MMORPG Guild Wars in 2005 without a monthly subscription fee. Players go on quests with other players online, exploring fantasy worlds as characters that they create and design themselves, including customizing their outfits and equipment. By 2010, the company had sold nearly 6.5 million copies worldwide. It released Guild Wars 2 in August 2012 and sold 3.5 million copies in its first year to become the fastest-selling MMORPG up to that point. A unique aspect of the game is the ability of players to consult an accompanying Guild Wars wiki, a massive online reference source available through a browser or by typing “/wiki” and clicking an object within the game. Users contribute to and edit the wikis’ nearly 280,000 pages, detailing information about the characters, storylines, and other game content. ArenaNet needed a backend solution that could handle the increasing scale and complexity of the five wikis related to Guild Wars; more than 14,000 editors manage pages available in English, German, French, and Spanish. “Modern MMORPGs are really complicated and filled with features, and the wiki makes the game way more accessible,” says Stephen Clarke-Willson, vice president of engineering at ArenaNet. “It’s like, if you go to a distant country without a travel guide, you don’t know what’s going on. The wiki has become an organic part of the game.”

Guild Wars players had asked ArenaNet to add search features to help them navigate the complexity of the information on the wiki pages. ArenaNet had been using MediaWiki, free open-source software, to process, store, and display information for wiki users. As the Guild Wars wikis continued to grow in scope and complexity, the MediaWiki built-in search engine could not keep up with use that reached up to 400 searches per second.

Solution | Adding Capabilities and Improving Efficiency for Game Players While Cutting Costs by 50%
At the users’ request, in September 2021, ArenaNet implemented Amazon OpenSearch Service, an open-source distributed search and analytics suite derived from Elasticsearch. ArenaNet installed the specific MediaWiki extensions that would help the wikis communicate with Amazon OpenSearch Service. Using Amazon OpenSearch Service, ArenaNet could index wiki content for faster search results while also offloading the search processing from the wikis’ web and database servers onto the dedicated Amazon OpenSearch Service servers. Further, instead of having to spin up multiple clusters to handle a search engine that would at times fall over under heavy loads, ArenaNet worked alongside the Amazon OpenSearch Service team proactively to find work-arounds that streamlined communication between MediaWiki and the AWS service. “After we did that, it was basically plug and play,” says Justin Lloyd, Linux engineer at ArenaNet.
ArenaNet added search functionality, expanded syntax capabilities, and greatly improved the speed of searches for players. “It doesn’t sit there and churn,” says Mitch Sickler, systems engineering manager at ArenaNet. “Users immediately get a return of whatever they searched for.” For example, a user’s search for a character quote used to take so long that the server would time out after 1 minute. “After Amazon OpenSearch Service was working and everything was indexed properly, that same search would take 2 seconds, if that,” says Lloyd.

To further improve querying efficiency and save costs, in January 2022 ArenaNet changed its cloud-based data warehouse solution to Amazon Redshift. The team migrated 100 TB to Amazon Redshift while cutting its costs by 50 percent. ArenaNet’s use of Amazon Redshift helped alleviate significant performance issues from its previous data warehouse solution, which cost more and performed slower because of high search loads, increased traffic, and other factors. “What we like about Amazon Redshift is that it gets less expensive and better over time,” says Clarke-Willson. ArenaNet has also maintained near-100 percent game uptime alongside in-person help from AWS engineers and online support. “They’ve been great at assisting us in what we’re trying to accomplish,” Sickler says. “They strive to anticipate potential friction when we have big releases and try to get ahead of any issues. I’m super appreciative of that.”

Using AWS managed solutions like Amazon OpenSearch Service, ArenaNet reduces the management, monitoring, and maintenance of the wiki pages, which had previously been the responsibility of a single engineer. Plus, because Amazon OpenSearch Service places the database name at the beginning of each key, all the Guild Wars wiki pages share one large cluster instead of requiring the engineer to generate multiple clusters to optimize users’ searches. “Having that single managed Amazon OpenSearch Service cluster was incredibly helpful in spinning up functionality in a relatively short timeframe,” says Lloyd.
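The index-then-query flow behind those wiki searches can be sketched with the opensearch-py client. The domain endpoint, index name, and document below are hypothetical (MediaWiki’s search extension manages the real mappings, and authentication configuration is omitted), but the calls are the standard OpenSearch API.

from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "search-guildwars-example.us-west-2.es.amazonaws.com",
            "port": 443}],
    use_ssl=True,
)

# Index one wiki page into a shared cluster...
client.index(index="wiki_content", id="12345", body={
    "title": "Divinity's Reach",
    "text": "Divinity's Reach is the capital of the human nation of Kryta...",
})

# ...and run the kind of full-text query a player's search triggers.
results = client.search(index="wiki_content", body={
    "query": {"match_phrase": {"text": "capital of Kryta"}},
})
for hit in results["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])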
The backend changes to the Guild Wars wiki have prompted overwhelmingly positive comments from players on social media. “We see how grateful people are to have the wikis by how much activity the wikis get,” says Lloyd.

Outcome | Continuing to Optimize the Player Experience
ArenaNet plans further optimizations in speed and functionality for its search capabilities, which have been used more than 21 million times. The company is also looking into using Amazon OpenSearch Service for observability so that it can centralize and better analyze logs generated by MediaWiki. “Using Amazon OpenSearch Service helps our search functions to work so much better and be much more powerful,” Lloyd says. “We don’t have to manage it ourselves, which is another huge benefit.”

AWS Services Used
Amazon OpenSearch Service makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more. OpenSearch is an open-source, distributed search and analytics suite derived from Elasticsearch.
Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale."

Improving Transportation with Mobility Data Using Amazon EMR and Serverless Managed Services _ Arity Case Study _ AWS.txt,"Arity Improves Transportation with Mobility Data Using Amazon EMR and Serverless Managed Services

Learn how Arity modernized its data collection infrastructure using Amazon EMR.

Arity, a mobility and data analytics company that focuses on improving transportation, wanted to modernize its data collection infrastructure. Arity collects large amounts of driving data and uses predictive analytics to build solutions with the goal of turning that data into behavioral insights to make transportation smarter and safer for everyone. Since its inception, Arity has collected and analyzed more than a trillion miles of driving data. Looking to improve its data infrastructure, Arity decided that by deepening its use of Amazon Web Services (AWS), it could more efficiently use smart technologies while managing costs.

Benefits of AWS
30% reduction in monthly infrastructure costs
20% reduction in Amazon EC2 hours
Drives innovation
Improved use of smart technologies
Modernized to a fully managed architecture

Arity began its modernization process by migrating to Amazon EMR, a cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning using open-source frameworks. Arity uses Amazon EMR for data science analytics use cases, empowering the company to process and access data that is used to make informed business decisions. As a managed solution, Amazon EMR simplified the overhead of running infrastructure and provided Arity with options to reduce total cost of ownership. Arity also uses Amazon EMR to decrease the overhead required to run its compute instances. Using Amazon EMR and other AWS services, Arity reduced by 20 percent the number of hours it needed to manage on Amazon Elastic Compute Cloud (Amazon EC2), secure and resizable compute capacity for virtually any workload, resulting in compute cost savings.

Arity was facing operational challenges associated with maintaining Kafka clusters, keeping them up to date with the latest security patches and bug scans and diagnosing the clusters when issues arose. To move away from having to keep detailed knowledge of individual services and to increase focus on its business logic, Arity transitioned to Amazon Managed Streaming for Apache Kafka (Amazon MSK), which makes it simple to ingest and process streaming data in near real time with fully managed Apache Kafka. Using Amazon MSK to manage Kafka, Arity reduced operational overhead and associated costs by taking advantage of automatic scaling to use clusters more efficiently, such as by reducing cluster idle time during periods of lower use. Arity’s modernization reduced monthly infrastructure costs by 30 percent, and the cost per trip connection decreased by 36 percent. These savings mean that the company can better devote its resources to core business needs instead of self-managing its telematics solution.
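From a producer’s point of view, moving to Amazon MSK changes little: clients keep speaking the Kafka protocol to brokers that MSK hosts and patches. A minimal sketch with the kafka-python library follows; the broker address, topic name, and event fields are placeholders rather than Arity’s actual stream, and an IAM-authenticated cluster would need additional SASL configuration.

import json
from kafka import KafkaProducer  # kafka-python client library

producer = KafkaProducer(
    bootstrap_servers=["b-1.tripcluster.example.kafka.us-east-1.amazonaws.com:9094"],
    security_protocol="SSL",  # MSK's TLS listener
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# One illustrative trip event from a driver's device.
producer.send("trip-events", {
    "trip_id": "t-0001",
    "recorded_at": "2022-06-01T12:00:00Z",
    "speed_kph": 62.5,
    "hard_brake": False,
})
producer.flush()  # block until the event is acknowledged by the brokers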
Opportunity | Improving the Use of AWS Services to Reduce Instance Needs for Arity by 20 Percent
Already on AWS, Arity wanted to better use these services to modernize its data infrastructure and architecture with the goal of freeing up developer resources and reinvesting them in its business to drive innovation. Ultimately, Arity knew that achieving these goals would reduce challenges associated with managing IT infrastructure, such as clusters. “The overhead of maintaining our infrastructure was becoming an operational burden,” says Reza Banikazemi, director of system architecture at Arity. To reduce its operational overhead and better allow its team to focus on delivering business outcomes, Arity decided to move from its self-managed processes to managed offerings on AWS.

Solution | Modernizing Infrastructure to Free Resources and Focus on Business
Arity implemented a two-pronged approach to its modernization. First, to help prevent disruption of its road map and get the most value, it chose services offered by AWS that fit well within its existing architecture, which meant that Arity could efficiently shift to the new solution. Second, while Arity was focused on migrating its existing infrastructure, it started changing its architectural approach so that it could use its new solution from the beginning of product development.

Arity uses the self-managing ability of Amazon Kinesis Data Analytics to transform and analyze streaming data in near real time using Apache Flink. On Amazon Kinesis Data Analytics, Arity generates driving behavior insights based on collated driving data. As a bridge between data analysis on Amazon EMR and near-real-time data analyses, and to connect data streams, Arity uses Amazon Kinesis Data Firehose, an extract, transform, load service that reliably captures, transforms, and delivers streaming data to data lakes, data stores, and analytics services. Arity pulls data from its streaming infrastructure for downstream processing into Amazon Simple Storage Service (Amazon S3), an object storage service offering scalability, data availability, security, and performance, and then accesses the data from Amazon S3 using Amazon EMR and Amazon Athena, an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.
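Querying that S3 data with Amazon Athena is a three-step API exchange: start the query, poll for completion, fetch results. The database, table, and results bucket below are hypothetical stand-ins for whatever Arity actually catalogs.

import time
import boto3

athena = boto3.client("athena")

start = athena.start_query_execution(
    QueryString=(
        "SELECT date_trunc('hour', event_time) AS hour, COUNT(*) AS trips "
        "FROM driving_events WHERE event_date = DATE '2022-06-01' "
        "GROUP BY 1 ORDER BY 1"
    ),
    QueryExecutionContext={"Database": "telematics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = start["QueryExecutionId"]

while True:  # poll until the query reaches a terminal state
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]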
Outcome | Driving Down Management Burden
Modernizing its architecture has led Arity to increase its development capacity because of lower associated solution management overhead. Developers can better focus on their jobs, innovate faster, and improve product time to market. Arity also adds improvements to its products faster and identifies and resolves events sooner. “We can now solve customer challenges in weeks, where before it would have taken quarters,” says Banikazemi.

AWS offers support that helps Arity understand and use its products. “We receive great support from the teams at AWS,” says Banikazemi. “When we need something, they are within reach.” Arity looks at training as an investment in its team that enhances its architecture, and it takes advantage of the personalized training opportunities offered by AWS. The company recently offered a well-received training event and plans to offer more training in the future.

Going forward, Arity hopes to expand its use of AWS serverless technologies to eliminate the need to manage servers so that it can reduce infrastructure management tasks, implement automatic scaling, and optimize costs. “Working on AWS has been great. We made a lot of good strides this year, and we’re looking forward to continuing it next year,” says Banikazemi.

About Arity
Arity is a mobility and data analytics company that focuses on improving transportation. The company helps to better understand and predict driving behavior at scale and delivers those insights using solutions that help companies to deliver smarter, safer, and more economical services to consumers. Founded in 2016 by The Allstate Corporation, Arity uses telematics to collect and analyze driving data. Telematics refers to the integrated use of communications and information technology to transmit, store, and receive information from telecommunications devices and send it to remote objects over a network. Arity uses that collected and analyzed driving data to help companies make informed choices and reduce costs, including costs for insurance companies, mobile app providers, cities and their departments of transportation, marketers, and more.

AWS Services Used
Amazon EMR is the industry-leading cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning using open-source frameworks such as Apache Spark, Apache Hive, and Presto.
Amazon Managed Streaming for Apache Kafka (Amazon MSK) makes it easy to ingest and process streaming data in real time with fully managed Apache Kafka.
Amazon Kinesis Data Firehose is an extract, transform, and load (ETL) service that reliably captures, transforms, and delivers streaming data to data lakes, data stores, and analytics services.
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload."

Increasing Reach and Reliability of Healthcare Software by Migrating 300 Servers to AWS in 6 Weeks _ Mayden Case Study _ AWS.txt,"Learn how Mayden migrated mental health services software to AWS in 6 weeks with minimal downtime using AWS Application Migration Service.

Healthcare technology company Mayden wanted to migrate to a new cloud provider but needed to do so without disrupting service for patients and care providers. Located in the United Kingdom, Mayden provides technology for mental healthcare services as part of the National Health Service (NHS) and NHS’s Improving Access to Psychological Therapies (IAPT) program. The company was not satisfied with the level of stability that it was experiencing, and it recognized that its previous cloud provider could not support Mayden in its next phase of growth. Mayden needed to find a new provider and migrate its servers in a way that caused as little disruption to its healthcare clients as possible.

Mayden migrated to Amazon Web Services (AWS) using AWS Application Migration Service, which minimizes time-intensive, error-prone manual processes by automatically converting customer source servers to run natively on AWS. Using AWS Application Migration Service, Mayden migrated 300 servers to AWS in 6 weeks with minimal downtime. Now, Mayden is expanding to new Regions, and it has already used AWS to build infrastructure for new services in Canada in only 10 days.
Opportunity | Migrating to AWS to Facilitate Growth for Mayden

Founded in 2000, Mayden launched the iaptus patient-management system in 2008 as part of the pilot program for what became NHS IAPT. Today, iaptus supports 65 percent of all referrals to the NHS IAPT service; in 2021 alone, iaptus facilitated the care of 1.2 million of the 1.8 million total referrals. The solution serves as an electronic health record (EHR) management system and hosts online patient services, such as appointment booking, self-referrals, and integrated video appointments. All the iaptus services are delivered through the cloud.

In August 2021, Mayden’s technical team began assessing the benefits of moving to a cloud hyperscaler. The company searched for a new provider, expecting to meet with a lot of faceless websites. Instead, when Mayden approached AWS in November 2021, the team was greeted by people who provided personalized service and swiftly connected them with the experts and answers they needed.

Solution | Rehosting 300 Servers in 6 Weeks with Minimal Downtime Using AWS Application Migration Service

To migrate efficiently while still supporting patient services, Mayden joined the AWS Migration Acceleration Program (AWS MAP), a program to build strong cloud foundations, reduce risk, and offset the initial cost of migrations. Mayden also worked with Sourced, an AWS Partner, which offered expertise and augmented Mayden’s DevOps team during the migration. “We wouldn’t have gotten this done as quickly and with the low amount of downtime that we had if we hadn’t had the support of AWS MAP and worked with the Sourced team,” says Tom Dawson, product owner for the systems team at Mayden.
After migrating test workloads at the end of April through May, Mayden migrated live workloads to AWS in June through mid-July 2022. It rebuilt about 40 percent of its servers using AWS-native services and cloud tools, such as Terraform, an open-source infrastructure-as-code tool. The other 60 percent were rehosted to AWS with no downtime using AWS Application Migration Service. By using AWS Application Migration Service, Mayden moved these legacy applications to AWS with minimal or no changes to the code or core architecture. “Using AWS Application Migration Service, we rehosted the more complicated legacy parts of our application very quickly and with no downtime,” says Dawson. “The fact that the service runs entirely within the operating system meant that we didn’t need to get into the underlying physical infrastructure to do the replication.”

To further accelerate its migration, Mayden used AWS Cloud Migration Factory, an orchestration solution powered by AWS Application Migration Service that coordinates and automates large-scale migrations to AWS, helping enterprises improve performance and prevent long cutover windows by automating manual processes and integrating multiple tools efficiently. Using this solution, Mayden migrated groups of 30 machines at once. A monitoring sketch for this kind of replication wave follows below.
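As a loose illustration of how a migration team might watch a replication wave like Mayden's, the following Python sketch uses the boto3 client for AWS Application Migration Service ("mgn") to list source servers with their lifecycle and replication states. The Region and filter values are illustrative assumptions, not Mayden's actual setup.

import boto3

# Sketch only: list MGN source servers and their replication state so an
# operator can confirm every machine is ready before a cutover wave.
mgn = boto3.client("mgn", region_name="eu-west-2")

def replication_status():
    # Field names follow the boto3 "mgn" client's DescribeSourceServers API.
    paginator = mgn.get_paginator("describe_source_servers")
    for page in paginator.paginate(filters={"isArchived": False}):
        for server in page["items"]:
            yield (
                server["sourceServerID"],
                server.get("lifeCycle", {}).get("state"),
                server.get("dataReplicationInfo", {}).get("dataReplicationState"),
            )

if __name__ == "__main__":
    for server_id, lifecycle, replication in replication_status():
        print(f"{server_id}: lifecycle={lifecycle} replication={replication}")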
Outcome | Increasing Access to Innovative, Reliable Mental Health Services

As NHS IAPT services are increasingly offered virtually, iaptus supports 200 mental health services with 40,000 users in the iaptus application. Even before the migration, an NHS-commissioned survey of users rated iaptus at 80.1 percent for reliability and responsiveness, compared with the NHS average of 58.1 percent. “Despite not having the level of stability that we might have liked from our former provider, we did a good job mitigating what we could,” says Rebecca Prestland, business development and marketing strategist at Mayden. “Given how important it is for our system to be available, fast, and responsive, we’re excited to see how that rating will improve now that we’re on AWS.”

Since migrating to AWS, the availability and reliability of Mayden’s service has improved. “The stability is notable,” says Chris Eldridge, director of operations at Mayden. “Since migrating to AWS, we haven’t had any major service issues.” It’s crucial for Mayden’s solution to be available 24/7 because people might need to access its applications, such as self-referrals, at any time of day. “If you’re working in mental health, you’re always aware of the importance of your system being online and available,” says Eldridge. “If somebody can’t access a patient’s record when they need to, we’re aware of the weight of that responsibility.”

Using AWS, Mayden’s IT team can do its job faster. Building the 75 database servers that make up a key part of Mayden’s infrastructure—a task that previously would have taken hours—took 2 minutes on AWS. “The speed of AWS is astonishing. The ability to create infrastructure that quickly makes such a massive difference to our small DevOps team,” says Eldridge. Mayden also uses managed services on AWS—including Amazon Route 53, a Domain Name System (DNS) web service; AWS Client VPN, a fully managed remote access VPN solution; and Elastic Load Balancing, which distributes network traffic. Using these services frees up the team to concentrate on building and supporting its applications.

A few weeks after completing the UK migration, Mayden used the tools and knowledge that it had gained to build a new environment in Canada. The infrastructure, which will support mental health and addictions services, took only 10 days to build. After this system launches, Mayden will begin building new infrastructure in another geographic location. It will also apply its learnings to consolidate its infrastructure in Australia onto AWS.

Mayden is growing and has ambitious plans. The company is exploring AWS machine learning tools to analyze the data collected in the IAPT program to drive better outcomes for patients. The team is also expanding into physical health services. “We believe that tech has an important role to play in creating sustainable healthcare systems,” says Prestland. “The migration to AWS was an important move for Mayden to support us strategically as we continue to grow.”

Benefits: 300 servers migrated in 6 weeks using AWS Application Migration Service; no downtime during cross-cloud replication of servers; 98% faster to build database servers on AWS; automation minimized the need for code changes to legacy applications.

About Mayden

Mayden is a UK healthcare technology company creating digital technology that changes what’s possible for clinicians and patients. Its flagship solution, iaptus, is an EHR system for mental health services.

AWS Services Used

AWS Application Migration Service minimizes time-intensive, error-prone manual processes by automatically converting your source servers to run natively on AWS. It also simplifies application modernization with built-in, post-launch optimization options. Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service that connects user requests to internet applications running on AWS or on-premises."
Increasing Sales Opportunities by 83 Working with AWS Training and Certification with Fortinet _ Case Study _ AWS.txt,"Increasing Sales Opportunities by 83% Working with AWS Training and Certification with Fortinet

Learn how Fortinet in cybersecurity increased sales opportunities by 83 percent and empowered its global salesforce with AWS Training and Certification.

The multinational cybersecurity company Fortinet needed a scalable way to educate its global salesforce so that employees could more knowledgeably talk to customers about their use of Amazon Web Services (AWS). Fortinet, an AWS Partner, also wanted to enrich its co-sell opportunities by aligning its IT nomenclature with that of AWS. It turned to AWS Partner Training and Certification to develop structured programs that could provide guidance and education to help customers along their cloud journeys. More than 500 Fortinet salespeople voluntarily participated in the program, resulting in a more thorough understanding of customers’ needs and 83 percent more sales opportunities.

Opportunity | Working with AWS Training and Certification to Develop Scalable Training Programs for Fortinet

Founded in 2000 in California, Fortinet is a global cybersecurity company with nearly 600,000 customers in diverse industries, such as manufacturing, education, and healthcare. Many of Fortinet’s customers use AWS and want to maximize their productivity while using Fortinet’s cybersecurity solutions. Roughly two-thirds of Fortinet’s revenue comes from overseas, and the organization needed to deliver a consistent knowledge base across different countries and industries. In 2013, Fortinet joined the AWS ISV Accelerate program, a co-sell program for organizations that provide software solutions that run on or integrate with AWS. In 2014, it placed its first listing on AWS Marketplace, a digital catalog where companies can find, test, buy, and deploy software that runs on AWS. Since then, its presence has grown to nearly 50 listings and 18,400 unique and active subscriptions, while the number of Fortinet employees has tripled.
In 2019, Fortinet started talking to AWS about using AWS Training and Certification to create a structured, scalable approach to training business development representatives (BDRs), the first line of contact with prospects who have expressed an interest in using Fortinet solutions.

Solution | Educating the Salesforce on Cloud Operations and Co-Sell Opportunities

In mid-2021, AWS and Fortinet launched the first round of voluntary AWS Training and Certification programs. The program focused in part on providing an overall understanding of AWS through courses such as AWS Cloud Practitioner Essentials, which addresses cloud concepts, AWS services, security, architecture, pricing, and support. In fact, 64 Fortinet employees—including 40 BDRs—earned AWS Certified Cloud Practitioner certification, which helps organizations identify and develop talent with critical knowledge related to implementing cloud initiatives. “AWS Training and Certification helped us better understand the different storage capabilities, compute capabilities, and overall breadth of the AWS portfolio,” says Stephen Clark, cloud security sales director at Fortinet. “It was an eye-opening experience.”

As an incentive to earn AWS Certified Cloud Practitioner certification, the company rewarded successful employees with sponsored participation in AWS re:Invent, an annual learning conference that is hosted by AWS for the global cloud computing community. “BDRs look at their training as a linchpin for their career,” Clark says. “It’s a great feather in their cap as they look to advance through the different roles at Fortinet.”

Fortinet also added Co-Selling with AWS for ISV Partners, a course designed to articulate the value of the co-sell model. Fortinet BDRs received an overview of the AWS field structure, best practices on co-selling, and greater understanding of the motivation of AWS field teams. Fortinet also customized training through regular engagement of relevant guest speakers, such as an AWS sales representative who gave recommendations on how to engage with AWS for the mutual benefit of customers.

Fortinet also uses AWS Training and Certification programs to showcase potential career paths to job candidates. Plus, employees who have gone through AWS Training and Certification serve as mentors. They help new hires learn to work efficiently through the APN Customer Engagements (ACE) program, which lets AWS Partners securely collaborate and co-sell with AWS, drive successful engagements with their customers, and grow their businesses. Fortinet uses ACE to track customer engagements and sets goals against those metrics for global teams.
From the beginning, course offerings had included iterations of what is now AWS Partner: Sales Accreditation, which provides best practices for co-selling with AWS and elucidates the factors that drive customer cloud adoption. Over time, Fortinet’s training programs increasingly began to emphasize the co-sell model.
The third iteration of the program included AWS Partner: Cloud Economics Accreditation, which focuses on the cost dynamics and other business cases for migrating from on-premises solutions to the cloud. The course helped Fortinet sellers grasp the nuances of the co-sell model and how it differs from traditional IT. “That helps us understand not only our technical role helping to secure customers in the cloud but also what their expectations are from a financial standpoint,” says Marty Hess, regional vice president for cloud alliances and ecosystem strategy at Fortinet.

Outcome | Growing Business with the Help of AWS Training and Certification

Since 2021, more than 500 Fortinet employees have registered for courses. Including the reuse of recorded assets, 197 individuals have received a total of 242 accreditations, and 230 people have completed virtual classroom training sessions. “Now, BDRs are so much better at nurturing the leads that come in,” says Mishel Fletcher, director of cloud alliance marketing at Fortinet. “They are so much more confident in their discussions with prospects.”

Fortinet has seen continual business growth since launching its AWS Training and Certification programs, creating 54 percent more sales opportunities in 2021 and 83 percent more in 2022. Launched opportunities, a measure of customers who began to use a service through AWS Marketplace, increased by 167 percent in 2021.

Fortinet Solutions Architects, who support sellers and their customers, have witnessed the success of AWS Training and Certification for the growing sales team and want to adapt the program for themselves. The company plans to continue rolling out structured programs, hoping to increase company-wide buy-in. “The overarching value of AWS Training and Certification is that it gives an employee a much more rounded view of the customer outcome, how they are using the cloud and transforming their business,” says Hess. “That’s what we’re trying to do: improve the better-together story as it relates to AWS and Fortinet and what value we bring to our joint customers.”

Benefits: 83% increase in total sales opportunities in the second year of the training program; 167% increase in launched-won sales opportunities in the first year of the training program; 857% increase in AWS Certified Cloud Practitioner certification over the previous period; 242 accreditations achieved in 12 months.

About Fortinet

Founded in 2000 in California, Fortinet is a global cybersecurity company serving nearly 600,000 customers in diverse industries. Its customers use Fortinet Security Fabric to protect users, devices, and applications across all network edges.

AWS Services Used

The APN Customer Engagements (ACE) program allows you to securely collaborate and co-sell with AWS, drive successful engagements with customers, and grow your business. The AWS ISV Accelerate Program is a co-sell program that helps you drive new business and accelerate sales cycles by connecting participating independent software vendors (ISVs) with the AWS Sales organization. AWS Marketplace lets you find, test, buy, and deploy software that runs on AWS. AWS Training and Certification propels your organization with cloud fluency; its content is created by experts at AWS and updated regularly so you can keep your cloud skills fresh."
Increasing Scalability and Data Durability of Television Voting Solution Using Amazon MemoryDB for Redis with Mediaset _ Mediaset Case Study _ AWS.txt,"Increasing Scalability and Data Durability of Television Voting Solution Using Amazon MemoryDB for Redis with Mediaset

Learn how Mediaset in the media and entertainment industry scaled to support over five million votes during the finale of its most popular television show using Amazon MemoryDB for Redis.

Just weeks before the finale of its most popular television show, Italian mass media company Mediaset needed to migrate its on-premises voting solution to a cloud infrastructure. Mediaset expected a high volume of traffic and needed a scalable solution: television engagement can be unpredictable, and the company had recently increased the number of votes that each viewer could submit. Mediaset chose Amazon Web Services (AWS) because of the flexibility, scalability, and ease of implementation that AWS offers. Using Amazon MemoryDB for Redis—a Redis-compatible, durable, in-memory database service for ultra-fast performance—Mediaset replaced its on-premises architecture in 30 days and successfully received more than five million votes during the finale.

Opportunity | Using Amazon MemoryDB for Redis to Support Traffic Spikes During Voting Sessions for Mediaset

Mediaset’s most popular show, Amici, is a talent show in which teenagers sing, act, and dance to compete for a prize. Viewers can vote five times at set intervals throughout the show from a mobile application, website, or connected television. Because of traffic spikes during these 10- to 15-minute voting periods, Mediaset’s on-premises solution experienced performance issues, causing delays and errors that impacted the customer experience. Mediaset started comparing cloud alternatives in April 2022 and chose AWS because it was already using AWS in other areas and knew the solution would be scalable and quick to deploy.
“Time was a big factor for us,” says Marco Reni, technical project manager and architect at Mediaset. “The request to handle the voting for the finale came in shortly before the event, and we can’t move scheduled television programs. The show must go on.”

Solution | Collecting Over Five Million Votes During Popular Television Finale Using Amazon MemoryDB for Redis and AWS Fargate

The company met with experts from AWS throughout the implementation process. Mediaset designed the solution to meet various requirements, such as limiting the number of votes each viewer could submit and validating the user location. The company had to work quickly so that the solution could go live in May 2022 for the final episode of season 21 of Amici. “The AWS team understood our urgency and went over the top,” says Reni. “From a technical point of view, it was really useful to have the AWS team’s expertise on Amazon MemoryDB for Redis while we were implementing the architecture. Furthermore, AWS Enterprise Support was always available to resolve any last-minute doubt.”

The key requirement for Mediaset’s voting solution was scalability so that the company could handle the traffic volume and record all the votes. During the Amici finale, Mediaset supported more than four million viewers on live television and an additional one million using digital players on mobile devices or the company’s website. Using its solution built on Amazon MemoryDB for Redis, Mediaset received more than five million votes for the season 21 finale, more than five times the number of votes received in the previous finale using the company’s on-premises solution. Mediaset also achieved data durability by storing votes to comply with government requirements. “Amazon MemoryDB for Redis has the features of both an in-memory cache and a database, so it’s really good for a lot of our business needs,” says Reni. “We serve a front-end application, so being fast is essential for our systems.”

For viewers, the migration improved response times and eliminated errors. During the season 21 Amici finale, response times were around one-tenth of a second, much faster than the previous voting system, where traffic sometimes exceeded limits and prevented viewers from submitting votes in time. “Using Amazon MemoryDB for Redis, viewers had a very good experience, could express their votes quickly, and didn’t encounter any errors,” says Daniele Curci, software engineer and solution architect at Mediaset. “It was very good for us.” A sketch of the per-viewer vote cap described above follows below.
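A minimal Python sketch of such a per-viewer vote cap, written with redis-py against any Redis-compatible endpoint such as a MemoryDB cluster. The endpoint, key layout, and five-vote limit are illustrative assumptions, not Mediaset's production design.

import redis

# Hypothetical endpoint; MemoryDB endpoints are TLS-only, hence ssl=True.
# A sharded cluster would use redis-py's RedisCluster client instead.
r = redis.Redis(
    host="clustercfg.example-memorydb.amazonaws.com",
    port=6379,
    ssl=True,
)

MAX_VOTES_PER_SESSION = 5  # assumed cap, mirroring the five-vote rule above

def cast_vote(session_id: str, viewer_id: str, contestant: str) -> bool:
    """Record one vote, enforcing the per-viewer cap for this voting session."""
    quota_key = f"votes:{session_id}:viewer:{viewer_id}"
    used = r.incr(quota_key)  # atomic, safe under concurrent requests
    if used == 1:
        r.expire(quota_key, 15 * 60)  # quota lives for the voting window
    if used > MAX_VOTES_PER_SESSION:
        return False  # over the cap; the extra INCR is harmless
    r.hincrby(f"votes:{session_id}:tally", contestant, 1)
    return True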
After the success of its voting solution for the Amici finale, Mediaset expanded it to all shows with a voting system in the fall of 2022. These shows can run concurrent voting sessions, but Mediaset can handle the traffic using Amazon MemoryDB for Redis and AWS Fargate—a serverless, pay-as-you-go compute engine for containers—to scale up during prime time and scale back down afterward. Using the automatic scaling feature of AWS Fargate, Mediaset can determine the number of container instances needed and then flexibly scale in seconds instead of minutes if traffic increases. “Using Amazon MemoryDB for Redis, we could adapt the service to serve multiple shows with almost no effort,” says Reni.

Outcome | Expanding and Enhancing Mediaset’s Voting Solution Using Amazon MemoryDB for Redis

Mediaset’s solution built using AWS is more flexible and lower maintenance than its former solution, which saves time for the company. With an on-premises structure, Mediaset needed to involve multiple teams over 4–6 months for projects that required moving infrastructure. Using AWS, Mediaset can perform load tests at low cost without investing in additional hardware, and its team no longer needs to worry about infrastructure. The company can add new features to the Mediaset Infinity streaming service in days or weeks instead of months using managed services. “Using services like Amazon MemoryDB for Redis, we can rapidly build prototypes and test architecture in a few days, which we couldn’t have done without managed services from AWS,” says Daniele Curci, software engineer and solution architect at Mediaset. “We can focus on the logic of our application without spending time on the physical infrastructure.”

Along with supporting variable traffic needs, Mediaset saves on costs because of the scalability of its solution. “For the Amici finale, we scaled up before the start of the show and scaled back after the show,” says Reni. “The costs for that night were very low, which would not have been possible with an on-premises architecture.”

Mediaset plans to extend its voting solution using AWS services to cover additional voting channels and expand analytics capabilities. The company also plans to use additional features of Amazon MemoryDB for Redis, such as using the service as persistent storage for its content management system needs. “The biggest benefit for us is the scalability,” says Reni. “Being able to scale almost instantly to whatever size we need using Amazon MemoryDB for Redis is important because we are never certain about how many viewers we will need to support.”

Benefits: achieved 30-day implementation time; supported more than five million votes in the Amici finale; achieved data durability to meet government requirements; reduced costs by scaling to meet variable demand; saved time for the team with managed services.

About Mediaset

Founded in 1993, Mediaset is a large commercial broadcaster based in Italy that produces and distributes television drama, film, news, sports, and multimedia content. The Mediaset Infinity streaming service provides live channels and movie streaming to viewers across Italy and around the world. Its most popular television show, Amici, draws millions of viewers to vote for contestants who sing, act, and dance for a prize.

AWS Services Used

Amazon MemoryDB for Redis is a Redis-compatible, durable, in-memory database service for ultra-fast performance. AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers."
Indecomm Case Study _ Amazon Web Services.txt,"Indecomm Automates Complex Mortgage Document Processing with Amazon Textract

Indecomm is a SaaS provider whose GeniusWorks product suite automates back-office mortgage operations. The company set out to develop a machine learning–powered data extraction solution, which it named Intelligent Document Extraction (IDX), to reduce the cost and time spent reviewing mortgage origination documents, resulting in quicker loan turnaround times and higher customer satisfaction.

Opportunity | Driving Further Automation in Document Processing and Analysis

With decades of mortgage industry experience and millions of data points stored, Indecomm has a deep understanding of the loan lifecycle.
The company’s mortgage automation products help lenders, insurers, and financial agents streamline back-office operations so they can spend more time improving the borrower experience. Its Genius product suite addresses many inefficiencies in underwriting and other preliminary stages of loan processing.

The most time-consuming and critical tasks associated with mortgage loan origination—commonly referred to as the loan application process—are reading, analyzing, and comparing information across a large repository of documents. Lenders spend an inordinate amount of time manually reviewing documentation, which reduces productivity and lengthens the time required to obtain a loan. Today, many companies extract data from scanned documents such as PDFs, images, tables, and forms either manually or through simple optical character recognition (OCR) software that requires manual configuration, which often must be updated when a form changes. In 2019, Indecomm sought to drive further automation in document processing and analysis by developing an improved data extraction solution using machine learning (ML).
Solution | Developing an ML Solution to Reduce Manual Processing

Over a period of three years, Indecomm evaluated in-house and third-party alternatives to support the development of its IDX ML tool. Scalability, document turnaround time, cost, and configurability were the leading considerations in the evaluation. The business eventually decided to build IDX on Amazon Textract on Amazon Web Services (AWS), choosing the service for its scalability and integration with serverless tools such as AWS Lambda. The tool goes beyond simple OCR to identify and classify documents; extract, validate, and certify data; and enrich data as needed. IDX serves as the underlying document extraction technology powering three Indecomm products: IncomeGenius, DecisionGenius, and AuditGenius.

Indecomm’s SVP of Engineering, Dr. Harish B. Kamath, says, “We found that Amazon Textract offered us the highest flexibility of use and lowest cost to develop our IDX module. We utilized all the capabilities of Amazon Textract, in combination with AWS Lambda machine-learning components, to map out hundreds of mortgage industry-related documents and extract over 4,000 data fields.”

Previously, Indecomm required many virtual machines to meet data processing requirements, often exceeding budget thresholds when large jobs came in. Amazon Textract’s application programming interfaces (APIs) allow parallel processing, which facilitates rapid document analysis at scale without additional delays or overhead. To ensure high levels of accuracy and efficiency in IDX, Indecomm leveraged Amazon Textract to automate complex document review and extract data from images and text for analysis. Prior to IDX, extracting data from a 100-page document took 30 minutes; the new solution converts images to text at the field level and enriches data within 5–7 minutes. This has been especially helpful for mortgage lenders dealing with self-employed borrowers, who often present non-standard income documentation. A sketch of this kind of form extraction follows below.
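To make the extraction step concrete, here is a minimal Python sketch that runs Amazon Textract forms analysis on a single-page document in Amazon S3 and assembles key-value pairs from the returned blocks. The bucket and key names are placeholders; Indecomm's actual IDX pipeline is proprietary, and multi-page packages would use the asynchronous StartDocumentAnalysis/GetDocumentAnalysis APIs instead.

import boto3

textract = boto3.client("textract", region_name="us-east-1")

def extract_form_fields(bucket: str, key: str) -> dict:
    """Return {field label: field value} for a single-page document in S3."""
    resp = textract.analyze_document(
        Document={"S3Object": {"Bucket": bucket, "Name": key}},
        FeatureTypes=["FORMS"],
    )
    blocks = {b["Id"]: b for b in resp["Blocks"]}
    fields = {}
    for block in resp["Blocks"]:
        # KEY_VALUE_SET blocks tagged KEY point at their VALUE blocks.
        if block["BlockType"] == "KEY_VALUE_SET" and "KEY" in block.get("EntityTypes", []):
            value_block = _linked_value(block, blocks)
            fields[_text(block, blocks)] = _text(value_block, blocks) if value_block else ""
    return fields

def _linked_value(key_block, blocks):
    for rel in key_block.get("Relationships", []):
        if rel["Type"] == "VALUE":
            for bid in rel["Ids"]:
                if "VALUE" in blocks[bid].get("EntityTypes", []):
                    return blocks[bid]
    return None

def _text(block, blocks) -> str:
    """Join the WORD children of a block into one string."""
    words = []
    for rel in (block or {}).get("Relationships", []):
        if rel["Type"] == "CHILD":
            words += [blocks[i]["Text"] for i in rel["Ids"] if blocks[i]["BlockType"] == "WORD"]
    return " ".join(words)

if __name__ == "__main__":
    print(extract_form_fields("example-loan-docs", "applications/w2-page1.png"))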
Outcome | Closing Loans Faster with Integrated Workflows

Indecomm and its clients have experienced a significant reduction in turnaround time and loan origination costs; jobs are now scheduled every 2 minutes, compared to the previous wait time of over 20 hours for sequential processing of 800 pages. Document classification costs have also decreased from $5 to $3 for the same 800 pages. Overall costs, including data enrichment, security, storage, and reporting, have decreased to just 2 cents per page on AWS.

In DecisionGenius, IDX works to automate data verifications, reducing the number of manual loan file interactions. As a result, lenders have lowered the required number of file “touches” by 50–60 percent, doubling underwriter and processor productivity. A knock-on effect of less manual intervention is higher accuracy: IDX boasts a document classification accuracy rate of 100 percent and average data extraction accuracy of 97 percent for a typical loan package. Underwriting and audit accuracy are vital in the mortgage loan origination process, as data oversights or errors can lead to a higher risk of default. With Amazon Textract, Indecomm’s Genius products capture critical data and flag missing data that could be overlooked during manual document review, reducing risk for Indecomm’s clients.

In the two years since implementation, Amazon Textract has automated the classification and extraction of more than 700 mortgage forms with approximately 9,200 unique fields. Clients have also improved the efficiency and accuracy of post-distribution quality control with Indecomm’s AuditGenius. Early in the loan lifecycle, data is stored within DecisionGenius and IncomeGenius; this data then serves as an easily referenceable repository that lenders can use to audit loan analyses and decisions using AuditGenius. The ability to instantly access and compare outcomes with source documents improves transparency, confidence, and auditing turnaround times.

Furthermore, with IDX, Indecomm’s clients can rapidly scale their operations to meet sudden increases in demand—without hiring new employees or investing in extra hardware. They can also analyze data stored over time to predict business processing costs more accurately. Unlike traditional data extraction solutions, which require continuous manual monitoring and corrective actions, Amazon Textract and IDX continuously learn and adapt to user-defined changes, so accuracy is not merely maintained but improved over time. Indecomm used to experience delays of up to 5–6 hours in processing long document queues due to corrupt files, leading to increased costs and management overhead from constant monitoring. The integration of AWS Lambda and built-in monitoring through IDX allows for on-demand monitoring, effectively removing bottlenecks from the system.

Indecomm plans to apply its learnings from developing IDX to optimize other back- and middle-office operations in mortgage origination, servicing, and capital markets. The company looks forward to using IDX to address new operational challenges within banking and financial services, recognizing that many of the same data extraction challenges are found across other industry verticals. To learn more, visit aws.amazon.com/solutions/ai-ml.

Benefits: 100% document classification accuracy, with 97% data extraction accuracy; 50–60% less manual document intervention required; 5–7 minutes data classification and extraction time; 2 cents average total cost per page processed; automates scaling with parallel processing and serverless architecture.

About Indecomm

Indecomm is a software service provider that utilizes automation and technology to accelerate timelines, reduce costs, and simplify complex processes for mortgage lenders, servicers, insurers, and secondary market participants. The company processes about 1 million loans and 800,000 audits annually.

AWS Services Used

Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents, going beyond simple OCR to identify, understand, and extract data from forms and tables. AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers; you can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and you pay only for what you use."
Indivumed Case Study.txt,"Indivumed Boosts Cancer Research With Powerful Analytics Built on AWS

Hamburg-based Indivumed specializes in using the highest quality biospecimen and comprehensive clinical data to advance research and development in precision oncology. Its IndivuType discovery solution uses AWS to store data and support analysis to decipher the complexity of cancer. By improving its AWS infrastructure, Indivumed has saved more than 50 percent on total IT costs and ramped up the number of samples it can process from 20 to 500 a week, a 2,400 percent increase.
Launching a Multi-Omics Database on AWS

For two decades, Hamburg-based Indivumed has specialized in biobanking, providing infrastructure, expertise, and technology for cancer research and development. Most of its customers and partners are academic research institutes and pharmaceutical companies that use the insights Indivumed generates to discover and validate novel drugs and ultimately develop new treatments for life-threatening cancers. With the life sciences field and pharmaceutical industry becoming more data-driven, Indivumed saw an opportunity to generate these insights by analyzing multi-omics data, and it decided to use the thousands of tissue samples it stores to create a unique repository of deep molecular information on cancers. But the datasets are complex and extensive. To manage this complexity, the company turned to Amazon Web Services (AWS) and used cloud-based high performance computing (HPC) to build the world’s first and most extensive proprietary multi-omics database.

It chose AWS to help make its vision a reality. “AWS was the best choice to help us scale, and it provides a range of secure, reliable, and serverless technologies for us to build on,” says Dr. Jonathan Woodsmith, vice president of advanced analytics and AI at Indivumed. Indivumed knew its compute requirements would be significant, so it decided to build an HPC cluster that could not only handle huge datasets but also scale resources up and down automatically based on the amount of processing required. Initially, Indivumed built the HPC cluster using Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity, and Amazon Elastic File System (Amazon EFS), which automatically grows and shrinks as files are added and removed. The result was IndivuType, a multi-omics database that combines diverse molecular biological information with clinical information from thousands of patients across Europe, the US, and Asia. The datasets for each cancer sample—including raw readouts from the molecular assay, which detects markers of disease—can reach 200 GB in size.

Unlocking Life-Saving Opportunities with AI and ML

With IndivuType up and running, Indivumed wanted to generate novel insights about cancer biology that its customers and partners could use to develop new treatments. To create those insights, Indivumed applied machine learning (ML) to multi-omics data analysis. Alongside this, it used JADBio, an automated ML system customized for life science applications that include large multi-omics clinical datasets and medical images. JADBio is a software-as-a-service platform that runs on AWS, making integration with IndivuType straightforward through APIs. The JADBio technology supports Indivumed’s nRavel® artificial intelligence (AI) platform by recognizing and learning patterns of information found in tumor data. nRavel® includes bespoke tools that Indivumed has built and validated using data from disease models curated from comprehensive biological databases. Together with advanced analytical algorithms and ML, it helps Indivumed better understand the biology, treatments, and outcomes of cancer.

Modernizing Cluster Increases Processing Capacity by 2,400%

As the company grew, Indivumed needed to ramp up the amount of data it could handle so that it could increase the number of samples it could process each year. To achieve this, Indivumed needed to refactor the cluster. “We spent a significant amount of time building a cloud-native tech platform,” says Woodsmith. Indivumed and AWS kicked off the Multi-Omics for Cancer and Clinical Analytics (MOCCA) project to modernize the cluster. It’s based on Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service to run and scale Kubernetes, and uses Intel-based compute-optimized Amazon EC2 Spot Instances to deliver high-performance workloads at low cost. To further optimize costs, the new cluster replaced several Amazon EFS workloads with object storage provided by Amazon Simple Storage Service (Amazon S3), which is built to retrieve any amount of data from anywhere. With the MOCCA cluster, Indivumed has saved more than 50 percent on total IT costs and reduced the cost per sample by around 41 percent, compared to its previous AWS setup. It has also increased the number of samples it can process in parallel: IndivuType can now process 500 samples per week, up from 20, by using Amazon EKS to scale up to 1,000 instances. This is a 2,400 percent increase in processing capacity compared to its previous system.

Indivumed has made further enhancements to store data that’s no longer needed using Amazon S3 Glacier, which provides long-term, secure, durable storage classes for data archiving. “To be able to plow ahead with the business as it grows, and to know we have the pipeline to keep up with that growth, is essential,” says Woodsmith. A sketch of this kind of archiving rule follows below.
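A minimal sketch of such an archiving rule, using boto3 to attach an S3 lifecycle configuration that transitions cold objects to the S3 Glacier storage class after 90 days. The bucket name, prefix, and threshold are illustrative assumptions, not Indivumed's actual policy.

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/prefix: archive sample data untouched for 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-omics-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-cold-samples",
                "Status": "Enabled",
                "Filter": {"Prefix": "samples/"},
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)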
These new capabilities have helped Indivumed establish new connections and partnerships. The company now offers advanced tissue sample analysis with IndivuType and nRavel® to several large pharmaceutical organizations and a number of small to medium-sized biotech companies. The advances made by the organizations using the Indivumed technology could be life-changing for cancer patients. “We have the most highly automated multi-omics processing facility out there,” says Rene Steen, vice president for IT at Indivumed. “It’s driving the creation of new treatments that will ultimately save and extend people’s lives. That’s something to be proud of.”

Benefits of AWS: developed a multi-omics database to store thousands of tissue samples for medical research; generated insights used to create new therapeutics for cancer treatments; reduced total IT costs by 50 percent; increased data processing capacity for samples by 2,400 percent.

About Indivumed

Hamburg-based Indivumed specializes in using the highest quality biospecimen and comprehensive clinical data to advance research and development in precision oncology. Established 20 years ago, it is headquartered in Hamburg, Germany.

AWS Services Used

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud, designed to make web-scale cloud computing easier for developers. Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service to run and scale Kubernetes applications in the cloud or on-premises. Amazon Elastic File System (Amazon EFS) automatically grows and shrinks as you add and remove files, with no need for management or provisioning. Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
"
Infor Case Study.txt,"Infor Bulks up AWS Expertise, Trains Over 2,400 Employees to Meet Customer Needs

Infor, a global leader in business cloud software, strives to serve customers by developing industry-specific functionality in each of its solutions. The company deploys solutions using Amazon Web Services (AWS) to serve 14,000 cloud customers. To effectively compete and satisfy the changing needs of customers, Infor needed robust cloud skills to deliver high-quality solutions and support quickly. By working with AWS Training and Certification, which helps customers build and validate skills to get more out of the cloud, Infor continues to deliver, with 2,400 employees trained so far and more scheduled to be trained by the end of 2022. With this training, Infor can better meet customer needs by enhancing the performance and efficiency of its solutions and helping customers adopt new technology more quickly.

Investigating Opportunities and Strategy for Employee Training

Infor has used AWS as its primary cloud services provider since 2011 and went all in on AWS in 2014, making it critical for employees to have strong AWS expertise. However, the company lacked a formal training strategy and was unaware of the learning gaps present within its organization. Following an initial assessment and the AWS Learning Needs Analysis, Infor began working with AWS Training and Certification to offer virtual AWS Classroom Training to its employees. “One of the principles at Infor is to provide enrichment for our employees,” says Dan Carlin, vice president of cloud financial operations at Infor. “We wanted to give employees exposure to this material and training. As a result of training, we also expected to see more cost efficiency and optimization in how we consume AWS services as well as increased speed to functionality, which benefits our customers.”
The courses provided by AWS Training and Certification helped equip Infor to develop resilient, secure, and scalable solutions in the cloud and increase the velocity of development on AWS. These included Architecting on AWS, where participants learn to build IT solutions on AWS following architecting best practices; Developing on AWS, a course on how to develop secure and scalable cloud applications; and DevOps Engineering on AWS, which teaches how to use DevOps cultural philosophies, practices, and tools to develop, deliver, and maintain applications and services at high velocity on AWS. Between October 2020 and May 2022, 2,405 Infor employees participated in 90 courses.

AWS Training and Certification helped Infor select courses and met Infor’s needs for broad accessibility and relevant content, including offering courses across three time zones. “AWS responded to our need to be creative in how we set classes up,” says Carlin. “The fact that we had the flexibility to offer courses across multiple time zones for employees around the world was very important for us as a global organization.” Employees responded enthusiastically to the courses, with hundreds of people on the waiting list after the initial slots filled up. “The courses led by instructors from AWS Training and Certification were much more interactive than our alternative, self-instruction offerings,” says Carlin. “The instructors addressed the issues that our technical personnel encounter much more specifically in real time in the sessions. We saw more immediate benefits, and the responses from our employees who took the classes were positive.”

Increasing Speed to Market and Solution Functionality After Training

Since upskilling its employees through AWS Training and Certification, Infor has increased its efficiency in developing customer solutions, and teams can make better use of AWS services in the solutions that they build. For example, through the course Running Containers on Amazon Elastic Kubernetes Service (Amazon EKS)—where participants develop practical, in-depth skills for managing containers—Infor teams learned how to use the service to improve, simplify, and speed up development. “Offering that course is going to save us money,” says Carlin. Teams that have gone through training also adopt new AWS services and technology faster, especially when personnel can ask questions about applying new AWS services to specific products during a training class. “AWS upgrades its technology quite rapidly,” says Carlin, “and the training equips us to quickly adopt new services and technological transformations on AWS. Quick adoption means cost efficiency and performance improvements.”

Since Infor began working with AWS Training and Certification, over 400 Infor employees have received AWS Certification, validating technical skills and cloud expertise, and the company is working to get over 1,000 employees certified. “We have continuous enrollment,” says Carlin. “In the latest round of deliveries, we have 400 people on the waiting list. That enthusiasm lets us continue offering these courses, which benefits employees personally and professionally.”

Infor employees can also better respond to challenges and minimize potential problems during the software testing phase. Several product groups whose employees went through the training have reduced the volume of service tickets that they submit to AWS because they now have a better understanding of the underlying AWS services at work in the solution. For one AWS service, Infor submitted 18 service tickets in 2 months before training, and only 1 service ticket in 2 months after training on that service.
Building Training Pathways for Nontechnical Personnel

As Infor continues expanding the courses that employees can take from AWS Training and Certification, the company plans to keep refining its development on AWS by conducting additional AWS Learning Needs Analysis, a self-assessment tool to identify an organization’s cloud skills gaps and build a data-driven plan. The company also plans to extend training opportunities to nontechnical roles, such as sales employees and solutions consultants. “If sales personnel can answer customers’ technology-related questions, it will address customer concerns and accelerate the sales process without taking time away from technical personnel,” says Carlin. “We see the value that working with AWS Training and Certification provides across our personnel environment. The output is better products, better performance, and better customer experience.”

Benefits of AWS: facilitated training for over 2,405 employees; facilitated AWS Certification for 400 employees; decreased the number of support tickets, resulting in improved customer service; accelerated adoption of new AWS services; improved efficiency and cost optimization; increased employee satisfaction.

About Infor

Infor provides cloud-based enterprise resource planning solutions to customers around the globe. The company has over 17,000 employees in 117 offices worldwide and over 65,000 customers.

AWS Training and Certification offers both digital and classroom training that allows you to learn online at your own pace and learn best practices from an expert instructor, whether you are just starting out, building on existing IT skills, or sharpening your cloud knowledge."
Information Technology Institute Launches Postgraduate Artificial Intelligence Diploma Using AWS _ Case Study _ AWS.txt,"Information Technology Institute Launches Postgraduate Artificial Intelligence Diploma Using AWS

The Information Technology Institute (ITI) in Egypt used Amazon Web Services (AWS) to launch a new postgraduate degree, AI-Pro. With the rise of artificial intelligence (AI) and machine learning (ML) in Egypt’s digital development plan, ITI wanted to create a diploma program that provided students with relevant skills and certifications. The AI-Pro diploma program was developed working with AWS Training and Certification Education Programs, which support learners in building and validating skills to get more out of the cloud and prepare diverse learners for in-demand, entry-level cloud roles around the world. ITI delivered these programs to 1,000 students across 9 months.

Opportunity | Adapting Education to Meet Future Needs
ITI, an educational institution founded in Egypt in 1993, provides IT-related education for tertiary-level students at 11 campuses across Egypt. It also offers professional training programs for various branches of the Egyptian government, which in 2021 announced a national strategy to drive economic growth using AI and ML technologies. Egypt expects an increase in reliance on AI applications and solutions in government sectors over the next 3 years, and the government has since spent the equivalent of $25 million on partnerships with international universities and companies to help create training programs and employment opportunities in the AI and ML fields.

ITI wanted to improve career opportunities for its students by developing their skills and preparing them for in-demand jobs. Focusing new diploma programs on AI helps fulfill today’s educational needs and tomorrow’s technological forecast in Egypt. To achieve these goals, ITI used AWS Education Programs to develop the AI-Pro diploma program and provide students with opportunities to gain AWS Certification, which validates technical skills and cloud expertise to grow careers and businesses.

Solution | Developing a Hands-On Curriculum

To train its instructors to deliver the new AI-Pro diploma program, ITI worked alongside the French Graduate School of Computer Science and Advanced Technologies (EPITA) to provide an online program in AI. Combining theory and practice in computer vision and AI in neurolinguistics programming, the program is conducted remotely by AI experts from EPITA to certify qualified instructors to deliver a specialized AI program.

AI-Pro integrated content and resources from AWS Education Programs, and the first 400 students began the diploma program in April 2021. AWS Academy, which empowers higher education institutions to prepare students for industry-recognized certifications and careers in the cloud, is the foundation that ITI uses to provide education in AI/ML. ITI used AWS Academy to provide course materials related to the AI/ML fields of study, as well as cloud foundations. Students also received access to AWS Academy Learner Labs, long-running hands-on lab environments where educators can bring their own assignments and invite their students to get experience using select AWS services. “AWS Academy supports educators and students alike to apply new cloud knowledge immediately in an actual functioning cloud environment. The learning continuity is such a huge benefit,” says George Hany Fekry Iskander, head of the mechatronics and industrial automation department at ITI.

Outcome | Bridging the Gap Between Academia and Industry

ITI provides education across its 11 campuses with its use of AWS Education Programs, equipping more students with in-demand skills for careers in the cloud. Because ITI also gives students the opportunity to gain AWS Certification during their education, students can validate their technical skills and cloud expertise to potential employers, helping them join the cloud workforce. “What makes AWS Certification especially valuable is that certified students can validate their skills and build confidence and credibility, which bolsters their employability,” says Dr. Heba Saleh Omar, chairwoman of ITI. Since the AI-Pro diploma program’s launch, nearly 300 students have received vouchers for AWS Certification examinations.
Focusing new diploma programs on AI helps fulfill today’s educational needs and tomorrow’s technological forecast in Egypt. To achieve these goals, ITI used AWS Education Programs to develop the AI-Pro diploma program and provide students with opportunities to gain AWS Certification, which validates technical skills and cloud expertise to grow careers and businesses.

ITI is looking to add more educational fields and degree tracks to its use of AWS Academy beyond AI and ML. It is specifically interested in adding cybersecurity and natural language processing diploma programs to be supported by AWS Academy. ITI intends to increase its number of educators holding AWS Certification from five to 15 to accommodate the expansion into different areas of expertise in the cloud.

Solution | Developing a Hands-On Curriculum

ITI is also working alongside AWS to create a comprehensive lab environment that would encourage deeper, more immersive engagement with current and upcoming AWS services. If students were involved in the development and implementation stages, they could gain valuable experience for working in the cloud industry. “After joining the workforce, I discovered just how much the curriculum mirrored the tools and services used in the real world. I especially appreciated the hands-on lessons, which familiarized me with the latest cloud innovations the industry had to offer. ITI’s AI-Pro diploma is an ML career with cloud fundamentals,” says Omar Wahid, a graduate of ITI’s AI-Pro postgraduate diploma program.

The Information Technology Institute (ITI) in Egypt used Amazon Web Services (AWS) to launch a new postgraduate degree, AI-Pro. With the rise of artificial intelligence (AI) and machine learning (ML) in Egypt’s digital development plan, ITI wanted to create a diploma program that provided students with relevant skills and certifications. The AI-Pro diploma program was developed working with AWS Training and Certification Education Programs, which support learners in building and validating skills to get more out of the cloud. These AWS Education Programs prepare diverse learners for in-demand, entry-level cloud roles around the world. ITI delivered these programs to 1,000 students across 9 months." InMotion Inovasi Teknologi Boosts Local-Language Engagement with Millions of Indonesians on AWS _ Case Study _ AWS.txt,"InMotion Inovasi Teknologi Boosts Local-Language Engagement with Millions of Indonesians on AWS

Opportunity | Delivering Omni-Channel Communications to Millions of Indonesians

Based in Jakarta, Indonesia, InMotion Inovasi Teknologi develops software solutions to help companies improve customer engagement.
The business has more than 50 employees and focuses on industries such as finance, education, and the public sector.

InMotion Inovasi Teknologi is an Indonesian-based technology company that builds software solutions for customer engagement across digital channels. As part of its continued development, the company migrated around 1,000 scripts from Amazon EC2 to Amazon CloudFront, leveraging Amazon S3 to distribute static content. The migration transformed the scalability of its applications, reducing response times for millions of customer interactions and lowering costs: the company reduced server costs by 10 percent and improved application performance by 30 percent, and it now seamlessly supports more Indonesian enterprises, distributing millions of messages in Bahasa Indonesia.

Benefits: 10% decrease in server instance costs; 30% faster performance of web dashboards; chatbot response times reduced to under 3 seconds; 99.95% application availability; 1.5x more customer engagements.

More than two hundred businesses in Indonesia use 3Dolphins applications by InMotion Inovasi Teknologi (InMotion) to engage with customers across digital channels. InMotion leverages the power of artificial intelligence to offer solutions such as its 3Dolphins Social Relationship Management (SRM) application to enhance customer engagement and the 3Dolphins Service SRM application for omni-channel customer service. Meanwhile, businesses such as banks, financing companies, educational institutions, and automotive industries also use the 3Dolphins Sales SRM application to convert conversations into sales opportunities and the 3Dolphins Chatbot SRM service to answer frequently asked questions.

When InMotion developed 3Dolphins in 2015, it chose to run the applications and chatbot service on Amazon Web Services (AWS), using Amazon Elastic Compute Cloud (Amazon EC2) instances to provide the optimal amount of compute performance for varying workloads.

The company moved approximately 1,000 scripts for its 3Dolphins applications and chatbot service from Amazon EC2 instances to Amazon CloudFront, going live in two weeks. InMotion’s founders engaged with the company’s AWS account team, who proposed offloading the scripts that facilitated client-server communication into Amazon CloudFront. “Our AWS team was proactive as always, listening to our objectives and providing the right solutions,” Hastomo says.

AWS Services Used: Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud, designed to make web-scale cloud computing easier for developers. Amazon Elastic Kubernetes Service (Amazon EKS) gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises.

Thanks to the speed and scalability of Amazon CloudFront, InMotion is helping large enterprises communicate with millions of customers in Bahasa Indonesia in a timelier manner. Today, around 50 of InMotion’s customers are using the 3Dolphins SRM suite to distribute over 15 million messages a day in Bahasa Indonesia as part of their customer engagement programs. InMotion also integrated Amazon CloudFront with Amazon Simple Storage Service (Amazon S3), where it stores static assets such as imagery for web dashboards and chatbot interfaces.
Amazon CloudFront distributes the static content, caching the images at edge locations to reduce load time and protecting resources by checking content requests against access control lists. The adoption of Amazon CloudFront has also increased application and service availability. Comments Hastomo, “Previously, we experienced occasional downtime, or some functionality wouldn’t work. Amazon CloudFront ensures scripts are continuously processed without any issues, improving reliability.”

Solution | Reducing Costs and Improving Performance with Amazon CloudFront

By migrating the scripts to Amazon CloudFront, InMotion decreased its Amazon EC2 costs by 10 percent. Amazon CloudFront’s caching service also reduces the number of requests served by the Amazon EC2 instances and lowers latency, improving application performance. “By leveraging Amazon CloudFront, our Service SRM web dashboard responds 30 percent faster and our chatbot answers queries in less than three seconds,” says Hastomo. Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
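The case study describes this architecture but not its configuration. As a rough sketch only, a minimal CloudFront distribution with an S3 origin could be created with boto3 roughly as follows; the bucket name and origin ID are placeholders, and the cache policy ID shown is the public AWS managed "CachingOptimized" policy:

import time
import boto3

cloudfront = boto3.client("cloudfront")

# Minimal distribution: serve static assets from an S3 bucket, caching at edge locations.
response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # unique token that makes the request idempotent
        "Comment": "Static assets for web dashboards and chatbot interfaces",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "static-assets-s3",  # placeholder origin ID
                    "DomainName": "example-static-assets.s3.amazonaws.com",  # placeholder bucket
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "static-assets-s3",
            "ViewerProtocolPolicy": "redirect-to-https",
            # AWS managed "CachingOptimized" cache policy
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
print(response["Distribution"]["DomainName"])  # the distribution's edge domain name

In practice you would also restrict the bucket so that only the distribution can read it, which matches the case study's description of checking content requests against access control lists.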
As part of its continual improvement process, InMotion looked to optimize its Amazon EC2 architecture in 2021. Chief executive officer Sonny Hastomo, who co-developed the 3Dolphins solutions, says, “We wanted to offload script files from virtual server instances to lower our cloud costs as well as boost the performance of our applications.” In addition, the business sought to increase application scalability. Hastomo explains, “Our competitors’ solutions often lack the flexibility to easily scale to support millions of engagements in Bahasa Indonesia. We wanted to fill that gap, offering enterprises omni-channel communications for mass audiences, and opening new business opportunities for ourselves.”

Outcome | Distributing Millions of Messages in Bahasa Indonesia

Delivering messages on this scale in Bahasa Indonesia has given InMotion a significant advantage over competitors who also offer localized engagement tools. Hastomo estimates that 3Dolphins can handle workloads that are 1.5 times larger than those of its competitors. “This has helped us to secure business with Indonesian enterprises,” he explains. “We’re able to support a business that has 60 million customers and receives around 3 million website visitors each month.”

Following its success with Amazon CloudFront, InMotion plans to continue developing its AWS architecture. It aims to containerize its applications to use resources more efficiently and further reduce costs through Amazon Elastic Kubernetes Service (Amazon EKS). “We continue working with AWS because it helps us deliver better software services to our customers in ways that are more cost effective to our business,” Hastomo concludes." Insightful.Mobi Decreases Costs and Enhances Dashboard Performance Using Amazon QuickSight _ Case Study _ AWS.txt,"Insightful.Mobi Decreases Costs and Enhances Dashboard Performance Using Amazon QuickSight

Learn how Insightful.Mobi decreased costs, enhanced performance, and increased revenue using Amazon QuickSight.

Benefits: achieved significant cost savings; enhanced agility, performance, flexibility, and scalability; increased sales and revenues and enhanced return on investment; improved customer experience and satisfaction; accelerated creation of dashboards from 2–3 weeks to less than 1 day.

About Insightful.Mobi

Auckland-based startup Insightful.Mobi delivers the next generation of field-based sales and merchandising tools for consumer goods companies that sell or provide services to retailers such as supermarkets. A software-as-a-service firm, Insightful.Mobi creates and embeds high-performance, interactive dashboards to empower its customers to efficiently manage their sales and merchandising workforce. Insightful.Mobi’s seamless integration of data into its customer web portal and application enhances productivity, delivering valuable insights while maintaining ease of use. The company offers integrated cloud and mobile tools for customer relationship management in consumer goods, facilitating smarter field sales, promotions, and merchandising through near-real-time insights and business intelligence.

AWS Services Used: Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.

Opportunity | Using AWS to Navigate Consumer Goods Data for Insightful.Mobi

With promotional costs making up nearly one-quarter of the overall price of products, it’s critical for businesses to make sure they have the right items in the right stores and on the right shelves. Addressing this need can be challenging in the retail and consumer goods marketplace, which is fast paced and constantly changing.
The chain between manufacturers and buyers includes many links, from suppliers, distributors, and franchisees to marketers, promoters, and floor salespeople. Collectively, their interactions produce a volume, variety, and complexity of data that is difficult to navigate effectively. A firm might offer hundreds of unique products, each of which must be tracked by store, price, display space, and other variables related to promotion and distribution. By analyzing this data, businesses can better understand their customers and make informed decisions on their placement and promotions. Given the complexity of today’s marketplace and the vast amounts of data coming from different sources, firms need all the insights they can get about consumers’ buying decisions. Such data helps guide marketing and sales strategies, so Insightful.Mobi turned to Amazon Web Services (AWS) to help its customers gain increased visibility into their data.

To deliver insights to consumer goods firms, Insightful.Mobi used to rely on traditional, server-based reporting tools to manage data, but such methods are too slow, time-consuming, and expensive for today’s complex supply chains. Insightful.Mobi needed to offer agility beyond the typical customer relationship management functionality in its field sales products so that its customers could analyze their data quickly and cost-effectively. For its data warehouse infrastructure, the firm already relied on Amazon Redshift, a service that provides data warehousing reinvented for an ever-changing data landscape. “We already knew the AWS way of doing things, so we built on our experience,” says Paul Miller, chief executive officer (CEO) and cofounder of Insightful.Mobi. So in 2021, Insightful.Mobi decided to migrate its visualization and insights layer to Amazon QuickSight, which powers data-driven organizations with unified business intelligence (BI) at hyperscale: all users can meet varying analytic needs from the same source of truth through modern interactive dashboards, paginated reports, embedded analytics, and natural language queries. Using QuickSight, Insightful.Mobi quickly and cost-effectively provides embedded interactive dashboards and analytics to its clients so that they can derive insights, increase productivity, and realize efficiencies right away.

Solution | Migrating to Agile Cloud Dashboards Using QuickSight

The transition was straightforward and simple. “It was very well structured,” says Miller. “AWS provided support as we put together a proof of concept to help our tech people understand how to implement the solution into the technology stack and systems.” To get the most out of QuickSight, the company used online videos and training workshops with product specialists who answered specific questions the team had. QuickSight offers a dashboard and reporting layer that has native, highly secure connectivity to Amazon Redshift.

Insightful.Mobi also used SPICE (Superfast, Parallel, In-memory Calculation Engine), the robust in-memory engine designed to work with Amazon QuickSight to rapidly perform advanced calculations and serve data. With these tools, Insightful.Mobi can now help customers by creating and publishing dashboards with insights powered by machine learning. Insightful.Mobi’s customers can quickly access these dashboards from any device to look for patterns and outliers, leading to a better understanding and use of their data. “Amazon QuickSight is serverless, scalable, and superfast, so our customers can slice and dice their data in lots of different ways according to what suits their needs,” Miller says.
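The case study doesn’t show how those dashboards reach the customer portal. One way to do this kind of embedding is the QuickSight embedding API; the following is a minimal sketch using anonymous-user embedding, where the account ID, region, namespace, and dashboard ID are all placeholder assumptions:

import boto3

quicksight = boto3.client("quicksight")

# Generate a short-lived URL that embeds a dashboard for a viewer who is not a QuickSight user.
ACCOUNT_ID = "111122223333"                  # placeholder AWS account ID
DASHBOARD_ID = "field-sales-dashboard"       # placeholder dashboard ID
dashboard_arn = f"arn:aws:quicksight:ap-southeast-2:{ACCOUNT_ID}:dashboard/{DASHBOARD_ID}"

response = quicksight.generate_embed_url_for_anonymous_user(
    AwsAccountId=ACCOUNT_ID,
    Namespace="default",
    AuthorizedResourceArns=[dashboard_arn],
    ExperienceConfiguration={"Dashboard": {"InitialDashboardId": DASHBOARD_ID}},
    SessionLifetimeInMinutes=60,
)
embed_url = response["EmbedUrl"]  # hand this URL to an iframe in the web portal

The returned URL expires with the session, so the portal requests a fresh one per viewer; registered-user embedding is the alternative when viewers are provisioned QuickSight users.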
Outcome | Making Every Effort Count

Among the most important benefits Insightful.Mobi has achieved using its AWS technical stack are enhanced productivity and cost-efficiency. “On AWS, we’ve reduced the complexity of our production process so that the customer can be front and center,” Miller says. Moreover, analytics and reporting used to require the combined labor of both a business analyst and a core developer. “Previously, it would take us 2–3 weeks to make a new dashboard or set of reports for our customers,” Miller says. “Using Amazon QuickSight, a business analyst working directly with a customer can create new dashboards and reports in less than 1 day.”

The benefits Insightful.Mobi has gained are passed on to its customers. For example, one of New Zealand’s biggest frozen-food brands simplified its sales team’s jobs, reducing the time it spent on administration. Similarly, a major beverage company reported a 26 percent increase in its sales representatives’ productivity. Using AWS technology, Insightful.Mobi can create precisely the products that its customers want. Happier customers mean higher revenues and, when paired with lower costs, an enhanced return on investment for Insightful.Mobi.

Insightful.Mobi is poised to keep growing. The grocery market in New Zealand is currently worth $14 billion, and Australia’s market, where Insightful.Mobi plans to expand, is five times as large. With QuickSight, Insightful.Mobi quickly and reliably provides its corporate customers with all the essential sales insights that they need. “Using AWS tools,” Miller says, “there is no limit to what we can do. We can pretty much do it all.”" Insilico Case Study _ Life Sciences _ AWS.txt,"Insilico Achieves 99% Cost Savings in Drug Candidate Discovery Using AWS

Due to the volumes of experimental and methodical data processed by Insilico’s platforms, the company has extremely high graphics processing unit (GPU) requirements. It turned to AWS to find the flexibility and scalability it needed, available on demand. Both PandaOmics and Chemistry42 run on Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure, resizable compute capacity in the cloud.

Headquartered in Hong Kong, Insilico has over 150 collaborators worldwide. As a result, the platform architecture needed to be scalable and easily accessible. Insilico accomplished this by hosting the relevant data in the cloud using Amazon Simple Storage Service (Amazon S3), an object storage service.

“Using AWS has allowed us to easily scale up our business and facilitate cross-border collaboration,” adds Kamya. This collaborative element proved particularly helpful for the company’s COVID-19 project, which involved designing lead compounds aimed at treating SARS-CoV-2. Those compounds are now close to reaching studies that would enable Insilico to submit an Investigational New Drug (IND) application to the U.S. Food and Drug Administration (FDA).
AWS Makes ML Affordable and Globally Accessible at Every Step of the Drug Pipeline

“AWS gives us access to the computation power we need,” says Qingsong Zhu, Ph.D., Insilico’s chief operating officer. “As a startup, it’s been key for us to have access to powerful servers without needing to maintain huge computing clusters on-premises ourselves.” “Although we are a startup, we have a global team, and AWS allows us to coordinate our team globally without worrying about where we locate our servers,” adds Zhu.

The drug development process is simultaneously urgent and laborious. As of 2010, it took an average of 4.5 years and cost an average of $674 million to bring a single drug from target hypothesis to candidate validation, and those numbers have risen steadily in the past decade. Every step presents unique challenges that require specialized expertise, sometimes causing the process to be fragmented and inefficient. To help biopharma and biotechnology companies streamline and accelerate their drug discovery and development pipelines, Insilico Medicine developed a robust suite of machine learning (ML)-powered tools to aid in target identification, molecule design, and lead optimization.

Insilico develops ML-powered tools widely accessible to the pharma industry through its suite of software-as-a-service (SaaS) platforms, including PandaOmics for accelerated identification of promising drug targets and Chemistry42, which leverages experimental data, ML algorithms, and physics-based methods to design and optimize novel compounds. The company has validated these platforms with its own internal drug pipelines to demonstrate concrete cost and time savings.

About Insilico Medicine

Insilico Medicine is a small biotechnology startup that has developed AI platforms for drug discovery. The company combines expertise in machine learning, bioinformatics, and chemistry to save cost and time at multiple stages of drug development.

Benefits of AWS: eliminated bottlenecks from drug pipelines; democratized access to computational tools; fostered connection between different actors within the pharmaceutical industry; reduced average drug discovery costs by over $650 million; accelerated the drug development process by 3 years compared to the average.

Going forward, Insilico Medicine plans to become more involved in the later stages of the drug research and development process. The company intends to incorporate even more AWS tools to enable growth and maximize the potential of AI to revolutionize the pharmaceutical industry.

Towards a Connected, Streamlined Pharmaceutical Industry

“If you’re a biologist, you shouldn’t be afraid of doing bioinformatics. If you’re a chemist, you shouldn’t be afraid of using computational tools,” says Kamya. “It was important to us at Insilico that we created a platform that is straightforward and easy to use, giving reliable outcomes regardless of scientific background.
We want to democratize the use of AI for drug discovery and increase interoperability between different departments in the pharmaceutical industry.”

Insilico’s drug discovery engine, built on Amazon Web Services (AWS), sits at the center of the company’s portfolio. The engine uses millions of data samples and multiple data types to discover disease biomarkers, identify the most promising targets, and design novel small molecule modulators that are specific to the target. Insilico layers advanced artificial intelligence (AI) and ML capabilities to perform these analyses and support all steps of the pharma research and development process.

Each Insilico platform accelerates a specific part of the drug development process, but when interconnected they can save additional time by eliminating bottlenecks. The platforms’ ease of use democratizes access to sophisticated bioinformatics, making it simpler for different parties to use the same tools and coordinate analyses.

“Using our PandaOmics and Chemistry42 platforms built on AWS, we were able to bring a fibrosis drug candidate from target discovery to compound validation in under 18 months for just $2.6 million,” says Petrina Kamya, Ph.D., Insilico’s global business development director for Chemistry42." Intelligently Search Media Assets with Amazon Rekognition and Amazon ES _ AWS Architecture Blog.txt,"AWS Architecture Blog

Intelligently Search Media Assets with Amazon Rekognition and Amazon ES

by Sridhar Chevendra, Shitij Agarwal, and Gurinder Singh | on 14 JUL 2021

Media assets have become increasingly important to industries like media and entertainment, manufacturing, education, social media applications, and retail. This is largely due to innovations in digital marketing, mobile, and ecommerce. Successfully locating a digital asset like a video, graphic, or image reduces costs related to reproducing or re-shooting. An efficient search engine is critical to quickly delivering something like the latest fashion trends. This in turn increases customer satisfaction, builds brand loyalty, and helps increase businesses’ online footprints, ultimately contributing towards revenue. This blog post shows you how to build automated indexing and search functions using AWS serverless managed artificial intelligence (AI)/machine learning (ML) services. This architecture provides high scalability, reduces operational overhead, and scales out/in automatically based on demand, with a flexible pay-as-you-go pricing model.

Automatic tagging and rich metadata with Amazon ES

Asset libraries for images and videos are growing exponentially. With Amazon Elasticsearch Service (Amazon ES), this media is indexed and organized, which is important for efficient search and quick retrieval. Adding correct metadata to digital assets based on enterprise-standard taxonomy will help you narrow down search results. This includes information like media formats, but also richer metadata like location, event details, and so forth. With Amazon Rekognition, an advanced ML service, you do not need to tag and index these media assets.
This automatic tagging and organization frees you up to gain insights like sentiment analysis from social media. Figure 1 is tagged using Amazon Rekognition. You can see how rich metadata (Apparel, T-Shirt, Person, and Pills) is extracted automatically. Without Amazon Rekognition, you would have to manually add tags and categorize the image. This means you could only do a keyword search on what’s manually tagged. If the image was not tagged, then you likely wouldn’t be able to find it in a search.

Figure 1. An image tagged automatically with Amazon Rekognition

Data ingestion, organization, and storage with Amazon S3

As shown in Figure 2, use Amazon Simple Storage Service (Amazon S3) to store your static assets. It provides high availability and scalability, along with unlimited storage. When you choose Amazon S3 as your content repository, multiple data providers are configured for data ingestion for future consumption by downstream applications. In addition to providing storage, Amazon S3 lets you organize data into prefixes based on the event type and captures S3 object mutations through S3 event notifications.

Figure 2. Solution overview diagram

S3 event notifications are invoked for a specific prefix, suffix, or combination of both. They integrate with Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS), and AWS Lambda as targets. (Refer to the Amazon S3 Event Notifications user guide for best practices.) S3 event notification targets vary across use cases. For media assets, Amazon SQS is used to decouple the new data objects ingested into S3 buckets from downstream services. Amazon SQS provides flexibility over the data processing based on resource availability.

Data processing with Amazon Rekognition

Once media assets are ingested into Amazon S3, they are ready to be processed. Amazon Rekognition determines the entities within each asset. Amazon Rekognition then extracts the entities in JSON format and assigns a confidence score. If the confidence score is below the defined threshold, use Amazon Augmented AI (A2I) for further review. A2I is an ML service that helps you build the workflows required for human review of ML predictions. Amazon Rekognition also supports custom modeling to help identify entities within the images for specific business needs. For instance, a campaign may need images of products worn by a brand ambassador at a marketing event. Then they may need to further narrow their search down by the individual’s name or age demographic.

Using our solution, a Lambda function invokes Amazon Rekognition to extract the entities from the ingested assets. Lambda continuously polls the SQS queue for any new messages. Once a message is available, the Lambda function invokes the Amazon Rekognition endpoint to extract the relevant entities.
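The post describes this flow without showing the function itself. A minimal sketch of such a handler, assuming an SQS event source mapping on the Lambda function and an illustrative confidence threshold of 70, might look like this:

import json
import boto3

rekognition = boto3.client("rekognition")

def handler(event, context):
    """Invoked by Lambda's SQS event source; each record body is an S3 event notification."""
    for record in event["Records"]:
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            # Ask Amazon Rekognition for labels above the minimum confidence threshold
            response = rekognition.detect_labels(
                Image={"S3Object": {"Bucket": bucket, "Name": key}},
                MinConfidence=70,
            )
            labels = [label["Name"] for label in response["Labels"]]
            # Hand the extracted labels to the indexing step (sketched later in this post)
            print(f"Extracted labels for {key}: {labels}")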
The following is a sample output from a detect_labels API call in Amazon Rekognition, which is transformed and written to the downstream search engine:

{'Labels': [{'Name': 'Clothing', 'Confidence': 99.98137664794922, 'Instances': [], 'Parents': []}, {'Name': 'Apparel', 'Confidence': 99.98137664794922, 'Instances': [], 'Parents': []}, {'Name': 'Shirt', 'Confidence': 97.00833129882812, 'Instances': [], 'Parents': [{'Name': 'Clothing'}]}, {'Name': 'T-Shirt', 'Confidence': 76.36670684814453, 'Instances': [{'BoundingBox': {'Width': 0.7963646650314331, 'Height': 0.6813027262687683, 'Left': 0.09593021124601364, 'Top': 0.1719706505537033}, 'Confidence': 53.39663314819336}], 'Parents': [{'Name': 'Clothing'}]}], 'LabelModelVersion': '2.0', 'ResponseMetadata': {'RequestId': '3a561e82-badc-4ba0-aa77-39a13f1bb3a6', 'HTTPStatusCode': 200, 'HTTPHeaders': {'content-type': 'application/x-amz-json-1.1', 'date': 'Mon, 17 May 2021 18:32:27 GMT', 'x-amzn-requestid': '3a561e82-badc-4ba0-aa77-39a13f1bb3a6', 'content-length': '542', 'connection': 'keep-alive'}, 'RetryAttempts': 0}}

As shown, the Lambda function submits an API call to Amazon Rekognition, where a T-shirt image in .jpeg format is provided as the input. Based on your confidence score threshold preference, Amazon Rekognition will prompt you to initiate a human review using Amazon A2I. It will also prompt you to use Amazon Rekognition Custom Labels to train custom models. Lambda then identifies and arranges the labels and updates the specified index.

Indexing with Amazon ES

Amazon ES is a managed search engine service that provides enterprise-grade search engine capability for applications. In our solution, assets are searched based on entities that are used as metadata to update the index. Amazon ES is hosted as a public endpoint or a VPC endpoint for secure access within the specified AWS account. Labels are identified and marked as tags, which are assigned to .jpeg formatted images. The following sample output shows a query on one of the tags issued against an Amazon ES cluster.

Query: curl -XGET https://<domain-endpoint>/<_IndexName>/_search?q=T-Shirt

Output: {"took":140,"timed_out":false,"_shards":{"total":5,"successful":5,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":0.05460011,"hits":[{"_index":"movies","_type":"_doc","_id":"15","_score":0.05460011,"_source":{"fileName":"s7-1370766_lifestyle.jpg","objectTags":["Clothing","Apparel","Sailor Suit","Sleeve","T-Shirt","Shirt","Jersey"]}}]}}

In addition to photos, Amazon Rekognition also detects labels in videos. It can recognize labels and identify characters and entities. These are then added to Amazon ES to enhance search capability. This allows users to skip to specific parts of a video for quick searchability. For instance, a marketer may need images of cashmere sweaters from a fashion show that was streamed and recorded. Once the raw video clip is identified, it is then converted using Amazon Elastic Transcoder to play back on mobile devices, tablets, web browsers, and connected televisions. Elastic Transcoder is a highly scalable and cost-effective media transcoding service in the cloud. Segmented output renditions are created for delivery using multiple protocols to compatible devices.
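Returning to the indexing step: the post shows the query but not the index update itself. Assuming a placeholder domain endpoint and index name, and omitting SigV4 request signing for brevity, the Lambda function's update might be sketched as:

import requests  # in practice, sign requests with SigV4 (e.g., via requests-aws4auth)

ES_ENDPOINT = "https://<domain-endpoint>"  # placeholder Amazon ES endpoint
INDEX_NAME = "media-assets"                # placeholder index name

def index_asset(doc_id, file_name, labels):
    # Store the file name and its Rekognition labels as searchable tags
    document = {"fileName": file_name, "objectTags": labels}
    response = requests.put(
        f"{ES_ENDPOINT}/{INDEX_NAME}/_doc/{doc_id}",
        json=document,
        headers={"Content-Type": "application/json"},
    )
    response.raise_for_status()
    return response.json()

A document written this way is exactly what the T-Shirt query above matches on, via the objectTags field.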
Conclusion

This blog describes AWS services that can be applied to a diverse set of use cases for tagging and efficient search of images and videos. You can build automated indexing and search using AWS serverless managed AI/ML services. They provide high scalability, reduce operational overhead, and scale out/in automatically based on demand, with a flexible pay-as-you-go pricing model. To get started, use these references to create your own sample architectures: Amazon S3, Amazon Elasticsearch, Amazon Rekognition, AWS Lambda.

Sridhar Chevendra is a Solutions Architect with Amazon Web Services. He works with digital native business customers to build secure, scalable, and resilient architectures in the AWS Cloud. Sridhar enjoys the outdoors and likes to read about macroeconomics.

Shitij Agarwal is a Partner Solutions Architect at AWS. He creates joint solutions with strategic ISV partners to deliver value to customers. When not at work, he is busy exploring New York City and the hiking trails that surround it, and going on bike rides.

Gurinder Singh is a Solution Architect at AWS. He works with customers to design and implement a variety of solutions in the AWS Cloud. Gurinder enjoys landscaping and loves to go on long drives." Interactively fine-tune Falcon-40B and other LLMs on Amazon SageMaker Studio notebooks using QLoRA _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Interactively fine-tune Falcon-40B and other LLMs on Amazon SageMaker Studio notebooks using QLoRA

by Sean Morgan, Philipp Schmid, and Lauren Mullennex | on 29 JUN 2023

Fine-tuning large language models (LLMs) allows you to adjust open-source foundational models to achieve improved performance on your domain-specific tasks. In this post, we discuss the advantages of using Amazon SageMaker notebooks to fine-tune state-of-the-art open-source models. We utilize Hugging Face’s parameter-efficient fine-tuning (PEFT) library and quantization techniques through bitsandbytes to support interactive fine-tuning of extremely large models using a single notebook instance. Specifically, we show how to fine-tune Falcon-40B using a single ml.g5.12xlarge instance (4 A10G GPUs), but the same strategy works to tune even larger models on p4d/p4de notebook instances. Typically, the full-precision representations of these very large models don’t fit into memory on a single or even several GPUs. To support an interactive notebook environment to fine-tune and run inference on models of this size, we use a new technique known as Quantized LLMs with Low-Rank Adapters (QLoRA). QLoRA is an efficient fine-tuning approach that reduces memory usage of LLMs while maintaining solid performance. Hugging Face and the authors of the QLoRA paper have published a detailed blog post that covers the fundamentals and integrations with the Transformers and PEFT libraries.

Using notebooks to fine-tune LLMs

SageMaker comes with two options to spin up fully managed notebooks for exploring data and building machine learning (ML) models. The first option is fast start, collaborative notebooks accessible within Amazon SageMaker Studio, a fully integrated development environment (IDE) for ML.
You can quickly launch notebooks in SageMaker Studio, dial up or down the underlying compute resources without interrupting your work, and even co-edit and collaborate on your notebooks in real time. In addition to creating notebooks, you can perform all the ML development steps to build, train, debug, track, deploy, and monitor your models in a single pane of glass in SageMaker Studio. The second option is a SageMaker notebook instance, a single, fully managed ML compute instance running notebooks in the cloud, which offers you more control over your notebook configurations. For the remainder of this post, we use SageMaker Studio notebooks because we want to utilize SageMaker Studio’s managed TensorBoard experiment tracking with Hugging Face Transformer’s support for TensorBoard. However, the same concepts shown throughout the example code will work on notebook instances using the conda_pytorch_p310 kernel. It’s worth noting that SageMaker Studio’s Amazon Elastic File System (Amazon EFS) volume means you don’t need to provision a preordained Amazon Elastic Block Store (Amazon EBS) volume size, which is useful given the large size of model weights in LLMs.

Using notebooks backed by large GPU instances enables rapid prototyping and debugging without cold start container launches. However, it also means that you need to shut down your notebook instances when you’re done using them to avoid extra costs. Other options such as Amazon SageMaker JumpStart and SageMaker Hugging Face containers can be used for fine-tuning, and we recommend you refer to the following posts on the aforementioned methods to choose the best option for you and your team:

- Domain-adaptation Fine-tuning of Foundation Models in Amazon SageMaker JumpStart on Financial data
- Train a Large Language Model on a single Amazon SageMaker GPU with Hugging Face and LoRA

Prerequisites

If this is your first time working with SageMaker Studio, you first need to create a SageMaker domain. We also use a managed TensorBoard instance for experiment tracking, though that is optional for this tutorial. Additionally, you may need to request a service quota increase for the corresponding SageMaker Studio KernelGateway apps. For fine-tuning Falcon-40B, we use a ml.g5.12xlarge instance. To request a service quota increase, on the AWS Service Quotas console, navigate to AWS services, Amazon SageMaker, and select Studio KernelGateway Apps running on ml.g5.12xlarge instances.

Get started

The code sample for this post can be found in the following GitHub repository. To begin, we choose the Data Science 3.0 image and Python 3 kernel from SageMaker Studio so that we have a recent Python 3.10 environment to install our packages. We install PyTorch and the required Hugging Face and bitsandbytes libraries:

%pip install -q -U torch==2.0.1 bitsandbytes==0.39.1
%pip install -q -U datasets py7zr einops tensorboardX
%pip install -q -U git+https://github.com/huggingface/transformers.git@850cf4af0ce281d2c3e7ebfc12e0bc24a9c40714
%pip install -q -U git+https://github.com/huggingface/peft.git@e2b8e3260d3eeb736edf21a2424e89fe3ecf429d
%pip install -q -U git+https://github.com/huggingface/accelerate.git@b76409ba05e6fa7dfc59d50eee1734672126fdba

Next, we set the CUDA environment path using the CUDA runtime installed as a dependency of the PyTorch installation. This is a required step for the bitsandbytes library to correctly find and load the correct CUDA shared object binary.
# Add installed cuda runtime to path for bitsandbytes
import os
import nvidia

cuda_install_dir = '/'.join(nvidia.__file__.split('/')[:-1]) + '/cuda_runtime/lib/'
os.environ['LD_LIBRARY_PATH'] = cuda_install_dir

Load the pre-trained foundational model

We use bitsandbytes to quantize the Falcon-40B model into 4-bit precision so that we can load the model into memory on 4 A10G GPUs using Hugging Face Accelerate’s naive pipeline parallelism. As described in the previously mentioned Hugging Face post, QLoRA tuning is shown to match 16-bit fine-tuning methods in a wide range of experiments because model weights are stored as 4-bit NormalFloat, but are dequantized to the computation bfloat16 on forward and backward passes as needed.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "tiiuae/falcon-40b"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

When loading the pretrained weights, we specify device_map="auto" so that Hugging Face Accelerate will automatically determine which GPU to put each layer of the model on. This process is known as model parallelism.

# Falcon requires you to allow remote code execution. This is because the model uses a new architecture that is not part of transformers yet.
# The code is provided by the model authors in the repo.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    quantization_config=bnb_config,
    device_map="auto"
)

With Hugging Face’s PEFT library, you can freeze most of the original model weights and replace or extend model layers by training an additional, much smaller, set of parameters. This makes training much less expensive in terms of required compute. We set the Falcon modules that we want to fine-tune as target_modules in the LoRA configuration:

from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=[
        "query_key_value",
        "dense",
        "dense_h_to_4h",
        "dense_4h_to_h",
    ],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)

model = get_peft_model(model, config)
print_trainable_parameters(model)
# Output: trainable params: 55541760 || all params: 20974518272 || trainable%: 0.2648058910327664

Notice that we’re only fine-tuning 0.26% of the model’s parameters, which makes this feasible in a reasonable amount of time.

Load a dataset

We use the samsum dataset for our fine-tuning. Samsum is a collection of 16,000 messenger-like conversations with labeled summaries. The following is an example of the dataset:

{
    "id": "13818513",
    "summary": "Amanda baked cookies and will bring Jerry some tomorrow.",
    "dialogue": "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"
}

In practice, you’ll want to use a dataset that has knowledge specific to the task you are hoping to tune your model on. The process of building such a dataset can be accelerated by using Amazon SageMaker Ground Truth Plus, as described in High-quality human feedback for your generative AI applications from Amazon SageMaker Ground Truth Plus.
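The post leaves the dataset preprocessing to its companion repository, but the Trainer call in the next section expects tokenized lm_train_dataset and lm_test_dataset splits. A minimal sketch, assuming the model's own tokenizer and a simple prompt format (both assumptions, not the repository's exact code):

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token  # Falcon's tokenizer has no pad token by default

dataset = load_dataset("samsum")

def tokenize(example):
    # Concatenate each dialogue with its reference summary into one training sequence
    text = f"Summarize the chat dialogue:\n{example['dialogue']}\n---\nSummary:\n{example['summary']}"
    return tokenizer(text, truncation=True, max_length=512)

lm_train_dataset = dataset["train"].map(tokenize, remove_columns=dataset["train"].column_names)
lm_test_dataset = dataset["test"].map(tokenize, remove_columns=dataset["test"].column_names)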
Fine-tune the model

Prior to fine-tuning, we define the hyperparameters we want to use and train the model. We can also log our metrics to TensorBoard by defining the parameter logging_dir and asking the Hugging Face transformer to report_to="tensorboard":

bucket = "<bucket>"  # placeholder for your S3 bucket name
log_bucket = f"s3://{bucket}/falcon-40b-qlora-finetune"

import transformers

# We set num_train_epochs=1 simply to run a demonstration
trainer = transformers.Trainer(
    model=model,
    train_dataset=lm_train_dataset,
    eval_dataset=lm_test_dataset,
    args=transformers.TrainingArguments(
        per_device_train_batch_size=8,
        per_device_eval_batch_size=8,
        logging_dir=log_bucket,
        logging_steps=2,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        save_strategy="no",
        output_dir="outputs",
        report_to="tensorboard",
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

Monitor the fine-tuning

With the preceding setup, we can monitor our fine-tuning in real time. To monitor GPU usage in real time, we can run nvidia-smi directly from the kernel’s container. To launch a terminal running on the image container, simply choose the terminal icon at the top of your notebook. From here, we can use the Linux watch command to repeatedly run nvidia-smi every half second:

watch -n 0.5 nvidia-smi

In the preceding animation, we can see that the model weights are distributed across the 4 GPUs, and computation is distributed across them as layers are processed serially. To monitor the training metrics, we utilize the TensorBoard logs that we write to the specified Amazon Simple Storage Service (Amazon S3) bucket. We can launch our SageMaker Studio domain user’s TensorBoard from the AWS SageMaker console. After loading, you can specify the S3 bucket that you instructed the Hugging Face transformer to log to in order to view training and evaluation metrics.

Evaluate the model

After our model is finished training, we can run systematic evaluations or simply generate responses (here, input_ids is a tokenized dialogue prompt, such as one drawn from the test set):

tokens_for_summary = 30
output_tokens = input_ids.shape[1] + tokens_for_summary
outputs = model.generate(inputs=input_ids, do_sample=True, max_length=output_tokens)
gen_text = tokenizer.batch_decode(outputs)[0]
print(gen_text)

# Sample output:
# Summarize the chat dialogue:
# Richie: Pogba
# Clay: Pogboom
# Richie: what a s strike yoh!
# Clay: was off the seat the moment he chopped the ball back to his right foot
# Richie: me too dude
# Clay: hope his form lasts
# Richie: This season he's more mature
# Clay: Yeah, Jose has his trust in him
# Richie: everyone does
# Clay: yeah, he really deserved to score after his first 60 minutes
# Richie: reward
# Clay: yeah man
# Richie: cool then
# Clay: cool
# ---
# Summary:
# Richie and Clay have discussed the goal scored by Paul Pogba. His form this season has improved and both of them hope this will last long

After you are satisfied with the model’s performance, you can save the model:

trainer.save_model("path_to_save")

You can also choose to host it in a dedicated SageMaker endpoint.

Clean up

Complete the following steps to clean up your resources:

- Shut down the SageMaker Studio instances to avoid incurring additional costs.
- Shut down your TensorBoard application.
- Clean up your EFS directory by clearing the Hugging Face cache directory: rm -R ~/.cache/huggingface/hub

Conclusion

SageMaker notebooks allow you to fine-tune LLMs in a quick and efficient manner in an interactive environment. In this post, we showed how you can use Hugging Face PEFT with bitsandbytes to fine-tune Falcon-40B models using QLoRA on SageMaker Studio notebooks. Try it out, and let us know your thoughts in the comments section!
We also encourage you to learn more about Amazon generative AI capabilities by exploring SageMaker JumpStart, Amazon Titan models, and Amazon Bedrock.

About the Authors

Sean Morgan is a Senior ML Solutions Architect at AWS. He has experience in the semiconductor and academic research fields, and uses his experience to help customers reach their goals on AWS. In his free time, Sean is an active open-source contributor and maintainer, and is the special interest group lead for TensorFlow Addons.

Lauren Mullennex is a Senior AI/ML Specialist Solutions Architect at AWS. She has a decade of experience in DevOps, infrastructure, and ML. She is also the author of a book on computer vision. Her other areas of focus include MLOps and generative AI.

Philipp Schmid is a Technical Lead at Hugging Face with the mission to democratize good machine learning through open source and open science. Philipp is passionate about productionizing cutting-edge and generative AI machine learning models. He loves to share his knowledge on AI and NLP at various meetups such as Data Science on AWS, and on his technical blog." Introducing popularity tuning for Similar-Items in Amazon Personalize _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Introducing popularity tuning for Similar-Items in Amazon Personalize

by Julia Clark, Branislav Kveton, Nihal Harish, and Yifei Ma | on 08 JUN 2023

Amazon Personalize now enables popularity tuning for its Similar-Items recipe (aws-similar-items). Similar-Items generates recommendations that are similar to the item that a user selects, helping users discover new items in your catalog based on the previous behavior of all users and item metadata. Previously, this capability was only available for SIMS, the other Related_Items recipe within Amazon Personalize.

Every customer’s item catalog and the way that users interact with it are unique to their business. When recommending similar items, some customers may want to place more emphasis on popular items because they increase the likelihood of user interaction, while others may want to de-emphasize popular items to surface recommendations that are more similar to the selected item but are less widely known. This launch gives you more control over the degree to which popularity influences Similar-Items recommendations, so you can tune the model to meet your particular business needs.

In this post, we show you how to tune popularity for the Similar-Items recipe. We specify a value closer to zero to include more popular items, and specify a value closer to 1 to place less emphasis on popularity.

Example use cases

To explore the impact of this new feature in greater detail, let’s review two examples. [1] First, we used the Similar-Items recipe to find recommendations similar to Disney’s 1994 movie The Lion King (IMDb record). When the popularity discount is set to 0, Amazon Personalize recommends movies that have a high frequency of occurrence (are popular). In this example, the movie Seven (a.k.a. Se7en), which occurred 19,295 times in the dataset, is recommended at rank 3.0.
By tuning the popularity discount to a value of 0.4 for The Lion King recommendations, we see that the rank of the movie Seven drops to 4.0. We also see movies from the Children genre, like Babe, Beauty and the Beast, Aladdin, and Snow White and the Seven Dwarfs, recommended at a higher rank despite their lower overall popularity in the dataset.

Let’s explore another example. We used the Similar-Items recipe to find recommendations similar to Disney and Pixar’s 1995 movie Toy Story (IMDb record). When the popularity discount is set to 0, Amazon Personalize recommends movies that have a high frequency of occurrence in the dataset. In this example, we see that the movie Twelve Monkeys (a.k.a. 12 Monkeys), which occurred 6,678 times in the dataset, is recommended at rank 5.0.

By tuning the popularity discount to a value of 0.4 for Toy Story recommendations, we see that Twelve Monkeys is no longer recommended in the top 10. We also see movies from the Children genre, like Aladdin, Toy Story 2, and A Bug’s Life, recommended at a higher rank despite their lower overall popularity in the dataset.

Placing greater emphasis on more popular content can help increase the likelihood that users will engage with item recommendations. Reducing emphasis on popularity may surface recommendations that seem more relevant to the queried item, but may be less popular with users. You can tune the degree of importance placed on popularity to meet your business needs for a specific personalization campaign.

Implement popularity tuning

To tune popularity for the Similar-Items recipe, configure the popularity_discount_factor hyperparameter via the AWS Management Console, the AWS SDKs, or the AWS Command Line Interface (AWS CLI). The following is sample code setting the popularity discount factor to 0.5 via the AWS SDK:

response = personalize.create_solution(
    name="movie_lens-with-popularity-discount-0_5",
    recipeArn="arn:aws:personalize:::recipe/aws-similar-items",
    datasetGroupArn=dsg_arn,
    solutionConfig={
        "algorithmHyperParameters": {
            # set the preferred value of popularity discount here
            "popularity_discount_factor": "0.50"
        }
    }
)

The following screenshot shows setting the popularity discount factor to 0.3 on the Amazon Personalize console.
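Once the solution version is trained and deployed behind a campaign, retrieving similar items is a single runtime call. The campaign ARN and item ID below are illustrative placeholders, not values from this post:

import boto3

personalize_runtime = boto3.client("personalize-runtime")

# Request the top 10 items similar to a given item from the tuned solution's campaign
response = personalize_runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:111122223333:campaign/similar-items-tuned",  # placeholder
    itemId="362",  # placeholder item ID
    numResults=10,
)

for item in response["itemList"]:
    print(item["itemId"])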
Conclusion

With popularity tuning, you can now further refine the Similar-Items recipe within Amazon Personalize to control the degree to which popularity influences item recommendations. This gives you greater control over defining the end-user experience and what is included or excluded in your Similar-Items recommendations. For more details on how to implement popularity tuning for the Similar-Items recipe, refer to the documentation.

References

[1] Maxwell Harper and Joseph A. Konstan. 2015. The MovieLens Datasets: History and Context. ACM Transactions on Interactive Intelligent Systems (TiiS) 5, 4, Article 19 (December 2015), 19 pages. DOI=http://dx.doi.org/10.1145/2827872

About the Authors

Julia McCombs Clark is a Sr. Technical Product Manager on the Amazon Personalize team.

Nihal Harish is a Software Development Engineer on the Amazon Personalize team.

Yifei Ma is a Senior Applied Scientist at AWS AI Labs working on recommender systems. His research interests lie in active learning, sequential modeling, and online decision making.

Branislav Kveton is a Principal Scientist at AWS AI Labs. He proposes, analyzes, and applies algorithms that learn incrementally, run in real time, and converge to near-optimal solutions as the number of observations increases." Introducing the latest Machine Learning Lens for the AWS Well-Architected Framework _ AWS Architecture Blog.txt,"AWS Architecture Blog

Introducing the latest Machine Learning Lens for the AWS Well-Architected Framework

by Raju Patil, Ganapathi Krishnamoorthi, Michael Hsieh, Neil Mackin, and Dhiraj Thakur | on 05 JUL 2023

Today, we are delighted to introduce the latest version of the AWS Well-Architected Machine Learning (ML) Lens whitepaper. The AWS Well-Architected Framework provides architectural best practices for designing and operating ML workloads on AWS. It is based on six pillars: Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization, and, new to this revision, Sustainability. The ML Lens uses the Well-Architected Framework to outline the steps for performing an AWS Well-Architected review for your ML implementations.

The ML Lens provides a consistent approach for customers to evaluate ML architectures, implement scalable designs, and identify and mitigate technical risks. It covers common ML implementation scenarios and identifies key workload elements to allow you to architect your cloud-based applications and workloads according to the AWS best practices that we have gathered from supporting thousands of customer implementations. The new ML Lens joins a collection of Well-Architected lenses that focus on specialized workloads such as the Internet of Things (IoT), games, SAP, financial services, and SaaS technologies. You can find more information in AWS Well-Architected Lenses.

What is the Machine Learning Lens?

Let’s explore the ML Lens across ML lifecycle phases, as the following figure depicts.

Figure 1. Machine Learning Lens

The Well-Architected ML Lens whitepaper focuses on the six pillars of the Well-Architected Framework across six phases of the ML lifecycle. The six phases are:

- Defining your business goal
- Framing your ML problem
- Preparing your data sources
- Building your ML model
- Entering your deployment phase
- Establishing the monitoring of your ML workload

Unlike the traditional waterfall approach, an iterative approach is required to achieve a working prototype based on the six phases of the ML lifecycle. The whitepaper provides you with a set of established cloud-agnostic best practices in the form of Well-Architected Pillars for each ML lifecycle phase. You can also use the Well-Architected ML Lens wherever you are on your cloud journey. You can choose either to apply this guidance during the design of your ML workloads, or after your workloads have entered production as a part of the continuous improvement process.

What’s new in the Machine Learning Lens?

Sustainability Pillar: As building and running ML workloads becomes more complex and consumes more compute power, refining compute utilization and assessing your carbon footprint from these workloads grows to critical importance.
The new pillar focuses on long-term environmental sustainability and presents design principles that can help you build ML architectures that maximize efficiency and reduce waste.

Improved best practices and implementation guidance: Notably, enhanced guidance to identify and measure how ML will bring business value against ML operational cost to determine the return on investment (ROI).

Updated guidance on new features and services: A set of updated ML features and services announced to date have been incorporated into the ML Lens whitepaper. New additions include, but are not limited to, the ML governance features, the model hosting features, and the data preparation features. These and other improvements will make it easier for your development team to create well-architected ML workloads in your enterprise.

Updated links: Many document, blog, instructional, and video links have been updated to reflect a host of new products, features, and current industry best practices to assist your ML development.

Who should use the Machine Learning Lens?

The Machine Learning Lens is of use to many roles, including: business leaders, for a broader appreciation of the end-to-end implementation and benefits of ML; data scientists, to understand how the critical modeling aspects of ML fit in a wider context; data engineers, to help you use your enterprise's data assets to their greatest potential through ML; ML engineers, to implement ML prototypes into production workloads reliably, securely, and at scale; MLOps engineers, to build and manage ML operation pipelines for faster time to market; and risk and compliance leaders, to understand how ML can be implemented responsibly while maintaining compliance with regulatory and governance requirements.

Machine Learning Lens components

The Lens includes four focus areas:

1. The Well-Architected Machine Learning Design Principles: a set of best practices that are used as the basis for developing a Well-Architected ML workload.

2. The Machine Learning Lifecycle and the Well-Architected Framework Pillars: this considers all aspects of the Machine Learning Lifecycle and reviews design strategies to align to pillars of the overall Well-Architected Framework. The Machine Learning Lifecycle phases referenced in the ML Lens include: Business goal identification – identification and prioritization of the business problem to be addressed, along with identifying the people, process, and technology changes that may be required to measure and deliver business value. ML problem framing – translating the business problem into an analytical framing, i.e., characterizing the problem as an ML task, such as classification, regression, or clustering, and identifying the technical success metrics for the ML model. Data processing – garnering and integrating datasets, along with necessary data transformations needed to produce a rich set of features. Model development – iteratively training and tuning your model, and evaluating candidate solutions in terms of the success metrics as well as including wider considerations such as bias and explainability. Model deployment – establishing the mechanism to flow data through the model in a production setting to make inferences based on production data. Model monitoring – tracking the performance of the production model and the characteristics of the data used for inference.
The Well-Architected Framework Pillars are: Operational Excellence – ability to support ongoing development, run operational workloads effectively, gain insight into your operations, and continuously improve supporting processes and procedures to deliver business value. Security – ability to protect data, systems, and assets, and to take advantage of cloud technologies to improve your security. Reliability – ability of a workload to perform its intended function correctly and consistently, and to automatically recover from failure situations. Performance Efficiency – ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as system demand changes and technologies evolve. Cost Optimization – ability to run systems to deliver business value at the lowest price point. Sustainability – addresses the long-term environmental, economic, and societal impact of your business activities.

3. Cloud-agnostic best practices: these are best practices for each ML lifecycle phase across the Well-Architected Framework pillars irrespective of your technology setting. The best practices are accompanied by: Implementation guidance – the AWS implementation plans for each best practice with references to AWS technologies and resources. Resources – a set of links to AWS documents, blogs, videos, and code examples as supporting resources to the best practices and their implementation plans.

4. Indicative ML Lifecycle architecture diagrams to illustrate processes, technologies, and components that support many of these best practices.

What are the next steps?

The new Well-Architected Machine Learning Lens whitepaper is available now. Use the Lens whitepaper to confirm that your ML workloads are architected with operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability in mind. If you require support on the implementation or assessment of your Machine Learning workloads, please contact your AWS Solutions Architect or Account Representative. Special thanks to everyone across the AWS Solution Architecture, AWS Professional Services, and Machine Learning communities who contributed to the Lens. These contributions encompassed diverse perspectives, expertise, backgrounds, and experiences in developing the new AWS Well-Architected Machine Learning Lens.

Raju Patil is a Data Scientist in AWS Professional Services. He builds and deploys AI/ML solutions to help AWS customers overcome business challenges including computer vision, time-series forecasting, and predictive analytics use cases across financial services, telecom, and healthcare. He led data science teams in Advertising Technology and computer vision and robotics R&D initiatives. He enjoys photography, hiking, travel, and culinary exploration. Ganapathi Krishnamoorthi is a Senior ML Solutions Architect at AWS. Ganapathi provides prescriptive guidance to startup and enterprise customers, helping them design and deploy cloud applications at scale. He is specialized in machine learning and is focused on helping customers leverage AI/ML for their business outcomes. When not at work, he enjoys exploring the outdoors and listening to music. Michael Hsieh is a Principal AI/ML Specialist Solutions Architect. He solves business challenges using AI/ML for customers in the healthcare and life sciences industry.
As a Seattle transplant, he loves exploring the great Mother Nature the city has to offer, such as the hiking trails, kayaking in the SLU, and the sunset at Shilshole Bay. As a former long-time resident of Philadelphia, he has been rooting for the Philadelphia Eagles and Philadelphia Phillies. Neil Mackin is a Principal ML Strategist and leads the ML Solutions Lab team of strategists in EMEA. He works to help customers realize business value through deploying machine learning workloads into production and guides our customers on moving towards best practice with ML. Dhiraj Thakur is a Solutions Architect with Amazon Web Services. He works with AWS customers and partners to provide guidance on enterprise cloud adoption, migration, and strategy. He is passionate about technology and enjoys building and experimenting in the analytics and AI/ML space." iptiQ Case Study.txt,"AWS Customer Success Story: iptiQ by Swiss Re | Amazon Web Services 2022

In Europe, iptiQ launched its Property & Casualty insurance business entirely using Amazon Web Services (AWS), knowing that it needed to grow and develop its products at speed. "Using AWS has been pivotal to our success," says Claudio Pozzoli, chief technology officer (CTO) at iptiQ EMEA. "Our business is simplifying complex insurance practices, not IT maintenance. With our platform built on AWS, we can better support our partners, give them the products they need faster, and make the digital journey easier for their customers."

iptiQ uses a common code base for its European partners and has developed a single API to allow any of them to connect to its technology, regardless of the specific product and market combination. To accommodate individual requirements, iptiQ tailors what each partner gets and adapts its offering to the different categories of insurance that its partners' customers need. Using AWS, this flexibility is possible. The scale-up relies on a number of services, including Amazon Relational Database Service (Amazon RDS), which helped it set up, operate, and scale its relational database in the cloud with just a few clicks; Amazon Elastic Kubernetes Service (Amazon EKS); Amazon SageMaker; and AWS Lambda, a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. Pozzoli also values the ability to easily draw on developer resources. "There's a large engineering pool with extensive experience on AWS, which allows you to ramp up your teams quickly," he says. In addition, iptiQ has reduced partner onboarding time—from around 6–8 months, which is common with other insurers—to a few weeks.
iptiQ, a B2B2C insurer and division of Swiss Re, provides white-label, digital insurance solutions built on AWS that help its consumer-brand partners sell insurance policies that are complementary to their core businesses. These days, it's not unusual to find yourself buying a mobile phone subscription from a grocery store, signing up for a credit card from your favorite sports team, or even getting home insurance when you buy furniture. Out-of-category purchasing, as it's called, is becoming more common, especially for financial services. This innovative model is known as business-to-business-to-consumer (B2B2C) insurance, and the market for such services is set to almost triple in size between 2020 and 2031. iptiQ makes it easier for brands to sell insurance that complements their core products—while giving those companies' customers a better insurance-buying experience.

Using AWS, iptiQ has the availability, speed, and flexibility to keep innovating. Its solution makes life easier for consumers, both when buying insurance and making claims. Data protection is critical in a highly regulated sector such as insurance. "Using AWS, we can easily comply with the security standards in our industry," says Pozzoli. "We have peace of mind that our brand and reputation—and those of our partners—are fully protected."

For iptiQ, delivering a great experience is as important as ensuring that security is covered. So, if you find that your life is being made a little bit easier by the convenience of buying insurance services from your preferred brand, it might well be iptiQ that's powering it. And as the company continues its rapid growth—Gross Written Premium grew 95 percent in 2021—in Europe it's using AWS to gain the speed and availability it needs to deliver innovative insurance purchasing options to brands and consumers. iptiQ, a scale-up division of reinsurer Swiss Re, is making it easy for consumer brands to sell insurance to their customers.
As a white-label insurance provider, iptiQ forms partnerships with insurance intermediaries and leading companies such as home furnishings retailer IKEA and real-estate marketplace ImmoScout24. "Today, more than 50 partners embed or integrate our insurance solutions into their products or customer journeys," says Andreas Schertzinger, chief executive officer (CEO) at iptiQ EMEA. "This means that more than 1.6 million consumers benefit from our affordable and convenient products."" Isetan Mitsukoshi System Solutions seamlessly migrates databases to Amazon Aurora using Amazon DMA _ Isetan Mitsukoshi System Solutions Case Study _ AWS.txt,"Isetan Mitsukoshi System Solutions seamlessly migrates databases to Amazon Aurora using Amazon DMA 2023

Learn how Isetan Mitsukoshi System Solutions (IMS) modernized to Amazon Aurora with the help of Amazon Database Migration Accelerator (Amazon DMA), a solution that brings together AWS Database Migration Service (AWS DMS), AWS Schema Conversion Tool (AWS SCT), and AWS database experts to help customers migrate away from traditional commercial databases at fixed prices. IMS then used AWS DMS, a managed migration and replication service that helps move workloads to AWS quickly with minimal downtime and zero data loss, to migrate from Amazon RDS for Oracle to Amazon Aurora PostgreSQL.

Solution | Achieving digital transformation through a phased cloud migration

As a customer of AWS, IMS was already accustomed to AWS cloud services. In phase one, the goal was to quickly reduce the administrative burden of self-managing its on-premises system by re-platforming the databases to the cloud. The company migrated its commercial databases to Amazon Relational Database Service (Amazon RDS) for Oracle, a fully managed commercial database service that makes it easy to set up, operate, and scale Oracle deployments in the cloud.

IMS continued working on a system for controlling customer service initiatives like in-store specials, events, and brand promotions. This system encompassed social media management, contact center assistance, and policy effectiveness evaluation. With the help of DMA, IMS migrated the database within budget and in the 19 weeks scheduled. "We had complex applications with syntax unique to the existing database, so we assumed convoluted modifications would be needed," says Kazumi Saito of the ICT Operation Service of IMS. "We were worried about our English skills, but thanks to excellent, friendly support from the AWS Japan team, there weren't any problems." According to Kazumi Saito, DMA's Japanese documentation on migration procedures and program modifications was a major benefit. This material also included thorough information on operating, maintaining, and enhancing the new system.

Outcome | Establishing a path to long-term cost savings and business agility

IMS' database migration project is just the beginning. With over 50 databases remaining – both on-premises and lifted-and-shifted to the cloud – IMS plans to gradually migrate them to Amazon Aurora. IMS is delighted with Amazon DMA and plans to continue to use the DMA team for future migration efforts.
"Because our system is technologically advanced and many staff members have come and gone over its lifespan, we thought migration would be tough," says Masaki Saito, Manager of ICT Operation Service at IMS. "Completing the migration on schedule was exceptionally impressive. DMA provided quick solutions to challenges with clear explanations of the root causes and techniques to resolve them. We're able to focus on our own work with the system providing great performance with zero unplanned downtime since it launched." In addition, by moving from on-premises to Amazon Aurora, IMS was able to lower costs through performance efficiencies and breaking free from expensive licensing fees.

About Isetan Mitsukoshi System Solutions

Isetan Mitsukoshi System Solutions oversees information strategies and provides an extensive range of IT services for all department stores and companies in the Isetan Mitsukoshi Group. The company aims to fuse customer service with digital technology to create the ultimate customer experience as the core of department store DX initiatives. The Isetan Mitsukoshi Group, with a long history as one of the largest department store groups in Japan, recognized digital transformation is needed to keep up with modern demands. In 2019, Isetan Mitsukoshi System Solutions (IMS), which supports all IT usage across the Isetan Mitsukoshi Group, embarked on a multi-phase database migration and modernization journey with Amazon Web Services (AWS) to transform its digital infrastructure in order to drive innovation and better value for its customers. Understanding the value cloud services offer, IMS has embraced a cloud-first strategy.

Opportunity | Offloading legacy systems to concentrate on digital transformation

However, the end goal for IMS was to modernize to the cloud-native database, Amazon Aurora, which is designed for unparalleled high performance and availability at a global scale with full MySQL and PostgreSQL compatibility. "Amazon Aurora's affordable high-performance databases reduce expensive licensing costs," says Karasawa. In addition, IMS chose Amazon Aurora PostgreSQL-Compatible Edition for its performance capabilities to support a high-concurrency environment, its lower conversion cost from Oracle PL/SQL stored procedures, and its ease of use.
In 2022, IMS procured the help of Amazon DMA to modernize its databases to Amazon Aurora. According to Karasawa, high-load legacy systems impeded the progression of DX. Historically, IMS operated on-premises commercial databases, but over time, the increasing costs to operate these databases became a major issue. Database licensing costs alone were a significant portion of the group's total IT expenses, requiring significant annual renewal costs and operational workloads. The company therefore turned to AWS to help with its DX, shifting its databases to the cloud and alleviating the expense and time-consuming effort of self-managing databases.

Isetan Mitsukoshi System Solutions (IMS) develops IT strategies, provides solutions, and runs systems for the group's 44 department stores and companies and 17,000 employees. IMS enables the group's core department store business to receive mission-critical IT solutions like sales management, revenue control, and analytics, as well as digital transformation (DX) initiatives. "Our great variety of IT solutions ranges from operating business systems to using cutting-edge technology," says Takeshi Karasawa, General Manager of ICT Engineer Services at IMS. "With DX as one of our main focuses, we use digital technology to provide new value to customers, improve employee productivity, and preserve the heritage of our department stores. We're also committed to modernization to support smart devices, lowering operating workloads and expenses, and creating DX-friendly environments."

In phase two of its DX and modernization journey, IMS procured the help of Amazon DMA to accelerate its database migrations to AWS. Due to limited engineering resources and the added complexity needed to convert schemas and source code objects to be compatible with the target engine, DMA provided the technical expertise needed to quickly convert schemas and applications. Amazon DMA also provided a detailed playbook for IMS to migrate the databases to production. IMS first migrated its Electronic Data Interchange (EDI) system database, which controls transactions between department stores and trading partners. Although the system contained unique database engine code, the DMA team quickly resolved all problems to complete the migration in nine weeks as planned.
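The case study doesn't publish IMS's task definitions, but a heterogeneous migration like this one typically pairs AWS SCT schema conversion with an AWS DMS task that runs a full load plus change data capture so the source database stays live while changes stream to Aurora PostgreSQL. The sketch below shows what such a task could look like with boto3; every ARN and identifier is hypothetical:

import json
import boto3

dms = boto3.client("dms")

# Select every table in every schema; rule IDs and names are arbitrary.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

# Full load plus ongoing replication (CDC) keeps downtime minimal.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-postgresql",  # hypothetical
    SourceEndpointArn="arn:aws:dms:ap-northeast-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:ap-northeast-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:ap-northeast-1:123456789012:rep:RI",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)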
" Isha Foundation Delivers on its Mission for Millions by Transforming Content Delivery on AWS _ Case Study _ AWS.txt,"Isha Foundation Delivers on its Mission for Millions by Transforming Content Delivery on AWS 2022

Isha Foundation transformed its content delivery network using Amazon CloudFront, ensuring a reliable experience for its growing online user base, and supporting its mission of helping its users attain physical, mental, and spiritual well-being.

Opportunity | Supporting an Increase in Online Content and Users

In 2020, Isha Foundation moved most of its in-person programs, training, and events online. As a result, the foundation experienced an increase in the number of visitors attending events or watching videos of Sadhguru's teachings on its websites. The foundation also needed to support two million users globally who take part in online events occurring during Maha Shivaratri, the most significant event in India's spiritual calendar. To enhance content delivery to a growing number of video subscribers on its websites, and support thousands of additional concurrent users during events, the foundation migrated its on-premises IT environment to AWS. Isha Foundation's data center was supporting its websites, online educational resources, and an internal CRM solution. "Our data center limited our ability to scale in response to growth and this negatively impacted video quality and website response times. We chose AWS for improved scalability and ease of integration," says Sivanesan Mathivanan, Delivery Manager–DevOps at Isha Foundation.

The foundation relies on Amazon Elastic Kubernetes Service (Amazon EKS) to run and scale containerized Kubernetes applications, which eliminates the need for internal resources to manage Kubernetes clusters. Senthilkumar V, DevOps engineer at Isha Foundation, says, "Previously, we had to spend time and money upgrading hardware every few years, investing engineering and security resources into the data center, and managing the environment. Now, we can allocate more resources into enhancing our website and other applications instead."

When Sadhguru introduced Conscious Planet, an initiative to create a world where humans act more consciously, the foundation's main website maintained strong performance throughout the multi-day campaign. "During this initiative, we streamed new videos and articles and hosted multiple events throughout the world without outages or issues," Mathivanan says. "This helped us achieve our goal of encouraging people to find out more about what the movement is about."

Outcome | Enhancing the Content Experience for Millions on AWS

With its websites, CRM, and CMS running on AWS, the foundation has expanded its various educational and outreach activities by offering daily events and programs. Scaling to support a surge in web traffic during special online events is no longer an issue for the organization, with millions of concurrent users during Maha Shivaratri, as well as special guided meditations occurring monthly during the full moon. "We can scale our application environment to manage 10 times more traffic during online events because of AWS," says Senthilkumar V.
Solution | Scaling to Deliver Highly Available Content on Amazon CloudFront

Isha Foundation is a non-profit organization offering in-person and online courses and events to a growing number of users globally. To support this growth and securely deliver content with low latency, Isha Foundation migrated its customer relationship management (CRM), content management system (CMS), and website application to AWS. Isha Foundation is running its CRM, CMS, websites, and an internal log system on Amazon Elastic Compute Cloud (Amazon EC2) instances. It chose Amazon CloudFront as the content delivery network for its websites and CMS. On AWS, Isha Foundation leveraged Amazon CloudFront, a content delivery network built for high performance and security, and Amazon Elastic Kubernetes Service (Amazon EKS) for scalability. With these solutions, Isha Foundation is ensuring an improved online experience for its subscribers and supporting its mission of helping them attain overall well-being. Other AWS services used include Amazon DynamoDB, a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale.

The foundation's IT team can also better serve internal customers, including over 100 departments who often request software deployments. Resources can be deployed in minutes, whereas previously it took weeks to procure and install hardware or software for a department. Benefits include highly available content delivered to millions of users, scaling to support a 10x increase in web traffic, the elimination of hardware upgrade and data center maintenance costs, and resources deployed in minutes versus weeks.

"Our focus at Isha Foundation is to engage with spiritual seekers, meditators, and volunteers in new ways as we grow. By leveraging Amazon CloudFront and new AWS technologies, we can constantly provide our users with a spiritual experience no matter where they are," says Sivanesan Mathivanan, Delivery Manager–DevOps at Isha Foundation.

About Isha Foundation

In 1992, Indian yoga teacher and spiritual leader, Jagadish Vasudev, known popularly as Sadhguru, created a nonprofit organization called the Isha Foundation. The foundation is dedicated to raising human consciousness through yoga programs and inspiring projects for society, the environment, and education. What began as a grassroots organization grew into a worldwide movement, supported today by 11 million volunteers in 300 centers across the globe. Isha Foundation, based in India, is a nonprofit organization dedicated to raising human consciousness.
Guided by Sadhguru, the foundation offers a variety of programs that provide methods for anyone to attain physical, mental, and spiritual wellbeing. Its offerings allow participants to deepen their experience of life and reach their ultimate potential." Jefferies Manages Packaged Applications at Scale in the Cloud through Amazon RDS Custom for Oracle _ Jefferies Case Study _ AWS.txt,"Jefferies Manages Packaged Applications at Scale in the Cloud through Amazon RDS Custom for Oracle 2023

Jefferies needed a way to automate time-consuming database administration tasks. "We operate in six regions, and we have over 50 accounts, so that's 300 different custom engine versions that we would have to manage," says Manish Mohite, Senior Vice President and Global Head of Cloud Engineering at Jefferies. The company saw an opportunity to automate and enhance the resilience and efficiency of its data infrastructure by developing packaged applications in the cloud and turned to Amazon Web Services (AWS). Jefferies began using Amazon Relational Database Service Custom (Amazon RDS Custom) for Oracle, a managed database service for applications that require privileged access to underlying operating system and database environments. The company selected this service to achieve cloud scale for legacy, custom, and packaged applications that require licensing and security tooling. This effort took approximately 6–8 months. Other attractive features of Amazon RDS Custom for Oracle included Oracle licensing portability with bring your own license (BYOL), AWS-managed provisioning with shared responsibilities, automated backups and recoveries, and cloud scalability to quickly and simply adjust to business needs. As a financial services firm subject to many regulations, the integration with standard security and compliance tooling was another important feature because it made managing the process easier. Being able to use Jefferies' existing tooling—such as the IBM Guardium agent, Oracle Unified Directory to centrally manage Oracle identities, and the Atlassian DevOps toolset—with Amazon RDS Custom for Oracle was critical. "Amazon RDS Custom for Oracle really did provide us significant value in proposition, especially with these packaged applications that we could build in the cloud," says Mohite.

The company also uses a host of AWS services within a sophisticated solution architecture that it built for the use case. Among them are Amazon Route 53, a highly available domain name system web service that connects user requests to internal applications, and Amazon Simple Storage Service (Amazon S3), a service for building fast, powerful cloud-native apps that scale automatically. These offerings store and route traffic to and from Jefferies' applications.
About Jefferies

Jefferies is a leading global, full-service investment banking and capital markets firm that provides advisory, sales and trading, research, and wealth and asset-management services. With more than 40 offices around the world, Jefferies offers insights and expertise to investors, companies, and governments.

Industry-Wide Opportunity

Jefferies, a global investment banking firm, is modernizing its technology to advance innovation at the firm. By shifting from an application development to an application assembly and integration model, Jefferies' vision is to build highly agile teams that can deliver fast, customized insights to achieve better client outcomes. Chief information officer at Jefferies, Vikram Dewan, says, "Our goal is for cloud-native platforms to serve as the foundation for more than 90 percent of new modernized workloads at the firm."

Jefferies' Solution

Jefferies, a leading global investment banking firm, selected Amazon RDS Custom for Oracle to automate database administration tasks for legacy, custom, and packaged applications. The company consolidated hundreds of custom engine versions into one Amazon S3 bucket. To verify that it was using the appropriate level of automation for its business needs, Jefferies used AWS Systems Manager, a management service that makes it simple to automatically collect software inventory, to implement its capabilities through automation and increase the value of AWS services. "For example, on those Amazon RDS Custom for Oracle instances, we don't want to just tag the database. We want to tag everything with something more relevant, meaningful for us at Jefferies. Amazon RDS Custom for Oracle actually does all that automation for us," says Mohite. Jefferies also uses AWS Service Catalog to abstract those automation documents and Amazon CloudWatch to monitor and audit automations and infrastructure at scale. This means that Jefferies is able to improve its client interactions with greater speed and additional features.

Benefits of Using Amazon RDS Custom for Oracle

"In the context of Amazon RDS Custom, we can now provision and lifecycle Amazon RDS Oracle databases in hours compared to weeks and months in the past," says Mohite. By managing all engines through the Amazon S3 bucket and automating database setup and scaling using Amazon RDS Custom for Oracle, Jefferies has freed up time and money for more strategic activities. "We really use AWS for all the undifferentiated heavy lifting, and we can focus on the things that matter most to us," says Mohite.
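The article doesn't show Jefferies' automation, but the consolidation it describes maps naturally onto the custom engine version (CEV) workflow for Amazon RDS Custom for Oracle: installation media sit in one S3 bucket, and each CEV is registered from a manifest that points at those files. The following is a hedged sketch of that registration call; the bucket name, version string, KMS key, and file names are all hypothetical:

import json
import boto3

rds = boto3.client("rds")

# The manifest lists Oracle installation files already staged in S3.
manifest = {
    "mediaImportTemplateVersion": "2020-08-14",
    "databaseInstallationFileNames": ["V982063-01.zip"],  # hypothetical
}

# Register one custom engine version from the shared S3 bucket;
# RDS Custom instances are then launched from this CEV.
cev = rds.create_custom_db_engine_version(
    Engine="custom-oracle-ee",
    EngineVersion="19.my_cev1",                                      # hypothetical
    DatabaseInstallationFilesS3BucketName="jefferies-oracle-media",  # hypothetical
    KMSKeyId="arn:aws:kms:us-east-1:123456789012:key/EXAMPLE",       # hypothetical
    Manifest=json.dumps(manifest),
)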
" Kee Wah Bakery Brings Timeless Baked Goods to Modern Shoppers with Eshop on AWS _ Kee Wah Bakery Case Study _ AWS.txt,"Kee Wah Bakery Brings Timeless Baked Goods to Modern Shoppers with Eshop on AWS 2023

Kee Wah Bakery, a Hong Kong institution with almost 85 years of experience, migrated its ecommerce website—Kee Wah Eshop—to AWS to offer its customers consistent high-quality service both online and in-store. Kee Wah Bakery transformed the scalability and performance of its ecommerce site Kee Wah Eshop by migrating to AWS, supporting spikes in traffic and delivering a consistent online experience. Benefits include 5x higher website traffic supported, 12 percent less network latency, 900 percent better site performance, and a 2-month launch of Eshop on AWS.

About Kee Wah Bakery

Kee Wah Bakery is a household name and one of the biggest bakery brands in Hong Kong. The company produces a range of specialty baked goods including wedding cakes, mooncakes, and traditional Chinese pastries. Kee Wah Bakery is one of Hong Kong's oldest bakery businesses, well-known for its Cantonese mooncakes, popular during mid-autumn festival in September. The bakery, which first opened in 1938, has stores across Hong Kong and mainland China, Taiwan, Japan, and two locations in the United States.

Opportunity | Enhancing Eshop to Manage Traffic Spikes and Prevent Revenue Loss

As part of its ongoing development, Kee Wah Bakery decided to introduce an ecommerce sales channel and launched its localized website, Eshop, in Q4 2022. The site gives customers in Hong Kong an avenue to conveniently order baked goods for home delivery. Soon after launching Eshop, the company experienced a surge in online orders leading up to holidays, especially the mid-autumn festival and Chinese New Year. Site traffic could surge by five times, with the number of daily site visits rising to around 20,000.
These traffic surges often crashed Eshop, as its underlying IT infrastructure, partially on premises and partially on the cloud, was unable to scale sufficiently. The downtimes were concerning for the business, not only because of lost revenue but also for the potential reputational damage. Terry Lau, marketing manager at Kee Wah Bakery, explains, "We take pride in the quality of our products and strive to provide the best possible service to our customers. We couldn't let any issues with Eshop's performance undermine our hard work." To address these scalability and reliability issues, Kee Wah Bakery decided to migrate Eshop's on-premises servers to the cloud. The business engaged Amazon Web Services (AWS) in Hong Kong, who connected it with APN Premier Consulting Partner, Nextlink Technologies (Nextlink), to support the migration.

Solution | Improving Eshop's Performance, Security, and Availability with AWS

Nextlink worked with Kee Wah Bakery to develop a comprehensive plan to move the entire Eshop platform to AWS, which involved migrating both servers and the Magento ecommerce software. After conducting a thorough assessment of Eshop, including an analysis of traffic volumes, Nextlink proceeded to build the core AWS infrastructure for Eshop. This included replacing on-premises servers with Amazon Elastic Compute Cloud (Amazon EC2) instances and adopting Amazon Route 53 to manage website traffic. To better handle traffic surges, Nextlink implemented Elastic Load Balancing (ELB), which dynamically scales Eshop's load balancer in response to fluctuations in volume, preventing any individual server instance from becoming overloaded. Furthermore, the partner replaced its previous content delivery network with Amazon CloudFront, resulting in a 12 percent decrease in site latency. To safeguard against prevalent web exploits and bots that threaten security and performance, the Nextlink team deployed AWS WAF, a reliable web application firewall. Lau says, "We received exceptional support from Nextlink. The team demonstrated its proficiency and strong partnership with AWS in Hong Kong. Thanks to Nextlink's expertise and collaboration, we were able to complete the migration in under two months, which was a great accomplishment given our initial expectations for a much longer timeframe." Next, Kee Wah Bakery implemented Amazon Relational Database Service (Amazon RDS) and Amazon Elastic File System (Amazon EFS) to accelerate data read and write operations, boosting website performance by 900 percent. The bakery also utilized Amazon ElastiCache to swiftly retrieve frequently requested information and images.
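The case study doesn't describe Eshop's code, but a common way to use Amazon ElastiCache for "frequently requested information" like this is the cache-aside pattern: check Redis first, and fall back to the database only on a miss. The following is a minimal hedged sketch in Python; the endpoint, key scheme, and loader function are hypothetical:

import json
import redis

# Hypothetical ElastiCache for Redis endpoint for the Eshop cluster.
cache = redis.Redis(host="eshop-cache.example.cache.amazonaws.com",
                    port=6379, decode_responses=True)

def load_product_from_db(product_id):
    # Placeholder for the real Amazon RDS query.
    raise NotImplementedError

def get_product(product_id, ttl_seconds=300):
    """Cache-aside read: serve from Redis, fall back to the database."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    product = load_product_from_db(product_id)
    # Store with a TTL so stale entries expire on their own.
    cache.setex(key, ttl_seconds, json.dumps(product))
    return product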
Outcome | Driving Personalization and Global Expansion

Since transitioning Eshop to AWS in October 2022, Kee Wah Bakery has experienced zero website crashes, even during peak traffic periods such as the Chinese New Year celebrations in January 2023, when daily site visits reached 20,000 and concurrent connections averaged around 300. Says Lau, "I've received positive feedback across the business and from customers on the improved performance of our Eshop. Our customers in Hong Kong hold high expectations, and our standards are equally demanding, so it's satisfying to meet and exceed those expectations." According to Lau, Eshop is now better equipped to support the company's strategy of driving sales globally. Thanks to the scalable and reliable performance of AWS, Lau can confidently introduce new localized websites, such as its recently launched US website, even as it expands its bricks-and-mortar stores. Lau adds, "We're all aware of the immense potential for global sales through ecommerce. Providing a consistent, top-notch online experience on AWS to customers of Kee Wah Bakery, regardless of location, will be key to our success." Kee Wah leverages Amazon EC2 virtual server instances with Amazon Route 53 to manage traffic on Eshop, and ELB and Amazon CloudFront to support order spikes. By transitioning its site to AWS, Kee Wah Bakery has enhanced its customers' online shopping experience while driving personalization.

For Kee Wah Bakery, enhancing personalization is part of a broader set of goals that include increasing the business's analytical capabilities. In the next six months, the company plans to migrate to a cloud-based SAP S/4HANA solution running on AWS. This move will provide the bakery with real-time operational reporting for the first time. It will also maximize production efficiency and offer more tailored promotional campaigns through online sales and tools that offer deep insight into customer buying patterns. Following the migration of Eshop to AWS, Kee Wah Bakery is exploring opportunities to enhance its online sales channels, including integrating Eshop with popular messaging platforms like WhatsApp. Customers will be able to interact more easily with different stores and streamline processes like in-store order pickups. "Our online presence is entering a new era with AWS. We want to engage with customers more actively via the web and communicate with them in more personalized ways across digital channels," explains Lau.
" Kioxia uses AWS for better HPC performance and cost savings in semiconductor memory development and manufacturing _ Case Study _ AWS.txt,"Kioxia Uses AWS for Better HPC Performance and Cost Savings in Semiconductor Memory Development and Manufacturing 2023

Kioxia, a world-leading semiconductor manufacturer, uses High Performance Computing (HPC) in its product development and manufacturing processes. When facing issues with resource flexibility during HPC usage peaks, the company turned to Amazon Web Services (AWS). With AWS Direct Connect, Kioxia securely connected its on-premises environment to AWS and distributed jobs according to needs and loads, cutting costs by around seven percent. AWS services used include Amazon Elastic Compute Cloud (Amazon EC2), Amazon FSx for Lustre, AWS Auto Scaling, and AWS CloudFormation, which lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code.

To manufacture semiconductor memory, photomask patterns are transferred to semiconductor wafers by shining ultraviolet light at ultra-high speeds, akin to developing photographs. Photomasks correct the original circuit design and enable accurate manufacturing, producing circuits of several nanometers (nm) that are thinner than the wavelength of UV light (approx. 300 nm). This design requires iterative simulations with the computational power of HPC. Kioxia has a colossal on-premises HPC environment to meet a variety of computational needs. Through its many years of experience, the company has accumulated the knowledge to bolster resources to perfectly match market and technological needs. Despite this expertise, Kioxia still had difficulty preempting short-term peaks and problems by resourcing for them.

Kioxia turned to cloud services to solve the challenge. When the company spun off from Toshiba, it began researching many cloud solutions, focusing on AWS, which was already a popular key service. AWS's components and APIs are similar to Solaris and UNIX, which have long been popular as semiconductor memory development environments. Masanori Takahashi, Chief Specialist of Memory Lithography at Kioxia, decided the company's experienced engineers would find AWS familiar, allowing them to leverage their knowledge. This approach includes the cloud as the company embraces new technology with a forward-thinking attitude. "We also rigorously scrutinize security," says Toshiaki Kawabata, Chief Information and Security Officer and executive at Kioxia Holdings, emphasizing the importance of security measures. Security is another key factor in Kioxia's adoption of AWS. Kioxia takes security measures very seriously as it handles highly sensitive design data. The company reviewed a 256-item checklist based on the AWS Well-Architected Framework and identified essential points. "This confirmed that our established security measures and rules would work properly on the cloud and summarized the steps we needed to take to use the cloud with peace of mind," says Kawabata.

Solution | Mitigating fluctuating needs for HPC with the cloud

Kioxia connects a portion of its large-scale HPC environment to AWS via AWS Direct Connect to offload processing to AWS. Once a project plan is complete, jobs are sent to an on-premises job scheduler and allocated to on-premises HPC or AWS, according to size and need. Resources on AWS are set to launch automatically when jobs run, turning off when jobs are completed to minimize costs.
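The article doesn't include Kioxia's scheduler integration, but the launch-on-demand behavior it describes can be approximated with a few EC2 API calls: start instances when a job is dispatched to AWS, then terminate them when the job finishes. A hedged sketch; the AMI, instance type, and job hook are hypothetical:

import boto3

ec2 = boto3.client("ec2")

def run_burst_job(run_job):
    """Launch compute for one offloaded HPC job, then tear it down."""
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical HPC-ready AMI
        InstanceType="c5.24xlarge",        # hypothetical sizing
        MinCount=1,
        MaxCount=1,
    )
    instance_ids = [i["InstanceId"] for i in resp["Instances"]]
    ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)
    try:
        run_job(instance_ids)              # dispatch work to the instance
    finally:
        # Terminating on completion keeps costs tied to actual usage.
        ec2.terminate_instances(InstanceIds=instance_ids)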
Its own IT workers are also enjoying new value not possible in on-premises environments, as using AWS makes it easy for them to experiment, letting them proactively design architecture and test environments.

Outcome | Rebalancing 1% of design jobs to unlock 7% cost savings

According to Takahashi, adopting AWS has allowed the company to respond to unexpected problems and factory requests calmly. In the past, a sudden request from a factory manager would take time and inter-departmental coordination to resolve. But with AWS resources, Kioxia can now make decisions and establish countermeasures on the spot. "When we scrutinized HPC jobs, we saw that just one percent of photomask design jobs determined overall specs," explains Takahashi. "We can optimize workload and cost by assigning this one percent to AWS. Rebalancing this portion has reduced costs by seven percent."

About Kioxia Corporation

Kioxia spun off from Toshiba in 2017, taking charge of Toshiba's memory business. The semiconductor manufacturer chiefly produces NAND flash memory, pursuing the potential of memory, creating new value, and changing the world with all-new experiences as part of the Kioxia group. Kioxia creates flash memory and SSD products to "uplift the world with memory." The company's plant in Yokkaichi city, Japan, is one of the largest and most productive in the industry. Famous as a smart factory that leverages AI and other advanced technology, the Yokkaichi plant has proudly delivered unrivaled productivity and efficiency for 30 years. "The semiconductor memory industry is fiercely competitive, so we challenge ourselves and maintain the momentum of a young venture company," says Kawabata. "The semiconductor market has experienced rapid technological innovations and massive waves of change," Kawabata continues. "We created the world's first NAND flash memory in 1987 under Toshiba, and in 2007, we were the first to announce 3D multilayer technology. Wherever we see a benefit, we strive to create through a bottom-up approach using systems that leverage advanced IT."

IT is as essential for designing and manufacturing semiconductor memory as it is in other manufacturing industries. Computing power is especially essential in engineering, such as memory design and simulations. Due to the intricate design of semiconductor memory and the complex manufacturing process, Kioxia uses IT to save on labor and improve yields at every stage of the process. Basic engineering requires massive HPC computing power to correctly simulate running circuit designs or to replicate the manufacturing process to predict problems. HPC power is especially important for designing manufacturing components called photomasks. "We planned well, prepared resources, and distributed them appropriately, but external factors still caused a few events per year where we had to pause projects or find another solution," says Takahashi. "We use HPC to replace human capabilities, reducing their workload and optimizing costs. However, when resources are in short supply, our workers must respond using their own knowledge, which increases labor costs."

For high-performance storage, Kioxia uses Amazon FSx for Lustre, a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads. With the company's existing applications requiring high disk I/O speeds, Amazon FSx for Lustre's blazing throughput was an obvious choice.
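The article doesn't give Kioxia's storage configuration, but provisioning an Amazon FSx for Lustre file system for throughput-hungry jobs is a single API call. A hedged sketch; the capacity, subnet, and deployment type are illustrative only:

import boto3

fsx = boto3.client("fsx")

# A scratch deployment suits short-lived, high-throughput HPC runs;
# all values below are illustrative.
fs = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                   # GiB; minimum for SCRATCH_2
    SubnetIds=["subnet-0123456789abcdef0"], # hypothetical subnet
    LustreConfiguration={"DeploymentType": "SCRATCH_2"},
)
print(fs["FileSystem"]["FileSystemId"])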
Kioxia wants to learn more about AWS to improve its knowledge and skills for high-level use. "Once you have addressed the security considerations, the cloud is an ideal environment," says Toshiaki Kawabata, Executive Officer and Chief Information and Security Officer, Kioxia Holdings Corporation. "AWS provided accurate, fast, and friendly support for our intricate questions and requests. Cloud is in high demand especially for running HPC, and it will fully unlock the power of HPC once security considerations are addressed. We can get started quickly on the cloud, it expands your options, and it's also useful for business continuity planning." To learn more, visit https://aws.amazon.com/hpc/." Kirana Megatara Reduces Procurement Costs by 10 Percent for Raw Rubber with Speedy Reporting on AWS _ Case Study _ AWS.txt,"Kirana Megatara Reduces Procurement Costs by 10 Percent for Raw Rubber with Speedy Reporting on AWS 2023

Kirana Megatara uses Amazon QuickSight to extract insights from data in SAP on AWS, providing buyers with daily production targets to improve the alignment of supply and demand and optimize expenditure. To do this, Kirana Megatara deployed Amazon QuickSight, which gives an organized view of its business-critical data in SAP on AWS, running on Amazon Elastic Compute Cloud (Amazon EC2). As a result, the organization is optimizing procurement of raw rubber and building stronger relationships with its suppliers.

About Kirana Megatara

Kirana Megatara is a world-class producer of rubber and a processor of crumb rubber, made from worn-out tires. It has 15 subsidiaries, including one of Indonesia's oldest rubber processing companies, PT Djambi Waras, which opened in 1964. Kirana Megatara is also a member of the Global Platform for Sustainable Natural Rubber, which promotes sustainable practices.

Solution | Providing Clear Insight Faster with Amazon QuickSight

Kirana Megatara chose Amazon QuickSight as a business intelligence tool to deliver a range of reports from SAP on AWS in hours and to develop custom applications supporting SAP. Data from these applications are also extracted into Amazon QuickSight for further analysis. Narendra Adinugraha, head of analytics at Kirana Megatara, recalls, "Amazon QuickSight was more cost effective than the other solution we were considering. We could start small and expand usage as the demand for dashboards increased." Adinugraha adds, "With Amazon QuickSight, we don't lose time with any programming. We just focus on the formulas and the display, and then it's drag and drop. That's never happened before."

With Amazon QuickSight, Kirana Megatara can present the latest data in SAP in close to real time using interactive dashboards. As a result, its Sourcing Department can view the latest production targets per plant and know how much raw rubber to buy at the start of each day. Moreover, the department can identify the suppliers close to each plant that are consistently producing the amount of raw rubber it needs at the right price. "Maintaining a good relationship with suppliers is as important as getting the best prices. With Amazon QuickSight, we have a constantly refreshed picture of the suppliers we should be working with to maximize production efficiency," says Hendrik Iriawan Saputra, General Manager of IT, Kirana Megatara.
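Near-real-time dashboards of this kind usually depend on refreshing the underlying dataset; for SPICE datasets, the QuickSight API exposes that as an ingestion. A hedged sketch of triggering such a refresh; the account and dataset identifiers are hypothetical:

import uuid
import boto3

quicksight = boto3.client("quicksight")

# Kick off a SPICE refresh so dashboards reflect the latest SAP data.
ingestion = quicksight.create_ingestion(
    AwsAccountId="123456789012",          # hypothetical account ID
    DataSetId="sap-production-targets",   # hypothetical dataset ID
    IngestionId=str(uuid.uuid4()),        # unique ID for this refresh
)
print(ingestion["IngestionStatus"])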
Opportunity | Seeking Faster Insights for Improved Supplier Management and Sourcing
Kirana Megatara buys more than one thousand metric tons of raw rubber from Indonesian suppliers every day. It ships the material to its 16 processing plants across the country to be processed into Standard Indonesian Rubber (SIR) for companies like Bridgestone, Goodyear, and Pirelli. In 2021, the plants produced 508,000 metric tons of SIR, worth a total of $857 million.

For consistent data on each plant’s raw material usage and production figures, Kirana Megatara deployed SAP on premises in 2012. But after reliability and scalability issues, it migrated the system to Amazon Web Services (AWS) in 2021. Although Kirana Megatara gained better performance with SAP on AWS, the company wanted to improve report extraction. Its analytics team had to present data in Microsoft Excel spreadsheets, which was complex and time consuming. Narendra Adinugraha, head of analytics at Kirana Megatara, says, “We needed more than a day to import figures, process information, and create reports.”

As a result, the analytics team couldn’t deliver daily reports on the changing production targets for each processing plant to its Sourcing Department. Without this data, the department lacked the insight to easily determine exactly how much raw rubber to buy in the market, running the risk of plants being under- or oversupplied. The analytics team also lacked the capacity to provide the more than 100 new annual reports requested by the business, which is constantly searching for new insights to improve processes.

Solution | Providing Clear Insight Faster with Amazon QuickSight
Kirana Megatara chose Amazon QuickSight, a fast, cloud-powered business intelligence service, to deliver a range of reports from SAP on AWS in hours and to develop custom applications supporting SAP. Data from these applications is also extracted into Amazon QuickSight for further analysis. Adinugraha recalls, “Amazon QuickSight was more cost effective than the other solution we were considering. We could start small and expand usage as the demand for dashboards increased.” Working with AWS Partner Technova, Kirana Megatara integrated Amazon QuickSight with SAP modules running on Amazon Elastic Compute Cloud (Amazon EC2). “We value our relationship with Technova; its AWS engineers are always available on short notice,” says Adinugraha. He adds, “With Amazon QuickSight, we don’t lose time with any programming. We just focus on the formulas and the display, and then it’s drag and drop. That’s never happened before.”

Adinugraha can now easily meet the demand from the business for new reports. “We are producing between 8 to 10 reports a month in Amazon QuickSight, so we’re well on top of the requests coming in,” he says. Just one of these reports could take weeks using Excel, but with the speed of Amazon QuickSight, the analytics team can deliver one report every 2.5 days on average, allowing departments better control over their operations.

With Amazon QuickSight, Kirana Megatara can present the latest data in SAP in close to real time using interactive dashboards. As a result, its Sourcing Department can view the latest production targets per plant and know how much raw rubber to buy at the start of each day. Moreover, the department can identify the suppliers close to each plant that are consistently producing the amount of raw rubber it needs at the right price. A minimal sketch of how such a dashboard refresh can be triggered follows.
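As a rough illustration of how “close to real time” dashboards can be kept fresh, the snippet below uses boto3 to trigger a SPICE ingestion for a QuickSight dataset. The account ID and dataset ID are hypothetical placeholders, and the case study does not state that Kirana Megatara refreshes its data this way; it is one common pattern, not the company's documented approach.

import uuid
import boto3

# Kick off a SPICE refresh so dashboards pick up the latest SAP figures.
qs = boto3.client("quicksight", region_name="ap-southeast-1")

response = qs.create_ingestion(
    AwsAccountId="111122223333",          # placeholder account ID
    DataSetId="sap-production-targets",   # hypothetical dataset ID
    IngestionId=str(uuid.uuid4()),        # each refresh needs a unique ID
)
print(response["IngestionStatus"])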
In addition, Kirana Megatara could securely provide dashboard views of its business-critical data in SAP to employees. Amazon QuickSight includes end-to-end data encryption with row- and column-level security controls. Hendrik Iriawan Saputra, general manager of IT at Kirana Megatara, says, “We could ensure that only authorized people had access to the reports.”

Outcome | Reducing Procurement Costs by 10 Percent for Raw Rubber
Using Amazon QuickSight, Kirana Megatara can ensure procurement is more precisely aligned with business needs, lowering the risk of oversupply. As a result, the Sourcing Department, which buys hundreds of thousands of metric tons of raw rubber each year, estimates it has reduced procurement costs by 10 percent using the reports. Plus, the department has the data to develop an effective loyalty program with suppliers across Indonesia. Saputra says, “We can build special relationships with our best suppliers and develop incentives, such as fertilizer funding, equipment for rubber cultivation, and training, so they continue to supply us with the raw rubber we need to drive business growth.” He adds, “Maintaining a good relationship with suppliers is as important as getting the best prices. With Amazon QuickSight, we have a constantly refreshed picture of the suppliers we should be working with to maximize production efficiency.”

Looking ahead, Kirana Megatara is planning to use machine learning (ML) to extract more insight from supplier interactions and to predict changes in raw rubber prices and volumes with accuracy. “Our immediate step is to develop our competencies around ML and then see where it can add value to our analytics,” says Adinugraha.

Benefits
75% reduced time for report creation
8–10 complex reports delivered per month
10% savings in procurement costs
Faster insights, with data in near real time for supply chain management
Ensures critical data is encrypted end-to-end

To learn more, visit aws.amazon.com/quicksight.
" KTO Case Study.txt,"
KTO.com Reduces Costs, Improves Scaling for Latin America Betting Platform Using AWS
2023

About KTO.com
KTO.com provides an online sports betting and casino games platform for the Latin America market. The platform was created in 2018 by KTO Group, a software development company, and focuses on sports betting. KTO.com grew rapidly in 2022, with active customers increasing by over 1,000 percent year on year.

Opportunity: Migrating for Improved Performance and Personalization
Having built its platform on Amazon Web Services (AWS) from the beginning, KTO.com turned to the cloud provider to help it deal with growth. The company chose AWS because of its diverse range of services. When its customer growth took off, KTO.com needed to expand and optimize its infrastructure to maintain a good customer experience. It also needed to prepare for the soccer World Cup in late 2022, when it expected a massive influx of traffic from new customers and payment transactions, and wanted a cost-effective way to support demand spikes for betting on this and other major sporting events. “We have thousands of new registrations every day and the number of bets placed on the platform has increased accordingly,” says Jonathan Bonett, chief technical officer (CTO) at KTO.com.

Previously, scaling compute services was a manual and time-consuming process that required technical expertise. Bonett wanted to automate scaling based on a predefined schedule to simplify resource management as the company grew. In addition, he wanted to integrate a new customer relationship management (CRM) service to enable personalized website experiences and customized marketing campaigns. KTO.com worked with AWS Partner 56Bit to plan and implement the changes to its platform.

Solution: Pre-Scheduling Infrastructure Scaling Using AWS Lambda and Amazon EC2
Non-technical employees at KTO.com can now scale compute resources using a simple scheduler, which lets them set the day and time for coming events that will increase betting volumes. Previously, this was a manual process that required involvement from the IT team. To scale in this way, the company uses Terraform and AWS Lambda, a serverless, event-driven compute service that lets it run code for virtually any type of application, together with Amazon EC2 Auto Scaling, which allows it to add or remove compute capacity to meet changing demands. “Instead of taking up to 2 hours for scaling, the new scheduling system can be set in a few minutes—even by someone without technical knowledge,” says Bonett. A minimal sketch of a scheduled scaling action follows.
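The article names the building blocks (Terraform, AWS Lambda, Amazon EC2 Auto Scaling) but not the implementation, so here is only a sketch of the kind of scheduled actions such a Lambda function could register via boto3; the Auto Scaling group name, dates, and capacities are invented for illustration.

from datetime import datetime, timezone
import boto3

autoscaling = boto3.client("autoscaling")

# Raise capacity ahead of a big match...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="betting-platform-web",  # placeholder group name
    ScheduledActionName="world-cup-final-scale-up",
    StartTime=datetime(2022, 12, 18, 13, 0, tzinfo=timezone.utc),
    MinSize=10,
    MaxSize=60,
    DesiredCapacity=40,
)

# ...and scale back down once the post-match payout spike has passed.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="betting-platform-web",
    ScheduledActionName="world-cup-final-scale-down",
    StartTime=datetime(2022, 12, 18, 20, 0, tzinfo=timezone.utc),
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=4,
)

A scheduler UI only needs to collect an event name, a start time, and a capacity, which is why staff without technical knowledge can drive it.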
KTO.com has also improved the focus and success of its marketing campaigns. The company deployed a new CRM solution using Amazon Managed Streaming for Apache Kafka (Amazon MSK), which makes it easy to ingest and process streaming data in real time with fully managed Apache Kafka. This solution allows the company to personalize campaigns using behavioral triggers, where specific customer actions on the platform determine which promotions and offers they receive. “Previously, our campaigns were based on assumptions about customer behavior, but now we run our campaigns based on triggered events that can happen in near-real time,” says Bonett. “This means that promotions and special offers can be tailored to each customer, which increases both engagement and loyalty to the brand.” A sketch of the kind of consumer that reacts to such events follows.
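The case study doesn't show the CRM integration itself; the sketch below is a hypothetical consumer, written with the kafka-python library, that reads customer events from an MSK topic and reacts to one kind of behavioral trigger. The broker address, topic name, and event fields are all assumptions for illustration.

import json
from kafka import KafkaConsumer

# Read customer events from an Amazon MSK topic in near real time.
consumer = KafkaConsumer(
    "customer-events",                                           # hypothetical topic
    bootstrap_servers=["b-1.example.kafka.amazonaws.com:9092"],  # placeholder broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    if event.get("action") == "first_deposit":
        # Hand the customer off to a campaign tailored to this trigger.
        print(f"Queue welcome promotion for customer {event.get('customer_id')}")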
To further improve customer experience, KTO.com uses Amazon CloudFront, a content delivery network (CDN) built for high performance and security, to deliver content with low latency and high transfer speeds. This minimizes latency issues that customers can experience when placing bets or playing online games.

Because the platform always has the resources it needs, customers have a responsive experience and can seamlessly place bets based on the latest odds. Customers can also receive their payouts in seconds, a process that used to take much longer, because the platform can now scale to manage the spikes in traffic when customers check their winnings after an event is over. “Previously, our platform could take 30 minutes or even an hour to pay out to the winners and update all the accounts; now this process happens in seconds,” says Bonett. Security for KTO.com is also stronger now because it follows AWS best practices. “We’ve doubled our defenses with the new platform, because we now have redundancy safety nets in place,” says Bonett.

Outcome: Digital Transformation Prepares KTO.com for Future Expansion
Using AWS, KTO.com can easily schedule compute resources to scale up and down to meet traffic spikes, providing customers with a more responsive experience. The project has resulted in many performance improvements, including reduced latency and winning bets being settled in near-real time, a process that previously could take up to an hour. KTO.com is looking to expand further and offer services in locations such as Canada, Chile, and Peru. “Using AWS, we’ve been able to cope with exponential growth while maintaining a good customer experience and optimizing security,” says Bonett. “Now we’re in a position to continue our expansion throughout the Latin America region, and into new parts of the world.”
" LambdaTest Improves Software Test Insights and Cuts Dashboard Response Time by 33 Using Amazon Redshift _ Case Study _ AWS.txt,"
LambdaTest Improves Software Test Insights and Cuts Dashboard Response Time by 33% Using Amazon Redshift
2023

About LambdaTest
LambdaTest, based in San Francisco, California, is a cloud-based continuous quality testing platform that helps more than 2 million developers and testers across 130+ countries ship code faster. The platform, which runs on Amazon Web Services (AWS), provides both manual and automated testing of web and mobile apps across more than 3,000 environments, including browsers, real devices, and operating systems, and has hosted more than 200 million tests to date.

To give customers quicker, better insights into software test results, the company worked with AWS Data Lab to build a new analytical dashboard solution on Amazon Redshift. The new dashboard, which LambdaTest designed and implemented in just 4 weeks, reduces dashboard response times by 33 percent and gives customers faster insights into test orchestration and execution results.
Opportunity | Seeking a Better View of Software Test Results
For the past several years, LambdaTest’s enterprise clients have been seeking analytical dashboards where they can quickly view insights and reports on test orchestration, execution, and results. “Our customers didn’t have a snapshot view of what tests had been run or what had failed,” says SS Rahman, head of technical integration at LambdaTest. To address this, the company attempted to build a new analytics solution with MySQL as the data source. However, database queries often took up to 15 seconds to complete, and the solution couldn’t meet the company’s goal of providing response times under 10 seconds. The result was a poor customer experience in a solution that could not scale easily to support the millions of new records coming in every year.

Solution | Working with AWS Data Lab to Build a New Analytical Dashboard
The solution is based on Amazon Redshift, a cloud data warehouse that uses SQL to analyze both structured and semi-structured data. The AWS team helped LambdaTest create a proof of concept (POC) for a new customer-facing dashboard that queries data from Amazon Relational Database Service (Amazon RDS) and ingests it into Amazon Redshift. The test metadata includes pass, failure, and completion information for each test, and the dashboards feature a variety of trend graphs and charts to visualize the distribution of test results among browsers, operating systems, and apps.

The LambdaTest analytical platform on AWS can also scale seamlessly to support the ingestion of millions of data records annually. “Amazon Redshift is highly scalable, especially when we’re doing federated queries and ingesting data from Amazon RDS instances,” says Srivishnu Ayyagari, senior product manager at LambdaTest. “Even when more data comes onto our analytical platform, it continues to perform at a high level.” A minimal sketch of this kind of federated query setup follows.
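Neither the exact schema nor the queries are published, so the following is a hedged sketch of what a federated query setup over an RDS MySQL source can look like, issued here through the Amazon Redshift Data API; every identifier (cluster, database, role, secret, table) is a placeholder.

import boto3

client = boto3.client("redshift-data")

# One-time setup: expose the RDS MySQL database as an external schema.
client.execute_statement(
    ClusterIdentifier="analytics-cluster",  # placeholder cluster name
    Database="dev",
    DbUser="admin",
    Sql="""
        CREATE EXTERNAL SCHEMA IF NOT EXISTS tests_live
        FROM MYSQL
        DATABASE 'testdb'
        URI 'tests-db.example.us-east-1.rds.amazonaws.com'
        IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftFederatedRole'
        SECRET_ARN 'arn:aws:secretsmanager:us-east-1:111122223333:secret:rds-mysql';
    """,
)

# A dashboard-style query can then read live rows alongside warehoused history.
client.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="dev",
    DbUser="admin",
    Sql="SELECT status, COUNT(*) FROM tests_live.test_runs GROUP BY status;",
)

Because the external schema reads straight from the operational database, dashboards can surface fresh test results without waiting for a batch load.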
To design the new solution, LambdaTest leveraged the AWS Data Lab program, which offers accelerated, joint engineering engagements between customers and AWS technical resources to create tangible deliverables that accelerate data and analytics modernization initiatives. Specifically, LambdaTest participated in the AWS Build Lab, an intensive multi-day engagement in which AWS Data Lab Solutions Architects and other AWS experts provide architectural guidance, share best practices, and remove technical roadblocks. “AWS has always been very available and helpful. When we discussed our latency and performance issues during the AWS Build Lab, AWS proposed the perfect solution,” Rahman says. Working with the AWS Data Lab team, LambdaTest completed the dashboard POC in four weeks. “If we managed this project on our own instead of relying heavily on the expertise of AWS, we would have taken at least eight weeks,” says Rahman.

Outcome | Reducing Response Time and Improving Test Insights
By implementing its new analytics platform on Amazon Redshift, LambdaTest has reduced the average response time by 33 percent, updating analytical dashboards in less than 10 seconds. “Using the federated query capability in Amazon Redshift, our customers have less than 50 millisecond response times for their test analysis dashboards and an average data refresh cycle of less than five minutes,” says Rahman. “This means they can get faster insights into test orchestration and execution, and they can easily see if tests fail. Overall, Amazon Redshift helps us give our customers better, faster insights into software test performance.”

LambdaTest is currently implementing Amazon OpenSearch Service to manage data log analytics in the cloud. “AWS releases new services frequently, and we always evaluate those services for our business,” Rahman says. “We’re a growing company focused on innovating in the testing space, and we will continue to work together with AWS as we expand.”

Benefits
33% reduction in dashboard response time
50 millisecond response times for faster insights
4 weeks from POC to production
1 million customers served

To learn more, visit aws.amazon.com/redshift.
" Largest metastatic cancer dataset now available at no cost to researchers worldwide _ AWS Public Sector Blog.txt,"
AWS Public Sector Blog
Largest metastatic cancer dataset now available at no cost to researchers worldwide
by Eric Oermann, Katie Link, Anthony Costa, and Erin Chu | 08 JUN 2023

Metastasis derives from Greek words for removal or migration. Metastatic cancer—where tumor cells spread to sites far from the tissue of origin—accounts for over 90% of fatalities from cancer, the leading cause of death worldwide. Metastatic cancer presents a core challenge for modern oncology due to the high degree of variation that it can display on a genetic, molecular, or gross anatomic level compared to primary cancer, as well as the high degree of variation across patients in their disease presentation, progression, and outcome. Treating metastatic cancer can involve surgery, radiation therapy, chemotherapy, immunotherapy, and other treatments, and treatment plans require recurring imaging studies and clinical visits so patients can track their cancer and its response to therapy. So how do we best record, model, and study this incredibly heterogeneous and lethal disease in order to develop treatment plans that save lives?

The NYUMets team, led by Dr. Eric Oermann at NYU Langone Medical Center, is collaborating with Amazon Web Services (AWS) Open Data, NVIDIA, and the Medical Open Network for Artificial Intelligence (MONAI) to develop an open science approach that supports researchers in helping as many patients as possible.

NYUMets: Brain dataset now available for metastatic cancer research
With support from the AWS Open Data Sponsorship Program, the NYUMets: Brain dataset is now openly available at no cost to researchers around the world.
NYUMets: Brain draws from the Center for Advanced Radiosurgery and constitutes a unique, real-world window into the complexities of metastatic cancer. It consists of data from 1,005 patients, 8,003 multimodal brain MRI studies, tabular clinical data from routine follow-up, and a complete record of prescribed medications—making it one of the largest cranial imaging datasets in existence, and the largest dataset of metastatic cancer. In addition, more than 2,300 images have been carefully annotated by physicians with segmentations of metastatic tumors, making NYUMets: Brain a valuable source of segmented medical imaging.

Extending the MONAI framework to longitudinal data for cancer dynamics research
In collaboration with NVIDIA, the NYUMets team is building tools to detect, automatically measure, and classify cancer tumors. The team used MONAI, co-founded by NVIDIA and King’s College London, to build an artificial intelligence (AI) model for segmentation tasks, as well as a longitudinal tracking tool. Now, NYUMets: Brain can be used as a starting dataset with which AI can be applied to recognize metastatic lesions in imaging studies.

Together with NVIDIA, the NYUMets team is extending the MONAI framework for working with metastatic cancer data. This data is most frequently longitudinal in nature, meaning many imaging studies are performed on the same patient to track their disease. This facilitates the study of metastatic cancer and cancer dynamics over time, more closely capturing how physicians study, and patients experience, cancer in the real world. In addition, the NYUMets team built clinical measurements to augment the MONAI framework’s existing metrics. These cover practical medical use cases of treatment response and progression; with clinical metrics, the team intends to bridge the gap between AI technologies used in research and the application of these technologies in the clinic. One such clinical measurement tracks the change in tumor volume between imaging studies taken at different points in time. This is a crucial measurement for a patient undergoing cancer treatment—and could be applied to any disease where lesions change over time.

Get started with no-cost machine learning services to power metastatic cancer research
A preprint for the NYUMets flagship publication is available for review. The NYUMets: Brain dataset is available to access at no cost with support from the AWS Open Data Sponsorship Program. It is also listed on the Registry of Open Data on AWS and in the AWS Data Exchange catalog, and users with AWS accounts can apply for access to the full dataset. Once approved, you can access the dataset in the Amazon Simple Storage Service (Amazon S3) bucket using an Amazon S3 Access Point. Documentation for bucket structure and naming conventions can be explored at nyumets.org, including the NYUMets MONAI Extension and the wider MONAI framework. A minimal access sketch follows.
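Once access is granted, S3 APIs accept an access point ARN wherever a bucket name is expected; the short boto3 sketch below lists a few objects that way. The ARN, account number, and prefix are placeholders, not the real NYUMets values published at nyumets.org.

import boto3

s3 = boto3.client("s3")

# List a handful of objects through a (hypothetical) S3 Access Point ARN.
resp = s3.list_objects_v2(
    Bucket="arn:aws:s3:us-east-1:111122223333:accesspoint/nyumets-brain",
    Prefix="imaging/",
    MaxKeys=10,
)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])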
Eric Oermann
Eric Karl Oermann is an assistant professor of neurosurgery, radiology, and data science at NYU. He studied mathematics at Georgetown and philosophy with the President’s Council on Bioethics, and abandoned graduate studies in group theory to study artificial intelligence (AI) in medicine and neurological surgery while completing a postdoctoral fellowship at Verily Life Sciences and serving as an advisor at Google-X. He has published over one hundred manuscripts spanning machine learning, neurosurgery, and philosophy in journals ranging from The American Journal of Bioethics to Nature, and is dedicated to studying human and artificial intelligence to improve human health.

Katie Link
Katie Link leads healthcare and life sciences applications of artificial intelligence as a machine learning engineer at Hugging Face. She is also a medical student at the Icahn School of Medicine at Mount Sinai in New York City. Prior to Hugging Face, she worked on artificial intelligence (AI) research applied to biomedicine at NYU Langone Health, Google X, and the Allen Institute for Brain Science, and studied neuroscience and computer science at Johns Hopkins University.

Anthony Costa
Anthony has been leading initiatives in biomedical technologies, data science, and artificial intelligence (AI) for more than a decade. On the faculty of the Mount Sinai Health System, he served as founding director of Sinai BioDesign and chief operating officer for AISINAI, building and leading successful teams focused on improving outcomes in medicine through a needs-based approach to technology development and machine intelligence. At NVIDIA, he serves as the global head of life sciences alliances, with a particular focus on large language models and generative AI. In this role, he heads developer relations and strategic partnerships, in addition to external research collaborations, between NVIDIA and healthcare and life sciences partners.

Erin Chu
Erin Chu is the life sciences lead on the Amazon Web Services (AWS) open data team. Trained to bridge the gap between the clinic and the lab, Erin is a veterinarian and a molecular geneticist, and spent the last four years in the companion animal genomics space. She is dedicated to helping speed time to science through interdisciplinary collaboration, communication, and learning.
" Learn how MediSys in healthcare transformed its IT operations using AWS Professional Services _ MediSys Case Study _ AWS.txt,"
Learn how MediSys in healthcare transformed its IT operations using AWS Professional Services
2023

MediSys Replicates Patient Records and Medical Images to AWS
MediSys’s alternate production environment runs on AWS services to securely store data from across the organization. It continuously replicates data to the cloud, providing the organization with an up-to-date copy of vital information.
MediSys Health Network (MediSys) is transforming its IT operations and innovation capabilities by migrating its alternate production environment to Amazon Web Services (AWS), the first step in its cloud journey. The New York–based healthcare network went live on AWS in October 2022, improving data resiliency while maintaining high security and compliance. With a cloud-native alternate production environment in place, MediSys can focus less on data center management and more on improving the quality of care and outcomes for the communities it serves.

About MediSys Health Network
MediSys is a New York not-for-profit corporation and a supporting organization to Jamaica Hospital Medical Center (JHMC) and Flushing Hospital Medical Center (FHMC). MediSys comprises a multitude of entities and resources functioning within a complex integrated delivery system.

Benefits
Maintains security and compliance
Reduces operational costs
Improves data resiliency
Facilitates high-quality patient care and business continuity

Opportunity | Transforming EHR Operations
Since 2010, MediSys has used the Epic electronic health record (EHR) to deliver a high-quality provider and patient experience. “Epic is used everywhere in our organization,” says Sami Boshut, chief information officer of MediSys. “It’s very important that we support the continuous operation of our EHR production and alternate production environment.” To support day-to-day operations, MediSys uses servers housed in an on-premises data center. The healthcare network migrated its EHR alternate production environment, used for disaster recovery, to AWS to improve availability, reduce operational costs, and maintain compliance with improved security.
Solution | Building Resiliency in the Cloud
MediSys engaged AWS Professional Services—a global team of experts that helps organizations realize their desired business outcomes when using AWS—to support the migration project. Working with a team from Epic, the AWS and MediSys teams strategized ways to optimally configure the EHR environment. First, MediSys replicated millions of patient records and other data to its alternate production environment running on AWS. This migration included its EHR and GE Healthcare Picture Archiving and Communication System images for medical archiving. As part of its disaster recovery systems validation test, MediSys fully exercised its new disaster recovery environment by operating EHR production for 3 weeks on AWS. The test proved extremely successful, providing a day-to-day operating environment that outperformed the on-premises data center, based on exception percentage and response time. Through this collaboration, the three teams completed the migration while meeting all applicable security and performance standards, and MediSys achieved its return-on-investment goals by migrating to the cloud and reducing traditional data center management costs.

AWS has more than 146 HIPAA-eligible services and holds certifications for global compliance standards, like HITRUST CSF. With the support of AWS Professional Services, MediSys configured AWS services to meet its applicable compliance standards and safeguard protected health information. For example, MediSys deployed all services using the AWS Landing Zone Accelerator for Healthcare to support compliance with healthcare industry standards and policies.

Outcome | Facilitating High-Quality Patient Care
If the organization experiences an issue with its production system, it can quickly step into its highly available alternate production environment. MediSys, which oversees 750 hospital beds, can continue providing patient care without wasting valuable time. “Every second counts in patient care,” says Boshut. “On AWS, our system is available when we need it. It is simple for us to switch to a cloud environment and make sure that we can access the EHR.”

With this innovative approach to alternate production, MediSys is supporting organizational continuity for high-quality patient care. This migration has empowered the network to transform its infrastructure on AWS and adopt cloud technologies to support its services. With access to cloud-native tools and security and compliance controls on AWS, MediSys will continue to transform its healthcare IT operating environment while driving new experiences for healthcare providers and patients.
" LegalZoom AWS Local Zones Case Study.txt,"
LegalZoom Accelerates Innovation with Hybrid Cloud Migrations Using AWS Local Zones
2022

About LegalZoom
An online legal technology company, LegalZoom helps its customers create legal documents without necessarily having to hire a lawyer. Its services cover business formation, estate planning, and taxes. Founded in 2001, LegalZoom offers legal services to US and global customers seeking help with business formation, intellectual property protection, and estate planning, among others. After helping over two million entrepreneurs start their businesses, LegalZoom launched LZ Tax, a LegalZoom company, to help people file and save on their taxes in 2020. Learn how LegalZoom migrated to the cloud quickly without compromising agility or performance using AWS Local Zones.
Opportunity | Using AWS Local Zones Helped LegalZoom Achieve Single-Digit Millisecond Latency
LegalZoom, an industry leader in online small business formations and a leading online platform for legal, compliance, and tax solutions, wanted to accelerate its pace of innovation by migrating its location-sensitive applications to the cloud. An on-premises data center solution was enough for the first 19 years of LegalZoom’s growth, but in 2020 the company decided to migrate to the cloud as fast as possible. The challenge was that the company’s entirely on-premises data center contained a mix of modern and legacy components. “The legacy components were blocking us from migrating to the cloud,” says Jonathan Hutchins, director of engineering for site reliability engineering at LegalZoom. The engineering team had to find a way to migrate to the cloud as fast as possible, without compromising agility or performance. Hutchins and his team turned to Amazon Web Services (AWS) and discovered that a combination of solutions could help them meet this triple mandate. “The entire process of migrating from our data center to AWS has been seamless and painless, resulting in happier customers and happier engineers,” says Hutchins.

The team realized that it might have to re-engineer many legacy components to address potential latency issues before migrating to the cloud. This re-engineering would have been an enormous task. In its efforts to avoid time-consuming re-engineering, LegalZoom discovered AWS Local Zones, a type of infrastructure deployment that places compute, storage, database, and other select AWS services close to large population and industry centers. By using this solution, LegalZoom migrated incrementally and with ease, all without compromising on performance. In fact, the AWS Local Zone in Los Angeles, California, which is located very close to LegalZoom’s data center, offered lower latency than the company’s on-premises solution. A minimal sketch of enabling a Local Zone follows.
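As a rough sketch of the mechanics (not LegalZoom's actual configuration), the boto3 calls below opt an account in to the Los Angeles Local Zone group and create a subnet pinned to that zone; the VPC ID and CIDR block are invented placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Opt in to the Los Angeles Local Zones group (a one-time account setting).
ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1",
    OptInStatus="opted-in",
)

# Create a subnet in the Local Zone; instances launched here run
# physically close to Los Angeles, minimizing round-trip latency.
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC
    CidrBlock="10.0.64.0/24",
    AvailabilityZone="us-west-2-lax-1a",
)
print(subnet["Subnet"]["SubnetId"])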
Solution | Accelerating the Pace of Innovation Using AWS Local Zones
The LegalZoom engineering team moved carefully to migrate without impacting the customer experience. First, it containerized APIs and applications to avoid introducing latency issues when migrating components to the cloud. Then, it took advantage of Amazon Elastic Compute Cloud (Amazon EC2), a service that offers secure and resizable compute capacity for virtually any workload, for anything that wouldn’t run as a container. This incremental approach helped the team find the right solution for each challenge that it faced. The migration to AWS Local Zones has helped the LegalZoom engineering team shift its focus from refactoring to innovation. The company migrated complex applications to AWS Local Zones, starting with smaller components, and kept other components in its data center during migration. The ability to migrate without moving everything at once meant that LegalZoom could continue providing services to customers without interruption. “Using AWS Local Zones truly accelerated the migration of a very complex application to AWS by helping us break it down into smaller components,” Hutchins says.

LegalZoom’s use of AWS Direct Connect, a cloud service that delivers the shortest path to AWS resources, made it simple to migrate data when the team was ready. The company used Direct Connect for the migration setup by connecting its data center to AWS to efficiently migrate the pieces of its complex applications. “Using AWS Direct Connect was absolutely crucial for us to be able to migrate to AWS Local Zones,” says Hutchins, “and setting it up on the AWS side was very simple.”

By migrating to the cloud, LegalZoom reduced latency on network calls to under 5 milliseconds. In doing so, the company has not only avoided introducing customer experience issues but also enhanced the customer experience overall. “The infrastructure that our components are running on since the migration to AWS is so much faster than what we had in our data center, so we experience less downtime and fewer issues with latency,” says Hutchins. “Our system is faster using AWS Local Zones.”
Outcome | Completing a Complex Migration Using AWS Local Zones
LegalZoom kicked off the transition to AWS Local Zones in late 2020 and expects to finish the migration by the end of 2022. The rest of the project will see LegalZoom engineers develop new tools and make use of additional AWS offerings. “We’re often looking at building something ourselves to solve a customer problem,” says Hutchins. “Then we’ll get an announcement telling us that—oh, wait!—there’s a new offering from AWS being launched that does exactly what we need.”

While the company’s cloud migration is still in progress, LegalZoom customers are already enjoying a streamlined experience. Using the AWS Local Zone in Los Angeles, LegalZoom is enjoying levels of agility and performance higher than it has ever experienced. As Hutchins says, “Using AWS Local Zones, we have not had to make any compromises.”

Since LegalZoom’s engineers were freed from manually intervening in the data center to resolve API issues, the improvement in team morale has been palpable. “By choosing AWS, we’ve been able to attract and retain stronger engineering talent,” says Hutchins. “Engineers are much more excited to work now that it’s so easy to spin up a new service.” Further, migrating to AWS has unlocked LegalZoom’s ability to use a wide variety of AWS tools. The engineering team has found Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service to run and scale Kubernetes applications, especially helpful. “Since we migrated to a microservices architecture, any issues within our APIs and the components that we’re running are self-healing,” Hutchins says. “It’s seamless. The components will encounter an issue, but the solution will alert us that something happened and spin up a new component. By migrating to AWS, we’ve been able to focus our engineering talent on building for our customers.”

Benefits
Implemented complex migration of legacy applications with ease and zero downtime
Cut latency on network calls to 5 milliseconds, accelerating the cloud migration process
Accelerated migration to the cloud
Increased reliability with migration and modernization of architecture
" Lendingkart _ Amazon Web Services.txt,"
Lendingkart Builds Digital Underwriting Platform with AWS SaaS Factory to Close MSME Credit Gap in India
2022

About Lendingkart
Lendingkart is a fintech providing micro, small, and medium enterprises (MSMEs) in India with unsecured business loans through its machine learning–driven underwriting algorithm. Since launching in 2015, the company has disbursed nearly $1 billion in loans and serves MSMEs in over 4,000 cities and towns.

Micro, small, and medium enterprises (MSMEs) in India, defined as businesses with investment limits of $128,000–$6.4 million, form the backbone of the Indian economy. However, these businesses often struggle to obtain financing to sustain or expand their operations due to a lack of banking history. Recent estimates peg the credit gap MSMEs face at around $240 billion. Lendingkart is on a mission to harness technology to close that gap. Abhishek Singh, chief business officer at Lendingkart, shares, “Our dream is to enable all Indian MSMEs to have the capital they need to fulfill their potential.” Utilizing machine learning–driven underwriting, the company provides offers for unsecured business loans to MSMEs in just 72 hours.

Building Reliable, Multi-tenant Architecture on the Cloud
Lendingkart built its microservices architecture on Amazon Web Services (AWS). It currently uses Amazon Elastic Compute Cloud (Amazon EC2) for secure, resizable compute capacity, Amazon Elastic Kubernetes Service (Amazon EKS) to manage containers, and Amazon Relational Database Service (Amazon RDS) with Multi-AZ for fault-tolerant scaling and database administration. A minimal sketch of such a Multi-AZ database tier follows.
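The case study names the services but not the configuration, so this is only an illustrative boto3 sketch of creating a Multi-AZ database instance; the identifier, engine, and sizes are assumptions, and a production setup would add networking and parameter details.

import boto3

rds = boto3.client("rds", region_name="ap-south-1")

# A single flag asks RDS to maintain a synchronous standby
# in a second Availability Zone for automatic failover.
rds.create_db_instance(
    DBInstanceIdentifier="lending-saas-db",  # placeholder name
    Engine="postgres",                       # engine choice is an assumption
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,  # have RDS keep the credential in Secrets Manager
    MultiAZ=True,
)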
Diversifying Business with Scalable SaaS Product
Lendingkart has been growing 100 percent year on year and recognized that demand for its services continued to rise. In 2020, the startup hatched an idea to share its digital underwriting expertise with the larger lending market. This would achieve a dual purpose, boosting domestic economic prosperity while monetizing the company’s credit scoring models. Harshvardhan Lunia, founder and chief executive officer, insisted that the data and analytics platforms developed in-house should ideally be made available to the entire market, including banks and non-banking financial companies (NBFCs).

Lendingkart consulted with its AWS account team on how to develop a software as a service (SaaS) offering for digital underwriting. “We asked AWS to help us understand how to diversify our original Lendingkart finance business into an independent, scalable SaaS product,” says Singh. The company embarked on the AWS SaaS Factory Program to receive guidance on creating a secure platform that other lenders—competitors to Lendingkart’s MSME lending business—would trust. “We needed to create a neutral third-party environment where financial services companies trusted that Lendingkart Finance wouldn’t be able to access their customer data,” Singh explains.

Adopting a Structured Approach to Product Development
With the AWS SaaS Factory, Lendingkart adopted a structured approach to product development, considering key points such as pricing models and product journeys. It first clearly defined the problem statement: a lack of efficient digital credit scoring mechanisms for MSMEs among banks and NBFCs. Following that, the development team began building with potential customers in mind, anticipating their needs. In addition to setting up an isolated SaaS environment on AWS, Lendingkart worked with the AWS SaaS Factory team to develop its go-to-market strategy, and benefited from learning about other financial companies on AWS that have built similar SaaS products. “AWS played a large part in shaping our thought process and making sure we had the right direction early on for this project,” Singh says. “The AWS SaaS Factory accelerated the speed at which we were able to execute this project. The structures for building a SaaS were already in place for us to adopt and modify, which helped us to have more efficient, enriching conversations with our prospects,” he relates.

Reliability, uptime, transaction speed, and automation were also key elements of Lendingkart’s SaaS vision. “We wanted all these elements taken care of by an experienced provider so we could focus on developing the offering itself. The journey with AWS SaaS Factory has been fantastic from both technical and business development perspectives,” says Singh. “AWS has served as a trusted advisor collaborating with our teams from the start, suggesting ways to optimize resources and minimize technology gaps,” he adds.

Lendingkart successfully launched its Lendingkart 2gthr SaaS platform in November 2020. Lendingkart 2gthr provides enhanced loan management capabilities for financial institutions, a specialized credit underwriting model, and the flexibility to configure specific policy rules to support all stages of loan processing.
Onboarding to Lendingkart 2gthr in 2 Weeks
With 20 banks and NBFCs—such as Aditya Birla Finance Limited, Canara Bank, and Punjab National Bank—already onboarded, Lendingkart disbursed over 2,000 crore rupees (US$307 million) in 2021. New customers can onboard quickly to Lendingkart 2gthr, without complex integrations or interfaces. “Within two weeks, a bank or NBFC can start using Lendingkart 2gthr to evaluate MSME candidates for loans. This short launch cycle empowers them to scale quickly with minimal resource investment, without delaying their internal initiatives. We’re incredibly excited to offer such a short time-to-value for our customers,” Singh says.

Leveraging AWS Global Infrastructure to Expand Internationally
Since its SaaS launch, Lendingkart has been working to refine its offering based on customer feedback. It is also considering listing Lendingkart 2gthr on AWS Marketplace. “By listing our product on AWS Marketplace, customers who are already on AWS can easily find us. We can add a lot more value in terms of security, ease of procurement, and available integrations,” Singh explains. The company also plans to market its products outside India in the near future by taking advantage of the global network of AWS Regions and Availability Zones. Singh concludes, “As we expand into other countries, we’ll be looking to AWS to help us continually improve our services. Additionally, the AWS Global Infrastructure gives us the confidence to go to market quickly for international launches.”

Benefits
Onboards new Lendingkart 2gthr customers in 2 weeks
Ensures reliability and uptime of SaaS platform
Accelerates time-to-market with expert guidance
Receives early guidance on shaping product development
Adopts structured, customer-first approach
Supports global business expansion

To learn more, visit aws.amazon.com/solutions/compute-networking.
" Lenme builds a secure and reliable lending platform with AWS _ Lenme Case Study _ AWS.txt,"
Lenme Builds a Secure and Reliable Lending Platform Using AWS
2023

About Lenme
Lenme, a lending platform founded in 2018 and headquartered in San Francisco, connects people looking to borrow money with financial institutions, lending businesses, and individual investors looking to invest in the small-amount loan market. Lenme’s mission is to enable individuals to lend and borrow with confidence, at a lower cost, and on secure platforms.
Lenme, a subscription-based service, has revolutionized the lending industry by leveraging Amazon Web Services (AWS) to automate a platform that solves the longstanding challenges of acquiring, verifying, and evaluating borrowers. With over 500,000 active users, Lenme connects individual borrowers with financial institutions, businesses, individual lenders, and data providers, transferring the resulting savings directly to users who have been traditionally underserved.

Opportunity | Reducing Cost and Increasing Lending Process Speed
With 138 million Americans struggling financially and in need of a short-term loan product, Lenme is committed to building an automated platform that scales as needed to serve this purpose. Customer acquisition, verification, and evaluation has always been a priority and a costly essential for lenders: it is a complex and time-consuming process that often involves high costs and risks. “The AWS suite of artificial intelligence and machine learning services has enabled us to address the longstanding challenges of acquiring, then accurately verifying and evaluating, customers in the lending industry,” says Mark Maurice, chief executive officer (CEO) of Lenme.

Solution | Full Process Automation with AI and Machine Learning
Lenme researched various cloud service providers and chose AWS because trust and reliability are key for mobile financial services; service capability and maturity, regional availability, and cost savings also played a crucial role in the decision, and AWS offered a comprehensive suite of services that fit Lenme’s needs. Building on Amazon Rekognition, Amazon SageMaker, and Amazon OpenSearch Service, Lenme created a fully automated, now-standard suite of services, such as identity verification and the ability to qualify borrowers accurately while minimizing lending risks and improving default rates.

Using Amazon SageMaker, a service to build, train, and deploy machine learning (ML) models with fully managed infrastructure, tools, and workflows, Lenme created an ML algorithm, deployed in the cloud, that minimizes up to 80 percent of the risk associated with manual verification processes in lending and improves the average default rate for lenders using its data services. The company also uses Amazon OpenSearch Service, a distributed, community-driven, open-source search and analytics suite used for real-time application monitoring, log analytics, and website search, to analyze borrowers’ banking data more accurately than ever before and to run queries on unstructured data seamlessly. Lenme has also removed barriers for lenders, who can now fund loans and deploy services with their own algorithms and requirements on the Lenme platform using Lenme APIs.

Lenme addressed the verification challenge by using the artificial intelligence (AI) capabilities in the Amazon Rekognition Identity Verification API to verify and qualify borrowers in just three clicks, with high accuracy and within seconds. Amazon Rekognition is a fully managed AI service that offers pre-trained and customizable computer vision (CV) capabilities to extract information and insights from images and videos. This technology is helping the company provide low-cost products while establishing itself as a trusted leader in the lending industry. A minimal face-comparison sketch follows.
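Identity verification flows of this kind are typically built on Rekognition's face APIs; as a hedged illustration of one step, the snippet below compares a selfie against an ID photo with CompareFaces. The bucket and object names are hypothetical, and Lenme's actual pipeline is not published.

import boto3

rekognition = boto3.client("rekognition")

response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "onboarding-uploads",
                              "Name": "user123/selfie.jpg"}},
    TargetImage={"S3Object": {"Bucket": "onboarding-uploads",
                              "Name": "user123/id_card.jpg"}},
    SimilarityThreshold=90,  # only return matches at or above 90% similarity
)

matches = response["FaceMatches"]
verified = bool(matches) and matches[0]["Similarity"] >= 90
print("identity verified:", verified)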
Outcome | Speed, Accuracy, and Safety for Lenders and Borrowers
Lenme’s platform, powered by AWS services and solutions, is transforming the lending industry. Lenme can now authenticate customers in three clicks and within seconds. The platform is fully automated and can be leveraged by Lenme’s customers through API technology built on AWS, and the AWS pay-as-you-go model helps Lenme scale as needed to meet market demands. Its commercial customers can reduce the risk associated with lending by up to 80 percent and the cost of acquiring new customers by up to 40 percent, all while increasing the conversion rate of new customers by 34 percent. These cost savings and the reduced risk come from automation with AI and ML on Lenme’s lending platform. “Our platform is now faster and more efficient, helping us verify and authenticate customers in three clicks and a few seconds. This helps us provide our lenders with more data and to reduce lending risks up to 80%,” says Maurice. He adds, “Our business outlook and opportunities are positive with how we can scale with Amazon Rekognition as needed. We look forward to continuing our relationship with AWS and leveraging their technology to further revolutionize the lending industry.” Lenme continues to drive toward its vision of a fully open-source platform where data providers, lenders, developers, and others can deploy funding and financial services, and building on AWS has helped Lenme increase its potential for growth and impact.

Benefits
Minimized 80% of risk associated with lending
40% reduction in the cost of customer acquisition
34% improvement in customer conversion rate
Increased speed to verify customers’ identities, from days to seconds
Improved the average default rate for lenders
" LetsGetChecked Case Study _ Amazon Connect _ AWS Lex.txt,"
LetsGetChecked Transforms Home Healthcare Using AWS

About LetsGetChecked
LetsGetChecked is a global healthcare solutions company that provides the tools to manage health from home through direct access to diagnostic testing, virtual care, and medication delivery for a wide range of health and wellness conditions. Its end-to-end model includes manufacturing, logistics, lab analysis, physician support, and prescription fulfilment. Founded in 2015, the company empowers people with accessible health information and care to live longer, happier lives. LetsGetChecked is available nationwide in the United States, the United Kingdom, and most EU countries, and is co-headquartered in Dublin and New York.

Benefits of AWS
Reduced agent calls by up to 50% using voice data and Amazon Connect
Automatically routes customers to the correct region through natural voice conversations
Helped the company meet regulatory requirements for different territories

Managing Patients’ Healthcare Journeys Using AWS
LetsGetChecked’s business plan called for expanding services beyond testing to playing a broader part in its customers’ healthcare, including managing interactions with health professionals and pharmacies.
This meant building systems to support the business logic to deliver this, while complying with general data protection rules and specific healthcare regulations about patient records. In the highly regulated telehealth market, patients must be served by people who are licensed in their particular region.

Using Amazon Connect to Meet Regulatory Requirements
For the company's own customers (and its customers' clients), this meant having a full view of all interactions and knowing which interaction types, delivered at what time, encouraged higher levels of engagement and enabled better patient outcomes. LetsGetChecked saw that the functionality it needed for a full view of its customers was possible with Amazon Connect, and suitable for European GDPR and US HIPAA regulatory compliance. After configuration, integration, and testing, the results could be presented in a form that was ready to be signed off by its compliance and information security teams.

Arranging the return of completed test kits was previously a manual process that required agents to arrange pickup of the testing kits, check tracking codes, and handle other logistics. By integrating with Amazon Connect data collected from client calls about these events—including times and addresses—LetsGetChecked has automated this process, reducing agent calls by up to 50 percent. Because this service generates the vast bulk of the company's telephone contacts, this means a reduction of 30 to 40 percent in agent costs, with future features planned to automate more tasks.

LetsGetChecked is using Amazon Connect data to drive its analytics. Call data is streamed into Amazon Redshift, which can accelerate time to insights with fast, easy, and secure cloud data warehousing at scale. The company has expanded its data analysis team to extract and analyze this and other data in a single-source-of-truth model, which makes everything available to different business units across the company.

Amazon Connect provides superior customer service at a lower cost with an easy-to-use omnichannel cloud contact center.

About LetsGetChecked
LetsGetChecked is a global healthcare solutions company that provides the tools to manage health from home through direct access to diagnostic testing, virtual care, and medication delivery for a wide range of health and wellness conditions. LetsGetChecked's end-to-end model includes manufacturing, logistics, lab analysis, physician support, and prescription fulfilment. Founded in 2015, the company empowers people with accessible health information and care to live longer, happier lives.

LetsGetChecked Transforms Home Healthcare Using AWS
Few businesses can look back over the past 2 years and see them as fulfilling, but Murphy views healthcare as more than just a business. "Working for LetsGetChecked is a wonderful experience," he says. "Seeing the difference we can make in people's lives, just by doing our jobs. Look at the number of people who get a diagnosis earlier than they would—it's tremendous. We're providing healthcare first and we're a business second. We do things properly, to the highest possible standard.
That's how we work now, that's how we're always going to work, and AWS helps us do that."

LetsGetChecked found that compliance was simplified through the integration of Amazon Connect with AWS best-practice data and account security models.

Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.

Although this transformation was part of the company's plan for growth, and was well underway when the COVID-19 pandemic hit, the sudden 3x spike in user traffic meant the company had to manage more customer interactions than ever before, given the sudden demand for COVID-19 testing. LetsGetChecked turned to Amazon Web Services (AWS) and chose Amazon Connect, an easy-to-use omnichannel cloud contact center, to manage its customer interactions and deliver a better service. "The COVID-19 pandemic did not pause our roadmap," says Colm Murphy, customer solutions technical manager at LetsGetChecked. "Quite the opposite. We knew more people would need at-home health support more than ever and used the opportunity to enable development."

The company had two challenges. First, it needed to scale its systems to respond to the immediate demand—in particular, customer call management—created by COVID-19 testing. Second, it had to manage its long-term transformation to a full healthcare management business.

The first project was improving customer call management. LetsGetChecked had already decided to replace its original call management system with Amazon Connect to scale capacity and increase interoperability with internal systems. Working with VoiceFoundry, an AWS Partner and Amazon Connect specialist, the company migrated from its existing system. After the migration was completed seamlessly, LetsGetChecked had the confidence to continue development. The result was a call center system that can scale with new functionality, delivering immediate benefits through automation and integration. However, as a virtual healthcare solutions business working across geographical areas, this increase in business efficiency would only be acceptable if it complied with multiple regulatory environments.

The company's second project was building a system for patient information management that goes beyond recording tests and results to capture a full history of interactions with health providers. This would help it achieve the business benefits of large-scale data ownership. The result of the second project was to establish the foundation for the company's expansion into more areas of customer healthcare, without incurring major overheads due to regulatory requirements.

LetsGetChecked is available nationwide in the United States, the United Kingdom, and most EU countries. It is co-headquartered in Dublin and New York.
The company's at-home diagnostic and care services experienced high demand during the COVID-19 pandemic. Moving its call centers to Amazon Connect not only provided scalable performance but also allowed integration with other core systems. Using Amazon Connect, LetsGetChecked built a foundation for its business transformation to a full-spectrum telehealth management business.

Amazon Lex is a fully managed artificial intelligence (AI) service with advanced natural language models to design, build, test, and deploy conversational interfaces in applications.

Amazon Connect is now a key part of LetsGetChecked's system. "Our unique advantage is that we own the entire chain from test production, deployment, and lab analysis," says Murphy. "As we develop our telehealth management system, we'll handle patients' journeys through healthcare, their medications, tests, and interactions with professionals. It's a unique mix of business-to-business and business-to-consumer, and with Amazon Connect, we have a system with the flexibility and capabilities to manage that effectively."

Benefits of AWS
- Reduced agent calls by 50% by automating collection of test kits
- Scaled to meet high demand during the COVID-19 pandemic
- Automatically routes customers to the correct region through natural voice conversations
- Helped the company meet regulatory requirements for different territories

A good example was the implementation of call recording, which helped LetsGetChecked implement access and auditing rules for different tasks such as quality assurance, query resolution, and freedom-of-information requests.

Irish unicorn LetsGetChecked is an end-to-end global healthcare solutions company that helps people manage their health from home through direct access to diagnostic testing, virtual care, and medication delivery. With its core diagnostic testing business already established, LetsGetChecked was executing a planned expansion into virtual care and medication delivery. The company was growing rapidly and needed to improve its call center systems to handle increasing numbers of customers and tests.

LetsGetChecked used Amazon Lex to build natural-language chatbots with conversational artificial intelligence, allowing the automated routing of calls to appropriate regional queues. Another area where the company benefits from the combination of Amazon Connect and Amazon Lex is the transfer of completed home test kits to the lab. LetsGetChecked runs a huge operation and coordinates it by using its customer relationship management system to communicate directly with the dispatch system of its delivery firm.
"
Leverage pgvector and Amazon Aurora PostgreSQL for Natural Language Processing Chatbots and Sentiment Analysis _ AWS Database Blog.txt,"
AWS Database Blog

Leverage pgvector and Amazon Aurora PostgreSQL for Natural Language Processing, Chatbots, and Sentiment Analysis
by Shayon Sanyal | on 13 JUL 2023 | in Advanced (300), Amazon Aurora, Generative AI, PostgreSQL compatible, Technical How-to

Generative AI, a category of artificial intelligence algorithms that can generate new content based on existing data, has been hailed as the next frontier for various industries, from tech to financial services, e-commerce, and healthcare. And indeed, we're already seeing the many ways generative AI is being adopted. ChatGPT is one example of generative AI, a form of AI that does not require a background in machine learning (ML); virtually anyone with the ability to ask questions in simple English can utilize it.
The driving force behind the capabilities of generative AI chatbots lies in their foundation models. These models consist of expansive neural networks meticulously trained on vast amounts of unstructured, unlabeled data spanning various formats, including text and audio. The versatility of foundation models enables their use across a wide range of tasks.

In this post, we cover two use cases in the context of pgvector and Amazon Aurora PostgreSQL-Compatible Edition:
- First, we build an AI-powered application that lets you ask questions, in natural language, based on content in your PDF files. We upload PDF files to the application and then type questions in simple English. Our AI-powered application processes the questions and returns answers based on the content of the PDF files.
- Next, we make use of the native integration between pgvector and Amazon Aurora Machine Learning. Machine learning integration with Aurora currently supports Amazon Comprehend and Amazon SageMaker. Aurora makes direct and secure calls to SageMaker and Comprehend that don't go through the application layer. Aurora machine learning is based on the familiar SQL programming language, so you don't need to build custom integrations, move data around, or learn separate tools.

Overview of pgvector and large language models (LLMs)
pgvector is an open-source extension for PostgreSQL that adds the ability to store and search over ML-generated vector embeddings. pgvector provides different capabilities that let you identify both exact and approximate nearest neighbors. It's designed to work seamlessly with other PostgreSQL features, including indexing and querying. Using ChatGPT and other LLM tooling often requires storing the output of these systems, that is, vector embeddings, in a permanent storage system for retrieval at a later time. In the previous post, Building AI-powered search in PostgreSQL using Amazon SageMaker and pgvector, we provided an overview of storing vector embeddings in PostgreSQL using pgvector, and a sample implementation for an online retail store.

Large language models (LLMs) have become increasingly powerful and capable. You can use these models for a variety of tasks, including generating text, chatbots, text summarization, image generation, and natural language processing (NLP) capabilities such as answering questions. Some of the benefits offered by LLMs include the ability to create more capable and compelling conversational AI experiences for customer service applications or bots, and improved employee productivity through more intuitive and accurate responses. LangChain is a Python module that makes it simpler to use LLMs. LangChain provides a standard interface for accessing LLMs, and it supports a variety of LLMs, including OpenAI's GPT series, Hugging Face, Google's BERT, and Facebook's RoBERTa.

Although LLMs offer many benefits for NLP tasks, they may not always provide factual or precisely relevant responses to specific domain use cases. This limitation can be especially crucial for enterprise customers with vast enterprise data who require highly precise and domain-specific answers. Organizations seeking to improve LLM performance for their customized domains should look into effectively integrating their enterprise domain information into the LLM.
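Before moving to the solution, here is a minimal sketch of what pgvector storage and nearest-neighbor search look like at the SQL level, using plain psycopg2. The connection details, table name, and placeholder vector are illustrative only; the post's actual implementation uses LangChain's PGVector wrapper, shown later:

import psycopg2

# Illustrative connection values; replace with your Aurora endpoint.
conn = psycopg2.connect(host="<your-aurora-endpoint>", dbname="postgres",
                        user="postgres", password="<password>")
cur = conn.cursor()
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("CREATE TABLE IF NOT EXISTS items ("
            "id bigserial PRIMARY KEY, content text, embedding vector(768));")

# pgvector orders results with distance operators: <-> Euclidean,
# <=> cosine distance, <#> negative inner product. Lower is closer.
query_embedding = [0.0] * 768  # placeholder; normally produced by your model
vec_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
cur.execute("SELECT content FROM items ORDER BY embedding <=> %s::vector LIMIT 5;",
            (vec_literal,))
print(cur.fetchall())
conn.commit()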
Solution overview

Use case 1: Build and deploy an AI-powered chatbot application

Prerequisites
- Aurora PostgreSQL v15.3 with pgvector support.
- Python installed with the required dependencies (in this post, we use Python v3.9). You can deploy this solution locally on your laptop or via Amazon SageMaker Notebooks.
- Note that this solution incurs costs. Refer to Amazon Aurora Pricing to learn more.

How it works
We use a combination of pgvector, open-source foundation models (flan-t5-xxl for text generation and all-mpnet-base-v2 for embeddings), LangChain packages for interfacing with its components, and Streamlit for building the bot front end. LangChain's ConversationBufferMemory and ConversationalRetrievalChain allow the chatbot to store and recall past conversations and interactions, enhancing our personalized chatbot by adding memory to it. This enables the chatbot to recall previous conversations and contextual information, resulting in more personalized and engaging interactions.

NLP question answering is a difficult task, but recent developments in transformer-based models have greatly enhanced its ease of use. Hugging Face's Transformers library offers pre-trained models and tools that make it simple to perform question-answering activities. Streamlit, a widely used Python module, is used to create interactive web applications, while LangChain is a toolkit that facilitates retrieving documentation context data based on keywords.

The application follows these steps to provide responses to your questions:
1. The app reads one or more PDF documents and extracts their text content.
2. The extracted text is divided into smaller chunks that can be processed effectively.
3. The application utilizes a language model to generate vector representations (embeddings) of the text chunks and stores the embeddings in pgvector (the vector store).
4. When you ask a question, the app compares it with the text chunks and identifies the most semantically similar ones.
5. The selected chunks are passed to the language model, which generates a response based on the relevant content of the PDFs.

[Diagram in the original post: application architecture and question-answering flow]

Environment setup
To get started, we need to install the required dependencies. You can use pip to install the necessary packages, either on your local laptop or via a SageMaker Jupyter notebook:

pip install streamlit langchain pgvector PyPDF2 python-dotenv altair huggingface-hub InstructorEmbedding sentence-transformers

Create the pgvector extension on your Aurora PostgreSQL database (DB) cluster:

CREATE EXTENSION vector;

Note: When you use HuggingFaceEmbeddings, you may get the following error: StatementError: (builtins.ValueError) expected 1536 dimensions, not 768. This is a known issue (see pgvector does not work with HuggingFaceEmbeddings #2219). You can use the following workaround:
1. Update ADA_TOKEN_COUNT = 768 in your local (site-packages) langchain/langchain/vectorstores/pgvector.py on line 22.
2. Update the vector type column for the langchain_pg_embedding table on your Aurora PostgreSQL DB cluster:

ALTER TABLE langchain_pg_embedding ALTER COLUMN embedding TYPE vector(768);
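For context on those dimension numbers: LangChain's PGVector store assumed OpenAI's 1536-dimension embeddings by default, while all-mpnet-base-v2 produces 768-dimension vectors. You can confirm a model's output dimension directly; a quick check (not part of the original walkthrough):

from sentence_transformers import SentenceTransformer

# all-mpnet-base-v2 emits 768-dimension embeddings, which is why the
# embedding column must be vector(768) rather than the 1536 default.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
print(model.get_sentence_embedding_dimension())  # prints: 768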
Import libraries
Let's begin by importing the necessary libraries:

import streamlit as st
from dotenv import load_dotenv
from PyPDF2 import PdfReader
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.llms import HuggingFaceHub
from langchain.vectorstores.pgvector import PGVector
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from htmlTemplates import css, bot_template, user_template
from langchain.text_splitter import RecursiveCharacterTextSplitter
import os

To load the pre-trained question answering model and embeddings, we import HuggingFaceHub and HuggingFaceInstructEmbeddings from the LangChain utilities. For storing vector embeddings, we import pgvector as a vector store, which has a direct integration with LangChain. Note that we're using two additional important libraries: ConversationBufferMemory, which allows for storing of messages, and ConversationalRetrievalChain, which allows you to set up a chain to chat over documents, with chat history for follow-up questions. We use RecursiveCharacterTextSplitter to split documents recursively by different characters, as we'll see in our sample app. For the purpose of creating the web application, we additionally import Streamlit.

For the demo, we use a popular whitepaper as the source PDF document: Amazon Aurora: Design considerations for high throughput cloud-native relational databases.

Create the Streamlit app
We start by creating the Streamlit app and setting the header:

st.header("GenAI Q&A with pgvector and Amazon Aurora PostgreSQL")
user_question = st.text_input("Ask a question about your documents:")

This sets the header of our web application to "GenAI Q&A with pgvector and Amazon Aurora PostgreSQL."

Next, we take our PDFs as input and split them into chunks using RecursiveCharacterTextSplitter:

def get_pdf_text(pdf_docs):
    # Concatenate the extracted text of every page of every uploaded PDF.
    text = ""
    for pdf in pdf_docs:
        pdf_reader = PdfReader(pdf)
        for page in pdf_reader.pages:
            text += page.extract_text()
    return text

def get_text_chunks(text):
    # Split on progressively smaller separators into roughly 1,000-character
    # chunks, with a 200-character overlap between neighboring chunks.
    text_splitter = RecursiveCharacterTextSplitter(
        separators=["\n\n", "\n", ".", "!", "?", ",", " ", ""],
        chunk_size=1000,
        chunk_overlap=200,
        length_function=len
    )
    chunks = text_splitter.split_text(text)
    return chunks

Load the embeddings and LLM into the Aurora PostgreSQL DB cluster
Next, we load the question answering embeddings, using the sentence transformer sentence-transformers/all-mpnet-base-v2, into the Aurora PostgreSQL DB cluster as our vector database, via the pgvector vector store in LangChain:

CONNECTION_STRING = PGVector.connection_string_from_db_params(
    driver = os.getenv("PGVECTOR_DRIVER"),
    user = os.getenv("PGVECTOR_USER"),
    password = os.getenv("PGVECTOR_PASSWORD"),
    host = os.getenv("PGVECTOR_HOST"),
    port = os.getenv("PGVECTOR_PORT"),
    database = os.getenv("PGVECTOR_DATABASE")
)

def get_vectorstore(text_chunks):
    # Embed each chunk with all-mpnet-base-v2 and persist the vectors in
    # the Aurora PostgreSQL cluster through pgvector.
    embeddings = HuggingFaceInstructEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
    vectorstore = PGVector.from_texts(texts=text_chunks, embedding=embeddings, connection_string=CONNECTION_STRING)
    return vectorstore

Note that pgvector needs the connection string to the database; we load it from the environment variables.
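Although the post goes straight on to wire up the chain, the store returned by get_vectorstore can also be queried on its own, which makes a useful sanity check. A minimal sketch, assuming the vectorstore variable built above (the question text is illustrative):

# Retrieval-only check: the query is embedded with the same model and the
# k closest chunks are returned from pgvector, before any LLM is involved.
docs = vectorstore.similarity_search("How does Aurora handle replication?", k=3)
for doc in docs:
    print(doc.page_content[:120])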
Next, we load the LLM. We use Google's flan-t5-xxl LLM from the Hugging Face Hub repository:

llm = HuggingFaceHub(repo_id="google/flan-t5-xxl", model_kwargs={"temperature": 0.5, "max_length": 1024})

By default, LLMs are stateless, meaning that each incoming query is processed independently of other interactions; the only thing that exists for a stateless agent is the current input. There are many applications where remembering previous interactions is very important, such as chatbots, and conversational memory allows us to do that. ConversationBufferMemory and ConversationalRetrievalChain allow us to provide the user's question and the conversation history to generate the chatbot's response, while leaving room for follow-up questions:

def get_conversation_chain(vectorstore):
    # Buffer memory keeps the running chat history and returns it as
    # messages, so the chain can resolve follow-up questions.
    memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)
    conversation_chain = ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=vectorstore.as_retriever(),
        memory=memory
    )
    return conversation_chain

# create conversation chain
st.session_state.conversation = get_conversation_chain(vectorstore)

User input and question answering
Now, we handle the user input and perform the question answering process:

user_question = st.text_input("Ask a question about your documents:")
if user_question:
    handle_userinput(user_question)

with st.sidebar:
    st.subheader("Your documents")
    pdf_docs = st.file_uploader("Upload your PDFs here and click on 'Process'", accept_multiple_files=True)
    if st.button("Process"):
        with st.spinner("Processing"):
            # get pdf text
            raw_text = get_pdf_text(pdf_docs)
            # get the text chunks
            text_chunks = get_text_chunks(raw_text)

Demonstration
Streamlit is an open-source Python library that makes it simple to create and share custom web apps for machine learning and data science; in just a few minutes you can build and deploy powerful data apps. To install and run Streamlit:

$ pip install streamlit
$ streamlit run app.py

Follow the instructions in the sidebar:
1. Browse and upload PDF files. You can upload multiple PDFs because we set the parameter accept_multiple_files=True for the st.file_uploader function.
2. Once you've uploaded the files, click Process.
3. Start asking your questions in the search bar.

For example, let's start with a simple question: "What is Amazon Aurora?" The app generates a response from the whitepaper's content. Let's ask a different, more complex question: "How does replication work in Amazon Aurora?" Again a response is generated; note that the conversation history is preserved by ConversationBufferMemory, and ConversationalRetrievalChain allows you to set up a chain with chat history for follow-up questions. We can also upload multiple files and ask questions across them. For instance, after uploading another file, the "Constitution of the United States," we can ask the app, "What is the first amendment about?" and receive a response.

For full implementation details about the code sample used in this post, see the GitHub repo.
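One helper the walkthrough calls but never defines is handle_userinput. A plausible minimal reconstruction, assuming the bot_template and user_template HTML snippets imported from htmlTemplates earlier (this is a sketch, not code from the original post):

def handle_userinput(user_question):
    # The chain retrieves relevant chunks from pgvector and generates an
    # answer; with memory attached it also returns the full chat history.
    response = st.session_state.conversation({'question': user_question})
    st.session_state.chat_history = response['chat_history']

    # Render the conversation, alternating user and bot message templates.
    for i, message in enumerate(st.session_state.chat_history):
        template = user_template if i % 2 == 0 else bot_template
        st.write(template.replace("{{MSG}}", message.content), unsafe_allow_html=True)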
Use Case 2: pgvector and Aurora Machine Learning for Sentiment Analysis

Prerequisites
- Aurora PostgreSQL v15.3 with pgvector support.
- Python installed with the required dependencies (in this post, we use Python v3.9).
- Jupyter (available as an extension on VS Code or through Amazon SageMaker Notebooks).
- AWS CLI installed and configured for use. For instructions, see Set up the AWS CLI.
- Note that this solution incurs costs. Refer to Amazon Aurora Pricing to learn more.

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text; no prior machine learning experience is required. This example walks you through the process of integrating Aurora with the Comprehend Sentiment Analysis API and making sentiment analysis inferences via SQL commands. For our example, we use a sample dataset of fictitious hotel reviews. We use Hugging Face's sentence-transformers/all-mpnet-base-v2 model for generating document embeddings and store the vector embeddings in our Aurora PostgreSQL DB cluster with pgvector.

Use Amazon Comprehend with Amazon Aurora
1. Create an IAM role to allow Aurora to interface with Comprehend.
2. Associate the IAM role with the Aurora DB cluster.
3. Install the aws_ml and vector extensions. For installing the aws_ml extension, see Installing the Aurora machine learning extension.
4. Set up the required environment variables.
5. Run through each cell in the given notebook pgvector_with_langchain_auroraml.ipynb.
6. Run Comprehend inferences from Aurora.

1. Create an IAM role to allow Aurora to interface with Comprehend

aws iam create-role --role-name auroralab-comprehend-access \
  --assume-role-policy-document "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"rds.amazonaws.com\"},\"Action\":\"sts:AssumeRole\"}]}"

Run the following commands to create and attach an inline policy to the IAM role we just created:

aws iam put-role-policy --role-name auroralab-comprehend-access --policy-name inline-policy \
  --policy-document "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"comprehend:DetectSentiment\",\"comprehend:BatchDetectSentiment\"],\"Resource\":\"*\"}]}"

2. Associate the IAM role with the Aurora DB cluster
Associate the role with the DB cluster by using the following command:

aws rds add-role-to-db-cluster --db-cluster-identifier $(echo $DBENDP | cut -d'.' -f1) \
  --role-arn $(aws iam list-roles --query 'Roles[?RoleName==`auroralab-comprehend-access`].Arn' --output text) --feature-name Comprehend

Run the following command and wait until the output shows as available before moving on to the next step:

aws rds describe-db-clusters --db-cluster-identifier $(echo $DBENDP | cut -d'.' -f1) \
  --query 'DBClusters[*].[Status]' --output text

Validate that the IAM role is active by running the following command:

aws rds describe-db-clusters --db-cluster-identifier $(echo $DBENDP | cut -d'.' -f1) \
  --query 'DBClusters[*].[AssociatedRoles]' --output table

You should see the associated role listed in the output. For more information, or for instructions on performing steps 1 and 2 from the AWS Console, see Setting up Aurora PostgreSQL to use Amazon Comprehend.

3. Connect to psql or your favorite PostgreSQL client and install the extensions

CREATE EXTENSION IF NOT EXISTS aws_ml CASCADE;
CREATE EXTENSION IF NOT EXISTS vector;

4. Set up the required environment variables
We use VS Code for this example. Create a .env file with the following environment variables:

HUGGINGFACEHUB_API_TOKEN=<>
PGVECTOR_DRIVER='psycopg2'
PGVECTOR_HOST='<>'
PGVECTOR_PORT='5432'
PGVECTOR_DATABASE='<>'
PGVECTOR_USER='<>'
PGVECTOR_PASSWORD='<>'
5. Run through each cell in the given notebook pgvector_with_langchain_auroraml.ipynb

Import libraries
Begin by importing the necessary libraries:

from dotenv import load_dotenv
from langchain.document_loaders import CSVLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores.pgvector import PGVector, DistanceStrategy
from langchain.docstore.document import Document
import os

Use LangChain's CSVLoader library to load the CSV and generate embeddings using the Hugging Face sentence transformer:

os.environ["HUGGINGFACEHUB_API_TOKEN"] = os.getenv('HUGGINGFACEHUB_API_TOKEN')

embeddings = HuggingFaceInstructEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

connection_string = PGVector.connection_string_from_db_params(
    driver = os.environ.get("PGVECTOR_DRIVER"),
    user = os.environ.get("PGVECTOR_USER"),
    password = os.environ.get("PGVECTOR_PASSWORD"),
    host = os.environ.get("PGVECTOR_HOST"),
    port = os.environ.get("PGVECTOR_PORT"),
    database = os.environ.get("PGVECTOR_DATABASE")
)

loader = CSVLoader('./data/test.csv', source_column="comments")
documents = loader.load()

If the run is successful, you should see output like the following:

/../pgvector-with-langchain-auroraml/venv/lib/python3.9/site-packages/InstructorEmbedding/instructor.py:7: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
from tqdm.autonotebook import trange
load INSTRUCTOR_Transformer
load INSTRUCTOR_Transformer
max_seq_length 512

Split the text using LangChain's CharacterTextSplitter function and generate chunks:

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
print(len(documents))
print(len(docs))

# Access the content and metadata of each document
for document in documents:
    content = print(document.page_content)
    metadata = print(document.metadata)

If the run is successful, you should see output like the following (review text truncated here for readability):

10
10
comments: great hotel night quick business trip, loved little touches like goldfish leopard print robe, complaint wifi complimentary not internet access business center, great location library service fabulous, {'source': 'great hotel night quick business trip, ...', 'row': 0}
comments: horrible customer service hotel stay february 3rd 4th 2007my friend picked hotel monaco appealing website online package included champagne late checkout 3 free valet gift spa weekend, ... apparently no manager kind supervisor weekend wait monday morning {'source': 'horrible customer service hotel stay february 3rd 4th 2007 ...', 'row': 1}
...
Create a table in Aurora PostgreSQL with the name of the collection. Make sure that the collection name is unique and that the user has permission to create a table:

collection_name = 'fictitious_hotel_reviews'

db = PGVector.from_documents(
    embedding=embeddings,
    documents=docs,
    collection_name=collection_name,
    connection_string=connection_string
)

Run a similarity search using the similarity_search_with_score function from pgvector:

from typing import List, Tuple  # needed for the type annotation below

query = "What do some of the positive reviews say?"
docs_with_score: List[Tuple[Document, float]] = db.similarity_search_with_score(query)

for doc, score in docs_with_score:
    print("-" * 80)
    print("Score: ", score)
    print(doc.page_content)
    print(doc.metadata)
    print("-" * 80)

If the run is successful, you should see output like the following; the score is the vector distance between the query and the document, so lower means closer (review text truncated for readability):

Score: 0.9238530395691034
comments: nice hotel expensive parking got good deal stay hotel anniversary, arrived late evening took advice previous reviews did valet parking, check quick easy, little disappointed non-existent view room room clean nice size, bed comfortable ... location great walking distance shopping, overall nice experience having pay 40 parking night, {'row': 5}

Score: 0.975017819981635
comments: great location need internally upgrade advantage north end downtown seattle great restaurants nearby good prices, rooms need updated ... stay location, staff friendly, {'row': 3}
Score: 1.0084132474978011
comments: great hotel night quick business trip, loved little touches like goldfish leopard print robe, complaint wifi complimentary not internet access business center, great location library service fabulous, {'row': 0}

Score: 1.0180131593936907
comments: good choice hotel recommended sister, great location room nice, comfortable bed- quiet- staff helpful recommendations restaurants, pike market 4 block walk stay {'row': 2}

Use the cosine distance strategy to refine the results to the best possible match:

store = PGVector(
    connection_string=connection_string,
    embedding_function=embeddings,
    collection_name='fictitious_hotel_reviews',
    distance_strategy=DistanceStrategy.COSINE
)

retriever = store.as_retriever(search_kwargs={"k": 1})
retriever.get_relevant_documents(query='What do some of the positive reviews say?')

If the run is successful, a single best-matching document is returned (the "nice hotel expensive parking" review shown above, row 5).

Similarly, you can test results with other distance strategies, such as Euclidean or max inner product. Euclidean distance depends on a vector's magnitude, whereas cosine similarity depends on the angle between the vectors. The angle measure is more resilient to variations in occurrence counts between terms that are semantically similar, whereas the magnitude of vectors is influenced by occurrence counts and the heterogeneity of word neighborhoods. For similarity searches or semantic similarity in text, cosine distance therefore gives a more accurate measure.
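To compare strategies side by side, the same collection can be opened with a different distance_strategy. A brief sketch, assuming the connection_string and embeddings objects defined earlier (strategy names follow LangChain's DistanceStrategy enum; verify them against your installed version):

# Same collection, Euclidean (L2) distance instead of cosine.
store_l2 = PGVector(
    connection_string=connection_string,
    embedding_function=embeddings,
    collection_name='fictitious_hotel_reviews',
    distance_strategy=DistanceStrategy.EUCLIDEAN  # or DistanceStrategy.MAX_INNER_PRODUCT
)
print(store_l2.similarity_search_with_score('What do some of the positive reviews say?', k=1))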
6. Run Comprehend inferences from Aurora
Aurora has a built-in Comprehend function that can call the Comprehend service. It passes the inputs of the aws_comprehend.detect_sentiment function, in this case the values of the document column in the langchain_pg_embedding table, to the Comprehend service and retrieves sentiment analysis results (note that the document column is trimmed because of the long free-form nature of reviews):

SELECT LEFT(document, 100) AS document, s.sentiment, s.confidence
FROM langchain_pg_embedding, aws_comprehend.detect_sentiment(document, 'en') s;

Observe the sentiment and confidence columns in the results. Together, these two columns provide the inferred sentiment for the text in the document column, along with the confidence score of the inference.

For full implementation details about the code sample used in this post, see the GitHub repo.

Conclusion
In this post, we explored how to build an interactive chatbot app for question answering using LangChain and Streamlit, and we leveraged pgvector and its native integration with Aurora Machine Learning for sentiment analysis. With this sample chatbot app, users can input their questions and receive answers based on the provided information, making it a useful tool for information retrieval and knowledge exploration, especially in large enterprises with a massive knowledge corpus. The integration of embeddings generated using LangChain and stored in Amazon Aurora PostgreSQL-Compatible Edition with the pgvector open-source extension presents a powerful and efficient solution for many use cases, such as sentiment analysis, fraud detection, and product recommendations.

Now Available
The pgvector extension is available on Aurora PostgreSQL 15.3, 14.8, 13.11, 12.15 and higher in AWS Regions, including the AWS GovCloud (US) Regions. To learn more about this launch, you can also tune in to AWS On Air at 12:00pm PT on 7/21 for a live demo with our team, on Twitch or LinkedIn. If you have questions or suggestions, leave a comment.

About the Author
Shayon Sanyal is a Principal Database Specialist Solutions Architect and a Subject Matter Expert for Amazon's flagship relational database, Amazon Aurora. He has over 15 years of experience managing relational databases and analytics workloads. Shayon's relentless dedication to customer success allows him to help customers design scalable, secure, and robust cloud-native architectures. Shayon also helps service teams with the design and delivery of pioneering features.
"
LG AI Research Develops Foundation Model Using Amazon SageMaker _ LG AI Research Case Study _ AWS.txt,"
LG AI Research successfully deployed its foundation model, EXAONE, to production in one year. EXAONE, which stands for "expert AI for everyone," is a 300-billion-parameter multi-modal model that uses both images and text data.
LG AI Research, the artificial intelligence (AI) research hub of South Korean conglomerate LG Group, was founded to promote AI as part of the group's digital transformation strategy to drive future growth. The research institute developed its foundation model EXAONE engine within one year using Amazon SageMaker and Amazon FSx for Lustre.

Built on Amazon Web Services (AWS), the foundation model mimics humans as it thinks, learns, and takes actions on its own through large-scale data training. The multi-purpose foundation model can be employed in various industries to carry out a range of tasks.

Benefits:
- 1 year to develop the EXAONE AI engine
- 35% reduction in the cost of building the AI engine
- 60% increase in data preparation speed
- Scalability that supports linear scaling

Opportunity | Developing a Super-Giant Multimodal AI
South Korean conglomerate LG Group collects vast amounts of data from its companies, which include home appliances, telecommunications, batteries, and pharmaceuticals. A key pillar of the group's digital transformation is developing AI technology and integrating AI into its products and services. The group established LG AI Research to harness the power of AI in its digital transformation strategy, develop better customer experiences, and solve common industry challenges.

When LG AI Research decided to develop its next-generation foundation model, which takes inspiration from how the human brain works and has an advanced capacity for learning and making judgments, it searched for the most efficient machine learning (ML) platform to handle vast amounts of data and large-scale training and inference. The foundation model needed to train on dozens of terabytes of data to make human-like deductions and comprehend texts and images. Moreover, the project required a high-performance compute infrastructure and the flexibility to increase the number of parameters to billions during training. LG AI Research's Gwang-mo Song explains, "By using Amazon SageMaker's high-performance distributed training infrastructure, researchers can focus solely on model training instead of managing infrastructure. In addition, by leveraging the parallel data library from Amazon SageMaker, we could obtain training results quickly as the number of GPUs and model parameters increased."

Workflow automation was also important, as multiple models or downstream tasks needed to be completed simultaneously. To meet these requirements, the institute looked at an on-premises infrastructure, but costs were too high, and it would have required 20 employees to configure and maintain the on-premises hardware. It would also have required upgrading the GPUs every year and adding more GPUs to support workload spikes.
Considering all the challenges of an on-premises solution, LG AI Research decided that Amazon SageMaker was the best fit for this project.

Solution | Building the Foundation Model EXAONE Using Amazon SageMaker
LG AI Research used Amazon SageMaker to train its large-scale foundation model and Amazon FSx for Lustre to distribute data across instances to accelerate model training. By building on AWS, LG AI Research was able to resolve issues, implement checkpoints, fine-tune, and successfully deploy the model to production.

LG AI Research built EXAONE—a foundation model that can be used to transform business processes—using Amazon SageMaker, broadening access to AI in various industries such as fashion, manufacturing, research, education, and finance. Using EXAONE, LG AI Research developed an AI virtual artist called Tilda. The fundamental power of Tilda's artistic qualities comes from EXAONE, which was trained using 600 billion pieces of artwork and 250 million high-resolution images accompanied by text. The virtual artist created 3,000 images and patterns for fashion designer Yoon-hee Park, who designed more than 200 outfits for the 2022 New York Fashion Week using Tilda's images and patterns.

Outcome | Offering New Possibilities for Expanding Fields by Using EXAONE
Park's work with LG AI Research demonstrated the potential of expanding AI technology into the art industry, growing the AI ecosystem and fostering cross-industry collaboration. The company recently announced a partnership with the Parsons School of Design in New York City to conduct joint research on advanced AI technologies to leverage in the fashion industry.

LG AI Research reduced costs by approximately 35 percent by eliminating the need for a separate infrastructure management team. It also increased data processing speed by about 60 percent using the Amazon SageMaker distributed data parallel library.

With Tilda, EXAONE has shown how foundation models can be used to transform a wide range of sectors, from manufacturing and research to education and finance. LG AI Research continues its work to make human life more valuable using its foundation model and looks forward to collaborating closely with AWS on future projects.

About LG AI Research
LG AI Research is an AI think tank dedicated to developing AI technology. The institute is expanding the AI ecosystem by encouraging cross-industry collaboration across fashion, manufacturing, research, education, and finance through EXAONE.

Amazon FSx for Lustre provides fully managed shared storage with the scalability and performance of the popular Lustre file system.

[Architecture diagram in the original case study: EXAONE's architecture on AWS]
"
LifeOmic Case Study _ AWS Lambda _ AWS.txt,"
Founded in 2016, LifeOmic has over 100 employees and a variety of healthcare software solutions. The company started by creating Precision Health Cloud, a secure cloud solution that integrates and indexes disparate data sources, including genomic, clinical, imaging, and population data. This system currently stores 400 million clinical data points and 500 billion genetic variants, including 55 billion unique genetic variants.
In addition to supporting healthcare organizations, LifeOmic wanted to offer solutions to help consumers live healthier lives. After it had achieved a solution that was compliant with HIPAA and the HITRUST Alliance, LifeOmic developed mobile apps designed to empower individuals to manage their own health. "We can support all of these products on the same solution and reuse a lot of code, so we're able to achieve a lot and expand into new marketplaces with a relatively small team," says Anthony Roach, technical director at LifeOmic.

LifeOmic Achieves up to 50% Cost Savings after Building Serverless Architecture on AWS

Benefits of AWS
- Achieved HIPAA compliance quickly
- Scales to meet peak demand
- Makes an average of 100 production deployment updates per day
- Supports a growing base of over four million users
- Reduced costs by 30%–50%
- Avoids infrastructure costs and capital expenses
- Achieved Federal Risk and Authorization Management Program compliance in 1 year
- Improved employee recruitment and retention

With its secure, scalable serverless architecture on AWS, LifeOmic is equipped to support the full continuum of healthcare, from research and preventive medicine to diagnosis and treatment management. "We wouldn't have had nearly as much breadth if we hadn't used AWS," says Roach. Four million users and counting have downloaded LifeOmic's mobile applications, which connect with wearable devices and pacemakers. The company can scale to meet demand—such as when New Year's resolutions led to a three-times increase in application sessions in January compared to December—using simple controls, without needing to add new hardware.

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services.

Achieving Scalable, HIPAA-Compliant Data Storage on AWS
By using fully managed serverless solutions on AWS, LifeOmic was able to reduce or remove its ongoing maintenance and operations costs, launch quickly, and prepare to scale with agility as the company adds new products and features. "The ease and speed of serverless development on AWS has helped our small team deliver a large set of features in just a few months," says Chris Hemp, vice president of engineering at LifeOmic.

About LifeOmic
LifeOmic has built a secure health solution that powers analytics, interventions, and engagement solutions for improving health outcomes across the continuum of care, from prevention and wellness to clinical care and research.

Becoming multitenant to support everything from small clinical practices to large hospital systems was also an important goal for LifeOmic. To achieve this, the company needed scalable data stores, and it saw that AWS provided a variety of potential solutions. By using managed services like Lambda, LifeOmic could keep operational costs low and empower its team to focus on developing software, not running the backend. "Some companies try to do cloud-agnostic development, but they lose the benefits that a designated cloud vendor can provide," says Roach. "On AWS, we gain everything we need, from serverless code to data stores, so we don't have to worry about multiple vendors and compatibilities." In April 2020, LifeOmic sought to become compliant with the Federal Risk and Authorization Management Program, and it achieved this goal by April 2021. "We wouldn't have achieved these federal standards in 1 year if we weren't using AWS," says Hemp.
"Using AWS, we were able to keep up with the requirements for documentation and security and have the support that we needed."

Software company LifeOmic knew that to improve health outcomes, researchers, clinicians, and device manufacturers in healthcare and biotech organizations needed a secure solution for interaction and data management. To build this solution quickly and cost-efficiently, LifeOmic chose a serverless architecture on Amazon Web Services (AWS).

Initially, LifeOmic focused on building genomic pipelines using AWS services such as Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service that deeply integrates with the rest of the AWS platform and, with Amazon ECS Anywhere, can run container workloads on your own infrastructure as well. Early on, the company started building APIs and began using AWS Lambda to speed up API development processes as soon as the service became HIPAA eligible. By using AWS Lambda with a Hypertext Transfer Protocol interface layered on, LifeOmic's developers were able to write and deliver code with ease, even if they were unfamiliar with AWS Lambda.

Scaling Healthcare Applications Using AWS Lambda
The company has also realized business benefits, including faster time to market: using automation, it makes an average of 100 production deployment updates in a day. LifeOmic has achieved cost savings of 30–50 percent by adopting Lambda, including using provisioned concurrency and Compute Savings Plans, a flexible pricing model that offers low prices on AWS Lambda usage. The company has also seen success recruiting and retaining employees, who are excited to use AWS services; many participate in AWS Training and have either renewed their AWS Certifications or achieved one for the first time.

When Amazon OpenSearch Service—which makes it easy to perform interactive log analytics, near-real-time application monitoring, and website searches—became HIPAA eligible, LifeOmic was able to add analytics and search features to its Precision Health Cloud. LifeOmic now uses OpenSearch Service as its biggest data store, housing 500 billion documents. Another milestone for LifeOmic was joining AWS Activate, a program that offers startups free tools, resources, and more to quickly get started on AWS. The program offered insights into the AWS road map, helping LifeOmic make its own decisions about its next steps.

In LifeOmic's pipeline, applications make code initiation requests through Amazon API Gateway. The pipeline then uses AWS Lambda to run code that retrieves data from Amazon DynamoDB, a fast, flexible NoSQL database service. And to achieve smooth workflows, LifeOmic uses AWS Step Functions, a low-code, visual workflow service that developers can use to build distributed applications and automate IT and business processes. "Using AWS Step Functions, we can achieve long-running processes easily because everything is managed for us," says Roach.
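The case study doesn't include code, but the API Gateway to Lambda to DynamoDB request path it describes is a common serverless pattern. A generic sketch of that pattern, not LifeOmic's implementation (the table name and key are hypothetical):

import json
import boto3

# Hypothetical table for illustration; API Gateway invokes handler() for
# each request via the Lambda proxy integration.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-records")

def handler(event, context):
    # Proxy integration passes URL path parameters in the event payload.
    record_id = event["pathParameters"]["id"]
    item = table.get_item(Key={"id": record_id}).get("Item")
    return {
        "statusCode": 200 if item else 404,
        # default=str covers DynamoDB's Decimal values during serialization.
        "body": json.dumps(item or {"message": "not found"}, default=str),
    }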
AWS Step Functions is a low-code, visual workflow service that developers use to build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services.

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and you pay only for what you use.

LifeOmic decided to build its solution from the ground up on AWS because AWS services like AWS Lambda make it simpler to process, store, and transmit protected health information, facilitating HIPAA compliance. "It can take years for startups to meet HIPAA compliance requirements," says Roach. "LifeOmic started under the assumption of meeting these requirements and more. We tackled and achieved the rigorous HITRUST CSF Certification in less than 6 months with zero corrective actions, and using AWS made it much easier."

Continuing to Grow and Innovate
Growing the company from the ground up on AWS has helped LifeOmic focus on innovation instead of infrastructure management. Next, the company is looking into using Amazon Timestream, a serverless time series database service, to add new features that call for continuous data, such as intraday heart rate and continuous glucose monitoring. LifeOmic also continues to expand its customer base and is seeing growing trust in the cloud. "Our customers are confident in the reliability of AWS," says Roach. "That and our ability to put out new features so quickly have created a winning combination."
"
Lotte Data Communication Company Vietnam Simplifies API Integrations for Online Retailers on AWS _ Case Study _ AWS.txt,"
Amazon EC2 Auto Scaling helps you maintain application availability and allows you to automatically add or remove EC2 instances according to conditions you define. You can use the fleet management features of EC2 Auto Scaling to maintain the health and availability of your fleet.

Opportunity | Simplifying External API Integration for Online Retailers
To streamline API management, LDCC VN created Lotte API Transit Gateway (LATG), a single integration point that online businesses can use to link to any other platform. Moon Geun Jae, platform director at Lotte Data Communication Company Vietnam, says, "LATG provides a wide spectrum of services for payment, delivery, shopping, and membership platforms. Our customers have only one connection to manage and don't need to modify the core of their IT system to perform smooth API integrations with third parties, which reduces security risk and resource costs."

Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
Lotte Data Communication Company Vietnam Simplifies API Integrations for Online Retailers on AWS _ Case Study _ AWS.txt,"Lotte Data Communication Company Vietnam Simplifies API Integrations for Online Retailers on AWS

Lotte Data Communication Company Vietnam built Lotte API Transit Gateway using Amazon EC2 Auto Scaling with AWS WAF, simplifying API configuration for millions of transactions and ensuring data protection for its customers. Its solution uses Amazon EC2 Auto Scaling for elastic scaling and Amazon GuardDuty for intelligent threat protection. By building its API solution on AWS, LDCC VN can provide a highly available, secure tool that saves customers up to 50 percent in direct IT spending and labor costs.

About Lotte Data Communication Company Vietnam
Lotte Data Communication Company Vietnam (LDCC VN) is an IT solutions provider and member of Lotte Group, one of South Korea’s leading retail conglomerates. When LDCC VN began building its Lotte API Transit Gateway solution, it chose AWS to leverage high availability and cloud-native security tools. Lotte Data Communication Company (LDCC), part of South Korea’s Lotte Group, was established in 1996 as a total IT solutions provider. The provider offers solutions centered on future core technologies such as the metaverse, mobility, and data. LDCC launched its Vietnam business in 2009 to provide locally optimized solutions in industries such as retail, finance, and manufacturing.

Benefits of AWS
99.999% solution uptime
1.7 million monthly transactions supported for 1 customer
1 day to 1 month to fully build API configurations
Up to 50% cost reduction for customers
Guards against common web exploits

Opportunity | Simplifying External API Integration for Online Retailers
Lotte Data Communication Company (LDCC), a division of South Korea’s Lotte Group conglomerate, aims to strengthen its B2B customers’ global competitiveness with proven IT solutions. Because Lotte Group also runs its own large-scale ecommerce business, it’s sensitive to the challenges online retailers face, such as the complexity of secure integrations with digital partners. Aiming to bring its expertise to international customers, LDCC established its first office abroad, Lotte Data Communication Company Vietnam (LDCC VN), in 2009. LDCC VN offers locally optimized solutions to customers in industries including retail, manufacturing, and finance.

A key challenge LDCC VN wants to address for its customers is the burden of configuring application programming interfaces (APIs) to link to multiple online partners. APIs are a core software communication intermediary among modern service platforms. Online retailers, for example, need to connect with payment providers such as banks and digital wallets, delivery services such as Grab and Gojek, and various loyalty programs. However, each of these external platforms requires unique API configurations that entail a high initial configuration cost and time-consuming maintenance. Insecurely configured API connections can also pose serious cybersecurity risks.

To streamline API management, LDCC VN created Lotte API Transit Gateway (LATG), a single integration point that online businesses can use to link to any other platform. Moon Geun Jae, platform director at Lotte Data Communication Company Vietnam, says, “LATG provides a wide spectrum of services for payment, delivery, shopping, and membership platforms. Our customers have only one connection to manage and don’t need to modify the core of their IT system to perform smooth API integrations with third parties, which reduces security risk and resource costs.”

Solution | Scaling LATG to Process Millions of Transactions While Reducing Costs
Since launching, LATG has experienced 99.999 percent uptime. The solution was built using Amazon EC2 Auto Scaling to maintain application availability. LDCC VN also deployed several native AWS security tools to build a resilient solution for its customers, including Amazon GuardDuty for intelligent threat protection, AWS Shield for managed distributed denial of service (DDoS) protection, and AWS WAF – Web Application Firewall to guard against common web exploits. This assures LATG customers that confidential data such as personally identifiable information and financial details are protected on the AWS Cloud.

AWS Services Used
Amazon EC2 Auto Scaling helps you maintain application availability and allows you to automatically add or remove EC2 instances according to conditions you define. You can use the fleet management features of EC2 Auto Scaling to maintain the health and availability of your fleet.
Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS.
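As an illustration of the elastic-scaling approach described in the solution above, the following sketch creates an Auto Scaling group with a target-tracking policy using boto3. The group, launch template, subnet IDs, and target value are placeholders, not LDCC VN's actual deployment.

    # Hypothetical EC2 Auto Scaling setup for an API gateway fleet.
    import boto3

    autoscaling = boto3.client("autoscaling")

    # Create a group that keeps between 2 and 20 instances running,
    # launched from a pre-built launch template (placeholder name).
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="latg-fleet",
        LaunchTemplate={"LaunchTemplateName": "latg-template",
                        "Version": "$Latest"},
        MinSize=2,
        MaxSize=20,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
    )

    # Scale in and out automatically to hold average CPU near 60%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="latg-fleet",
        PolicyName="target-cpu-60",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 60.0,
        },
    )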
In addition, LATG customers save 20–50 percent of IT costs over 5 years compared to doing their own API configuration work. To illustrate, LDCC VN estimates that customers performing their own API configurations spend 2–4 months and $20,000–$250,000 on setting up direct API connections, plus up to $20,000 on annual management costs including server leasing. LATG customers, on the other hand, can complete configurations within one month and spend less than $20,000. Depending on configuration complexity, configurations can even be completed in as little as two weeks and at zero cost.

LDCC VN operates a hybrid IT environment, running some workloads out of its data center in Hanoi and using Amazon Web Services (AWS) for others, such as backup and storage. The company chose to build LATG on AWS to leverage fast deployment, on-demand scaling, and high service uptime. “AWS provides high availability and scalability to accommodate any level of demand, whether it’s 10,000 or 1 billion transactions. We also value the strong cybersecurity capabilities AWS offers,” says Moon Geun Jae, platform director, Lotte Data Communication Company Vietnam.

Outcome | Reducing Infrastructure Complexity with Low Maintenance and Investment
By building LATG on AWS, LDCC VN has a reliable, scalable cloud infrastructure that reduces infrastructure complexity for customers. Low complexity is a key value proposition, because one of LATG’s target customer segments is non-technical companies that need API integrations without heavy IT investment. “Our customers benefit from enhanced security, cost savings, and a lowered requirement for headcount by using LATG,” explains Moon.

Among the first customers to try LATG was Vanila Studio (Vani Studio), a lifestyle and fintech platform in Vietnam. Vani Studio used LATG to integrate its app with a renowned global membership platform, adding and modifying APIs to improve integration flows and creating a monitoring dashboard. The dashboard alerts Vani Studio of any connection issues and offers a recommended action plan to resolve them. LATG now processes more than 1.7 million monthly transactions within the Vani Studio app.

LDCC VN plans to conduct accelerated go-to-market activities in 2023 to leverage economies of scale for LATG globally. But first, it’s focused on further developing its cloud expertise. “Our teams need to be confident and proficient at using the cloud. To achieve this, AWS has been helping us upskill and ensure we deliver high-quality service for our customers,” Moon says.
Several developers from LDCC VN have attended AWS Training and Certification courses, and members of the sales team learned from project-based sales training ahead of the LATG launch. More project-based and online training are planned for 2023. “We value the support we receive from AWS to enhance our confidence during the sales and project delivery process,” Moon emphasizes."

LTIMindtree Drives Digital Transformation for Global Customers with AWS Training and Certification.txt,"LTIMindtree Drives Digital Transformation for Global Customers with AWS Training and Certification

LTIMindtree enrolled over 4,600 employees in online AWS Skill Builder courses, virtual and in-person classroom training with hands-on labs, and AWS Certification exam readiness sessions. 18 months into the training, LTIMindtree is attracting new business opportunities and its sales team is more confident in proposing customized cloud solutions. It has also improved workforce retention and is attracting new talent.

Benefits of AWS
4,600+ employees trained in 18 months
2x AWS business opportunities
6,200+ AWS Partner Accreditations
450+ AWS Certifications
9 AWS Competencies

The number of recognized technical initiatives undergone also elevates LTIMindtree in the eyes of its customers. After 18 months of AWS Training and Certification coursework, LTIMindtree has notched 9 AWS Competencies and is aiming for 15 by early 2023. The business has also achieved 12 Service Delivery designations for services such as Amazon EMR and AWS Database Migration Service (AWS DMS) and plans to achieve more relevant AWS Service Delivery designations in 2023 to showcase its deep expertise in AWS skills.

In addition, the training program is contributing to workforce retention and talent management. In response to specific requests from LTIMindtree’s leaders to attract and upskill fresh graduates, LTIMindtree worked with AWS to develop a new-hire training program. The program includes three dedicated days of training followed by two days of on-the-job coaching.

As part of its three-year investment in workforce development, LTIMindtree has committed to train an additional 5,000–8,000 people in the next 12 months. The business is more than halfway through its three-year training plan. Furthermore, innovation is on the rise because of the training program. LTIMindtree recently introduced three new solutions for customers in the insurance and media industries. “We’re able to innovate faster, launch new solutions, have more meaningful conversations with customers, and drive new business; it’s a snowball effect,” Vijayakumar concludes.

Within one year, LTIMindtree trained about 4,600 technical and non-technical employees, with over 6,200 AWS Partner Accreditations and around 450 AWS Certifications achieved. Training was tailored to meet the needs of LTIMindtree’s complex organization structure, covering 9 business units, 15 industry verticals, and employees with different roles and skill sets spread across the globe.

AWS Certification helps learners build credibility and confidence by validating their cloud expertise with an industry-recognized credential, and organizations identify skilled professionals to lead cloud initiatives using AWS. To learn more, visit aws.amazon.com/training.
About LTIMindtree
LTIMindtree is a digital solutions provider with more than 90,000 employees and a presence in over 30 countries. The company was formed via a merger on November 14, 2022 between former Larsen & Toubro Infotech (LTI) and Mindtree. LTIMindtree is committed to addressing its customers’ business challenges, as reflected in its tagline: ‘Getting to the future, faster. Together.’ To help its team members tackle customers’ challenges, LTIMindtree has an internal motto: shoshin, a Japanese concept that refers to having an attitude of openness, eagerness, and lack of preconceptions when studying a subject, also known as “beginner’s mind.” LTIMindtree continuously upskills employees so they can approach problems from all angles and develop innovative solutions.

AWS Services Used
AWS Skill Builder is an online learning center that offers one-of-a-kind digital training built by experts at AWS.
AWS Training and Certification provides free digital AWS Partner Accreditation courses for individuals in business and technical roles. These courses give you a foundational understanding of AWS products and services, best practices, and APN programs so you can effectively address customer business and technical needs. AWS Partner Accreditation courses are available on demand and allow you to learn at your own pace.

Opportunity | Upskilling Continuously for Creative Problem Solving
Vijayakumar Pandian, associate vice president at LTIMindtree, says, “The cloud is spurring digital innovations across the industries we serve. It’s not a question of ‘can they build it,’ but rather, ‘how fast can they build it.’” Requests for cloud-based transformation projects are accelerating, and so is the demand for human resources who are certified in cloud operations. LTIMindtree’s goal is for every employee to have basic knowledge of AWS, with accreditation in business or technical areas. “Our customers want to do more with their data and are requesting trained engineers who are familiar with the AWS Well-Architected principles,” Vijayakumar adds.

Solution | Building a Cross-Functional, Flexible Program for Employees
By working with AWS Training and Certification, LTIMindtree upskills thousands of employees to attract more business opportunities, launch new solutions, and improve workforce retention. One of the challenges LTIMindtree faced in designing a training program was its employees’ busy schedules and work commitments, which required a flexible approach to training. The AWS Training and Certification team offered a range of course formats, from digital to in-person classroom instruction, to help employees access the training anytime, in the format of their choice. The training program was organized across four learning pathways as defined by LTIMindtree: migration and modernization, SAP, Internet of Things (IoT), and data.
The training plan prescribed three key training opportunities: self-paced AWS Skill Builder courses, AWS Partner Courses with hands-on labs in a classroom setting, and AWS Certification exam readiness sessions. In addition to technical training, the curriculum included seller enablement programs to help front-line employees—who might not have the right cloud knowledge to communicate the various use cases and challenges LTIMindtree can solve—understand the value of AWS Cloud solutions. “The seller enablement programs from AWS Training and Certification are powering our salespeople in specific verticals to engage in more meaningful cloud transformation conversations with customers,” says Vijayakumar.

Prior to the merger, LTI had been an Amazon Web Services (AWS) Partner for over five years and acquired a business called Powerupcloud, an AWS Partner, in 2019. This was a catalyst for further engagement with AWS, to provide even more advanced technology consulting services. LTI entered into a three-year Strategic Collaboration Agreement (SCA) with AWS in March 2021. Part of the agreement included a commitment to help LTI’s customers harness the full potential of AWS by training its employees with the help of the AWS Partner Training and Certification team. Concurrently, the business formed a separate business unit dedicated to the AWS Cloud. The data used in this story is based on the results of Larsen & Toubro Infotech's partnership with AWS Training and Certification prior to the merger.

Outcome | Doubling Sales Opportunities and Attracting New Talent
LTIMindtree, an AWS Partner, is a global technology consulting company with operations in over 30 countries. To improve its cloud expertise, the company embarked on a three-year AWS Training and Certification initiative. As a result of the training program, LTIMindtree has seen significant growth in the number of AWS business opportunities with new and existing customers. For example, from one financial quarter to the next, LTIMindtree doubled the number of sales opportunities related to AWS. “The service revenue from our AWS business has grown significantly, and the momentum continues to build,” says Vijayakumar.

By pursuing a comprehensive AWS Training and Certification program, LTIMindtree has refined its expertise in assisting enterprises to achieve their cloud technology goals. In 2022, the provider won a contract with one of the largest banks in the United States to help the bank build an AWS-native data analytics stack.

“Cloud is going to be the fabric of everything graduates do in the future, and they recognize the value of training early in their careers. Programs such as AWS Training and Certification are helping us attract and retain employees, because they believe in an organization that continuously helps them upskill,” Vijayakumar says.

On November 14, 2022, Larsen & Toubro Infotech and Mindtree—consulting and digital solutions companies under the Larsen & Toubro Group—announced a merger, combining their strengths and unlocking the benefits of scale. The merged entity, LTIMindtree, now operates as a global technology consulting and digital solutions company helping more than 750 global enterprises proactively harness digital technologies. With operations in over 30 countries, LTIMindtree is now one of India’s largest IT services companies in terms of market capitalization."
Lucid Motors and Zerolight Case Study.txt,"Lucid Motors and ZeroLight Host Virtual Car Launch on AWS, See 46% Higher Conversion Rate

Even before the COVID-19 pandemic temporarily closed dealerships worldwide, the average car-shopping experience was trending from traditional showrooms to the internet: the average number of times a car buyer visits a dealership before a purchase has dropped from 7 to 1.5 in the past decade. In reaction, automotive visualization software specialist ZeroLight offers SpotLight Suite, a cloud-based platform that brands, agencies, and dealers use to customize sales and marketing to each shopper. SpotLight users create personalized sales materials with visual content production informed by the car models that shoppers build with ZeroLight’s Palette and Palette+ configurators. In 2020, nascent luxury electric carmaker Lucid Motors enlisted ZeroLight to differentiate itself ahead of the launch of its flagship vehicle, the Lucid Air sedan.

To offer customers a seamless experience, ZeroLight needs readily accessible compute power—so it turned to Amazon Web Services (AWS), which offers globally available GPU instances, low-latency content-delivery tools, and a large selection of advanced artificial intelligence services to help marketers find and engage with customers. ZeroLight implemented Amazon Elastic Compute Cloud (Amazon EC2) G4 Instances powered by NVIDIA T4 Tensor Core GPUs. They are the industry’s most cost-effective and versatile GPU instances for graphics-intensive applications such as remote graphics workstations and graphics rendering. Those G4 Instances were key to the success of the Lucid Air’s September 2020 launch, which was moved online because of the COVID-19 pandemic.

Hosting a Successful Virtual Launch Using Amazon EC2 G4 Instances
Formed in 2014, ZeroLight works to fully integrate online and in-person car shopping. “We’re moving away from a linear buying funnel—where we take a customer from an advert to a website to a dealer—to a circular, more flexible journey where the customer chooses what they want to do,” explains Francois de Bodinat, chief product officer at ZeroLight. Using a digital twin model created with computer-aided design data from the car’s production, ZeroLight shows shoppers a photo-realistic rendering customized to their specifications. That personalized model informs every part of the customer journey from advertising to conversion, optimizing online retail advertising for automakers and better engaging with car shoppers: the goal is for every email, webpage, and ad they see to reflect their personalized car model rather than a generic car. “We want to make the customer the center of the sales process—not to feel like ‘I’m buying a Lucid,’ but to feel like ‘That’s my Lucid. And they know me,’” says Thomas Orenz, director of digital interactive marketing for Lucid Motors.

Amazon EC2 G4 instances are the industry’s most cost-effective and versatile GPU instances for deploying machine learning models such as image classification, object detection, and speech recognition, and for graphics-intensive applications such as remote graphics workstations, game streaming, and graphics rendering.
Keeping Up with an Evolving Industry on AWS
ZeroLight needs a lot of power to provide that level of graphical output and real-time computation. Before, that meant being tethered to a high-end physical computer, confining the company to working with dealerships. But that wasn’t sustainable for growth. In an increasingly remote world, customers want to benefit from quality wherever they are, and three-quarters of the car buyer’s journey happens online. Using Amazon EC2 G4 Instances, ZeroLight can offer its vehicle configurator to end users on their own devices. “We needed to bring that physical machine to the cloud and keep the power needed to serve high-quality content,” says de Bodinat. “On AWS, we have more capabilities in the cloud than we would have with physical machines.” Rather than having to be in store to use ZeroLight’s configurator, shoppers can now access it on a smartphone; ZeroLight can use AWS to deliver an iPhone 11 viewing experience that is 10 times more powerful.

In the first 10 weeks after the Lucid Air’s debut, more than 436,000 sessions were recorded. Compared to an image-based experience in A/B testing, Lucid has seen a 46 percent increase in car reservations from visitors who engage with the fully interactive configurator, and the revenue generated per session has increased by 51 percent. User engagement on the 3D configurator also increased by up to 47 percent.

Benefits of AWS
Increased conversion rate by 46%
Increased the revenue generated per session by 51%
Increased user engagement on 3D configurator by up to 47%
Doubled visitors’ duration time on website versus visits to other automakers’ sites
Handled peaks of 650 concurrent users
Enabled 430,000 configurator sessions for virtual car launch in 10 weeks
Multiplies the power of local devices by 10x

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.

Continuously Improving the Customer Experience on AWS
Using the scalable compute power of AWS, ZeroLight gives its customers free rein to create a personalized car-shopping experience for end users. “I don’t know where ZeroLight would be if we had to manage a farm of servers as assets,” admits de Bodinat. “The credibility of AWS in the market helps to gain trust with the customer to say, ‘Hey, it’s powered by AWS. You’re safe.’” Shoppers can configure the car to meet their preferences using ZeroLight’s Palette+, powered by Amazon EC2 G4 Instances. When visitors reach the Lucid website, AWS needs just 5 seconds to find their location across the United States, Europe, or the United Arab Emirates; trigger the engine on ZeroLight; begin 3D streaming; and deliver the first live image. Each session is assigned a dedicated EC2 instance, enabling Lucid to deliver immersive, 360-degree visualizations. These feature world-first volumetric-video environments brought to Lucid by ZeroLight and the AWS team, which are enhanced by another world first: real-time, cloud-rendered ray tracing, a technique that realistically re-creates the way light interacts with physical objects—enabled by the NVIDIA GPUs that power the Amazon EC2 G4 Instances.
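A rough sketch of the per-session provisioning pattern described above follows: launch a GPU instance for a configurator session, then terminate it when the session ends. The AMI, subnet, and instance size are placeholders, and ZeroLight's production orchestration (pooling, pre-warming, regional routing) is certainly more elaborate than this.

    # Hypothetical per-session GPU instance lifecycle with boto3.
    import boto3

    ec2 = boto3.client("ec2")

    def start_session_instance() -> str:
        # Launch one G4 (NVIDIA T4) instance for a rendering session.
        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",      # placeholder render AMI
            InstanceType="g4dn.xlarge",
            MinCount=1,
            MaxCount=1,
            SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
        )
        return response["Instances"][0]["InstanceId"]

    def end_session_instance(instance_id: str) -> None:
        # Release the instance so capacity is only paid for while in use.
        ec2.terminate_instances(InstanceIds=[instance_id])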
Lucid planned a virtual launch for the Air, and ZeroLight built the company a website to facilitate customer engagement and mimic the in-person shopping experience. Customers and journalists could navigate around the vehicle as if in a showroom and inspect every detail—from home. At peak traffic, 650 users concurrently configured their own Air model using the interactive 3D experience—a number enabled by ZeroLight’s ability, derived from AWS, to elastically provision more instances and then release unneeded ones to cost-effectively meet demand. Visitors’ sessions lasted twice as long as visits to other automakers’ sites. Though other launches are lucky to see a 10 percent conversion for reservations, Lucid saw 17 percent through the configurator.

Lucid had planned to launch the Air at the 2020 New York Auto Show. When the COVID-19 pandemic dashed those plans, the automaker decided that a fully online launch created as many opportunities as challenges. “Wherever you can engage with the customer, you should,” Orenz says. “I’ve never seen so much engagement on a single website at launch. There were other major launches around the same time; we totally overperformed those numbers in terms of sessions, engagement, and concurrent users on the site and the configurator—and in making reservations. And it’s stable—whatever we did, we couldn’t break it.”

ZeroLight plans to increase the configurator’s capabilities by integrating with other platforms such as Salesforce and Facebook. The company recently announced the reveal of the 2022 Mitsubishi Outlander directly on an Amazon Live landing page using ZeroLight Palette+ live configurator technologies. Lucid looks forward to a ZeroLight-built virtual reality experience using only NVIDIA CloudXR and AWS.

About ZeroLight
ZeroLight is an automotive visualization specialist that integrates cutting-edge technologies and personalized media into a single market-leading platform. Its automotive solutions enhance every stage of the vehicle-shopping journey by increasing engagement, delivering hyperpersonalization, and driving sales."

Lyell GxP Compliance _ Case Study _ AWS.txt,"Lyell Reduces Time to Validate GxP Compliance from Weeks to Minutes Using AWS

For Lyell Immunopharma (Lyell), an immuno-oncology company with a mission to cure solid-tumor cancers, it is critical to validate the systems and applications for its T-cell reprogramming workflows to comply with US Food and Drug Administration (FDA) regulations. Previously, these validations were done manually, which was expensive, time-consuming, and prone to potential errors. To facilitate compliance and meet the FDA’s computer software assurance (CSA) guidelines, Lyell needed a more efficient validation process.

“What sets AWS apart is its breadth and depth of services, its expertise, and its commitment to the healthcare and life sciences industry.”

Opportunity | Reducing the Time to Validate GxP Compliance from Weeks to Minutes
With a mission to use autologous cell therapy to cure solid-tumor cancer, Lyell uses reprogrammed T cells to develop potential new therapies. Once extracted, these T cells are processed at Lyell’s manufacturing facility and then infused back into the patient.

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
To mitigate process deviations that could lead to negative patient outcomes, it is critical to validate the environment in which all the manufacturing systems are running. The data generated by the system also needs to have robust integrity and accuracy so that it can be used for analytics downstream. However, Lyell’s manual validation process was slow and not scalable. “We would compare screenshots to assert that each environment matched our specifications. This was laborious and prone to human error,” says Adin Stein, head of IT, cloud infrastructure, and cybersecurity at Lyell. “The process would take anywhere from 2 to 3 weeks.”

Lyell turned to Amazon Web Services (AWS) and built Rapid Q, a solution that automatically validates FDA compliance and documents changes made to an environment or application. Now, the company can validate compliance in minutes instead of weeks and deploy new updates and systems at a faster pace.

Benefits of AWS
Minutes to run validation processes versus 2–3 weeks
Deployed validation tests automatically
Reduced manual errors
More time generated to focus on other business areas by eliminating manual workflows

About Lyell Immunopharma
Lyell Immunopharma is a clinical-stage T-cell reprogramming company headquartered in South San Francisco, California, dedicated to developing curative cell therapies for patients with solid-tumor cancer.

Solution | Building Rapid Q on AWS to Automate Compliance Validation
Lyell wanted to increase the efficiency and reliability of its compliance validation workflows using automation, not only for the initial implementation but also for periodic system updates. This was important so that Lyell could gain the agility that it needed to adopt new technologies and make frequent upgrades, without the barriers created by manual validation reporting. An AWS customer since 2018, the company turned to the range of curated industry solutions on AWS to streamline this labor-intensive process. It worked with AWS Professional Services, a global team of experts that helps customers realize their desired business outcomes when using the AWS Cloud.

Working with its internal quality team, Lyell built Rapid Q, an automated reporting solution for compliance validation, to assess and document every code change made to its infrastructure. “With Rapid Q, we automated not only the specifications that define each environment or application but also the validation testing,” says Stein. “As we make changes, we can run tests with the push of a button, decreasing the time that it takes to validate compliance from weeks to minutes. We can also generate reports for our quality team automatically.”

When Lyell makes any change to the software code base on its systems, a continuous integration (CI) workflow is initiated and runs a series of tests to qualify the installation. These test results are posted to Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. Each incoming test result initiates workflows to generate Rapid Q reports, powered by AWS Lambda, a serverless, event-driven compute service that lets organizations run code without provisioning or managing servers. To verify that none of the messages are lost and to prevent the system from becoming overwhelmed by multiple incoming changes, Lyell relies on Amazon Simple Queue Service (Amazon SQS), an automatic, fully managed message queuing service.

With this automation in place, Lyell can spend more time writing test cases and less time documenting changes, identifying areas of compliance risk, and performing exploratory analytics. To complete the auditing process, it uses Amazon DynamoDB, a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability, to store data and re-create compliance documentation. Multiple systems use Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud, to connect to Rapid Q for validation, including Lyell’s commercial environment monitoring and endotoxin testing systems. With Rapid Q, Lyell has achieved significant time savings and gained a scalable, paperless environmental monitoring solution that is future proof.
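A minimal sketch of this report-generation flow appears below: a Lambda function consumes S3 test-result notifications from an SQS queue, parses the results, and records an entry in DynamoDB. All bucket, table, and field names are assumptions for illustration, not Lyell's actual schema.

    # Hypothetical Rapid Q-style report generator (SQS -> Lambda).
    import json
    import boto3

    s3 = boto3.client("s3")
    dynamodb = boto3.resource("dynamodb")
    reports = dynamodb.Table("compliance-reports")  # assumed table name

    def handler(event, context):
        for record in event["Records"]:            # one per SQS message
            s3_event = json.loads(record["body"])  # S3 notification payload
            for s3_record in s3_event.get("Records", []):
                bucket = s3_record["s3"]["bucket"]["name"]
                key = s3_record["s3"]["object"]["key"]

                # Fetch and parse the CI test results posted to S3.
                body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
                results = json.loads(body)

                # Persist a report entry for the quality team's audit trail.
                reports.put_item(Item={
                    "report_id": key,
                    "passed": all(t["passed"] for t in results["tests"]),
                    "test_count": len(results["tests"]),
                })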
Lyell is also aligning with the FDA’s new CSA guidance, which encourages manufacturers to spend 80 percent of their time on critical thinking and applying testing to higher-risk activities and the remaining 20 percent on documenting IT environments and applications. Because Rapid Q automatically documents any changes, Lyell no longer needs to create reports manually. “This has freed up our resources so that we can focus on other critical aspects of the business,” says Stein. “Now, we can spend more time building solutions that help interpret data coming from manufacturing facilities and clinical sites.”

The Rapid Q system parses the data from the test results to generate automated compliance reports and confirm that the installation meets compliance specifications. Lyell also uses Amazon Simple Notification Service (Amazon SNS), a fully managed messaging service for both application-to-application and application-to-person communication, to send out notifications each time a new Rapid Q report is generated or alerts if an issue arises.
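That notification step might look like the following sketch, which publishes a message to an SNS topic when a report is generated. The topic ARN and message fields are placeholders, not Lyell's configuration.

    # Hypothetical report-completed notification via Amazon SNS.
    import json
    import boto3

    sns = boto3.client("sns")
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:rapidq-reports"  # placeholder

    def notify_report(report_id: str, passed: bool) -> None:
        # Subscribers (email, SQS, Lambda, etc.) receive this message.
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject=f"Rapid Q report {report_id}",
            Message=json.dumps({"reportId": report_id, "passed": passed}),
        )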
Outcome | Future Proofing GxP Validation on AWS
Using Rapid Q, Lyell has significantly reduced the time and cost involved in validating compliance for its systems, which has improved its agility to deploy new features and upgrades at a faster pace. More importantly, Lyell can remain in a state of reporting compliance whenever changes are made to its underlying processes through automation, saving time and reducing human error. “Every time we perform an upgrade or implement a new system that needs to be validated, we realize the immediate benefits of Rapid Q,” says Stein. “We can deliver new solutions to the business faster and at a lower cost. We can spend more time interpreting and building solutions to better understand our manufacturing data in a richer, more accelerated way.”

On AWS, Lyell has reduced manual effort for compliance and can focus more on innovation. In the future, it will use Rapid Q to run all its cloud workloads that require validation. To support these initiatives, Lyell will continue to build on AWS. “AWS brings a lot to the table in terms of opportunities,” says Stein. “We want to take full advantage of them.”

AWS Services Used
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale."

MARVEL SNAP_ How Second Dinner and Nuverse Built and Scaled the Mobile Game of the Year Using AWS for Games _ Case Study _ AWS.txt,"MARVEL SNAP: How Second Dinner and Nuverse Built and Scaled the Mobile Game of the Year Using AWS for Games

The founders of Second Dinner had an ambitious vision: for its small team of engineers to develop and maintain a free-to-play online game for millions of users worldwide. The company wanted to launch quickly and free developers to work on game features rather than maintain infrastructure. In collaboration with its publisher, Nuverse, Second Dinner built an innovative serverless architecture that quickly scaled to millions of players using managed solutions from Amazon Web Services (AWS). Within 4 months of its release, the game became one of the most popular and critically acclaimed games in the world and won the Mobile Game of the Year award.

Opportunity | Increasing Game Development Speed and Flexibility Using AWS for Games
An important feature of MARVEL SNAP is matchmaking: the evaluation and selection of compatible players for card battles in seconds. As its in-house matchmaking solution reached scalability limits, Second Dinner turned to a feature of Amazon GameLift, which provides dedicated server management for session-based multiplayer games.
The company used the feature Amazon GameLift FlexMatch as a stand-alone matchmaking service that it customized to MARVEL SNAP’s needs. Second Dinner’s use of Amazon GameLift FlexMatch resulted in the highest volume of matches ever for a game using the service. “The stand-alone Amazon GameLift FlexMatch feature slotted right in, fitting the event-driven serverless architecture that we had already embraced,” says Brenna Moore, Second Dinner senior software engineer. “It provided configurable rule sets and let us do what we needed to get a quality match make.” (A minimal sketch of a stand-alone FlexMatch request appears at the end of this section.)

Solution | Building a Fully Managed Serverless Architecture for Developers to Focus on Game Features
Traditionally, similar games run on a single server in a data center or in the cloud, but Second Dinner had committed to a serverless architecture using solutions from AWS for Games, which helps customers to build, run, and grow their games with purpose-built cloud services and solutions. “We adopted AWS early on and identified a set of services that could help us accomplish our goal,” says Aaron Brunstetter, Second Dinner’s vice president of engineering. “We realized that we could just use AWS and focus on things that we could do uniquely and powerfully.”

MARVEL SNAP accommodates millions of players across its six global regions. A player’s mobile device calls a game client that connects to Amazon API Gateway, a fully managed service that makes it simple to create, publish, maintain, monitor, and secure APIs. Amazon API Gateway invokes functions of AWS Lambda, a serverless, event-driven compute service that helps organizations run code for virtually any type of application or backend service without provisioning or managing servers. Second Dinner built its serverless architecture around AWS Lambda functions that integrate with other AWS services within Nuverse’s account for stable online user experiences.
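Below is a minimal sketch of a stand-alone FlexMatch matchmaking request using boto3. The configuration name and player attributes are placeholders, not Second Dinner's actual rule set.

    # Hypothetical stand-alone Amazon GameLift FlexMatch usage.
    import uuid
    import boto3

    gamelift = boto3.client("gamelift")

    def request_match(player_id: str, skill: int) -> str:
        """Submit a matchmaking request and return the ticket ID."""
        response = gamelift.start_matchmaking(
            TicketId=str(uuid.uuid4()),
            ConfigurationName="snap-ranked",  # assumed configuration name
            Players=[{
                "PlayerId": player_id,
                # FlexMatch rule sets match on attributes like these.
                "PlayerAttributes": {"skill": {"N": skill}},
            }],
        )
        return response["MatchmakingTicket"]["TicketId"]

    def check_match(ticket_id: str) -> str:
        # Poll the ticket until FlexMatch proposes a match.
        ticket = gamelift.describe_matchmaking(TicketIds=[ticket_id])
        return ticket["TicketList"][0]["Status"]  # e.g. SEARCHING, COMPLETED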
Second Dinner developed the game under its own AWS account, then migrated the architecture to Nuverse’s AWS account for stress testing and deployment. Teams from Second Dinner and Nuverse worked alongside AWS technical account managers to complete the transfer in 3 weeks. “On our own, it would have taken us about 6 months,” says Brunstetter. “The near-immediate turnaround was essential to a successful launch.” The fully managed serverless architecture means that engineers can focus on game features, not infrastructure. “The support from AWS has helped our organization to learn quickly,” says van Dam. “The essentially problem-free launch of MARVEL SNAP speaks for itself.”

Second Dinner founders were behind the successful digital card game Hearthstone, which had gained 10 million player accounts within 1 month of its release in 2014. As a newly formed independent game studio in 2019, Second Dinner secured a license from Marvel Entertainment and began to develop a game based on Marvel characters. At an industry event, the team by chance met representatives from Nuverse, the gaming division of ByteDance, who were looking to collaborate with experienced studios with global ambitions. Second Dinner engineers showed the Nuverse team a prototype of MARVEL SNAP, in which players compete in an online Marvel universe with digital decks of cards that contain special powers. “Nuverse brings scale to developers, including access to key capabilities that indie studios don’t have in house, such as marketing resources and investments,” says Tom van Dam, head of the Nuverse global business development team. “We also are responsible for the backend infrastructure, which gives autonomy and creative freedom to the US developers.”

Additionally, Second Dinner and Nuverse gain greater insights into infrastructure costs, and they avoid operating under the burden of financial commitments to hardware and software that they would otherwise have had to build themselves. “What was important for us from the beginning was the cost aspect,” says van Dam. “We’ve also been able to conquer time zones and language barriers. We work alongside AWS teams in multiple locations, supporting an infrastructure that doesn’t require a lot of time away from focusing on development of core features.” The architecture’s support for match play across regions facilitates the implementation of new features. For example, the Battle Mode game feature allows players to compete live against their friends in addition to anonymous players on the internet.

To further build resilience into the architecture, Second Dinner uses Amazon EventBridge, a serverless event bus that helps to receive, filter, transform, route, and deliver events. For example, events from Amazon EventBridge can trigger AWS Lambda to update player data stored in Amazon DynamoDB, a fully managed, serverless, key-value NoSQL database. “We didn’t want to build a backend for the game,” says Moore. “We were building the actual game, and that’s where we want to spend all our time.” In fact, Second Dinner saves the equivalent of up to 20 additional engineers who otherwise would have needed to focus completely on running servers and managing the backend infrastructure.
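The event-driven update Moore describes, with EventBridge events triggering a Lambda function that writes player data to DynamoDB, can be sketched as follows. The event shape, table, and attribute names are illustrative assumptions, not Second Dinner's schema.

    # Hypothetical Lambda handler invoked by an EventBridge rule.
    import boto3

    dynamodb = boto3.resource("dynamodb")
    players = dynamodb.Table("players")  # assumed table name

    def handler(event, context):
        # EventBridge delivers the custom payload under the "detail" key.
        detail = event.get("detail", {})
        players.update_item(
            Key={"player_id": detail["playerId"]},
            UpdateExpression="SET rank_points = :r, last_match = :m",
            ExpressionAttributeValues={
                ":r": detail["rankPoints"],
                ":m": detail["matchId"],
            },
        )
        return {"status": "ok"}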
Outcome | Scaling Smoothly to Millions of Players Worldwide
MARVEL SNAP launched in October 2022 and rapidly scaled to millions of global players in a few months. Early stress tests had pushed concurrency levels to 140,000 games per minute without interruptions, giving the team confidence that it could handle massive numbers of users. “Second Dinner engineers have been through many game launches before and, to a person, we felt like this was the smoothest, most successful launch technically that we’d ever experienced,” says Brunstetter. “Without a doubt, our reasons for that were the choices we made and the services provided by AWS.”

In 2022, MARVEL SNAP won Best Mobile Game at The Game Awards. Second Dinner continues to push new features as the game continues to rise in popularity, aiming to serve millions more players around the world concurrently. “MARVEL SNAP is a great flagship product,” says van Dam. “The Second Dinner team has the ambition of getting to a really big user base worldwide, and we’re delivering at scale. We want to replicate what we did for MARVEL SNAP with a lot more developers.”

Millions of players worldwide
Reduced time to market for new game features
20 full-time engineering jobs saved from backend management

About Second Dinner
Based in California, Second Dinner is a startup independent game studio founded in 2018. Its first game, MARVEL SNAP, won Mobile Game of the Year within 4 months of its release.

About Nuverse
Nuverse is the gaming division of the Chinese internet technology company ByteDance and a game development and publishing brand for players and developers around the world.

AWS Services Used
AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and only pay for what you use.
Amazon GameLift deploys and manages dedicated game servers hosted in the cloud, on-premises, or through hybrid deployments. Amazon GameLift provides a low-latency and low-cost solution that scales with fluctuating player demand.
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
Amazon EventBridge makes it easier to build event-driven applications at scale using events generated from your applications, integrated SaaS applications, and AWS services."

Maxar Case Study.txt,"Maxar Uses AWS to Deliver Forecasts 58% Faster Than Weather Supercomputer

When weather threatens drilling rigs, refineries, and other energy facilities, oil and gas companies want to move fast to protect personnel and equipment. And for firms that trade commodity shares in oil, precious metals, crops, and livestock, the weather can significantly impact their buy-sell decisions. To limit damage, these companies need the earliest possible notice before a major storm strikes. That’s the challenge Maxar Technologies set out to solve.

Accelerating Forecast Delivery
Historically, many industries have relied on reports generated by the on-premises supercomputer operated by the National Oceanic and Atmospheric Administration (NOAA). However, the weather predictions take an average of 100 minutes to process global data. Over time, many companies began to realize they would require much faster weather warnings to protect their interests. Similar to how NASA has expanded its partnerships with private firms to acquire commercial space hardware and services, the processing and delivery of critical weather data products could also be effectively commercialized.

Benefits of AWS
Generates weather forecasts 58% faster
Decreases compute costs by 45%
Reduces required server instances by 33%
Automatically spins 156 server instances up and down
Provides clients with more time to react to extreme weather
Cloud HPC Achieves the “Impossible”
To resolve this issue, Maxar sought to significantly reduce the time needed to generate numerical weather predictions. Its data scientists, engineers, and DevOps team decided to build a high performance computing (HPC) solution to deliver forecasts in half the time of the NOAA supercomputer. “We first considered an effort that would involve building the system in an on-premises data center,” says Travis Hartman, director of analytics and weather at Maxar. “But we realized we needed a cloud environment to build a cost-effective solution that our DevOps team could easily manage and which would allow us to significantly reduce our timeline to get the results to market.”

So Maxar turned to Amazon Web Services (AWS). “We knew HPC on AWS could provide an environment that balances performance, cost, and manageability,” Hartman says. “The key AWS capabilities we wanted to leverage for our numerical weather prediction application included automatic environment builds and shutdowns, elastic compute resources, the necessary networking bandwidth to crunch the numbers quickly, and the ability to do so with the velocity required by our business and customer goals.”

“Prior to using AWS, no one thought any cloud environment was capable of outperforming an on-premises supercomputer in generating numerical weather predictions,” says Stefan Cecelski, a data scientist at Maxar. “But with the fast networking speed provided by AWS, we accomplished what many IT experts considered impossible.”

Maxar worked with AWS to create an HPC solution that includes four key technologies. The company relies on Amazon Elastic Compute Cloud (Amazon EC2) for highly secure, resizable compute resources and the ability to configure capacity with minimal friction. Maxar also uses the Elastic Fabric Adapter (EFA) network interface to run its application with a hardware bypass interface that speeds up inter-instance communications. To complement the enhanced computing and networking, the application uses Amazon FSx for Lustre to accelerate the read/write throughput of the application. Maxar also takes advantage of AWS ParallelCluster, an open source cluster management tool that makes it easy to deploy HPC clusters with a simple text file that automatically models and provisions resources.
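To make the "simple text file" concrete, below is a hypothetical AWS ParallelCluster (version 3) configuration, generated from Python, reflecting the c5n/EFA design described in this story. The subnet IDs, names, and storage size are placeholders; Maxar's actual configuration is not public.

    # Hypothetical ParallelCluster config for an EFA-enabled c5n queue.
    import textwrap

    cluster_config = textwrap.dedent("""\
        Region: us-east-1
        Image:
          Os: alinux2
        HeadNode:
          InstanceType: c5n.large
          Networking:
            SubnetId: subnet-0123456789abcdef0  # placeholder
        Scheduling:
          Scheduler: slurm
          SlurmQueues:
            - Name: forecast
              Networking:
                SubnetIds:
                  - subnet-0123456789abcdef0  # placeholder
              ComputeResources:
                - Name: c5n-efa
                  InstanceType: c5n.18xlarge
                  MinCount: 0
                  MaxCount: 156
                  Efa:
                    Enabled: true
        SharedStorage:
          - MountDir: /fsx
            Name: scratch
            StorageType: FsxLustre
            FsxLustreSettings:
              StorageCapacity: 1200
        """)

    with open("cluster-config.yaml", "w") as f:
        f.write(cluster_config)

    # The cluster would then be created with the ParallelCluster CLI:
    #   pcluster create-cluster --cluster-name wx-hpc \
    #       --cluster-configuration cluster-config.yaml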
Initially, Maxar designed a cloud HPC cluster with 234 Amazon EC2 instances capable of producing a numerical weather prediction forecast in roughly 53 minutes, just about half the 100 minutes that the NOAA supercomputer takes to complete the same forecast. This accomplished Maxar’s initial performance goal, so the team set its eyes on enhancing the design to reduce cost.

Optimizing Compute Costs to Compete against a Free Service
Having achieved its performance goal, Maxar next focused on delivering the service profitably. Maxar needed to keep the cost of its weather application as low as possible to compete with the free, yet slower, service that NOAA provides. Maxar realized this objective by reducing the number of servers and optimizing the cost of the system—without negatively impacting performance. By using AWS ParallelCluster with Amazon EC2 C5n instances and EFA, Maxar generates the same computing power while decreasing the number of clustered servers by 33 percent.

Using EFA networking, Maxar reduced that cluster from 234 c5.18xlarge instances to just 156 c5n.18xlarge instances, driven by the ability of the C5n instances to communicate at 100 Gbps network speeds. The EFA interconnect made it possible to outperform the NOAA supercomputer, shortening the forecast time even further—from 53 to 42 minutes, a 22 percent decrease. The team’s new configuration can now produce a forecast 58 percent faster than NOAA’s supercomputer. Additional testing and optimization with AWS revealed Maxar could complete a forecast in under 30 minutes. With further system tuning, Maxar projects it can cut its processing time by an additional 25 percent.

The environment automatically spins up when weather data becomes available and then quickly shuts down until a new dataset is available, using numerous AWS services to orchestrate a highly scalable, redundant, and fault-tolerant workflow. The overall cost-optimization measures applied by AWS—including the integration of Amazon EC2 C5n instances with EFA—have enabled Maxar to reduce compute costs by approximately 45 percent. “We need the AWS compute resources for only about 45 minutes each day to run our numerical weather prediction application, so it is a huge benefit to have an AWS environment that we can use only when required,” says Cecelski.

Thanks to the success of the application, Maxar clients can now take proactive measures earlier when assets and personnel are threatened by extreme weather. “Our clients can better protect equipment and evacuate personnel sooner,” says Hartman. “And if weather threatens a commodity, our financial clients now have more time to make buy-sell decisions.”

Shaping the Future of High Performance Computing
The comprehensive tools, utilities, and overall AWS technology stack not only allowed Maxar to optimize the solution for cost and performance but also to get to market more quickly. “In the past, it was typically cost-prohibitive for any non-government or non-academic entity to go through the procurement and investment activities to research, buy, build, configure, and then set up a traditional on-premises, bare-metal HPC environment,” says Hartman. “However, with AWS, the barrier for commercial solutions has truly been eliminated. Plus, given the experience our team has gained through setting up our cloud HPC programs and offerings, we are well-positioned to help numerical weather prediction users—and even the core authors of numerical weather prediction models like NOAA and ECMWF (European Centre for Medium-Range Weather Forecasts)—better understand and leverage commercial solutions for numerical weather prediction applications as well as other HPC needs for all areas of Earth Intelligence.”

In addition, Hartman says, “There are a number of new programs and funding vehicles being appropriated by the US government as well as international organizations that want to leverage HPC in the cloud. We believe Maxar’s experience and recent achievements should allow us to extend this technology into these same organizations.”

Cecelski concludes, “We look forward to taking advantage of new services as AWS continues to expand its offerings, shapes the future of HPC in the cloud, and helps enable us to deliver high-performing, cost-effective services to our clients.”

About Maxar Technologies
Maxar delivers Earth Intelligence and space infrastructure and currently has more than 90 geo-communication satellites in orbit and five robotic arms on Mars. The company collects data across more than 3 million square kilometers of satellite imagery per day and has an archive of over 110 petabytes of satellite images spanning the globe.

AWS Services Used
Amazon EC2 C5 instances deliver cost-effective high performance at a low price per compute ratio for running advanced compute-intensive workloads.
Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communications at scale on AWS.
Amazon FSx for Lustre makes it easy and cost effective to launch and run the world’s most popular high-performance file system.
AWS ParallelCluster is an AWS-supported open source cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters on AWS.
To learn more, visit aws.amazon.com/hpc."
Measurable-AI-case-study.txt,"Measurable AI Empowers Businesses with Faster Insights from Alternative Data on AWS

Leveraging Managed Services to Simplify Scaling and Control Overhead
MailTime currently has 1.4 million users and Measurable AI processes more than 10 million emails each day to extract granular, itemized insights. These actionable insights are used by digital economy companies, consultancies, academia, and financial institutions to better predict revenues and gain an in-depth understanding of their customer purchasing behavior and competitive intel. The company is currently the largest provider of e-receipt data across emerging markets, with a dominant position in Southeast Asia, the Middle East, Latin America, and India.

To offload the burden of database administration, Measurable AI is also using Amazon Relational Database Service (Amazon RDS) for MySQL. Gary Lau, cofounder and CTO of Measurable AI, says, “We determined that AWS managed services, such as Amazon EKS and Amazon RDS, would simplify scaling while controlling cost and overheads. This is important as we’re still a small team.” The startup currently has 20 employees in Hong Kong and the UK.

Transferring Data Securely via Amazon S3 Buckets
To receive weekly insights from Measurable AI, most customers request data transfers via Amazon S3 buckets. Measurable AI defines read-only permission settings, grants access rights using AWS Identity and Access Management (IAM), and then customers receive data in Amazon S3 direct to their own data pipelines.

Lau says, “Amazon S3 is an industry standard for secured and convenient data sharing. The solution provides managed, secure, and scalable data storage with low latency. Another advantage is we can create a temporary link for customers to download data directly from Amazon S3 rather than our own servers, offsetting some bandwidth from our compute requirements.” Measurable AI can also transfer data via restful application programming interfaces (APIs) for customers that don’t have a data pipeline or prefer an alternative method to Amazon S3 buckets.
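The "temporary link" Lau mentions corresponds to an S3 presigned URL. A minimal sketch with boto3 follows; the bucket and object key are placeholders, not Measurable AI's actual layout.

    # Hypothetical time-limited download link for a private S3 object.
    import boto3

    s3 = boto3.client("s3")

    def make_download_link(bucket: str, key: str, minutes: int = 60) -> str:
        """Return a time-limited HTTPS link to a private S3 object."""
        return s3.generate_presigned_url(
            ClientMethod="get_object",
            Params={"Bucket": bucket, "Key": key},
            ExpiresIn=minutes * 60,  # expiry in seconds
        )

    # Example: a link to a weekly insights export that expires in an hour.
    url = make_download_link("example-insights-bucket", "weekly/2022-05-02.csv")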
About Measurable AI: Measurable AI is a B2B provider of aggregated, anonymous data insights for digital economy companies, financial institutions, and researchers. Based in Hong Kong, its data coverage spans emerging markets in Southeast Asia, Latin America, and the Middle East.

Benefits:
•  Reduces query times from hours to minutes
•  Queries 70 TB of data each day
•  Frees up 20% of developers’ time
•  Simplifies scaling with customized Kubernetes node groups
•  Transfers data securely to customers’ data pipelines
•  Automates storage configuration changes
•  Reduces time-to-market with serverless technology

In 2018, Measurable AI migrated to Amazon Web Services (AWS) from another cloud provider. Among other reasons, it sought to leverage the rich features available in Amazon Elastic Kubernetes Service (Amazon EKS), such as customized node groups to improve scalability, a feature not available with the company’s previous provider.

After migrating to AWS, Measurable AI looked for other ways to improve operations with managed services on AWS. One of its focus areas is query performance, a key success criterion for the startup. In typical use cases, Measurable AI customers query the startup’s data sets to explore and parse information about their own customers or markets.

Reducing Query Times from Hours to Minutes

Initially, Measurable AI deployed the open-source Elasticsearch engine on Amazon EKS. However, its developers were spending too much time maintaining infrastructure, and complex queries could take hours to run. It switched to Amazon OpenSearch Service, a managed analytics suite, to perform queries on the 70 TB of email data currently stored in Amazon Simple Storage Service (Amazon S3). Developers also appreciate the ease with which they can upgrade instance types without managing additional storage requirements and configuration changes. “If we need to improve query performance, we simply upgrade the instance and the attached storage is managed by Amazon OpenSearch Service,” explains Lau.

The results have been impressive. Since adopting Amazon OpenSearch Service, Measurable AI has reduced average query times from hours to minutes, meaning customers can obtain actionable consumer insights faster. Furthermore, developers now utilize built-in dashboards for monitoring instead of building their own. The startup is saving at least 20 percent of developers’ time previously spent on monitoring and maintenance. “Amazon OpenSearch Service has delivered faster search and query performance with rich client libraries for easy integration. Plus, it’s freed up more time for us to focus on developing,” Lau says.
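As a flavor of the query workload involved, here is a minimal, hedged sketch using the opensearch-py client; the domain endpoint, index name, and fields are hypothetical, not Measurable AI's schema:

from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "search-example.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("user", "password"),  # placeholder; SigV4 signing also works
    use_ssl=True,
)

# Aggregate e-receipt volume per merchant over the last 30 days.
response = client.search(
    index="e-receipts",
    body={
        "size": 0,
        "query": {"range": {"purchased_at": {"gte": "now-30d/d"}}},
        "aggs": {"by_merchant": {"terms": {"field": "merchant", "size": 10}}},
    },
)
print(response["aggregations"]["by_merchant"]["buckets"])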
Freeing Up Developers with Serverless Technology

Last year, Measurable AI introduced RewardMe, a cashback reward app that rewards individual users for contributing anonymous data points. Consumers sign up for RewardMe, link the app to their credit card or email account, and automatically earn cryptocurrency or cash back with every purchase they make across 100 merchants worldwide. To reduce time to market, Measurable AI used AWS Fargate, a serverless, pay-as-you-go compute engine, to launch RewardMe without managing servers.

About Measurable AI: Measurable AI is an alternative data startup specializing in providing corporations with granular insights extracted from its own transactional e-receipt consumer panel. Founded in 2014, this innovative data provider started out pioneering MailTime, an email productivity app that helps “declutter” mailboxes and prioritize emails in an easy-to-use SMS format.

According to research, the market for alternative data is expected to grow to $3.2 billion in 2022 and reach $13.9 billion by 2026, a compound annual growth rate of 44 percent. Alternative data is defined as unstructured text and imagery from news feeds, social media, online communities, communications metadata, satellite imagery, geospatial information, and other sources that can help businesses derive unique—and valuable—market insights.

The startup is growing its customer base for both its B2C and B2B operations and is prepared to scale with an agile foundation on AWS. Lau concludes, “Alternative data is all about speed. Freeing up our developers’ time to deliver insights to the market faster is key, and managed services from AWS allows us to do that.” To learn more, visit aws.amazon.com/solutions/analytics.

To receive weekly insights from Measurable AI, most customers request data transfers via Amazon S3 buckets. Measurable AI defines read-only permission settings, grants access rights using AWS Identity and Access Management (IAM), and then customers receive data in Amazon S3 direct to their own data pipelines.
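One common way to implement the read-only delivery buckets described above is a bucket policy that grants a customer's AWS account GetObject/ListBucket access only. A hedged sketch with boto3; the account ID, bucket, and prefix are placeholders:

import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "CustomerReadOnly",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # customer account
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::insights-delivery",
            "arn:aws:s3:::insights-delivery/weekly/*",
        ],
    }],
}

# Attach the read-only policy so the customer pulls data straight
# into their own pipeline.
s3.put_bucket_policy(Bucket="insights-delivery", Policy=json.dumps(policy))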
Mediality Leverages Automation to Deliver Racing Data Faster on AWS _ Case Study _ AWS.txt

Mediality Leverages Automation to Deliver Racing Data Faster on AWS (2023)

About Mediality: Formed after the separation in 2020 of Australian Associated Press (AAP), Mediality Pty Ltd offers diverse media and publishing solutions, including the country’s premier press release distribution network. Its Mediality Racing division, formerly AAP Thoroughbred Information Services and then AAP Racing, has decades of experience delivering data on thoroughbred horses to clients such as wagering operators, horse owners, and individual punters, and has been supplying accurate, updated horse racing data used in form guides for nearly four decades.

Mediality provides modern media and publishing solutions for businesses of all sizes. To offer faster, more flexible data delivery, its Mediality Racing division decided to migrate from Microsoft Windows and older legacy workloads in the data center to more open-source alternatives on the AWS Cloud.

Opportunity | Modernizing 40-Year-Old Data Center Architecture

When Mediality was spun off from AAP, the business—and its subsidiaries such as Mediality Racing—inherited legacy data center and application architecture, with Windows-based workloads that were initially built nearly 40 years ago.

Mediality has highly skilled developers on staff, but most of their experience prior to this project was with the .NET framework, and they were struggling to keep up with the company’s growth. To build upon its developers’ expertise, the business chose to work with Cevo, an Amazon Web Services (AWS) Partner. Mediality had other workloads on AWS and wanted to execute the data project on a trusted platform following cloud best practices. The company has an ongoing relationship with Cevo and valued its deep knowledge and experience in developing solutions for customers—including those in the racing industry—using AWS NoSQL and serverless technologies.

Solution | Developing User-Friendly, Cloud-Native Data Workflows

Mediality Racing worked with Cevo to migrate from legacy Microsoft Windows workloads and develop a cloud-native, serverless data framework using AWS Amplify and AWS Lambda. After analyzing how data was flowing in and out of its core database, Cevo helped Mediality migrate from Microsoft SQL Server, a relational database hosted in a managed data center, to Amazon DocumentDB (with MongoDB compatibility), a fully managed non-relational database service, saving about 2 hours daily on database management.

With the help of Cevo, Mediality has automated several formerly manual workflows. Efficiency has skyrocketed, and employees can redirect their attention to more value-added tasks like product development. Employee satisfaction has likewise increased because monotonous, time-consuming tasks have been removed from daily workflows. “We can use our resources and in-depth racing knowledge better to our competitive advantage,” explains Philip McLean, managing director at Mediality Racing. The increase in automation across all data processes has drastically improved operation-wide efficiency. Mediality also has higher visibility into workflows on the AWS Cloud, to see where further automation could be introduced. Its teams are currently putting the finishing touches on a public API, which will be a first for the business.

Because racing workflows are cyclical and prone to spikes just before events, Cevo recommended that Mediality use a serverless, pay-per-use approach for data transfers. Mediality is now using AWS Lambda serverless code to check for and automatically retrieve input data as it’s updated. Data retrieval and ingestion are fully automated, event-driven processes. Many files that formerly required manual transfer are now sent immediately to customers, saving about 10–15 minutes per event. Previously, Mediality Racing’s account manager would spend at least 2 hours a day preparing and loading files for each race. “This project will finally allow our account manager to focus on business and product development,” McLean explains.
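A minimal sketch of the event-driven pattern described above, assuming updated racing files land in an S3 bucket that triggers a Lambda function; the bucket names and forwarding step are placeholders, not Mediality's actual code:

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Fired by an S3 ObjectCreated notification when a supplier
    # uploads an updated racing data file.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # Validate and forward immediately instead of waiting for a
        # manual transfer window (placeholder destination bucket).
        s3.put_object(Bucket="customer-delivery", Key=key, Body=body)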
A company restructuring in 2021 provided the opportunity to streamline. Tim Mansour, technology initiatives manager at Mediality Racing, explains, “We decided to move forward with a greenfield approach to redesign our data platform to be cloud native, leaving the past behind and deploying modern technologies to boost workflow efficiency.”

Mediality Racing had attempted a piecemeal approach to modernization, but this ended up adding rather than reducing workflow complexity. Meanwhile, several of its customers were asking for more modern data delivery formats, including application programming interfaces (APIs). The company had been delivering racing data via large XML files for many years.

Mediality Racing now uses AWS Amplify as a user-friendly development framework, AWS Lambda to drive event-driven automation, and Amazon DocumentDB as a fully managed database service. The company is able to offer customers an API for faster data delivery and consumption, freeing up employees from the many file management tasks that filled their workdays.

Outcome | Eliminating Technical Debt with Flexible API Solution

By modernizing its data platform on the AWS Cloud, Mediality can offer customers a flexible API that facilitates faster retrieval of time-sensitive racing data. “The faster our customers can get their products to market—products that rely on our data—the more likely they are to capture the punter’s dollar,” McLean explains.

With the API, Mediality expects to see even greater efficiencies in file transfer timelines. Currently, employees take 7–8 minutes to review updated racing files and validate the data before sending updates to customers. Luke Donnelley, operations manager at Mediality Racing, says, “We’re expecting to see a significant uptick—up to 5 minutes—in the speed that we can deliver data. Five minutes is very significant in the corporate online book-making industry in Australia, which has become ultra-competitive. It’s a race for information.”

Mediality Racing plans to release its public API in 2023, and the company anticipates the move will open the door to a whole new set of use cases for its customers, including bespoke racing app development. “Having a public API transforms the way we can deliver our product and ultimately the way customers consume our data. The enhanced platform will enrich our existing customer relationships and provide a future-proofed foundation for new business opportunities,” McLean concludes.
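Because Amazon DocumentDB is MongoDB-compatible, application code can use standard MongoDB drivers. A minimal sketch with pymongo, assuming a hypothetical cluster endpoint and collection layout rather than Mediality's schema:

from pymongo import MongoClient

# DocumentDB requires TLS; the CA bundle is downloadable from AWS.
client = MongoClient(
    "mongodb://user:password@docdb-cluster.cluster-xxxx.ap-southeast-2.docdb.amazonaws.com:27017",
    tls=True,
    tlsCAFile="global-bundle.pem",
    retryWrites=False,  # DocumentDB does not support retryable writes
)

races = client["racing"]["races"]
# Fetch the latest races for a meeting (placeholder fields).
for race in races.find({"meeting": "Flemington"}).sort("start_time", -1).limit(5):
    print(race["race_number"], race["runners"])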
Mediality recognized the need to modernize but lacked the investment capital to move towards an open-source architecture on the cloud. Specifically, Mediality Racing wanted to shift from bespoke Windows applications to web interfaces. Its primary database, built on Microsoft SQL Server, stores horse racing data going back to the 1980s and is the core of the business. Mediality Racing supplies Australia’s major newspapers with information for form guides and has a long-standing reputation for data accuracy, so ensuring the integrity of its data during a planned migration was critical.

Cevo quickly began helping Mediality Racing develop cloud-native data workflows, setting up an AWS Landing Zone, a secure multi-account AWS environment based on AWS best practices, and using AWS Amplify as a user-friendly development framework. Mansour says, “AWS Amplify has been incredibly useful because it allows us to deploy very quickly and easily, pushing code changes to new environments in about 10 minutes.” This faster deployment directly accelerates Mediality’s development process by cutting testing time in half, Mansour explains. AWS Amplify also detects if parts of the code are broken and prevents deployment in such cases—thwarting potential errors in racing data due to breaks in code.

Mediality has also boosted resilience and future-proofed its operation with the migration by eliminating the technical debt associated with running legacy on-premises applications. Mansour elaborates, “We have very loyal staff that have been with us for 20-plus years and knew how to run our on-premises SQL database well. But that came with a significant business continuity risk, as that knowledge resided with just a few individuals. People just aren’t learning those types of legacy workflows and programming languages like COBOL anymore.” With the implementation of Amazon DocumentDB, Mediality has a lower total cost of ownership with a fully managed database that eliminates undifferentiated management tasks and licensing fees.

To learn more, visit aws.amazon.com/solutions/migration.

Mercks Manufacturing Data and Analytics Platform Triples Performance and Reduces Data Costs by 50 on AWS _ Case Study _ AWS.txt

Merck’s Manufacturing Data and Analytics Platform Triples Performance and Reduces Data Costs by 50% on AWS (2023)

MANTIS unifies data across business units and makes it ready for analysis and decision-making to unlock business value. By using AWS services, Merck’s Digital Manufacturing organization is effectively overcoming the challenges of implementing and sustaining a huge, complex data platform. The company provides data analytics solutions and capabilities to thousands of users across the globe. Looking ahead, Merck will focus on scaling the platform for low-latency data availability, virtualization, and no-code self-service.

The platform uses Amazon Redshift, which uses SQL to analyze structured and semi-structured data and model large datasets. This makes it simple for thousands of engineers, supply chain managers, and process engineers to create and consume data models. “MANTIS is using AWS services to develop reusable solutions using both ‘lake house’ and ‘data warehouse’ architectures to offer the flexibility and agility required by users,” says Ram Silai, director in the Digital Manufacturing organization at Merck.
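As a flavor of how such warehouse models are queried programmatically, here is a minimal, hedged sketch using the Redshift Data API via boto3; the cluster, database, and SQL are illustrative placeholders, not Merck's actual environment:

import time
import boto3

rsd = boto3.client("redshift-data")

# Submit a SQL statement asynchronously against the warehouse.
run = rsd.execute_statement(
    ClusterIdentifier="mantis-warehouse",   # placeholder cluster name
    Database="manufacturing",
    DbUser="analyst",
    Sql="SELECT site, COUNT(*) AS batches FROM production_runs "
        "WHERE run_date > CURRENT_DATE - 30 GROUP BY site",
)

# Poll until the statement finishes, then fetch the result rows.
while True:
    status = rsd.describe_statement(Id=run["Id"])["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(2)
print(rsd.get_statement_result(Id=run["Id"])["Records"])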
Opportunity | Using AWS to Build a Scalable Platform for Manufacturing Data at Merck

For over 130 years, Merck has developed important medicines and vaccines to prevent and treat diseases in people and animals. In 2017, the company’s IT team developed MANTIS, a centralized data and analytics platform, to help store, visualize, and analyze global manufacturing data in an effective, efficient, secure, and reliable manner. The platform was initially built on premises. “MANTIS not only streamlines manufacturing operations and helps achieve our strategic goals but also helps us become a more data-driven organization,” says Silai.

In 2019, Merck migrated MANTIS to the cloud. It chose AWS due to the flexibility of different services, the ability to run programs at a global scale, and the combination of low-cost storage with high-speed data processing capabilities. Moreover, AWS has been an important component of Merck’s enterprise cloud journey. “AWS provides key enterprise services for Merck and supports our cloud-first strategy at every touchpoint across the organization,” says Silai. “We engage with the AWS team on a constant basis so we can align our road map, improve our capabilities, and become more efficient for our users and businesses.”

Solution | Creating a Scalable Data Lake and Warehouse and Saving 50 Percent in Operating Costs

To share raw and aggregate data, Merck paired Amazon S3 with Amazon Redshift. The team further complements Amazon S3 with AWS Glue, a serverless data integration service that simplifies discovering, preparing, migrating, and integrating data from multiple sources for analytics. This architecture simplifies and democratizes data usage for everyone at Merck through powerful data visualizations and user-friendly applications built on top of the AWS-powered data lake. Using this platform, stakeholders can get a holistic and near-real-time view of Merck’s manufacturing operations and supply chain. They can also run advanced analytics to optimize manufacturing processes, reduce operational risks, and drive meaningful outcomes. “The solution helps teams spend less time searching and moving data and more time using it for meaningful patient and business outcomes,” says Silai.

With MANTIS and other data platforms within Merck adopting similar AWS-based architectures, the company can better unify and share data across business units and divisions. “We will be able to share data between research, manufacturing, commercial, and global support functions seamlessly,” says Silai. “Using AWS capabilities, we’re truly bringing data to the heart of decision-making at Merck. And it’s just the beginning of what is possible.”
About Merck: Merck (known as MSD outside of the United States and Canada) is a global healthcare company that delivers innovative health solutions through its prescription medicines, vaccines, biologic therapies, and animal health products.

Due to exponential growth and the increasing variety of data, MANTIS constantly hit its performance and scalability limits. With data from over 120 source systems and thousands of users, Merck needed a more scalable and reliable system that provided maximum efficiency and reduced operating costs.

Merck uses Amazon Simple Storage Service (Amazon S3), an object storage service built to store and retrieve any amount of data from anywhere, to enhance data availability for users while lowering storage costs and time to market. Using Amazon S3, Merck unifies data silos and increases data availability at low cost while providing the highest levels of security and reliability.

Outcome | Pursuing Data Innovation Using AWS Services

Since implementing AWS solutions, there has been a 50 percent reduction in operating costs and a three-times improvement in performance compared to the legacy on-premises solution. Merck has also seen a significant decrease in time to ingest data for developing solutions, an improved compliance posture, and increased supply chain visibility. MANTIS stores roughly 400 TB of data, adding about 1 TB of data each day. “What’s also significant is that the new platform has made it simpler to develop and implement solutions that are required to follow Good Manufacturing Practices requirements,” says Silai.

Using AWS tools like Amazon CloudWatch, which collects and visualizes near-real-time logs and metrics, Merck monitors its collection and use of data, notes problems as they arise, and maintains compliance. The platform has a single access management governance framework based on different data domains. In addition to Amazon CloudWatch, Merck uses AWS CloudTrail, which monitors and records account activity across AWS infrastructure, to gain more control over storage, analysis, and remediation. “AWS CloudTrail is very important to our approach because we want to have a clear audit trail to meet Good Manufacturing Practice requirements,” says Silai.
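To illustrate the audit-trail side, a minimal boto3 sketch that pulls recent CloudTrail events for a given resource; the bucket name is a placeholder and this is not Merck's tooling:

import boto3

ct = boto3.client("cloudtrail")

# Look up recent management events that touched a specific resource,
# e.g. for a Good Manufacturing Practice audit review.
events = ct.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "ResourceName",
        "AttributeValue": "mantis-raw-data",  # placeholder bucket name
    }],
    MaxResults=20,
)
for e in events["Events"]:
    print(e["EventTime"], e["EventName"], e.get("Username", "-"))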
“We wanted speed and efficiency to develop applications on a data platform,” says Silai. “Plus, because we had a range of technology solutions with multiple vendors, it was cumbersome to move, share, and analyze data.”

To overcome this challenge, Merck’s IT team used Amazon Web Services (AWS) to build and implement a holistic platform, MANTIS, to bring data and analytics capabilities to the heart of decision-making for manufacturing. The platform unifies data originating from over 120 manufacturing systems and external parties, providing over 3,000 users with a simpler and more cost-effective way to access and analyze data. Using MANTIS, Merck’s manufacturing division can achieve its strategic goals and ensure that life-saving medications make it to the right place at the right time with the highest levels of quality.

Midtrans Case Study _ Amazon Web Services.txt

Midtrans (GoTo Financial) Collaborates with AWS to Drive Digital Transformation for SMBs through its Pojokusaha.com Online Portal (2022)

Many small and medium businesses (SMBs) that want to move to the cloud never get off the ground. Some SMBs let concerns around maintenance, security, and costs prevent them from fully adopting cloud services.

Midtrans, an Indonesia-based epayment gateway and subsidiary of digital payment technology organization GoTo Financial (formerly GoJek Group), wanted to solve this problem by helping SMBs gain easier access to cloud technology and services. “The cloud has accelerated innovation and led to digital transformation for many enterprises. But many Indonesian SMBs face business challenges when trying to digitize their infrastructures,” says Eizel Mauldy Muhammad, project manager for Pojok Usaha. “For example, traditional merchants, such as small shop owners with no website or social media presence, sometimes lack the technical resources to support digital transformation.” Midtrans wanted to make it easier for these companies to use the cloud to change the way they sell products and services.
Creating a Digital Portal in 7 Months

To achieve its goals, Midtrans engaged with Amazon Web Services (AWS) to build a solution that extends its payment gateway to SMBs across Indonesia. The two organizations conducted joint planning and “working backwards” sessions—a product development approach in which companies start from the ideal customer end state and work backwards—to align business priorities. Following these sessions, the two companies agreed to collaborate on a new digital portal for SMBs. AWS supported Midtrans by offering financial support and technical expertise from a local AWS Partner. Eizel adds, “We collaborated closely to create the portal, from strategizing to implementation.” This joint effort resulted in Midtrans completing the project, from ideation to design to launch, within seven months.

The outcome was Pojok Usaha (“the Business Corner” in Bahasa Indonesia), an online portal that acts as a centralized hub for SMBs. The portal runs on Amazon Elastic Compute Cloud (Amazon EC2) instances and relies on additional services including Amazon Relational Database Service (Amazon RDS) and Amazon Simple Storage Service (Amazon S3) for data storage.

Simplifying Cloud Application Procurement for SMBs

The portal contains over 30 cloud-based products from both AWS Partners and Midtrans customers. SMBs can purchase the products through the Midtrans payment gateway. These offerings include web development services, chat bots, and point-of-sale applications. The portal is designed to assist two types of SMBs: those with no digital footprint, and those with digital services seeking to expand their customer base. By using the portal, businesses can connect with sales teams for applications and services, facilitating a simpler approach to onboarding.

The Pojok Usaha portal makes it simpler for SMBs across Indonesia to quickly find and procure cloud services and solutions from AWS and its partners. “By working with AWS to create this portal, we’re serving traditional businesses lacking the technical resources to tap into the digital world,” says Eizel. “By accessing the portal, they can simply click and sign up for a new application or service and begin using cloud solutions without building and maintaining their own software.”

Driving Digital Transformation for Indonesian Merchants

Aside from merchants, AWS Partners are also leveraging the portal to connect seamlessly with merchants. One partner is Jurnal by Mekari, which provides a cloud-based accounting application for its customers through the portal, with the goal of increasing technology adoption and sophistication among Indonesian SMBs. “Through the Pojok Usaha portal on AWS, we are giving partners a way to provide their digital products to merchants in one place to help them grow their business faster,” says Eizel.

About Midtrans: Midtrans, based in Indonesia, provides complete digital payment solutions for enterprises, startups, and small and medium businesses. More than 500,000 businesses use the Midtrans payment gateway for electronic payments, and the platform processes over 20 million transactions every month.

Offering More Applications and Helping Additional Customers Drive Innovation

Midtrans and AWS will continue to collaborate to offer additional applications and services through the portal, including a broader suite of AWS-native services alongside seamless payment capabilities via GoPay, a digital wallet for online payments. Working together, the two companies will also grow the portal via the new AWS Asia Pacific (Jakarta) Region. With three Availability Zones, AWS customers and partners have a wider ability to process and store data locally.
“The AWS Asia Pacific (Jakarta) Region will help us reach more SMBs in Indonesia,” says Eizel. “We hope to attract more than 10,000 businesses through the portal and help them create new efficiencies and drive innovation in the cloud.”

Through the Pojok Usaha portal, Midtrans and AWS are reaching their goal of helping SMBs in Indonesia digitize their businesses and ultimately accelerate their cloud journeys. One merchant taking advantage of the portal to drive digital transformation is Mutia Karya, a food supplier that developed a platform, Mikrolet, to connect food stall operators with suppliers. Another company, Livina Global Teknologi, is using Pojok Usaha to sell an application called Mostore, which allows food and beverage companies to promote their products digitally.

To learn more, visit aws.amazon.com/campaigns/small-medium-businesses.

Migrating Large-Scale SAP Workloads Seamlessly to AWS with Sony _ Sony Case Study _ AWS.txt

Migrating Large-Scale SAP Workloads Seamlessly to AWS with Sony (2023)

Key highlights of SAP West Platform include that the platform is a multitenant environment that serves the following Sony business units: Sony Europe, Sony North America, Sony Interactive Entertainment Europe, Sony Corporation of America, Sony Global Treasury Services PLC, Sony Russia, Sony Ukraine, Sony Overseas AG, Sony Turkey, Professional Services Middle East and Africa (Sony Dubai), Sony Semiconductor Solutions, and Hawk-Eye Innovations.

Sony migrated SAP West Platform to the cloud to address multiple drivers, including return on investment, cost reduction, technology refresh, service improvement, agility, and preparations for its migration to SAP S/4HANA on AWS—which helps companies achieve faster time to value with the AWS on-demand infrastructure.

Additionally, Sony GISC-IN used Amazon Elastic File System (Amazon EFS), a serverless, fully elastic file storage service, for its main SAP directories in a high-availability cluster. AWS Enterprise Support worked with Sony GISC-IN to optimize its Amazon EFS usage. By configuring Amazon EFS throughput, incorporating lifecycle policies to migrate infrequently accessed data to an infrequent-access tier, and optimizing mounts so that they used recommended parameters for optimal performance, Sony reduced Amazon EFS costs by 40 percent.
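The lifecycle policy mentioned above can be expressed in a few lines of boto3; the file system ID and transition window here are placeholders, not Sony's configuration:

import boto3

efs = boto3.client("efs")

# Move files not accessed for 30 days to the Infrequent Access
# storage class, and bring them back on first access.
efs.put_lifecycle_configuration(
    FileSystemId="fs-0123456789abcdef0",  # placeholder file system ID
    LifecyclePolicies=[
        {"TransitionToIA": "AFTER_30_DAYS"},
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
    ],
)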
About Sony Electronics: Sony Electronics is a multinational conglomerate corporation headquartered in Tokyo, Japan.

Across Sony, the SAP West Platform migration has set standards for building resilient workloads, migrating large-scale SAP Business Warehouse systems, managing information security, and achieving redundant network connectivity through integration with network hub and active directory services. For other business units, it serves as a blueprint for implementing a successful migration project while maintaining workload security and resilience, achieving redundant network connectivity, and avoiding cost overruns.

Sony GISC-IN also developed a serverless solution to automate SAP refresh, removing the need for manual refresh processes. Overall, these efforts resulted in improved backup and refresh capabilities with reduced costs for business units running SAP workloads on AWS.

Efficiency and innovation are part of Sony’s DNA. As a longtime AWS customer, it knows the advantages of AWS services, including cost reduction, better performance, and access to cutting-edge capabilities like machine learning (ML). With an eye on the future, Sony chose to migrate SAP West Platform to AWS and embrace the cloud’s operational benefits. “Sony is already an AWS enterprise customer and has many workloads on AWS and many other enterprise apps running in AWS,” says Umesh Kesavan, associate director at Sony Electronics. “So, it was easy to choose AWS rather than migrate to another cloud provider.”

When the new infrastructure was ready, Sony migrated SAP applications from on-premises data centers to AWS. The teams ran the migration in the US East (Northern Virginia) Region and distributed traffic across two Availability Zones. This approach meant that if one Availability Zone were to fail, the other would take over, minimizing disruption to the business. As a result, Sony completed the migration while maintaining high resilience and availability.

Globally, over 6,000 Sony users rely on SAP West Platform for business-critical activities, from demand planning to warehouse management. When Sony embarked on a journey to improve agility, cost efficiency, and technological modernization, SAP West Platform became a key focus. The scope of the project included the following elements: migrating SAP application infrastructure from a traditional on-premises data center to AWS; modernizing SAP Business Warehouse by upgrading to a new version and replacing Business Intelligence Accelerator with an SAP HANA database; modernizing the legacy IBM mainframe to a Linux x86 model on AWS; and rearchitecting on-premises solutions, such as SAP Master Data Management, IBM InfoPrint, and Business Warehouse Accelerator, for the cloud and SAP HANA.
Additionally, the scope included demonstrating the ability to continue business transformation projects without delays or additional costs while adhering to project timelines and business service-level agreements; avoiding functional changes that would require extensive testing, to expedite user acceptance testing; improving service, agility, and sustainability for infrastructure services; and achieving service and operational improvements, increasing service scalability, and implementing reliable high-availability disaster recovery.

Opportunity | Using AWS Services to Modernize SAP West Platform for Sony

As one of the world’s largest companies, Sony Electronics (Sony) oversees a diverse range of business units with thousands of employees. Given its intricate nature, the company’s technology estate is equally complex.

The service supports 6,000 corporate users for Sony across the multiple regions that comprise SAP West. Users rely on many SAP application products, including SAP Enterprise Resource Planning Central Component, SAP Business Warehouse, and SAP Supplier Relationship Management. Although the core applications support all tenants, the noncore applications serve specific tenants or regions. The service manages and stores more than 100 TB of application data.

Sony worked with AWS Enterprise Support—which provides 24/7 technical support from high-quality engineers, tools, and technology—to achieve its objectives and carry out the project successfully. The close collaboration between Sony and AWS Enterprise Support team members, as well as smooth communication and coordination, resulted in a seamless process. Throughout the migration, Sony’s technical account manager provided architectural and operational guidance to help the company achieve the greatest possible value from its AWS migration. The benefits delivered were significant, including cost reductions and increased agility, transparency, and modernization.

In July 2021, Sony’s SAP cloud migration successfully went live, with very smooth support in the hypercare period. The project achieved several noteworthy accomplishments, such as promised cost savings, reduced downtime, and several other benefits related to agility, transparency, and modernization.

Outcome | Improving Performance by 40% While Reducing Data Footprint by 30%

The migration delivered significant cost savings, which could be reallocated to other areas of the business. Over 200 compute instances supporting Sony’s SAP landscape were migrated to the cloud, and the company reduced its data footprint by 30 percent. The project also resulted in a 40 percent runtime performance improvement across all applications. Additionally, the migration was fully managed by Sony’s Global Information Security and Communication (Sony GISC-IN) teams with minimal business intervention.
Solution | Successfully Migrating Business Users across Regions to the Cloud

With 6,000 users across 200 locations in 50 countries, the migration was no small feat. The project involved migrating 15 SAP applications to AWS, decommissioning 3 applications to upgrade the SAP Business Warehouse cloud, and modernizing from SAP NetWeaver Business Warehouse Accelerator to SAP S/4HANA on AWS. It also needed to be completed under a tight budget, within a short timeframe, and with minimal disruption to business operations. In April 2020, Sony began to migrate SAP West Platform to Amazon Web Services (AWS)—all within an aggressive timeline and budget.

The teams began by building new AWS infrastructure for both SAP and non-SAP workloads. Then, they participated in an AWS Well-Architected review, which assists cloud architects in building secure, high-performing, resilient, and efficient infrastructure for a variety of applications and workloads. By taking part in these sessions, Sony made sure that its infrastructure met best practices for architecture, scalability, resiliency, and security.

To keep the migration under budget, Sony participated in the AWS Enterprise Discount Program and the AWS Migration Acceleration Program (AWS MAP), a comprehensive and proven cloud-migration program. The credits provided by these programs helped mitigate expenses. Sony collaborated with the AWS Enterprise Support team to choose the right version of Savings Plans, a flexible pricing model that can help companies reduce their bills by up to 72 percent compared to On-Demand prices.

Sony also took advantage of AWS Infrastructure Event Management (AWS IEM), a program that offers architecture and scaling guidance and operational support for planned events, such as migrations.

Sony GISC-IN adopted AWS Backint Agent, an SAP-certified backup and restore solution for SAP HANA workloads, to back up its database to Amazon Simple Storage Service (Amazon S3), an object storage service built to store and retrieve any amount of data from anywhere. Using this solution, the team quickly backed up 4 TB of data in less than 1 hour. With the help of AWS Enterprise Support, Sony GISC-IN optimized its Amazon S3 usage and reduced costs by 20 percent by implementing lifecycle policies, setting up Amazon S3 tiering, and adopting Amazon S3 Glacier Instant Retrieval, the lowest-cost archive storage with milliseconds retrieval for rarely accessed data.

The migration also showcased the strength of Sony GISC-IN, demonstrating its ability to deliver complex and time-sensitive projects with precision and excellence. Managing such a large-scale migration project while minimizing disruption to business operations is a testament to Sony GISC-IN’s capabilities. In fact, Sony’s chief information officer awarded the Sony GISC-IN team a gold medal in recognition of this project’s success.

After the migration, Sony collaborated with AWS Enterprise Support to further optimize its usage of AWS services. For example, Sony GISC-IN initially used gp2 volumes on Amazon Elastic Block Store (Amazon EBS), a scalable, high-performance block-storage service, as its primary storage during the migration. Later, it switched to gp3 volumes due to the ability to provision input/output operations per second and throughput independently without increasing storage size, resulting in up to 20 percent lower costs per gigabyte compared with gp2 volumes. Sony GISC-IN also worked postmigration with the AWS Enterprise Support team to optimize Amazon EBS volumes by rightsizing and by converting io1 volumes to gp3 based on volume activity, and it migrated more volumes from gp2 to gp3. These optimization efforts resulted in an 84 percent reduction in Amazon EBS storage expenses.
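The gp2-to-gp3 conversion described above is an online operation. A minimal boto3 sketch, with a placeholder volume ID and illustrative performance settings:

import boto3

ec2 = boto3.client("ec2")

# Convert a gp2 volume to gp3 in place; with gp3, IOPS and throughput
# are provisioned independently of volume size.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
    VolumeType="gp3",
    Iops=3000,          # gp3 baseline IOPS
    Throughput=250,     # MiB/s, above the 125 MiB/s gp3 baseline
)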
By participating in AWS IEM, Sony quickly detected and responded to events that had the potential to disrupt its applications. This helped improve operational efficiency and further minimize downtime.

As it moves forward, Sony plans to develop advanced solutions to help business users work faster and smarter. These solutions include dynamic pricing strategies, self-management applications, and ML models. The possibilities are virtually endless, and Sony is excited to explore the potential of its new AWS infrastructure.

Mobileye Cuts Costs Using Amazon EC2 _ Case Study _ AWS.txt

Mobileye Optimizes Ability to Build Crowdsourced HD Maps and Cut Costs Using Amazon EC2 Spot Instances (2022)

As a leading supplier of technologies for driving systems, Mobileye needed a way to create high-definition (HD) maps that provided a full set of features for driving-assist technologies and self-driving cars at an affordable cost. The creation of HD driving maps for an entire continent requires enormous compute power that must simultaneously collect data from vehicles and continuously update existing maps, a process that can quickly become unwieldy with soaring costs.

Mobileye’s Road Experience Management (REM) group, which is responsible for the creation of its HD maps, addressed these challenges by developing a complex microservices architecture using Amazon Web Services (AWS). The solution is powered by Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload. Using a suite of managed services from AWS, Mobileye could simplify its infrastructure, reduce operational overhead, and scale to more than 250,000 virtual CPUs (vCPUs) running concurrently at a fraction of the cost.

Opportunity | Determining the Need for Increased Compute Power at a Reduced Cost

Founded in 1999, Mobileye develops technology for advanced driver assistance and autonomous driving systems. The company collects data for its mapping by crowdsourcing: vehicles navigating the roads send back road segment data (RSD) that the system ingests and processes. Mobileye extracts only the valuable information from the RSD, a process that minimizes the size and processing cost of the data. By early 2019, the REM team started receiving millions of RSD files daily, which was too much data to run on one compute cluster. As a result, the team had to split the continent of Europe into four disjointed areas and scale, debug, and monitor each one. The overhead of running four clusters contributed to a significant operational challenge that added to the cost and required the team to stitch the clusters together to achieve full functionality.

The REM team updates the map in near real time: accessing, changing, rebuilding, and stitching together more than 2 million kilometers of drivable paths with detail down to the level of a single stop sign. Each map in development is saved to Amazon Aurora, which is designed for high performance and availability at a global scale with full MySQL and PostgreSQL compatibility. “We chose Aurora because it gave us the ability to work at a large scale without having to deal with a lot of maintenance or trying to optimize it ourselves,” says Pini Reisman, director of REM cloud application at Mobileye. “We get excellent performance out of the box.”

Mobileye is now able to use a single, highly scalable, self-managed Apache Spark cluster to map the entirety of Europe, using crowdsourced RSD that is tailored to the functionality of autonomous vehicles. Crowdsourced data is stored in Amazon Simple Storage Service (Amazon S3), an object storage service offering high scalability, data availability, security, and performance. “Our DevOps team worked alongside the AWS team to figure out how to store huge datasets on Amazon S3 in the most cost-effective way, giving developers access to an almost infinite number of scenarios while not breaking the bank,” says Reisman. The REM team has also begun using the Amazon S3 Intelligent-Tiering storage class, which delivers automatic storage cost savings when data access patterns change, without performance impact or operational overhead. “Within Mobileye, Amazon S3 Intelligent-Tiering has been used for quite some time and has shown significant cost reductions,” says Reisman. “From the deep analysis we did alongside the AWS team, it looks like REM will be substantially reducing costs by using this as well.”

Solution | Optimizing Costs for Compute and Storage

Working alongside AWS subject matter experts, the REM team planned a load test to address the scalability issue of a single cluster. The load test would attempt to map significant parts of Germany using the company’s actual operational code and real RSD information fed into a single cluster of Apache Spark, an open-source, distributed processing system used for big data workloads. The team started small, tweaking the parameters and improving any bottlenecks.
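A minimal PySpark sketch of the general pattern involved, reading RSD objects from S3 and aggregating them per road segment; the paths, schema, and logic are illustrative placeholders, not Mobileye's pipeline:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("rsd-aggregation").getOrCreate()

# Read crowdsourced road segment data from S3 (placeholder path/schema).
rsd = spark.read.parquet("s3a://rsd-ingest/2022/05/")

# Keep only high-confidence observations and aggregate per segment.
segments = (
    rsd.filter(F.col("confidence") > 0.9)
       .groupBy("segment_id")
       .agg(F.count("*").alias("observations"),
            F.avg("lane_width_m").alias("avg_lane_width_m"))
)
segments.write.mode("overwrite").parquet("s3a://rsd-aggregated/2022/05/")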
The load test involved several stages, gradually increasing the compute until it peaked at 1,300 parallel cells running on 250,000 vCPUs on a single Apache Spark cluster without issue, a significant improvement over REM’s previous maximum capacity of 60,000 vCPUs. Mobileye could map the entire country of Germany in just 2–4 days running on 200,000 vCPUs. “Using AWS, the same map was considerably cheaper to create than before, and it took less than half the time to complete the same area,” says Reisman. “This was achieved by trying to push the envelope and figuring out what was limiting us from running this at the scale that we wanted in one Apache Spark cluster.”

Outcome | Expanding REM Functionality Further

In 2022, the company plans to map the entirety of Europe, which will require the system to scale up to 200,000 concurrent vCPUs for 20 days—96 million vCPU hours in total. “It’s not that our architecture has changed,” says Reisman. “It’s that we managed to break the boundaries that we had before.”

About Mobileye: Mobileye develops technology for advanced driver assistance and autonomous driving systems. The company was founded in Israel in 1999 and is a leading provider of both camera-based driving-assist systems and solutions for self-driving systems.
To manage the cost of running hundreds of thousands of vCPUs, the company used Amazon EC2 Spot Instances, which let companies take advantage of unused Amazon EC2 capacity and receive up to a 90 percent discount compared with On-Demand prices. Because AWS can reclaim Spot Instances when it needs the capacity in exchange for steep discounts, Mobileye runs its fleet of Spot Instances across many Availability Zones, each consisting of one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. Additionally, the fleet consists of many Amazon EC2 instance types to diversify traffic and minimize interruptions, with priority given to the largest machines within a single Availability Zone. The solution uses primarily R-instance types for their optimal ratio of CPU to memory and cost. It prioritizes 24xlarge instances within the R-instance family before using 16xlarge, then 8xlarge, and so forth, before opening a new Availability Zone. “Using Spot Instances, we have a very big discount in our enterprise account,” says Ofer Eliassaf, Mobileye’s cloud infrastructure group lead.
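The prioritized diversification Eliassaf describes maps onto EC2 Fleet's "capacity-optimized-prioritized" Spot allocation strategy. A hedged boto3 sketch, with a placeholder launch template and illustrative priorities:

import boto3

ec2 = boto3.client("ec2")

# Request Spot capacity across instance sizes, preferring the largest
# R-family machines first (a lower Priority value means higher preference).
ec2.create_fleet(
    Type="instant",
    TargetCapacitySpecification={
        "TotalTargetCapacity": 1000,
        "DefaultTargetCapacityType": "spot",
    },
    SpotOptions={"AllocationStrategy": "capacity-optimized-prioritized"},
    LaunchTemplateConfigs=[{
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "rem-worker",  # placeholder template
            "Version": "$Latest",
        },
        "Overrides": [
            {"InstanceType": "r5.24xlarge", "Priority": 1.0},
            {"InstanceType": "r5.16xlarge", "Priority": 2.0},
            {"InstanceType": "r5.8xlarge", "Priority": 3.0},
        ],
    }],
)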
Mobileye Improves Deep Learning Training Performance and Reduces Costs Using Amazon EC2 DL1 Instances _ Mobileye Case Study _ AWS.txt

Mobileye Improves Deep Learning Training Performance and Reduces Costs Using Amazon EC2 DL1 Instances (2022)

Learn how Mobileye, a driving automation technology provider, improved price performance by 40 percent and lowered deep learning model training costs using Amazon EC2 DL1 Instances.

Opportunity | Using Amazon EC2 DL1 Instances to Cost-Effectively Train DL Models that Improve Driver Safety

Headquartered in Israel, Mobileye develops self-driving technology and advanced driver-assistance systems using cameras, computer chips, and software. More than 50 original equipment manufacturers have adopted Mobileye’s solutions in more than 800 vehicle models, running on a proprietary driver-assistance chip called EyeQ. The company has sold more than 100 million EyeQ chips, which are designed to deploy and run DL models in near real time, processing hundreds of images per second to solve many computer vision problems simultaneously. For example, autonomous vehicles use object-detection algorithms to accurately see pedestrians, other vehicles, and traffic signals. Tracking algorithms follow the trajectory of such objects. And segmentation involves the collection and ingestion of individual pixels to feed DL models that attempt to re-create real-time road conditions.

As they sought to solve tasks in detection, tracking, and segmentation, Mobileye teams had been working independently to train the computationally heavy DL models that were deployed on EyeQ. In 2021, Mobileye began a project to improve performance while lowering the cost of DL by consolidating models—what the company calls “squeezing.” This involved creating a common backbone so that all the tasks could share compute resources. To train these DL models while keeping price down, the company needed cloud-based compute powered by accelerators that could run the largest number of samples per dollar. It began comparing instances of Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload.

Solution | Creating a Heterogeneous Compute Infrastructure to Drive Development

On the research side, several Mobileye developers had been working with Habana Labs, a company that is part of Intel, an AWS Partner. Habana Labs had developed a Gaudi accelerator designed to optimize deep neural networks and power purpose-built instances for DL. After the Mobileye research teams’ success, other Mobileye teams began testing Amazon EC2 DL1 Instances, which deliver low cost-to-train DL models for natural language processing, object detection, and image-recognition use cases. Mobileye collaborated with teams from Habana Labs and AWS so that its custom models could be trained on Amazon EC2 DL1 Instances.

Together, the AWS, Habana, and Mobileye teams tested Amazon EC2 DL1 Instances for several use cases. Mobileye was able to use Amazon EC2 DL1 Instances to implement distributed training, where one DL training workload was distributed across several instances. The company used Amazon EC2 DL1 Instances within its existing architecture on Amazon Elastic Kubernetes Service (Amazon EKS), a managed Kubernetes service. “We built the automatic scaling groups, created the virtual private cloud, and facilitated communication among different instances with support from Amazon EKS solution architects,” says Ohad Shitrit, Mobileye’s senior director of AI engineering and algorithms.

While Mobileye off-loads DL to Amazon EC2 DL1 Instances, it meets the compute needs of its Amazon EKS workflows using Amazon EC2 R5 Instances, which accelerate performance for workloads that process large datasets in memory. In short, the workflow determines the instance configuration. Using a heterogeneous compute structure, Mobileye speeds its development cycles and improves time to market. It runs more than 250 production workloads daily, scaling to more than 3,500 nodes on Amazon EKS. “By setting up our deep learning training batch workflows using Amazon EC2 DL1 Instances, we’re training more and spending less,” says Shitrit.
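On the software side, Gaudi-based DL1 instances are programmed through Habana's SynapseAI PyTorch bridge. The following is a minimal, hedged sketch of a single training step on one Gaudi device, not Mobileye's models; the module path and lazy-mode mark_step calls follow Habana's public PyTorch integration, and the network is a stand-in:

import torch
import habana_frameworks.torch.core as htcore  # Habana SynapseAI bridge

device = torch.device("hpu")  # Gaudi accelerator

model = torch.nn.Linear(512, 10).to(device)   # stand-in for a real backbone
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

inputs = torch.randn(64, 512).to(device)      # placeholder batch
labels = torch.randint(0, 10, (64,)).to(device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
htcore.mark_step()   # flush accumulated ops in lazy execution mode
optimizer.step()
htcore.mark_step()
print(loss.item())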
The solution also works seamlessly alongside Argo Workflows, the open-source container-native workflow engine the company uses to orchestrate parallel jobs on Kubernetes and to observe model deployment and release. Mobileye benefited from the simple integration of solutions and overall ease of use. “You need very few changes in the code to run your network using Amazon EC2 DL1 Instances,” Shitrit says. “It’s straightforward. A talented developer can do it in a few hours.”

In one use case, it took Mobileye just 2 weeks to scale training workloads across eight Amazon EC2 DL1 Instances, and the company saw near-linear improvement as the number of instances increased. For model training, Mobileye improved price performance by as much as 40 percent on Amazon EC2 DL1 Instances compared with the same number of instances using NVIDIA-based accelerators. To further save money on its DL workflows, Mobileye used Amazon EC2 Spot Instances, which let companies take advantage of unused Amazon EC2 capacity in the cloud at up to a 90 percent discount compared with On-Demand prices.

While Mobileye off-loads DL training to Amazon EC2 DL1 Instances, it meets the compute needs of its Amazon EKS workflows using Amazon EC2 R5 Instances, which accelerate performance for workloads that process large datasets in memory. In short, the workflow determines the instance configuration. Using this heterogeneous compute structure, Mobileye speeds its development cycles and improves time to market. It runs more than 250 production workloads daily, scaling to more than 3,500 nodes on Amazon EKS. “By setting up our deep learning training batch workflows using Amazon EC2 DL1 Instances, we’re training more and spending less,” says Shitrit.

Outcome | Improving Products for Customers by Deploying Better Models
Alongside AWS and Habana teams, Mobileye is continuing to optimize its use of Amazon EC2 DL1 Instances for model training and is starting to deploy them to production, with plans to deliver to its clients soon. The company also plans to adopt Elastic Fabric Adapter (EFA), a network interface for Amazon EC2 instances that customers use to run applications requiring high levels of internode communication at scale on AWS. “Amazon EC2 DL1 is powerful hardware with a relatively low price,” says Shitrit.
“When we train cost effectively, we can deploy better models to mobilize and improve our products.”

Overview
Mobileye develops innovative autonomous vehicle technologies and powers its solutions with deep learning models. The company is constantly optimizing the price performance of its custom computer vision models, which are critical to building autonomous driving solutions that can adapt to ever-changing road conditions. To train these custom computer vision models, Mobileye turned to compute solutions in the cloud from Amazon Web Services (AWS). The company developed a heterogeneous compute cluster that included a novel Gaudi accelerator developed specifically for DL workloads. Mobileye’s solution facilitated more than 250 production workloads daily, delivered 40 percent better price performance, and accelerated the company’s DL development cycle."
Mobiuspace delivers up to 40 improved price-performance using Amazon EMR on EKS and Graviton instance _ Mobiuspace Case Study _ AWS.txt,"Mobiuspace Delivers up to 40% Improved Price-Performance Using Amazon EMR on EKS

Learn how Mobiuspace adopted a modern data architecture with Amazon EMR on EKS.

About Mobiuspace
Founded in 2016, Shenzhen Mobiuspace Technology Co., Ltd. (“Mobiuspace”) is a global internet technology company committed to inspiring every corner of the world through technology. It provides a diversified product portfolio for users to discover, explore, consume, and create pan-entertainment content, making for a personalized experience anytime, anywhere.

Opportunity | Optimizing Big Data Operations to Enhance the User Experience
With video streaming becoming the mainstay of mobile internet consumption, many users want to consume culturally relevant content and find easier ways to access such information online. However, it was not easy, especially for users in Latin America and other emerging markets, to find localized and personalized content. Mobiuspace made it a priority to analyze and learn user behavior based on users’ media consumption and cultural and national backgrounds in order to provide relevant video recommendations and better localized and personalized video streaming services.

Aiming to provide a personalized entertainment experience, Mobiuspace has rolled out a line of products that cater to global users’ need for discovering, exploring, consuming, and creating pan-entertainment content. Mobiuspace has over 200 million monthly active users across over 100 countries and regions, including emerging markets in Latin America, the Middle East, and North Africa. Its front-end servers process as many as 100,000 QPS at peak hours and billions of user behavioral events. The company’s expanding services and customer base had significantly driven up data operation costs, so Mobiuspace wanted a cost-effective solution for its massive data processing needs. Already running on Amazon EMR and Amazon Elastic Compute Cloud (Amazon EC2), Mobiuspace decided that by making deeper use of Amazon Web Services (AWS) it could improve cluster resource utilization, improve content recommendation, shorten model iteration, and optimize its recommendation algorithm.

Solution | Reducing Costs and Enhancing Agility
As the growing business placed increasing demands on its architecture, Mobiuspace underwent a data modernization effort and containerization transformation led by the big data team. Mobiuspace migrated its big data operation from Amazon EMR on EC2 to Amazon EMR on EKS, running on the fully managed Kubernetes container platform Amazon Elastic Kubernetes Service (Amazon EKS). With Amazon EMR on EKS, Mobiuspace integrated its big data and front-end applications into a microservice-based, containerized, and highly automated system with simpler operations and maintenance (O&M) management. In addition, Amazon EMR on EKS uses containers instead of virtual machines as the smallest resource unit, allowing finer management and better utilization of resources.
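For context on how jobs reach such an EMR on EKS setup, the sketch below submits a Spark job to an EMR virtual cluster with boto3. The virtual cluster ID, execution role, and script location are placeholders rather than Mobiuspace's actual resources.

    # Minimal sketch of submitting a Spark job to an EMR on EKS virtual cluster
    # with boto3. All identifiers are placeholders.
    import boto3

    emr = boto3.client("emr-containers", region_name="ap-southeast-1")

    response = emr.start_job_run(
        name="behavior-events-aggregation",        # hypothetical job name
        virtualClusterId="abc123virtualcluster",   # placeholder
        executionRoleArn="arn:aws:iam::111122223333:role/EMRContainersJobRole",
        releaseLabel="emr-6.9.0-latest",
        jobDriver={
            "sparkSubmitJobDriver": {
                "entryPoint": "s3://my-bucket/jobs/aggregate_events.py",
                "sparkSubmitParameters": (
                    "--conf spark.executor.instances=20 "
                    "--conf spark.executor.memory=8G "
                    "--conf spark.executor.cores=4"
                ),
            }
        },
    )
    print(response["id"])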
For better virtual machine scheduling on Amazon EKS, Mobiuspace made full use of AWS best practices: it runs Spot Instances and Amazon EC2 instances powered by AWS Graviton processors to further reduce the virtual machine costs of its pod pools. Amazon EC2 Spot Instances let users tap into unused EC2 capacity in the AWS Cloud at up to a 90 percent discount compared with On-Demand prices, making them well suited for container and big data workloads, and Amazon EMR on EKS facilitates easy, seamless scheduling of and access to Spot resources. Amazon EC2 instances powered by AWS Graviton processors were released in 2020, and Mobiuspace’s testing of its containerized Java back-end services showed that Amazon EC2 M6g instances deliver 40 percent better price performance than M5 instances. “With Amazon EMR on EKS and the Arm-based AWS Graviton2 instances, we improved the overall performance of our big data operations by 30 percent and reduced cost by 20 percent,” says Li Rui, vice president of technology at Mobiuspace.
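A Graviton-plus-Spot pod pool of the kind described above can be provisioned as an EKS managed node group. The following boto3 sketch shows the general shape under assumed names; the cluster, role, and subnet identifiers are placeholders.

    # Sketch: adding a Graviton-based Spot node group to an existing EKS
    # cluster with boto3. Identifiers are placeholders.
    import boto3

    eks = boto3.client("eks", region_name="ap-southeast-1")

    response = eks.create_nodegroup(
        clusterName="bigdata-cluster",              # placeholder
        nodegroupName="graviton-spot-workers",
        capacityType="SPOT",                        # run on Spot capacity
        amiType="AL2_ARM_64",                       # Arm AMI for Graviton instances
        instanceTypes=["m6g.2xlarge", "m6g.4xlarge", "r6g.2xlarge"],
        scalingConfig={"minSize": 0, "maxSize": 50, "desiredSize": 5},
        nodeRole="arn:aws:iam::111122223333:role/EKSNodeRole",
        subnets=["subnet-0abc", "subnet-0def"],
    )
    print(response["nodegroup"]["status"])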
Building on its modern data architecture on AWS, Mobiuspace uses Amazon SageMaker, a fully managed service that gives developers and data scientists the ability to build, train, and deploy machine learning models quickly, to recommend video content based on users’ interests. Amazon SageMaker ships with optimized implementations of commonly used machine learning algorithms, saving users from spending excessive time on algorithm selection and frameworks. Using Amazon SageMaker, Mobiuspace effectively shortened its cycles of continuous model iteration and updates to its recommendation algorithm, improving user experience and customer satisfaction.

Mobiuspace deployed all of its businesses and systems on AWS, comprising three major parts. First, its online service system supports service requests from all products running on different operating systems (Android, iOS, and web), including user center, in-feed video recommendation, channel recommendation, follows, video resolution, short-URL sharing, push notification, and upgrade services. Second, its big data system collects behavioral data from the client software, provides raw data for analysis and recommendation, and processes billions of behavioral events daily. Finally, its video recommendation system runs on Amazon SageMaker, capturing user activity data and using machine learning models to recommend video content based on users’ interests.

Outcome | Accelerating System Development
Building on AWS, Mobiuspace gained rapid, independent system and architecture development; easier, more agile management; automated O&M; improved operational efficiency and security compliance; and better price performance. The modernization helps Mobiuspace keep pace with its rapid growth and boost business development through rapid cost reduction and continuous optimization."
mod.io Provides Low Latency Gamer Experience Globally on AWS _ Case Study _ AWS.txt,"mod.io Provides Low Latency Gamer Experience Globally on AWS

mod.io is an open middleware platform enabling gamers to modify (mod) existing games with user-generated content. To support rapid growth and reduce its manual infrastructure burden, mod.io migrated to AWS, rapidly scaling its database and multi-Region architecture using Amazon Aurora and AWS Elastic Beanstalk, enlarging its global footprint, and reducing latency for gamers globally.

About mod.io
mod.io is a middleware platform that powers user-generated content for video games. Trusted by more than 14 million users for successful integration with over 130 games, mod.io can be utilized across PCs, consoles, mobiles, and virtual-reality devices.

Overview
Modding—the modification of video games through user-generated content (UGC)—has become an integral way of connecting game studios with their communities. mod.io is a middleware provider whose platform powers UGC within games such as SnowRunner. Operating out of Australia, mod.io boasts over 14 million users and integrations with more than 130 games.
Opportunity | Seeking Better Support and Instant Global Scaling
Since launching in 2018, mod.io has experienced phenomenal growth, averaging 250 percent year-on-year in mods downloaded. The business quickly realized, however, that its bare-metal servers could not keep pace with this growth rate over the long term. The company also had difficulty getting real-time support from its data center and hardware vendors: when experiencing data center failures, mod.io would typically suffer outages while awaiting hardware vendor support.

On-demand scaling, particularly during prime gaming hours or around major game releases, was a priority that became increasingly challenging with its data center. When a new game or a new version of a popular game is released, mod.io requires increased compute resources in a short span of time. Given that most of its gaming community is in the United States and Europe, burstable scaling was often needed during game launches and updates, particularly when games landed on subscription services and reached millions of new players. High availability and autoscaling during these spikes and sustained periods of growth were also essential. Patrick Sotiriou, co-founder and vice president of Technology at mod.io, says, “Having the agility to spin up resources instantly in another global region is critical to our business.”

Solution | Leveraging Managed Services to Relieve Infrastructure Burden
mod.io had been using Amazon Web Services (AWS) since its launch, deploying resources such as Amazon Simple Storage Service (Amazon S3) to store images and mod files. It chose to migrate fully from on premises to the AWS Cloud in 2021, leveraging managed services such as AWS Lambda to ease its infrastructure “heavy lifting” burden. With the migration, mod.io began breaking up its monolithic database and supporting architecture, prioritizing cloud-native services wherever possible.

The company briefly considered other cloud providers but chose AWS because of its positive experience with AWS subject matter experts and its familiarity with the platform. Greg Macsok, vice president of Infrastructure at mod.io, says, “The near real-time support we’ve received from AWS, from a technical and account management perspective, was a major driver in our decision. We also appreciate how we’ve been able to continue developing at speed during the migration thanks to the ease of using the AWS platform.” Since day one, mod.io has focused on continually adding features and functionality to its product, so this aspect was an important consideration.

mod.io also implemented Amazon Aurora as a fully managed database service available across three AWS Regions and multiple Availability Zones. Before the migration, mod.io had servers in the US West (Northern California) Region; it has since expanded to Frankfurt and Singapore. mod.io has set up redundant database replicas around the world to better support gamers in any location and has reduced its platform’s global latency from 700 milliseconds to 250 milliseconds on AWS.
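One common way to implement cross-Region Aurora replicas like these is an Aurora global database. The boto3 sketch below shows the general pattern, promoting an existing primary cluster and attaching a read-only secondary cluster in Frankfurt, with placeholder identifiers rather than mod.io's real resources.

    # Sketch: replicating an Aurora cluster into additional Regions with an
    # Aurora global database. Identifiers are placeholders.
    import boto3

    # Promote an existing primary cluster into a global database.
    rds_us = boto3.client("rds", region_name="us-west-1")
    rds_us.create_global_cluster(
        GlobalClusterIdentifier="modio-global",  # placeholder
        SourceDBClusterIdentifier=(
            "arn:aws:rds:us-west-1:111122223333:cluster:modio-primary"
        ),
    )

    # Attach a read-only secondary cluster in Frankfurt.
    rds_eu = boto3.client("rds", region_name="eu-central-1")
    rds_eu.create_db_cluster(
        DBClusterIdentifier="modio-frankfurt",
        GlobalClusterIdentifier="modio-global",
        Engine="aurora-mysql",
    )
    # DB instances are then added to the secondary cluster with
    # create_db_instance before it can serve reads.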
To rapidly autoscale its web applications, the company uses AWS Elastic Beanstalk. The elastic architecture ensures smooth responses to spikes in mod.io traffic during game releases. In one instance, the number of application programming interface (API) requests to the mod.io platform doubled overnight, and the system had no issues or downtime while processing the increased load. Since migrating to AWS, mod.io has experienced no major outages.

To leverage the data accumulating in Amazon Aurora and optimize performance using the right tools for the right job, mod.io is now finalizing a bespoke analytics pipeline using Amazon Redshift and Amazon Managed Streaming for Apache Kafka (Amazon MSK). It plans to use behavioral analytics to generate valuable insights that benefit the game companies it works with, alongside loyal modders on the mod.io platform.

Outcome | Expanding Reach with Highly Responsive Platform
Since completing its migration to AWS, mod.io has expanded its international presence with a platform that is highly scalable and more responsive to its users. “AWS and our cloud migration effectively unlocked the ability for us to scale globally in seconds,” says Macsok. Platform performance and reliability have increased significantly, global latency is down from 700 milliseconds to 250 milliseconds, and mod.io no longer needs to spend unnecessary time and money on hardware maintenance.

In September 2021, when beginning its cloud migration journey, mod.io had a daily active user base of 240,000. By November 2022, that figure had more than doubled to 530,000. Despite the massive increase in users, mod.io did not need to drastically scale its engineering team to support them. “Being on AWS means that no matter how much or how fast our business grows, we don’t need to scale human resources 1:1,” says Macsok.
Sotiriou says, “There’s so much potential for us to scale in several areas. I doubt there’s a use case we’d want to tackle that we couldn’t achieve with the multitude of services AWS offers.” Aside from its current analytics project, mod.io plans to evaluate containerization and the creation of a data lake. “We’re looking very far into the future and constantly comparing what we want to do at the product level with how AWS can help us achieve it at a technical level,” Sotiriou concludes. mod.io is now exploring the AWS Partner Network to jointly pursue new business opportunities within the AWS global gametech customer community."
Modern Electron Case Study.txt,"Modern Electron Optimizes Home Mini Power Plants Using Amazon EC2

Using Amazon Web Services (AWS) to simulate and optimize its technology, Modern Electron has run tens of thousands of complex simulations on compute-optimized, Intel-based Amazon Elastic Compute Cloud (Amazon EC2) C5 Instances. When AWS launched 64-bit Amazon EC2 C6g Instances, powered by Arm-based AWS Graviton2 processors, Modern Electron adopted the new technology to achieve better price performance. The savings enabled engineers to iterate faster at a 50 percent lower cost.

About Modern Electron
Founded in 2015, Modern Electron is an energy technology company developing deep tech for distributed energy generation that is greener, cheaper, and climate resilient. Modern Electron is developing technology to enable hundreds of millions of homeowners worldwide to save money on energy while reducing the carbon emissions that degrade the environment. The company is working with heating appliance manufacturers to integrate new technology into the next generation of home heating systems. The technology is a new approach to combined heat and power, converting a portion of a heating appliance’s heat into high-efficiency electricity to increase a home’s energy efficiency and heating reliability while reducing its reliance on grid electricity.
Exploring High Performance Computing on AWS
The funded startup is bringing a commercial product to market with appliance manufacturing partners around the world. The technology requires optimized designs for a range of different products and models in some of the world’s most demanding conditions, including extreme temperature, lifecycle, and reliability requirements. Thermionic converters have existed for decades and were historically used to power satellites, but engineers at Modern Electron have made breakthroughs in the technology and materials to optimize them for use in terrestrial appliances for the first time. Optimization requires powerful compute to run complex simulations, and the necessary infrastructure became available only recently. “We often simulate tens of millions of particles,” says Peter Scherpelz, senior computational physicist at Modern Electron. “We track how each particle moves and simulate that over millions of time steps—that’s trillions of calculations. A desktop computer won’t suffice.”

In 2018, Modern Electron began running simulations involving large clusters of Amazon EC2 C5 Instances, powered by Intel x86 processors. Capacity fluctuated depending on how many simulations it had to run, so the company opted for Amazon EC2 Spot Instances—spare Amazon EC2 capacity offered at discounted rates. This pricing option saved the company 50 percent compared with the cost of using Amazon EC2 On-Demand Instances for its simulations. Modern Electron then decided to explore the new Amazon EC2 C6g Instances, released in July 2020 and powered by AWS Graviton2 processors, which are custom built by AWS using 64-bit Arm Neoverse cores to deliver better price performance for cloud workloads running on Amazon EC2.

Achieving Cost Reductions and Better Performance
By migrating from Amazon EC2 C5 Instances to Amazon EC2 C6g Instances, Modern Electron reduced compute costs by an additional 50 percent. Combining this with the savings from Spot Instances, the company achieved an overall cost reduction of more than 75 percent. These savings enable the company to invest in running more simulations.

The team also uses AWS Batch, a service that provisions compute resources and optimizes job distribution based on the volume and resource requirements of the submitted batch jobs. Most Modern Electron simulations run on a single node, which means less worry about networking performance. “Our use of AWS Batch lets us worry a lot less about the infrastructure because AWS spins up the exact nodes we need as we need them,” says Scherpelz. The team’s local scripts submit runs to AWS Batch to explore specific sets of parameters. AWS Batch automatically boots up a compute node with the right resources and launches the job; as each job finishes, AWS Batch shuts that node down.
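A local submission script of the kind Scherpelz describes can be as simple as a loop over parameter sets, one AWS Batch job per run. The boto3 sketch below illustrates the pattern; the queue name, job definition, and parameters are hypothetical, not Modern Electron's actual setup.

    # Sketch of a local submission script: queuing one simulation run per
    # parameter set with AWS Batch. Names and parameters are hypothetical.
    import boto3

    batch = boto3.client("batch", region_name="us-west-2")

    parameter_sets = [
        {"emitter_temp_k": "1500", "gap_um": "10"},
        {"emitter_temp_k": "1600", "gap_um": "10"},
        {"emitter_temp_k": "1600", "gap_um": "5"},
    ]

    for i, params in enumerate(parameter_sets):
        response = batch.submit_job(
            jobName=f"thermionic-sim-{i}",
            jobQueue="c6g-spot-queue",        # placeholder queue
            jobDefinition="particle-sim:3",   # placeholder job definition
            containerOverrides={
                "command": ["python", "simulate.py"],
                "environment": [
                    {"name": k.upper(), "value": v} for k, v in params.items()
                ],
            },
        )
        print(response["jobId"])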
The company also gained elasticity. “Elasticity to scale is crucial for us because we’re a fairly small computational team and have spiking compute demands,” says Roelof Groenewald, computational physicist at Modern Electron. “In the first week of a month, we might run 1,000 simulations, then not run any in the second week. Having the exact resources available that we need at any time is important to us.”

Additionally, the company’s engineers have used AWS compute resources to continually optimize code speed, enabling much larger simulations, especially on Amazon EC2 C6g Instances, which have a large number of cores per node. And by running more-extensive simulations, the time to solution scales accordingly. “Aside from lower costs, the real payoff of Amazon EC2 C6g Instances is in speed to solution,” says Scherpelz. “When we save 10 percent, we can do 10 percent more runs or harder and bigger runs. Now we can get solutions in a reasonable time.”

Now Modern Electron’s design team can quickly simulate the detailed electron physics in its technology architectures, enabling it to iterate rapidly and improve its designs. Modern Electron plans to run more-extensive simulations based on hundreds of millions of particles rather than the 10-million-particle range explored so far. The team is working on using multiple nodes to run larger parallel jobs and on establishing the infrastructure required to submit these simulations to the cloud and get results quickly.

Reducing the Carbon Footprint Worldwide
Founded in 2015, Modern Electron has grown to 32 employees. The company’s vision is to minimize carbon emissions by developing a thermionic converter that uses the high-temperature combustion heat already present in household boilers and furnaces to generate power that is up to 5 times cheaper and much less carbon intensive than the electricity most homes can purchase from the grid. The device has no moving parts and delivers electricity more efficiently than the power grid, reducing household energy costs and carbon footprints. The technology also provides new features such as blackout-proof heating, enabling homeowners to run the heat even when the power grid is down. “Recent winter weather disasters created widespread grid outages in Texas and other states, causing millions to lose power and heat,” says Justin Ashton, vice president of product at Modern Electron. “Having efficient, blackout-proof heating is more relevant than ever. Any home with a gas appliance already has half a power plant in place. Our thermionic technology is the missing piece.” The heating appliances enhanced by Modern Electron’s technology are also compatible with future renewable fuels, such as green gas and hydrogen, lowering society’s cost to the environment and speeding up decarbonization.
Ultimately, Modern Electron expects its device to bring efficient electricity and cost savings to hundreds of millions of consumers, regardless of whether they are connected to the power grid. Using Amazon EC2 C6g Instances has put the company on a faster path to an optimized product. With Modern Electron’s technology, consumers worldwide will be able to squeeze both electricity and heat out of fuel, saving money and reducing carbon emissions. “On AWS, we have access to the right computing resources for the science we need to do,” says Scherpelz. “The solutions are there for us to use.”

Benefits of AWS: reduced compute costs by more than 75%; increased elasticity to accommodate spiking compute demands; shortened time to solutions; optimized code speed to enable larger simulations; expects to improve resiliency to blackouts, reduce carbon emissions, and reduce household energy costs for consumers."
Moderna Drives Commercial Innovation Using Amazon Connect and AI _ Moderna Case Study _ AWS.txt,"Moderna Drives Commercial Innovation Using Amazon Connect and AI

Learn how Moderna is building innovative, digital-first customer experiences with contact center automation using AWS.

About Moderna
Moderna is a global biotechnology company whose mission is to deliver the greatest impact to people through mRNA medicines. Founded in 2010, the company started the digital production and commercialization of its COVID-19 vaccine in 2021, with a robust technology platform as its backbone, and has delivered over 900 million doses thus far. Today, Moderna has 3,800 employees worldwide and 46 products in the pipeline, 31 of which are in clinical trials. Furthermore, the company is committed to having a diverse workforce and achieving carbon neutrality by 2030.

Opportunity | Enhancing Digital Experiences for Moderna Using AWS
Moderna, a digital biotechnology company, is best known for the mRNA vaccine it developed during the COVID-19 pandemic. With several other therapeutics in the pipeline, the Massachusetts-based innovator is changing the world of medicine by harnessing the power of mRNA. It is exploring new frontiers while focusing on digitization and on making systems modular, agile, and extensible by integrating them.

As it pivots to becoming a commercial organization, Moderna is using Amazon Web Services (AWS) to build personalized experiences for all stakeholders: patients, customers, agents, and supervisors. Its omnichannel cloud contact center delivers a consistent experience for users in all their interactions with the company while furthering Moderna’s vision to be a data-driven organization. Moreover, Moderna can better meet the changing needs of the broader healthcare community, including regulatory bodies and governments.

Given its global aspirations and drive for commercial excellence, Moderna needed a robust, automated customer-management solution. Its omnichannel cloud contact center (OC3) platform, built on AWS, helps the company provide a streamlined, personalized customer interaction experience at every touchpoint, across all lines of business and markets.
Solution | Deploying Machine Learning to Power Exceptional Customer Experiences
Moderna’s goal of commercial excellence hinges on top-notch, fully integrated customer relationship management to power exceptional experiences. With OC3, the company is building a future-ready, modular infrastructure that gives a 360-degree view of the customer in a dynamic landscape. “Machine learning was key to bringing Moderna’s mRNA products to market, so it was natural to extend its use to commercial efforts,” says Barbara Salami, vice president of digital for commercial at Moderna.

To build OC3, the team worked backward, starting with an ideal customer journey and streamlining operations toward that end. The platform handles inquiry, intake, interaction, and support, with built-in capabilities to support communications through customers’ preferred channels, such as voice, chat, email, web, and SMS. Customer service agents get intelligent content routed to their screens in near real time through a few clicks, additionally supported by a keyword-based search engine so that they don’t have to scramble for information to help customers. Built-in self-service capabilities further improve the customer experience, while integration with Moderna’s customer relationship management system unlocks a 360-degree view of the customer. “Everything is integrated, modular, and cloud-based to support scaling and agility,” says Arpita Bhowmick, senior director of omnichannel contact center products at Moderna. “What’s unique is that the platform can scale to serve the entire gamut of business functions while following the compliance guardrails.”

The contact center currently runs in four regions to comply with local compliance and regulatory needs, with standardized workflows across markets and lines of business. This added agility helps Moderna transition from geography-based vendors to a centralized cloud approach so that it can fully control its contact center. Using Amazon Connect, which lets companies set up a contact center in minutes, Moderna quickly stood up its simple-to-use cloud contact center and onboarded agents to provide superior customer service at a lower cost. “The platform is vendor-agnostic, allowing us to deploy it across regions seamlessly,” says Salami.

OC3 is intuitive, powered by a humanized, conversational artificial intelligence (AI) engine. Using Amazon Lex, a fully managed AI service with advanced natural language models, Moderna builds chatbots that understand intent, maintain context, and automate simple tasks across languages. It also uses Amazon Polly to deploy high-quality, natural-sounding human voices in dozens of languages. In 2022, Moderna piloted a bot library with different personas for different functions and a single desktop to make agents’ work more accessible.
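To make the chatbot-plus-voice combination concrete, the sketch below hands a typed customer message to an Amazon Lex V2 bot and converts the reply to speech with Amazon Polly using boto3. The bot, alias, and session identifiers are placeholders, not Moderna's OC3 bots.

    # Illustrative sketch only: passing a customer message to a Lex V2 bot and
    # speaking the reply with Polly. All identifiers are placeholders.
    import boto3

    lex = boto3.client("lexv2-runtime", region_name="us-east-1")
    polly = boto3.client("polly", region_name="us-east-1")

    # Send one utterance; Lex tracks intent and context per sessionId.
    lex_response = lex.recognize_text(
        botId="ABCDEFGHIJ",          # placeholder bot ID
        botAliasId="TSTALIASID",     # placeholder alias ID
        localeId="en_US",
        sessionId="customer-session-42",
        text="I'd like to check the status of my inquiry.",
    )
    reply = lex_response["messages"][0]["content"]

    # Convert the bot's text reply to natural-sounding speech.
    speech = polly.synthesize_speech(
        Text=reply,
        OutputFormat="mp3",
        VoiceId="Joanna",
    )
    with open("reply.mp3", "wb") as f:
        f.write(speech["AudioStream"].read())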
Moderna’s choice to use AWS to build OC3 was driven by a shared culture of innovation, iteration, and improvement. “AWS was a natural fit from a technology standpoint. Both Moderna and AWS are digital-first and share a mindset of delivering data-driven value for external stakeholders, from patients to governments to healthcare providers,” says Salami. “Our relationship with AWS is 10 years strong and spans the company from genomics to manufacturing. It’s more than technology; it’s about art-of-the-possible thinking.”

Because of a global strategic partnership between AWS and Salesforce, Moderna’s engineers can innovate faster with prebuilt applications. And using Amazon DynamoDB, a fast, flexible NoSQL database service, Moderna delivers its apps with nearly unlimited throughput, storage, and replication. “Our aim is to deliver seamless, integrated, personalized experiences with the agility to match the changing needs of patients and the broader system,” says Bhowmick. “That means we have no silos, and we build dynamic cross-solution systems.” The ecosystem is connected to various downstream systems for adverse-event reporting and the triaging of quality cases. “AWS shares Moderna’s DNA,” says Bhowmick. “Using AWS, we built an operationally efficient solution while providing the best experience for our customers and patients, and we can scale with agility and extensibility. AWS collaboration has been fundamental to this entire journey.”

Outcome | Personalizing Healthcare Through Digitalization and Innovation
Moderna is currently piloting several new projects to better serve its customers with an integrated global experience, such as bot libraries. In addition, it is working to make agents’ work easier through simplified, standardized user interfaces and workflows, and it is exploring different models for commercialization by incorporating best practices from other industries, such as fintech. This amalgamation of science and technology is driving Moderna’s progress toward personalized medicine, so that patients can get the right information, the right access, and the right therapy at the right time.

The growing scope of Moderna’s work using AWS is based on successful past collaborations. “AWS has deep cross-industry expertise, which helps us be future ready, innovate continuously, and scale with agility,” Salami says. “AWS continues to disrupt itself and be a leader, and as AWS learns, we learn.”"
Modernizing FINRA Data Collection with Amazon DocumentDB _ FINRA Case Study _ AWS.txt,"Modernizing FINRA Data Collection with Amazon DocumentDB

Learn how FINRA, in the financial services industry, reduced development times and ongoing maintenance costs by using Amazon DocumentDB (with MongoDB compatibility) for its data collection framework.

About FINRA
FINRA works under the supervision of the US Securities and Exchange Commission to write and enforce the rules governing brokerage firms that do business with the public in the United States. FINRA examines firms for compliance, fosters market transparency, and educates investors.

Overview
The Financial Industry Regulatory Authority (FINRA) wanted to improve data collection and data usability by switching from XML to JSON format across its entire data collection framework. FINRA collects data from several thousand providers, such as investment advisers and stock exchanges, and it tracks, aggregates, and analyzes market events to protect investors, making data usability critical. To improve the accuracy, reliability, and consistency of the information it collects and disseminates, FINRA built its solution on Amazon Web Services (AWS). The organization accelerated development time, reduced ongoing maintenance costs, and strengthened data security.

Opportunity | Improving Query and Indexing Performance for Regulatory Documents Using Amazon DocumentDB
FINRA is a not-for-profit organization that writes and enforces the rules governing brokers and broker-dealer firms in the United States; its overarching goal is to protect investors and safeguard market integrity, and it chose to build on AWS to fulfill this mission. The organization needs efficient data collection that is accurate and consistent. FINRA’s legacy database solution for data collection was a relational database that stored data in XML format, and the organization decided to shift to JSON format, improving query and indexing performance for regulatory documents while reducing storage space.

FINRA wanted to reduce time to market, the development time required to build new regulatory filings, and the time to migrate existing files to JSON format. FINRA considered alternative database solutions and selected Amazon DocumentDB (with MongoDB compatibility), a fully managed native JSON database designed for scaling enterprise workloads, which the organization found to be a good fit for its use case. The organization has been using AWS since 2013 and began working on proofs of concept for Amazon DocumentDB in 2019. FINRA migrated to Amazon DocumentDB in early 2020 and delivered Form U4 (Uniform Application for Securities Industry Registration or Transfer), used to register broker-dealers and investment advisers, in October 2020.
Solution | Shortening Development Cycles and Achieving 50% Cost Savings Using AWS
With the new solution, no translation is needed between code and storage. Because Amazon DocumentDB natively stores data in JSON, it is simpler for FINRA to query and index data, reducing development cycles by 50 percent and extending the usability of data by working seamlessly with other systems that use JSON. This reduction in development time helps FINRA spend more time on innovation. “We no longer need to create one data model for the backend and another for the API layer,” says Mohammed Elghoul, senior principal architect of regulatory operations and registration platforms technology at FINRA. “We can take advantage of the development time that we’re saving to be more innovative and focus on the real business problems that we are solving.”

The data that FINRA ingests must be secure. Amazon DocumentDB was an effective choice because it integrates with other AWS services used to deliver strict network isolation, such as Amazon Virtual Private Cloud (Amazon VPC), used to define and launch AWS resources in a logically isolated virtual network. All data is encrypted at rest using AWS Key Management Service (AWS KMS), used to create and control keys to encrypt or digitally sign data, and encryption in transit is provided with Transport Layer Security. Using Amazon DocumentDB, FINRA can automatically monitor and back up data to Amazon Simple Storage Service (Amazon S3), object storage built to store and retrieve any amount of data from anywhere.

The migration to Amazon DocumentDB also simplified the management of data versioning. Because filings and industry needs evolve over time, it is critical for FINRA to support and adapt to these changes. Using its legacy relational database, FINRA would have had to track changes to its data with complex logic; Amazon DocumentDB automatically publishes change events.
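Those change events are exposed through the MongoDB-compatible change streams API. The pymongo sketch below shows one way a downstream consumer might read them, assuming change streams have been enabled for the collection (Amazon DocumentDB requires enabling them explicitly, for example with the modifyChangeStreams admin command); the hostnames, credentials, and collection names are placeholders.

    # Sketch of consuming Amazon DocumentDB change events with pymongo.
    # Assumes change streams are enabled for the collection. All identifiers
    # are placeholders.
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://user:password@docdb-cluster.cluster-xyz.us-east-1.docdb.amazonaws.com:27017",
        tls=True,
        tlsCAFile="global-bundle.pem",  # Amazon DocumentDB CA bundle
        replicaSet="rs0",
        retryWrites=False,  # DocumentDB does not support retryable writes
    )

    filings = client["regulatory"]["filings"]

    # Stream inserts and updates as they happen, e.g., to drive versioning.
    with filings.watch(full_document="updateLookup") as stream:
        for change in stream:
            print(change["operationType"], change["documentKey"])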
For cost optimization, FINRA uses AWS Graviton2 instances for Amazon DocumentDB, custom built by AWS using 64-bit Arm Neoverse cores to deliver optimal price performance. “We saved over 50 percent month over month by migrating to the new instance type and resizing the Amazon DocumentDB cluster to reduce the number of instances used and to gain better performance,” says Elghoul.

In addition to Amazon DocumentDB, the organization uses Amazon OpenSearch Service, which facilitates interactive log analytics, near-real-time application monitoring, website search, and more, for advanced full-text search across the multiple databases it maintains for different use cases.

As of January 2023, FINRA has collected about 2.5 million filings since the inception of the new framework. With the migration to Amazon DocumentDB, FINRA simplified its data collection applications and decreased development times by reducing the code necessary to map objects to relational tables. “We wanted to avoid getting involved in tweaking services or maintaining code. That’s why we prefer to use fully managed services from AWS,” says Elghoul. Using AWS, FINRA has also simplified the storage process and improved its business across multiple vectors. “We are removing limits and moving faster. If we had to build all the services ourselves, it would have taken years to get where we are,” says Elghoul.

Outcome | Providing Analytics and Investigating Bad Actors Using AWS
Data collection and availability was the first piece of the puzzle for FINRA. Important next goals are making the data gathered in Amazon DocumentDB available for analytics, working alongside AWS to find the right services to help investigators find bad actors in the industry, and continuing to innovate. By achieving these goals, the organization will keep improving on its mission to protect investors through data analysis. “To build products to support the future, we use services built for the future, providing capabilities at a pace our users and stakeholders expect,” says Elghoul.
Benefits: 2.5 million filings collected between October 2020 and January 2023; over 50% operational cost savings month over month for the data collection framework; 50% reduction in development cycles, improving time to market; reduced storage space; simplified data collection applications."
Modernizing Infrastructure to Improve Reliability Using Amazon EC2 with Loacker _ Case Study _ AWS.txt,"Modernizing Infrastructure to Improve Reliability Using Amazon EC2 with Loacker

Learn how Loacker modernized its manufacturing infrastructure using Amazon EC2.

About A. Loacker Spa/AG
A. Loacker Spa/AG is a South Tyrolean company and a leader in the international wafer market, specializing in chocolate confections. Loacker products are manufactured in the heart of the Alps and inspire people in over 100 countries.

Overview
A. Loacker Spa/AG (Loacker) wanted to modernize its infrastructure capacity and scalability so that it could increase the agility, availability, and resiliency of the systems that its manufacturing processes rely on. Loacker has always had a special connection to the mountains, where it creates high-quality wafer and chocolate products. Its Italian and Austrian production plants are surrounded by a natural Alpine landscape, which reinforces its focus on respect for nature and the environment and on using optimal, genuine ingredients. Founded in 1925 as a small pastry shop in Bolzano, Italy, Loacker now sells products in more than 100 countries. The onsite location of Loacker’s hardware and software often led to system access issues because of hardware limitations and, at times, extreme weather. Loacker decided to use Amazon Web Services (AWS) and to migrate the most important piece of its infrastructure, its SAP application, to the cloud. Through this migration, the company increased system reliability while reducing costs.

Opportunity | Reducing Cost and Improving Availability
As a 24/7 production factory, the company depends on an IT infrastructure that is the basis of production. Before its migration to the cloud, Loacker used two sites to host its business resources to provide high availability in case one site failed. However, the remote mountain location and associated extreme weather, especially in winter, could still cause issues with accessing its onsite hardware. Loacker needed to improve the reliability of its systems, and it looked to AWS.
Solution | Improving System Reliability and Reducing Infrastructure Costs by 32%
Even though Loacker was new to the cloud, it migrated its primary SAP application to AWS quickly, cutting infrastructure costs by 32 percent. Loacker first contacted AWS in March 2020, and its new solution went into deployment in June 2021, after a 5-month migration to the cloud. Loacker began its modernization by migrating its SAP application to Amazon Elastic Compute Cloud (Amazon EC2)—which provides secure and resizable compute capacity for virtually any workload—giving it a secure location to store and access its data.

Loacker keeps its SAP disaster recovery environment aligned using AWS DataSync, a secure online service that automates and accelerates moving data between on-premises and AWS storage services. It also uses this service to back up some large onsite file servers that it was unable to back up with its previous software. Additionally, Loacker hosts its business-to-business website using Amazon CloudFront, a content delivery network service built for high performance, security, and developer convenience. Loacker hosts its solutions in the nearby AWS Europe (Milan) Region; using this AWS Region, which has three Availability Zones, Loacker can reliably spread applications across multiple data centers, adding even greater reliability and business continuity and eliminating network latency concerns.
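Keeping the disaster recovery copy aligned with DataSync, as described above, comes down to running task executions on a configured task. The boto3 sketch below triggers one such execution under an assumed task ARN; creating the task and its source and destination locations is presumed to have been done beforehand.

    # Sketch: triggering an AWS DataSync task execution with boto3 to keep a
    # disaster recovery copy aligned. The task ARN is a placeholder.
    import boto3

    datasync = boto3.client("datasync", region_name="eu-south-1")

    response = datasync.start_task_execution(
        TaskArn="arn:aws:datasync:eu-south-1:111122223333:task/task-0abc123",
        OverrideOptions={
            "VerifyMode": "ONLY_FILES_TRANSFERRED",  # verify what was copied
            "OverwriteMode": "ALWAYS",
        },
    )
    print(response["TaskExecutionArn"])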
Loacker chose AWS because of its efficient cloud-based architecture, how well its services interact, and its cost advantages. “AWS provides a very good set of services in terms of availability, stability of services, and documentation,” says Santo Natale, IT infrastructure field manager at Loacker. Loacker considers the reliability and availability of AWS services its most important benefit: the new cloud-based solutions have eliminated availability lapses and the corresponding interruptions in production.

Loacker has also replaced onsite file servers and physical tapes with virtual tape libraries using AWS Storage Gateway, a set of hybrid cloud storage services that provides on-premises access to virtually unlimited cloud storage. The company also uses this service as network file system storage for its Linux machines. In addition, it uses Amazon Simple Storage Service (Amazon S3)—an object storage service offering industry-leading scalability, data availability, security, and performance—to store long-term and SAP backups. With its previous onsite hardware, Loacker experienced situations in which it was unable to retrieve its data, disrupting business continuity. Since Loacker’s migration to the cloud in June 2021, the infrastructure has had zero downtime, with no associated production losses. By migrating its SAP application to AWS, Loacker reduced its infrastructure costs by 32 percent.

Loacker had no experience with AWS or the cloud before its migration, but it considers itself determined, disciplined, and open to new technologies. Because the move from onsite to cloud solutions is a significant change, a successful migration requires transforming the mindset of the entire company. Loacker was fully committed to the cloud transition and invested in training and upskilling its workforce so that its employees could manage the solution directly. “We are a manufacturing company, not a technology company, so adoption of new technologies is a bit challenging,” says Natale. “AWS provided us with a lot of training resources, and one of the reasons we chose AWS was the very high quality of the support.” The expertise that AWS brings to the process helped smooth the transition. “We performed an SAP technology upgrade—from R3 to SAP HANA—and that was a big accomplishment for us. We did not have any delays or issues in the migration of the infrastructure to AWS. Everything was great,” says Manfred Mayr, head of IT organization at Loacker.
"
Money Forward Increases Development Velocity 3x Working with AWS Training and Certification _ Case Study _ AWS.txt,"Money Forward Increases Development Velocity by 3x Working with AWS Training and Certification

When Japanese financial services provider Money Forward used Amazon Web Services (AWS) to rapidly scale for expansion, it realized that success would take more than improving its technological infrastructure. The company also needed to train its employees and increase the number of engineers with AWS expertise. Money Forward has upskilled nearly 200 of its engineers by working with AWS Training and Certification, which helps individuals build and validate their skills to get more out of the cloud. By boosting engineers' knowledge of and confidence in AWS through training, Money Forward has significantly increased the development speed and product release cadence of its services.

Solution | Enhancing the Autonomy of the Application Team

Money Forward worked with AWS Training and Certification to provide training in AWS services for its engineers. The company wanted to increase the number of engineers who could use services like Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service for running and scaling Kubernetes applications in the cloud or on premises. The company viewed the training as an investment in the business as well as in human potential. "If you let employees experience things, you can expect growth from them," says Yosuke Suzuki, general manager of the service infrastructure division at Money Forward. "We wanted our engineers to experience things that lead to growth."

Company operations have improved since the training. Previously, everything from adding middleware and capacity planning for media exposure to scaling up and scaling out had to go through the central infrastructure team; now the operations teams can complete these tasks themselves. This has led to faster product releases and more service offerings for customers. Backend engineers who participated in the training have also been adding and using AWS-managed middleware. Developers who previously took 30 minutes to deploy new features now take only 10 minutes, and the volume of infrastructure changes has accelerated by three times.

Because its in-house engineers had different levels of AWS knowledge, Money Forward chose two AWS Training courses. Developers who were new to AWS took Architecting on AWS, which teaches learners to identify the services and features needed to build resilient, secure, and highly available IT solutions in the AWS Cloud. This introductory training lowered the hurdles to using AWS and helped engineers learn the fundamentals of building IT infrastructure on AWS.
Engineers already familiar with AWS took Running Containers on Amazon EKS, an intermediate course that teaches container management and orchestration for Kubernetes using Amazon EKS, to promote use of the company's in-house infrastructure built on AWS and Amazon EKS. With the help of the AWS training team, the course was customized to cover the tools used with that in-house infrastructure, keeping the training practical.

The demand for the courses exceeded the company's expectations. Between November 2020 and April 2022, 260 engineers took AWS Training and Certification courses, which received a score of 4.9 out of 5.0 in the post-training survey. "We have had engineers with skills and knowledge on AWS but who have not been able to train other engineers systematically. This training became a catalyst to spark a wide range of interest in AWS. The training also gave engineers a common language when using AWS, and our in-house infrastructure has been very effective," says Junya Ogasawara, chief technology officer of Money Forward Home Company, a consumer company within Money Forward.

About Money Forward

Established in 2012, Tokyo-based Money Forward, with its mission of "Money Forward. Move your life forward.", has developed various businesses in the financial technology and software-as-a-service (SaaS) domains for corporations, individuals, and financial institutions. Money Forward provides more than 200,000 billing companies with services, including Money Forward Cloud, which offers SaaS solutions for back-office optimization in areas such as accounting and finance, personnel, and legal affairs. The company also provides more than 12.8 million users with asset-management services, such as Money Forward ME, to solve personal money issues. "We wish to deliver even more and greater value to our users," says Yosuke Tsuji, CEO of Money Forward. "We have achieved only 1 percent of our vision."

Opportunity | Solving a Bottleneck to Spur Business Growth

Money Forward soon faced challenges amid this rapid growth and active use of AWS. The expansion stretched Money Forward's central infrastructure team too thin, and the company knew this bottleneck could slow its business growth. Money Forward therefore developed a framework to speed up service improvement by allowing the application development teams to build and operate infrastructure and provide services autonomously. The goal was to help service teams manage their own infrastructure, which meant scaling the organization and the business. For this culture to work at Money Forward, its developers and systems engineers, who had different levels of knowledge about AWS and Kubernetes at the time, needed upskilling and an in-depth understanding of AWS services.
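As a purely illustrative example of the kind of task an upskilled application team can now perform on its own, the following sketch creates a small deployment on an Amazon EKS cluster using the official Kubernetes Python client. The cluster access, names, replica count, and container image are invented, not Money Forward's actual configuration.

    from kubernetes import client, config

    # Assumes kubeconfig already points at the team's EKS cluster.
    config.load_kube_config()

    app_labels = {"app": "billing-api-example"}

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="billing-api-example"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels=app_labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=app_labels),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="api",
                            image="123456789012.dkr.ecr.ap-northeast-1.amazonaws.com/billing-api:1.0",
                        )
                    ]
                ),
            ),
        ),
    )

    # The application team rolls out its own service, no central team required.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)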
In Money Forward's early days, a central infrastructure team handled building and operating the infrastructure for all products in an on-premises environment. With the growth of its existing services and the expansion of new ones, the company increasingly needed to respond more quickly to users' needs. To make its infrastructure more robust and scalable, Money Forward started using AWS. It began building new services on AWS in 2017 and moved existing on-premises products, such as Money Forward Cloud Payroll and Money Forward ME, to the AWS Cloud in 2020 and 2021, respectively. Money Forward now has services with more than 12.8 million users operating on AWS.

Outcome | Improving Company Culture and Profits by Optimizing Team Structure

AWS Training has boosted the use of AWS within the company, and application developers have been able to take over system-setting authority from the infrastructure engineers. "If we were still in a traditional on-premises environment where only infrastructure engineers could touch AWS, the current growth of Money Forward might have been slower," says Ogasawara. "But now, even with the in-house infrastructure, the percentage of application teams that can use and operate by themselves is increasing, which has improved the release speed and productivity of our services."

Money Forward hopes to further optimize its newly established DevOps system. The application development teams will continue to be involved in operations, and software engineers will continue to use the AWS infrastructure. Money Forward believes it is essential to help more engineers learn AWS and to continue to release stable services faster. As a result of AWS Training and Certification, the company has improved not only its service to customers but also its culture. As Suzuki says, "Our message to potential hires is that you'll be able to grow as an engineer and individual employee by joining Money Forward."

Benefits of AWS
- Accelerated the volume of infrastructure changes by 3x
- Reduced average product deployment time from 30 to 10 minutes
- Bolstered developers' confidence to use AWS services more proactively
- Reduced bottlenecks in company infrastructure
"
myposter Case Study.txt,"myposter Scales, Modernizes, and Future-Proofs its Business Using AWS

Migrating to the cloud has helped myposter innovate faster and also launch a second business, Kartenliebe, which makes personalized stationery and cards for weddings, birthdays, religious festivals, and other occasions. AWS designed and implemented solutions for myposter's storage and compute needs using Amazon Simple Storage Service (Amazon S3), an object storage service offering scalability, data availability, security, and performance, and Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for myposter's fluctuating and varied workloads.
This move included the high levels of automation needed for customizing products in myposter's web shop. The IT team now spends less time on maintenance, so it can focus on delivering value through service innovation. This has helped myposter launch and support Kartenliebe by deploying another Kubernetes cluster. "Kubernetes has the best and widest range of tools and libraries," says Max Tafelmayer, chief technology officer (CTO) at myposter, "which means the team can program in any language and pick the most appropriate features of each package to develop the business."

Adopting AWS has also been beneficial when hiring talent. "We're a fully digital business, with ambitions to grow," says Tafelmayer. "The people we want to attract expect to work with the latest tools, so they have the opportunity to learn and grow themselves." No longer tied to rigid and time-consuming processes, myposter now has the freedom to do what it wants. "AWS offers a great menu of different services to choose from," says Tafelmayer, "and the opportunity for our people to develop and learn along the way."

About myposter

myposter is an ecommerce and photo production business with 100 employees, based near Munich, Germany. Customers upload photos to create personalized photobooks, greeting cards, calendars, posters, and other printed items, and myposter also licenses its web shop platform to third parties.

More Flexible Storage and Improved Agility

myposter decided to migrate to AWS in 2018, after the company had tried to create its own storage system using an open-source solution. Not satisfied with that storage environment's reliability and stability, myposter turned to AWS to modernize its setup. "It was quickly clear to us that AWS was the best fit for our business in terms of storage, and for wider operations too," says Tafelmayer. "It had all the services and flexibility we could ever need." Using Amazon RDS for MySQL, part of Amazon Relational Database Service (Amazon RDS), myposter can also easily set up, operate, and scale relational databases in the cloud.

With the new environment, myposter has resolved capacity issues during times of peak demand and reduced cost per customer order by 5 percent. In addition, replacing the open-source storage cluster with Amazon S3 has provided a more stable and reliable environment and freed up time for myposter's IT team to focus on product development.
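As a purely illustrative sketch of this kind of elasticity, a seasonal peak can be anticipated with a scheduled scaling action on an Amazon EC2 Auto Scaling group via boto3; the group name, date, and fleet sizes below are invented, not myposter's real setup.

    import boto3
    from datetime import datetime, timezone

    autoscaling = boto3.client("autoscaling", region_name="eu-central-1")

    # Grow the fleet ahead of the Black Friday peak; scaling policies or a
    # second scheduled action can shrink it again once demand subsides.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="webshop-render-workers",
        ScheduledActionName="black-friday-peak",
        StartTime=datetime(2022, 11, 25, 6, 0, tzinfo=timezone.utc),
        MinSize=10,
        MaxSize=50,
        DesiredCapacity=40,  # roughly 4x the off-peak fleet for a ~400% spike
    )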
Scaling Automatically to Meet Demand

myposter turned to AWS to create an infrastructure that could scale in times of high demand, such as during Black Friday sales promotions and the run-up to Christmas, when the strain on its systems could be up to 400 percent higher than during other periods of the year. By its nature, myposter's business experiences uneven demand, with spikes and troughs of activity throughout the year and at different times of the week. Managing the company's previous on-premises infrastructure represented a significant operational overhead, and myposter was concerned about the availability of customer images stored on its servers.

myposter operates in the competitive market of digital image editing and printing, where a fast and efficient service for customers is essential. In addition to its digital printing operations, myposter rents its web shop infrastructure to third parties that require high-end visual processing and production services. To achieve the agility required to offer this service, myposter chose Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service to run and scale Kubernetes applications in the cloud; Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes responsible for scheduling containers, managing application availability, storing cluster data, and other key tasks.

With the previous infrastructure, it took the myposter IT team a week to set up a new server. Now, it takes just 5 minutes to add or remove a server to match fluctuating demand. The company believes that images are safer and more retrievable on AWS using Amazon S3, and issues that used to impede myposter's operations, such as databases going out of sync, simply do not happen anymore.

Benefits of AWS
- Scales to meet up to 400 percent increases in demand
- Server setup now takes 5 minutes instead of a week
- Reduces workload for IT maintenance and monitoring
- Frees the IT team to focus on innovation
- Paves the way for new business launches using AWS
"
N KID Group Case Study Amazon Web Services (AWS).txt,"N KID Group Modernizes Child's Play on AWS

"Our vision is always to have a robust system that can serve our customers in the best way possible. We are expanding fast, and being on the AWS Cloud has given us a lot more flexibility and scalability," comments Do Bui Anh Khoa, chief technology officer of N KID Group.

Adopting a Stress-Free Approach to Deployment

With AWS Elastic Beanstalk, N KID engineers now conduct multiple deployments during the day, using a continuous integration/continuous delivery (CI/CD) approach, to improve functionality. At night, instances are scheduled to scale down, which has cut operational costs by 30 percent. "Developers now have peace of mind, and we are all more relaxed because we can deploy automatically using AWS Elastic Beanstalk with no constraints," Khoa says.
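For illustration only, one deployment step in a CI/CD flow like this might look like the following boto3 sketch, which registers a new application version from a build artifact in Amazon S3 and rolls it out to an Elastic Beanstalk environment. The application name, environment, bucket, and version label are hypothetical.

    import boto3

    eb = boto3.client("elasticbeanstalk", region_name="ap-southeast-1")

    # Register the build artifact produced by the CI stage.
    eb.create_application_version(
        ApplicationName="payment-processing",
        VersionLabel="build-2041",
        SourceBundle={"S3Bucket": "nkid-artifacts-example", "S3Key": "payment/build-2041.zip"},
        Process=True,
    )

    # Roll the new version out; Elastic Beanstalk manages the underlying instances.
    eb.update_environment(
        EnvironmentName="payment-processing-prod",
        VersionLabel="build-2041",
    )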
Prior to running its payment processing and Windows-based workloads on AWS, updates were conducted weekly, and a server restart was performed overnight to avoid affecting N KID customers. "If we had to fix an urgent bug, we could deploy immediately but with a lot of anxiety, because we were afraid of the system going down," Khoa recalls.

Having successfully standardized its digital operations on AWS, N KID began working toward its next goal: providing a consistent customer experience. Payment at tiNiWorld indoor playgrounds is mostly digital, with visitors using the N KID mobile app or branded Near Field Communication (NFC) cards. With the group's previous on-premises system, however, crashes frequently occurred during school holidays and on weekends when traffic spiked. Customers were then limited to cash payments, and employees had to record transactions manually, which carried a high risk of errors and caused dissatisfaction over wait times.

Offloading Tedious Database Maintenance

"With managed services like Amazon RDS for SQL Server, our developers can conduct performance checks on their own, and we can take advantage of native features on Amazon RDS for SQL Server, such as backups and snapshots, to upgrade our database without a dedicated DBA. Furthermore, we have reduced risk to better serve our growing customer base," Khoa says. Since migrating to AWS, N KID has doubled the number of tiNiWorld play centers from 30 to 60 and its branded retail outlets from 10 to 42.

About N KID Group

For more than 10 years, N KID Group has been operating indoor playgrounds under its flagship brand, tiNiWorld, to give children a space to safely run, play, and explore. In 2016, the group introduced a mobile app and began a digital transformation to enrich its offline experience with online touchpoints. Its renewed vision is to be the top children's platform in Vietnam.

AWS Services Used

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes. AWS Step Functions is a serverless function orchestrator that makes it easy to sequence AWS Lambda functions and multiple AWS services into business-critical applications. Amazon RDS for SQL Server makes it easy to set up, operate, and scale deployments of SQL Server, a relational database management system developed by Microsoft, in the cloud.

The Innovation Journey Continues

N KID continues to explore new opportunities for innovation on the cloud with Renova and AWS. Currently, the group is working on turning all of its services into Docker containers for a fully container-based architecture.
Additionally, N KID plans to reduce its Windows workloads from 40 percent to 20 percent of the total to better support Kubernetes integration and its CI/CD approach.

Top Online and Offline Platform for Kids

Vietnam and its children are in serious need of more green spaces in which to run and move freely. In Ho Chi Minh City, public parks cover only 0.55 square meters per citizen, a far cry from neighboring countries such as Singapore, where 8 square meters of land per citizen are reserved for parks and trees. N KID Group was founded in 2009 in Vietnam with a vision to become a leader in children's entertainment; today it operates 60 tiNiWorld play centers and 42 tiNiStore retail outlets and provides digital engagement platforms for kids and parents.

Auto Scaling to Prevent System Downtime

The root of the earlier downtime problems was a massive server, used prior to AWS, on which N KID's lead developer manually deployed resources. When that lead developer left the company, leaving no documentation in his wake, N KID took the opportunity to automate. The company applied AWS Elastic Beanstalk to its transaction processing application, a .NET workload on Windows that is key to avoiding service interruption on the ground, or, in N KID's case, on a bouncy rubber mat. Since implementing AWS Elastic Beanstalk, the group has not experienced any major instances of downtime, much to the relief of its customer service employees.

Picking the Right Cloud Partner

More and more businesses are moving away from reliance on traditional hosting centers, and N KID did not want to be left behind. The group recognized the benefits of cloud computing and the automation opportunities afforded to businesses on the cloud. "For us, it was never about whether or not we would move to the cloud, but rather when we would move," explains Khoa. The group wanted to embark on its cloud journey with an experienced consultant, which it found in Renova Cloud, an Amazon Web Services (AWS) Advanced Consulting Partner. The first step in N KID's cloud journey was standardizing operations across its digital platforms: the group engaged Renova in 2017 to start migrating non-critical workloads, such as its website, to the AWS Cloud. Motivated by the positive experience, N KID decided to go all-in on AWS.

Shorter Time-to-Market with Serverless

As the next step in the group's modernization, N KID implemented serverless features executed with AWS Lambda code to automate scheduled tasks and break down its monolithic architecture. This has resulted in tighter integration with distributors and retail partners through shared APIs and container-based services orchestrated with Kubernetes, and the backend design work for regular promotions can now be shared with N KID's partners.
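A scheduled serverless task of the kind described above could be as simple as the following hypothetical AWS Lambda handler, invoked by an Amazon EventBridge schedule instead of by a job on an always-on server; the DynamoDB table and promotion model are invented for the example.

    import boto3
    from boto3.dynamodb.conditions import Attr

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("promotions-example")

    def handler(event, context):
        """Runs on an EventBridge schedule (e.g., daily) to activate pending promotions."""
        response = table.scan(FilterExpression=Attr("status").eq("pending"))
        for item in response["Items"]:
            table.update_item(
                Key={"promo_id": item["promo_id"]},
                UpdateExpression="SET #s = :active",
                ExpressionAttributeNames={"#s": "status"},
                ExpressionAttributeValues={":active": "active"},
            )
        return {"activated": len(response["Items"])}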
With database management offloaded to AWS and serverless architecture in place, N KID engineers have more time to write quality code and build new features that bridge the online and offline N KID experience. One example of a new feature N KID is launching is crossover holiday promotions, where tiNiWorld visitors can enjoy discounts on the group's ecommerce sites. "Being on AWS allows us to execute new ideas quickly, and marketing promotions can be developed and executed three times faster as a result," Khoa says.

After migrating its website and customer-facing assets to the cloud, N KID had begun modernizing its backend, with its databases first in line for an upgrade. N KID was prompted to adopt Amazon Relational Database Service (Amazon RDS) for SQL Server by the departure of one of its database administrators (DBAs), and the switch to a managed database service has further reduced maintenance overhead.

N KID is also using AWS Step Functions to visualize workflows and better pinpoint the source of any issues that arise during promotions. In one recent example, N KID sent coupon codes to members' phones and emails but noticed that several members didn't receive them; engineers were able to easily trace and repair the errors.

Khoa says, "Renova played, and continues to play, an important role in N KID's journey of modernizing our technology stack to provide a robust, ever-evolving experience for our customers."

Benefits of AWS
- Cuts operational costs by 30%
- Develops and executes promotions 3 times faster
- Auto scales during peak periods to prevent system crashes
- Saves on headcount by offloading database administration
- Doubled the number of play centers and retail outlets in 2 years
"
Naranja X Modernizes Financial Services More Efficiently with SaaS Solutions in AWS Marketplace _ Naranja X Case Study _ AWS.txt,"Naranja X Modernizes Financial Services More Efficiently with SaaS Solutions in AWS Marketplace

Providing excellent service to millions of customers across more than 180 bank branches and a mobile app does not happen in a single transaction, especially as Naranja X continues its journey toward becoming a digital banking ecosystem. IT teams rely on quick, cloud-native improvements to support a seamless, cross-channel customer experience and to optimize evolving business processes.

After validating a proof of concept (POC) for a cloud solution, IT teams have the information they need to build a business case for senior leadership and to acquire approval and budget for implementation. For example, when internal business requests for data modeling were taking up too much development time, the Naranja X IT team looked for a way to let different departments access analytics centrally and complete data modeling faster. Searching AWS Marketplace led the team to a free trial of Matillion Data Productivity Cloud, an enterprise tool that enables codeless data transformation. Naranja X business teams could quickly configure the solution via a web-based user interface, test it, and provide feedback. When the solution successfully helped shrink time to insight from weeks to days, Naranja X didn't have to delay for further contract negotiations: its teams just continued using the SaaS solution while the company paid monthly in AWS Marketplace, knowing it could easily reassess and revise the agreement as needed in the future. Before, many different steps were required to assess solutions for security, functionality, UX, and integration capability with existing tools; in AWS Marketplace, teams can simply choose a solution, opt for a SaaS free trial, and keep moving.
Discovering new software is so efficient in AWS Marketplace that Naranja X IT leaders can hear about a new ISV cloud solution at an AWS Summit, look it up in AWS Marketplace, message team members a link, and even test-drive it immediately to understand the expected return on investment. "There are thousands of amazing cloud solutions from various vendors in AWS Marketplace," says Pablo Adrián Mlynkiewicz, chief data and analytics officer at Naranja X. "But it's the confidence it gives me that keeps me coming back."

About Naranja X

Naranja X is an Argentine FinTech enterprise modernizing banking and credit card services for nearly 5 million customers and working to make people's financial lives simpler. In addition to issuing over 10 million credit cards, the company has become a platform for access to financial products and services, creating opportunities for millions of people left out of the traditional financial system. Naranja X migrated to Amazon Web Services (AWS) to connect customers with more convenient products, services, and benefits that support financial health. AWS Marketplace is a curated digital catalog enabling customers to quickly find, test, buy, deploy, and manage the third-party software, data, and professional services necessary to build solutions and run their businesses; procurement teams use it to accelerate innovation and deploy solutions rapidly and securely while reducing total cost of ownership and improving operational oversight.

Opportunity | Adding Confidence with Frictionless Deployment

"Previously, it could take months to set up suppliers in our system and conduct POCs. With AWS Marketplace SaaS free trials and flexible pricing options, our teams can test three or four ISV SaaS solutions in days and decide which is the best fit for our needs. This makes the overall procurement process so much faster," says Cristian Deferrari, head of infrastructure at Naranja X. But Naranja X teams can't always do it alone. Working with independent software vendors (ISVs) to deploy ready-made software-as-a-service (SaaS) solutions can help Naranja X developers build and solve at speed, but managers must also protect against accelerating costs and security risks.

Solution | Subtracting Complexity with Consolidated Billing

Established pricing models in AWS Marketplace don't have to disrupt or limit existing business relationships. Private offers help Naranja X continue existing relationships with preferred vendors, with the added convenience of procurement in AWS Marketplace. And procurement strategies can always evolve: for example, when its pay-as-you-go model reached maturity with Snowflake, which helps organizations mobilize data with the Data Cloud, Naranja X reached out directly to its trusted advisor SEIDOR, which offered contract conditions for procuring Snowflake services in AWS Marketplace that served the company better at the time.

Giving teams free rein to try different SaaS solutions simultaneously may sound like an invoicing headache, but for Naranja X finance teams, consolidated billing in AWS Marketplace quiets the noise. All AWS Marketplace purchases and agreements can be managed in Naranja X's AWS account, where managers can quickly reference current spending and future commitments.
When leaders at Naranja X procure in AWS Marketplace, they have access to thousands of third-party cloud solutions that can be deployed almost instantly with little to no upfront commitment, supported by powerful cost-control tools. And Naranja X doesn't have to leave any preferred ISVs behind. Previously, when Naranja X needed a cloud solution, it contacted vendors one by one. Those vendors could provide ample documentation and demos, but IT leaders couldn't say firsthand whether the solutions would work well in their own environment. AWS Marketplace SaaS free trials let Naranja X teams get hands-on experience with ISV cloud solutions and create POCs before procuring them, without compromising on security.

Flexible payment methods are another important benefit for Naranja X. The pay-as-you-go option can help launch shorter-term projects faster. For example, when Naranja X needed a Palo Alto Networks firewall solution to support data migration between AWS Regions, the procurement process didn't slow the team down: Naranja X obtained licenses almost immediately and realized the benefits of the solution an estimated 20 percent faster than in previous procurements, which required emailing back and forth to develop and finalize proposals while developers waited for a green light.

Centralizing vendor management in AWS Marketplace also helps Naranja X finance teams conduct better forecasting, because what, how, and when to pay is within the company's control. Before using AWS Marketplace, Naranja X consistently spent around 50 percent of vendor onboarding time discussing how Argentina's unpredictable currency exchange rates might dramatically change the dynamics of a contract with an ISV. AWS Marketplace offers a more consistent process for billing and invoicing, so Naranja X can dedicate more time to agile innovation instead of lengthy negotiation.

Outcome | Delivering a More Data-Centric Company Culture

Procuring SaaS solutions in AWS Marketplace has not only helped Naranja X get through the procurement process an estimated 20 percent faster but has also democratized access to data and delivered other efficiencies across the company. Where it once took weeks to assemble the right team and agree on business priorities to shape data modeling, business teams are now using Matillion Data Productivity Cloud to create data models themselves within 3 days, without asking IT teams for help. Such efficiencies contribute to building a stronger data culture at Naranja X, meaning more team members are equipped to make data-driven decisions. And as more teams use these solutions, time to value shrinks and the possibilities for new customer solutions grow.

Benefits of AWS
- An estimated 20% faster procurement and access to benefits
- Improved invoice management
- Opportunity to test and validate a POC before procuring services
"
NBCUniversal Case Study _ Advertising _ AWS.txt,"NBCU Uses AWS to Build First-Party Data Solution within Its One Platform Technology Stack

One Platform relies on AWS ephemeral compute solutions such as Amazon EMR, the industry-leading cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning using open-source frameworks such as Apache Spark, Apache Hive, and Presto.
Industry Challenge

Within an ever-changing industry landscape, NBCUniversal (NBCU) sought to make it easier for advertisers to reach target audiences across television, streaming services, and mobile apps. It needed to unite siloed linear and digital media planning and monetization while maintaining the privacy of viewers' data. NBCU also needed a more flexible and scalable solution for managing large volumes of data for holistic forecasting.

NBCUniversal's Solution

NBCU used Amazon Web Services (AWS) to build a first-party data solution within One Platform to help manage and process large volumes of data effectively and synthesize it for forecasting across linear and digital. Using One Platform, media buyers can plan effectively regardless of whether viewers are consuming content through a streaming service or traditional television, and NBCU automates buying through demand-side platforms and through APIs that democratize access.

To manage its big data workloads more effectively, NBCU migrated 4 PB of data into its data lake on AWS. "We've worked with the AWS team to reformat these data pipelining activities for this big data and synthesize it into our forecasting across linear and digital to help with our planning across these holistic media plans," says Jeff Pinard, NBCU's senior vice president of ad technology. NBCU uses machine learning to tailor its 200 jobs to 8,000 servers, with cost-efficiency models built around Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload. And to further efficiency in a pipeline that takes in 15.4 TB of interactive reporting data in near real time, NBCU uses AWS Lambda, a serverless, event-driven compute service that lets companies run code for virtually any type of application or backend service without provisioning or managing servers.
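To illustrate the shape of such an event-driven step (this is not NBCU's actual code), the following Lambda handler sketch reacts to each new reporting object landing in Amazon S3, normalizes its records, and writes a curated copy for downstream analytics; the bucket layout and record schema are invented.

    import gzip
    import json
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        """Triggered by S3 object-created events for new reporting files."""
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            rows = [json.loads(line) for line in gzip.decompress(body).splitlines()]
            curated = [{"ts": r["timestamp"], "impressions": int(r["imp"])} for r in rows]
            s3.put_object(
                Bucket=bucket,
                Key="curated/" + key.rsplit("/", 1)[-1] + ".json",
                Body=json.dumps(curated).encode(),
            )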
Benefits of Using AWS

The migration to AWS made NBCU's data workloads more flexible, efficient, and cost effective; NBCU estimates that its migration to AWS will save more than $35 million over 10 years, a 40 percent reduction. The company was also able to pivot its analysis of viewer patterns from outdated rating methods to near-real-time insights from big data.

Using its data solution built on AWS, NBCU has become more agile and reactive to business needs. Data volumes surrounding the Olympic Games, for example, increased to 7 GB for the Tokyo games in 2021; with the AWS data infrastructure in place, NBCU was able to scale and meet the needs of its customers without experiencing latency. NBCU also achieved near-real-time reporting and delivery analysis, helping the business manage or redirect buying patterns quickly and analyze its delivery day over day. "We would have incurred a huge cost to be able to get the server power to do that in any on-premises environment," says Pinard. "We'll do it on AWS in a very cost-effective way and provide the business near-real-time data that is exponentially increasing year over year."

About NBCU

With content reaching a billion people monthly, NBCUniversal is a global media company that includes broadcast and streaming channels, cable television, theme parks, and a movie studio. NBCUniversal's One Platform is an industry first, combining years of world-class digital and linear expertise with the benefits of big tech: first-party data, precision targeting, automated buying, and outcome-based measurement.
"
NeuroPro Case Study.txt,"NeuroPro is Changing the Way Brain-Related Diseases Are Diagnosed Using AWS

NeuroPro is a Swiss-based digital health solutions company that aims to solve data challenges in healthcare around diagnosing and treating brain-related diseases. Using AWS, it has created the first cloud-based collaboration platform for remote diagnostics of complex neurological cases: its VMLpro platform provides physicians with access to the data and tools they need to diagnose patients quickly and accurately.

NeuroPro aims to improve treatments and outcomes for patients with brain diseases by reducing misdiagnoses. One reason for misdiagnoses is siloed, static, and incomplete data sources, which make it difficult for doctors to access the information they need to make quick and accurate diagnoses. Due to a skills shortage, the level of specialist knowledge needed to draw conclusions from the data is also not always a given. VMLpro, NeuroPro's real-time collaboration and remote diagnostics platform, uses Amazon Web Services (AWS) to process large volumes of patient data in real time and facilitates collaboration among healthcare professionals, who can connect via the platform to get a second opinion when they need it most. It helps any healthcare provider quickly and easily access cloud-based resources to get a full picture of a patient's brain function.
Diagnosing Brain Disease in Hours, Not Days

Running diagnostic algorithms on Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for any workload, increases the speed and accuracy of diagnoses. Hospitals have reported that the VMLpro platform has reduced the time it takes to share medical data and make a diagnosis from weeks to minutes.

When doctors need a second opinion, VMLpro supports collaboration among physicians located across the globe. They can quickly and easily collaborate, accessing multiple data sources and files, to come to a solution. "With VMLpro, a doctor in Switzerland can quickly communicate with an expert in Australia, who is immediately able to see a live picture of a patient's journey," says Dr. Jamil El-Imad, chief scientist at NeuroPro. "That means physicians have extra support, confidence, and guidance in their decision making."

Speeding Up Time to Innovation

Using AWS, the NeuroPro team has the time to focus on customer experience and innovation, because it manages infrastructure maintenance with just one full-time employee instead of the four it would take without AWS managed services. The team saves time and effort by automating tasks such as backup and recovery, queue management, lifecycle management, and system monitoring. "Maintaining our infrastructure is like magic," says Dr. Abbas Badran, head of development at NeuroPro. "Without AWS, it would take at least four people, but we can do it with one full-time member of staff. This means we have more resources to engage with our customers and make sure our platform is intuitive for them." NeuroPro is confident that AWS will continue to support its growth and innovation as it looks to offer its solutions to more physicians across the world. "We want to help healthcare providers deliver the best possible treatment to brain disease patients, and those with other complex medical conditions," says Dr. El-Imad. "Using AWS, we have the flexibility and power to achieve this."

Securing Medical Data Using AWS

Using AWS, the company encrypts electroencephalogram (EEG), magnetic resonance imaging (MRI), and computed tomography (CT) scan datasets before storing them in the cloud, with AWS Key Management Service (AWS KMS) to create, manage, and control the cryptographic keys. Because encrypting large volumes of data is resource heavy, NeuroPro uses AWS on-demand compute power to perform these tasks quickly and cost efficiently, which means each hospital is not limited by its own resources to protect its data. "We're dealing with medical data, which is highly sensitive, so it's essential that it's secure. It assures our customers that patient data is safe while being shared on the platform," says Dr. Badran. "Additionally, AWS offers high levels of encryption for all stored data."
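The storage-side pattern described here, encrypting sensitive datasets at rest under keys managed in AWS KMS, can be sketched in a few lines of boto3; the bucket, key ARN, and file name are invented placeholders rather than NeuroPro's real resources.

    import boto3

    s3 = boto3.client("s3")

    # Store an EEG recording with server-side encryption under a KMS-managed key.
    with open("eeg_session_001.edf", "rb") as f:
        s3.put_object(
            Bucket="vmlpro-datasets-example",
            Key="hospital-a/eeg/eeg_session_001.edf",
            Body=f,
            ServerSideEncryption="aws:kms",
            SSEKMSKeyId="arn:aws:kms:eu-central-1:111122223333:key/example-key-id",
        )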
Close collaboration with specialized partners working remotely means that referrals can be made more quickly and in a more targeted manner. In many cases, reliable diagnosis and prompt treatment are crucial, and the effective sharing of findings enables the timely initiation of necessary treatments. This means that any clinic or hospital, big or small, has remote access to the information it needs to provide correct diagnoses for patients faster, which can ultimately improve care, outcomes, and doctor expertise.

Swiss-based NeuroPro aims to improve the diagnosis of brain diseases and other medical conditions with a digital solution that provides simple access to the tools and data that physicians need. Aimed at busy physicians who don't have the time to learn complex new systems, the company's solutions must be easy to learn and use. This was the premise for NeuroPro's Virtual Mobile Laboratory for Professionals (VMLpro).

To diagnose brain diseases, experts must analyze large volumes of patient health data extracted from a variety of hospital monitoring equipment, much of it in real time. "We're talking about terabytes of data," says Dr. El-Imad. "It's simply not possible for some organizations to deal with data on this scale without our platform." Using the content delivery network Amazon CloudFront, patient videos are transcoded and served over secure channels to physicians, who can use them to help diagnose patients. "Running on AWS, our platform has the flexibility to support any data format," says Dr. Teresa Sollfrank, chief product officer at NeuroPro. "Doctors can upload the relevant data and set permissions to streamline and speed up collaboration right from their desks."

For patients with brain diseases such as epilepsy and multiple sclerosis, misdiagnosis can mean years of taking the wrong medication and can lead to other serious health problems. This is a sad reality for many: some researchers suggest that one in three epilepsy patients is misdiagnosed.

Securing and protecting data is a top priority for NeuroPro, which must meet the highest Advanced Encryption Standard (AES) specifications and comply with regional regulations such as the EU General Data Protection Regulation (GDPR). With data stored securely on AWS, NeuroPro can be confident about compliance with data regulations in different territories; it can share data with other institutions around the world with confidence.

Benefits of AWS
- Reduces healthcare providers' diagnosis times for brain disease from 6 days to 6 hours
- Secures patient data to enable global health expert collaboration with peace of mind
- Ensures compliance with regulations in different territories
- Manages infrastructure maintenance with one full-time staff member instead of four
The platform also brings together other necessary diagnostic resources from caregivers at all stages of the patient journey, including test results, clinician notes, and video files. On VMLpro, even large files, such as videos of patient symptoms, are easy to access and share from any location.
"
NodeReal case study.txt,"NodeReal Provides Scalable Infrastructure Solutions with Strong Price Performance for Web3 Development

About NodeReal

NodeReal is a blockchain infrastructure and services provider that offers one-stop blockchain infrastructure services, including full-fledged node services, blockchain as a service, and blockchain application tools and application programming interfaces (APIs). Founded in 2021, NodeReal onboarded around 10,000 developers within its first 12 months, including projects such as BNB Chain, Aptos, CoinMarketCap, CertiK, Galxe, Trust Wallet, and ApeSwap.

Creating a Conducive Environment for Web3 Development

NodeReal uses AWS Graviton2-based Amazon Elastic Compute Cloud (Amazon EC2) instances, which are designed by AWS to deliver the best price performance for cloud workloads, together with AWS managed services. Building on the AWS Cloud also saved money and resources, because NodeReal did not have to procure physical servers and storage. It deploys Amazon Aurora to automatically scale its database across multiple AWS Regions to support its global customers, and Amazon Elastic Kubernetes Service (Amazon EKS) to manage its container-based applications running on the Kubernetes open-source orchestration system. Combined with AWS Global Accelerator, which improves global application availability and performance, this helps NodeReal maintain consistently low latencies for its customers and end users, so its customers can deliver faster blockchain transactions through more responsive applications.

Helping NodeReal Become a Major Player in the Decentralized Economy

"Thanks to the high-performance global network and cloud services from AWS, NodeReal has achieved its vision 'Make Your Web3 Real' and built the fastest and most reliable blockchain infrastructure for Web3 builders across the world," says Jimmy Zhao, technology solutions director at NodeReal. NodeReal will next introduce a one-stop blockchain platform to help its customers build their own chains, as well as Layer-2 blockchains to support high-speed transactions. The company also aims to build an open, community-driven API marketplace for Web3 developers.
Striking a Balance Between Performance, Stability, and Scalability

NodeReal is fully built and deployed on Amazon Web Services (AWS), which helps it maintain the performance, stability, and scalability of its blockchain infrastructure. The company now handles 700,000 API requests per second from its Web3 customers. Furthermore, it supports about 70 percent of all public Remote Procedure Call (RPC) requests for BNB Chain, a Layer-1 blockchain supporting leading cryptocurrency exchanges and other Web3 applications, making NodeReal the leading blockchain infrastructure provider for Web3 companies on the BNB Chain. By running on AWS, NodeReal provides its customers with a high-performing, stable, and scalable environment for building Web3-based applications, which has helped grow its customer base to over 10,000 worldwide within the first 12 months of its founding in September 2021.

Most of NodeReal's Web3 customers develop decentralized, throughput-intensive applications for end users, such as non-fungible token (NFT) platforms, decentralized finance (DeFi) wallets, and play-to-earn blockchain games (GameFi). One such customer is Trust Wallet, a multi-chain universal crypto wallet with over 5,000,000 weekly active users. To serve these high-throughput requirements, NodeReal built its blockchain infrastructure on the AWS Cloud, which can scale to deliver robust performance and reliability.

Benefits
● 700,000 QPS: the number of queries NodeReal can handle per second
● 26 ms: the average latency achieved by deploying on the AWS Cloud
● 700,000/second: the API call rate the company can scale to support within 30 minutes
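For a sense of what NodeReal's customers build against, the sketch below makes a standard JSON-RPC call of the kind a Web3 application sends to a managed node endpoint. The endpoint URL is invented; because BNB Chain is EVM-compatible, the familiar eth_* methods apply.

    import requests

    ENDPOINT = "https://bsc-mainnet.example-node-endpoint.io/v1/YOUR_API_KEY"  # hypothetical

    payload = {
        "jsonrpc": "2.0",
        "method": "eth_blockNumber",
        "params": [],
        "id": 1,
    }

    resp = requests.post(ENDPOINT, json=payload, timeout=5)
    latest_block = int(resp.json()["result"], 16)  # result is a hex-encoded block number
    print("Latest block:", latest_block)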
Português" Novo Nordisk Uses ML for Computer Vision to Optimize Pharmaceutical Manufacturing on AWS _ Novo Nordisk Case Study _ AWS.txt,"ML models in production Amazon SageMaker helps you build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows Français Español to support other quality-assurance use cases   Novo Nordisk has successfully built an automated pipeline to deploy ML models at scale to different edge devices. The company is turning the cartridge-counting proof of concept into a production-grade solution and will continue to build the proof of concept for its agar plate use case. These solutions will significantly impact Novo Nordisk’s efficiency, improving its time to market and reducing manual labor so that its team can focus on innovation. Automates 日本語 Amazon SageMaker 2023 About Novo Nordisk Contact Sales Opportunity | Using Amazon SageMaker Pipelines to Deploy ML Models at Scale  Get Started 한국어 Novo Nordisk Uses ML for Computer Vision to Optimize Pharmaceutical Manufacturing on AWS time to market Novo Nordisk A/S is a multinational pharmaceutical company based in Denmark. Founded in 1923, the organization makes and markets pharmaceutical products with a focus on diabetes care and hormone therapy. Scales For the past 100 years, Novo Nordisk has developed innovative products to treat chronic diseases like diabetes, endocrine disorders, and rare blood conditions. More than 34 million patients use its diabetes-care products globally, and the company constantly seeks new digital technologies to optimize its processes for the benefit of its customers. It strives to get medicines to the people who need them at a faster pace and lower price while ensuring compliance. AWS Services Used Improves 中文 (繁體) Bahasa Indonesia Solution | Automating Key Quality-Assurance Tasks with ML and Computer Vision  Deploys ไทย Ρусский Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. عربي On AWS, Novo Nordisk created an automated ML pipeline that covers all the steps involved in the ML development process, from deployment to monitoring, while optimizing for scalability, customization, cost, and traceability. It used Amazon SageMaker Pipelines, the first purpose-built continuous integration and continuous delivery service for ML, to create each specific step in the pipeline and combine them to form a complete, interconnected solution. The pipeline used prelabeled images stored in Amazon Simple Storage Service (Amazon S3)—an industry-leading object storage service. It then resizes, labels, processes, and splits the images into three datasets: training, validation, and testing. Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. 中文 (简体) “Through our engagement with the AWS team, we proved to ourselves and our company that we could take a computer-vision use case, put it into the cloud, and build a working pipeline,” says Kristensen. “And we can do it in a fast and scalable way.” Outcome | Using AWS Services to Streamline the Pharmaceutical Manufacturing Line  Overview Amazon QuickSight powers data-driven organizations with unified business intelligence (BI) at hyperscale. 
After the data is processed, the pipeline passes it to either model training, where a model is trained with predefined parameters, or model tuning, where it is run through different parameters to find the optimal combination. Novo Nordisk then uses the test dataset to generate an evaluation report and determine whether the model is ready for deployment. After registering the model, it compiles and packages the model using Amazon SageMaker Edge, which makes it simple to operate ML models running on edge devices, and it uses Amazon SageMaker Edge Manager, which provides model management for edge devices, to perform ML inference on each image.

Next, Novo Nordisk uses AWS IoT Greengrass, an open-source edge runtime and cloud service for building, deploying, and managing device software, to deploy the ML model and serve as the core software for the edge device. “We use AWS services to optimize our ML model for a specific edge device,” says Carlos Ribera Codina, ML engineer at Novo Nordisk. “When we have the model up and running, every time that we make a prediction, we process the data and send it to the cloud to perform model monitoring.” Novo Nordisk monitors its ML models in production using Amazon QuickSight and Amazon Timestream, a fast, scalable, serverless time-series database. With these monitoring capabilities, it can detect anomalies and identify inaccurate results; for example, if a hand or object is covering a box of cartridges, Novo Nordisk can find the issue on an Amazon QuickSight dashboard, review the analyzed image, and correct the error. The company also has complete traceability of each ML model in production, a necessity in the highly regulated pharmaceutical industry.

After building out the pipeline to run its cartridge-counting model, Novo Nordisk wanted to see whether it could repurpose the pipeline for a different use case. A minimal sketch of how such a pipeline is wired together appears below.
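The case study names the services but does not show code; as a hedged sketch of how a two-step SageMaker pipeline of this shape might be wired together (the role, bucket, script name, and training image below are placeholders, not Novo Nordisk's), the SageMaker Python SDK composes steps like this:

    import sagemaker
    from sagemaker.estimator import Estimator
    from sagemaker.inputs import TrainingInput
    from sagemaker.processing import ProcessingInput, ProcessingOutput
    from sagemaker.sklearn.processing import SKLearnProcessor
    from sagemaker.workflow.pipeline import Pipeline
    from sagemaker.workflow.steps import ProcessingStep, TrainingStep

    role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder role
    bucket = "example-vision-pipeline"                     # placeholder bucket

    # Step 1: resize/label/split the prelabeled images into datasets.
    processor = SKLearnProcessor(framework_version="1.2-1", role=role,
                                 instance_type="ml.m5.xlarge", instance_count=1)
    prepare = ProcessingStep(
        name="PrepareImages",
        processor=processor,
        code="preprocess.py",  # placeholder script
        inputs=[ProcessingInput(source=f"s3://{bucket}/raw-images/",
                                destination="/opt/ml/processing/input")],
        outputs=[ProcessingOutput(output_name="train",
                                  source="/opt/ml/processing/train")],
    )

    # Step 2: train a detection model on the processed training split.
    estimator = Estimator(
        image_uri="<training-image-uri>",  # placeholder container image
        role=role, instance_count=1, instance_type="ml.p3.2xlarge",
        output_path=f"s3://{bucket}/models/",
    )
    train = TrainingStep(
        name="TrainDetector", estimator=estimator,
        inputs={"train": TrainingInput(prepare.properties
                .ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri)},
    )

    pipeline = Pipeline(name="CartridgeCountingPipeline", steps=[prepare, train])
    pipeline.upsert(role_arn=role)  # register the pipeline definition
    pipeline.start()

Evaluation, model registration, and the edge-packaging step the article describes would be appended as further steps in the same list.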
During the last 2 weeks of the prototyping engagement, the company configured the pipeline to detect bacteria growth on agar plates, thousands of which are manually analyzed every day. “We didn’t need to change much,” says Jonas Vejlgård Kristensen, solutions architect at Novo Nordisk. “We simply took a new dataset and used a different ML model. Then, we employed an anomaly-detection approach and adjusted the camera settings.”

Opportunity | Using Amazon SageMaker Pipelines to Deploy ML Models at Scale

Novo Nordisk had explored ML to automate time-consuming manual tasks, but many of its processes were disconnected and difficult to scale. “We had all the parts of the ML development process running locally on individual machines, from data processing to model training and even the manual transfer of the model to the edge devices,” says Codina. “They were not interconnected, so this process could become quite difficult, especially when we had to deploy the models at scale and maintain them in production.” The team chose to migrate because it could use AWS services to create a pipeline that would run all these models automatically and interconnect them to expedite the development process. Novo Nordisk entered a 6-week prototyping engagement with the AWS team to train and deploy an ML model that uses computer vision to count the number of drug cartridges in a box, a task previously performed manually that was time and resource intensive. The new process involves capturing images of cartridge boxes from above, using pretrained models to detect cartridges, and counting the number of locations where a cartridge is identified in each image.

Outcome | Using AWS Services to Streamline the Pharmaceutical Manufacturing Line

“Through our engagement with the AWS team, we proved to ourselves and our company that we could take a computer-vision use case, put it into the cloud, and build a working pipeline,” says Kristensen. “And we can do it in a fast and scalable way.”

" NTT DOCOMO builds a new data analysis platform for 9000 workers with AWS attracting 13 times more users and invigorating data use _ NTT Docomo Case Study _ AWS.txt,"NTT DOCOMO builds a new data analysis platform on AWS, growing its users 13x and invigorating organizational data use

Mobile telecommunications carrier NTT DOCOMO migrated its on-premises data platform to Amazon Web Services (AWS) in just seven months. The company switched from a one-size-fits-all analytics platform to environments tailored to the needs of individual organizations while establishing data catalogs for easier analytics.

Opportunity | Cloudifying the data platform to grow and ingrain data-driven management

As cloudification progressed, cost considerations became even more important. In on-premises environments, users could operate the data platforms provided by the Information Systems department without worrying about cost; in the cloud, costs rise with more users and longer usage. To address this and raise cost awareness in both IT and user departments, NTT DOCOMO ran FinHack, an AWS Cloud Financial Management workshop.

Alongside the cloud shift, the Information Systems department took on two transformational initiatives. The first was providing separate, user-focused analysis environments. The Data Analysis Lab provides functions like machine learning and visualization environments, which departments pay for through their own AWS accounts. NTT DOCOMO predicts that use of these analytics environments will expand and benefit the business, and the company has bolstered in-house training so that user departments can build their own analytics environments.
It also provides an a la carte service allowing users to select and combine AWS tools as needed, with the option of combining them with tools that the Information Systems department provides.

About NTT DOCOMO, INC.: NTT DOCOMO provides services for telecommunications and smart lifestyles as the parent company of the NTT DOCOMO Group. As of the end of fiscal year 2021, the enterprise served 84 million mobile phone users and 89 million d Point Club subscribers. NTT DOCOMO started on July 1, 1992; the NTT Group’s NTT Communications and NTT Comware became subsidiaries in May 2022. The three companies work together as the NTT DOCOMO Group to expand business, strengthen the competitiveness of its network, create and develop services, and promote digital transformation.

The company’s on-premises data platform, however, prevented quick infrastructure scaling and use of the latest tools. As NTT DOCOMO added services to its lineup, data became increasingly decentralized and difficult for departments to use properly. To solve this, the company shifted its data platform to the cloud. NTT DOCOMO began building its data infrastructure in January 2021, and the platform was completed and made available to users in July, a construction period of just seven months.

Solution | Accelerating business use with distinct user-based analytics environments

Shifting to the cloud saw an explosion in departmental use of the data platform. User numbers for the analytics environment increased 10-fold within a year of the July 2021 release, the number of user accounts paid for by the Information Systems department rose 13-fold, and monthly active users of the data catalogs increased by a factor of 2.4.

Benefits:
● 7 months to build the new data platform, from on premises to the cloud
● 13x increase in user accounts
● 10x more analytics environments
● 2.4x increase in data catalog monthly active users

The second initiative was data catalogs: itemized forms summarizing the location and contents of data. The enterprise had previously used Excel to create similar resources, but that approach forced employees to decipher scattered information. With unified data catalogs, workers can check consistent sets of metadata whenever needed.
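The case study does not name the service backing NTT DOCOMO's catalogs; purely as an illustration of the idea (a programmatic index of where data lives and what it contains), here is a sketch against an AWS Glue Data Catalog, with a hypothetical database name:

    import boto3

    glue = boto3.client("glue", region_name="ap-northeast-1")

    # Walk every table registered under a hypothetical "analytics" database
    # and print where each dataset lives and which columns it holds.
    paginator = glue.get_paginator("get_tables")
    for page in paginator.paginate(DatabaseName="analytics"):
        for table in page["TableList"]:
            sd = table["StorageDescriptor"]
            cols = [c["Name"] for c in sd["Columns"]]
            print(f"{table['Name']}: {sd['Location']} columns={cols}")

A catalog like this replaces the scattered Excel sheets the article describes: instead of deciphering spreadsheets, a user department can query one authoritative index.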
Outcome | Changing mindset as a provider for more user-focused development

After evaluating several cloud services, NTT DOCOMO selected AWS for its popularity within the DOCOMO Group, its low learning curve, its ease of linking between systems, and its comprehensive service lineup. According to Kouji Yamamoto, an assistant manager in the Data Platform Group, “Our concept was to use the new data platform for environments where users could choose the right tools, instead of solutions provided by the Information Systems department. AWS was superior to other services because it ensured security while enabling us to build a reliable environment with plenty of flexibility.” The enterprise was also highly impressed with AWS’s comprehensive cloud skills training, friendly support from dedicated AWS teams, and the cloud economics and cost management tools that aid cost control. “AWS was the perfect partner to guide us in our data expansion,” says Hirotaka Hikage, senior manager of the Data Platform Group, Information Systems Department.

“Shared cost awareness lets our IT and user departments easily reach mutual understanding,” says Hikage. “After operating the platform for a year, the cost is 30 percent less than at its peak, thanks to running the FinHack event with the Information Systems department and system integrators.”

“We’ve been able to cut service delivery times from six months on premises to about three months in the cloud, and our business speed is steadily accelerating,” says Honoka Kudo of the Data Platform Group. “Shifting to the cloud eliminated the need to come to the office, and working from home during the COVID-19 pandemic was effortless. User departments can directly refer to data catalogs and build analytics environments with plenty of flexibility. As internal use of the new data platform grew, we received requests from multiple departments to expand functionality, and they can now use their preferred analytics tools more freely. Because we can pay for AWS accounts for any project wanting to employ the new platform and users can visualize expenses, cost awareness has increased throughout the company.”

Syusaku Ijiri, General Manager of the Information Systems Department, sums it up: “Moving to the cloud to evolve our data use, changing our IT staff’s mindset, and increasing our cost awareness will generate more satisfaction for analytics environment users and increase customer value.”

According to Jun Kobayashi, a manager of the Data Platform Group, “Shifting to the cloud means we don’t have to build servers based on demand forecasts as with on-premises solutions, and it’s easier to scale up and out. We can control costs by raising our own awareness. As the Information Systems department providing analytics environments to user departments, we had become accustomed to scratch development, but we’re now more conscious of system-based fit-to-standard.
We’ve realized the importance of developing from the perspective of data platform users and making them familiar with information through data catalogs and Q&A sites.”

“This step allowed us to cut the time our users require to decipher catalogs and accumulate knowledge,” says Yamamoto. “Data catalogs organize and visualize our Information Systems department’s knowledge and information about mission-critical systems.”

The decision to move had been deliberate. “We decided to transform our organization and shift to the cloud to obtain a clearer, data-driven understanding of our customers and offer superior services,” says Hikage. NTT DOCOMO’s use of data has burgeoned since the new service opened, and with the platform now firmly entrenched in user divisions, the enterprise is planning more ways to incorporate data into the business and expand its range of use, including a verification sandbox that will make it easier for users to try new tools. The NTT DOCOMO Group also aims to extend the service to its subsidiary NTT Communications. Says Hikage, “We’ll expand the new platform to more departments, quantify the relative value of data, and select and collect data needing refinement for a better managed Group.”

" Numerix Scales HPC Workloads for Price and Risk Modeling Using AWS Batch _ Numerix Case Study _ AWS.txt,"Numerix Scales HPC Workloads for Price and Risk Modeling Using AWS Batch

Learn how Numerix improved the performance of its financial risk analytics solution by 180 times using AWS Batch.

The Numerix team found a way to avoid the cost of ever-growing on-premises hardware and increase efficiency by migrating its high performance computing (HPC) analytics solution to Amazon Web Services (AWS) and using AWS Batch, which provides fully managed batch processing at any scale. Now, instead of asking its clients to invest in CPU cores, Numerix can offer access to an environment that is not limited by the amount of hardware on hand. “What AWS has afforded us is like what streaming has done for entertainment,” says Jim Jockle, chief marketing officer at Numerix. “Using AWS, we can run calculations that used to take a month in under 40 minutes, which is near real time for trade and risk management.”

“The cloud has been an inevitable journey for Numerix to provide efficiency and availability,” says Jockle. Numerix began undertaking software-as-a-service projects in the cloud in 2012. In 2019, the migration to AWS accelerated as engineers started using Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity in the cloud, to run its HPC solutions. Numerix began using AWS Batch in 2021 to dynamically provision the optimal quantity and type of compute resources on Amazon EC2; with the new approach, analytics performance has improved by 180 times.

More importantly, with dynamic resource allocation on AWS, Numerix can meet demanding client constraints more effectively. “Using AWS Batch, we meet service-level agreements of 40 minutes or less on portfolios with tens of thousands of trades,” says Jockle. “That’s absolutely unheard of.” Engineers stage information using Amazon Simple Storage Service (Amazon S3), cloud object storage built to retrieve any amount of data from anywhere, and the increased memory and storage capacity on AWS have reduced bottlenecks across the analytics process. Numerix is now much better prepared to take on larger portfolios: instead of telling clients they will have to wait several months to purchase, receive, and install servers each time they scale up, Numerix can help them respond to sizing changes in days or hours. “Just being able to adapt quickly is a huge win,” says Bill Humphrey, chief technology officer at Numerix.
Opportunity | Using AWS Batch to Increase Analytics Performance for Numerix

Numerix provides its analytics software to more than 250 global clients, including banks, regulators, and insurance companies. Its extensive mathematical models price deals against a wide variety of market states to simulate the likely effects if stock prices took a tumble. Financial institutions rely on this data to make decisions with billion-dollar implications, and they require the most advanced analytics available. Further complicating matters, financial markets have been in unprecedented territory since the early days of the COVID-19 pandemic, making trade and risk management information especially valuable in a time of instability. “We have clients that are doing portfolios of 20,000 trades,” says Jockle. And those portfolios are only growing larger as firms embrace risk analytics in an attempt to shield themselves from vulnerability.

This increase in trading and analytics volume poses an immense mathematical challenge that requires a great deal of compute power. “For clients to run our solutions on premises, we have to tell them, ‘This is how many CPU cores you need to have in your data center when you install our software and run it every day. And you’ll have to buy even more next year because your portfolio is growing,’” says Humphrey. That startup cost had been a barrier to the adoption of Numerix tools. Numerix needed a way to scale its HPC solution as client portfolios ballooned in size: its institutional customers require insight into thousands of possible market scenarios to avoid being dangerously vulnerable to market changes, and the rapidly increasing complexity of capital markets meant that risk and pricing models were consuming costly and unwieldy computing resources.

Figure 1: Advanced Analytics Architecture (architecture diagram not reproduced here)

Solution | Reaching Virtually Limitless Scalability at Limited Cost Using AWS

Many of Numerix’s clients have appreciated the transition to a cloud-first mindset. “In the cloud model, clients no longer need a very large IT department to run our HPC solutions,” Humphrey says. Instead of buying more servers every time they scale up, organizations can adapt to sizing changes in the cloud in a matter of hours. Numerix also makes extensive use of Amazon EC2 Spot Instances, which let users run fault-tolerant workloads for up to 90 percent off Amazon EC2 On-Demand pricing; combining Spot Instances with serverless technology has produced significant cost savings. A sketch of this job-submission pattern follows.
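As a rough sketch of that pattern (the job, queue, and definition names below are invented for illustration, not Numerix's), a containerized pricing job handed to an AWS Batch queue backed by Spot capacity looks like this:

    import boto3

    batch = boto3.client("batch", region_name="us-east-1")

    response = batch.submit_job(
        jobName="portfolio-revaluation",   # hypothetical names throughout
        jobQueue="risk-spot-queue",        # queue backed by Spot capacity
        jobDefinition="pricing-model:3",
        containerOverrides={
            "command": ["python", "revalue.py", "--portfolio", "p-20000"],
            "resourceRequirements": [
                {"type": "VCPU", "value": "16"},
                {"type": "MEMORY", "value": "65536"},  # MiB
            ],
        },
    )
    print("Submitted job:", response["jobId"])

Batch provisions EC2 capacity for queued jobs, runs them, and scales the instances back down afterward, which is the "charged for only the actual seconds of use" behavior described next.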
Numerix leaders agree that adopting a cloud-native orchestrator and serverless architecture has been the key to taking advantage of the full elasticity of the cloud. Although Numerix used a lift-and-shift approach in the early stages of the migration, the move to a fully serverless model was a milestone. “The serverless model is exactly what we need so that we don’t have expensive resources running all the time,” says Humphrey. “We submit these workloads to AWS Batch, which orchestrates compute resources by provisioning the right Amazon EC2 instances for the jobs submitted, runs these jobs, and then shuts the instances down when the work is completed, and we’re charged for only the actual seconds of use.” Numerix uses AWS Step Functions, a low-code, visual workflow service for modern applications, to run its serverless capabilities.

Outcome | Reaching Virtually Limitless Scalability at Limited Cost Using AWS

These technical enhancements have a real-world impact. “Our clients are using our risk analytics to avoid billion-dollar losses,” says Jockle. “The introduction of near-real-time analytics with the virtually limitless scalability of AWS has been a real game changer.” Numerix is eager to transition more of its clients to the cloud and is working to expand its software-as-a-service model as a key delivery and operational framework. “AWS provides such a huge range of services and capabilities,” says Humphrey. Instead of preparing hardware for the worst possible case, clients pay for computing power as they go.

About Numerix: Founded in 1996, Numerix is a financial technology company headquartered in New York City, with 16 offices in 16 countries. It provides analytics software for more than 250 global clients, including banks, regulators, and insurance companies.

Benefits:
● 180x improvement in analytics performance
● Unlocked near-real-time analytics
● Scaled financial analytics
● Decreased bottlenecks in analytics
● Enhanced risk management

" Oportun Increases the Accuracy of Sensitive-Data Discovery by 95 Using Amazon Macie _ Oportun Case Study _ AWS.txt,"Oportun Increases the Accuracy of Sensitive-Data Discovery by 95% Using Amazon Macie

Learn how fintech Oportun, a neobank lender, achieved 95 percent data-discovery accuracy using Amazon Macie.

To accomplish its security goals, in addition to satisfying regulatory mandates and member demands for privacy, Oportun needed a solution that would not burden its security team with false positives as it scanned data.
Other solutions Oportun tried required significant technology investments and still failed to achieve accuracy goals. “Accuracy is key,” says Carlos Carlos, director of data security at Oportun. “And we’ve found that Amazon Macie is 95 percent accurate for the critical attributes that we scan for, including Social Security numbers and tax identification numbers.”

Opportunity | Using Amazon Macie to Automatically Scan Terabytes of Data for Oportun

Over the past 8 years, Oportun has built several solutions on Amazon Web Services (AWS) and stored a considerable amount of data using Amazon Simple Storage Service (Amazon S3), object storage built to retrieve any amount of data from anywhere. So when the Oportun data security team started looking for a new data-discovery offering for use with Amazon S3 buckets, it considered staying on AWS with Amazon Macie, which automates sensitive-data discovery at scale. After initial testing indicated high speed and accuracy, Oportun implemented the solution. “Using Amazon Macie, we’re seeing a 100 times improvement on both speed to scan and time to discovery,” says Oswaldo Cruz, data security engineer at Oportun. Because member data changes rapidly, those gains have far-reaching effects. “When we started using Amazon Macie, scanning time went from days or weeks to hours, even hitting 30 minutes for smaller Amazon S3 buckets under 1 TB,” says Carlos. “And we saw that these findings were valid.”

Solution | Communicating Business Impact Using Amazon QuickSight

It’s vital that Oportun’s technical teams can articulate the financial impact of risk issues to a nontechnical audience. To that end, the company uses a combination of AWS services to identify, assess, and communicate risk across the enterprise: Amazon Macie to identify sensitive data, and Amazon Athena, a serverless, interactive query service for analyzing data in Amazon S3 using standard SQL, to evaluate it. “We scan Amazon S3 buckets with Amazon Macie, send the results back to Amazon S3, and use Amazon Athena to read that result,” says Cruz. “Then, we use internal tools to identify unique records across many files to calculate data risk.”

Within its new solution, Oportun makes heavy use of Amazon Macie automated data discovery to identify Amazon S3 buckets with potential personally identifiable information (PII) in a cost-effective and scalable way. With automated data discovery, Oportun doesn’t have to scan every Amazon S3 bucket completely; instead, it can identify and prioritize which buckets to remediate first, accelerating risk reduction. The data security organization works from a heat map of priority buckets and engages other teams in agile sprints to rapidly remediate potentially risky data. Increased visibility into exposure has made it easier to align the organization around data security, and the team uses Amazon QuickSight, which powers data-driven organizations with unified business intelligence at hyperscale, to make the findings simple for everyone in the organization to understand.
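As a hedged sketch of the first leg of that flow (the account ID and bucket name are placeholders), a one-time Macie classification job over an S3 bucket can be started like this; findings are then exported to S3 and read with Athena, as Cruz describes:

    import boto3

    macie = boto3.client("macie2", region_name="us-west-2")

    job = macie.create_classification_job(
        name="scan-member-data",           # hypothetical job name
        jobType="ONE_TIME",
        s3JobDefinition={
            "bucketDefinitions": [{
                "accountId": "123456789012",        # placeholder account
                "buckets": ["member-data-bucket"],  # placeholder bucket
            }]
        },
    )
    print("Classification job started:", job["jobId"])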
Outcome | Building a Comprehensive Data Protection Offering Using AWS Services

The company is comfortable leading the way with new ideas. “We’re happy to collaborate with the AWS team on proof-of-concept work for new technologies,” says Carlos. “We want to do more, and using Amazon Macie is making that simpler.” Oportun is continually developing innovative data protection solutions as it seeks to stay ahead of both threats and competitors. Next, the company will use AWS capabilities to complement its current pipeline and add features, like observability and alerting, to improve risk monitoring and response. In addition to developing new tools, the team will drive optimization to reduce its total cost of ownership. A primary goal was to reduce risk as much as possible so that member PII is safer in the event of inadvertent access, and Oportun is proud of the work it has done toward that goal. “Using Amazon Macie, I think we’re pushing the envelope for the fintech space,” says Carlos. “We have a better sense of where our data is across a number of sources.”

Oportun is a mission-driven organization that provides responsible and affordable financial services, at scale, to millions of people in the United States who are often poorly served by traditional financial services companies. At the core of its advanced credit-decisioning engine is Oportun’s ability to process and interpret large volumes of consumer data, including PII, from disparate sources, so the security and integrity of that data are essential. Oportun’s data security organization spends a great deal of time and money working cross-functionally with other teams to raise awareness around PII data security and to remediate issues when they are found. Always on the lookout for better tools to reduce risk, Oportun discovered Amazon Macie in late 2021.

Oportun, a fintech lender and neobank with 1.9 million members, needed a better way to quickly identify and remediate potential security risks to its members’ PII. Other solutions could take weeks or months to scan data and identify exposed PII, making it difficult for company leaders to reduce risk. “We knew that there was a lot of PII in our systems,” says Carlos.
“But we wanted to have a good sense of where that data was at virtually any moment.”

Benefits:
● 95% improvement in scanning accuracy
● 100x improvement in data-scanning speed and speed-to-discovery
● 99% reduction in cost
● Significant decrease in risk exposure

About Oportun: Oportun is an AI-powered digital banking solution that has provided more than $12 billion in responsible and affordable credit. The company is certified as a Community Development Financial Institution.

" Optimize software development with Amazon CodeWhisperer _ AWS DevOps Blog.txt,"AWS DevOps Blog

Optimize software development with Amazon CodeWhisperer
by Dhaval Shah, Nikhil Sharma, and Vamsi Cherukuri | on 30 MAY 2023 | in Amazon CodeWhisperer

Businesses differentiate themselves by delivering new capabilities to their customers faster, and they must use automation to accelerate software development by optimizing code quality, improving performance, and ensuring their software meets security and compliance requirements. Trained on billions of lines of Amazon and open-source code, Amazon CodeWhisperer is an AI coding companion that helps developers write code by generating real-time whole-line and full-function code suggestions in their IDEs. CodeWhisperer has two tiers: the Individual tier is free for individual use, and the Professional tier adds administrative capabilities for organizations that want to grant their developers access. This post provides a high-level overview of how developers can use CodeWhisperer.

Getting Started

Getting started with CodeWhisperer is straightforward and documented in the service documentation. After setup, CodeWhisperer integrates with your IDE and provides code suggestions based on the comments you write. Use TAB to accept a suggestion, ESC to reject it, ALT+C (Windows) or Option+C (Mac) to force a suggestion, and the left and right arrow keys to switch between suggestions. CodeWhisperer supports code generation for 15 programming languages and works in many IDEs, including Amazon SageMaker Studio, Visual Studio Code, AWS Cloud9, the AWS Lambda console, and many JetBrains IDEs. Refer to the Amazon CodeWhisperer documentation for the latest updates on supported languages and IDEs.

Contextual Code Suggestions

CodeWhisperer continuously examines your code and comments to offer contextual code suggestions, generating snippets based on that context and the location of your cursor. In the post’s first example, an inline comment in Visual Studio Code names a file and an Amazon Simple Storage Service (Amazon S3) bucket, and CodeWhisperer uses this context to suggest relevant code without requiring the user to manually replace variables or parameters. CodeWhisperer also supports declarative and procedural code such as shell scripting and query languages: from a comment, it can suggest a shell script that loops through servers, runs the hostname command, and saves the responses to an output file, or SQL that uses a common table expression. The original post also illustrated CodeWhisperer integrated with the AWS Lambda console; those screenshots are not reproduced in this text.
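As a hedged reconstruction of the kind of exchange those lost screenshots showed (the bucket name is invented for the example), the developer types only the comment on the first line, and CodeWhisperer proposes a function body along these lines:

    # upload a local file to the "reports" S3 bucket
    import boto3

    def upload_report(file_path: str) -> None:
        """Suggested completion: push the file to S3 under its base name."""
        s3 = boto3.client("s3")
        s3.upload_file(file_path, "reports", file_path.split("/")[-1])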
Amazon CodeWhisperer is a versatile AI coding assistant that can aid in a variety of tasks, including AWS-related work and API integrations as well as external (non-AWS) API integrations; one of the post’s examples shows CodeWhisperer suggesting code for Twilio’s APIs. Having seen how CodeWhisperer helps you write code faster, the next section explores how to use AI responsibly.

Use AI responsibly

Developers often rely on open-source code but run into license-attribution challenges, such as crediting the original authors or maintaining license text. The difficulty lies in properly identifying and attributing the relevant open-source components used within a project: with the abundance of open-source libraries and frameworks available, tracking and attributing each piece of code accurately can be time-consuming and complex, and failing to meet attribution requirements can result in legal issues, violations of intellectual property rights, and damage to a developer’s reputation. CodeWhisperer’s reference tracking continuously monitors suggested code for similarities with known open-source code, allowing developers to make informed decisions about incorporating it into their projects and to ensure proper attribution.

Shift left application security

CodeWhisperer can scan code for hard-to-find vulnerabilities, such as those in the OWASP (Open Web Application Security Project) top ten, violations of crypto-library best practices, and deviations from AWS security best practices. As of this writing, CodeWhisperer supports security scanning in Python, Java, and JavaScript. It identifies well-known CWEs (Common Weakness Enumerations) and lets you jump to the problematic line of code with a click, providing file-by-file analysis that highlights top OWASP-related CWEs such as unsanitized input run as code, cross-site scripting, resource leaks, hardcoded credentials, SQL injection, OS command injection, and insecure hashing.

Generating Test Cases

A good developer always writes tests. CodeWhisperer can suggest test cases that verify a function’s behavior, considering boundary values, edge cases, and other potential issues that may need to be tested. In the post’s example, a comment referring to a fact_demo() function leads CodeWhisperer to suggest a unit test for fact_demo() while drawing on contextual details. CodeWhisperer can also simplify repetitive test scaffolding; for example, if you need sample data created with INSERT statements, it can generate the necessary inserts based on a pattern.

CodeWhisperer with Amazon SageMaker Studio and Jupyter Lab

CodeWhisperer works with SageMaker Studio and Jupyter Lab, providing code-completion support for Python in code cells. To use it, follow the setup instructions to activate it in Amazon SageMaker Studio and Jupyter Lab, then see the user actions documentation to begin coding. The post’s final illustration showed CodeWhisperer in SageMaker Studio suggesting code, based on comments, for loading and analyzing a dataset.
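That final screenshot is not reproduced here; as a hedged reconstruction of the kind of completion it showed (the file name is invented), the developer types the comment and CodeWhisperer fills in the cell:

    # load the dataset from a CSV file and show summary statistics
    import pandas as pd

    df = pd.read_csv("sales_data.csv")  # invented file name
    print(df.head())        # preview the first rows
    print(df.describe())    # summary statistics for numeric columns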
Conclusion

This post has highlighted ways developers can use CodeWhisperer to increase productivity, streamline workflows, and develop secure code. By adopting CodeWhisperer’s AI-powered features, developers can experience enhanced productivity, accelerated learning, and significant time savings. To take advantage of CodeWhisperer and optimize your coding process, here are the next steps: 1. Visit the feature page to learn more about the benefits of CodeWhisperer. 2. Sign up and start using CodeWhisperer. 3. Read about CodeWhisperer success stories.

About the Authors

Vamsi Cherukuri is a Senior Technical Account Manager at Amazon Web Services (AWS), drawing on over 15 years of developer experience in analytics, application modernization, and data platforms. With a passion for technology, Vamsi takes joy in helping customers achieve accelerated business outcomes through their cloud transformation journeys. In his free time, he finds peace in running and biking, frequently immersing himself in the thrilling realm of marathons.

Dhaval Shah is a Senior Solutions Architect at AWS, specializing in machine learning. With a strong focus on digital-native businesses, he empowers customers to use AWS to drive their business growth. As an ML enthusiast, Dhaval is driven by his passion for creating impactful solutions that bring positive change. In his leisure time, he indulges in his love of travel and cherishes quality moments with his family.

Nikhil Sharma is a Solutions Architecture Leader at Amazon Web Services (AWS), where he and his team of Solutions Architects help AWS customers solve critical business challenges using AWS cloud technologies and services.

TAGS: codewhisperer, Developer Tools, DevOps" Optimizing Fast Access to Big Data Using Amazon EMR at Thomson Reuters _ Case Study _ AWS.txt,"Optimizing Fast Access to Big Data Using Amazon EMR at Thomson Reuters

Learn how Thomson Reuters built scalable, simplified workflows for big data using Amazon EMR.

About Thomson Reuters: Thomson Reuters is a leading provider of business information services. Its products include highly specialized information software and tools for legal, tax, accounting, and compliance professionals, combined with the global news service Reuters.

The team uses AWS CloudFormation to automate the deployment of its resources. AWS CloudFormation manages artifacts generated by AWS CodeBuild, a fully managed continuous integration service that compiles source code, runs tests, and produces ready-to-deploy software packages; these artifacts are used at later steps in the pipeline as part of an automated process that reduces manual errors so that the big data team can iterate faster. The team deploys workflows using AWS CodePipeline, a fully managed continuous delivery service for automating release pipelines for fast, reliable application and infrastructure updates. Instead of staggering workflows at fixed times, each step now automatically initiates the next. “I can’t imagine prioritizing our resources and getting near-real-time updates with our previous architecture,” says Scott Berres, lead developer at TR.
“Using Amazon EMR ephemeral clusters, we can go as big as we want at near real time.”

Opportunity | Using Amazon EMR to Build an Elastic Compute Solution for Thomson Reuters

After 7 years of big data workflows, the team faced increasingly complex business requirements that constantly demanded new hardware for resource-intensive jobs. The team had been running its 300 workflows on premises on a multitenant single cluster of Apache Hadoop, an open-source framework used to store and process large datasets efficiently. For greater stability, the team created a second Apache Hadoop cluster running the same code, which doubled costs and took months to coordinate, schedule, and test upgrades. TR wanted to replace this higher-latency solution, designed for efficient batch processing, with a workflow that could handle the near-real-time data its demanding business use cases increasingly required. With TR’s decision to modernize its technologies and migrate to the cloud, the big data environment needed a plan. The team started with a small proof of concept comparing compute solutions in the cloud and ultimately chose Amazon EMR, the industry-leading cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning (ML) using open-source frameworks such as Apache Spark, Apache Hive, and Presto.

Solution | Automating Workflows in the Cloud

Using Amazon EMR, TR’s solution automatically adjusts to a fluctuating number of core nodes, from about 200 to more than 10,000 cores per hour. Amazon EMR clusters are right-sized and created automatically through AWS Step Functions, a visual workflow service for building distributed applications, automating processes, orchestrating microservices, and creating data and ML pipelines; the team deploys AWS Step Functions through AWS CloudFormation, which lets organizations model, provision, and manage AWS and third-party resources by treating infrastructure as code. Rather than running all its workflows on a single Apache Hadoop cluster, TR runs each Apache Spark job on an ephemeral Amazon EMR cluster that shuts down when the job completes. To manage datasets, the solution uses Apache Hudi on Amazon EMR, an open-source data management framework that simplifies incremental data processing and data pipeline development. As a result, TR has reduced cluster runtime by 48 percent. Instead of writing results to the Hadoop Distributed File System, Apache Hudi writes datasets to Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, availability, security, performance, and durability.

Every other week throughout the migration, the TR team met with AWS engineers, who made suggestions, set up working sessions, and even examined TR’s Apache Spark logs to help resolve any glitches. A bare-bones sketch of the ephemeral-cluster pattern follows.
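TR's orchestration runs through AWS Step Functions, but the essence of the pattern can be sketched directly against the EMR API; the names, instance sizes, and S3 path below are illustrative, not TR's:

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    response = emr.run_job_flow(
        Name="spark-job-ephemeral",          # hypothetical cluster name
        ReleaseLabel="emr-6.10.0",
        Applications=[{"Name": "Spark"}],
        Instances={
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge",
                 "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.2xlarge",
                 "InstanceCount": 10},       # right-sized per job
            ],
            "KeepJobFlowAliveWhenNoSteps": False,  # ephemeral: shut down when idle
        },
        Steps=[{
            "Name": "run-spark-job",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "s3://example-bucket/jobs/job.py"],
            },
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    print("Cluster:", response["JobFlowId"])

Because the cluster exists only for the lifetime of its one Spark step, compute is paid for per job rather than held idle between jobs.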
The team completed its migration of 3,000 Apache Spark jobs to AWS in 18 months. “The overall migration went about as smoothly as it could go,” says John Engelhart, associate architect at TR.

Outcome | Streamlining Data Accessibility to Drive Company-Wide Innovation

Teams throughout TR have benefited from the big data team’s ability to provide more streamlined, accessible data. For example, TR has merged its big data tech stack with ML applications within the company: research and development teams simply read data from Amazon S3 and use it to develop and productionize ML models for other internal teams, speeding innovation and facilitating the release of new products. “Other teams create custom business features, and that wasn’t the case when we were on premises,” says Engelhart. “Now lots of teams can find our data. They ask for it, and with justification and approval, we simply grant access. It’s spreading like wildfire through the company.”

In September 2022, TR launched Westlaw Precision, a new version of TR’s online research service and proprietary database for legal professionals. Using TR’s improved workflow built on AWS, Westlaw Precision doubles the speed at which lawyers conduct research and improves the quality of searches, reducing the risk of missing relevant cases. “Using Amazon EMR, we spin up more resources and run our workflows more frequently,” says Engelhart. “That is a huge win. We can provide content updates every 1 hour instead of every 24 hours.”

Benefits:
● 3,000 Apache Spark jobs seamlessly migrated to AWS
● 300 automated, more stable workflows
● 48% reduction in cluster runtime
● Content update cadence reduced from 24 hours to 1 hour
● Improved time to market for new services

" Optimizing Storage Cost and Performance Using Amazon EBS _ Devo Case Study _ AWS.txt,"Optimizing Storage Cost and Performance Using Amazon EBS with Devo

Learn how Devo used Amazon EBS to improve profit margins, performance, and competitive flexibility.

“We use terabytes and even petabytes of storage space,” says Miguel Martín, VP of product operations at Devo.
“So the migration was a no-brainer from the financial side after the technical side had been validated.” Devo uses Amazon EBS to improve margins and flexibility, and the savings can be invested elsewhere, such as in value-adding innovation. By managing its infrastructure with block storage, Devo also saves 30 to 40 percent of the time it would otherwise spend on compliance.

Today, in 2022, Devo serves customers among the Fortune 2000. Firms look to Devo to ingest their log data and manage it securely. As part of the security information and event management (SIEM) process, Devo provides near-real-time analysis of alerts, which is crucial in an interconnected world with ever more pathways for security events. “You can think of SIEM as a barrier, like our ozone layer,” says Tony Le, director of cloud partnerships at Devo. “It helps mitigate threats to customers’ networks.”

About Devo: Devo is a cloud-native logging and security analytics company that empowers global organizations to optimize the value of their security and operational data by providing solutions for near-real-time visibility and insight. Devo needed powerful storage capacity and scalable grid capacity for high-speed response. Older systems can struggle to keep up with new challenges and security events, but using capabilities native to AWS, Devo helps companies with legacy systems achieve next-generation SIEM seamlessly.

When company teams need a consultation, Devo turns to AWS Enterprise Support, a 24/7 technical concierge service with high-quality engineers, tools, and technology. In weekly catch-up calls with a Technical Account Manager, Devo works toward cost optimization, operational efficiency, and new projects; initiatives include a plan to use artificial intelligence and machine learning to automate up to 95 percent of security operations. Drawing on AWS expertise helps Devo make the most of the services it uses while driving the pace of innovation and increasing its visibility in the marketplace.

Throughout the 3-month migration, Devo’s top concern was serving its customers, so data processes continued to work seamlessly. “The most important factor was migrating without impacting our service availability,” says Martín. “There was zero downtime because we made the changes live. The customers didn’t even notice.”

Benefits:
● Sub-millisecond query response times
● 20% reduction in costs by migrating to gp3
● Zero downtime during migration
● 30–40% time saved by outsourcing infrastructure
● On-demand storage flexibility based on changing business needs

In July 2022, Devo became an AWS Partner. Since 2020, Devo has accelerated sales cycles by participating in AWS ISV Accelerate, a co-sell program for organizations that provide software solutions that run on or work alongside AWS.
Opportunity | Boosting Devo’s Security Analytics Solutions Using AWS

Founded in Madrid, Spain, in 2011, Devo chose to build its infrastructure on AWS for its maturity and performance. Devo uses AWS to create cybersecurity solutions for organizations, helping transform security operations centers to empower investigation efforts and turning legacy and microservices-based applications into scalable, cloud-based solutions. Fast queries, near-real-time alerts, and data security are essential to migrating and managing large volumes of data for high-profile organizations and companies. “Working alongside AWS has helped us grow from a five-person startup to a truly global company,” says Le. “There wouldn’t be Devo without AWS.”

Solution | Providing Scalability and Speed Using Amazon EBS While Saving 20% on Storage

Devo centralizes customers’ raw data, configuring alerts and dashboards so that customers can rapidly identify malicious activity and unauthorized access. Customers can also take advantage of the powerful analytics that Devo provides on the backend for actionable insights, for example to create self-protection strategies for the future. Using the scale and speed of AWS, Devo responds to queries at sub-millisecond speeds. To match workloads, the Devo analytics cluster uses Amazon Elastic Compute Cloud (Amazon EC2), secure and resizable compute capacity for virtually any workload, relying on nonvolatile memory express (NVMe) drives to write and replicate ephemeral data quickly. To dynamically increase performance with minimal downtime, Devo uses Amazon Elastic Block Store (Amazon EBS), an easy-to-use, scalable, high-performance block storage service designed for Amazon EC2.
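The storage strategy described next centers on retyping live volumes to gp3. As a hedged illustration of the API involved (the volume ID is a placeholder), an attached, in-use EBS volume can be switched to gp3 and retuned without detaching it:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    resp = ec2.modify_volume(
        VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
        VolumeType="gp3",
        Iops=6000,       # on gp3, provisioned independently of capacity
        Throughput=500,  # MiB/s
    )
    print(resp["VolumeModification"]["ModificationState"])

This is one way the no-downtime, pay-for-what-you-provision behavior in Devo's account can be achieved: performance is a dial, not a function of volume size.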
Using Amazon EBS, Devo handles critical workloads, providing reliable storage, processing frequently accessed data, optimizing costs, and accommodating customers’ daily needs, whether that’s 500 GB or 10 TB per day. Devo backs up customers’ data using Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance, and it maintains archives for customers who need 3–7 years’ worth of data for compliance purposes. Devo meets its customers’ high expectations using a combination of current-generation instance types with EBS gp3 and st1 volumes for optimized compute, memory, and storage. In 2021, Devo migrated to Amazon EBS gp3 volumes for data replication, realizing a 20 percent cost saving while maintaining performance. Amazon EBS gp3 volumes are general-purpose, solid-state-drive-based volumes whose performance can be provisioned independently of storage capacity for peak hours; during nonpeak hours, data is written to Amazon EBS st1 volumes. With this strategy, Devo can quickly scale its solution’s capacity, tune performance, and change the type of live volumes with zero interruption to workloads. By scaling input/output operations per second and throughput without buying additional block storage, Devo pays only for the storage it needs.

" Optoma-customer-references-case-study.txt,"Optoma Facilitates Virtual Collaboration with Hybrid Learning Platform on AWS

Architecting for Global Stability and Low Latency

Optoma built Creative Board using Amazon Relational Database Service (Amazon RDS) for MariaDB, a popular open-source relational database created by the original developers of MySQL, with a multi-AZ architecture to ensure service availability for its global customers. It uses Amazon CloudFront for the low-latency data transmission essential to Creative Board’s real-time interaction, and Amazon ElastiCache for Redis to power real-time features with sub-millisecond latency. Additionally, Optoma uses Amazon Simple Storage Service (Amazon S3) for data storage and retrieval. “Amazon S3 has 99.999999999 percent data durability, which reduces our risk of service interruption by providing high stability to our customers,” says Tarcy Y.M. Tsuei, chief digital officer at Optoma. Tsuei was confident in the platform from the start: “We knew AWS would provide us with the flexibility to scale dynamically based on actual usage during development and production.” Optoma’s core values include reliability, innovation, and customer focus, and the AWS Cloud supports all three. Tsuei says, “Using AWS as a platform as a service has helped us provide a more reliable, stable, and secure service offering compared to managing these aspects on our own. We can focus on business logic and trust AWS for the rest.” Optoma recently concluded a technical review of its Creative Board build with the AWS team, learning from and applying the principles of the AWS Well-Architected Tool. “AWS helped evaluate our architecture design to make sure our service is robust enough to meet the real-time demands of educators and students around the world,” adds Tsuei.
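The case study credits ElastiCache for Redis with the sub-millisecond real-time layer but does not describe Creative Board's protocol; one plausible shape for it, offered only as a sketch with a placeholder endpoint, is Redis pub/sub fanning whiteboard events out to every connected viewer:

    import json
    import redis

    # Placeholder ElastiCache endpoint.
    r = redis.Redis(host="example-cache.abc123.use1.cache.amazonaws.com",
                    port=6379)

    def publish_stroke(board_id: str, stroke: dict) -> None:
        """Broadcast one annotation event to everyone viewing the board."""
        r.publish(f"board:{board_id}", json.dumps(stroke))

    publish_stroke("demo-board", {"x": 120, "y": 80, "color": "#1a73e8"})

Whatever the actual wire format, keeping this hot, ephemeral state in an in-memory store is what lets browser clients see each other's annotations with the latency the case study cites.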
Exploring AI to Enhance Learning Experiences
Next, it will evaluate how artificial intelligence (AI) can be applied for further innovation in Creative Board or other education technology applications. “We’re considering how to help teachers determine how effective a class was by measuring participation rates or interaction with the board,” explains Tsuei. “Or how AI could improve students’ concentration and ability to absorb the information shared on Creative Board.”

Stiff competition and long innovation cycles have led many equipment manufacturers to start developing more integrated solutions. Successful manufacturers are using their application and process expertise to create holistic hardware-plus-software solutions tailored to their customers’ needs. This approach has proven sustainable and profitable. Manufacturers that are further ahead in this transformation cycle delivered higher total shareholder returns over the past three years than peers that are just beginning to offer integrated solutions.

Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience.

Maintains low latency for real-time interactions
Receives support for robust architecture builds

Optoma Facilitates Virtual Collaboration with Hybrid Learning Platform on AWS

Another division within Optoma, the internal IT team, is also taking advantage of the AWS Cloud and Amazon Elastic Container Service (Amazon ECS) to develop a market intelligence platform. The platform will collect data from internal sales and external sources such as social media to stay aligned with sentiment and developments in Optoma’s target markets.

Making Virtual Collaboration Easy with Creative Board
The company recently launched the Creative Board hybrid learning platform, its latest foray into IoT innovation. Creative Board allows users to simultaneously work or learn on Optoma’s interactive panel displays by providing a connected whiteboard with embedded annotation tools. Teachers, students, and corporate employees can use their computer or smartphone browsers to participate in classes or collaborate in brainstorming sessions.

Turning to the Cloud for Agile Software Development
Until its pivot to software-driven innovation, Optoma relied on on-premises infrastructure for its IT requirements. However, when its software team was formed, the company turned to cloud computing for faster software development. Optoma had been using Amazon Web Services (AWS) to run its website and chose AWS as its application development platform. Optoma launched Creative Board 56 percent faster on the AWS Cloud compared to previous application launches on premises. Its engineers can create development infrastructure in as little as one week, whereas the on-premises infrastructure procurement cycle could take three months.

About Optoma
Optoma is a global leader in display technologies such as projectors and interactive flat-panel displays. Its interactive solutions are currently used by corporate and education customers, plus individual consumers, in 159 countries.

In 2017, Optoma introduced Internet of Things (IoT) technology to enable the remote management of its devices. It first launched the Optoma Connect app for consumers to control projectors in the home. Optoma Connect uses the MQTT IoT messaging protocol running on Amazon Elastic Compute Cloud (Amazon EC2) instances, and it relies on Amazon Alexa to enable voice-activated commands.
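As a concrete illustration of the MQTT flow described above (not Optoma's actual code), the following is a minimal publisher sketch using the paho-mqtt Python library; the broker host and topic are hypothetical placeholders.

import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.com", 1883)  # hypothetical MQTT broker on EC2

# Publish a device command, for example powering on a projector.
client.publish("optoma/projector/living-room/power", "on", qos=1)
client.disconnect()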
Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications.

Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

Benefits
Launches products 56% faster
Ensures reliable global service delivery
Saves 36% on infrastructure costs

Optoma is a leading provider of large format display solutions for large-venue installations, businesses, educators, and consumers. Since establishing its brand in 2000, the company has aspired to captivate, inspire, and help its customers connect via its comprehensive display offerings, from award-winning projectors to interactive flat panels and direct-view indoor LED displays. In 2016, Optoma began building proprietary software solutions to facilitate presentations, collaboration, and communication for remote and hybrid work environments.

Launching Products 56% Faster at a 36% Lower Cost
In addition to speed of iteration and development, Optoma has found building on the AWS Cloud more cost-efficient. “We estimate a 36 percent cost savings by adopting AWS services because we are saving on the purchase of hardware and software licenses,” Tsuei says. Previously, Optoma would buy and renew licenses for security software, for example, as part of its application stack. On AWS, however, the company benefits from security by design, a foundational concept behind every AWS service. Tsuei concludes, “AWS continues to support our teams and innovation mindset to bring new and reliable products to market faster.”

2022

To learn more, visit aws.amazon.com/solutions/iot."
Paige Case Study _ AWS.txt,"To run its ML training workloads, Paige uses Amazon EC2 P4d Instances, powered by NVIDIA A100 Tensor Core GPUs, which deliver high performance for ML training and HPC applications in the cloud. Paige uses these instances to queue orchestrated ML jobs, optimizing to avoid paying for idle time between jobs and providing fit-for-purpose compute across its two compute environments. “Using Amazon EC2 P4d Instances, we increased our compute capacity while balancing costs across our on-premises and cloud environments,” says Razik Yousfi, vice president of engineering at Paige. “We didn’t have to come up with a substantial amount of capital to improve the performance of our HPC clusters.”

Optimizes internal workflows 72% faster

Paige’s Compute Environments (architecture diagram)

Paige is using the power of AI to drive a new era of cancer discovery and treatment. To improve the lives of patients with cancer, Paige has created a cloud-based platform that transforms pathologists’ workflow and increases diagnostic confidence as well as productivity.
In 2021, Paige created a proof of concept to determine which cloud services would best suit its HPC needs and work alongside its existing solutions, including PyTorch, which it uses as its ML framework. “The AWS team was great in connecting us with subject matter experts,” says Fleishman. “Those subject matter experts helped us evolve our proof of concept without wasting resources and successfully pitch using AWS to leadership.” With the information it gleaned from this test, Paige decided to replicate its on-premises workflow in the cloud, using AWS to expand its compute resources for intensive ML workloads.

“Using Amazon EC2 P4d Instances, we increased our compute capacity while balancing costs across our on-premises and cloud environments.”
Razik Yousfi, Vice President of Engineering, Paige

Now that Paige has built an ML workflow in the cloud, it will continue exploring more of the latest cloud technologies to find new ways to innovate and deliver more value to life sciences and healthcare organizations. “We’ve used AWS services to deploy a workflow that looks like what we have on premises with additional flexibility and scalability,” says Sarte. “On AWS, we can test out new cloud services more efficiently and find purpose-built solutions to support our ML training.”

Opportunity | Using Amazon S3 to Simplify Data Management for Paige

Amazon EC2 P4d instances deliver the highest performance for machine learning (ML) training and high performance computing (HPC) applications in the cloud.

Processes ML workflows in parallel
Increases time savings and innovation
Simplifies data management

Outcome | Exploring AWS Cloud Services to Drive Innovation in Healthcare
To overcome this challenge, Paige turned to Amazon Web Services (AWS) and adopted a hybrid infrastructure model for running its PyTorch-based ML workloads and managing its growing data footprint. To improve the runtime performance of its software, the company adopted Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. Paige has replicated its on-premises workflows in the cloud, giving it the ability to use its on-premises and cloud environments in parallel through similar user interfaces. Additionally, the company can access compute capacity in bursts, helping it scale up and down as required by its ML workloads. This scalability helps Paige minimize operational overhead, reduce compute costs, and improve staff productivity.

2022

With its hybrid cloud architecture, the Paige development team doesn’t have to manually run every ML workload. “On AWS, our developers can queue up our software and run our ML workloads without having to keep their hands on their keyboards,” says Matthew Sarte, senior systems engineer for HPC at Paige. Now that the company has streamlined its internal workflows to save time and improve productivity, the Paige team can focus on training more ML models and driving innovation.
Overview

About Paige
Founded in 2017, Paige strives to transform cancer diagnostics by developing clinical-grade AI solutions to extract key insights from digital slides, such as large-size pathology images. Using ML, Paige can assist pathologists in the diagnosis of cancer and unlock hidden insights that are not visible to the naked eye, helping advance drug discovery and clinical breakthroughs.

Solution | Adopting Amazon EC2 P4d Instances to Speed Up Internal Workflows by 72 Percent
In 2019, Paige adopted Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve any amount of data from anywhere. Based on its experience using this service, the company wanted to deepen its use of AWS so it could maintain consistency across its cloud technologies. “Amazon S3 simplified our data management,” says Brandon Rothrock, director of AI science at Paige. “This service gave us the ability to use common interfaces and deep integration with our data platform, annotation platform, HPC compute, and many other applications that surround AI development operations.”

Paige uses Elastic Fabric Adapter—which facilitates HPC and ML applications at scale—to distribute training workloads across multiple servers and accelerate training large ML models. To host its imaging and slide data, Paige uses Amazon FSx for Lustre, fully managed shared storage built on a popular high-performance file system. The company connected this service with some of its Amazon S3 buckets, which helps its development teams address petabytes of ML input data without manually prestaging it on high-performance file systems. “By connecting Amazon FSx for Lustre to Amazon S3, we can train on 10 times the amount of data that we have ever tried in the on-premises infrastructure without any trouble,” says Alexander van Eck, staff AI engineer at Paige. The company manages assets that need to be visible both in the cloud and on premises using AWS Storage Gateway, which provides on-premises applications with access to virtually unlimited cloud storage.
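Linking FSx for Lustre to an S3 bucket is configured at file system creation through the Lustre data repository settings. The following is a minimal boto3 sketch of that association, not Paige's actual setup; the bucket, subnet, and sizing values are hypothetical.

import boto3

fsx = boto3.client("fsx")

# Create a scratch Lustre file system whose namespace is linked to an S3
# bucket, so objects appear as files without manual prestaging.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,  # GiB; hypothetical sizing
    SubnetIds=["subnet-0123456789abcdef0"],  # hypothetical subnet
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://example-ml-input-data",  # hypothetical bucket
        "ExportPath": "s3://example-ml-input-data/results",
    },
)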
Biotechnology company Paige develops complex, advanced machine learning (ML) applications that support healthcare professionals in delivering precision diagnoses and treatment plans, helping improve their quality of care and patient outcomes. Because of its innovative approach to cancer detection, Paige became the first company to receive U.S. Food and Drug Administration approval for using artificial intelligence (AI) in the field of pathology. The company had built an on-premises solution, with a high performance computing (HPC) cluster powered by NVIDIA GPUs, for running its ML workloads. Because Paige wanted to continue expanding its operations and developing more ML models, it needed to update its infrastructure to match its growing computational requirements. To meet this need, Paige wanted to use cost-effective, scalable HPC resources in the cloud.

To support its operations, Paige requires a robust infrastructure that can handle the complexity of its training codebase and the amount of training data. Before building its cloud infrastructure, the company developed its ML models natively on PyTorch and deployed its software using an HPC cluster built from on-premises hardware. As Paige expanded its product and scientific pipeline, the company needed to scale its compute resources to match the increased demand.

“Our on-premises solutions were maxed out,” says Mark Fleishman, senior director of infrastructure at Paige. “Our main goal is to train AI and ML models to help with cancer pathology. And the more compute capacity we have, the faster we can train our models and help solve diagnostic problems.”

Learn how Paige in the life sciences industry accelerates PyTorch-based ML model training using Amazon EC2 P4d Instances powered by NVIDIA.

Reduces compute costs

Amazon FSx for Lustre provides fully managed shared storage with the scalability and performance of the popular Lustre file system.

Customer Stories / Life Sciences

AWS Storage Gateway is a set of hybrid cloud storage services that provide on-premises access to virtually unlimited cloud storage.

Paige Furthers Cancer Treatment Using a Hybrid ML Workflow Built with Amazon EC2 P4d Instances"
PayEye Launches POC for Biometric Payments in 5 Months Using AWS _ Amazon EKS.txt,"Launched proof of concept for biometric payments in 5 months
Processed over 10,000 commercial transactions

Because PayEye uses individuals’ personal biometric information to authenticate payments, security and data protection are major concerns. To gain approval to launch its service, it needed to ensure compliance with the EU General Data Protection Regulation (GDPR) and demonstrate to the Polish Financial Supervision Authority that it could ensure high levels of security for its users. “Security is crucial to our service,” says Łyczba. “Using tools available from AWS, we are satisfied that we have achieved the high regulatory standard required.”

PayEye’s secure biometrics technology converts facial and iris features into unique patterns to authenticate payments. Consumers can use the technology to make biometrically authenticated purchases at shops, restaurants, and sports clubs after a very short registration on a mobile application, using point-of-sale devices called the eyePOS.

The startup uses AWS for many aspects of its solution. “From security and databases to configuration, deployment, and caching, AWS was critical to developing our biometrics technology,” says Łyczba, chief technology officer (CTO) at PayEye. “Most of our solution relies on it.”

PayEye realized further cost savings by following suggestions from its AWS account team on ways to optimize its services. “We were able to precisely track our budget to ensure we could launch our proof of concept without seeking additional funding,” says Łyczba.

Łukasz Łyczba, Chief Technology Officer, PayEye

Startup PayEye, founded in Poland in 2019, developed a biometrics payment service that uses a person’s iris and face to authenticate purchases. The company needed to act fast to secure funding, gain regulatory approvals, and win over retail partners before launching its solution. PayEye built its platform on AWS and completed a proof of concept for its biometric authentication technology in 5 months. Assisted by the tools and services available from AWS, it navigated security and data protection regulations and launched a complete and secure payment ecosystem soon after the initial proof of concept. PayEye also uses AWS to analyze real-time data on device performance and user numbers to improve customer experience. PayEye uses Amazon QuickSight, a cloud-native, serverless business intelligence service.
“From Amazon QuickSight dashboards we’re able to see which units are the most profitable and prioritize any tweaks that need to be made to functionality—this maximizes uptime for key revenue generators,” says Łyczba.

PayEye has a vision for a future where customers can authenticate purchases using their iris and face. Founded in 2019, the Polish startup knew that it needed to quickly demonstrate the technology for its biometric payment service to secure funding and win over retail and ecommerce partners.

About PayEye
PayEye, assisted by the tools and services available from AWS, navigated security and data protection regulations and has processed over 10,000 commercial transactions. The company also uses AWS to provide data-driven insights that help it to improve customer experience and support the international rollout of its payment service.

Building a Secure Iris-Recognition Payment System on AWS
PayEye has more than 150 retail partners and has logged over 2,000 verified users for its payment service. This early success is due in part to the company monitoring and analyzing real-time device performance and customer usage. From this analysis, it gains insights into how it can improve its platform and customer experience. “With hardware it’s crucial to know how the devices are operating and which are most profitable,” says Łyczba. “This dictates how we prioritize maintenance and development.”

Ensured high levels of security for customer data

Using Amazon Web Services (AWS), the company launched a proof of concept within 5 months and soon after conducted the first commercial transaction in June 2020. PayEye customers can now authenticate payments from 150 point-of-sale devices installed in retail shops, restaurants, and sports clubs in the Polish city of Wrocław.

Speeding up Development, Saving Costs, and Clearing Regulatory Hurdles
“From security and databases to configuration, deployment, and caching, AWS is critical to developing our biometrics technology. Our solution relies on it.”

Benefits of AWS

Generating Business Insights Using Amazon QuickSight

AWS Services Used
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service to run and scale Kubernetes applications in the cloud or on premises.
Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can easily migrate to AWS without having to rewrite code.

PayEye built its solution on Amazon Elastic Kubernetes Service (Amazon EKS), which makes it easy to deploy, manage, and scale containerized applications using Kubernetes. It also uses Amazon MQ, which reduces operational responsibilities by managing the provisioning, setup, and maintenance of message brokers.

Analyzed real-time customer and device performance

Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), IT managers, and product owners.
CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, and optimize resource utilization. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events.

PayEye sped up the development process and entered the production phase within just a few months, using out-of-the-box AWS services. This approach reduced the time and effort needed to find and hire talent, and has freed up PayEye’s team to focus on developing its core offering while being supported by just one cloud architect, lead DevOps engineer Lukasz Garncarz. “Using AWS is like having an in-house team,” says Łyczba. “We’ve saved money on recruitment, and we didn’t have to sink time into a lengthy hiring process.”

PayEye has created a biometric payment system that authenticates purchases through biometrics recognition. Founded in 2019, the Polish company provides its proprietary eyePOS terminals to retailers and restaurants and its mobile application to end users.

PayEye has just launched the next generation of its eyePOS devices and plans to launch its new biometric technology internationally in the coming months. The company expects it will be easy to recruit new team members as they continue to grow. “Everyone wants to work for a company that is changing global trends,” says Łyczba. “AWS supports us in this.”

PayEye Launches Proof of Concept for Biometric Payments in 5 Months Using AWS

2022

Amazon QuickSight allows everyone in your organization to understand your data by asking questions in natural language, exploring through interactive dashboards, or automatically looking for patterns and outliers powered by machine learning."
Postis Case Study.txt,"Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information.

Benefits of AWS
Expanded delivery services to 25 European countries in 3 years
Lowered refusal rates by 20%

Postis is pursuing further international expansion to scale up its territorial coverage and the number of retailers it serves, and to optimize and increase the volume of deliveries. Postis is using local AWS compute resources to run its ML models.

Postis wants to help retailers and delivery companies master the last mile of the journey to their customers. The fast-growing Romanian startup provides a real-time digital platform for logistics automation, optimization, and tracking that ensures an excellent service experience across the entire consumer journey, from ordering all the way through to receiving goods.

Postis is off to a strong start, rapidly expanding its customer base and tripling revenues every year since its inception. “We’re now prepared to offer our services in all of Europe and to continue adding features to increase our ecosystem’s reach,” says Bulgarov. “Building our products on AWS has helped us achieve a lot in a short amount of time.” Retailers that use Postis can now offer new features to end users, thanks to speedy access to delivery data on the platform.
For example, buyers receive the cost of their selected shipping option in real time, so retailers can provide the exact delivery cost while a buyer is still placing an order.

To do this, Postis uses machine learning (ML) to help sellers find the most suitable and cost-effective delivery solution for every type of product, customer journey, or destination. The company used Amazon Web Services (AWS) to create a scalable system with the power to run heavy ML workloads and support its global growth. This means Postis’ customers can offer deliveries in new areas without the need to adjust their IT systems. “Our customers can quickly get set up to accept orders from new countries,” says Florin. “We have all of the infrastructure and data ready for them, so they just need to sign contracts with local couriers.”

Using Amazon SageMaker to Quickly Train ML Models
Looking for a more efficient solution, the company began using Amazon SageMaker to build, train, and deploy its ML model. After that model started producing good results in a timely manner, Postis used Amazon Kinesis—which makes it easy to collect, process, and analyze real-time streaming data so you can get timely insights—to create easy-to-use dashboards to track the progress of deliveries in real time. It shares these dashboards with all internal departments to quickly identify bugs and to streamline customer service processes.

Using AWS, the company, founded in 2017, now serves more than 200 customers in 25 countries across retail, ecommerce, logistics, and transportation—including big names such as Ikea, Carrefour, Auchan, and Intersport. It works to help customers provide efficient deliveries and make smarter strategic decisions.

Build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.

When customers use Postis, the system analyzes their delivery operations and provides insights on the real-life behavior of their actual buyers. They can then use this information to make better strategic decisions. For instance, retailers can identify last-mile delivery failures and other common buyer experience issues, and then implement new policies to remedy problems. With historical data, retailers can analyze and compare performance and quality across their entire pool of carriers, improving their selection and contract negotiation. Data-driven decisions are also made in real time, choosing the best solution based on more than 100 criteria in under 20 milliseconds.

Postis Simplifies International Deliveries Using ML and Amazon SageMaker

Postis is a fast-growing tech startup from Romania that provides a real-time digital platform for logistics automation, optimization, and tracking. Its software-as-a-service offering helps retailers and other businesses improve the efficiency of their delivery systems using machine learning. In just 3 years, Postis has expanded to manage orders in 25 European countries.

Florin Bulgarov, Chief Data Scientist, Postis

Tracking data has also reduced order refusal rates—how often buyers don’t accept their delivery at their home—by 20 percent. “Some of our customers save hundreds of thousands of euros annually because our system reduces their refusal rates,” says Bulgarov.
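The SageMaker workflow described above follows the service's standard estimator pattern: point an estimator at a training script and an S3 data channel, then deploy the result. The following is a minimal sketch using the SageMaker Python SDK, not Postis's actual code; the training script, role ARN, and S3 paths are hypothetical placeholders.

import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()

# Hypothetical entry point and role; train.py would hold the model code.
estimator = SKLearn(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    framework_version="1.2-1",
    sagemaker_session=session,
)

# Launch a training job against historical delivery data in S3.
estimator.fit({"train": "s3://example-bucket/delivery-history/"})

# Deploy the trained model behind a real-time endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")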
Learning How ML Can Speed up Deliveries and Improve Their Quality
“We’re now prepared to offer our services in all of Europe and to continue to add features to increase our ecosystem’s reach. Building our service on AWS has helped us achieve a lot in a short amount of time.”

About Postis
Postis provides a real-time digital platform for logistics automation, optimization, and tracking, helping retailers and delivery companies master the last mile of the journey to their customers, ensuring a good experience from ordering all the way through to receiving goods. The fast-growing Romanian startup uses machine learning to help sellers find the most suitable and cost-effective delivery solution for every type of product, customer journey, or destination. It used AWS to create a scalable system with the power to run heavy machine learning workloads and support its global growth. Postis now serves more than 200 customers in 25 countries using AWS, including big names such as Ikea, Carrefour, Auchan, and Intersport.

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud.

Its databases scale automatically to meet variable demand using Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for any workload, and Amazon Relational Database Service (Amazon RDS), which allows it to set up, operate, and scale relational databases in the cloud. “We don’t have to intervene when traffic spikes. Everything scales automatically,” says Bulgarov. “This means we’re confident we’re providing a reliable service and our IT teams can focus on other tasks.”

Using APIs built by AWS, Postis can send real-time alerts on the progress of deliveries back to retailers or directly to consumers who have ordered goods. Providing consumers with direct access to tracking details helps Postis customers reduce the load on their customer service teams. Retailers received 25 percent fewer calls to their contact centers after they began using Postis’ real-time updates system to provide consumers with SMS or email alerts.

Scaled to handle 7–10 times more orders during busy shopping periods
Reduced customer service calls by 25%

Expanding Across Europe and the World Using AWS

Scaling to Meet Rising Demand During Busy Shopping Periods
Because it works with retailers, Postis needs to handle spikes in demand during busy shopping periods such as Black Friday and the Christmas season. It handles 7–10 times more orders during these peak times.

2022

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.

Postis spent a year collecting this data from customers and manually creating statistical formulas to produce useful insights. The process helped the team realize which data points were most valuable for training the model it now uses. “Our initial model ran too slowly on our on-premises resources, but the process was useful, because that’s when we started to understand the different factors that affect deliveries,” says Florin Bulgarov, chief data scientist at Postis.

Postis knew that by using ML it could provide the most efficient delivery options to its customers.
But training an ML model requires vast amounts of data about transport and logistics operations, including how long deliveries take, customer delivery preferences, the best alternatives between fulfilment points and delivery places, the performance of local couriers, and how often deliveries are rejected by recipients."
Power recommendation and search using an IMDb knowledge graph Part 1 _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Power recommendation and search using an IMDb knowledge graph – Part 1
by Gaurav Rele, Soji Adeshina, Divya Bhargavi, Karan Sindwani, Vidya Sagar Ravipati, and Matthew Rhodes | on 20 DEC 2022 | in Advanced (300), Amazon ML Solutions Lab, Amazon Neptune, Amazon OpenSearch Service, Amazon SageMaker, AWS Data Exchange

The IMDb and Box Office Mojo Movies/TV/OTT licensable data package provides a wide range of entertainment metadata, including over 1 billion user ratings; credits for more than 11 million cast and crew members; 9 million movie, TV, and entertainment titles; and global box office reporting data from more than 60 countries. Many AWS media and entertainment customers license IMDb data through AWS Data Exchange to improve content discovery and increase customer engagement and retention. In this three-part series, we demonstrate how to transform and prepare IMDb data to power out-of-catalog search for your media and entertainment use cases. In this post, we discuss how to prepare IMDb data and load the data into Amazon Neptune for querying. In Part 2, we discuss how to use Amazon Neptune ML to train graph neural network (GNN) embeddings from the IMDb graph. In Part 3, we walk through a demo application for out-of-catalog search that is powered by the GNN embeddings.

Solution overview
In this series, we use the IMDb and Box Office Mojo Movies/TV/OTT licensed data package to show how you can build your own applications using graphs. This licensable data package consists of JSON files with IMDb metadata for more than 9 million titles (including movies, TV and OTT shows, and video games) and credits for more than 11 million cast, crew, and entertainment professionals. IMDb’s metadata package also includes over 1 billion user ratings, as well as plots, genres, categorized keywords, posters, credits, and more. IMDb delivers data through AWS Data Exchange, which makes it incredibly simple for you to access data to power your entertainment experiences and seamlessly integrate with other AWS services. IMDb licenses data to a wide range of media and entertainment customers, including pay TV, direct-to-consumer, and streaming operators, to improve content discovery and increase customer engagement and retention. Licensing customers also use IMDb data to enhance in-catalog and out-of-catalog title search and power relevant recommendations.

We use the following services as part of this solution:
AWS Lambda
Amazon Neptune
Amazon Neptune ML
Amazon OpenSearch Service
AWS Glue
Amazon SageMaker notebooks
Amazon SageMaker Processing
Amazon SageMaker Training

The following diagram depicts the workflow for Part 1 of the three-part series.

In this post, we walk through the following high-level steps:
1. Provision Neptune resources with AWS CloudFormation.
2. Access the IMDb data from AWS Data Exchange.
3. Clone the GitHub repo.
4. Process the data in Neptune Gremlin format.
5. Load the data into a Neptune cluster.
6. Query the data using the Gremlin query language.
Prerequisites
The IMDb data used in this post requires an IMDb content license and a paid subscription to the IMDb and Box Office Mojo Movies/TV/OTT licensing package in AWS Data Exchange. To inquire about a license and access sample data, visit developer.imdb.com. Additionally, to follow along with this post, you should have an AWS account and familiarity with Neptune, the Gremlin query language, and SageMaker.

Provision Neptune resources with AWS CloudFormation
Now that you’ve seen the structure of the solution, you can deploy it into your account to run an example workflow. You can launch the stack in AWS Region us-east-1 on the AWS CloudFormation console by choosing Launch Stack. To launch the stack in a different Region, refer to Using the Neptune ML AWS CloudFormation template to get started quickly in a new DB cluster.

Stack creation takes approximately 20 minutes. You can monitor the progress on the AWS CloudFormation console. When the stack is complete, you’re ready to process the IMDb data. On the Outputs tab for the stack, note the values for NeptuneExportApiUri and NeptuneLoadFromS3IAMRoleArn. Then proceed to the following steps to gain access to the IMDb dataset.

Access the IMDb data
IMDb publishes its dataset once a day on AWS Data Exchange. To use the IMDb data, you first subscribe to the data in AWS Data Exchange, then you can export the data to Amazon Simple Storage Service (Amazon S3). Complete the following steps:
1. On the AWS Data Exchange console, choose Browse catalog in the navigation pane.
2. In the search field, enter IMDb.
3. Subscribe to either IMDb and Box Office Mojo Movie/TV/OTT Data (SAMPLE) or IMDb and Box Office Mojo Movie/TV/OTT Data.
4. Complete the steps in the following workshop to export the IMDb data from AWS Data Exchange to Amazon S3.

Clone the GitHub repository
Complete the following steps:
1. Open the SageMaker instance that you created from the CloudFormation template.
2. Clone the GitHub repository.

Process IMDb data in Neptune Gremlin format
To add the data into Amazon Neptune, we process the data into Neptune Gremlin format. From the GitHub repository, we run process_imdb_data.py to process the files. The script creates the CSVs to load the data into Neptune. Upload the data to an S3 bucket and note the S3 URI location. Note that for this post, we filter the dataset to include only movies. You need either an AWS Glue job or Amazon EMR to process the full data.

To process the IMDb data using AWS Glue, complete the following steps:
1. On the AWS Glue console, in the navigation pane, choose Jobs.
2. On the Jobs page, choose Spark script editor.
3. Under Options, choose Upload and edit existing script and upload the 1_process_imdb_data.py file.
4. Choose Create.
5. On the editor page, choose Job Details and add the following options:
   - For Name, enter imdb-graph-processor.
   - For Description, enter "processing IMDb dataset and convert to Neptune Gremlin format".
   - For IAM role, use an existing AWS Glue role or create an IAM role for AWS Glue. Make sure you give permission to your Amazon S3 location for the raw data and output data path.
   - For Worker type, choose G.2X.
   - For Requested number of workers, enter 20.
6. Expand Advanced properties. Under Job Parameters, choose Add new parameter and enter the following key-value pair:
   - For the key, enter --output_bucket_path.
   - For the value, enter the S3 path where you want to save the files. This path is also used to load the data into the Neptune cluster.
7. To add another parameter, choose Add new parameter and enter the following key-value pair:
   - For the key, enter --raw_data_path.
   - For the value, enter the S3 path where the raw data is stored.
8. Choose Save and then choose Run.

This job takes about 2.5 hours to complete.
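For reference, inside the Glue script the two job parameters defined above are read with AWS Glue's argument helper. A minimal sketch (the surrounding Spark processing logic is omitted):

import sys
from awsglue.utils import getResolvedOptions

# Resolve the job parameters passed via the Glue console; the names match
# the --output_bucket_path and --raw_data_path keys entered above.
args = getResolvedOptions(sys.argv, ["output_bucket_path", "raw_data_path"])

raw_data_path = args["raw_data_path"]            # S3 path of the raw IMDb data
output_bucket_path = args["output_bucket_path"]  # S3 path for the Gremlin CSVs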
The following table provides details about the nodes in the graph data model.

Description                                 | Label
Principal cast members                      | Person
Long format movie                           | Movie
Genre of movies                             | Genre
Keyword descriptions of movies              | Keyword
Shooting locations of movies                | Place
Ratings for movies                          | rating
Awards event where movie received an award  | awards

Similarly, the following table shows some of the edges included in the graph. There are 24 edge types in total.

Description                        | Label                          | From  | To
Movies an actress has acted in     | casted-by-actress              | Movie | Person
Movies an actor has acted in       | casted-by-actor                | Movie | Person
Keywords in a movie by character   | described-by-character-keyword | Movie | keyword
Genre of a movie                   | is-genre                       | Movie | Genre
Place where the movie was shot     | Filmed-at                      | Movie | Place
Composer of a movie                | Crewed-by-composer             | Movie | Person
Award nomination                   | Nominated_for                  | Movie | Awards
Award winner                       | Has_won                        | Movie | Awards

Load the data into a Neptune cluster
In the repo, navigate to the graph_creation folder and run the 2_load.ipynb notebook. To load the data into Neptune, use the %load command in the notebook, and provide your AWS Identity and Access Management (IAM) role ARN and the Amazon S3 location of your processed data:

role = '<neptune-load-role-arn>'
s3_location = '<s3-uri-of-processed-data>'
%load -l {role} -s {s3_location} --store-to load_id

The following screenshot shows the output of the command. Note that the data load takes about 1.5 hours to complete. To check the status of the load, use the following command:

%load_status {load_id['payload']['loadId']} --errors --details

When the load is complete, the status displays LOAD_COMPLETED, as shown in the following screenshot. All the data is now loaded into the graph, and you can start querying it.

Fig: Sample knowledge graph representation of movies in the IMDb dataset. The movies “Saving Private Ryan” and “Bridge of Spies” have common connections, such as an actor and a director, as well as indirect connections through movies like “The Catcher Was a Spy” in the graph network.

Query the data using Gremlin
To access the graph in Neptune, we use the Gremlin query language. For more information, refer to Querying a Neptune Graph. The graph consists of a rich set of information that can be queried directly using Gremlin. In this section, we show a few examples of questions that you can answer with the graph data. In the repo, navigate to the graph_creation folder and run the 3_queries.ipynb notebook. The following section goes over all the queries from the notebook.

Worldwide gross of movies that have been shot in New Zealand, with minimum 7.5 rating
The following query returns the worldwide gross of movies filmed in New Zealand with a minimum rating of 7.5:

%%gremlin --store-to result
g.V().has('place', 'name', containing('New Zealand')).in().has('movie', 'rating', gt(7.5)).dedup().valueMap(['name', 'gross_worldwide', 'rating', 'studio', 'id'])

The following screenshot shows the query results.

Top 50 movies that belong to action and drama genres and have Oscar-winning actors
In the following example, we want to find the top 50 movies in two different genres (action and drama) with Oscar-winning actors.
We can do this by using three different queries and merging the information using Pandas:

%%gremlin --store-to result_action
g.V().has('genre', 'name', 'Action').in().has('movie', 'rating', gt(8.5)).limit(50).valueMap(['name', 'year', 'poster'])

%%gremlin --store-to result_drama
g.V().has('genre', 'name', 'Drama').in().has('movie', 'rating', gt(8.5)).limit(50).valueMap(['name', 'year', 'poster'])

%%gremlin --store-to result_actors --silent
g.V().has('person', 'oscar_winner', true).in().has('movie', 'rating', gt(8.5)).limit(50).valueMap(['name', 'year', 'poster'])

The following screenshot shows our results.
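One way to merge the three stored result sets with Pandas is sketched below. This assumes each stored result (result_action, result_drama, result_actors) is a list of valueMap dictionaries; valueMap returns each property as a list, so single-element lists are unwrapped before merging. This is an illustrative sketch, not code from the notebook.

import pandas as pd

def to_df(result):
    # Unwrap single-element lists returned by valueMap into scalar columns.
    rows = [{k: v[0] if isinstance(v, list) else v for k, v in r.items()} for r in result]
    return pd.DataFrame(rows)

df_action, df_drama, df_actors = map(to_df, (result_action, result_drama, result_actors))

# Inner joins keep only titles that appear in all three result sets.
merged = df_action.merge(df_drama, on=["name", "year"]).merge(df_actors, on=["name", "year"])
print(merged.head(50))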
Top movies that have common keywords “tattoo” and “assassin”
The following query returns movies with the keywords “tattoo” and “assassin”:

%%gremlin --store-to result
g.V().has('keyword', 'name', 'assassin').in("described-by-plot-related-keyword").where(out("described-by-plot-related-keyword").has('keyword', 'name', 'tattoo')).dedup().limit(10).valueMap(['name', 'poster', 'year'])

The following screenshot shows our results.

Movies that have common actors
In the following query, we find movies that feature both Leonardo DiCaprio and Tom Hanks:

%%gremlin --store-to result
g.V().has('person', 'name', containing('Leonardo DiCaprio')).in().hasLabel('movie').out().has('person', 'name', 'Tom Hanks').path().by(valueMap('name', 'poster'))

We get the following results.

Conclusion
In this post, we showed you the power of the IMDb and Box Office Mojo Movies/TV/OTT dataset and how you can use it in various use cases by converting the data into a graph and querying it with Gremlin. In Part 2 of this series, we show you how to create graph neural network models on this data that can be used for downstream tasks. For more information about Neptune and Gremlin, refer to Amazon Neptune Resources for additional blog posts and videos.

About the Authors
Gaurav Rele is a Data Scientist at the Amazon ML Solutions Lab, where he works with AWS customers across different verticals to accelerate their use of machine learning and AWS Cloud services to solve their business challenges.
Matthew Rhodes is a Data Scientist I working in the Amazon ML Solutions Lab. He specializes in building machine learning pipelines that involve concepts such as natural language processing and computer vision.
Divya Bhargavi is a Data Scientist and Media and Entertainment Vertical Lead at the Amazon ML Solutions Lab, where she solves high-value business problems for AWS customers using machine learning. She works on image/video understanding, knowledge graph recommendation systems, and predictive advertising use cases.
Karan Sindwani is a Data Scientist at the Amazon ML Solutions Lab, where he builds and deploys deep learning models. He specializes in the area of computer vision. In his spare time, he enjoys hiking.
Soji Adeshina is an Applied Scientist at AWS, where he develops graph neural network-based models for machine learning on graphs tasks with applications to fraud and abuse, knowledge graphs, recommender systems, and life sciences. In his spare time, he enjoys reading and cooking.
Vidya Sagar Ravipati is a Manager at the Amazon ML Solutions Lab, where he leverages his vast experience in large-scale distributed systems and his passion for machine learning to help AWS customers across different industry verticals accelerate their AI and cloud adoption.

TAGS: Amazon Neptune ML, Knowledge Graph"
Power recommendations and search using an IMDb knowledge graph Part 3 _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Power recommendations and search using an IMDb knowledge graph – Part 3
by Divya Bhargavi, Soji Adeshina, Gaurav Rele, Karan Sindwani, Vidya Sagar Ravipati, and Matthew Rhodes | on 06 JAN 2023 | in Amazon ML Solutions Lab, Amazon Neptune, Amazon OpenSearch Service, Amazon SageMaker, Customer Solutions, Data Science & Analytics for Media, Media & Entertainment, Technical How-to

This three-part series demonstrates how to use graph neural networks (GNNs) and Amazon Neptune to generate movie recommendations using the IMDb and Box Office Mojo Movies/TV/OTT licensable data package, which provides a wide range of entertainment metadata, including over 1 billion user ratings; credits for more than 11 million cast and crew members; 9 million movie, TV, and entertainment titles; and global box office reporting data from more than 60 countries. Many AWS media and entertainment customers license IMDb data through AWS Data Exchange to improve content discovery and increase customer engagement and retention. The following diagram illustrates the complete architecture implemented as part of this series.

In Part 1, we discussed the applications of GNNs and how to transform and prepare our IMDb data into a knowledge graph (KG). We downloaded the data from AWS Data Exchange and processed it in AWS Glue to generate KG files. The KG files were stored in Amazon Simple Storage Service (Amazon S3) and then loaded in Amazon Neptune. In Part 2, we demonstrated how to use Amazon Neptune ML (in Amazon SageMaker) to train the KG and create KG embeddings. In this post, we walk you through how to apply our trained KG embeddings in Amazon S3 to out-of-catalog search use cases using Amazon OpenSearch Service and AWS Lambda. You also deploy a local web app for an interactive search experience. All the resources used in this post can be created using a single AWS Cloud Development Kit (AWS CDK) command, as described later in the post.

Background
Have you ever inadvertently searched for a content title that wasn’t available in a video streaming platform? If so, you may have found that instead of facing a blank search result page, you got a list of movies in the same genre or with shared cast or crew members. That’s an out-of-catalog search experience! Out-of-catalog search (OOC) is when you enter a search query that has no direct match in a catalog. This event frequently occurs in video streaming platforms that constantly purchase a variety of content from multiple vendors and production companies for a limited time. The absence of relevancy or mapping from a streaming company’s catalog to large knowledge bases of movies and shows can result in a subpar search experience for customers that query OOC content, thereby lowering the interaction time with the platform. This mapping can be done by manually mapping frequent OOC queries to catalog content or can be automated using machine learning (ML). In this post, we illustrate how to handle OOC by utilizing the power of the IMDb dataset (the premier source of global entertainment metadata) and knowledge graphs.
OpenSearch Service is a fully managed service that makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more. OpenSearch is an open source, distributed search and analytics suite derived from Elasticsearch. OpenSearch Service offers the latest versions of OpenSearch, support for 19 versions of Elasticsearch (versions 1.5 to 7.10), as well as visualization capabilities powered by OpenSearch Dashboards and Kibana (versions 1.5 to 7.10). OpenSearch Service currently has tens of thousands of active customers with hundreds of thousands of clusters under management, processing trillions of requests per month. OpenSearch Service offers kNN search, which can enhance search in use cases such as product recommendations, fraud detection, and image, video, and some specific semantic scenarios like document and query similarity. For more information about the natural language understanding-powered search functionalities of OpenSearch Service, refer to Building an NLU-powered search application with Amazon SageMaker and the Amazon OpenSearch Service KNN feature.

Solution overview
In this post, we present a solution to handle OOC situations through knowledge graph-based embedding search using the k-nearest neighbor (kNN) search capabilities of OpenSearch Service. The key AWS services used to implement this solution are OpenSearch Service, SageMaker, Lambda, and Amazon S3. Check out Part 1 and Part 2 of this series to learn more about creating knowledge graphs and GNN embeddings using Amazon Neptune ML.

Our OOC solution assumes that you have a combined KG obtained by merging a streaming company KG and the IMDb KG. This can be done through simple text processing techniques that match titles along with the title type (movie, series, documentary), cast, and crew. Additionally, this joint knowledge graph has to be trained to generate knowledge graph embeddings through the pipelines mentioned in Part 1 and Part 2. The following diagram illustrates a simplified view of the combined KG.

To demonstrate the OOC search functionality with a simple example, we split the IMDb knowledge graph into customer-catalog and out-of-customer-catalog parts. We mark the titles that contain “Toy Story” as out-of-customer-catalog resources and the rest of the IMDb knowledge graph as the customer catalog. In a scenario where the customer catalog is not enhanced or merged with external databases, a search for “toy story” would return any title that has the words “toy” or “story” in its metadata, with OpenSearch text search. If the customer catalog were mapped to IMDb, it would be easier to glean that the query “toy story” doesn’t exist in the catalog and that the top matches in IMDb are “Toy Story,” “Toy Story 2,” “Toy Story 3,” “Toy Story 4,” and “Charlie: Toy Story,” in decreasing order of relevance by text match. To get within-catalog results for each of these matches, we can generate the five closest movies in the customer catalog through kNN embedding similarity (of the joint KG) in OpenSearch Service. A typical OOC experience follows the flow illustrated in the following figure.

The following video shows the top five (number of hits) OOC results for the query “toy story” and relevant matches in the customer catalog (number of recommendations). Here, the query is matched to the knowledge graph using text search in OpenSearch Service. We then map the embeddings of the text match to the customer catalog titles using the OpenSearch Service kNN index. Because the user query can’t be directly mapped to the knowledge graph entities, we use a two-step approach: first find title-based query similarities, then find items similar to the title using knowledge graph embeddings.
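The two-step lookup can be expressed directly against the two indexes created later in this post (ooc_text for fuzzy title matching, ooc_knn for embedding search). The following is a minimal sketch using the opensearch-py client; the domain endpoint, field names, and omitted authentication are hypothetical placeholders rather than the exact mappings used by the stack.

from opensearchpy import OpenSearch

# Hypothetical domain endpoint; real deployments also need authentication.
client = OpenSearch(
    hosts=[{"host": "search-example.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

query = "toy story"

# Step 1: fuzzy text match of the user query against IMDb titles.
text_hits = client.search(
    index="ooc_text",
    body={"size": 5, "query": {"match": {"name": {"query": query, "fuzziness": "AUTO"}}}},
)["hits"]["hits"]

# Step 2: take the best match's embedding and find its k nearest neighbors
# among customer-catalog titles ('embedding' is an assumed field name).
best = text_hits[0]["_source"]
recommendations = client.search(
    index="ooc_knn",
    body={"size": 5, "query": {"knn": {"embedding": {"vector": best["embedding"], "k": 5}}}},
)["hits"]["hits"]

for rec in recommendations:
    print(rec["_source"]["name"])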
In the following sections, we walk through the process of setting up an OpenSearch Service cluster, creating and uploading knowledge graph indexes, and deploying the solution as a web application.

Prerequisites
To implement this solution, you should have an AWS account, familiarity with OpenSearch Service, SageMaker, Lambda, and AWS CloudFormation, and have completed the steps in Part 1 and Part 2 of this series.

Launch solution resources
The following architecture diagram shows the out-of-catalog workflow. You will use the AWS Cloud Development Kit (AWS CDK) to provision the resources required for the OOC search applications. The code to launch these resources performs the following operations:
- Creates a VPC for the resources.
- Creates an OpenSearch Service domain for the search application.
- Creates a Lambda function to process and load movie metadata and embeddings to OpenSearch Service indexes (LoadDataIntoOpenSearchLambda).
- Creates a Lambda function that takes the user query from a web app as input and returns relevant titles from OpenSearch (ReadFromOpenSearchLambda).
- Creates an API Gateway that adds an additional layer of security between the web app user interface and Lambda.

To get started, complete the following steps:
1. Run the code and notebooks from Part 1 and Part 2.
2. Navigate to the part3-out-of-catalog folder in the code repository.
3. Launch the AWS CDK from the terminal with the command bash launch_stack.sh.
4. Provide the two S3 file paths created in Part 2 as input: the S3 path to the movie embeddings CSV file and the S3 path to the movie node file.
5. Wait until the script provisions all the required resources and finishes running.
6. Copy the API Gateway URL that the AWS CDK script prints out and save it. (We use this for the Streamlit app later.)

Create an OpenSearch Service domain
For illustration purposes, you create a search domain in one Availability Zone on an r6g.large.search instance within a secure VPC and subnet. Note that the best practice would be to set up three Availability Zones with one primary and two replica instances.

Create an OpenSearch Service index and upload data
You use Lambda functions (created using the AWS CDK launch stack command) to create the OpenSearch Service indexes. To start the index creation, complete the following steps:
1. On the Lambda console, open the LoadDataIntoOpenSearchLambda Lambda function.
2. On the Test tab, choose Test to create and ingest data into the OpenSearch Service index.

The code for this Lambda function can be found in part3-out-of-catalog/cdk/ooc/lambdas/LoadDataIntoOpenSearchLambda/lambda_handler.py:
embedding_file = os.environ.get("embeddings_file")
movie_node_file = os.environ.get("movie_node_file")

print("Merging files")
merged_df = merge_data(embedding_file, movie_node_file)
print("Embeddings and metadata files merged")

print("Initializing OpenSearch client")
ops = initialize_ops()
indices = ops.indices.get_alias().keys()
print("Current indices are:", indices)

# This will take 5 minutes.
print("Creating knn index")
# Create the index using knn settings. Creating the OOC text index is not needed here.
create_index('ooc_knn', ops)
print("knn index created!")

print("Uploading the data for knn index")
response = ingest_data_into_ops(merged_df, ops, ops_index='ooc_knn', post_method=post_request_emb)
print(response)
print("Upload complete for knn index")

print("Uploading the data for fuzzy word search index")
response = ingest_data_into_ops(merged_df, ops, ops_index='ooc_text', post_method=post_request)
print("Upload complete for fuzzy word search index")

# Create the response and add some extra content to support CORS.
response = {
    "statusCode": 200,
    "headers": {
        "Access-Control-Allow-Origin": '*'
    },
    "isBase64Encoded": False
}

The function performs the following tasks:
- Loads the IMDb KG movie node file that contains the movie metadata and its associated embeddings from the S3 file paths that were passed to the stack creation file launch_stack.sh.
- Merges the two input files to create a single dataframe for index creation.
- Initializes the OpenSearch Service client using the Boto3 Python library.
- Creates two indexes for text (ooc_text) and kNN embedding search (ooc_knn), and bulk uploads data from the combined dataframe through the ingest_data_into_ops function.

This data ingestion process takes 5–10 minutes and can be monitored through the Amazon CloudWatch logs on the Monitoring tab of the Lambda function.

You create two indexes to enable text-based search and kNN embedding-based search. The text search maps the free-form query the user enters to the titles of the movies. The kNN embedding search finds the k closest movies to the best text match in the KG latent space to return as outputs.

Deploy the solution as a local web application
Now that you have a working text search and kNN index on OpenSearch Service, you’re ready to build an ML-powered web app. We use the streamlit Python package to create a front-end illustration for this application. The IMDb-Knowledge-Graph-Blog/part3-out-of-catalog/run_imdb_demo.py Python file in our GitHub repo has the required code to launch a local web app to explore this capability.

To run the code, complete the following steps:
1. Install the streamlit and aws_requests_auth Python packages in your local virtual Python environment through the following commands in your terminal:

pip install streamlit
pip install aws-requests-auth

2. Replace the placeholder for the API Gateway URL in the code with the one created by the AWS CDK:

api = '<api-gateway-url>/opensearch-lambda?q={query_text}&numMovies={num_movies}&numRecs={num_recs}'

3. Launch the web app with the command streamlit run run_imdb_demo.py from your terminal.

This script launches a Streamlit web app that can be accessed in your web browser. The URL of the web app can be retrieved from the script output, as shown in the following screenshot. The app accepts new search strings, a number of hits, and a number of recommendations. The number of hits corresponds to how many matching OOC titles we should retrieve from the external (IMDb) catalog. The number of recommendations corresponds to how many nearest neighbors we should retrieve from the customer catalog based on kNN embedding search.
Deploy the solution as a local web application

Now that you have a working text search and kNN index on OpenSearch Service, you’re ready to build an ML-powered web app. We use the Streamlit Python package to create a front end for this application. The IMDb-Knowledge-Graph-Blog/part3-out-of-catalog/run_imdb_demo.py Python file in our GitHub repo has the required code to launch a local web app to explore this capability. To run the code, complete the following steps:

1. Install the streamlit and aws_requests_auth Python packages in your local virtual Python environment with the following commands in your terminal:

pip install streamlit
pip install aws-requests-auth

2. Replace the placeholder for the API Gateway URL in the code with the one created by the AWS CDK:

api = '/opensearch-lambda?q={query_text}&numMovies={num_movies}&numRecs={num_recs}'

3. Launch the web app with the command streamlit run run_imdb_demo.py from your terminal.

This script launches a Streamlit web app that you can access in your web browser; the local URL is printed in the script output. The app accepts a search string, a number of hits, and a number of recommendations. The number of hits corresponds to how many matching OOC titles to retrieve from the external (IMDb) catalog. The number of recommendations corresponds to how many nearest neighbors to retrieve from the customer catalog based on kNN embedding search. See the following code:

search_text = st.sidebar.text_input("Please enter search text to find movies and recommendations")
num_movies = st.sidebar.slider('Number of search hits', min_value=0, max_value=5, value=1)
recs_per_movie = st.sidebar.slider('Number of recommendations per hit', min_value=0, max_value=10, value=5)
if st.sidebar.button('Find'):
    resp = get_movies()

This input (query, number of hits, and number of recommendations) is passed to the ReadFromOpenSearchLambda Lambda function created by the AWS CDK through an API Gateway request, in the following function:

def get_movies():
    result = requests.get(api.format(query_text=search_text,
                                     num_movies=num_movies,
                                     num_recs=recs_per_movie)).json()
    return result  # return added so the caller receives the parsed response

The results the Lambda function retrieves from OpenSearch Service are passed back through API Gateway and displayed in the Streamlit app.

Clean up

You can delete all the resources created by the AWS CDK with the command npx cdk destroy --app "python3 app.py" --all, run in the same instance (inside the cdk folder) that was used to launch the stack.

Conclusion

In this post, we showed you how to create a solution for OOC search using text- and kNN-based search with SageMaker and OpenSearch Service. You used custom knowledge graph model embeddings to find the nearest neighbors in your catalog to IMDb titles. You can now, for example, search for “The Rings of Power,” a fantasy series developed by Amazon Prime Video, on other streaming platforms and reason about how they could have optimized the search result. For more information about the code sample in this post, see the GitHub repo. To learn more about collaborating with the Amazon ML Solutions Lab to build similar state-of-the-art ML applications, see Amazon Machine Learning Solutions Lab. For more information on licensing IMDb datasets, visit developer.imdb.com.

About the Authors

Divya Bhargavi is a Data Scientist and Media and Entertainment Vertical Lead at the Amazon ML Solutions Lab, where she solves high-value business problems for AWS customers using machine learning. She works on image and video understanding, knowledge graph recommendation systems, and predictive advertising use cases.

Gaurav Rele is a Data Scientist at the Amazon ML Solutions Lab, where he works with AWS customers across different verticals to accelerate their use of machine learning and AWS Cloud services to solve their business challenges.

Matthew Rhodes is a Data Scientist I working in the Amazon ML Solutions Lab. He specializes in building machine learning pipelines that involve concepts such as natural language processing and computer vision.

Karan Sindwani is a Data Scientist at the Amazon ML Solutions Lab, where he builds and deploys deep learning models. He specializes in the area of computer vision. In his spare time, he enjoys hiking.

Soji Adeshina is an Applied Scientist at AWS, where he develops graph neural network-based models for machine learning on graphs tasks with applications to fraud and abuse, knowledge graphs, recommender systems, and life sciences. In his spare time, he enjoys reading and cooking.

Vidya Sagar Ravipati is a Manager at the Amazon ML Solutions Lab, where he leverages his vast experience in large-scale distributed systems and his passion for machine learning to help AWS customers across different industry verticals accelerate their AI and cloud adoption.
"
Prima Group Case Study.txt,"Prima Group Boosts Streaming Uptime and Creates Platform for Growth on AWS

About Prima Group

Prima Group is the first commercial TV station in the Czech Republic. Launched in 1993, it now comprises 10 terrestrial TV channels in the Czech Republic and one channel in Slovakia. Its streaming service, iPrima, became available in 2012 and offers a rich diversity of content, including in-house-produced Prima ORIGINALS programming, movies, TV series, sports, news, and documentaries. iPrima operates on two models: a free-to-view, advertising-supported service and an ad-free subscription service, and it holds streaming rights for popular international TV shows and movies. A third-tier subscription model with online-only premium content is in the works to boost platform growth and offer a rival to well-known streaming brands.

Television company Prima Group has migrated its iPrima streaming platform to AWS to boost its ability to scale and to support the growth of its subscriber base. Using AWS, it has improved scaling by a factor of 10 compared with its previous hosting company and reduced the IT staff needed for infrastructure maintenance by 50 percent. To support platform stability, it uses Amazon Elastic Kubernetes Service (Amazon EKS); after the migration, it was able to develop Kubernetes clusters in just 14 days, compared with the 2 years it took previously.

Opportunity | Improving Scaling to Maintain TV Content Availability
Content uptime is important for Prima Group’s streaming platform. The platform regularly streams selected Prima ORIGINALS a week ahead of the official broadcast dates and commonly sees a surge in viewers when the most popular shows are aired. The company started to notice scaling issues during peak prime-time periods, especially when two of the Czech Republic’s most beloved TV series—ZOO and Slunečná—were shown. With availability starting to become an issue, Prima Group needed to address its ability to scale and grow.

To keep customers satisfied and support the growth of its iPrima streaming service, Prima Group decided to move to a cloud setup built on Amazon Web Services (AWS). By doing this, it could benefit from managed services, modernize its monolithic applications, and improve scaling. iPrima’s previous IT setup relied on a hosting service using a small number of bare-metal servers. With future growth in mind, the company chose AWS to help it increase the stability and availability of its streaming platform so it could scale to meet traffic peaks, provide a good customer experience, and support the introduction of new subscriber models.

Solution | Fast Kubernetes Development and Automated Cluster Management

Prima Group began by moving its streaming platform databases and applications to AWS as part of a two-phase project, completing the migration in 5 months. The new setup meant 50 percent fewer people were needed for infrastructure maintenance, so more of the IT team could focus on higher-value tasks such as product development. The first part of Prima Group’s modernization has been to containerize applications to increase platform stability and support further scaling, using Amazon EKS, a managed container service to run and scale Kubernetes applications in the cloud or on premises.

By building on AWS, Prima Group got its Kubernetes clusters up and running in just 14 days, compared with the 2 years it took previously. “Our hosting company spent a long time to get Kubernetes clusters into production—even then, they weren’t in proper working order,” says Marek Kouřimský, chief of development at Prima Group. “It’s hard to start from scratch when developing Kubernetes, and there are not many people who are proficient in it. With AWS, you can simply click and have clusters available quickly—plus, as a managed service, all the painful administration parts are managed automatically.”

Solution | Ensuring Customer Satisfaction and Supporting Streaming Platform Subscriber Growth

Using AWS, Prima Group has been able to scale its iPrima platform by a factor of 10, helping it increase content availability and serve more users at the same time. “If users note any issues with our streaming service, we get complaints, plus we risk upsetting our advertisers,” says Kouřimský. “We now have a stable and reliable service and feel assured we can grow without issues.”

Prima Group also uses AWS for its news website, operated in partnership with CNN, to speed up image loading times, making the process 1.5 times faster than before. It uses Amazon CloudFront, which securely delivers content with low latency and high transfer speeds. “We’ve been very impressed with this service, and we are now looking at using it for video files,” adds Kouřimský.

Outcome | Ongoing Modernization to Minimize Manual Processes

The company is now working on its next stage of modernization to connect internal and external services and minimize manual processes. For this, it is using Amazon Managed Streaming for Apache Kafka (Amazon MSK) to securely stream data with a fully managed, highly available Apache Kafka service. “We can now manage new subscribers more easily,” says Kouřimský. “Instead of manually creating an event when a new user comes on board, the system change is automatic.”
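The case study doesn’t publish Prima Group’s code, but the pattern it describes, emitting an event to Kafka when a new subscriber signs up so downstream systems update automatically, might look roughly like this with the kafka-python client against an Amazon MSK bootstrap endpoint. The broker address, topic name, and event fields are invented for illustration.

import json
from kafka import KafkaProducer  # pip install kafka-python

# Placeholder bootstrap server; real values come from the MSK cluster.
producer = KafkaProducer(
    bootstrap_servers=["b-1.example.kafka.eu-central-1.amazonaws.com:9094"],
    security_protocol="SSL",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_new_subscriber(user_id, plan):
    """Emit a subscriber event so downstream systems react automatically."""
    producer.send("iprima.subscribers", {"user_id": user_id, "plan": plan})
    producer.flush()

publish_new_subscriber("user-42", "ad-free")  # hypothetical usage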
Using AWS, Prima Group’s streaming platform is set for growth. “Managing our infrastructure is much easier, plus now we scale infinitely and keep our streaming customers happy,” says Kouřimský. “This gives us total peace of mind for the future of our streaming service.” (2023)
"
Processing Data 10x Faster Using Amazon Redshift Serverless with BlocPower _ BlocPower Case Study _ AWS.txt,"Processing Data 10x Faster Using Amazon Redshift Serverless with BlocPower

Learn how BlocPower, a cleantech company, improved the performance of its energy analytics by 10x using Amazon Redshift Serverless.

About BlocPower

Founded in 2014, BlocPower is a Brooklyn-based climate technology leader whose mission is to make American buildings and cities smarter, greener, and healthier. With a diverse, inclusive workforce that consists of 60 percent minorities and 30 percent women, the BlocPower team provides energy analytics to building managers and property owners in over 10 cities, helping them understand the potential of retrofitting their buildings with renewable energy sources. As of 2022, the company has successfully implemented electrification, solar, and other energy-efficiency measures in more than 4,000 buildings.

Opportunity | Using Amazon Redshift Serverless to Improve Data Warehousing for BlocPower

BlocPower wanted to improve the user experience of its flagship product, BlocMaps, a software-as-a-service (SaaS) solution that provides actionable insights for building decarbonization to municipalities and utility companies, so that it could more effectively support its customers’ efforts to reduce greenhouse gas emissions. With clean power at the core of its mission, BlocPower built a high-performance compute environment on Amazon Web Services (AWS) that lets it minimize its own carbon footprint while processing data from over 100 million energy profiles of buildings across the United States.

Since 2016, BlocPower has been building its data processing pipeline on AWS, adopting several cloud-based compute solutions, including Amazon Elastic Compute Cloud (Amazon EC2). Initially, its DevOps team scaled the pipeline by selecting different Amazon EC2 instances for running its clusters, which could take 2–3 hours to complete. “As we were gaining more customers on BlocMaps and working with more data, we were having to scale our cluster horizontally,” says Ankur Garg, director of data architecture and analytics at BlocPower.
The company had also migrated its data to a combination of cloud-based storage solutions: it stores the data that it gathers from 100 million building profiles in Amazon Simple Storage Service (Amazon S3) and warehouses it in Amazon Redshift. As the complexity of BlocPower’s data profiles grew, the company wanted more compute resources and resource management options for its teams, along with a data warehouse that would automatically meet its workload-performance requirements and reduce the administrative burden. In July 2022, BlocPower learned about Amazon Redshift Serverless, which companies use to get insights from their data in seconds without having to manage data warehouse infrastructure. “The AWS team gave us an introduction to Amazon Redshift Serverless, which was very helpful and resolved any kind of apprehension that we had with using it moving forward,” says Sean Davis, data architect at BlocPower.

Solution | Processing Data 10x Faster to Deliver Actionable Energy Analytics

The BlocPower team worked alongside the AWS team to create a proof of concept to see how Amazon Redshift Serverless would affect performance and the handling of increased data volume for BlocMaps. “We performed benchmark tests with BlocMaps, which is what really raised our eyebrows,” says Davis. “Our application performed so much better, and our billing benefited from Amazon Redshift Serverless.” Specifically, the startup could process and query its data in minutes, 10 times faster compared with its previous architecture, and it saw processing times decrease by 90 percent while optimizing compute costs.

After adopting Amazon Redshift Serverless, BlocPower reduced the time its DevOps engineers spent scaling clusters. By implementing Amazon Redshift Serverless alongside Amazon S3 and Amazon Redshift, BlocPower also gained the ability to query its data across numerous data sources, including Amazon S3 buckets and data pulled through remote APIs with AWS Glue, which helps companies discover, prepare, and integrate all their data at virtually any scale. BlocPower intermittently runs processes to merge data sources and perform data transformations, then loads the results into Amazon Redshift. With Amazon Redshift Serverless clusters that automatically scale to usage spikes, BlocPower improved its runtime performance by a factor of 10. “We can query our data in near real time,” says Davis. “We also saw an improvement in our APIs. Those two factors made using Amazon Redshift Serverless a no-brainer.”
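The case study stays at the architecture level, but a pipeline step that queries a Redshift Serverless workgroup typically goes through the Redshift Data API, roughly as sketched below. The workgroup, database, and table names are placeholders, not BlocPower’s.

import time
import boto3

client = boto3.client("redshift-data")

# Submit a query to a serverless workgroup (identifiers are placeholders).
resp = client.execute_statement(
    WorkgroupName="analytics-serverless",
    Database="buildings",
    Sql="SELECT building_id, estimated_savings FROM energy_profiles LIMIT 10;",
)

# The Data API is asynchronous: poll until the statement finishes.
status = "SUBMITTED"
while status not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)
    status = client.describe_statement(Id=resp["Id"])["Status"]

if status == "FINISHED":
    rows = client.get_statement_result(Id=resp["Id"])["Records"]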
Under its previous model, the BlocMaps application could take 20–30 seconds to load building profiles for its customers. Now, the application delivers these insights in under 5 seconds, an improvement that has resulted in positive customer feedback. These backend performance gains have rendered a smoother user experience: customers can view, filter, and visualize decarbonization metrics for buildings in specific geographic locations faster than before, and the reduced front-end latency is critical when demonstrating the application to new customers. “The performance of our BlocMaps application is one of our top priorities from a revenue standpoint,” says Garg. “Good word of mouth helps us enter into new markets and new cities.”

Outcome | Investing in a Serverless-First Approach to Support Social Equity

BlocPower will continue to investigate AWS serverless solutions to improve the performance of its products. Based on its experience with this project, the company plans to migrate the Internet of Things data that it collects to Amazon Redshift Serverless as well. “The amount of time that it would’ve taken us to deliver insights from raw data would’ve been unimaginable if we had tried to set up our infrastructure on premises,” says Garg. “Working on AWS has been a huge advantage for us. The amount of time and money that we save helps us deliver energy insights to additional low- and moderate-income households.”

Not only has BlocPower increased its revenue opportunities, but the startup has also optimized its compute costs: having adopted Amazon Redshift Serverless, it no longer pays for its clusters’ idle time. “The serverless model has been perfect for us,” says Davis. “We pay less for our processes, and we get more compute resources when we need it.
Overall, it’s been a very positive experience.” (2022)
"
Purple Technology Case Study _ AWS Step Functions.txt,"Purple Technology Responds Rapidly to Changing Regulations and Customer Needs Using AWS

About Purple Technology

Based in the Czech Republic, Purple Technology is a financial technology company founded in 2011. It provides an online trading platform for brokerages and their clients around the world, building apps that complement online trading platforms and support the changing and demanding needs of brokers. Purple’s solution enables tens of thousands of clients to trade many billions of dollars of assets each month.

Responding to Changing Regulations

FinTech company Purple Technology builds applications and services that brokerage firms use to onboard customers efficiently. End users self-manage their accounts and portfolios, which leaves brokerages free to focus on core functions such as client services and risk management. Trading and investing online relies on transparency and on trust in the platform, in the brokers, and in the identity of traders, and Purple Technology helps build that trust.

Users creating new trading accounts with brokers must follow a stringent onboarding process that complies with complex rules and regulations to verify their identities. The registration process checks many conditions, some via API, to confirm that the new customer is not disqualified from trading, and it also supports Know Your Customer (KYC) user verification and anti-money laundering (AML) processes. These rules vary from territory to territory and are subject to sudden changes in regulation, and even to evolving legal interpretations, so Purple’s solution needs to capture them accurately to run checks during new trader account registrations.

While the Purple application has a user-friendly front end, the backend was a complex code base. Changes to the rules required developers to delve into the code to make amendments and keep the app compliant, and questions from product managers about the rules and processes required developers to create diagrams that would quickly become outdated. Managing the rules was a complex, time-consuming process, often requiring developer resources that would have been better spent on product innovation, not maintenance.

Purple needed a more transparent and effective way to manage the complex ruleset that governs customer onboarding and to respond more quickly to changing rules. It found that solution in AWS Step Functions, a low-code, visual workflow service that developers can use to build applications. “Onboarding involves complex processes that we have to be able to understand and update easily,” says Jan Červinka, director of engineering at Purple Technology. “We can now map and design all of these processes using AWS Step Functions.” Purple maps out the workflows for each process so that it can easily fix any issues and demonstrate to regulators how customer checks are carried out, and rather than drawing on developers to make changes, it can use trained, non-technical people to carry out maintenance.
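The case study doesn’t show Purple’s state machines, but an onboarding flow of the kind described, sequential checks registered as a visual workflow, can be sketched as an Amazon States Language definition created through boto3. The state names, Lambda ARNs, and IAM role below are invented for illustration.

import json
import boto3

# All ARNs and names are placeholders, not Purple Technology's.
definition = {
    "Comment": "Illustrative onboarding flow: KYC check, then AML screening",
    "StartAt": "KycCheck",
    "States": {
        "KycCheck": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:111122223333:function:kyc-check",
            "Next": "AmlScreening",
        },
        "AmlScreening": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:111122223333:function:aml-screen",
            "Next": "Approved",
        },
        "Approved": {"Type": "Succeed"},
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="onboarding-demo",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/stepfunctions-demo-role",
)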
To simplify this maintenance process further, Purple built a Slack extension that allows rules to be repaired and amended from the messaging platform. This also means customer service teams at brokers can operate the tool and provide a responsive service to their own customers. “Using AWS we have significantly improved the self-service capabilities of the customer support teams,” says Červinka. “That leads to a much faster time to resolution of certain issues customers may encounter.”

Faster Development and Product Maintenance

In addition to better compliance with regulations through improved transparency, the Purple IT team has improved and accelerated its software development processes using AWS. Because product documentation is generated automatically from AWS Step Functions, it is always up to date: if a regulator or legal counsel asks to review the application’s processes, Purple can share the documentation to demonstrate how it complies. “It’s much easier to produce visual reports and diagrams for our compliance stakeholders,” says Červinka. “That frees up IT teams from having to provide complex, time-intensive—and not really fun—support so they can instead focus on building new features.”

On AWS, Purple also has greater freedom to innovate. Using AWS infrastructure as code, developers can spin up test environments to work on new features, and these test environments consume fewer resources than production sites. “Using AWS we can experiment and play with new ideas. And we have the confidence that we can stay on top of the changes to regulations through better control and transparency with AWS services,” says Filip Pýrek, serverless architect at Purple Technology.

Developers can also devote more time to improving the platform rather than troubleshooting issues, because AWS Step Functions has reduced the time required for debugging, increasing the team’s speed of development. Communication with business decision-makers on new software features is now more productive, too. “It’s easy to read and modify AWS Step Functions,” says Pýrek.
“We use it for prototyping when designing features, so non-technical colleagues can understand and discuss new processes.”

“Using AWS, we have greater visibility into our complex processes, making them simple to visualize, manage, and update,” says Červinka. “This means we can be more responsive to any new or changing regulations and to customer needs.”

Benefits of AWS: increases transparency of complex processes; reduces the maintenance burden on the developer team; resolves software issues more efficiently; boosts the speed of development of new features. (2022)
"
Queensland University of Technology Advances Global Research on Rare Diseases Using the AWS Cloud.txt,"Queensland University of Technology Advances Global Research on Rare Diseases Using the AWS Cloud

About the Office of eResearch, Queensland University of Technology

The Office of eResearch at the Queensland University of Technology (eResearch@QUT) supports QUT researchers and external stakeholders, using innovative end-to-end digital and data solutions and strategies to deliver real-world impact. The team designs digital platforms to support and facilitate research projects from around the world, helping researchers apply digital technology, including custom cloud solutions and quantitative research methods with machine learning, to promote data-driven discoveries.

Opportunity | Setting Up a Digital Health Framework for Clinical Research

To aid rare disease research, the eResearch@QUT team developed the Trial Ready Registry Framework (TRRF), an open-source digital platform to collect and analyze health data. The TRRF has been deployed for Angelman Syndrome (AS), a rare neurodevelopmental disorder. Through this cloud-based platform, individuals living with AS, and their parents or guardians, can self-register from around the world and share patient-reported information to accelerate clinical research on the natural progression of the disease and facilitate clinical trial participation.
“AS is a complex neurogenetic condition with multiple genotypes and phenotypes,” states Megan Cross, chairperson of the Foundation for Angelman Syndrome (FAST). “Prior to the creation of this platform, there was no capacity to collect, collate, and disseminate patient-reported data on a global scale. The TRRF has allowed parents of patients to engage with research, empowering their journey with AS.”

Development of the TRRF has been funded by both nonprofit organizations and national competitive funding schemes, including MTPConnect and the National Health and Medical Research Council. Operating on a strict budget requires the eResearch@QUT team to cost-effectively manage the TRRF’s scalability, security, and compute capacity to support the global AS registry.

Solution | Delivering an Efficient, Highly Available, and Secure Platform

Professor Matthew Bellgard, director of eResearch at QUT and TRRF project lead, says, “Working with Amazon Web Services (AWS) to host our open-source digital platform has helped the team to continually apply optimizations, reducing the total cost of ownership and improving the overall security posture of the platform.”

eResearch@QUT selected Amazon Relational Database Service (Amazon RDS) with Amazon Aurora Serverless to deliver high availability, compute performance, and scalability for its databases. For added efficiency, the team deployed Amazon Elastic Container Service (Amazon ECS) on AWS Fargate, which eliminated the management of virtual machines, let the team focus on application development, and automated the deployment of new registries.

eResearch@QUT housed the AS registry in an Amazon Virtual Private Cloud (Amazon VPC) to secure patient information and added AWS WAF to monitor and block web traffic that may pose a threat to the platform. “With multiple layers of security and, in particular, AWS’s service level agreement of up to 99.99% uptime, we are assured of protecting the patient data we collect, and meeting the stringent data handling and governance guidelines of Australia and other countries,” shares Bellgard.

The platform has also opened the possibility of launching registries for other rare diseases, faster. Bellgard, who also chairs the Asia Pacific Economic Community Rare Disease Network, notes that the eResearch@QUT DevOps team can now deploy a complete registry for a new research project within hours, compared with the days it took before working with AWS.
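The case study doesn’t include deployment code, but standing up a containerized registry on Amazon ECS with AWS Fargate can be sketched with boto3 roughly as follows. The cluster, image, roles, and network identifiers are placeholders rather than QUT’s actual configuration.

import boto3

ecs = boto3.client("ecs")

# Register a placeholder task definition for a registry web container.
task_def = ecs.register_task_definition(
    family="trrf-registry-demo",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "registry",
        "image": "111122223333.dkr.ecr.ap-southeast-2.amazonaws.com/registry:latest",
        "portMappings": [{"containerPort": 8000}],
        "essential": True,
    }],
)

# Launch the task inside a private subnet of the VPC.
ecs.run_task(
    cluster="trrf-demo-cluster",
    launchType="FARGATE",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroups": ["sg-0123456789abcdef0"],
        "assignPublicIp": "DISABLED",
    }},
)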
Outcome | Expanding the Use of the TRRF for Other Diseases and Clinical Care Settings

Most recently, the TRRF has been deployed to establish the first Australian patient and clinical Motor Neurone Disease (MND) registry through the MiNDAUS partnership, a national collaboration of clinicians and scientists, consumer advocacy groups, and consumers working to improve person-centered care for people living with MND by providing data-driven policy direction in health care and research. Associate Professor Paul Talman, clinical lead of the MiNDAUS Registry, shares, “The TRRF allows us to move from a rather static state, where researchers obtain snapshots of data at any given timepoint, to a more dynamic health care tool that the patients control.”

Bellgard adds that the TRRF is applicable across all clinical care settings. Because the digital research platform is cloud-based, it acts as a central coordinating data repository that patients, clinicians, and researchers can access from any device. This allows for current and convenient information sharing, closer engagement between patients and clinicians, and, ultimately, improved patient care.

Benefits: 99.99% uptime; enhanced security profile; expansion of the user network; novel research opportunities; end-to-end data governance frameworks. (2023)
"
Query Response Time Improved Using Amazon Redshift Serverless _ Playrix Case Study _ AWS.txt,"Query Response Time Improved Using Amazon Redshift Serverless with Playrix

Learn how Playrix, a leader in mobile gaming, improved query response time using Amazon Redshift Serverless.

About Playrix

Based in Ireland, Playrix is one of the largest gaming companies in Europe and among the top three most successful mobile developers in the world. Every month, more than 100 million people play the company’s popular games, which include Gardenscapes, Fishdom, Manor Matters, Homescapes, Wildscapes, and Township.

Opportunity | Using Amazon Redshift Serverless to Analyze Near-Real-Time Player Data

Playrix, which had already been using solutions from Amazon Web Services (AWS) for 5 years, wanted to advance its use of Amazon Redshift to enhance the analytics it uses to market to players, scaling its data analytics without disrupting other systems and processes. Part of Playrix’s marketing strategy is to analyze past player data, dating back 4–5 years, to identify inactive players, reengage them, and inspire them to start gaming again. To do so, it needed to analyze a massive quantity of historical data efficiently, without disrupting other compute processes, and it wanted predictable response times for the one-time analytics that guide marketing spend. “Our stakeholders want to see dashboards with data from the previous day, including financial data used for quick decision-making,” says Igor Ivanov, technical director at Playrix.
“So, it’s important for us to avoid any delays in the data.”

The company used Amazon Redshift to achieve these aims, eventually upgrading to three nodes to meet its scaling needs. However, Playrix still had 600 TB of data to migrate to Amazon Redshift, and three nodes weren’t enough. When Amazon Redshift Serverless became available, Playrix knew it was the right solution to house the company’s data and to meet its needs during times when higher performance is necessary. “Amazon Redshift Serverless is great for achieving the on-demand high performance that we need for massive queries,” says Ivanov.

Solution | Using Amazon Redshift Serverless to Efficiently Run Queries on 600 TB of Data

Playrix began implementing Amazon Redshift Serverless in April 2022 and finished in July of that year. Initially, as a proof of concept, Playrix upgraded its cluster from 3 to 12 nodes and saw how much more efficiently its teams could perform complicated analyses; when Amazon Redshift Serverless became available, Playrix was one of the first companies to pilot the service. The company migrated its remaining 600 TB of data from the past 4–5 years into an Amazon Redshift cluster, where it can also be accessed through Amazon Redshift Serverless, with no need to store two copies of the data. Playrix added Amazon Redshift Serverless to its provisioned cluster using the data-sharing feature, so unpredictable one-time queries and regular queries can access the same data—resulting in cost savings for Playrix.

Using Amazon Redshift Serverless, Playrix can query its historical data without disrupting regular analytics jobs, and it has decreased its response times for these massive queries to 4–5 minutes. “For analysts, it’s very important to be able to use the history of our games for decision-making,” says Ivanov. “Now that we’re using Amazon Redshift Serverless to more efficiently analyze results from the past 4 years, we can develop more accurate machine learning models.”
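The data-sharing setup described above is configured in SQL. Driven from Python through the Redshift Data API, it might look roughly like the following; the datashare, cluster, workgroup, and namespace identifiers are placeholders, not Playrix’s.

import boto3

client = boto3.client("redshift-data")

# On the provisioned producer cluster: create and grant the datashare.
for sql in [
    "CREATE DATASHARE player_history;",
    "ALTER DATASHARE player_history ADD SCHEMA analytics;",
    "ALTER DATASHARE player_history ADD TABLE analytics.events;",
    "GRANT USAGE ON DATASHARE player_history "
    "TO NAMESPACE '11111111-2222-3333-4444-555555555555';",
]:
    client.execute_statement(
        ClusterIdentifier="demo-producer-cluster",
        Database="prod",
        DbUser="admin",
        Sql=sql,
    )

# On the serverless consumer workgroup: expose the share as a database.
client.execute_statement(
    WorkgroupName="adhoc-serverless",
    Database="prod",
    Sql="CREATE DATABASE history FROM DATASHARE player_history "
        "OF NAMESPACE '66666666-7777-8888-9999-000000000000';",
)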
Outcome | Driving Revenue with Historic Player Data

Since adopting Amazon Redshift Serverless, Playrix has improved its ability to rapidly analyze near-real-time player data and allocate marketing spend as part of its demand-generation activities. Handling spikes in user queries is no longer a problem, and the company is better equipped to perform research using historical player data to identify and reengage inactive gamers; in the past, running queries on old data risked disrupting other critical processes, so the team avoided doing so. Now Playrix can run massive queries on player data cost-effectively and without downtime, getting more value out of its historic data, and the resulting analytics drive marketing strategies that reengage inactive players and generate sales revenue.

Playrix has also achieved significant cost savings with a more flexible architecture that combines fixed clusters and Amazon Redshift Serverless: it has reduced its monthly costs by 20 percent, saving 20 percent of the cost of its marketing stack, and has decreased its cost of customer acquisition. In addition, analysts now work more productively and save time when performing complex operations. “We now have more time for experimenting, developing solutions, and planning new research,” says Ivanov. Due to its ongoing success using AWS solutions, Playrix plans to continue using AWS for data analysis and other business needs. “We have a long-term relationship with AWS and use AWS solutions everywhere—in our games, development, researching, and more,” says Ivanov. “Adding Amazon Redshift Serverless to our solution has been another win.” (2022)
"
Rackspace Automates Infrastructure Management across Cloud Providers Using AWS Systems Manager _ Rackspace Case Study _ AWS.txt,"Rackspace Automates Infrastructure Management across Cloud Providers Using AWS Systems Manager

Learn how Rackspace Technology used AWS Systems Manager to automate the management of multicloud and hybrid infrastructures, saving hundreds of labor hours monthly, cutting costs, and reducing complexity.

About Rackspace Technology

Founded in 1998, Rackspace Technology is a global cloud solutions and services company that specializes in creating and managing multicloud solutions across infrastructure, applications, data, and security. It serves customers in 120 countries.

Opportunity | Finding Scalability on AWS Systems Manager

Manually managing hundreds of thousands of compute instances across multicloud and hybrid environments is a tremendous challenge—not to mention one that can become expensive.
Technology services company Rackspace Technology (Rackspace) set out to resolve that dilemma for its customers by building a solution on Amazon Web Services (AWS). Rackspace helps organizations across 120 countries adopt modern technologies and intelligently manage and optimize them, specializing in solutions for hybrid and multicloud environments. “Many customers want us to shepherd them through the complexity and help them best take advantage of the technology,” says Josh Prewitt, chief product officer at Rackspace. Since first using AWS in 2015, Rackspace has transformed from building and running many of its applications internally to building them on AWS, and it is now an AWS Partner.

Managing multicloud environments at scale reliably and cost-effectively was a challenge because organizations had to manually perform activities across fleets of hundreds of thousands of different compute instances. If the Rackspace team detected a security vulnerability on a customer’s system, or if a customer requested a patching activity, a Rackspace employee had to log in to the customer’s infrastructure, investigate and troubleshoot the issue, and perform manual patching. “Having humans doing that one by one on a large scale is not sustainable,” says Brad Gignac, principal engineer at Rackspace. “It also delays resolution time.”

Rackspace needed a solution that could run both on premises and in the cloud. “We wanted one tool to use across the full suite of solutions that Rackspace manages,” says Gignac. AWS Systems Manager met that requirement and offered programmability. “That’s a key differentiator of AWS: we can use AWS Systems Manager to run shell scripts on individual VMs and do advanced orchestration,” Gignac continues.

Solution | Supporting Automation, Staff Productivity, and Transparency on AWS

Rackspace’s solution, called VM Management, supports managing customers’ virtual machines (VMs) across AWS, other cloud providers, and multicloud environments. It runs on AWS Systems Manager, which supports managing servers running on AWS and in on-premises data centers through a single interface. Using AWS Systems Manager for VM Management and several other of its managed services, Rackspace transformed its core offerings from manual, resource-intensive processes into highly scalable, automated solutions that reduce labor and decrease costs for Rackspace and its customers.

On AWS Systems Manager, VM Management reduces complexity for customers by providing a single-pane view of their environments, even hybrid and multicloud ones. “More or less everything that AWS Systems Manager can do is exposed through an API,” says Gignac. That capability means Rackspace can automatically aggregate all the infrastructure data in AWS Systems Manager and expose it to customers through a user-friendly control panel; previously, compiling data on disparate systems was challenging for customers. “Using the consistent dashboard improves customers’ security and peace of mind because they better understand what is powering their applications,” says Prewitt. With that visibility, decision makers can be agile and quickly adapt to industry changes to pursue business goals.
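The case study doesn’t include code, but the API-driven aggregation Gignac describes is available to any account through boto3. A minimal sketch of listing every managed node registered with Systems Manager, whether it runs on AWS or on premises:

import boto3

ssm = boto3.client("ssm")

# Page through all nodes registered with Systems Manager, including
# on-premises servers registered through hybrid activations.
paginator = ssm.get_paginator("describe_instance_information")
for page in paginator.paginate():
    for node in page["InstanceInformationList"]:
        print(node["InstanceId"], node.get("PlatformName"), node["PingStatus"])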
VM Management automates the traditionally manual management of VMs and bare-metal infrastructure. Historically, organizations have each needed a large information technology team to complete time-consuming tasks such as patching, agent distribution, server diagnostics, and issue remediation. “AWS Systems Manager has been a cornerstone of the automation and capabilities that we’ve built,” says Prewitt. Now customers can outsource that responsibility to Rackspace and eliminate the cost and complexity of patching their own infrastructure, and the automation also improves security by avoiding the errors associated with manual tasks.

SmartTickets, a component in VM Management and other Rackspace services that performs automatic remediation and gathers data in response to monitoring events in customer systems, handled more than 38,000 incidents across all of Rackspace’s managed products in just 2 months, between August and September 2021. Of those incidents, Rackspace used AWS Systems Manager to send 10,660 automated responses, which not only saved 1,480 labor hours and reduced costs but also drove faster response times for customers. Overall, Rackspace automated 70 percent of manual remediation. “Now we can provide services to customers at more economical rates,” says Prewitt.

Rackspace also uses Amazon CloudWatch, a monitoring and observability service, to support VM Management and other core offerings. The Amazon CloudWatch agent on the VMs performs monitoring and alerting based on events in customers’ infrastructure; during the same 2-month span in 2021, Rackspace used Amazon CloudWatch to ingest 14,670 alarm events across all its products that use the service. Rackspace also used AWS Systems Manager to automate more than 150 runbooks in its Advanced Monitoring & Resolution solution, which provides real-time monitoring and alerts for customers’ infrastructure. Each runbook performs diagnostics and troubleshooting for a specific issue detected with Amazon CloudWatch. “Instead of having to manually gather that information, Rackspace employees can see it right there,” says Prewitt.

In 2015, Rackspace began taking advantage of AWS Systems Manager for various products, and in 2019 it extended its use of AWS services to other cloud environments. Since 2019, Rackspace has run VM Management on AWS Systems Manager to power patching activities across all the major cloud providers it supports. Using AWS Systems Manager, Rackspace performs mass patching at scale, covering more than 62,000 VMs across all its managed services, and it has reduced overhead and improved support efficiency by using a single solution.
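Mass patching through Systems Manager is typically driven by the standard AWS-RunPatchBaseline document. A sketch of what a fleet-wide invocation looks like with boto3 (the tag key and values are illustrative):

import boto3

ssm = boto3.client("ssm")

# Patch every instance tagged for the hypothetical "prod" patch group.
# Setting Operation to "Scan" would report compliance without installing.
ssm.send_command(
    Targets=[{"Key": "tag:PatchGroup", "Values": ["prod"]}],
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},
    MaxConcurrency="10%",  # patch in waves
    MaxErrors="5%",        # stop if too many nodes fail
)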
Outcome | Taking Automation to the Next Level on AWS

Rackspace plans to work with customers to develop custom runbooks instead of generic ones. “In some cases, we’ll use AWS Systems Manager to automate and orchestrate the response and resolution of those runbooks,” says Gignac.

On AWS, Rackspace solved a major industry challenge with a solution that saves time, cuts costs, and reduces complexity for its customers and itself. “When things go wrong, customers expect Rackspace to step in and act swiftly to solve their problem,” says Prewitt. “Using AWS Systems Manager, we can do that much more quickly.” (2023)
"
Razer Deepened Gamer Engagement using Amazon Personalize _ Video Testimonial _ AWS.txt,"Razer Deepened Gamer Engagement Using Amazon Personalize

Learn how Razer built and maintained a robust personalization engine to keep gamers engaged using Amazon Personalize.

About Razer

Razer is a leading lifestyle brand for gamers. With a fan base that spans every continent, the company has designed and built a gamer-focused marketplace of hardware, software, and services.

Razer wanted to provide personalized hardware recommendations across a number of different applications and data domains to deepen engagement with its growing number of gamers, and it was keen to test the possibilities of machine learning (ML). As a small team, however, maintaining the infrastructure to train and serve a recommendation model, scaling the right resources while staying accurate across multiple business domains, posed a challenge. Razer turned to Amazon Web Services (AWS) and used the intelligent user segmentation and advanced filtering features of Amazon Personalize. With Amazon Personalize, click-through rates for Razer Synapse, its unified cloud-based hardware configuration tool, were 10x better than industry standards, generating additional revenue for the business.
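The testimonial doesn’t include code, but serving recommendations from a trained Amazon Personalize campaign is a single runtime call. The campaign ARN and user ID below are placeholders.

import boto3

runtime = boto3.client("personalize-runtime")

# Placeholder ARN; a real campaign exists only after training a solution.
resp = runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:111122223333:campaign/hw-recs",
    userId="user-123",
    numResults=5,
)
for item in resp["itemList"]:
    print(item["itemId"], item.get("score"))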
“Implementing personalized recommendations in Razer Synapse has enabled us to see a click-through rate 10x better than industry standards, generating additional revenue for the business,” says Hong Jie Wee, Big Data Lead at Razer, Inc. “Leveraging ML and Amazon Personalize made it easier and more convenient for us to maintain a personalization system.” (2023)
"
Reaching Remote Learners Globally Using Amazon CloudFront _ Doping Hafiza Case Study _ AWS.txt,"Reaching Remote Learners Globally Using Amazon CloudFront with Doping Hafiza

Learn how Doping Hafiza transformed its educational technology services on AWS with the help of Sufle, an AWS Partner.

About Doping Hafiza

Founded in 2011, Doping Hafiza is an educational technology company that provides video-based learning environments for Turkish primary, middle, and secondary school students. Using machine learning and other advanced technologies, the company helps millions of students prepare for exams with personalized studying programs, coaching services, lectures, and more.

Opportunity | Using AWS Services to Deliver Educational Video Content for Doping Hafiza

During the COVID-19 pandemic, demand for online learning services increased exponentially around the world, and educational technology providers like Doping Hafiza needed to adapt quickly. To continue providing advanced learning tools, the company needed scalable infrastructure that could deliver video content to remote learners at low latency. This was not a simple task: Doping Hafiza needed to migrate a vast amount of data from multiple on-premises systems and third-party providers so that learners could enjoy a seamless experience across different channels.

Before migrating to AWS, Doping Hafiza relied on several on-premises systems to store its data and delivered educational content to students through multiple websites and video players. “Doping Hafiza uploaded all its video content to public streaming services and embedded those videos on its website,” says Gizem Gür, senior solutions architect and cofounder of Sufle. “One of those public providers asked for thousands of dollars because these videos generated a large amount of traffic. This is when Doping Hafiza engaged Sufle.”

As an AWS Advanced Tier Services Partner, Sufle has supported organizations in their digital transformations for over 10 years and won AWS Partner of the Year for Turkey in 2022. After participating in a webinar hosted by Sufle, Doping Hafiza saw an opportunity to modernize its content delivery network. “Doping Hafiza wanted a solution to host all its content, optimized for millions of users throughout Turkey and beyond,” says Gür. “The video-based learning environment required minimal latency and conversion because students might not have a very good internet connection or speed at home. After this engagement, we started developing and designing the new infrastructure together.”

To address these challenges, Doping Hafiza migrated to Amazon Web Services (AWS) with the help of Sufle, centralizing its data in the cloud and adopting 32 AWS services to improve the speed, latency, and scalability of its content delivery. Through this engagement, Doping Hafiza has vastly enhanced its service quality and is now better equipped to expand its offerings and serve learners worldwide.
Amazon S3 “On AWS, we helped Doping Hafiza transform its content delivery service in a short amount of time,” says Gür. “With managed AWS services, the operational cost was minimal, and with the right cloud architecture, it was simple for Doping Hafiza to migrate. Everything related to content delivery was made possible in one place.” Deutsch Outcome | Reaching Remote Learners on a Global Scale with Cloud Infrastructure Founded in 2011, Doping Hafiza is an educational technology company that provides video-based learning environments for Turkish primary, middle, and secondary school students. Using machine learning and other advanced technologies, the company helps millions of students prepare for exams with personalized studying programs, coaching services, lectures, and more. Before migrating to AWS, Doping Hafiza relied on several on-premises systems to store its data and delivered educational content to students using multiple websites and video players. “Doping Hafiza uploaded all its video content to public streaming services and embedded those videos on its website,” says Gizem Gür, senior solutions architect and cofounder of Sufle. “One of those public providers asked for thousands of dollars because these videos generated a large amount of traffic. This is when Doping Hafiza engaged Sufle.” Tiếng Việt No interruptions With the global scalability of Amazon CloudFront, Doping Hafiza increased the speed of content delivery by five times. Additionally, the company has not seen any interruptions or downtime since migrating to AWS, which has improved service quality. “Before the migration, we had some problems with availability and latency. Sometimes, our service crashed,” says Habil Bozali, head of software architecture at Doping Hafiza. “In the first 6 months on AWS, we saw no issues with Amazon CloudFront or any other AWS media service.” All Doping Hafiza’s data is stored, encrypted, and versioned using Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. Using Amazon S3, Doping Hafiza can manage and search for content in a centralized location rather than multiple systems, reducing the time and effort to access storage by 95 percent. Italiano ไทย During the COVID-19 pandemic, demand for online learning services increased exponentially around the world, and educational technology providers like Doping Hafiza needed to quickly adapt to the new reality. To continue providing advanced learning tools, the company needed scalable infrastructure that could deliver video content to remote learners at low latency. However, this was not a simple task. Doping Hafiza needed to migrate a vast amount of data from multiple on-premises systems and third-party providers so that learners could enjoy a seamless experience across different channels. Amazon CloudFront Contact Sales Now that all Doping Hafiza’s media services are hosted on AWS, Sufle is helping the company migrate the last of its applications to AWS. Once the migration is complete, Doping Hafiza’s next step is to expand its services to learners on a global scale—with the speed, cost effectiveness, and scalability of the cloud. Learn more » Founded in 2011, Doping Hafiza is an educational technology company that provides video-based learning environments for Turkish primary, middle, and secondary school students. Its advanced technologies empower millions of learners. 
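As a rough illustration of the automated convert-and-distribute step Bozali describes, the sketch below submits a MediaConvert job from Python with boto3. This is not Doping Hafiza's code: the region, bucket names, and role ARN are hypothetical, and the job settings are heavily abbreviated (a real job also needs input selectors and per-output codec settings).

import boto3

REGION = 'eu-west-1'  # hypothetical

# MediaConvert uses an account-specific endpoint; discover it first.
bootstrap = boto3.client('mediaconvert', region_name=REGION)
endpoint = bootstrap.describe_endpoints()['Endpoints'][0]['Url']
mediaconvert = boto3.client('mediaconvert', region_name=REGION, endpoint_url=endpoint)

# Submit one job that fans a high-quality source out to an adaptive
# (HLS) set of bit rates for students on slow connections.
mediaconvert.create_job(
    Role='arn:aws:iam::123456789012:role/ExampleMediaConvertRole',  # hypothetical
    Settings={
        'Inputs': [{'FileInput': 's3://example-source/lesson-01.mp4'}],
        'OutputGroups': [{
            'Name': 'Adaptive HLS',
            'OutputGroupSettings': {
                'Type': 'HLS_GROUP_SETTINGS',
                'HlsGroupSettings': {
                    'Destination': 's3://example-delivery/hls/lesson-01',
                    'SegmentLength': 6,
                    'MinSegmentLength': 0,
                },
            },
            # One Output per rung of the bit-rate ladder; codec settings
            # are omitted here for brevity.
            'Outputs': [],
        }],
    },
)

From there, the renditions written to the delivery bucket can be served to learners through Amazon CloudFront.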
Outcome | Reaching Remote Learners on a Global Scale with Cloud Infrastructure

“On AWS, we helped Doping Hafiza transform its content delivery service in a short amount of time,” says Gür. “With managed AWS services, the operational cost was minimal, and with the right cloud architecture, it was simple for Doping Hafiza to migrate. Everything related to content delivery was made possible in one place.”

Now that all Doping Hafiza’s media services are hosted on AWS, Sufle is helping the company migrate the last of its applications to AWS. Once the migration is complete, Doping Hafiza’s next step is to expand its services to learners on a global scale—with the speed, cost effectiveness, and scalability of the cloud.

AWS Services Used
Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer experience.
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
AWS Elemental MediaConvert is a file-based video transcoding service with broadcast-grade features. It allows you to easily create video-on-demand (VOD) content for broadcast and multiscreen delivery at scale." Read Innovates Video Call Transcription Using Amazon EC2 G5 Instances Powered by NVIDIA _ Read Case Study _ AWS.txt,"Read Innovates Video Call Transcription Using Amazon EC2 G5 Instances Powered by NVIDIA (2022)

Learn how software company Read reduced costs by 20–30 percent using Amazon EC2 G5 Instances.

About Read
Read is a Seattle-based videoconferencing software company founded in 2021. It offers an innovative transcription tool that augments near-real-time text transcription with information on listener sentiment and engagement to make meetings better.

Read, a videoconferencing software startup, needed to reduce costs to sustain its growing business. The company relies on an always-on automatic speech recognition service to provide near-real-time augmented transcriptions of video meetings. When Read’s customer base grew suddenly, Read began looking for a more cost-effective solution to support its new customers. Read originally used CPUs to process audio and video and provide augmented transcripts to its clients. However, in Read’s unique use case, which requires always-on audio streaming, a quick explosion of growth made its tools too cost prohibitive. In late 2021, Read executives decided to move away from the original transcription tool. After researching options and creating a successful proof of concept, the company switched to Riva and ran it on Amazon EC2 G5 Instances—high-performance GPU-based instances for graphics-intensive applications and ML inference.

Benefits:
Up to 30% reduction in costs
1-second response times, reduced from 30–60 seconds
30 concurrent streams per machine, up from 0.2 streams per machine on CPU-only boxes
40–50 ms latency response on a per-request basis

Opportunity | Building Voice-to-Text Transcription Using Services from AWS and NVIDIA

Founded in mid-2021, Read meets the needs of today’s hybrid and remote working environments. As the number and frequency of online meetings increased, so did the need for innovative near-real-time voice-to-text transcription. One part of Read’s services is the innovative tool Transcription 2.0. In addition to automatic transcriptions of meetings, the tool uses machine learning (ML) to offer insights about audience sentiment and engagement. It also identifies impactful statements throughout the meeting. This allows meeting hosts—such as managers, professors, recruiters, and presenters—to adjust content around what participants focus on and what they ignore.

When Transcription 2.0 is integrated into videoconferencing software, like Zoom, Microsoft Teams, and Google Meet, Read can measure the effectiveness of an organization’s meetings over the course of a month and make specific recommendations to improve the quality of the meetings. After that, Read can continue monitoring meetings to make sure that its customers achieve their goals.

Read uses Amazon Web Services (AWS) to host its solution on Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. To power its transcription tool, the company also used NVIDIA Riva (Riva), a GPU-accelerated speech artificial intelligence software development kit from NVIDIA, an AWS Partner. Using Riva on Amazon EC2, Read improved the performance of its transcription tool while keeping costs low.
Solution | Saving up to 30% on Costs Using Amazon EC2 G5 Instances and NVIDIA’s Riva

Read runs Riva on Amazon EC2 G5 Instances to deliver highly accurate transcription in near real time. In addition to this natural-language-processing use case, Read also uses Amazon EC2 G5 Instances for training and deploying its video models. Within 6 weeks of adopting Riva and Amazon EC2 G5 Instances, Read deployed a solution that minimizes costs and maximizes performance. “Deploying Riva on Amazon EC2 G5 Instances was very easy,” says Dillon Dukek, Read’s senior software engineer. “We didn’t have to train any of our own acoustic or language models to convert audio to text. It’s a bundled solution that can just be rolled out.”

Finding highly performant and cost-effective technology was the driving force behind Read’s decision to choose an AWS solution. The high performance of Amazon EC2 G5 Instances powered by NVIDIA A10G Tensor Core GPUs makes this solution a particularly cost-efficient choice for making ML inferences and training moderately complex ML models, like those needed for natural language processing. In fact, Amazon EC2 G5 Instances offer anywhere between 15 and 40 percent better price performance compared with the previous generation of GPU-based instances. “We significantly improved costs per meeting hour,” says Rob Williams, vice president of engineering at Read. After transitioning to Amazon EC2 G5 Instances, Read saw a 20–30 percent reduction in costs.

Using Amazon EC2 G5 Instances also led to multiple performance benefits. Amazon EC2 G5 Instances are built on the AWS Nitro System to maximize resource efficiency through a combination of dedicated hardware and a lightweight hypervisor, facilitating faster innovation and enhanced security. On its previous CPUs, Read saw only about 0.2 streams per machine, but using Riva on Amazon EC2 G5 Instances, it can process about 30 concurrent streams per machine with only 40–50 milliseconds of latency per request.

Read’s solution also led to faster response times for users. Dukek says that, with Read’s old tools, the real-time meeting reports and feedback were showing up after about 30–60 seconds. Such high latency wasn’t effective at helping presenters to course correct their meetings when quality and engagement dropped. “Now, we have that down to the 1-second range,” he says. “We’re providing feedback on a quick basis, and people can see a near-real-time view of how their meetings are going.” Williams adds, “We view the ability to have these effective metrics in response to the ongoing conversation as a critical part of our value offering.” Now, Read can deliver its feedback and meeting reports to more clients much faster than it could before.
Outcome | Accelerating Continued Growth Using Amazon EC2 G5 Instances

Using Riva and Amazon EC2 G5 Instances, Read improved costs and performance. In pursuit of the company mission to make virtual human interactions better and smarter, Read expects to continue scaling up. As Read expands, the company will continue to deploy sophisticated ML models on Amazon EC2 G5 Instances powered by NVIDIA GPUs to meet its growing needs. Williams says, “Using AWS, we have the ability to scale and extend our quotas and the resources to support our business.”

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
Amazon EC2 G5 instances are the latest generation of NVIDIA GPU-based instances that can be used for a wide range of graphics-intensive and machine learning use cases.
The AWS Nitro System is the underlying platform for our next generation of EC2 instances that enables AWS to innovate faster, further reduce cost for our customers, and deliver added benefits like increased security and new instance types." Realizing the Full Value of EHR in a Digital Health Environment on AWS with Tufts Medicine _ Tufts Medicine Case Study _ AWS.txt,"Realizing the Full Value of EHR in a Digital Health Environment on AWS with Tufts Medicine (2023)

Learn how Tufts Medicine implemented its EHR in the cloud and migrated 42 applications in 14 months using AWS.

About Tufts Medicine
Previously known as Wellforce, Tufts Medicine is an integrated health system in Massachusetts comprising three hospitals, a home-healthcare network, and more than 15,000 healthcare providers.

Tufts Medicine wanted to modernize its healthcare technology to provide better care for patients by leaving traditional data centers and liberating the organization from technical debt. It decided to implement Epic as its electronic health record (EHR) system and migrate 42 integrated third-party applications to Amazon Web Services (AWS). Tufts Medicine deployed its entire EHR environment—including production systems, disaster recovery, and training—using AWS infrastructure in 14 months. The organization stands out as the first health system to implement a full Epic environment on AWS. Through this migration, Tufts Medicine consolidated technology stacks, modernized its applications, optimized cost, and most important, delivered new and improved services for patients and care providers.

Benefits:
Migrated 42 applications in 14 months
Four million patient records transferred to initialize EHR in the cloud
Achieved significant cost savings
Improved patient experience, system response, workflow consistency, and employee satisfaction
Opportunity | Migrating EHR and Integrated Applications to Create Cloud-Based Systems and Data Estate for Tufts Medicine

Tufts Medicine is a healthcare system comprising three hospitals, a home-healthcare network, and a large clinical integrated network in eastern Massachusetts. The organization serves four million patients and involves 18,000 healthcare workers and employees in providing care. Before the migration, its portfolio consisted of more than 800 applications with duplicative licensing across hospitals. Each hospital also maintained independent IT departments.

Tufts Medicine began laying the groundwork for its digital transformation in 2020, extensively evaluating cloud providers. “Our goal was to get out of the data center entirely,” says Jeremy Marut, chief of digital modernization at Tufts Medicine. “We wanted to implement a single EHR in the cloud and migrate all critical applications so that we could take advantage of high availability and modern technologies.”

Tufts Medicine chose AWS as its cloud provider for the elastic compute, processing, and memory capabilities to handle Tufts Medicine’s EHR implementation. In February 2021, Tufts Medicine started working alongside AWS Professional Services, a global team of experts that help organizations achieve their desired business outcomes using AWS. Tufts Medicine used AWS Professional Services to build out the cloud infrastructure and configure the required cloud services to operate its healthcare environment.

Solution | Building the Cloud Technology Stack and Data Estate for Tufts Medicine in 14 Months

The team at Tufts Medicine compressed a 6-month on-premises hardware procurement-and-deployment process into a 4-week cloud deployment for its AWS landing zone and Epic build environment. Tufts Medicine also migrated 42 business-critical third-party applications to AWS in just 9 months, with a go-live by the end of March 2022. “We’re in the business of saving lives,” says Marut. “We’re not here to run data centers, and using AWS frees us from the rote, mundane work that is typically required, freeing up funds and minds to change healthcare for the better.”

In only 14 months, Tufts Medicine deployed a new EHR implementation entirely using AWS infrastructure, across two independent AWS Regions, with three independent Availability Zones per Region. This deployment provides Tufts Medicine with multitiered disaster recovery. Failure of a data center in the primary production Region is quickly addressed by requesting additional capacity in the remaining Availability Zones.
As part of its cloud migration, Tufts Medicine consolidated 109 patient portals into a single portal—myTuftsMed. Through the portal, patients communicate with caregivers, request prescriptions, access test results, and manage appointments in three languages. To streamline virtual care, Tufts Medicine used Amazon Connect to set up a contact center that can scale to support millions of patients. Tufts Medicine also built chatbots using Amazon Lex, a fully managed artificial intelligence service with advanced natural language models, so that patients can get answers and access services simply and quickly. “Our goal was to remove the barriers for patients and consumers to access the healthcare that they need,” says Dr. Shafiq Rab, chief data officer, system chief information officer, and executive vice president at Tufts Medicine.

Tufts Medicine also has improved its security and monitoring. “Security is critical,” says Dr. Rab. “As part of this implementation, we made sure everything was encrypted, both in motion and at rest. It’s amazing how many legacy applications were not supporting these best practices.” Tufts Medicine uses AWS Control Tower—a service to set up and govern a secure multiaccount AWS environment—to automate alerting and monitoring. In addition, Tufts Medicine has implemented Amazon CloudWatch to collect and visualize near-real-time logs, metrics, and event data in automated dashboards, streamlining infrastructure and application maintenance. Using AWS Control Tower and Amazon CloudWatch, Tufts Medicine has deployed canaries to test infrastructure operations and can automatically launch remediation efforts based on best practices. “When things aren’t working, we’re not only alerting, but we’re also autohealing and autofixing,” says Marut. “We’ve improved safety and security by using these tools.”

Outcome | Serving Patients and Redefining the Provider Experience Using AWS

The systems migration to AWS has improved user response time for Tufts Medicine’s care providers and its IT department. The system delivers submillisecond speeds, whether providers are accessing it from the clinic, the hospital, or a home office. “Our physicians and our nursing leaders are expressing delight. Now, caregivers can quickly access the EHR and supporting information that they need,” says Dr. Rab. “Because of the architecture that we defined to deploy Epic on AWS, we are seeing very fast response times.” Additionally, IT personnel can focus on innovations rather than routine maintenance. “It opens our teams up to do things that will drive healthcare innovation,” says Marut. “The morale is through the roof.”

By migrating to AWS, Tufts Medicine has saved significant cost while increasing the speed to innovation across the health system. For example, it defines all systems architecture using infrastructure-as-code templates in AWS CloudFormation, a service for users to model, provision, and manage AWS and third-party resources. Using AWS CloudFormation, Tufts Medicine can spin up an environment in less than 6 minutes. When it needed to ingest document images for four million patients, initial estimates indicated that the process would take 200 days. Instead, Tufts Medicine completed the process in 72 hours.
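As a rough sketch of driving that kind of template-based environment spin-up from Python, the snippet below launches and waits on a CloudFormation stack. These are not Tufts Medicine's templates; the stack name, template URL, and parameter are hypothetical.

import boto3

cfn = boto3.client('cloudformation')

# Launch an entire environment from a versioned infrastructure-as-code
# template; names and parameters here are hypothetical.
cfn.create_stack(
    StackName='example-ehr-support-env',
    TemplateURL='https://example-templates.s3.amazonaws.com/environment.yaml',
    Parameters=[
        {'ParameterKey': 'EnvironmentSize', 'ParameterValue': 'production'},
    ],
    Capabilities=['CAPABILITY_NAMED_IAM'],
)

# Block until the stack finishes building before handing it to users.
cfn.get_waiter('stack_create_complete').wait(StackName='example-ehr-support-env')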
Through the end of 2023, Tufts Medicine is working to rationalize and migrate its 800-application portfolio to AWS, a move that the company expects will save millions of dollars per year. Tufts Medicine is also adding seven more languages to myTuftsMed, working to open a virtual pharmacy, and developing machine learning capabilities to provide precision therapies to patients. “Using AWS, our goal at Tufts Medicine is not only to redefine healthcare but to reinvent the way that it is delivered,” says Dr. Rab.

AWS Services Used
AWS CloudFormation lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code.
AWS Professional Services’ offerings use a unique methodology based on Amazon’s internal best practices to help you complete projects faster and more reliably, while accounting for evolving expectations and dynamic team structures along the way.
AWS Control Tower simplifies AWS experiences by orchestrating multiple AWS services on your behalf while maintaining the security and compliance needs of your organization.
Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance." Recommend and dynamically filter items based on user context in Amazon Personalize _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Recommend and dynamically filter items based on user context in Amazon Personalize
by Gilles-Kuessan Satchivi, Aditya Pendyala, and Prabhakar Chandrasekaran | on 29 JUN 2023 | in Amazon Personalize, Intermediate (200), Technical How-to

Organizations are continuously investing time and effort in developing intelligent recommendation solutions to serve customized and relevant content to their users. The goals can be many: transform the user experience, generate meaningful interaction, and drive content consumption. Some of these solutions use common machine learning (ML) models built on historical interaction patterns, user demographic attributes, product similarities, and group behavior. Besides these attributes, context (such as weather, location, and so on) at the time of interaction can influence users’ decisions while navigating content.

In this post, we show how to use the user’s current device type as context to enhance the effectiveness of your Amazon Personalize-based recommendations. In addition, we show how to use such context to dynamically filter recommendations. Although this post shows how Amazon Personalize can be used for a video on demand (VOD) use case, it’s worth noting that Amazon Personalize can be used across multiple industries.

What is Amazon Personalize?

Amazon Personalize enables developers to build applications powered by the same type of ML technology used by Amazon.com for real-time personalized recommendations. Amazon Personalize is capable of delivering a wide array of personalization experiences, including specific product recommendations, personalized product reranking, and customized direct marketing. Additionally, as a fully managed AI service, Amazon Personalize accelerates customer digital transformations with ML, making it easier to integrate personalized recommendations into existing websites, applications, email marketing systems, and more.

Why is context important?
Using a user’s contextual metadata such as location, time of day, device type, and weather provides personalized experiences for existing users and helps improve the cold-start phase for new or unidentified users. The cold-start phase refers to the period when your recommendation engine provides non-personalized recommendations due to the lack of historical information regarding that user. In situations where there are other requirements to filter and promote items (say in news and weather), adding a user’s current context (season or time of day) helps improve accuracy by including and excluding recommendations.

Let’s take the example of a VOD platform recommending shows, documentaries, and movies to the user. Based on behavior analysis, we know VOD users tend to consume shorter-length content like sitcoms on mobile devices and longer-form content like movies on their TV or desktop.

Solution overview

Expanding on the example of considering a user’s device type, we show how to provide this information as context so that Amazon Personalize can automatically learn the influence of a user’s device on their preferred types of content. In the architecture pattern we follow, context is derived automatically from Amazon CloudFront headers, which are included in requests to a REST API in Amazon API Gateway that calls an AWS Lambda function to retrieve recommendations. Refer to the full code example available at our GitHub repository. We provide an AWS CloudFormation template to create the necessary resources. In the following sections, we walk through how to set up each step of the sample architecture pattern.

Choose a recipe

Recipes are Amazon Personalize algorithms that are prepared for specific use cases. Amazon Personalize provides recipes based on common use cases for training models. For our use case, we build a simple Amazon Personalize custom recommender using the User-Personalization recipe. It predicts the items that a user will interact with based on the interactions dataset. Additionally, this recipe also uses items and users datasets to influence recommendations, if provided. To learn more about how this recipe works, refer to User-Personalization recipe.

Create and import a dataset

Taking advantage of context requires specifying context values with interactions so recommenders can use context as features when training models. We also have to provide the user’s current context at inference time. The interactions schema (see the following code) defines the structure of historical and real-time users-to-items interaction data. The USER_ID, ITEM_ID, and TIMESTAMP fields are required by Amazon Personalize for this dataset. DEVICE_TYPE is a custom categorical field that we are adding for this example to capture the user’s current context and include it in model training. Amazon Personalize uses this interactions dataset to train models and create recommendation campaigns.

{
    "type": "record",
    "name": "Interactions",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        { "name": "USER_ID", "type": "string" },
        { "name": "ITEM_ID", "type": "string" },
        { "name": "DEVICE_TYPE", "type": "string", "categorical": true },
        { "name": "TIMESTAMP", "type": "long" }
    ],
    "version": "1.0"
}
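The post moves straight from the schema documents to dataset creation; registering a schema with Amazon Personalize looks roughly like the following sketch, which assumes the Avro document above is held in a Python dict named interactions_schema (creating the items schema is analogous):

import json

import boto3

personalize = boto3.client('personalize')

create_schema_response = personalize.create_schema(
    name = 'personalize-auto-context-demo-interactions-schema',
    schema = json.dumps(interactions_schema)  # the Avro document shown above
)
interactions_schema_arn = create_schema_response['schemaArn']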
Similarly, the items schema (see the following code) defines the structure of product and video catalog data. The ITEM_ID is required by Amazon Personalize for this dataset. CREATION_TIMESTAMP is a reserved column name, but it is not required. GENRE and ALLOWED_COUNTRIES are custom fields that we are adding for this example to capture the video’s genre and the countries where the videos are allowed to be played. Amazon Personalize uses this items dataset to train models and create recommendation campaigns.

{
    "type": "record",
    "name": "Items",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        { "name": "ITEM_ID", "type": "string" },
        { "name": "GENRE", "type": "string", "categorical": true },
        { "name": "ALLOWED_COUNTRIES", "type": "string", "categorical": true },
        { "name": "CREATION_TIMESTAMP", "type": "long" }
    ],
    "version": "1.0"
}

In our context, historical data refers to end-user interaction history with videos and items on the VOD platform. This data is usually gathered and stored in the application’s database. For demo purposes, we use Python’s Faker library to generate some test data mocking the interactions dataset with different items, users, and device types over a 3-month period.

After the schemas and input file locations are defined, the next steps are to create a dataset group, include the interactions and items datasets within the dataset group, and finally import the training data into the datasets, as illustrated in the following code snippets:

create_dataset_group_response = personalize.create_dataset_group(
    name = "personalize-auto-context-demo-dataset-group"
)

create_interactions_dataset_response = personalize.create_dataset(
    name = "personalize-auto-context-demo-interactions-dataset",
    datasetType = 'INTERACTIONS',
    datasetGroupArn = dataset_group_arn,
    schemaArn = interactions_schema_arn
)

create_interactions_dataset_import_job_response = personalize.create_dataset_import_job(
    jobName = "personalize-auto-context-demo-dataset-import",
    datasetArn = interactions_dataset_arn,
    dataSource = {
        "dataLocation": "s3://{}/{}".format(bucket, interactions_filename)
    },
    roleArn = role_arn
)

create_items_dataset_response = personalize.create_dataset(
    name = "personalize-auto-context-demo-items-dataset",
    datasetType = 'ITEMS',
    datasetGroupArn = dataset_group_arn,
    schemaArn = items_schema_arn
)

create_items_dataset_import_job_response = personalize.create_dataset_import_job(
    jobName = "personalize-auto-context-demo-items-dataset-import",
    datasetArn = items_dataset_arn,
    dataSource = {
        "dataLocation": "s3://{}/{}".format(bucket, items_filename)
    },
    roleArn = role_arn
)

Gather historical data and train the model

In this step, we define the chosen recipe and create a solution and solution version referring to the previously defined dataset group. When you create a custom solution, you specify a recipe and configure training parameters. When you create a solution version for the solution, Amazon Personalize trains the model backing the solution version based on the recipe and training configuration. See the following code:

recipe_arn = "arn:aws:personalize:::recipe/aws-user-personalization"

create_solution_response = personalize.create_solution(
    name = "personalize-auto-context-demo-solution",
    datasetGroupArn = dataset_group_arn,
    recipeArn = recipe_arn
)

create_solution_version_response = personalize.create_solution_version(
    solutionArn = solution_arn
)
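The create_* calls above are asynchronous, and the solution version must finish training before it can back a campaign. The post defers this to its notebook; a minimal polling loop (an illustrative sketch, not code from the post) looks like this:

import time

solution_version_arn = create_solution_version_response['solutionVersionArn']

status = None
while status not in ('ACTIVE', 'CREATE FAILED'):
    describe_response = personalize.describe_solution_version(
        solutionVersionArn = solution_version_arn
    )
    status = describe_response['solutionVersion']['status']
    print('Solution version status:', status)
    if status not in ('ACTIVE', 'CREATE FAILED'):
        time.sleep(60)  # training usually takes a while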
Create a campaign endpoint

After you train your model, you deploy it into a campaign. A campaign creates and manages an auto-scaling endpoint for your trained model that you can use to get personalized recommendations using the GetRecommendations API. In a later step, we use this campaign endpoint to automatically pass the device type as a context parameter and receive personalized recommendations. See the following code:

create_campaign_response = personalize.create_campaign(
    name = "personalize-auto-context-demo-campaign",
    solutionVersionArn = solution_version_arn
)

Create a dynamic filter

When getting recommendations from the created campaign, you can filter results based on custom criteria. For our example, we create a filter to satisfy the requirement of recommending videos that are only allowed to be played from the user’s current country. The country information is passed dynamically from the CloudFront HTTP header.

create_filter_response = personalize.create_filter(
    name = 'personalize-auto-context-demo-country-filter',
    datasetGroupArn = dataset_group_arn,
    filterExpression = 'INCLUDE ItemID WHERE Items.ALLOWED_COUNTRIES IN ($CONTEXT_COUNTRY)'
)

Create a Lambda function

The next step in our architecture is to create a Lambda function to process API requests coming from the CloudFront distribution and respond by invoking the Amazon Personalize campaign endpoint. In this Lambda function, we define logic to analyze the following CloudFront request HTTP headers and query string parameters to determine the user’s device type and user ID, respectively:

CloudFront-Is-Desktop-Viewer
CloudFront-Is-Mobile-Viewer
CloudFront-Is-SmartTV-Viewer
CloudFront-Is-Tablet-Viewer
CloudFront-Viewer-Country

The code to create this function is deployed through the CloudFormation template.
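The function itself ships inside the post's CloudFormation template rather than in the article body; a condensed sketch of that logic might look like the following. The environment variable names, device-type labels, and default values are illustrative assumptions, not the template's exact code.

import json
import os

import boto3

personalize_runtime = boto3.client('personalize-runtime')

# Hypothetical configuration, assumed to be injected by the stack.
CAMPAIGN_ARN = os.environ['CAMPAIGN_ARN']
FILTER_ARN = os.environ['FILTER_ARN']

def lambda_handler(event, context):
    headers = event.get('headers') or {}
    params = event.get('queryStringParameters') or {}

    # Collapse the CloudFront viewer headers into one device-type value.
    if headers.get('CloudFront-Is-Mobile-Viewer') == 'true':
        device_type = 'Phone'
    elif headers.get('CloudFront-Is-Tablet-Viewer') == 'true':
        device_type = 'Tablet'
    elif headers.get('CloudFront-Is-SmartTV-Viewer') == 'true':
        device_type = 'TV'
    else:
        device_type = 'Desktop'

    country = headers.get('CloudFront-Viewer-Country', 'US')

    response = personalize_runtime.get_recommendations(
        campaignArn = CAMPAIGN_ARN,
        userId = params.get('userId', '666'),  # treated as a cold user if unknown
        context = { 'DEVICE_TYPE': device_type },
        filterArn = FILTER_ARN,
        # String filter values must be quoted inside the expression.
        filterValues = { 'CONTEXT_COUNTRY': '"{}"'.format(country) },
        numResults = 7
    )
    return { 'statusCode': 200, 'body': json.dumps(response['itemList']) }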
Create a REST API

To make the Lambda function and Amazon Personalize campaign endpoint accessible to the CloudFront distribution, we create a REST API endpoint set up as a Lambda proxy. API Gateway provides tools for creating and documenting APIs that route HTTP requests to Lambda functions. The Lambda proxy integration feature allows CloudFront to call a single Lambda function abstracting requests to the Amazon Personalize campaign endpoint. The code to create this resource is deployed through the CloudFormation template.

Create a CloudFront distribution

When creating a CloudFront distribution, because this is a demo setup, we disable caching using a custom caching policy, ensuring the request goes to the origin every time. Additionally, we use an origin request policy specifying the required HTTP headers and query string parameters that are included in an origin request. The code to create this resource is deployed through the CloudFormation template.

Test recommendations

When the CloudFront distribution’s URL is accessed from different devices (desktop, tablet, phone, and so on), we can see personalized video recommendations that are most relevant to their device. Also, if a cold user is presented, the recommendations tailored for the user’s device are presented. In the following sample outputs, names of videos are only used for representation of their genre and runtime to make it relatable.

In the following output, a known user who loves comedy based on past interactions and is accessing from a phone device is presented with shorter sitcoms:

Recommendations for user: 460
ITEM_ID   GENRE    ALLOWED_COUNTRIES
380       Comedy   RU|GR|LT|NO|SZ|VN
540       Sitcom   US|PK|NI|JM|IN|DK
860       Comedy   RU|GR|LT|NO|SZ|VN
600       Comedy   US|PK|NI|JM|IN|DK
580       Comedy   US|FI|CN|ES|HK|AE
900       Satire   US|PK|NI|JM|IN|DK
720       Sitcom   US|PK|NI|JM|IN|DK

The following known user is presented with feature films when accessing from a smart TV device based on past interactions:

Recommendations for user: 460
ITEM_ID   GENRE     ALLOWED_COUNTRIES
780       Romance   US|PK|NI|JM|IN|DK
100       Horror    US|FI|CN|ES|HK|AE
400       Action    US|FI|CN|ES|HK|AE
660       Horror    US|PK|NI|JM|IN|DK
720       Horror    US|PK|NI|JM|IN|DK
820       Mystery   US|FI|CN|ES|HK|AE
520       Mystery   US|FI|CN|ES|HK|AE

A cold (unknown) user accessing from a phone is presented with shorter but popular shows:

Recommendations for user: 666
ITEM_ID   GENRE    ALLOWED_COUNTRIES
940       Satire   US|FI|CN|ES|HK|AE
760       Satire   US|FI|CN|ES|HK|AE
160       Sitcom   US|FI|CN|ES|HK|AE
880       Comedy   US|FI|CN|ES|HK|AE
360       Satire   US|PK|NI|JM|IN|DK
840       Satire   US|PK|NI|JM|IN|DK
420       Satire   US|PK|NI|JM|IN|DK

A cold (unknown) user accessing from a desktop is presented with top science fiction films and documentaries:

Recommendations for user: 666
ITEM_ID   GENRE             ALLOWED_COUNTRIES
120       Science Fiction   US|PK|NI|JM|IN|DK
160       Science Fiction   US|FI|CN|ES|HK|AE
680       Science Fiction   RU|GR|LT|NO|SZ|VN
640       Science Fiction   US|FI|CN|ES|HK|AE
700       Documentary       US|FI|CN|ES|HK|AE
760       Science Fiction   US|FI|CN|ES|HK|AE
360       Documentary       US|PK|NI|JM|IN|DK

The following known user accessing from a phone receives recommendations filtered by location (US):

Recommendations for user: 460
ITEM_ID   GENRE    ALLOWED_COUNTRIES
300       Sitcom   US|PK|NI|JM|IN|DK
480       Satire   US|PK|NI|JM|IN|DK
240       Comedy   US|PK|NI|JM|IN|DK
900       Sitcom   US|PK|NI|JM|IN|DK
880       Comedy   US|FI|CN|ES|HK|AE
220       Sitcom   US|FI|CN|ES|HK|AE
940       Sitcom   US|FI|CN|ES|HK|AE

Conclusion

In this post, we described how to use user device type as contextual data to make your recommendations more relevant. Using contextual metadata to train Amazon Personalize models will help you recommend products that are relevant to both new and existing users, not just from the profile data but also from a browsing device platform. Not only that, context like location (country, city, region, postal code) and time (day of the week, weekend, weekday, season) opens up the opportunity to make recommendations relatable to the user. You can run the full code example by using the CloudFormation template provided in our GitHub repository and cloning the notebooks into Amazon SageMaker Studio.

About the Authors

Gilles-Kuessan Satchivi is an AWS Enterprise Solutions Architect with a background in networking, infrastructure, security, and IT operations. He is passionate about helping customers build Well-Architected systems on AWS. Before joining AWS, he worked in ecommerce for 17 years. Outside of work, he likes to spend time with his family and cheer on his children’s soccer team.

Aditya Pendyala is a Senior Solutions Architect at AWS based out of NYC. He has extensive experience in architecting cloud-based applications. He is currently working with large enterprises to help them craft highly scalable, flexible, and resilient cloud architectures, and guides them on all things cloud.
He has a Master of Science degree in Computer Science from Shippensburg University and believes in the quote “When you cease to learn, you cease to grow.”

Prabhakar Chandrasekaran is a Senior Technical Account Manager with AWS Enterprise Support. Prabhakar enjoys helping customers build cutting-edge AI/ML solutions on the cloud. He also works with enterprise customers providing proactive guidance and operational assistance, helping them improve the value of their solutions when using AWS. Prabhakar holds six AWS and six other professional certifications. With over 20 years of professional experience, Prabhakar was a data engineer and a program leader in the financial services space prior to joining AWS." Red Canary Architects for Fault Tolerance and Saves up to 80 Using Amazon EC2 Spot Instances _ Red Canary Case Study _ AWS.txt,"Red Canary Architects for Fault Tolerance and Saves up to 80% Using Amazon EC2 Spot Instances (2023)

Learn how cybersecurity firm Red Canary built a fault-tolerant compute pipeline that facilitated as much as 80 percent savings using Amazon EC2 Spot Instances.

About Red Canary
Founded in 2014, Red Canary is a cybersecurity company providing managed detection and response services. Its mission is to create a world where every company can make its greatest impact without fear of damage from cyberthreats.

Overview
Cybersecurity company Red Canary needed a reliable, scalable solution to process over 1 PB of data daily while optimizing costs. The company offers managed detection and response (MDR) services, continually monitoring customer environments for potential cyberthreats. As the company grew, its previous solution was unable to provide the amount of compute power that Red Canary required at a low enough price for the company to stay competitive. In 2016, Red Canary migrated to Amazon Web Services (AWS) and rebuilt its architecture to be highly fault tolerant. This architecture made it possible for Red Canary to benefit from more cost-effective instances on Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. Using Amazon EC2 Spot Instances to take advantage of unused Amazon EC2 capacity at a discount, Red Canary built a durable, scalable, cost-effective solution to monitor client workloads and protect them from unauthorized users.

Benefits:
65–80% reduction in compute costs
30% savings using AWS Graviton2 processors
Processes 1 PB of data daily while optimizing costs
Increased durability, scalability, and fault tolerance
Opportunity | Using Amazon EC2 Spot Instances to Reduce Compute Costs for Red Canary by 65–80%

Red Canary was founded in 2014 with the vision to create a world where every company can make its greatest impact without fear of damage from cyberthreats. To support that vision, Red Canary’s MDR provides 24/7 monitoring to 800 companies across multiple industries—including financial services, social media, healthcare, and manufacturing—and helps these companies respond to cyberthreats when needed. (See Figure 1: Red Canary Platform Diagram.) When Red Canary migrated to AWS in 2016, it sought ways to reduce costs on its new architecture. “We needed to find a way to perform threat detection across this massive flood of data and do it within a cost envelope that fit the profile of our industry,” says Chris Rothe, chief technology officer at Red Canary. “We wanted to focus on detecting threats for our customers and keeping them safe, not on being infrastructure experts.”

On any given day, Red Canary might ingest and run analytics on over 1 PB of telemetry data from third-party products or directly from customer environments. The company reduced costs by running its data processing pipeline on Spot Instances. “Amazon EC2 Spot Instances give us cost-effective compute to process massive amounts of data,” says Brian Davis, principal engineer at Red Canary. “Our infrastructure is mature enough to tolerate the dynamic nature of Spot Instances.” Red Canary estimates that it saves 65–80 percent per instance by using Spot Instances.

Since early 2020, Red Canary further optimizes costs by using Savings Plans, a flexible pricing model to reduce costs by up to 72 percent compared with On-Demand prices, in exchange for a 1-year or 3-year hourly spend commitment. The company’s Compute Savings Plan covers the compute demand for additional services that Red Canary hosts to run a third-party product for customers, which is not as flexible as its own MDR solution. In December 2021, Red Canary also began using AWS Graviton processors, designed by AWS to deliver the best price performance for cloud workloads running on Amazon EC2. Using AWS Graviton processors, the company achieves an additional 30 percent of savings on top of the savings realized from using Spot Instances while achieving equivalent processing speeds to what it experienced using x86 processors.

To use Spot Instances, Red Canary built its architecture to handle having compute instances removed in the middle of processing. Red Canary’s MDR ingests data from customer environments into Amazon Simple Storage Service (Amazon S3), an object storage service, for analysis. At each step in the analysis, the component that is processing the data picks up a file from an Amazon S3 bucket, applies its analytics, and then writes it to the next bucket down the chain. Each Amazon S3 bucket is connected to Amazon Simple Notification Service (Amazon SNS), a fully managed Pub/Sub service for application-to-application messaging. Amazon SNS sends a message to the next component, which picks up the message using Amazon Simple Queue Service (Amazon SQS), a service for users to send, store, and receive messages between software components. In Red Canary’s solution, when a compute instance drops out while a component is processing a file, the job will return to the Amazon SQS queue, and the system will spin up a new replica of the component to run that job. “We take pride in the fact that all the data that we’re meant to process gets processed and that we don’t miss threats to our customers,” says Rothe. “We use Amazon S3—with its legendary availability and performance—as a core part of our data processing pipeline because we want durability.”
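A minimal sketch of that pick-up, process, hand-off loop is shown below; the queue URL and processing function are hypothetical stand-ins, not Red Canary's code. The fault tolerance comes from deleting the message only after the work completes: if a Spot instance is reclaimed mid-file, the message's visibility timeout simply expires and the job reappears on the queue for a replacement replica.

import json

import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/example-analytics-queue'  # hypothetical

def process(bucket, key):
    # Placeholder for one analytic step: read the file from this bucket,
    # apply analytics, write the result to the next bucket in the chain.
    pass

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for message in resp.get('Messages', []):
        body = json.loads(message['Body'])       # SNS envelope
        s3_event = json.loads(body['Message'])   # S3 event notification inside it
        record = s3_event['Records'][0]
        process(record['s3']['bucket']['name'], record['s3']['object']['key'])
        # Delete only after successful processing; if the instance is
        # reclaimed first, the message becomes visible again and is retried.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message['ReceiptHandle'])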
Solution | Containerizing to Make a Scalable Solution Using Amazon EKS

Red Canary uses containerization to manage the scaling of its solution. In 2020, Red Canary migrated its containers to Amazon Elastic Kubernetes Service (Amazon EKS), a managed Kubernetes service. In Amazon EKS, each of the processing components can be scaled individually using automatic scaling functions, making it much simpler to manage the MDR as workloads scale from 500 to 1,000 nodes throughout the day. Additionally, using Amazon EKS, Red Canary has more flexibility to use different types of instances, making it simpler to take advantage of Spot Instances. “Before, running our own Kubernetes clusters meant that we had to be experts on all things Kubernetes. Now, using Amazon EKS, we don’t have to manage cluster maintenance, and we have near zero operational issues,” says Rothe.

Using AWS, Red Canary’s solution is highly reliable. “The design tenets that we used when we built these engine components give us the confidence that, even when we make a mistake, we know how to recover from it,” says Davis. The MDR is built to be thorough—to make sure that every piece of data gets processed—with a service-level objective to get data through the detection pipeline and in front of a detection engineer in 15 minutes. “We don’t have to detect and stop unauthorized users in seconds; it takes them time, so it’s more important for our system to be durable and to make sure all the data gets processed,” says Rothe.

Outcome | Investing in Cloud Expertise Using AWS

Now, Red Canary is working alongside AWS Enterprise Support—which provides customers with concierge-like service focused on helping customers achieve outcomes and find success in the cloud—to perform a review of its architecture using the AWS Well-Architected Framework. This framework lays out architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud.

“We’re investing our effort into making sure that we’re the experts and can help customers protect their cloud environments,” says Rothe. “We will use AWS in the future to make sure that when unauthorized users get ahold of access keys that they shouldn’t have, we can detect them and shut them down before they cause any damage.”

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
Amazon EKS is a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers. In the cloud, Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes responsible for scheduling containers, managing application availability, storing cluster data, and other key tasks.
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
Amazon Simple Queue Service (SQS) lets you send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available." Reducing Adverse-Event Reporting Time for Its Clients by 80 _ Indegene Case Study _ AWS.txt,"Indegene Reduces Adverse-Event Reporting Time for Its Clients by 80% Using AWS (2022)

Learn how Indegene helps life sciences companies streamline and scale adverse event reporting while generating efficiencies and cost savings, using its solution built on AWS.

About Company
Founded in India in 1998, Indegene is a technology-led healthcare solutions provider. Now in 15 offices worldwide, the company helps its clients with digital transformation, from research and development to management to commercial applications.

The pharmacovigilance (PV) process for life sciences companies still relies heavily on inefficient and manual operations. Indegene, a technology-led healthcare solutions provider, sought to transform this process to help its clients drive efficient, meaningful PV outcomes. Using Amazon Web Services (AWS), Indegene built a modern, agile, efficient, and compliant solution for pharmaceutical safety case processing: the NEXT Adverse Event Management System (NAEM). NAEM helps pharmaceutical companies reduce turnaround time for case reporting while improving quality, traceability, reconciliation, and cost efficiency. Using NAEM, organizations have boosted efficiencies by 60 percent using artificial intelligence (AI) and advanced analytics, delivering effective outcomes for patients in over 50 countries.

Benefits:
80% reduction in time to report adverse events
60% improvement in adverse event management efficiency
50% reduction in cases requiring follow-up
Efficient scaling to handle about half a million cases
Cost optimization for database management

Opportunity | Improving Adverse Event Management Process Efficiency

Most pharmaceutical companies process over 50 percent of PV cases manually to record adverse events, enter them into a specialized safety database, reconcile with corresponding medications, and submit data to health authorities using industry-standard E2B protocols. About 75 percent of cases require follow-up days or weeks later. Ultimately, data elements are compiled into a loose-text format, known as a narrative, which articulates the case’s disposition. This process is inefficient and diminishes the potential value of analytics.

Solution | Extracting Insights from Adverse-Events Data Using AWS Services

The NAEM solution built on AWS helps reduce the average processing time of adverse events from over 90 minutes to under 15 minutes, achieving over 80 percent time savings. Clients use an electronic data interchange built on AWS to send adverse-event report files to a system that initiates the NAEM workflows. These reports get stored securely using Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. Automation services pick up files from Amazon S3 and route them to an AI-augmented application that supports clinical experts, who complete and review cases. Then, they deliver the files back to the client as industry-standard E2B (R2) or E2B (R3) files.

Indegene’s NAEM uses AI to help agents automate the reporting of adverse events, product-quality complaints, and allied medical information. Its intelligent call flow assists with automatic capture and population of adverse event data, which makes the process faster and more accurate. “Using AWS, our users can make judgment decisions readily, perform duplicate checks, and accurately triage and validate cases,” says Vladimir Penkrat, Indegene’s Practice Head of Safety and Regulatory Affairs.

In 2021, a global pharmaceutical company asked Indegene for help addressing a sudden increase in case volume after a product launch. The solution needed to work with its enterprise environment to properly exchange files and leave full audit trails. The company implemented an upgraded version of NAEM, which uses Amazon Comprehend, a natural-language processing service that uses machine learning (ML) to uncover valuable insights and connections in text. NAEM also uses a related service, Amazon Comprehend Medical—which uses ML that has been pre-trained to understand and extract health data from medical text—to extract information from doctors’ notes and clinical trial reports. The solution has scaled to process about half a million cases, automates over 400 rules, and uses AI to improve overall processing efficiency by 60 percent.
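To illustrate the kind of extraction Amazon Comprehend Medical performs on narrative text, here is a small hedged sketch: the API call is real, but the sample narrative and the way NAEM wires the results into its workflow are illustrative assumptions.

import boto3

comprehend_medical = boto3.client('comprehendmedical')

# Illustrative adverse-event narrative; not real patient data.
narrative = (
    'Patient reported severe headache and nausea two days after '
    'starting a 50 mg daily dose of ExampleDrug.'
)

result = comprehend_medical.detect_entities_v2(Text=narrative)

# Pull out the medications and symptoms a safety case would need.
for entity in result['Entities']:
    if entity['Category'] in ('MEDICATION', 'MEDICAL_CONDITION'):
        print(entity['Category'], entity['Text'], round(entity['Score'], 2))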
Indegene began using AWS in the early 2000s, when it adopted Amazon Elastic Compute Cloud (Amazon EC2) to provide secure and resizable compute capacity for its workloads. This relationship has strengthened, and today, Indegene is an AWS Partner. “Our mission is to help pharmaceutical organizations be future ready and drive business transformation by using technology in an agile, efficient way,” says Tarun Mathur, Chief Technology Officer at Indegene. “Keeping up with all the new AWS services and capabilities has been a good challenge, and the variety of training programs and great technical support is a bonus. AWS is leading the pack in innovation.”

Indegene also uses AWS Lambda, a serverless, event-driven compute service, to direct files into its database, which is built using Amazon Relational Database Service (Amazon RDS), a collection of managed services that make it simple to set up, operate, and scale databases in the cloud. For security, the company uses AWS services to implement predefined actions, such as how long the system retains certain pieces of information. Indegene uses encryption certificates for data in transit and at rest, and clients can access a virtual private cloud through the AWS Client VPN.

Using AWS Auto Scaling, which monitors applications and automatically adjusts capacity, Indegene can scale on demand to serve clients of virtually all sizes without having to provision physical infrastructure and servers. “AWS is our go-to cloud infrastructure,” Mathur says. “We have had virtually no downtime. Even with spikes or surges in volumes, our systems are fully available. The cost savings, innovation, security, compliance, and reliability are unparalleled.”
Using its automated workflows, Indegene can extract structured and unstructured data and send it to the client’s enterprise environment for submission and downstream analytics. Pharmaceutical companies can produce safer medicines with fewer side effects, supporting a healthier population. “AWS is already well respected in the life sciences industry,” says Mathur. “Many of the big pharmaceutical companies use AWS, so many issues related to IT approvals and certifications are accelerated when you’re deploying your solution to the AWS environment.”

The solution has also reduced the number of follow-ups by 50 percent. “Our clients can look at a patient’s case and make the right judgment based on the patient’s risk,” says Penkrat. “They can effectively make use of high-throughput activity that is compliant and that sometimes needs to be processed in 1 day. Our clients use the dashboards and the intelligence that the system provides to properly prioritize case types.”

Outcomes | Contributing to a Healthier Patient Population Using Solutions Built on AWS

Indegene is growing its AI and ML capabilities—expanding the intake channels and formats the system can ingest—to include much greater unstructured capability. The company plans to incorporate more automation into the user interface with smarter intake functionality. The next generation of NAEM will be even more scalable by using Amazon ElastiCache for Redis—an in-memory data store that provides sub-millisecond latency to power internet-scale near-real-time applications. This upgrade will substantially reduce turnaround time while maintaining quality.

“On AWS, we are more efficiently capturing data about medicines and maintaining full compliance with global regulations, which results in a much healthier patient population,” says Sameer Lal, Indegene’s senior vice president. “And that’s what we are hoping for in the end: delivery to a much healthier world.”

AWS Services Used
AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
Amazon Comprehend Medical is a HIPAA-eligible natural language processing (NLP) service that uses machine learning that has been pre-trained to understand and extract health data from medical text, such as prescriptions, procedures, or diagnoses.
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
Outcomes | Contributing to a Healthier Patient Population Using Solutions Built on AWS

Opportunity | Improving Adverse Event Management Process Efficiency" Reducing Costs of Cryo-EM Data Storage and Processing by 50 Using AWS _ Vertex Pharmaceuticals Case Study.txt,"Vertex is a pharmaceutical company headquartered in Boston that studies complex molecules and researches treatments for serious diseases using the latest microscopy technologies around the world. Vertex Pharmaceuticals (Vertex) is a global biotechnology company that invests in scientific innovation to create transformative medicines for people with serious diseases. Vertex uses cryogenic electron microscopy (cryo-EM) to generate sophisticated images and insights into a protein’s 3D structure and the structure of potential drug targets. Through that process, the company’s chemists can design better drug molecules by optimizing their structure to bind to their targets.

Vertex has already reduced the time needed for delivering analysis results, and it hopes to accelerate it further. “With live processing, we could jump-start analysis just as data comes off the microscope,” says Posson. “We might be able to cut our 1-week timeline in half.” However, cryo-EM workflows require a huge amount of compute and storage resources. Scientists doing analyses across multiple research sites generate petabytes of data. Vertex needed to make its infrastructure scalable to support its growing needs while providing adequate processing power to accelerate the research.

To manage compute for data processing, Vertex uses AWS ParallelCluster, an open-source cluster management tool that makes it straightforward to deploy and manage elastic HPC clusters on AWS. It will spin HPC nodes up and down based on the demands of the analysis software. “When they’re done, we can go back to paying almost zero,” says Iturralde. “We don’t have to worry that the pace of science is going to overwhelm our resources or divert our attention toward maintaining the infrastructure.”

However, while this advanced technology has unlocked the potential for new discoveries and treatments, the need for storage and compute capacity has also increased. “Running a microscope for cryo-EM generates terabytes of data every day,” says Roberto Iturralde, senior director of software engineering for Vertex Pharmaceuticals. “It’s common to generate 1 PB of data in 1 year.” Further, scientists need insights fast. Vertex’s on-premises infrastructure for running its cryo-EM workloads was struggling to keep pace with its rapidly growing compute and storage demands.

Solution | Reducing Data Storage Costs and Accelerating Processing Using AWS ParallelCluster

Vertex added native single sign-on support using Amazon Cognito, which businesses can use to add sign-up, sign-in, and access control to web and mobile apps quickly and easily. “Using Amazon Cognito gives us that additional comfort that only the appropriate employees have access to the software,” says Iturralde. Alongside this, Vertex uses Application Load Balancer—which load balances HTTP and HTTPS traffic with advanced request routing targeted at the delivery of modern applications—to secure its networking.
AWS ParallelCluster is an open-source cluster management tool that makes it easy for you to deploy and manage high performance computing (HPC) clusters on AWS.

After processing, Vertex sends the data back to Amazon S3. The company sorts data efficiently using Amazon S3 Lifecycle policies, sets of rules that define actions that Amazon S3 applies to a group of objects. “Using Amazon S3 Lifecycle policies, we can put data into different tiers to lower the cost of storage,” says Iturralde. The company can also scale its storage seamlessly, limiting data center overhead.

Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

Vertex also plans to continue making its HPC infrastructure more elastic and cloud native to save costs. “By working on AWS, we’re able to spend more time focusing on how we can innovate,” says Iturralde. “We can be creative and take advantage of the cloud to accelerate our science.”

Vertex uses cryo-EM to discover treatments for diseases by analyzing the molecular structure of potential drug targets. “Cryo-EM helps us get sufficient resolution for deeper insights into protein structures that we were unable to study only a few years ago,” says David Posson, principal research scientist for Vertex Pharmaceuticals.

Vertex Pharmaceuticals Reduces Costs of Cryo-EM Data Storage and Processing by 50% Using AWS

Benefits of AWS: >50% reduction in costs; 2x improvement in performance; several days’ improvement in data processing times; 3 months to complete prototype of new architecture; enhanced scalability and improved productivity.

Storing data long term presented another challenge. After a few weeks, scientists rarely accessed the older microscope data. However, Vertex’s on-premises environment wasn’t optimized to save costs based on usage and access patterns. With the domain evolving quickly, it was becoming expensive to keep up with the continuous hardware, software, networking, and security upgrades needed to manage the cryo-EM infrastructure on premises. In early 2022, Vertex realized it needed a more elastic solution with better performance.

Amazon Elastic Compute Cloud (Amazon EC2) provides secure and resizable compute capacity for virtually any workload.

Vertex had already been using AWS since 2015 for different workloads. Inspired by new features launched at AWS re:Invent 2021, Vertex redesigned its entire cryo-EM workload and migrated it to AWS. The company prototyped the new architecture in just 3 months. “AWS has the broadest and deepest set of cloud-native technologies that we want to use at Vertex,” says Iturralde. “Using AWS, we quickly switched to a new design that better met the evolving requirements of our scientists.”

Amazon FSx for Lustre provides fully managed shared storage with the scalability and performance of the popular Lustre file system.

By matching its compute costs to workload demands, Vertex has reduced costs by 50 percent. Further, it has achieved two times better performance than its previous architecture.
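As a rough illustration of the lifecycle tiering Iturralde describes above, a policy can transition aging objects to colder storage classes automatically. The following is a minimal sketch under assumed names and transition windows, not Vertex's actual configuration:

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and prefix; the transition windows are illustrative only
s3.put_bucket_lifecycle_configuration(
    Bucket="example-cryoem-data",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-aging-microscope-data",
                "Filter": {"Prefix": "raw/"},
                "Status": "Enabled",
                "Transitions": [
                    # Move to infrequent access once active analysis ends
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    # Archive data that is rarely accessed afterward
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)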
And Vertex has removed the bottlenecks its cryo-EM team faced in the on-premises environment when sharing resources with other groups, which it often did. “Previously, it took several weeks to analyze cryo-EM data, even when no one else was using resources,” says Posson. “Now, we can reliably deliver data in under 1 week using AWS.”

On AWS, Vertex has made its processes efficient, scalable, and cost effective while reducing manual maintenance. Building on AWS also means that the company has access to the latest compute and GPU resources without the months-long lead time associated with procuring data center hardware. For example, Vertex is running Amazon EC2 G5 Instances, which deliver a powerful combination of CPU, host memory, and GPU capacity. By performing cryo-EM processes in the cloud, scientists can do near-real-time analysis. Vertex uses expensive microscope time more efficiently and facilitates scientific breakthroughs.

By migrating to AWS, Vertex moved its workloads closer to where the data arrives in Amazon Simple Storage Service (Amazon S3), an object storage service that offers industry-leading scalability, data availability, security, and performance. Vertex also uses Amazon FSx for Lustre, fully managed shared storage built on one of the world’s most popular high-performance file systems, to give scientists exactly the amount of storage resources that they need during active analysis.

Learn how Vertex Pharmaceuticals accelerates drug discovery by running its cryo-EM workflows on AWS.

Vertex initially had to transfer all the data from microscopes in external facilities to its data center using hard disks, which took weeks. When new data came in, the company’s on-premises HPC clusters couldn’t efficiently handle the bursts in activity. They also couldn’t scale down during periods of low activity.

Opportunity | Accelerating the Processing Performance of Cryo-EM Workflows to Generate Insights Faster

Vertex migrated its data storage and processing to Amazon Web Services (AWS). The company used several AWS services, including Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity to support virtually any workload. Vertex improved the performance of its high-performance computing (HPC) workloads, accelerated data analyses, and made its system scalable while reducing overall storage and compute costs by over 50 percent.

Outcome | Accelerating Data Processing to Speed Up Research Using Amazon EC2" Reducing Failover Time from 30 Minutes to 3 Minutes Using Amazon CloudWatch _ Thomson ReutersCase Study _ AWS.txt,"Amazon Elastic Kubernetes Service (Amazon EKS) automatically manages the availability and scalability of the Kubernetes control plane nodes responsible for scheduling containers, managing application availability, storing cluster data, and other key tasks.
Should two health checks fail, Thomson Reuters uses Amazon Route 53, a highly available and scalable Domain Name System web service, to automatically forward traffic to the closest AWS Region to minimize latency. Once the route is fixed, traffic reverts to the original AWS Region. Having automated the failover process using Amazon Route 53 health checks and Amazon CloudWatch, Thomson Reuters has seen failover time drop from 30 minutes to 3 minutes. Recovery point objective time has improved as well. “We want to avoid any manual intervention when we have an incident, and the automated process to achieve the failover has reduced our recovery point objective from 2 hours to 30 minutes,” says Vyas. Thomson Reuters expects to see availability improvements from the team’s implementation of nearest-available, latency-based routing using Amazon Route 53.

The company also used additional AWS services with security in mind. Thomson Reuters used AWS Secrets Manager to centrally manage the lifecycle of secrets and AWS Key Management Service (AWS KMS) to create and control the keys used to encrypt data. Using these solutions helps Thomson Reuters adopt best practices without impeding employee access to company assets.

To create an identity solution used by the company’s applications within its internal network that would achieve reliability goals while meeting security constraints, Thomson Reuters built a failover solution that uses AWS Lambda, a serverless, event-driven compute service, to monitor application health. The solution also uses Amazon CloudWatch, which collects and visualizes near-real-time logs, metrics, and event data in automated dashboards. An Amazon CloudWatch alarm is automatically initiated when metrics indicate poor application health. Health alerts unlock a more granular approach to application monitoring, freeing up engineering resources for value-added projects. “Using AWS, we have health alerts in place to address our enhancement goals in alignment with our long-term strategy of moving from a holding company to an operating company,” says Khan.

Outcome | Preparing for Continued Cloud Migration on AWS

AWS Key Management Service (AWS KMS) lets you create, manage, and control cryptographic keys across your applications and more than 100 AWS services.

Thomson Reuters wants to achieve more on the cloud than just strengthening the resiliency and scalability of its authentication solution. “Since we started our journey to use the cloud in 2016, we’ve believed that cloud-native architecture delivers the most value for our company,” says Matt Dimich, vice president of enablement in platform engineering at Thomson Reuters. From 2020 to 2022, the company launched a change program that combined both lift-and-shift and cloud-native elements, ultimately migrating multiple products to AWS. This project is slated to be three to four times the size of prior migrations. Thomson Reuters will use distributed microservices architecture for the projects that it can migrate directly to cloud-native services, which will facilitate the adoption of DevOps best practices and containerization benefits.
Meanwhile, the company sees its lift-and-shift projects as a stepping stone to later modernization, in keeping with customer needs.

Solution | Using Amazon Route 53 and Amazon CloudWatch to Apply Health Checks and Reduce the Recovery Point Objective from 2 Hours to 30 Minutes

Opportunity | Prioritizing SSO as Part of a Broad Cloud-Migration Strategy

About Thomson Reuters: Thomson Reuters is a global provider of business information services. Its products include highly specialized information-facilitated software and tools for legal, tax, accounting, and compliance professionals, combined with the renowned news service Reuters.

Learn how global content-driven technology company Thomson Reuters bolstered availability using Amazon CloudWatch.

Benefits of AWS: 27-minute reduction in failover time; 1.5-hour reduction in recovery point objective time; increased availability; enhanced security; saved labor time.

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. Route 53 connects user requests to internet applications running on AWS or on premises.

To overcome the authentication challenges that its employees faced and to harden its security posture, Thomson Reuters selected Amazon Elastic Kubernetes Service (Amazon EKS), a managed service that runs Kubernetes on AWS and in on-premises data centers. “We use Amazon EKS to deliver an automated solution that offers resilience and scalability on an as-needed basis,” says Khan. As a result, Thomson Reuters reduced both manual effort and recovery time. On Amazon EKS, the company also gained high availability and a wide range of features, including Amazon EKS control plane audit logs, which simplify cluster management.

With its identity solution in place, Thomson Reuters feels confident that its global workforce will have secure and easy access to company systems. “Our project using AWS services is one of the success stories of hybrid solutions,” says Khan.

Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance.

Reducing Failover Time from 30 Minutes to 3 Minutes Using Amazon CloudWatch with Thomson Reuters

Thomson Reuters operates in more than 100 countries and has over 38,000 employees. Those employees need to authenticate themselves and securely sign in to company systems no matter where they are. The need for a new SSO solution was part of a broader shift toward cloud development. Thomson Reuters committed to its cloud strategy in 2016 as part of its customer-focused mindset, and it has launched many migration projects since then, moving toward cloud-native architecture to establish a foundation for future innovation. “As part of our strategic direction, we wanted to use a hybrid solution to unlock cloud offerings, save costs, and automate deployments,” says Zafar Khan, architect in the platform engineering department at Thomson Reuters. Because Thomson Reuters has considerable experience on AWS, it was a natural choice for the build of its new SSO solution.
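The health-check-driven failover described in this section can be sketched roughly as follows. The domain names, hosted zone ID, and thresholds are hypothetical, and the sketch shows the general Amazon Route 53 failover mechanism rather than Thomson Reuters's actual configuration:

import boto3

route53 = boto3.client("route53")

# Hypothetical health check against the primary Region's SSO endpoint
health_check = route53.create_health_check(
    CallerReference="sso-primary-check-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "sso-primary.example.com",
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 2,  # fail over after two failed checks
    },
)

# Primary and secondary failover records; Route 53 serves the secondary
# record automatically while the primary's health check is failing
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # hypothetical zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "sso.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "sso-primary.example.com"}],
                    "HealthCheckId": health_check["HealthCheck"]["Id"],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "sso.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "sso-secondary.example.com"}],
                },
            },
        ]
    },
)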
Amid efforts to boost its operational efficiency, global content-driven technology company Thomson Reuters needed a secure and highly available identification solution for its international workforce. The manual failover process from its legacy on-premises solution left employees locked out of company systems for as long as 30 minutes. “Single sign-on (SSO) is highly critical, and not only from the revenue perspective,” says Bhavin Vyas, lead systems engineer at Thomson Reuters. “If our authentication service is not working, there will be a huge internal impact.” As part of its broader cloud strategy, the company decided to build a new solution on Amazon Web Services (AWS) to deliver highly available SSO authentication." Reducing Infrastructure Costs by 66 by Migrating to AWS with SilverBlaze _ SilverBlaze Case Study _ AWS.txt,"Reducing Infrastructure Costs by 66% by Migrating to AWS with SilverBlaze

Benefits of AWS: 66% reduction in annual infrastructure costs; 2 months to migrate 45 servers; minimized disruptions to customers; improved scalability and performance; saved employee time previously spent on troubleshooting.

Solution | Cutting Infrastructure Costs by 66% and Improving Scalability on AWS

After comparing several options, SilverBlaze chose AWS for the high performance and low cost that it offered. SilverBlaze did not have experience using AWS, but other businesses within Harris did, which gave SilverBlaze further confidence in choosing AWS. The company also decided to work alongside an AWS Partner to facilitate the migration and chose Atayo because of its proven track record of migrating customers to AWS.

Outcome | Investing in the Future on AWS

Opportunity | Using AWS Application Migration Service to Migrate to AWS in 2 Months for SilverBlaze

AWS Application Migration Service minimizes time-intensive, error-prone manual processes by automating the conversion of your source servers to run natively on AWS. It also simplifies application modernization with built-in and custom optimization options.

Now on AWS, SilverBlaze employees redirect time that they previously spent troubleshooting to more important tasks. Additionally, the company can add new features using advanced AWS tools and services. SilverBlaze has recommended AWS and Atayo to other businesses within Harris.

SilverBlaze, a subsidiary of Harris Computer (Harris), provides software to over 100 utility companies with millions of end customers. SilverBlaze’s applications offer a self-service portal for consumers of electricity, water, gas, and telecommunications services to track their consumption and manage payments. The company had been using a colocation data center to host its solutions for 10 years, but the host was small and couldn’t offer SilverBlaze the scalability and performance that it needed to meet its service-level agreements with customers. To avoid renewing an expensive contract with the data center, SilverBlaze began looking for a cloud provider. In addition to cost savings, SilverBlaze has also improved performance and scalability using AWS.
In the colocation data center, the company experienced some performance issues and could scale up only by giving advance notice to the hosting provider. Now on AWS, SilverBlaze can scale as needed and take advantage of built-in security features that reduce the amount of time that SilverBlaze employees spend managing security and compliance. “Using AWS, we can scale—increasing or decreasing our size—which we couldn’t easily do before. We’ve realized an increase in security, and we can provide our customers with better disaster recovery and high availability that we couldn’t do before,” says Smith. “We’re exceeding our service-level agreements with customers, and the customers are happy.”

SilverBlaze, a software innovation, development, and consulting firm for utility companies, wanted to reduce infrastructure costs and better meet fluctuating demand by migrating from a colocation data center to the cloud. As usage of its applications increased, SilverBlaze had to pay higher prices to the data center to scale its capacity. “The costs kept increasing, and we weren’t seeing the value of the increase,” says Adam Smith, senior vice president at SilverBlaze. “We knew that we needed to go to one of the large cloud providers.”

Now that the SilverBlaze application is running on AWS, infrastructure costs have decreased by 66 percent, which equates to hundreds of thousands of dollars in savings per year. “Every month going forward from this point on, we continue realizing those savings,” says Smith. “It’s a huge benefit to our business. We can focus our funds on innovation and technology and building out our products.” SilverBlaze further cost-optimized by rightsizing its instances and by choosing instance types that better fit its use cases.

The migration was completed with minimal disruption to customers. The cutover window was less than 1 hour, with a few additional hours of testing to verify that everything was running smoothly. Because users might access the application at any time of day to view their utility consumption, this quick cutover was important to SilverBlaze and its customers. SilverBlaze needed to migrate quickly before the end of its contract and wanted to minimize disruption to customers, so Atayo proposed that SilverBlaze use AWS Application Migration Service. “We’ve used AWS Application Migration Service a lot in the past and have had fantastic success with it,” says Luis Fonseca, solution architect at Atayo. “Not only did the service meet the requirements for what SilverBlaze was trying to accomplish by migrating in a particular timeline, but it also just makes the process of migrating and doing lift-and-shift operations incredibly simple.” The migration began in February 2022 and concluded in April 2022, 1 week before the deadline.

Using AWS Application Migration Service, SilverBlaze rehosted 45 servers to AWS. SilverBlaze installed agents on its source servers that performed a block-level replication of the servers to AWS in near real time and kept the replicas up to date—with a recovery point objective of seconds—during the whole migration process. One of the biggest benefits of using the service was quickly launching new test environments. When SilverBlaze launched a test server using AWS Application Migration Service, the service continued to sync with the original machines, so SilverBlaze could relaunch new test environments without resyncing the replicas each time. “AWS Application Migration Service reduces the time between test cycles significantly,” says Fonseca.
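For illustration, driving test launches for replicated source servers through the AWS Application Migration Service API might look like the sketch below. The filtering and lifecycle-state handling follow the service's documented test-and-cutover flow but are assumptions here, not SilverBlaze's or Atayo's actual tooling:

import boto3

mgn = boto3.client("mgn")

# List replicating source servers and pick those ready for a test launch
servers = mgn.describe_source_servers(filters={})["items"]
ready = [
    s["sourceServerID"]
    for s in servers
    if s["lifeCycle"]["state"] == "READY_FOR_TEST"  # assumed state check
]

# Launch test instances; replication continues in the background, so
# tests can be relaunched later without a full resync
if ready:
    mgn.start_test(sourceServerIDs=ready)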
About SilverBlaze: SilverBlaze, a subsidiary of Harris Computer, offers software that helps utility consumers make informed decisions for a sustainable future while helping providers reduce costs, drive innovation, and improve the health of their business and the planet.

“When I look at the money that we could have saved, I realize that we should have migrated sooner,” says Smith. “Now on AWS, we can take advantage of all the features that we couldn’t before. From a performance perspective, a scalability perspective, and a reliability perspective, we believe that we’re on one of the best solutions out there: AWS.”

Learn how software company SilverBlaze cut infrastructure costs by 66 percent by migrating to AWS using AWS Application Migration Service.

SilverBlaze chose Amazon Web Services (AWS) as its cloud provider and worked with Atayo Group Inc. (Atayo), an AWS Partner that had experience migrating customers to AWS. To complete the migration in a short time, SilverBlaze used AWS Application Migration Service (formerly CloudEndure Migration), which minimizes time-intensive, error-prone manual processes by automating the conversion of source servers to run natively on AWS. Using AWS Application Migration Service, SilverBlaze migrated 45 servers quickly and simply to AWS with minimal disruption to customers. Now on AWS, SilverBlaze has cut its infrastructure costs, improved its performance and staff productivity, and can access greater functionality using other AWS services." Reducing Log Data Storage Cost Using Amazon OpenSearch Service with CMS _ Case Study _ AWS.txt,"The Centers for Medicare & Medicaid Services (CMS) is a federal agency under the US Department of Health & Human Services. CMS administers Medicare to more than 83 million people, effectively making it the United States’ largest health insurer.

The process of designing, developing, and implementing CMS’s new system was quick, going from idea to product in 6 months. CMS had worked alongside AWS for about 10 years prior to the beginning of this project, so the agency already had a system for approving projects being developed on AWS. Additionally, CMS was able to implement the new system so quickly because Amazon OpenSearch Service was simple and intuitive. Unlike the old system, which required expertise to use properly, Amazon OpenSearch Service has been much easier for CMS employees to adopt. “We didn’t have to send engineers to get training,” says Spitz. “The ease of use of Amazon OpenSearch Service has made it so much simpler for our security operations center to very quickly build dashboards and do forensics.”

Outcome | Increasing Efficiency and Savings for the Future

Ultimately, the project pressed CMS to consider how it can use all types of log data more efficiently and in more out-of-the-box ways. “Because we’re using Amazon OpenSearch Service, we’ve been able to redirect resources to other missions.
Instead of spending millions of dollars on repeatable security functions, we can invest that money toward needs like Medicare modernization,” says Spitz.

CMS is one of the largest purchasers of healthcare in the world. Medicare, Medicaid, and CHIP provide healthcare for one in four Americans. Medicare enrollment has increased from 19 million beneficiaries in 1966 to approximately 64 million beneficiaries, and Medicaid enrollment has increased from 11 million beneficiaries in 1966 to about 83 million beneficiaries. Administering these programs amounts to CMS ingesting 14–15 TB of log data every single day. Over the years, storage on the old system became increasingly expensive because the massive amount of log data that ran through CMS only grew. CMS needed to reduce the costs of its log data storage system, and it also wanted a cost-effective solution to perform log data analysis and to respond to security issues more quickly.

Benefits of AWS: 67% reduction in data storage costs; improved security features and data replay.

Opportunity | Using Amazon OpenSearch Service to Reduce Data Log Storage Costs for CMS

Solution | Cutting Log Data Storage Costs by 67% and Accessing New Features

Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

Now, using Amazon OpenSearch Service, CMS saves 67 percent of the costs of its previous log data storage solution. The solution ingests 2 TB of log flow data daily, which is stored in buckets in Amazon Simple Storage Service (Amazon S3), an object storage service built to store and retrieve any amount of data from anywhere. “Amazon S3 plays a huge role in the overall solution, keeping costs down but also making the data readily available and simple to consume using Amazon OpenSearch Service,” says Spitz. The solution then uses AWS Lambda, a serverless, event-driven compute service, to sort the data and send it to the appropriate Amazon OpenSearch Service repositories. “Being able to use Amazon OpenSearch Service and Amazon S3 significantly reduces our costs,” says Spitz.

Amazon OpenSearch Service makes it easy for you to perform interactive log analytics, near-real-time application monitoring, website search, and more.

Learn how federal agency CMS cut costs with Amazon OpenSearch Service.

The Centers for Medicare & Medicaid Services (CMS), the largest purchaser of healthcare in the United States, had to reduce the cost of its log data storage. The agency produces enormous amounts of log data, most of which is stored and reviewed only when issues occur. Paying for storage with its centralized logging system was becoming cost prohibitive. CMS began working out an alternative using Amazon Web Services (AWS) cloud-native services.
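A minimal sketch of the routing step described above, in which a Lambda function reads log objects from Amazon S3 and indexes them into an appropriate Amazon OpenSearch Service index, might look like the following. The prefix-based index selection, the domain endpoint, and the use of the opensearch-py client are assumptions for illustration, not CMS's implementation:

import boto3
from opensearchpy import OpenSearch  # assumed client library; auth omitted for brevity

s3 = boto3.client("s3")
# Hypothetical OpenSearch domain endpoint
client = OpenSearch(
    hosts=[{"host": "search-example-domain.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

def handler(event, context):
    # Triggered when a log object lands in S3; routes records by log type
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        # Assumed convention: the key prefix names the log source, so
        # 'vpc-flow-logs/...' goes to the 'vpc-flow-logs' index
        index = key.split("/")[0]
        for line in body.splitlines():
            client.index(index=index, body={"raw": line, "s3_key": key})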
In just 6 months, CMS developed a proof of concept, obtained approval, and developed, finalized, and deployed a new cloud-based log data storage system on AWS that costs 67 percent less and makes data analysis simpler.

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.

Figure 1. CMS’s serverless virtual private cloud flow log ingestion pipeline and Amazon OpenSearch Service log analytics solution

CMS chose to use Amazon OpenSearch Service, which securely unlocks near-real-time searching, monitoring, and analysis of business and operational data for use cases such as application monitoring, log analytics, observability, and website search. Using Amazon OpenSearch Service presented a low-cost alternative for log ingestion and storage that would be simple to use when compared to other possible solutions, including open-source options, which would be costly to develop, build, and maintain. “We weren’t looking at it just as a base to store data,” says Bob Spitz, founder of alignIT and consultant for CMS. “We made sure that Amazon OpenSearch Service would meet all our needs: quick data ingesting, low amounts of data copying, and rapid data insights.”

Reducing Log Data Storage Cost Using Amazon OpenSearch Service with CMS

The agency’s online systems face constant security threats from international and domestic actors. CMS primarily uses Amazon OpenSearch Service to quickly identify what data has been affected during a security issue. Before reimagining its logging system, CMS would effectively lose the logging data that could show the agency what had happened, and it would have to manually pull missing datasets. Now, the system automatically saves historical data and can queue the data for reingestion if needed. This means CMS can use Amazon OpenSearch Service to automatically replay data from the system’s virtual private cloud flow logs that the system created before and during the issue. Instead of taking 2 weeks for two engineers to find what data was lost, CMS can let the system self-fix. CMS also uses AWS tools to provide near-real-time monitoring and analysis. The agency builds dashboards in Amazon OpenSearch Service to better process data and sets automatic alerts in case of security issues. CMS further increases data security by using access management and security features in Amazon S3 to restrict access to data and keep it secure when it is shared between systems. CMS has no plans for slowing down in its quest for efficiency. Currently, the log data storage system is being used mostly by CMS’s security operations team. Because the system is so effective and simple to use, CMS plans to spread the technology to other application teams by making the data available as a shared service. “By using AWS, we can plan for the future and make sure that CMS IT systems are effective, efficient, and secure,” says Spitz.

About the Centers for Medicare & Medicaid Services: CMS administers Medicare, Medicaid, the Children’s Health Insurance Program (CHIP), and the Clinical Laboratory Improvement Amendments of 1988 program.
The passage of the Patient Protection and Affordable Care Act led to the expansion of CMS’s role in the healthcare arena beyond its traditional role of administering Medicare, Medicaid, and CHIP. Over the last 50 years, CMS has evolved into the largest purchaser of healthcare and now maintains the nation’s largest collection of healthcare data." Reducing Time to Results Carbon Footprint and Cost Using AWS HPC _ Baker Hughes Case Study _ AWS.txt,"Baker Hughes is also benefiting from Amazon’s path to powering its operations with 100 percent renewable energy as part of The Climate Pledge. The company has reduced the carbon footprint of its HPC workloads by 99 percent compared with on premises, based on the AWS customer carbon footprint tool, which uses simple-to-understand data visualizations to help customers review, evaluate, and forecast emissions. Baker Hughes plans to continue its digital transformation, focusing on efficiency as a way to reduce emissions. By using advanced AWS technology, Baker Hughes optimizes its HPC applications while supporting the company’s long-term strategic vision of facilitating the global energy transition.

Opportunity | Seeking an Elastic HPC Solution

Amazon EC2 offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Baker Hughes Reduces Time to Results, Carbon Footprint, and Cost Using AWS HPC

The solution went live in the fourth quarter of 2021. Now more than 150 TPS engineers in Italy, India, and the United States run as many simulations as needed prior to physical tests, leading to better accuracy with fewer test iterations. Plus, Baker Hughes onboards multiple users every month without impacting HPC job performance. “We were initially planning to migrate the equivalent compute capacity of 100 teraflops to AWS, but by giving engineers the possibility to scale, consumption spiked by four times within 3 months of go-live,” says Yogesh Kulkarni, senior director, CTO India, at Baker Hughes.

To run CFD simulations, Baker Hughes uses Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. The solution accelerates HPC by attaching Intel-based Amazon EC2 instances to Elastic Fabric Adapter (EFA), a network interface for Amazon EC2 instances that run applications requiring high levels of internode communication at scale. EFA offers dedicated throughput of 100 gigabits per second per HPC job, compared with a traditional network interface that offers 300 gigabits per second of throughput shared across multiple HPC jobs. As a result, HPC jobs using EFA have lower latency than jobs on a traditional network interface, at a fraction of the cost. To further improve performance and reduce network latency, Baker Hughes deploys Amazon EC2 fleets of instances in placement groups, one per HPC job, based on the shared-nothing architecture principle.
Amazon EC2 spreads new instances across the underlying hardware as they launch, and placement groups influence the placement of interdependent instances to meet the throughput needs of the workload. By running on AWS, Baker Hughes avoids the hardware lock-in that is inherent to an on-premises HPC solution. “For Ansys jobs, we now have the ability to use the best price-performance compute instances and continually onboard the latest-generation processors as soon as they are available,” says Kulkarni.

Baker Hughes migrated its computational fluid dynamics applications to AWS, cutting gas turbine design cycle time, saving 40 percent on HPC costs, and reducing its carbon footprint by 99 percent.

Benefits of AWS: 98% reduction in wait time; 26% faster runtime in resource-intensive HPC jobs; 40% reduction in HPC costs; 99% reduction in carbon footprint.

“Running Ansys simulations on AWS helps TPS to accelerate its engineering schedules and achieve a faster time to market,” says David Meyer, director of digital operations for HPC and remote visualization at Baker Hughes.

The Amazon WorkSpaces family of solutions provides the right virtual workspace for varied worker types, especially hybrid and remote workers, improving IT agility and maximizing user experience while you pay only for the infrastructure that you use.

Outcome | Reducing Wait Time and Carbon Footprint by over 90% and Cost by 40% on AWS

Amazon FSx for Lustre provides fully managed shared storage with the scalability and performance of the popular Lustre file system.

Solution | Simplifying Customer Experience and Improving Efficiency of HPC Jobs Using Amazon EC2

For more than 100 years, Baker Hughes has been a global leader in industrial turbomachinery and innovation through its Turbomachinery and Process Solutions (TPS) Research Center. Based in Florence, Italy, TPS provides the turbine, compressor, and pump technology that is currently used by the energy industry. Its NovaLT gas turbines set new standards in greenhouse gas emissions, efficiency, and reliability.

Baker Hughes uses several storage options on AWS for its CFD workloads. To store and protect data, Baker Hughes uses Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. Amazon S3 works natively alongside Amazon FSx for Lustre, which provides fully managed shared storage with the scalability and performance of the popular Lustre file system and handles the company’s most input- and output-intensive workloads. When linked to an Amazon S3 bucket, an FSx for Lustre file system transparently presents Amazon S3 objects as files and lets engineers write results back to Amazon S3. Baker Hughes streamlines its pipeline for continuous integration and continuous delivery through automated deployments using AWS CodePipeline, a fully managed continuous delivery service that helps organizations automate release pipelines. And engineers can log in and run HPC jobs from any secure connection using Amazon WorkSpaces, a fully managed desktop virtualization service that provides secure, reliable, and scalable access from any location.
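As a generic illustration of the placement-group pattern described above, the sketch below creates one cluster placement group for an HPC job and launches an EFA-enabled set of instances into it. The AMI, subnet, security group, and instance count are placeholders, not Baker Hughes's actual provisioning code:

import boto3

ec2 = boto3.client("ec2")

# One cluster placement group per HPC job keeps interdependent
# instances close together for low-latency internode traffic
ec2.create_placement_group(GroupName="cfd-job-42", Strategy="cluster")

# Launch an EFA-enabled fleet into the placement group
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder HPC AMI
    InstanceType="c5n.18xlarge",      # an EFA-capable instance type
    MinCount=8,
    MaxCount=8,
    Placement={"GroupName": "cfd-job-42"},
    NetworkInterfaces=[
        {
            "DeviceIndex": 0,
            "InterfaceType": "efa",   # attach an Elastic Fabric Adapter
            "SubnetId": "subnet-0123456789abcdef0",  # placeholder
            "Groups": ["sg-0123456789abcdef0"],      # placeholder
        }
    ],
)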
TPS engineers can run the most resource-intensive Ansys jobs with 98 percent less wait time and 26 percent faster runtimes using the same license pool on AWS compared with the on-premises HPC solution, reducing the time to results. The engineers can now run design simulations in parallel on AWS rather than sequentially on premises. Plus, the most complex simulations with specific memory requirements, which could not run on premises, can now run on AWS. The use of AWS cost-optimization levers—AWS MAP, Savings Plans, and EDP—helped Baker Hughes reduce its HPC spend by 40 percent. The collaboration between globally distributed Baker Hughes teams and the AWS network of experts was instrumental to these outcomes.

Using the runtime performance of an Ansys Fluent job as a proof of concept, Baker Hughes compared cloud providers in early 2021. AWS Professional Services, a global team of experts that can help organizations realize desired business outcomes when using AWS, delivered the proof of concept within weeks and on budget, demonstrating the best runtime performance. To accelerate its cloud migration and modernization journey, Baker Hughes used the AWS Migration Acceleration Program (AWS MAP), a comprehensive and proven cloud migration program based on the experience of AWS in migrating thousands of enterprise customers to the cloud. Baker Hughes used AWS MAP to optimize its cloud spend alongside the company’s use of Savings Plans and the AWS Enterprise Discount Program, flexible and custom-tailored pricing models for AWS services.

To run simulations for designing gas turbines, TPS engineers had been using on-premises HPC solutions for CFD applications from Ansys, an AWS Partner. These included Ansys Fluent for fluid simulation, Ansys CFX for turbomachinery applications, and Ansys Mechanical for structural engineering. Resource capacity bottlenecks allowed limited simulations, with long wait and run times for the engineers prior to running expensive and burdensome physical tests. “To remove this bottleneck and better manage the peaks, we needed to expand capacity to 400 teraflops, but we didn’t want to pay for peak capacity yearlong,” says David Meyer, director of digital operations for HPC and remote visualization at Baker Hughes. “We needed an elastic solution for an optimal total cost of ownership.”

Baker Hughes is a leading energy technology company with approximately 54,000 employees operating in over 120 countries. It designs, manufactures, and services transformative technologies to help take energy forward. Engineers at Baker Hughes were using an on-premises high performance computing (HPC) solution to simulate gas turbine designs, but it couldn’t scale because of resource capacity bottlenecks. Engineers faced long simulation wait and run times, with an increased need for physical prototypes. Baker Hughes chose to migrate its computational fluid dynamics (CFD) applications from on premises to Amazon Web Services (AWS). As a result, the company saved 40 percent on HPC costs and reduced wait time by 98 percent, run time by 26 percent, and the carbon footprint of the HPC solution by 99 percent, helping the company achieve a faster time to results.

The AWS Migration Acceleration Program (AWS MAP) is a comprehensive and proven cloud migration program based upon AWS’s experience migrating thousands of enterprise customers to the cloud.
Enterprise migrations can be complex and time-consuming, but AWS MAP can help you accelerate your cloud migration and modernization journey with an outcome-driven methodology." Reinventing the data experience_ Use generative AI and modern data architecture to unlock insights _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Reinventing the data experience: Use generative AI and modern data architecture to unlock insights

by Navneet Tuteja and Sovik Nath | on 13 JUN 2023 | in Advanced (300), Amazon SageMaker, Artificial Intelligence, Generative AI, Technical How-to

Implementing a modern data architecture provides a scalable method to integrate data from disparate sources. By organizing data by business domains instead of infrastructure, each domain can choose tools that suit its needs. Organizations can maximize the value of their modern data architecture with generative AI solutions while innovating continuously. The natural language capabilities allow nontechnical users to query data through conversational English rather than complex SQL. However, realizing the full benefits requires overcoming some challenges. The AI and language models must identify the appropriate data sources, generate effective SQL queries, and produce coherent responses with embedded results at scale. They also need a user interface for natural language questions.

Overall, implementing a modern data architecture and generative AI techniques with AWS is a promising approach for gleaning and disseminating key insights from diverse, expansive data at an enterprise scale. The latest offering for generative AI from AWS is Amazon Bedrock, which is a fully managed service and the easiest way to build and scale generative AI applications with foundation models. AWS also offers foundation models through Amazon SageMaker JumpStart as Amazon SageMaker endpoints. The combination of large language models (LLMs), including the ease of integration that Amazon Bedrock offers, and a scalable, domain-oriented data infrastructure positions this as an intelligent method of tapping into the abundant information held in various analytics databases and data lakes.

In this post, we showcase a scenario where a company has deployed a modern data architecture with data residing in multiple databases and APIs: legal data on Amazon Simple Storage Service (Amazon S3), human resources data on Amazon Relational Database Service (Amazon RDS), sales and marketing data on Amazon Redshift, financial market data on Snowflake (a third-party data warehouse solution), and product data exposed as an API. This implementation aims to enhance the productivity of the enterprise’s business analysts, product owners, and business domain experts. All this is achieved through the use of generative AI in this domain mesh architecture, which enables the company to achieve its business objectives more efficiently. This solution has the option to include LLMs from JumpStart as a SageMaker endpoint as well as third-party models. We provide enterprise users with a medium for asking fact-based questions without needing underlying knowledge of data channels, thereby abstracting the complexities of writing simple to complex SQL queries.

Solution overview

A modern data architecture on AWS applies artificial intelligence and natural language processing to query multiple analytics databases.
By using services such as Amazon Redshift, Amazon RDS, Snowflake, Amazon Athena, and AWS Glue, it creates a scalable solution to integrate data from various sources. Using LangChain, a powerful library for working with LLMs, including foundation models from Amazon Bedrock and JumpStart in Amazon SageMaker Studio notebooks, a system is built where users can ask business questions in natural English and receive answers with data drawn from the relevant databases.

The following diagram illustrates the architecture: a hybrid design that uses multiple databases and LLMs, with foundation models from Amazon Bedrock and JumpStart for data source identification, SQL generation, and text generation with results. The workflow steps are as follows:

1. A business user provides an English question prompt.
2. An AWS Glue crawler is scheduled to run at frequent intervals to extract metadata from databases and create table definitions in the AWS Glue Data Catalog. The Data Catalog is input to Chain Sequence 1 (see the preceding diagram).
3. LangChain, a tool to work with LLMs and prompts, is used in Studio notebooks. LangChain requires an LLM to be defined.
4. As part of Chain Sequence 1, the prompt and Data Catalog metadata are passed to an LLM, hosted on a SageMaker endpoint, to identify the relevant database and table using LangChain.
5. The prompt and identified database and table are passed to Chain Sequence 2.
6. LangChain establishes a connection to the database and runs the SQL query to get the results.
7. The results are passed to the LLM to generate an English answer with the data.
8. The user receives an English answer to their prompt, querying data from different databases.

The following sections explain some of the key steps with associated code. To dive deeper into the solution and code for all steps shown here, refer to the GitHub repo.

Prerequisites

You can use any databases that are compatible with SQLAlchemy to generate responses from LLMs and LangChain. However, these databases must have their metadata registered with the AWS Glue Data Catalog. Additionally, you will need access to LLMs through either JumpStart or API keys.

Connect to databases using SQLAlchemy

LangChain uses SQLAlchemy to connect to SQL databases. We initialize LangChain’s SQLDatabase function by creating an engine and establishing a connection for each data source. The following is a sample of how to connect to an Amazon Aurora MySQL-Compatible Edition serverless database and include only the employees table (the ARN values are placeholders to fill in):

# connect to AWS Aurora MySQL
cluster_arn = "<aurora_cluster_arn>"
secret_arn = "<secret_arn>"
engine_rds = create_engine(
    'mysql+auroradataapi://:@/employees',
    echo=True,
    connect_args=dict(aurora_cluster_arn=cluster_arn, secret_arn=secret_arn),
)
dbrds = SQLDatabase(engine_rds, include_tables=['employees'])

Next, we build prompts used by Chain Sequence 1 to identify the database and the table name based on the user question.

Generate dynamic prompt templates

We use the AWS Glue Data Catalog, which is designed to store and manage metadata information, to identify the source of data for a user query and build prompts for Chain Sequence 1, as detailed in the following steps. We build a Data Catalog by crawling through the metadata of multiple data sources using the JDBC connection used in the demonstration. With the Boto3 library, we build a consolidated view of the Data Catalog from multiple data sources.
The following is a sample of how to get the metadata of the employees table from the Data Catalog for the Aurora MySQL database (the database name is a placeholder to fill in):

# retrieve metadata from the AWS Glue Data Catalog
glue_tables_rds = glue_client.get_tables(DatabaseName="<database_name>", MaxResults=1000)
for table in glue_tables_rds['TableList']:
    for column in table['StorageDescriptor']['Columns']:
        columns_str = columns_str + '\n' + ('rdsmysql|employees|' + table['Name'] + "|" + column['Name'])

A consolidated Data Catalog has details on the data source, such as schema, table names, and column names. The following is a sample of the output of the consolidated Data Catalog:

database|schema|table|column_names
redshift|tickit|tickit_sales|listid
rdsmysql|employees|employees|emp_no
....
s3|none|claims|policy_id

We pass the consolidated Data Catalog to the prompt template and define the prompts used by LangChain:

prompt_template = """
From the table below, find the database (in column database) which will contain the data (in corresponding column_names) to answer the question {query} \n
""" + glue_catalog + """
Give your answer as database == \n
Also, give your answer as database.table ==
"""

Chain Sequence 1: Detect source metadata for the user query using LangChain and an LLM

We pass the prompt template generated in the previous step to the prompt, along with the user query, to the LangChain model to find the best data source to answer the question. LangChain uses the LLM model of our choice to detect source metadata. Use the following code to use an LLM from JumpStart or third-party models (the model definition is a placeholder to fill in):

# define your LLM model here
llm = "<your_llm_endpoint_or_model>"

# pass prompt template and user query to the prompt
PROMPT = PromptTemplate(template=prompt_template, input_variables=["query"])

# define llm chain
llm_chain = LLMChain(prompt=PROMPT, llm=llm)

# run the query and save to generated texts
generated_texts = llm_chain.run(query)

The generated text contains information such as the database and table names against which the user query is run. For example, for the user query “Name all employees with birth date this month,” generated_text has the information database == rdsmysql and database.table == rdsmysql.employees. Next, we pass the details of the human resources domain, Aurora MySQL database, and employees table to Chain Sequence 2.

Chain Sequence 2: Retrieve responses from the data sources to answer the user query

Next, we run LangChain’s SQL database chain to convert text to SQL and implicitly run the generated SQL against the database to retrieve the database results in simple readable language. We start with defining a prompt template that instructs the LLM to generate SQL in a syntactically correct dialect and then run it against the database:

_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Only use the following tables:
{table_info}
If someone asks for the sales, they really mean the tickit.sales table.
Question: {input}"""

# define the prompt
PROMPT = PromptTemplate(
    input_variables=["input", "table_info", "dialect"],
    template=_DEFAULT_TEMPLATE,
)

Finally, we pass the LLM, database connection, and prompt to the SQL database chain and run the SQL query:

db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT)
response = db_chain.run(query)

For example, for the user query “Name all employees with birth date this month,” the answer is as follows:

Question: Name all employees with birth date this month
SELECT * FROM employees WHERE MONTH(birth_date) = MONTH(CURRENT_DATE());
User Response: The employees with birthdays this month are: Christian Koblick, Tzvetan Zielinski

Clean up

After you run the modern data architecture with generative AI, make sure to clean up any resources that won’t be used. Shut down and delete the databases used (Amazon Redshift, Amazon RDS, Snowflake). In addition, delete the data in Amazon S3 and stop any Studio notebook instances to avoid incurring further charges. If you used JumpStart to deploy an LLM as a SageMaker real-time endpoint, delete the endpoint through either the SageMaker console or Studio.

Conclusion

In this post, we integrated a modern data architecture with generative AI and LLMs within SageMaker. This solution uses various text-to-text foundation models from JumpStart as well as third-party models. This hybrid approach identifies data sources, writes SQL queries, and generates responses with query results. It uses Amazon Redshift, Amazon RDS, Snowflake, and LLMs. To improve the solution, you could add more databases, a UI for English queries, prompt engineering, and data tools. This could become an intelligent, unified way to get insights from multiple data stores. To dive deeper into the solution and the code shown in this post, check out the GitHub repo. Also, refer to Amazon Bedrock for use cases on generative AI, foundation models, and large language models.

Appendix: Example prompts

Domain | Database/API | Prompt | SQL (generated by LLM) | Output
Sales & Marketing | Amazon Redshift | How many ticket sales are there? | SELECT COUNT(*) AS total_sales FROM tickit.sales; | There are 172,456 ticket sales.
Sales & Marketing | Amazon Redshift | What was the total commission for the ticket sales in the year 2008? | SELECT SUM(commission) AS total_commission FROM tickit.sales WHERE EXTRACT(YEAR FROM saletime) = 2008 | The total commission for ticket sales in the year 2008 was $16,614,814.65.
Legal | S3 | How many frauds happened in the year 2023? | SELECT count(*) FROM claims WHERE extract(year from write_time) = 2023 AND fraud = 1; | There were 164 fraud claims in 2023.
Legal | S3 | How many policies were claimed this year? | SELECT count(*) FROM claims; | There were 5000 claims made this year.
Human Resources | Amazon Aurora MySQL | Name all employees with birth date this month | SELECT * FROM employees WHERE MONTH(birth_date) = MONTH(CURRENT_DATE()); | The employees with birthdays this month are: Christian Koblick, Tzvetan Zielinski, Kazuhito Cappelletti, Yinghua Dredge
Human Resources | Amazon Aurora MySQL | How many employees were hired before 1990? | SELECT COUNT(*) AS 'Number of employees hired before 1990' FROM employees WHERE hire_date < '1990-01-01' | 29 employees were hired before 1990.
About the Authors

Navneet Tuteja is a Data Specialist at Amazon Web Services. Before joining AWS, Navneet worked as a facilitator for organizations seeking to modernize their data architectures and implement comprehensive AI/ML solutions. She holds an engineering degree from Thapar University and a master's degree in statistics from Texas A&M University.

Sovik Kumar Nath is an AI/ML solution architect with AWS. He has extensive experience designing end-to-end machine learning and business analytics solutions in finance, operations, marketing, healthcare, supply chain management, and IoT. Sovik has published articles and holds a patent in ML model monitoring. He holds master's degrees from the University of South Florida and the University of Fribourg, Switzerland, and a bachelor's degree from the Indian Institute of Technology, Kharagpur. Outside of work, Sovik enjoys traveling, taking ferry rides, and watching movies."
Relay Therapeutics Case Study.txt,"Relay Therapeutics Uses AWS to Accelerate Drug Discovery

Since deploying the AWS high-performance computing solution, Relay Therapeutics has run multiple screens of five billion compounds. Because of the scalability offered by AWS, scientists can run the screens on multiple snapshots of the same moving protein target.

Typically, in traditional IT environments, pharmaceutical companies virtually screen a few million compounds at a time. Relay Therapeutics was determined to scale that number into the billions and turned to Amazon Web Services (AWS) to solve the challenge. "The major factor in selecting AWS over other cloud providers is the support we received from the start," says Pat Walters, senior vice president of computation at Relay Therapeutics. "And it has continued to help us make our processes work more efficiently."

Levi Pierce, the company's director of computation, estimates that Amazon EC2 Spot Instances reduce compute costs by 50 percent compared to conducting virtual screening on premises. AWS and Relay Therapeutics also built parameter checks into the process to keep analysis costs from exceeding the budgeted amount. "We get alerted if a job will go beyond a set expense threshold," Walters explains. "That tells us a parameter is off so we can terminate the job or make an adjustment on the fly."

By accessing close to 100,000 CPUs on AWS, the Relay Therapeutics team is able to perform the analysis of billions of compounds in one day.
Processing Billions of Molecules in 24 Hours

Relay Therapeutics is a precision medicine company transforming the drug discovery process by leveraging unparalleled insights into protein motion. Prior to testing promising compounds in the lab, scientists have to consider a molecular universe of available starting points numbering close to 10 billion compounds. They need to filter this extensive set down to the 100–200 compounds most likely to bind to the biological target.

It solved the CPU cost challenge by capitalizing on the elastic capacity of Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances. By relying on AWS Batch—a cloud-native orchestration service—in conjunction with Spot Instances, Relay Therapeutics easily scales to the required number of CPUs for each virtual screen.

Simplified Process for Scientists

On AWS, the company also simplified virtual screening so scientists can use open source scripts to kick off analysis on AWS Batch. The scientists then rapidly analyze the data by taking advantage of Amazon Athena, a serverless query service with no infrastructure to manage, which can be spun up and turned off as needed.

Achieving the Impossible

A few years ago, the Relay Therapeutics team did not think it was possible to run virtual screening at the scale the company has now achieved, with scientists analyzing tables with a billion rows. "Even sorting a table with billions of rows is not a trivial exercise," Walters emphasizes. "By using AWS technologies, we can deal with all that information efficiently, which helps us strive toward our ultimate goal—getting medicines to patients faster than we previously thought possible."

About Relay Therapeutics

Based in Massachusetts, Relay Therapeutics is committed to creating medicines that have a transformative impact on patients. The company combines unprecedented computational power with leading-edge experimental approaches across structural biology, biophysics, chemistry, and biology.

Benefits of AWS

- Analyzes 5 billion molecular compounds in 1 day vs. months
- Reduces compute resource costs by 50%
- Scales compute resources as required for each analysis job
- Enables scientists to easily run complex analysis
- Validates analysis parameters to avoid cloud cost overruns

AWS Services Used

- Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices.
- AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS.
- Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.
By analyzing more compounds, scientists increase the chances they will find the right molecules to test in the lab. In a typical on-premises data center, with thousands of CPUs, the analysis of a billion compounds could take months. Deploying sufficient CPUs in an on-premises data center would also be cost-prohibitive, particularly due to the "bursty" nature of the analyses. Relay Therapeutics instead leverages unused Amazon EC2 capacity in the AWS Cloud at up to a 90 percent discount compared to pricing for On-Demand Instances.

Scientists don't have to worry about complex programming, so they have more time to analyze results and optimize the drug discovery process. "Orchestrating that many jobs manually in a traditional system is a nightmare," says Pierce. "But using AWS Batch saves us a lot of time."

On the Horizon: Processing 10 Billion Compounds

In the future, Relay Therapeutics anticipates scientists may be able to virtually screen commercially available libraries of 10 billion compounds, which will require integrating machine learning to control the costs. Services such as Amazon EMR will be important components of this effort."
Resilience Builds a Global Data Mesh for Lab Connectivity on AWS _ Case Study _ AWS.txt,"Resilience Builds a Global Data Mesh for Lab Connectivity on AWS

Learn how biomanufacturing innovator Resilience revolutionizes the way novel medicines are produced with a connected network for data transfer on AWS.

Overview

Despite the scientific advancements propelling cell and gene therapy development, the manufacturing technology behind these complex medicines hasn't kept pace. Resilience is addressing this gap. The biomanufacturing company offers customized and scalable solutions that aim to produce these complex medicines faster, with less risk and increased flexibility. By centralizing vast amounts of data from diverse product areas and laboratory instruments across production sites and analyzing them for insights, Resilience is discovering ways to produce novel therapies safely and at scale.

Using a range of offerings from Amazon Web Services (AWS), Resilience has built a globally connected system for uploading, storing, managing, and finding data from each of its research and manufacturing sites securely in the cloud. With a network of over 100 cloud-connected lab devices across six company sites, Resilience has reduced the turnaround time between experiments and insights while helping customers accelerate production.

Opportunity | Automating and Accelerating Data Transfer for Resilience

Founded in 2020, Resilience is driving innovative biomanufacturing. It offers a range of scalable, off-the-shelf biomanufacturing modalities for gene therapies, nucleic acid synthesis, protein purification, and more for leading pharmaceutical and biotechnology companies. It also oversees a large network of instruments, including bioreactors, flow cytometers, microscopes, and genomic sequencers.

To accelerate production and decrease the time between performing experiments and generating insights, Resilience needed to build connectivity from each of its research and manufacturing sites to the cloud. With such a vast volume and diversity of data, however, building a connected data network was no simple task. "We have lots of product areas, which require an equally wide range of laboratory instruments to develop them. This creates a high degree of data heterogeneity," says Adam Mendez, associate director for data engineering at Resilience. "We needed a robust system for data transfer that was agnostic to the data type and could quickly and securely upload the data from all lab devices to the cloud." The company identified AWS as the optimal solution for the project due to its secure, scalable infrastructure and powerful Internet of Things (IoT) capabilities.

Solution | Connecting More than 100 Laboratory Instruments from Six Research Sites to the Cloud

In less than 3 months, Resilience's Digital Research & Development organization, working closely with its data engineering and networking teams, built AWS infrastructure to power its globally connected system. The solution uses AWS DataSync, a secure, online service that automates and accelerates data transfer, to migrate data from its on-premises systems to the AWS Cloud. This data is transferred securely using AWS PrivateLink, which establishes connectivity between virtual private clouds and AWS services without exposing data to the internet. This data is then stored on Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve any amount of data from anywhere, and can be accessed by both scientific and business users across Resilience's organization. "With a centrally managed system for data storage on AWS, we can seamlessly integrate with other applications and analytics software, whether they are third-party software-as-a-service solutions or internally developed," says Mendez.

To date, Resilience has uploaded more than 75 TB of research data from over 100 various lab devices to Amazon S3. Scientific and business users across Resilience can now review, process, and analyze their instrument data on Amazon S3 to achieve their research and development goals.
The company relies on AWS Internet of Things services such as AWS IoT Greengrass, an open-source edge runtime and cloud service, to automatically invoke the migration tasks on demand, providing scientists with access to their data on the cloud in under 5 minutes. By using AWS Cloud Development Kit (AWS CDK), which accelerates cloud development using common programming languages, to model its applications, Resilience can onboard new devices and bring entire sites online in a matter of days. With its infrastructure-as-code approach, Resilience is helping dozens of research teams expedite their work. "By facilitating near-real-time data upload from each of our sites, we can provide strong data backup while helping teams use insights in a cross-functional, cross-site manner," says Jonathan Rivernider, lab systems engineer at Resilience. "This puts data into the hands of scientists faster to accelerate learning cycles."

On the cloud, Resilience's lab data needed to be organized in a way that aligns with how scientists use their data. To accomplish this, the team designed an Amazon S3 data lake using the AWS Prescriptive Guidance for Data Lake Architectures and engaged Quilt Data, an AWS Partner, to assign governance controls. These controls turn the instrument datasets into data packages, an immutable record of raw lab data, analyzed data, and associated lab files, including graphs and PowerPoints. Now, as data moves through scientists' analysis stages, data packages are maintained on Amazon S3 with versioning, metadata, and lineage information. This data is searchable in a user portal for authorized lab and business users and integrates with their electronic lab notebooks.

Using Amazon CloudWatch, a monitoring service that provides operational insights for various AWS resources, the teams were also able to build a robust logging system for all data transfer tasks. Now, Resilience can ensure that proper alerts are in place to verify the operational health of the system and each lab instrument. "Given the sensitive nature of the research data, security of this system is paramount," says Jiro Koga, senior systems engineer at Resilience. "By incorporating strict network firewall rules, client certificates, and secure endpoints using AWS PrivateLink, all data is safely transferred with encryption in flight and at rest."
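Resilience's own implementation is not public; as a purely illustrative sketch of the on-demand transfer pattern described here, an edge component could kick off an AWS DataSync task through the service API like this (the Region and task ARN are hypothetical placeholders):

import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

def upload_new_run(task_arn: str) -> str:
    # Start a DataSync task execution that moves on-premises lab data to Amazon S3
    response = datasync.start_task_execution(TaskArn=task_arn)
    return response["TaskExecutionArn"]

# Hypothetical task ARN; in practice this would point at a preconfigured
# DataSync task whose source is the instrument share and destination is S3
execution_arn = upload_new_run(
    "arn:aws:datasync:us-east-1:111122223333:task/task-0abc123def456789a"
)
print("Started DataSync execution:", execution_arn)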
Outcome | Continuing to Accelerate Learning Cycles for Drug Development

By connecting laboratory instruments to AWS, Resilience has accelerated the transfer of key data for its research, manufacturing, and product development workflows. Scientists and business users alike have reliable access to the data they need to make key decisions, and the company intends to scale this solution further to support more research sites and instruments.

"By creating a reusable pattern that can be used across any site, we demonstrated how to connect different AWS services to build an entire data management system," says Brian McNatt, global head for digital research and development at Resilience. "We fully intend to continue expanding our AWS data network as Resilience's manufacturing footprint continues to grow across more sites and more key research devices."

- 100+ laboratory instruments from six sites connected
- 75+ TB uploaded to Amazon S3 to date
- < 3 months to build the infrastructure for data to be available in the cloud
- < 5 mins for scientists to access their data on the cloud
- Encrypts data at rest and in transit

About Resilience

Resilience is a technology-focused biomanufacturing company dedicated to broadening access to complex medicines. Founded in 2020, the company is building a sustainable network of high-tech, end-to-end manufacturing solutions to ensure the treatments of today and tomorrow can be made quickly, safely, and at scale.

AWS Services Used

- AWS DataSync is a secure, online service that automates and accelerates moving data between on premises and AWS Storage services.
- AWS Cloud Development Kit (AWS CDK) accelerates cloud development using common programming languages to model your applications.
- Amazon CloudWatch collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your infrastructure and application maintenance.
- AWS PrivateLink provides private connectivity between virtual private clouds (VPCs), supported AWS services, and your on-premises networks without exposing your traffic to the public internet."
ResMed Case Study _ AWS AppSync _ AWS.txt,"ResMed Improves Agility and User Satisfaction Using AWS AppSync

ResMed pioneers innovative solutions that empower people to lead healthier, higher-quality lives. Its digital health technologies and cloud-connected medical devices transform care for people with sleep apnea, COPD, and other chronic diseases.

ResMed offers digital health solutions like AirView and myAir, which give healthcare providers and device users the ability to remotely self-monitor positive airway pressure (PAP) and ventilator treatment. Monitoring PAP and ventilator usage can help improve users' adherence as well as clinicians' patient management efficiency. As of December 31, 2021, myAir had over 4 million registered users who can receive personalized support, tailored coaching tips, access to therapy data, and nightly sleep scores that help them get a better night's sleep. Additionally, over 18.5 million PAP users were remotely monitored in ResMed's AirView solution for clinicians.

Seeking Greater Scalability with Serverless Architecture

Before adopting AWS AppSync, ResMed ran its myAir application as a monolithic application using on-premises servers. Under this model, the company faced two key challenges: the existing data center could not handle the company's quickly growing user base, and the software that it had been using had aged poorly, creating challenges and stress for ResMed's development and operations teams. The company believed that migrating to the cloud in a serverless architecture would provide significant benefits to its business and users.

ResMed turned to Amazon Web Services (AWS) solutions to scale to support more device users globally, reduce application latency, and deploy new features more quickly. To develop its myAir application, ResMed selected AWS AppSync, a serverless GraphQL and Pub/Sub API service that simplifies building modern web and mobile applications. In conjunction with a suite of other AWS solutions, the company could use AWS AppSync to reduce operational overhead, improve the user experience, and provide more accurate and valuable insights by using machine learning.

"We needed the basic agility of serverless architecture as well as the ability to integrate with other services in the cloud," says Brian Hickey, director of engineering, patient experience at ResMed. "We wanted to take advantage of those simple integrations and innovate rapidly and efficiently."
Implementing AWS AppSync

After completing a proof of concept alongside the AWS team, ResMed decided to completely rearchitect its environment for myAir using cloud-native services with enhanced security features. ResMed began the implementation of AWS AppSync and additional AWS solutions in March 2020 and initiated its first rollout to the AWS Asia Pacific Region in January 2021. The company continued to implement the serverless solution in select Regions before concluding the project in July 2021, when it migrated its largest user base in the United States. From start to finish, the implementation went smoothly with support from AWS.

Since then, ResMed has increased its productivity and accelerated its time to market for digital solution launches and upgrades while using AWS AppSync. "Our biggest reason for using AWS AppSync was the synchronization infrastructure that it provided," says Stanley Kurdziel, senior engineering manager at ResMed. Now, data updates more seamlessly for users using the myAir app on multiple devices. Using AWS AppSync, the ResMed team can be more responsive and make quick, same-day changes that would have previously taken weeks to enact, reducing the time to deploy new code by 90 percent. "Speed is a key benefit," says Kurdziel. "We want the ability to change something quickly without difficulty. Using AWS AppSync, now we have that capacity."

ResMed employed AWS Lambda, a serverless, event-driven compute service that lets users run code for virtually any type of application or backend service without provisioning or managing servers. The company also adopted Amazon DynamoDB, a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at virtually any scale. ResMed selected both these solutions because they are fully managed, which empowers its developers to devote their time to innovating new features rather than troubleshooting operational issues. By reducing the server management workload of ResMed's development team, the company can now achieve more with less effort. "Serverless solutions are really powerful and really simple to use, deploy, and manage," says Hickey. Moreover, the company could reduce its operational overhead cost by approximately 80 percent compared with its legacy system.

Working on AWS, ResMed has improved latency for its users. "Data that used to take 7 minutes to show up for a user now arrives in less than 10 seconds," says Hickey. Its users not only get data more quickly, but they also have access to more of it. Using its new serverless architecture, ResMed can now perform microexperiments and determine what features and data are most beneficial to users.

To further accelerate its time to market, ResMed built a continuous integration and continuous delivery pipeline using AWS CodePipeline, a fully managed continuous delivery service that helps users automate their release pipelines for fast and reliable application and infrastructure updates, and AWS CodeBuild, a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. Implementing these fully managed solutions means that ResMed has increased staff productivity. "The amount of time and labor we've saved on operations means that we've been able to increase the number of people working on the app," says Hickey. "Now, everyone gets to work on building new things for the product, things that customers and users get to see and experience, rather than spending all their time on operations."

Considering Future Serverless Solutions

After implementing AWS AppSync and other AWS solutions with myAir, ResMed has built two additional serverless products that have recently gone live. It plans to continue a serverless-first approach with all new projects in the future. "We have years and years of runway benefit with this solution," says Hickey. "Using AWS AppSync, we invested up front, and now we can turn around products much faster and much more economically. It was a no-brainer for us."
About ResMed

Digital health leader ResMed is one of the leading global providers of cloud-connected solutions for people with sleep apnea, COPD, asthma, and other chronic conditions. In 2021, ResMed helped improve the lives of over 133 million people in over 140 countries. Now, ResMed has a goal to improve 250 million lives in 2025, and it needs an agile, serverless solution to increase user satisfaction and achieve greater scalability.

Benefits of AWS

- Improved data latency from 7 minutes to 10 seconds
- Reduced operational overhead by 80%
- Reduced new code deployment time by 90%
- Optimized ResMed staff's time and energy
- Provides deeper insights and analytics on user engagement
- Provides more accurate insights to users using machine learning

AWS Services Used

- AWS AppSync is a fully managed service that makes it easy to develop GraphQL APIs by handling the heavy lifting of securely connecting to data sources like Amazon DynamoDB, AWS Lambda, and more.
- AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and only pay for what you use.
- Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data export tools.
- AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates."
Respond.io Scales Its Messaging Platform and Connects 10000 Companies with Customers on AWS _ Respond.io Case Study _ AWS.txt,"Respond.io Scales Its Messaging Platform and Connects 10,000+ Companies with Customers on AWS

Respond.io is a Malaysia-based SaaS company whose business messaging platform helps organizations seamlessly manage customer communications. To enhance its platform and scale in response to growth, Respond.io migrated to AWS, leveraging Amazon Elastic Container Service (Amazon ECS), AWS Fargate, Amazon OpenSearch Service, and Amazon DynamoDB to build a low-latency, scalable platform with robust search and reporting capabilities.

Opportunity | Enhancing and Scaling a Powerful Business Messaging Platform

We live in an era in which consumer-facing companies cannot survive, much less thrive, without a strategic approach to communicating with customers via WhatsApp, Instagram, and other messaging applications. Companies capable of handling marketing and 1:1 conversation across major messaging channels have a strong edge over those with limited options.

Respond.io is a software as a service (SaaS) platform that helps companies manage all their customer messaging in one place. For example, a retailer may receive customer support requests and sales inquiries through a variety of messaging channels. These messages are filtered into the Respond.io platform, where customer support and sales staff can address them in an organized and efficient manner.

Respond.io was running its platform on a serverless architecture from a different cloud provider, but its founders quickly realized that the platform was not equipped to scale. In fact, users were experiencing chat log search latencies and system delays. To scale and continue to serve enterprises like Toyota, McDonald's, and Decathlon, the company needed a reliable, flexible, and robust cloud provider.

Respond.io's founders had experience with Amazon Web Services (AWS) and decided to migrate to AWS. Hassan Ahmed, CTO and cofounder of Respond.io, says, "We're expanding our platform's features and our user base is growing rapidly. Considering the extensive infrastructure that AWS offered, we were confident in AWS' ability to help us scale."
Solution | Supporting a Massive Expansion in Product Features and Customer Volume

Today, Respond.io has migrated 90 percent of its workloads to run on a serverless architecture on AWS Lambda. As a result, the companies it serves can now customize and automate workflows. Administrators can create a simple, no-code workflow that triggers an automatic response to incoming messages containing specific keywords, and they can create rules for assigning support tickets based on staff availability, time-in-queue, and many other factors.

Hassan says, "Our workflows require a sophisticated architecture, with thousands of executions running per minute. With AWS Lambda and AWS Fargate, we can manage this seamlessly, without worrying about security patching and server maintenance."

Respond.io currently stores over 1.5 TB from 2.6 billion messages in Amazon DynamoDB, a fully managed, serverless NoSQL database. The platform also uses Amazon Simple Storage Service (Amazon S3) with Amazon Athena to export these messages, facilitating efficient retrieval and minimal search latency. This gives users a way to quickly search and access customer chat logs, follow up on previous customer exchanges, and effectively manage marketing, sales, and support communications.
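Respond.io's internal schemas are not public, so the following is a purely illustrative sketch of how a message store along these lines might be queried with Amazon DynamoDB through boto3. The table name and key design (partition key conversation_id, sort key sent_at) are hypothetical.

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb", region_name="ap-southeast-1")
table = dynamodb.Table("messages")  # hypothetical table name

# Fetch a conversation's messages for January 2023, newest first
response = table.query(
    KeyConditionExpression=Key("conversation_id").eq("conv-12345")
    & Key("sent_at").between("2023-01-01T00:00:00Z", "2023-01-31T23:59:59Z"),
    ScanIndexForward=False,  # reverse chronological order
    Limit=50,
)

for item in response["Items"]:
    print(item["sent_at"], item.get("channel"), item.get("body"))

With a key design like this, retrieving a single conversation's recent history is one cheap, indexed query rather than a scan, which is what keeps chat log lookups low latency at billions of stored messages.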
Furthermore, users can conduct extensive searches within chat logs to understand their customers' needs and challenges and obtain comprehensive reports that guide strategy. Business decision-makers can explore factors like average response times, problem resolution success rates, peak hours for sales inquiries, and other business-critical data.

Respond.io also provides its customers with extensive reporting features that help them glean powerful insights from the vast amount of data created through customer messaging. Its reporting module is powered by Amazon OpenSearch Service. This means customers can obtain reports in milliseconds and analyze variables, like which messaging channels their customers prefer, peak messaging times, and additional insights that guide operations and marketing strategies.

Outcome | A Cutting-edge Product and a Rapidly Growing Business

Since adopting AWS, Respond.io is managing over 100 million messages per month for more than 10,000 customers, serving a range of multinational corporations. In September 2022, the company received $7 million in Series A venture funding, and executives see no limits to continued expansion.

Respond.io continues to develop an innovative SaaS product that stands out in the marketplace. The product delivers efficient communication across 15 different messaging channels, and its user-friendly dashboard and customizable, no-code workflows make it easy for companies to handle multiple inquiries. The platform's extensive reporting features and low-latency chat capabilities bring additional value, handing Respond.io's customers a competitive edge.

Hassan concludes, "We couldn't have grown and survived without AWS, considering the complexity and the sheer volume of data we handle today."

About Respond.io

Based in Kuala Lumpur, Malaysia, with offices in Hong Kong, Respond.io is a comprehensive customer conversation management software that facilitates seamless marketing, sales, and support communications across instant messaging, web chat, and email.

Benefits of AWS

- 100+ million sales and support messages exchanged monthly
- 2+ billion messages stored in Amazon DynamoDB
- Low latency for efficient analysis of customer chat logs
- Customizable automations to handle communication workflows
- OpenSearch reporting delivering comprehensive marketing insights

AWS Services Used

- AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
- Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications.
- Amazon OpenSearch Service makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more. OpenSearch is an open source, distributed search and analytics suite derived from Elasticsearch.
- Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale.
"
Retain original PDF formatting to view translated documents with Amazon Textract Amazon Translate and PDFBox _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Retain original PDF formatting to view translated documents with Amazon Textract, Amazon Translate, and PDFBox

by Anubha Singhal and Sean Lawrence | on 03 JUL 2023 | in Amazon Textract, Amazon Translate, Technical How-to

Companies across various industries create, scan, and store large volumes of PDF documents. In many cases, the content is text-heavy and often written in a different language and requires translation. To address this, you need an automated solution to extract the contents within these PDFs and translate them quickly and cost-efficiently.

Many businesses have diverse global users and need to translate text to enable cross-lingual communication between them. This is a manual, slow, and expensive human effort. There's a need to find a scalable, reliable, and cost-effective solution to translate documents while retaining the original document formatting. For verticals such as healthcare, due to regulatory requirements, the translated documents require an additional human in the loop to verify the validity of the machine-translated document. If the translated document doesn't retain the original formatting and structure, it loses its context. This can make it difficult for a human reviewer to validate and make corrections.

In this post, we demonstrate how to create a new translated PDF from a scanned PDF while retaining the original document structure and formatting using a geometry-based approach with Amazon Textract, Amazon Translate, and Apache PDFBox.

Solution overview

The solution presented in this post uses the following components:

- Amazon Textract – A fully managed machine learning (ML) service that automatically extracts printed text, handwriting, and other data from scanned documents, going beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables. Amazon Textract can detect text in a variety of documents, including financial reports, medical records, and tax forms.
- Amazon Translate – A neural machine translation service that delivers fast, high-quality, and affordable language translation. Amazon Translate provides high-quality on-demand and batch translation capabilities across more than 2,970 language pairs, while decreasing your translation costs.
- PDF Translate – An open-source library written in Java and published on AWS Samples in GitHub. This library contains logic to generate translated PDF documents in your desired language with Amazon Textract and Amazon Translate. It also uses the open-source Java library Apache PDFBox to create PDF documents. There are similar PDF processing libraries available in other programming languages, for example Node PDFBox.

While performing machine translations, you may have situations where you wish to preserve specific sections of text from being translated, such as names or unique identifiers. Amazon Translate allows tag modifications, which let you specify what text should not be translated. Amazon Translate also supports formality customization, which allows you to customize the level of formality in your translation output. For details on Amazon Textract limits, refer to Quotas in Amazon Textract.
The solution is restricted to the languages that can be extracted by Amazon Textract, which currently supports English, Spanish, Italian, Portuguese, French, and German. These languages are also supported by Amazon Translate. For the full list of languages supported by Amazon Translate, refer to Supported languages and language codes.

We use the following PDF to demonstrate translating the text from English to Spanish. The solution also supports generating the translated document without any formatting. The position of the translated text is maintained. The source and translated PDF documents can also be found in the AWS Samples GitHub repo.

In the following sections, we demonstrate how to run the translation code on a local machine and look at the translation code in more detail.

Prerequisites

Before you get started, set up your AWS account and the AWS Command Line Interface (AWS CLI). For access to any AWS services such as Textract and Translate, appropriate IAM permissions are needed. We recommend utilizing least privilege permissions. To learn more about IAM permissions, see Policies and permissions in IAM as well as How Amazon Textract works with IAM and How Amazon Translate works with IAM.

Run the translation code on a local machine

This solution focuses on the standalone Java code to extract and translate a PDF document. This is for easier testing and customizations to get the best-rendered translated PDF document. The code can then be integrated into an automated solution to deploy and run in AWS. See Translating PDF documents using Amazon Translate and Amazon Textract for a sample architecture that uses Amazon Simple Storage Service (Amazon S3) to store the documents and AWS Lambda to run the code.

To run the code on a local machine, complete the following steps. The code examples are available on the GitHub repo.

1. Clone the GitHub repo:

git clone https://github.com/aws-samples/amazon-translate-pdf

2. Run the following command:

cd amazon-translate-pdf

3. Run the following command to translate from English to Spanish:

java -jar target/translate-pdf-1.0.jar --source en --translated es

Two translated PDF documents are created in the documents folder, with and without the original formatting (SampleOutput-es.pdf and SampleOutput-min-es.pdf).

Code to generate the translated PDF

The following code snippets show how to take a PDF document and generate a corresponding translated PDF document. It extracts the text using Amazon Textract and creates the translated PDF by adding the translated text as a layer to the image. It builds on the solution shown in the post Generating searchable PDFs from scanned documents automatically with Amazon Textract.

The code first gets each line of text with Amazon Textract. Amazon Translate is used to get translated text and save the geometry of the translated text.
Region region = Region.US_EAST_1;

TextractClient textractClient = TextractClient.builder()
        .region(region)
        .build();

// Get the input Document object as bytes
Document pdfDoc = Document.builder()
        .bytes(SdkBytes.fromByteBuffer(imageBytes))
        .build();

TranslateClient translateClient = TranslateClient.builder()
        .region(region)
        .build();

DetectDocumentTextRequest detectDocumentTextRequest = DetectDocumentTextRequest.builder()
        .document(pdfDoc)
        .build();

// Invoke the Detect operation
DetectDocumentTextResponse textResponse = textractClient.detectDocumentText(detectDocumentTextRequest);

// Collect each detected LINE block, translate it, and keep its bounding-box geometry
List<Block> blocks = textResponse.blocks();
List<TextLine> lines = new ArrayList<>();
BoundingBox boundingBox;

for (Block block : blocks) {
    if ((block.blockType()).equals(BlockType.LINE)) {
        String source = block.text();

        TranslateTextRequest requestTranslate = TranslateTextRequest.builder()
                .sourceLanguageCode(sourceLanguage)
                .targetLanguageCode(destinationLanguage)
                .text(source)
                .build();

        TranslateTextResponse resultTranslate = translateClient.translateText(requestTranslate);

        boundingBox = block.geometry().boundingBox();
        lines.add(new TextLine(boundingBox.left(), boundingBox.top(),
                boundingBox.width(), boundingBox.height(),
                resultTranslate.translatedText(), source));
    }
}
return lines;

The font size is calculated as follows and can easily be configured:

int fontSize = 20;
float textWidth = font.getStringWidth(text) / 1000 * fontSize;
float textHeight = font.getFontDescriptor().getFontBoundingBox().getHeight() / 1000 * fontSize;

if (textWidth > bbWidth) {
    while (textWidth > bbWidth) {
        fontSize -= 1;
        textWidth = font.getStringWidth(text) / 1000 * fontSize;
        textHeight = font.getFontDescriptor().getFontBoundingBox().getHeight() / 1000 * fontSize;
    }
} else if (textWidth < bbWidth) {
    while (textWidth < bbWidth) {
        fontSize += 1;
        textWidth = font.getStringWidth(text) / 1000 * fontSize;
        textHeight = font.getFontDescriptor().getFontBoundingBox().getHeight() / 1000 * fontSize;
    }
}

The translated PDF is created from the saved geometry and translated text. Changes to the color of the translated text can easily be configured.
float width = image.getWidth();
float height = image.getHeight();

PDRectangle box = new PDRectangle(width, height);
PDPage page = new PDPage(box);
page.setMediaBox(box);
this.document.addPage(page); // org.apache.pdfbox.pdmodel.PDDocument

PDImageXObject pdImage;

if (imageType == ImageType.JPEG) {
    pdImage = JPEGFactory.createFromImage(this.document, image);
} else {
    pdImage = LosslessFactory.createFromImage(this.document, image);
}

PDPageContentStream contentStream = new PDPageContentStream(document, page, PDPageContentStream.AppendMode.OVERWRITE, false);

contentStream.drawImage(pdImage, 0, 0);
contentStream.setRenderingMode(RenderingMode.FILL);

for (TextLine cline : lines) {
    String clinetext = cline.text;
    String clinetextOriginal = cline.originalText;

    // Config to include original document structure - overlay the original text with a white rectangle
    FontInfo fontInfo = calculateFontSize(clinetextOriginal, (float) cline.width * width, (float) cline.height * height, font);
    contentStream.setNonStrokingColor(Color.WHITE);
    contentStream.addRect((float) cline.left * width, (float) (height - height * cline.top - fontInfo.textHeight), (float) cline.width * width, (float) cline.height * height);
    contentStream.fill();

    // Config to include original document structure - overlay the area the translated text will occupy
    fontInfo = calculateFontSize(clinetext, (float) cline.width * width, (float) cline.height * height, font);
    contentStream.setNonStrokingColor(Color.WHITE);
    contentStream.addRect((float) cline.left * width, (float) (height - height * cline.top - fontInfo.textHeight), (float) cline.width * width, (float) cline.height * height);
    contentStream.fill();

    // Draw the translated text; change the output text color here
    fontInfo = calculateFontSize(clinetext.length() <= clinetextOriginal.length() ? clinetextOriginal : clinetext, (float) cline.width * width, (float) cline.height * height, font);
    contentStream.setNonStrokingColor(Color.BLACK);
    contentStream.beginText();
    contentStream.setFont(font, fontInfo.fontSize);
    contentStream.newLineAtOffset((float) cline.left * width, (float) (height - height * cline.top - fontInfo.textHeight));
    contentStream.showText(clinetext);
    contentStream.endText();
}
contentStream.close();

The following image shows the document translated into Spanish with the original formatting (SampleOutput-es.pdf).

The following image shows the translated PDF in Spanish without any formatting (SampleOutput-min-es.pdf).

Processing time

The employment application PDF took about 10 seconds to extract, process, and render as a translated PDF. Processing a text-heavy document such as the Declaration of Independence PDF took less than a minute.

Cost

With Amazon Textract, you pay as you go based on the number of pages and images processed. With Amazon Translate, you pay as you go based on the number of text characters that are processed. Refer to Amazon Textract pricing and Amazon Translate pricing for actual costs.

Conclusion

This post showed how to use Amazon Textract and Amazon Translate to generate translated PDF documents while retaining the original document structure. You can optionally postprocess Amazon Textract results to improve the quality of the translation; for example, extracted words can be passed through ML-based spellchecks such as SymSpell for data validation, or clustering algorithms can be used to preserve reading order.
You can also use Amazon Augmented AI (Amazon A2I) to build human review workflows where you can use your own private workforce to review the original and translated PDF documents for more accuracy and context. See Designing human review workflows with Amazon Translate and Amazon Augmented AI and Building a multi-lingual document translation workflow with domain-specific and language-specific customization to get started.

About the Authors

Anubha Singhal is a Senior Cloud Architect at Amazon Web Services in the AWS Professional Services organization.

Sean Lawrence was formerly a Front End Engineer at AWS. He specialized in front end development in the AWS Professional Services organization and the Amazon Privacy team."
Return Entertainment Case Study.txt,"Return Entertainment Speeds Up Development of Cloud-Native Games Using AWS

Learn how Return Entertainment built cloud-native gaming infrastructure in a few months, and reduced time and cost using AWS.

Overview

Helsinki-based cloud-native gaming company Return Entertainment aims to transform the gaming industry through its use of the cloud. Its games can be played on any device and require no downloads or installations, making them accessible, simple, and fun for everyone.

Opportunity | Building Innovative Cloud-Native Gaming

Shortly after its founding, Helsinki-based gaming developer Return Entertainment realized the potential of going fully cloud native. Rather than developing games for existing cloud gaming services as originally intended, the company became one of the first to design innovative games directly in the cloud. The founders recognized this would be the best way to test the cloud's limits, harness its powers instantly, and make new forays in gaming development.

Return Entertainment was founded by gaming industry veterans in 2019. Then entire nations shut down in response to the COVID-19 pandemic, and the startup had to adjust like everyone else. "When everybody suddenly had to stay home and travel was no longer possible, we still needed to demonstrate our games to investors and partners," says Tuomas Paavola, chief technology officer. Return Entertainment began sending links to potential partners so that they could try its games in the cloud and discovered that it led to increased productivity, collaboration, and innovation. "That's when we thought, Why not go fully cloud native ourselves? It got us thinking about creating games that could only exist in the cloud, that could fully use cloud-native possibilities—things that wouldn't be possible with existing services," says Paavola.

Running on the cloud makes interactivity simple, but all the cloud computing that the company required needed a powerful server. "We started out with dedicated services at a local hosting company," says Paavola, "but we quickly figured out that we needed something scalable." That was when Return Entertainment chose AWS to turn its goals of cloud-native gaming into reality.

Solution | Building Innovative Cloud-Native Gaming

As the startup pursued innovative cloud-native game development, it looked to Amazon Web Services (AWS) for the tools that it needed. Return Entertainment could build cloud-native games using GPU instances from Amazon Elastic Compute Cloud (Amazon EC2), which provides secure, resizable capacity for virtually any workload. With Amazon EC2 and powerful components from a host of other AWS offerings, Return Entertainment saved both time and money. These savings gave Return Entertainment's designers the freedom to focus on creating the interactive games that people want to play.

Migrating to AWS was a practical choice for Return Entertainment for several reasons. First was familiarity: the whole Return Entertainment team already knew AWS, having had positive experiences with its offerings through previous work at other companies.
AWS also offered the coverage that the startup needed. Amazon EC2 could provide the scalability that Return Entertainment needed to make cloud gaming accessible worldwide. "AWS has global reach, so we could get GPU machines close to our players," says Paavola. "Using AWS is cost effective because we can serve the games up fast to the players when they want them, whether in the daytime or evening, on weekdays or weekends."

The support provided by AWS was another important factor in Return Entertainment's choice to adopt AWS solutions. The company's engineers work alongside an AWS game tech solutions architect who can quickly respond to the team's needs, give prescriptive guidance or demonstrations, and collaborate to help accelerate game development. "It's been very helpful to have a solutions architect who can double-check our designs and our configurations of products and services to make it all work together," says Emil Kaidesoja, an engineer at Return Entertainment. Based on feedback from the solutions architect, Return Entertainment chose serverless architecture and developed the first version of its cloud-native gaming infrastructure in just a few months.

Return Entertainment can deliver global gaming efficiently because the startup has no servers to maintain on its own. For custom backend functionalities in its games, the company uses Amazon DynamoDB, a fully managed, serverless database service, and AWS Lambda, a serverless, event-driven compute service that can run code for virtually any type of application or backend service without provisioning or managing servers. Using these serverless solutions makes Return Entertainment's game development faster and its operations simpler. Further, the company is ready to scale up or down as needed with ease.

Also crucial to Return Entertainment's development are the monitoring and observation capabilities that AWS services provide. Using Amazon CloudWatch, a service that collects data in the form of logs, metrics, and events, the company can monitor its applications and optimize its resource use. "This is a new field, and no one knows yet how players behave in this environment," says Juha Suihkonen, lead architect at Return Entertainment. "It's critical for us to get data. But it's data that nobody has, so we have to forge our own path."
Low latency is also key to Return Entertainment's ability to deliver a seamless gaming experience. Using Amazon CloudFront, a content delivery network service built for high performance, security, and developer convenience, Return Entertainment can serve cloud content with low latency to create an ideal experience for gamers.

Outcome | Exploring the Potential of the Cloud Using AWS

Using the power of AWS, Return Entertainment is working to lower the boundaries of gaming. Its cloud-native games are designed to be played with others regardless of distance, system, or device—all through one click of a link. The variety, versatility, and reliability of AWS offerings, combined with collaborative support from an AWS solutions architect, empower the startup to explore new horizons in cloud-native gaming.

When everything works well, all this cloud gaming power might go unnoticed by gamers. "In the end, users don't care much about what's underneath the game, as long as the game works," says Antti Sartanen, CEO of Return Entertainment. "You shouldn't even know or care that the games are in the cloud. They'll just be the most shareable, fun games you can play." But for designers, harnessing the power of AWS for next-generation game experiences makes all the difference.

Benefits of AWS

- Custom infrastructure built in just a few months
- Scales easily for global cloud gaming using Amazon EC2
- Optimizes resource use through Amazon CloudWatch
- Low-latency streaming offers a satisfying gaming experience

AWS Services Used

- Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
- AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
- Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data export tools.
- Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience. Securely deliver content with low latency and high transfer speeds."
Revive lost revenue from bad ecommerce search using Natural Language Processing _ AWS for Industries.txt,"AWS for Industries

Revive lost revenue from bad ecommerce search using Natural Language Processing

by Aditya Pendyala and Siddharth Pasumarthy | on 30 MAY 2023 | in Amazon Comprehend, Amazon Kendra, Amazon OpenSearch Service, Amazon Textract, CPG, Industries

Ecommerce sites are supposed to be prompt, precise and, above all, user-friendly. Yet their search performance history reveals an unsatisfactory reality for shoppers and retailers.
According to Baymard Institute, "61% of all ecommerce sites show search results that are misaligned to users' searches," forcing shoppers to either enter a new search or abandon their old one entirely. "The frustration involved in the overall product search experience results in an unacceptable level of churn and burn, about 68%," says Forrester. With Gen Z demanding faster (and more accurate) search results, ecommerce companies are feeling the pressure to modernize their search, but few are choosing to act on it. Those who make this mistake run the serious risk of falling behind their competitors, not just in innovation but in sales too.

In this blog, we'll discuss why keyword-based searches are burning a hole in retailers' pockets and how Amazon Web Services (AWS) can help ecommerce companies earn it back with natural language processing (NLP).

Challenges with keyword-based searches

Not all online shoppers will use the search bar during their shopping experience, but nearly fifty percent do. In its 2022 roadmap report "Must-Have E-Commerce Features," Forrester found that "43% of users on retail websites go directly to a search bar when they first land on a website." This makes prioritizing search results even more important for keeping a customer engaged. Doing so is a lot easier said than done, because most search engines don't understand natural language.

Let's say you're looking for a red dress shirt. You pull up your favorite website and type "men's red dress shirt" into the search bar. Once you do this, the search engine works to understand what you've just written. However, because keyword-based search engines only understand keywords as individual terms, any input outside of this can trigger a misaligned search result. Instead of getting results for a red dress shirt, the search engine might return results for dresses or shirts, not a "dress shirt." For this to change, the search engine needs to understand the search as one term. In other words, it needs to understand the intent of the user.

Common challenges for keyword-based searches are: typos, synonyms and regional dialects, feature-based searches, filter-based searches, context-based searches, and thematic searches.

- Typos: This is when someone accidentally misspells a word in their search. For example, entering "sweeter" as opposed to "sweater."
- Synonyms & regional dialects: This is when a user searches for a word that can have a different, regional meaning. For example, someone might search "shades" instead of "sunglasses" and get completely different results. (Example: multi-billion-dollar retailer – search results for searching "mens shades" instead of "mens sunglasses")
- Feature-based search: This is when a user wants to search for a product with a specific feature. For example, one might search "strap sandal." Keyword-based search engines can only understand keywords, not the intent of the user. Even though sandal and strap are used in the product description, the search engine doesn't identify the search and returns zero results.
- Filter-based search: This is when a user is looking for a particular quality in an item. For example, Earrings under 30, Blue Socks, Polyester upholstery covers, and more. (Example: multi-billion-dollar retailer – search results showing unrelated items from a search request for "Earrings Under 30")
- Context-based search: This is when a user searches for something based on context, not a specific product.
For example, someone might search “drafty window fix” or “cold remedy” to see what products come up within the search. Context-based searches are the most challenging for retailers because users are often searching for keywords that don’t even exist, resulting in zero returns or zero relevant returns.
Thematic search: This is when a user is searching for a product within a thematic category. For example, someone looking for a specific type of rug might search “hallway rug,” as opposed to simply “rug.” Example: multi-million-dollar retailer – search results showing unrelated items from searching “hallway rug” instead of “rug”
“From a user’s point of view, these everyday descriptions are just as correct as the industry jargon, and most of the participants during large-scale testing never thought of trying another synonym when they received poor search results,” states Baymard Institute. “Instead, participants simply assumed that the poor or limited results were the site’s full selection for such products.”
Don’t burn a hole in your pocket
For shoppers and retailers, these issues are frustrating and taint the overall quality of a shopping experience. For retailers, however, the impacts are two-fold, negatively affecting both their customers’ experiences and their company’s financials. If shoppers can’t find the product they’re looking for, retailers can lose out on revenue, a lot of revenue. Just look at the numbers. According to a study by Econsultancy, the average ecommerce conversion rate is 2.77%. But when shoppers use the search bar and find what they are looking for, the average conversion rate increases to 4.63%, nearly double the average. On Amazon.com, the effect is even larger: every time someone searches on Amazon.com and finds what they’re looking for, the conversion rate increases by 6x, so what was once a conversion rate of 2% becomes 12%. Translated into revenue, this is a huge financial jump for ecommerce companies.
How can AWS help refine your ecommerce search?
AWS offers artificial intelligence and machine learning (AI/ML) services like Amazon Comprehend, Amazon Kendra, Amazon Textract, and Amazon OpenSearch Service that together can be used to improve ecommerce search capabilities.
Amazon Comprehend is a natural language processing service that uses machine learning to find meaning, insights, and connections in text. This service equips your search engine to index key phrases, entities, and sentiment to improve search performance. Amazon Comprehend learns over time, uncovering valuable insights from text in documents, customer support tickets, product reviews, emails, and social media feeds. With Amazon Comprehend, users can:
Mine business and call center analytics: Extract insights from customer surveys to improve your products.
Index and search product reviews: Focus on context by equipping your search engine to index key phrases, entities, and sentiment, not just keywords.
Amazon Kendra is an ML-based intelligent search engine that understands natural language. This intelligent enterprise search service helps you search across different content repositories with built-in connectors, giving users highly accurate answers without the need for machine learning expertise. A brief sketch of the review-enrichment idea follows below.
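To make the review-indexing idea concrete, here is a minimal sketch, assuming a hypothetical product review, that calls the standard Amazon Comprehend DetectKeyPhrases and DetectSentiment APIs through the Python SDK; the review text and printed fields are illustrative only, not from the original post.

import boto3

comprehend = boto3.client("comprehend")

# Hypothetical product review; in practice this comes from your review store
review = "The strap sandal runs small, but the leather quality is excellent."

# Extract key phrases such as "strap sandal" to index alongside raw keywords
phrases = comprehend.detect_key_phrases(Text=review, LanguageCode="en")
for phrase in phrases["KeyPhrases"]:
    print(phrase["Text"], round(phrase["Score"], 2))

# Capture overall sentiment so the search index can boost well-reviewed items
sentiment = comprehend.detect_sentiment(Text=review, LanguageCode="en")
print(sentiment["Sentiment"])

Indexing the extracted phrases and sentiment next to the original text is one way a keyword engine can begin matching feature-based queries like “strap sandal” instead of returning zero results.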
Amazon Textract is a ready-to-use ML service that automatically and accurately extracts text, handwriting, and data from scanned documents with no manual effort. Across industries, Amazon Textract can be used to keep data organized and in its original context, as well as to eliminate manual review of output.
Amazon OpenSearch Service is an open-source, distributed search and analytics suite that enables you to perform interactive log analytics, near real-time application monitoring, and website search. With OpenSearch Service, users can quickly find relevant data with a fast, personalized search experience within their applications, websites, and data lake catalogs.
Conclusion
Even with billions of dollars in sales, retailers are still losing out on revenue thanks to poor search performance capabilities. However, it doesn’t have to be that way. When used together, AWS services like Amazon Comprehend, Amazon Kendra, Amazon Textract, and Amazon OpenSearch Service can help eliminate this problem. They can create a powerful, improved search experience so retailers can finally focus on lifting revenue, not lowering it. Discover ways you can improve retail search performance and start boosting revenue with AWS AI/ML services. Learn more about AWS for consumer packaged goods (CPG) or contact an AWS Representative.
Further Reading: Building Blocks for Modern Retail Ecommerce and Media Search with AWS; Tech Analysis with Amazon OpenSearch Service and Amazon Comprehend; Building an NLU-powered search application with Amazon SageMaker and the Amazon OpenSearch Service KNN feature
TAGS: aws, eCommerce, Natural Language Processing (NLP)
Aditya Pendyala: Aditya is a Senior Solutions Architect at AWS based out of NYC. He has extensive experience in architecting cloud-based applications. He is currently working with large enterprises to help them craft highly scalable, flexible, and resilient cloud architectures, and guides them on all things cloud. He has a Master of Science degree in Computer Science from Shippensburg University and believes in the quote “When you cease to learn, you cease to grow.”
Siddharth Pasumarthy: Siddharth is a Solutions Architect based out of New York City. He works with enterprise retail customers in the fashion and apparel industry to help them migrate to the cloud and adopt cutting-edge technologies. He has a B.S. in Architecture from the Indian Institute of Technology and an M.S. in Information Systems from the Kelley School of Business. In addition to keeping up to date with technology, he is passionate about the arts and creates still-life acrylic paintings in his free time."
Revolutionizing Manufacturing with Sphere and Amazon Lookout for Visions XR and AI Integration _ AWS Partner Network (APN) Blog.txt,"AWS Partner Network (APN) Blog: Revolutionizing Manufacturing with Sphere and Amazon Lookout for Vision’s XR and AI Integration, by Arun Nallathambi (Sr. Partner Solutions Architect, AWS), Colin Yao (CTO, Sphere), and Alexandra Corey (Head of Marketing, Sphere), 13 JUL 2023, in Amazon Lookout for Vision, Artificial Intelligence, AWS Marketplace, AWS Partner Network, Case Study, Customer Solutions, Industries, Intermediate (200), Manufacturing, Thought Leadership. Sphere and Amazon Lookout for Vision are revolutionizing the way that high-value equipment and machines are assembled, maintained, and operated.
By combining extended reality (XR) with artificial intelligence (AI), the integration gives manufacturing customers a cutting-edge tool to uncover process issues, identify missing components, detect damaged parts, and more. In this post, we will explore use cases in which the enhanced training procedures and advanced analytics afforded by Sphere and Amazon Lookout for Vision can be applied to real-world scenarios. Sphere is an AWS Partner and AWS Marketplace Seller that’s an immersive collaboration developer and provider, supporting enterprise teams in boosting their bottom line through XR. Sphere is used by leading businesses that are looking to increase productivity, optimize supply chain operations, connect workers worldwide, and reduce errors, safety risks, and environmental footprints.
Sphere Overview
Sphere is device-agnostic, working with the market’s widest range of augmented, virtual, and assisted reality headsets. It also operates on smartphones, tablets, and PCs. In addition, it’s agnostic across conferencing tools, as well as leading enterprise resource planning (ERP), product lifecycle management (PLM), and customer relationship management (CRM) software. Heavily adopted by the manufacturing, automotive, healthcare, and defense sectors, Sphere’s turnkey solution provides tools for workforce collaboration, enhanced training, access to remote experts, and holographic build planning. Each of Sphere’s add-on packages, including Sell, Connect, Build, and Train, is offered in a single, streamlined platform. Sphere’s integration with Amazon Lookout for Vision is an extension of the company’s Train package. Sphere Train enables immersive guidance in the training, operation, and maintenance of critical equipment and machines. Workflows included in the package consist of a sequence of steps that each contain text instruction, along with optional spatial indicators featuring reference assets and operator actions. Sphere supports 60+ file types, enabling users to bring any media content into XR. These include CAD models, multiple document types, video and audio files, and more. Workflows are automatically saved, generating a report that provides valuable operational insight.
Figure 1 – Operator connecting and collaborating to get expert help in XR environment.
Benefits of Amazon Lookout for Vision
Amazon Lookout for Vision is a cloud-based machine learning (ML) service offered by Amazon Web Services (AWS) that enables you to create and train computer vision models to analyze images. Customers use these models to detect anomalies at scale, such as detecting damaged parts, identifying missing components, and uncovering process issues in a setup, and use these visuals to take corrective actions. Amazon Lookout for Vision enables customers to easily and quickly create ML models with the goal of preventing avoidable downtime and reducing supply chain disruptions. Organizations in manufacturing, healthcare, and more use Amazon Lookout for Vision to build image-based inspection processes that are more scalable, reliable, and faster, and that reduce dependency on manual labor.
Powering Precision with Sphere and Amazon Lookout for Vision
Sphere’s integration with Amazon Lookout for Vision amplifies critical XR use cases to support machine maintenance, uptime, and worker effectiveness. The platform is deployed in real-world environments, generating return on investment (ROI) through manufacturing risk reduction using XR combined with AI functionality.
By contrasting expected results with actual outputs during Sphere-powered workflows, the integration enables enterprises to move from a retroactive review of completed work to on-demand feedback and verification. Real-time error avoidance saves Sphere customers millions of dollars annually.
Example: Combining XR with AI
Let’s review an example to help illustrate the integration of Sphere and Amazon Lookout for Vision. As part of the mounting procedure for a precision measurement machine, pins must be placed in extremely specific positions on the holding apparatus. Like all applied AI/ML applications, the solution begins with data. Specifically, we use image data of “normal” expected results, as well as images of “defects” or “anomalies.” Image samples are collected featuring both normal and anomalous cases, and then fed into Amazon Lookout for Vision. In this context, training a model is simple and requires a limited sample to get started.
Figure 2 – Mount piece for precision measurement machine.
Amazon Lookout for Vision allows us to train models for specific scenarios in a powerful way. Not only can customers create models that recognize whether the pins are in the correct place, they can also extend them to tell which pins specifically are misplaced. Amazon Lookout for Vision allows users to create classification models that determine whether an anomaly is present in the input image. This scenario can be thought of as a straightforward pass or fail. However, this can be taken a step further by training image segmentation models, which give the location of an anomaly in the image through semantic segmentation. Although this segmentation takes more input data and training, the contextual information can be extremely useful. Once a model is trained, it can be reused continuously to help technicians and operators increase the accuracy of their work. Onsite employees can put on their XR headset and begin the step-by-step procedure that guides them through the setup for the precision measurement machine. With Sphere’s XR solution, the user is spatially guided through the process and receives cues as to where they need to take action, as well as key points of interest to keep in mind.
Figure 3 – Operator following instruction and workflow in XR environment.
The operator arrives at a step that requires them to set up the mounting apparatus. Once they feel the work has been conducted correctly, they can capture a photo using Sphere which, together with Amazon Lookout for Vision, automatically verifies whether the step was precisely completed. Sphere allows all of the above to be conducted safely and efficiently, while remaining hands-free and unencumbered. What Amazon Lookout for Vision provides is a confidence interval, which can be combined with Sphere to build complex workflows with configurable conditions for acceptable quality. If the setup is done correctly, the operator can move forward with running the measurement procedure. If not, and the confidence is low, Sphere will prompt the user to double-check pin placement and otherwise provide guidance as to which pins are specifically misaligned. Alternatively, if confidence lies in a gray zone, it may suggest the operator use Sphere to call a remote expert and get a second opinion before continuing; a minimal sketch of this kind of confidence-based gating appears below.
Figure 4 – Amazon Lookout for Vision powers Sphere to conduct quality check on XR space.
Through the standard usage of Sphere, combined with Amazon Lookout for Vision, these recognition models improve over time with increased input.
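To make the confidence-based gating concrete, here is a minimal sketch that calls the Amazon Lookout for Vision DetectAnomalies API from Python; the project name, model version, image file, and 0.70 gray-zone threshold are hypothetical placeholders, not Sphere's actual configuration.

import boto3

lookoutvision = boto3.client("lookoutvision")

# Hypothetical project and model version for the pin-placement check
PROJECT_NAME = "mount-pin-inspection"
MODEL_VERSION = "1"

# Assumes the model version has already been trained and started
with open("mount_setup.jpg", "rb") as image_file:
    response = lookoutvision.detect_anomalies(
        ProjectName=PROJECT_NAME,
        ModelVersion=MODEL_VERSION,
        Body=image_file.read(),
        ContentType="image/jpeg",
    )

result = response["DetectAnomalyResult"]

# Illustrative thresholds for the pass / gray zone / fail workflow described above
if not result["IsAnomalous"]:
    print("Setup verified; proceed with the measurement procedure.")
elif result["Confidence"] < 0.70:
    print("Confidence in the gray zone; suggest calling a remote expert.")
else:
    print("Anomaly detected; prompt the operator to double-check pin placement.")

A workflow engine such as Sphere's can wrap exactly this kind of branch in its step logic, swapping the print statements for XR prompts.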
Verification attempts are reused to offer more training data beyond the initial training dataset. By creating this continuous feedback loop, Sphere allows companies to further refine the models, adapt them to their changing requirements, and account for temporal deviations that may present themselves.
Case Study: Micron’s Deployment of the Solution
Micron Technology, a Sphere customer as well as an investor, uses the platform to provide frontline workers the necessary tools for improving business efficiency. For Micron, access to digitized training functionality with paperless reporting is a step in the right direction when it comes to standard operating procedure (SOP) compliance traceability. However, work performance oversight is just one piece of the puzzle, as it doesn’t prevent process mistakes in the first place. Errors are often paired with costly consequences requiring rework and retroactive corrections, all of which is avoidable if flagged sooner. Sphere has allowed Micron to increase machine availability by 2% and save over 3,000 hours of machine downtime annually. With Sphere plus Amazon Lookout for Vision, Micron gains real-time insight into whether a job is being performed correctly, allowing operators to act immediately if something goes wrong. “For Micron, Sphere is a critical component of business continuation,” says Ning Khang Lee, Director of Smart Manufacturing and AI at Micron. “We use Sphere to connect multinational teams, effectively train workers, and give ourselves an operational edge in the competitive semiconductor market.” Many of Micron’s procedures require complete hands-free usage, making Sphere’s XR solution a natural fit. For example, complex machine maintenance involves many physical steps which must be conducted by a technician in the correct order. Moving away from the machine to check instructions in a booklet or on a computer is inefficient, unsafe, and can easily lead to errors that result in significant disruptions to the supply chain. Sphere’s Train package allows the technician to remain focused on the task as they’re guided by detailed, holographic workflow steps that are anchored to the appropriate region of the machine. Amazon Lookout for Vision harnesses AI to add a further layer of risk reduction.
Conclusion
The manufacturing industry is being revolutionized by the introduction of extended reality (XR) and AI technologies, which have brought about numerous benefits in terms of efficiency and risk reduction. By combining Sphere’s productivity and collaboration platform with Amazon Lookout for Vision’s ability to train and continuously reuse models, the integration provides a streamlined solution for customers to improve SOP compliance, reduce machine downtime, and eliminate costly errors. You can learn more about Sphere in AWS Marketplace.
Sphere – AWS Partner Spotlight
Sphere is an AWS Partner and immersive collaboration developer and provider which supports enterprise teams in boosting their bottom line through extended reality (XR).
Contact Sphere | Partner Overview | AWS Marketplace
TAGS: AWS Partner Guest Post, AWS Partner References, AWS Partner Solutions Architects (SA), AWS Partner Success Stories"
Rivian Case Study _ Automotive _ AWS.txt,"Rivian pushes the pace of automotive innovation with AWS (2021)
About Rivian: Rivian is an electric vehicle maker and automotive technology company. It designs and manufactures vehicles and offers services related to sustainable transportation.
Rivian depends on computer-aided engineering tools to extend vehicles’ range and maintain high safety standards. But in early 2020, one of the company’s on-premises high-performance computing clusters failed, reducing its compute capacity by half. Rivian looked to the cloud to overcome this challenge. X-ISS, an AWS Select Consulting Partner, provides system and application technical support to Rivian’s computer-aided engineering team.
Accelerating Innovation with Efficient Compute
On AWS, the speed of Rivian’s software tools has improved by up to 66 percent, and Rivian can load a full vehicle bill of materials in 22 minutes.
Using the Breadth of AWS Services
Using AWS CloudFormation, which enables users to speed up cloud provisioning, Rivian can deploy automatically through continuous integration/continuous delivery. Backup synchronization, which before took up to 1 day, now takes less than 1 hour. “As Rivian grows at a rapid pace, we need a highly scalable system,” says Surendra Balu, Rivian’s 3DExperience technical lead. “Changes that took 5 days now occur within minutes.” On AWS, interaction with product lifecycle management has increased 66 percent. Rivian also improved failover using Amazon EC2 Auto Scaling, which helps users maintain application availability.
Benefits of AWS: Improved availability of compute resources; Increased software speed by up to 66%; Enabled collaboration through shared storage; Reduced need for physical prototypes.
AWS Services Used: Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon FSx for Lustre is a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads. Many workloads such as machine learning, high performance computing (HPC), video rendering, and financial simulations depend on compute instances accessing the same set of data through high-performance shared storage.
To meet accelerated engineering schedules and reduce the need for physical prototypes, electric vehicle manufacturer Rivian relies on advanced modeling and simulation techniques. Using high compute capacity, simulations enable engineers to test new concepts and bring their designs to market quickly. In 2020, Rivian found that its on-premises research and development information technology infrastructure could not keep up with its performance needs. Resource bottlenecks affected product lifecycle management, computer-aided design, and computer-aided engineering, so Rivian began using Amazon Web Services (AWS) to architect an agile engineering environment.
Rivian Executes Vision of Agile Engineering on AWS
“Our engineers expected the fix to take 6 months,” says Madhavi Isanaka, Rivian’s chief information officer. Instead, Rivian built a new compute cluster on AWS. “In 3 weeks, we had a working proof of concept on AWS,” says Isanaka. After that success, the company migrated its production environments. In the cloud, Rivian’s engineers can access and automate resources on demand. The company uses Amazon EC2 C5 Instances, which deliver cost-effective high performance at a low price per compute ratio. By using Amazon EC2 C5n Instances and Elastic Fabric Adapter (EFA), a network interface for Amazon EC2 instances, Rivian’s engineers can scale out to a larger number of cores. The company also uses Scale-Out Computing on AWS, which helps customers deploy and operate multiuser environments. “In early product development stages, we don’t have many physical vehicles, so we use AWS to bring the design space to life,” says Isanaka.
Optimizing for Efficiency and Innovation
Using Amazon FSx for Lustre, a fully managed storage service, Rivian can access shared storage quickly. And after consulting AWS Professional Services, a global team of experts, Rivian improved data availability using Amazon Relational Database Service (Amazon RDS), which makes it easy to set up, operate, and scale a relational database in the cloud. Rivian plans to continue migrating workloads to AWS, enabling more seamless postprocessing and visualization. “People who were skeptical about high-performance computing in the cloud are more open minded after seeing our results on AWS,” says Isanaka. “This is accelerating adoption across the board.”"
Rumah Siap Kerja (RSK) Case Study - Amazon Web Services (AWS).txt,"Rumah Siap Kerja Pivots to a Cloud-based E-Learning Platform in 2 Months on AWS
RSK runs Amazon Relational Database Service (Amazon RDS) to automate time-consuming administration tasks, such as hardware provisioning, patching, and backups. This means RSK’s IT team now spends less time on infrastructure maintenance and can redirect its focus to developing new products and improving features.
These include RSK’s mobile app, which was launched in 2022. The app complements its online LMS, allowing users to attend training sessions on the go or to seek career coaching services. In 2022, RSK also introduced an entrepreneurship training course via its mobile app that helps aspiring entrepreneurs start their own businesses.
During the COVID-19 pandemic, Indonesia experienced its highest level of unemployment in nearly a decade, and demand for online professional training surged. RSK realized it needed to pivot its business and build an e-learning platform to deliver its courses and programs. “With AWS, everything from security to scalability is built-in and fully managed in the cloud. This lets us focus on delivering high-quality, high-value education to our users. AWS and Elitery have supported us in completely transforming our business model during the pandemic, as well as sustaining our subsequent business growth,” shared Risyad, head of IT at RSK.
With its LMS in the cloud, RSK could utilize pre-recorded videos and video conferencing apps to train its members virtually. The new LMS features built-in assessment tools, which give RSK’s trainers and users a comprehensive view of the entire learning journey. RSK also uses the LMS to centrally manage training, track students, and report analytics, saving up to 10 hours per week on administrative tasks.
In 2021, on AWS’s recommendation, RSK integrated Amazon CloudFront with Amazon Simple Storage Service (Amazon S3) to deliver its video content. Built for high performance and security, the content delivery network service has helped RSK halve data transfer charges. RSK also uses Amazon Simple Email Service (Amazon SES) to support its user registration process and marketing campaigns. Previously, the system was unable to process more than 5,000 registrations per day, leading to user validation errors during registration. Using Amazon SES, RSK can quickly scale and has not encountered validation errors since.
RSK deployed its LMS on Amazon Elastic Compute Cloud (Amazon EC2) with Amazon EC2 Auto Scaling to grow or shrink compute capacity depending on demand. While the platform handles about 500 concurrent users on average, it can sometimes reach as many as 20,000 concurrent users. With a scalable cloud-based infrastructure, RSK can deliver consistent, high-quality training video content even during traffic spikes; a minimal sketch of this kind of scaling policy follows below.
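As a rough sketch of the elastic behavior described above, the following attaches a target-tracking policy to an Auto Scaling group using the standard boto3 PutScalingPolicy call; the group name and the 60 percent CPU target are illustrative assumptions, not RSK's actual settings.

import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical Auto Scaling group backing the LMS web tier
autoscaling.put_scaling_policy(
    AutoScalingGroupName="lms-web-tier",
    PolicyName="keep-cpu-near-60-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        # Add instances as average CPU rises above the target during spikes,
        # and remove them again as load falls back toward the baseline
        "TargetValue": 60.0,
    },
)

With a policy like this in place, the group grows toward its configured maximum during a 20,000-user spike and shrinks back afterward, so capacity is paid for only while it is needed.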
Supporting RSK’s Pivot to Cloud-based Training
RSK turned to the cloud to flexibly adapt to changing pandemic conditions without overcommitting budgets. It chose to work with Amazon Web Services (AWS) as the organization already had a good experience hosting its website on the AWS Cloud. In 2020, RSK began working with Elitery, an AWS Advanced Tier Services Partner, to set up its e-learning platform on the AWS Cloud.
Achieving Cost Reductions and Improved Customer Service
RSK was able to design, build, and deploy a full-fledged Learning Management System (LMS) in just 2 months. It successfully grew its user base by up to 300 percent within a year and delivered more than 3,700,000 hours of training to at least 500,000 users. As of 2022, RSK has over 1,496 courses on the platform, with 2,000 training videos equivalent to a total of over 3,700,000 viewing hours.
Going Serverless to Improve End-User Experience
Looking ahead, RSK plans to adopt Amazon Aurora to power its performance-intensive applications in a serverless, fully managed database environment. This hands-off approach to capacity management will allow RSK to focus on expanding its suite of products and features, thus creating a more engaging and comprehensive learning experience for its users.
Benefits of AWS: Designed, built, and deployed a cloud-based LMS within 2 months; Can automatically resize compute capacity to handle up to 20,000 concurrent users per day; Reduced data transfer charges by 50 percent.
AWS Services Used: Amazon CloudFront is a content delivery network (CDN) service built for high performance, security, and developer convenience. Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Amazon Simple Email Service (SES) lets you reach customers confidently without an on-premises Simple Mail Transfer Protocol (SMTP) system.
About Rumah Siap Kerja (RSK)
Rumah Siap Kerja (RSK) is an Indonesia-based education technology startup that provides professional and entrepreneurship training, and career coaching services. Founded in 2019, RSK was established wholly offline with face-to-face trainings, working with more than 50 trainers across a range of skill sets, expertise, and industries.
To learn more, visit https://aws.amazon.com/education. (2022)"
Run Jobs at Scale While Optimizing for Cost Using Amazon EC2 Spot Instances with ActionIQ _ ActionIQ Case Study _ AWS.txt,"Headquartered in New York City, ActionIQ operates a CDP for business, marketing, and analytics that operates on a software-as-a-service model. It helps companies derive business intelligence using data that they already own to improve customer engagement and drive revenue. Previously, ActionIQ ran its solution using Amazon EC2 Reserved Instances, which provide a significant discount compared with On-Demand pricing and provide a capacity reservation when used in a specific Availability Zone. “This compute system is used by every team in the company for data processing,” says Mitesh Patel, tech lead at ActionIQ. “If our system is not running, our teams cannot meet their customers’ SLAs.”
Learn how ActionIQ is powering its enterprise customer data platform using Spot Instances. (2023)
For ActionIQ, deriving fast insights is critical. The software-as-a-service (SaaS) company operates a powerful customer data platform (CDP) that helps large enterprises better understand their customers and improve their experiences. To help its enterprise customers run more workloads in parallel and meet its service-level agreements (SLAs), ActionIQ wanted to improve the scalability and cost-effectiveness of its system.
Benefits: Optimizes compute costs; Expands customers’ analytics capabilities; Cost-effectively runs hundreds of parallel jobs per customer; Runs thousands of jobs at scale; 100x increase in concurrency for customer workloads.
About ActionIQ: ActionIQ empowers everyone to be a customer experience champion. Its solutions give business teams the freedom to explore and act on customer data while helping technical teams better manage data governance, costs, and performance.
AWS Services Used: Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload. Amazon EC2 Reserved Instances (RI) provide a significant discount (up to 72%) compared to On-Demand pricing and provide a capacity reservation when used in a specific Availability Zone. On-Demand Instances let you pay for compute capacity by the hour or second (minimum of 60 seconds) with no long-term commitments. Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices.
ActionIQ saw an opportunity to optimize for both scale and cost by choosing a different pricing option on Amazon Web Services (AWS). The company adopted Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances, which run fault-tolerant workloads at up to a 90 percent discount compared with Amazon EC2 On-Demand Instances, which let companies pay for compute capacity by the hour or second. By making this change, ActionIQ has reduced its compute costs and positioned its business for future growth. Using this solution, ActionIQ has significantly optimized its compute costs. The hourly price for Spot Instances is $1.93 compared with $3 per hour for Reserved Instances. ActionIQ runs anywhere between 10–500 machines at any given time, and by adopting Spot Instances, it has unlocked significant cost savings. Like with On-Demand Instances, ActionIQ pays only for the capacity it uses when using Spot Instances, instead of having infrastructure always running. This benefit has further optimized its costs. Additionally, ActionIQ has an AWS Savings Plan in place, which reduces costs for workloads that cannot be interrupted.
In about 6 months, ActionIQ transitioned its Reserved Instances to Spot Instances. The company can now run thousands of customer workloads in a way that meets the time constraints set by its SLAs, benefiting customers and internal teams alike. “We had to build on top of Spot Instances to achieve our SLAs, making changes like building resilience across Availability Zones,” says Patel. “We’ve made a lot of progress and have gotten to a stage where we do not need to tune our clusters. We can now predict how they are going to behave at any point, given some traffic.” A minimal sketch of requesting interruption-tolerant Spot capacity of this kind follows below.
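As a hedged sketch of what requesting interruption-tolerant capacity can look like, the following launches Spot Instances with the standard EC2 RunInstances call; the AMI, subnet, instance type, and counts are placeholders, not ActionIQ's actual configuration.

import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder AMI
    InstanceType="r5.4xlarge",              # placeholder instance type
    MinCount=1,
    MaxCount=10,
    SubnetId="subnet-0123456789abcdef0",    # placeholder subnet
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # One-time requests terminate on interruption; fault-tolerant
            # batch jobs are simply retried on replacement capacity
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print([instance["InstanceId"] for instance in response["Instances"]])

Spreading requests like this across Availability Zones, as Patel describes above, is what lets a Spot-based fleet absorb interruptions without missing SLAs.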
Opportunity | Using Amazon EC2 Spot Instances to Reduce Costs for ActionIQ
ActionIQ found Reserved Instances to be highly reliable for running its customers’ workloads. However, the volatile nature of workload demand meant there wasn’t a steady, simple-to-predict compute resource requirement. This resulted in ActionIQ paying for Reserved Instances even when workloads were not running, which incurred unnecessary costs. In 2019, ActionIQ chose to adopt Spot Instances so that it could optimize for cost and scale more effectively in response to its customers’ needs. “Spot Instances was a two-for-one solution,” says Patel. “We could achieve the scalability that we wanted because we did not need to prepay for machines in advance.”
Solution | Scaling Cost-Effectively to Run Hundreds of Concurrent Jobs per Customer
With Spot Instances, ActionIQ can scale to run 50,000 workloads and counting without needing to define a long-term commitment for its compute capacity needs. The company can onboard new customers and datasets quickly and as needed. Because ActionIQ can scale to run more workloads, its customers no longer experience long wait times or backlogs when they need to use the platform. They can add more data to the system, run jobs, and receive results much faster, which improves their speed of innovation. “Before we adopted Spot Instances, our customers regularly had to wait because their jobs were placed in a queue,” says Patel. “Now, there isn’t any backlog anymore because we can scale, and we have constructed our automatic scaling algorithms to prevent these wait times.” Using Spot Instances, ActionIQ has achieved greater scalability and can run highly complex workloads for its customers. Customers can build segments that are much more complex and run 100 times more workloads in parallel than they could previously. As a result, customer workloads have become 10 times more complex. Its enterprise CDP customers can derive even more value from their data without having to worry about whether the solution can handle their requests. ActionIQ is well positioned for future growth because it can scale more effectively to meet its customers’ compute demands.
Outcome | Helping Businesses Derive Better Insights for Customer Engagement on AWS
By adopting Spot Instances, ActionIQ has opened up a world of opportunities for its business. In the future, the company plans to optimize its machines based on job types and build out its HybridCompute composable architecture feature, which will help customers connect their own datasets from other systems to the ActionIQ platform. “Our competitors can’t effectively derive business value from such a large dataset in a way that could make it truly usable,” says Joffe. “Our system’s ability to handle the size and complexity of the datasets that we work with is a key differentiating factor, and we can accomplish this by using Spot Instances.”
ActionIQ can also run thousands of concurrent jobs per customer in a much more cost-effective way compared with Reserved Instances. As a result, its customers can expand their analytics capabilities. “Because of the scale and the flexibility that we have gained by using Spot Instances, we can handle larger and more complex workloads than ever before,” says Nitay Joffe, chief technology officer at ActionIQ. “We can scale our storage and query capabilities across massive datasets, and we know that we are backed by Amazon EC2.”"
Rush University System for Health Creates a Population Health Analytics Platform on AWS _ Rush Case Study _ AWS.txt,"Building on its highly successful COVID-19 analytics hub with support from Amazon Web Services (AWS), RUSH developed the Health Equity Care & Analytics Platform (HECAP). This platform transforms, aggregates, and harmonizes data from different sources to reflect the complex interplay of clinical and social factors on patient health. HECAP uses advanced analytics to provide actionable insights for patients and providers, which RUSH is using to enhance care outcomes and reduce health inequities in Chicago’s West Side. (2023)
RUSH runs analytics models using Amazon SageMaker, a service that lets users build, train, and deploy machine learning models for any use case. Using Amazon SageMaker, RUSH can identify different factors that could influence health outcomes and generate a risk stratification score, which it uses to identify the most at-risk patients. RUSH queries data using Amazon Athena, an interactive query service that makes it simple to analyze data directly from Amazon HealthLake. Amazon Athena also integrates with Amazon SageMaker so that data scientists can prepare data for machine learning. “One of the biggest challenges that data scientists face is that models are complex, and joining data from multiple sources can be cumbersome,” says Saldanha. “With the low-code environment on Amazon SageMaker, we can simplify healthcare data analysis and also minimize errors, which is very important.” RUSH can then present data to providers using dashboards on Amazon QuickSight, a service that powers data-driven organizations with unified business intelligence at hyperscale. Using this information, providers can make critical decisions about each patient’s care and connect them with important resources like food banks, support for utility payments, and transportation.
About Rush University System for Health
Established in 1837, RUSH is a leading academic healthcare system that encompasses three major hospitals and numerous outpatient care facilities. The system primarily serves Chicago’s West Side residents, who have a lower life expectancy than residents of wealthier sections of the city. “Our patients who live in the most disadvantaged neighborhoods are living 16 years less than our patients from more affluent areas,” says Dr. Michael Cui, internal medicine physician and associate chief medical informatics officer at RUSH. “Our goal with HECAP is to improve these documented, long-standing healthcare disparities.” Using HECAP, RUSH can aggregate all available data about a patient and run analytics models and tools to help guide healthcare decisions. The solution collects data from several sources, including the Epic electronic health record (EHR), blood pressure readings, social determinant of health surveys, and claims history.
Opportunity | Using AWS Services to Identify Health Disparities and Advance Health Equity
In addition to medical conditions and lifestyle behaviors, certain factors such as housing, transportation, and access to food, known as the social determinants of health, help healthcare providers understand differences in health status. Patient data can be difficult to capture because it is often siloed across different providers and service organizations. Some data points are often unstructured, such as patient-generated data. Other information is sometimes unavailable, such as employment and neighborhood safety data. Clinicians at RUSH sought to identify the breadth of issues that contribute to the life expectancy gap, so they embarked on a project to make patient data more accurate and actionable. “First, we built a solution on AWS to bring data from multiple sources into a single pane of glass. We successfully enhanced citywide coordination for the COVID-19 pandemic response,” says Anil Saldanha, chief innovation officer of RUSH. “When the Robert Wood Johnson Foundation gave us an additional grant, we expanded the platform capabilities to develop and launch HECAP, with the support of AWS and its Health Equity Initiative.”
Solution | Developing a Comprehensive Picture of Patient Risk Using Amazon HealthLake
The platform uses Amazon HealthLake, a HIPAA-eligible service offering healthcare and life sciences companies a unified view of individual and population data to inform analysis and intervention at scale. Amazon HealthLake supports Amazon Comprehend Medical, a HIPAA-eligible natural language processing service that extracts key information from text such as physician’s notes and discharge summaries in the EHR. Using this service, RUSH can transcribe and link important data, such as medications and procedures, to standardized medical terminologies, like ICD-10-CM and RxNorm. HECAP can then extract relevant information from this data to derive further insights. “When we are successfully bringing data from multiple sources and we have identified the appropriate machine learning models, we do something called risk stratification,” says Saldanha. “Using these results, we can identify actionable interventions for health equity. Our clinicians and support staff can intervene and make changes to care delivery and other services so that we can improve patient outcomes.”
Benefits: Produces risk score using clinical, social, and patient-generated data.
AWS Services Used: Amazon SageMaker is built on Amazon’s two decades of experience developing real-world ML applications, including product recommendations, personalization, intelligent shopping, robotics, and voice-assisted devices. Amazon Comprehend Medical is a HIPAA-eligible natural language processing (NLP) service that uses machine learning that has been pre-trained to understand and extract health data from medical text, such as prescriptions, procedures, or diagnoses. A minimal sketch of this kind of terminology linking follows below.
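To illustrate the terminology linking described above, here is a minimal sketch, assuming a hypothetical note snippet rather than actual RUSH data, that uses the standard Amazon Comprehend Medical InferICD10CM and InferRxNorm APIs.

import boto3

comprehend_medical = boto3.client("comprehendmedical")

# Hypothetical snippet of a physician's note; not actual patient data
note = "Patient reports hypertension. Prescribed lisinopril 10 mg daily."

# Link medical conditions in free text to ICD-10-CM codes
icd10 = comprehend_medical.infer_icd10_cm(Text=note)
for entity in icd10["Entities"]:
    for concept in entity.get("ICD10CMConcepts", [])[:1]:
        print(entity["Text"], "->", concept["Code"], concept["Description"])

# Link medications to RxNorm codes
rxnorm = comprehend_medical.infer_rx_norm(Text=note)
for entity in rxnorm["Entities"]:
    for concept in entity.get("RxNormConcepts", [])[:1]:
        print(entity["Text"], "->", concept["Code"], concept["Description"])

Writing the top-ranked codes back into the record alongside the source text is what makes unstructured notes queryable next to structured clinical data.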
Rush University System for Health (RUSH) is a nationally recognized health system leader in quality and health equity. The hospital network is committed to addressing the underlying causes of the 16-year life expectancy gap among minority and lower-income residents of Chicago’s West Side. RUSH sought to build a comprehensive analytics solution to identify and inform scalable interventions for equitable healthcare based on clinical, cardiometabolic, and social needs. “We have a great opportunity to start bringing in more data from different sources and use the power of AWS to scale massively across our system, significantly benefiting the care of our patients in Chicago,” says Saldanha. “We want to make HECAP a blueprint that we hope other organizations will use to advance health equity across the United States.” Rush University System for Health (RUSH) is an academic healthcare system based in Chicago, Illinois. RUSH comprises three major hospitals, a wide network of medical providers, and numerous outpatient care facilities.
Outcome | Advancing Health Equity in the United States through Data Interoperability and Advanced Analytics
RUSH is continuing to build out HECAP by adding more functionality to the provider dashboard, such as enhancing risk prediction modeling and implementing additional tools to enhance care for underserved populations. Using the methodology and architecture that it developed on AWS, RUSH hopes to expand the solution to support other healthcare organizations and improve outcomes for patients everywhere. Using HECAP on AWS, RUSH can provide its clinicians with a complete picture of their patients and provide patients with tools for better health. “As a clinician, it is incredibly important to see patient data from multiple sources,” says Cui. “Being able to bring in machine learning tools from AWS to analyze this data is a game changer. As a healthcare system, we can take better care of our patients and access a new and richer data source than we currently have access to.”
Benefits: Builds a complete patient profile to guide clinical and community intervention; Aggregates data from multiple sources using HIPAA-eligible services; Advances health equity for minority and underserved patient populations.
AWS Services Used: Amazon QuickSight powers data-driven organizations with unified business intelligence (BI) at hyperscale. Amazon HealthLake is a HIPAA-eligible service offering healthcare and life sciences companies a chronological view of individual or patient population health data for query and analytics at scale.
Learn how Rush University System for Health is using AWS to identify disparities and advance health equity.
RUSH HECAP Architecture Diagram"
Safe image generation and diffusion models with Amazon AI content moderation services _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog: Safe image generation and diffusion models with Amazon AI content moderation services, by Lana Zhang, James Wu, John Rouse, and Kevin Carlson, 28 JUN 2023, in Advanced (300), Amazon Comprehend, Amazon Rekognition, Amazon SageMaker JumpStart, Generative AI. Generative AI technology is improving rapidly, and it’s now possible to generate text and images based on text input. Stable Diffusion is a text-to-image model that empowers you to create photorealistic applications. You can easily generate images from text using Stable Diffusion models through Amazon SageMaker JumpStart. The following are examples of input texts and the corresponding output images generated by Stable Diffusion. The inputs are “A boxer dancing on a table,” “A lady on the beach in swimming wear, water color style,” and “A dog in a suit.” Although generative AI solutions are powerful and useful, they can also be vulnerable to manipulation and abuse. Customers using them for image generation must prioritize content moderation to protect their users, platform, and brand by implementing strong moderation practices to create a safe and positive user experience while safeguarding their platform and brand reputation. In this post, we explore using AWS AI services Amazon Rekognition and Amazon Comprehend, along with other techniques, to effectively moderate Stable Diffusion model-generated content in near-real time. To learn how to launch and generate images from text using a Stable Diffusion model on AWS, refer to Generate images from text with the stable diffusion model on Amazon SageMaker JumpStart.
Solution overview
Amazon Rekognition and Amazon Comprehend are managed AI services that provide pre-trained and customizable ML models via an API interface, eliminating the need for machine learning (ML) expertise. Amazon Rekognition Content Moderation automates and streamlines image and video moderation. Amazon Comprehend utilizes ML to analyze text and uncover valuable insights and relationships. The following reference illustrates the creation of a RESTful proxy API for moderating Stable Diffusion text-to-image model-generated images in near-real time. In this solution, we launched and deployed a Stable Diffusion model (v2-1 base) using JumpStart. The solution uses negative prompts and text moderation solutions such as Amazon Comprehend and a rule-based filter to moderate input prompts. It also utilizes Amazon Rekognition to moderate the generated images. The RESTful API will return the generated image and the moderation warnings to the client if unsafe information is detected. The steps in the workflow are as follows:
1. The user sends a prompt to generate an image.
2. An AWS Lambda function coordinates image generation and moderation using Amazon Comprehend, JumpStart, and Amazon Rekognition:
a. Apply a rule-based condition to input prompts in Lambda functions, enforcing content moderation with forbidden word detection.
b. Use the Amazon Comprehend custom classifier to analyze the prompt text for toxicity classification.
c. Send the prompt to the Stable Diffusion model through the SageMaker endpoint, passing both the prompts as user input and negative prompts from a predefined list.
d. Send the image bytes returned from the SageMaker endpoint to the Amazon Rekognition DetectModerationLabels API for image moderation.
e. Construct a response message that includes image bytes and warnings if the previous steps detected any inappropriate information in the prompt or generated image.
3. Send the response back to the client.
The following screenshot shows a sample app built using the described architecture. The web UI sends user input prompts to the RESTful proxy API and displays the image and any moderation warnings received in the response. The demo app blurs the actual generated image if it contains unsafe content. We tested the app with the sample prompt “A sexy lady.” You can implement more sophisticated logic for a better user experience, such as rejecting the request if the prompts contain unsafe information. Additionally, you could have a retry policy to regenerate the image if the prompt is safe, but the output is unsafe.
Predefine a list of negative prompts
Stable Diffusion supports negative prompts, which let you specify prompts to avoid during image generation. Creating a predefined list of negative prompts is a practical and proactive approach to prevent the model from producing unsafe images. By including prompts like “naked,” “sexy,” and “nudity,” which are known to lead to inappropriate or offensive images, the model can recognize and avoid them, reducing the risk of generating unsafe content. The implementation can be managed in the Lambda function when calling the SageMaker endpoint to run inference of the Stable Diffusion model, passing both the prompts from user input and the negative prompts from a predefined list, as in the sketch below. Although this approach is effective, it could impact the results generated by the Stable Diffusion model and limit its functionality. It’s important to consider it as one of the moderation techniques, combined with other approaches such as text and image moderation using Amazon Comprehend and Amazon Rekognition.
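The following is a minimal sketch of that Lambda-side call, assuming a hypothetical endpoint name; the JSON keys shown (prompt and negative_prompt) follow common JumpStart Stable Diffusion examples, but the exact request schema and Accept header vary by model version, so treat them as assumptions rather than the post's verified payload. The InvokeEndpoint call itself is the standard SageMaker runtime API.

import boto3
import json

sagemaker_runtime = boto3.client("sagemaker-runtime")

# Hypothetical endpoint name for the deployed JumpStart model
ENDPOINT_NAME = "jumpstart-stable-diffusion-v2-1-base"

# Predefined negative prompts combined with the user's input prompt
NEGATIVE_PROMPTS = ["naked", "sexy", "nudity"]
payload = {
    "prompt": "A dog in a suit",
    "negative_prompt": ", ".join(NEGATIVE_PROMPTS),
}

response = sagemaker_runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="application/json",
    Accept="application/json",
    Body=json.dumps(payload),
)
result = json.loads(response["Body"].read())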
Moderate input prompts
A common approach to text moderation is to use a rule-based keyword lookup method to identify whether the input text contains any forbidden words or phrases from a predefined list. This method is relatively easy to implement, with minimal performance impact and lower costs. However, the major drawback of this approach is that it’s limited to detecting only words included in the predefined list and can’t detect new or modified variations of forbidden words not included in the list. Users can also attempt to bypass the rules by using alternative spellings or special characters to replace letters. To address the limitations of rule-based text moderation, many solutions have adopted a hybrid approach that combines rule-based keyword lookup with ML-based toxicity detection. The combination of both approaches allows for a more comprehensive and effective text moderation solution, capable of detecting a wider range of inappropriate content and improving the accuracy of moderation outcomes. In this solution, we use an Amazon Comprehend custom classifier to train a toxicity detection model, which we use to detect potentially harmful content in input prompts in cases where no explicit forbidden words are detected. With the power of machine learning, we can teach the model to recognize patterns in text that may indicate toxicity, even when such patterns aren’t easily detectable by a rule-based approach. With Amazon Comprehend as a managed AI service, training and inference are simplified. You can easily train and deploy Amazon Comprehend custom classification with just two steps. Check out our workshop lab for more information about the toxicity detection model using an Amazon Comprehend custom classifier. The lab provides a step-by-step guide to creating and integrating a custom toxicity classifier into your application. The following diagram illustrates this solution architecture. This sample classifier uses a social media training dataset and performs binary classification. However, if you have more specific requirements for your text moderation needs, consider using a more tailored dataset to train your Amazon Comprehend custom classifier.
Moderate output images
Although moderating input text prompts is important, it doesn’t guarantee that all images generated by the Stable Diffusion model will be safe for the intended audience, because the model’s outputs can contain a certain level of randomness. Therefore, it’s equally important to moderate the images generated by the Stable Diffusion model. In this solution, we utilize Amazon Rekognition Content Moderation, which employs pre-trained ML models, to detect inappropriate content in images and videos. In this solution, we use the Amazon Rekognition DetectModerationLabels API to moderate images generated by the Stable Diffusion model in near-real time. Amazon Rekognition Content Moderation provides pre-trained APIs to analyze a wide range of inappropriate or offensive content, such as violence, nudity, hate symbols, and more. For a comprehensive list of Amazon Rekognition Content Moderation taxonomies, refer to Moderating content. The following code demonstrates how to call the Amazon Rekognition DetectModerationLabels API to moderate images within a Lambda function using the Python Boto3 library. This function takes the image bytes returned from SageMaker and sends them to the Image Moderation API for moderation.

import boto3
import base64

# Initialize the Amazon Rekognition client object
rekognition = boto3.client('rekognition')

# img_bytes is the base64-encoded image returned from the SageMaker endpoint
# Call the Rekognition Image moderation API and store the results
response = rekognition.detect_moderation_labels(
    Image={
        'Bytes': base64.b64decode(img_bytes)
    }
)

# Print out the API response
print(response)

For additional examples of the Amazon Rekognition Image Moderation API, refer to our Content Moderation Image Lab.
Effective image moderation techniques for fine-tuning models
Fine-tuning is a common technique used to adapt pre-trained models to specific tasks. In the case of Stable Diffusion, fine-tuning can be used to generate images that incorporate specific objects, styles, and characters. Content moderation is critical when training a Stable Diffusion model to prevent the creation of inappropriate or offensive images. This involves carefully reviewing and filtering out any data that could lead to the generation of such images. By doing so, the model learns from a more diverse and representative range of data points, improving its accuracy and preventing the propagation of harmful content. JumpStart makes fine-tuning the Stable Diffusion model easy by providing the transfer learning scripts using the DreamBooth method. You just need to prepare your training data, define the hyperparameters, and start the training job.
For more details, refer to Fine-tune text-to-image Stable Diffusion models with Amazon SageMaker JumpStart. The dataset for fine-tuning needs to be a single Amazon Simple Storage Service (Amazon S3) directory including your images and an instance configuration file, dataset_info.json, as shown in the following layout. The JSON file associates the images with the instance prompt like this: {'instance_prompt':<>}.

```
input_directory
|---instance_image_1.png
|---instance_image_2.png
|---instance_image_3.png
|---instance_image_4.png
|---instance_image_5.png
|---dataset_info.json
```

You can manually review and filter the images, but this can be time-consuming and even impractical at scale across many projects and teams. In such cases, you can automate a batch process to centrally check all the images against the Amazon Rekognition DetectModerationLabels API and automatically flag or remove images so they don’t contaminate your training (see the sketch at the end of this section).

Moderation latency and cost

In this solution, a sequential pattern is used to moderate text and images. A rule-based function and Amazon Comprehend are called for text moderation, and Amazon Rekognition is used for image moderation, both before and after invoking Stable Diffusion. Although this approach effectively moderates input prompts and output images, it may increase the overall cost and latency of the solution, which is something to consider.

Latency

Both Amazon Rekognition and Amazon Comprehend offer managed APIs that are highly available and have built-in scalability. Despite potential latency variations due to input size and network speed, the APIs used in this solution from both services offer near-real-time inference. Amazon Comprehend custom classifier endpoints can respond in less than 200 milliseconds for input text sizes of less than 100 characters, while the Amazon Rekognition Image Moderation API takes approximately 500 milliseconds for average file sizes of less than 1 MB. (These results are based on tests conducted using the sample application, which qualifies as a near-real-time requirement.) In total, the moderation API calls to Amazon Rekognition and Amazon Comprehend add up to roughly 700 milliseconds per request. It’s important to note that the Stable Diffusion request usually takes longer, depending on the complexity of the prompts and the underlying infrastructure capability. In the test account, using an instance type of ml.p3.2xlarge, the average response time for the Stable Diffusion model via a SageMaker endpoint was around 15 seconds. Therefore, the latency introduced by moderation is approximately 5% of the overall response time, a minimal impact on the overall performance of the system.

Cost

The Amazon Rekognition Image Moderation API employs a pay-as-you-go model based on the number of requests. The cost varies depending on the AWS Region used and follows a tiered pricing structure: as the volume of requests increases, the cost per request decreases. For more information, refer to Amazon Rekognition pricing. In this solution, we used an Amazon Comprehend custom classifier deployed as an Amazon Comprehend endpoint to facilitate real-time inference. This implementation incurs both a one-time training cost and ongoing inference costs. For detailed information, refer to Amazon Comprehend Pricing. JumpStart enables you to quickly launch and deploy the Stable Diffusion model as a single package.
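Circling back to the automated dataset-screening step mentioned above, the following is a minimal sketch of a batch job that iterates over the training images under an S3 prefix and returns the keys of any images Amazon Rekognition flags. The bucket name, prefix, and confidence threshold are illustrative assumptions; you could equally delete or quarantine the flagged objects instead of just listing them.

```python
import boto3

s3 = boto3.client("s3")
rekognition = boto3.client("rekognition")

# Placeholder bucket/prefix holding the DreamBooth instance images.
BUCKET = "my-bucket"
PREFIX = "dreambooth-input/"

def find_unsafe_training_images(min_confidence: float = 60.0) -> list:
    """Return the S3 keys of training images that carry moderation labels."""
    flagged = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            if not obj["Key"].lower().endswith((".png", ".jpg", ".jpeg")):
                continue  # skip dataset_info.json and other non-image files
            response = rekognition.detect_moderation_labels(
                Image={"S3Object": {"Bucket": BUCKET, "Name": obj["Key"]}},
                MinConfidence=min_confidence,
            )
            if response["ModerationLabels"]:
                flagged.append(obj["Key"])
    return flagged
```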
Running inference on the Stable Diffusion model will incur costs for the underlying Amazon Elastic Compute Cloud (Amazon EC2) instance as well as inbound and outbound data transfer. For detailed information, refer to Amazon SageMaker Pricing.

Summary

In this post, we provided an overview of a sample solution that showcases how to moderate Stable Diffusion input prompts and output images using Amazon Comprehend and Amazon Rekognition. Additionally, you can define negative prompts in Stable Diffusion to prevent it from generating unsafe content. By implementing multiple moderation layers, the risk of producing unsafe content can be greatly reduced, ensuring a safer and more dependable user experience. Learn more about content moderation on AWS and our content moderation ML use cases, and take the first step towards streamlining your content moderation operations with AWS.

About the Authors

Lana Zhang is a Senior Solutions Architect on the AWS WWSO AI Services team, specializing in AI and ML for content moderation, computer vision, and natural language processing. With her expertise, she is dedicated to promoting AWS AI/ML solutions and assisting customers in transforming their business solutions across diverse industries, including social media, gaming, e-commerce, and advertising and marketing.

James Wu is a Senior AI/ML Specialist Solutions Architect at AWS, helping customers design and build AI/ML solutions. James’s work covers a wide range of ML use cases, with a primary interest in computer vision, deep learning, and scaling ML across the enterprise. Prior to joining AWS, James was an architect, developer, and technology leader for over 10 years, including 6 years in engineering and 4 years in the marketing and advertising industries.

Kevin Carlson is a Principal AI/ML Specialist with a focus on computer vision at AWS, where he leads business development and GTM for Amazon Rekognition. Prior to joining AWS, he led digital transformation globally at the Fortune 500 engineering company AECOM, with a focus on artificial intelligence and machine learning for generative design and infrastructure assessment. He is based in Chicago, where outside of work he enjoys time with his family and is passionate about flying airplanes and coaching youth baseball.

John Rouse is a Senior AI/ML Specialist at AWS, where he leads global business development for AI services focused on content moderation and compliance use cases. Prior to joining AWS, he held senior-level business development and leadership roles with cutting-edge technology companies. John is working to put machine learning in the hands of every developer with the AWS AI/ML stack. Small ideas bring about small impact; John’s goal is to empower customers with big ideas and opportunities that open doors so they can make a major impact with their customers.
Samsung Electronics Improves Demand Forecasting Using Amazon SageMaker Canvas _ Samsung Electronics Case Study _ AWS.txt,"Samsung Electronics Improves Demand Forecasting Using Amazon SageMaker Canvas (2023)

Samsung Electronics is a multinational company based in South Korea that provides customers around the world with access to technology, such as mobile phones, computers, and smart devices.

Benefits: saved time, going from days to hours to generate insights; increased forecasting accuracy; freed the data science team to focus on advanced models; empowered business analysts; increased collaboration between business analysts and data scientists.

Amazon SageMaker Studio is an integrated development environment (IDE) that provides a single web-based visual interface where you can access purpose-built tools to perform all machine learning (ML) development steps, from preparing data to building, training, and deploying your ML models, improving data science team productivity by up to 10x.

If the marketing intelligence group does need assistance with a model, it can collaborate with the data science team using AWS services. Business analysts using Amazon SageMaker Canvas can share the same model with data scientists who use Amazon SageMaker Studio, where data scientists can evaluate model results and parameters. “The data science team is small and has a lot of responsibilities analyzing advanced models,” says Dooyong Lee, manager of marketing intelligence at Samsung Electronics. “It makes sense to have business analysts working with simpler models because we can still collaborate with the data science team if we encounter challenges.”

Solution | Increasing Forecasting Accuracy While Reducing the Time to Receive Results by 1–2 Days

By equipping business analysts with the skills to use Amazon SageMaker Canvas, Samsung saves time for both business analysts and data scientists. The marketing intelligence group meets weekly to analyze future demand for the company’s resources. In the past, it couldn’t determine on its own how a particular factor would impact demand. “Using Amazon SageMaker Canvas, we can quickly see how a factor will affect the model,” says Lee. “Previously, we had to ask our data science team for help and would typically wait for 1–2 days. Now, we can save time by getting the answer using Amazon SageMaker Canvas in 1–2 hours.” The data science team can then focus on working with more advanced models, which is a better use of its expertise.

Outcome | Encouraging Other Teams to Use Amazon SageMaker Canvas for Additional Use Cases

Forecasting PC set demand and shipments is a small portion of the forecasting that Samsung Electronics does as a large, multinational company.
The marketing intelligence group plans to train other members of the team to use Amazon SageMaker Canvas in the future. It is also encouraging other teams to start using the service for additional use cases, such as analyzing mobile, server, and automotive demand. “Using Amazon SageMaker Canvas is simple, and the interface is user friendly,” says Lee. “Even a business analyst like me can analyze data and get insights using ML.”

Amazon SageMaker Canvas expands access to machine learning (ML) by providing business analysts with a visual interface that allows them to generate accurate ML predictions on their own—without requiring any ML experience or having to write a single line of code.

About Samsung Electronics

Based in South Korea, Samsung Electronics is a global company offering people around the world access to technology, such as mobile phones, computers, and smart devices. The Samsung Device Solutions division of the company focuses on the inner workings of electronic devices to provide maximum performance, reliability, and longevity.

Learn how Samsung Electronics in the technology and electronics industry equipped business analysts to forecast demand using Amazon SageMaker Canvas without writing code.

Opportunity | Employing No-Code ML for Demand Forecasting Using Amazon SageMaker Canvas

Within the Samsung Device Solutions division, the Memory Marketing team analyzes memory needs for electronics produced by the multinational company. It previously forecasted memory chip demand based on customer preferences, external research, and simple regression. However, these inputs were sometimes volatile and inaccurate, and they didn’t account for new factors. For example, with new applications and devices on the market and environmental factors like the COVID-19 pandemic impacting business, it became difficult to determine the inflection point by solely looking at previous trends. To overcome these challenges, Samsung Electronics sought a new methodology for demand forecasting. Rather than increasing the workload of its data science team, the company wanted to empower business analysts with no ML or coding experience to inform data-driven decision-making using Amazon SageMaker Canvas, which provides business analysts with a visual interface for generating accurate ML predictions on their own, without writing code.

Samsung Electronics kicked off the project in April 2022. Then, in August 2022, it started training business analysts from the marketing intelligence group, a portion of the Memory Marketing division, through AWS Data Lab, which offers accelerated, joint engineering engagements between customers and AWS technical resources to create tangible deliverables. Five members of the team went through 5 days of training to learn how to use Amazon SageMaker Canvas. By September 2022, the business analysts were using Amazon SageMaker Canvas to analyze data and forecast demand over the next eight quarters for PC shipments.
To forecast demand, business analysts imported data from various sources, including internal data and external data from third-party sources, into Amazon SageMaker Canvas. After importing the data and selecting values to predict forecasting demand, Samsung Electronics could automatically prepare the data, explore it, and quickly build ML models. “All of these steps are done with a click, so business analysts can easily use the tool,” says Lee. After building a demand forecast model using Amazon SageMaker Canvas, Samsung Electronics is seeing highly accurate predictions. “Using Amazon SageMaker Canvas, we can continuously advance the forecast accuracy over time,” says Lee.

Digital devices are everywhere: in homes, offices, and people’s pockets. To keep up with the increasing complexity of digital devices on the market and efficiently meet customer needs, Samsung Electronics needed a better way to predict demand for memory hardware. The company wanted to empower business analysts without coding experience to glean data-driven insights using machine learning (ML), so it sought a solution using Amazon Web Services (AWS). Using features of Amazon SageMaker—fully managed infrastructure, tools, and workflows for building, training, and deploying ML models for any use case—Samsung Electronics enhanced forecasting accuracy while saving time for both its business and data science teams."
Samsung Electronics Uses Amazon Chime SDK to Deliver a More Engaging Television Experience for Millions of Viewers _ Samsung Case Study _ AWS.txt,"By working closely with AWS teams during initial design discussions right through to development, the service benefitted from a short development timeframe. “With AWS, we were able to quickly roll out Live Chatting, the world’s first live television and text chat service,” says Seokjae Oh, platform service part lead of the visual display division at Samsung Electronics. Without collaborating with AWS teams, Samsung would have needed additional internal resources to create its new messaging services.

Taking advantage of this solution, Samsung customers can view chat interfaces on their Samsung smart television, write messages using their remote control or mobile phone, and chat by converting micro voice messages to text via the same devices. Interactive chat functions include emojis and recommended messages based on program genres. Additionally, the live chatting solution can scale automatically to support millions of users by relying on Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration technology. With the Amazon Chime SDK, builders can easily add real-time voice, video, and messaging powered by machine learning into their applications.

Samsung relies on the Amazon Chime SDK to deliver a television live chat solution in months, give viewers a more engaging experience, and scale to support chat services on televisions produced from 2020 to present.
Samsung Electronics Uses Amazon Chime SDK to Deliver a More Engaging Television Experience for Millions of Viewers (2023)

Highlights: quickly designs and deploys a live chatting solution; scales on demand to support millions of viewers; delivers an interactive and engaging experience for television viewers.

“With AWS, we were able to quickly roll out Live Chatting, the world’s first live television and text chat service.”
Seokjae Oh, Leader of the Platform Service, Samsung Electronics

About Samsung Electronics

Samsung Electronics, based in South Korea, is the country’s leading electronics company. Samsung produces consumer devices including televisions, LCD panels, and printers; semiconductors; and communications devices such as smartphones and networking gear. The company consists of nearly 230 subsidiaries across the globe.

Overview

The application increases viewer engagement by giving customers the ability to view chat interfaces on their smart television and write messages using their remote control or mobile phone.

Solution | Creating an Interactive Chatting Solution Using Amazon Chime SDK

The Samsung Visual Display Division used the Amazon Chime SDK, a service that enables embedded real-time communication, to create a new live chatting solution. The Amazon Chime SDK makes it easy for developers to add real-time voice, video, and messaging powered by machine learning into their applications. With the Amazon Chime SDK, as well as AWS developer tools and databases, Samsung built a live television chat service on AWS, with messaging capabilities integrated into live chatting functionality.

Opportunity | Meeting the Demand for Interactive Chat Features

Samsung, based in Korea, is a global electronics company and the world’s largest manufacturer of televisions and smartphones. Samsung produces a range of consumer and industry electronics, including appliances, digital media devices, semiconductors, and memory chips. The company’s mission is to create a better future for consumers by using sustainable products.

Using Amazon ECS to facilitate automatic scalability, the Samsung live chatting solution scales across multiple channels during spikes in demand from larger groups of viewers during major sports events or popular television show finales. “Viewers across South Korea are looking for new ways to engage with their favorite TV programs. With the agility of AWS, we can scale this new Samsung viewing experience to customers countrywide, driving brand loyalty,” says Kee Ho Ham, managing director of AWS Korea.

Outcome | Giving Television Viewers a More Interactive and Engaging Experience

Samsung is currently working to commercialize its new service to target the wider Korean market and continues to work with AWS to improve maintenance and service functions. “We look forward to helping Samsung innovate rapidly to meet evolving demands for new entertainment services,” says Ham.
Samsung has combined messaging and entertainment to give viewers a way to share their thoughts, reactions, and emotions during live television shows. The AWS-powered live chatting technology helps users watching the same program to enter and chat in a single chat room, interacting with each other with messages displayed in a panel on the right side of the television screen. Interactivity across all viewing networks and cross-device connectivity makes text chat easy and accessible to South Korean viewers. “Samsung Electronics is making television more engaging with cloud technology,” said Ham. “Using AWS, Samsung Electronics brings interactive live television chat services to global customers for the first time.”

To address its business requirements, Samsung wanted to use a cloud-based solution for scalability and agility. Because the company had previous experience running workloads on AWS, it selected AWS again due to the strong support it had already received.

Samsung Electronics (Samsung) is the world’s largest television manufacturer. To meet customer demand, the company decided to build on Amazon Web Services (AWS) and used the Amazon Chime SDK to create a new live television chat service. In recent years, Samsung has seen rising demand from consumers for easy-to-use interactive chat features during television shows and movies. Through its own analysis, the company discovered that customers are seeking a sharing experience built into the television to make watching shows more engaging overall. To meet this customer demand, Samsung wanted to develop a new solution that would integrate messaging capabilities into live chatting. The company also wanted to implement a solution quickly to meet the needs of its customers."
Saving 80 on Costs While Improving Reliability and Performance Using Amazon Aurora with Panasonic Avionics _ Panasonic Avionics Case Study _ AWS.txt,"Saving 80% on Costs While Improving Reliability and Performance Using Amazon Aurora with Panasonic Avionics (2023)

Panasonic has delivered over 15,000 in-flight entertainment systems and over 3,400 in-flight connectivity solutions to airlines around the world. Its in-flight entertainment systems capture data about passengers’ activities while onboard an airplane, such as their music and movie preferences. Airlines want this information so that they can make quick decisions based on current data to capture optimal incremental revenue opportunities. Panasonic’s previous on-premises system for collecting this data included a self-managed MySQL database as the backend that had limited flexibility and was difficult to maintain. To provide data to airlines more efficiently, Panasonic sought to improve the scalability, availability, and overall resiliency of its in-flight entertainment applications, reduce the heavy lifting of maintenance work, improve database replication performance, and optimize costs. Pursuing these objectives led the company to migrate more than 10 TB of data to a cloud-based architecture using a suite of AWS services.

About Panasonic Avionics Corporation

Panasonic Avionics Corporation is a supplier of in-flight entertainment and communications systems on commercial airlines. It has delivered over 15,000 in-flight entertainment systems and over 3,400 in-flight connectivity solutions to airlines around the world.
“For the heavy-duty data work we need to do, AWS is definitely the best choice for us,” says Edwin Woolf, cloud development team manager at Panasonic. To modernize its legacy database, Panasonic decided to use Amazon Aurora, a relational database service built for the cloud with full MySQL and PostgreSQL compatibility, as its storage engine. Panasonic used Amazon Aurora MySQL-Compatible Edition for its various data marts to develop a new data lake—a centralized repository that supports data storage at virtually any scale—at its core for archiving. Amazon CloudWatch alarms, the built-in monitoring feature of Aurora, also mean that Panasonic does not have to run third-party monitoring systems. Amazon Aurora provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services.

Panasonic can now provide the data that airlines want while making flight time more enjoyable for travelers. It can collect, analyze, and store data more efficiently at scale and deliver the data to airlines in near real time. This data provides additional insight into content usage patterns and helps Panasonic to improve product offerings and customer experience. Using Aurora Database Cloning to quickly create duplicates of production databases gives Panasonic a way to reduce costs and improve flexibility when working with its databases. Faster and more efficient than physically copying the data, Aurora Database Cloning supports the creation of a new cluster that uses the same Aurora cluster volume and has the same data as the original. To help improve system reliability, Panasonic incorporates machine learning on Amazon SageMaker, which can be used to build, train, and deploy machine learning models for virtually any use case with fully managed infrastructure, tools, and workflows. Using machine learning, Panasonic has started to predict and identify potential failures of aircraft antennae (needed for passengers to connect to the internet).

“Using the Amazon Aurora clusters has had a huge impact not just on cost-effectiveness but on operations as well, because there have been huge improvements in performance and, even more significantly, in reliability—less burden on the development team.”
Jeremy Welch, Cloud Development Data Software Engineer, Panasonic Avionics Corporation

After preparing its on-premises databases for migration, Panasonic used AWS Database Migration Service (AWS DMS), which is used to migrate databases to AWS quickly and securely, to handle the replication of its smaller databases from onsite to the cloud. Using AWS DMS, Panasonic could migrate databases with minimal downtime by keeping the source database fully operational. For larger databases, not wanting to saturate its available AWS Direct Connect bandwidth limit, Panasonic used Percona XtraBackup to back up source databases and transfer them to Amazon Simple Storage Service (Amazon S3)—an object storage service offering industry-leading scalability, data availability, security, and performance—before restoring the databases to target Aurora MySQL clusters.
Teams at Panasonic also use Amazon Athena, an interactive query service that makes it simple to analyze data in Amazon S3 using standard SQL, to run data analytics queries and extract relevant information from the databases. Because Amazon Athena is serverless, there is no infrastructure to manage, reducing system overhead requirements. When staff can quickly query data without having to set up and manage servers or data warehouses, they can focus on value-adding tasks instead.

Opportunity | Using Amazon Aurora to Modernize Data Storage and Management

Panasonic Avionics Corporation (Panasonic) needed to modernize its architecture to keep pace with its day-to-day operations. The commercial airline in-flight entertainment and communications systems supplier wanted to improve the reliability and redundancy of its databases, which were backed by an onsite infrastructure that presented storage and scalability challenges. Looking for a solution to expand its capacity, modernize its infrastructure, and migrate 10 TB of data to the cloud, Panasonic selected Amazon Web Services (AWS). Since migrating, the company can collect, analyze, and store data more efficiently at scale and provide reliable services to its customers to accomplish its primary goal of making flight time as enjoyable as possible for personal and business travelers.

Solution | Cutting Query Time up to 20% Using Amazon Aurora While Saving 80% on Costs

Results: an 80% reduction in costs by migrating to the cloud; an 18–20% improvement in query time; replication lag time reduced from 10–15 seconds to 0.3 seconds using Aurora MySQL; 10+ TB of data migrated; the ability to provide data to airlines in near real time.

Although migrating Panasonic systems to the cloud was complex and involved 10 TB of data, the company could work with the AWS Database Specialist Solutions Architecture team to determine and implement solutions that accomplished Panasonic’s business goals. “It’s been a breath of fresh air to be able to speak to the AWS developers directly. That personal contact is worth a lot,” says Woolf.

Learn how Panasonic Avionics Corporation migrated its database environment to the cloud using AWS. By migrating its databases to a managed cloud-native database service like Aurora, Panasonic has saved an estimated 80 percent on costs over its previous onsite environment. Additionally, replication lag has been reduced significantly. “Using our on-premises system under heavy loads, the databases experienced up to a 10-to-15-second replication delay between writer and reader. The equivalent database running on Aurora MySQL sees at most a 0.3-second delay, meaning that data is available in near real time,” says Jeremy Welch, cloud development data software engineer at Panasonic, who led the migration effort.
Panasonic has also seen an approximately 18–20 percent improvement in query time. Reliable operation and less customer exposure to technical issues are a big plus.

Outcome | Building a Data-Driven Mindset

Moving forward, Panasonic wants to develop a data-driven mindset to support access to data so that internal teams can optimize how they use that data within their respective business units. After the success it has seen by migrating to AWS, the company wants to expand its data lake and provide cataloging as a means for data discovery. “Migrating to AWS has been a huge win,” says Woolf."
Saving time with personalized videos using AWS machine learning _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Saving time with personalized videos using AWS machine learning

by Humphrey Chen and Aaron Sloman | on 28 JAN 2021 | in Amazon Comprehend, Amazon DynamoDB, Amazon OpenSearch Service, Amazon Rekognition, Amazon SageMaker, Artificial Intelligence

CLIPr aspires to help save 1 billion hours of people’s time. We organize video into a first-class, searchable data source that unlocks the content most relevant to your interests using AWS machine learning (ML) services. CLIPr simplifies the extraction of information in videos, saving you hours by eliminating the need to skim through them manually to find the most relevant information. CLIPr provides simple AI-enabled tools to find, interact, and share content across videos, uncovering your buried treasure by converting unstructured information into actionable data and insights.

How CLIPr uses AWS ML services

At CLIPr, we’re leveraging the best of what AWS and the ML stack is offering to delight our customers. At its core, CLIPr uses the latest ML, serverless, and infrastructure as code (IaC) design principles. AWS allows us to consume cloud resources just when we need them, and we can deploy a completely new customer environment in a couple of minutes with just one script. The second benefit is scale. Processing video requires an architecture that can scale vertically and horizontally by running many jobs in parallel. As an early-stage startup, time to market is critical. Building models from the ground up for key CLIPr features like entity extraction, topic extraction, and classification would have taken us a long time to develop and train. We quickly delivered advanced capabilities by using AWS AI services for our applications and workflows. We used Amazon Transcribe to convert audio into searchable transcripts, Amazon Comprehend for text classification and organizing by relevant topics, Amazon Comprehend Medical to extract medical ontologies for a health care customer, and Amazon Rekognition to detect people’s names, faces, and meeting types for our first MVP. We were able to iterate fairly quickly and deliver quick wins that helped us close our pre-seed round with our investors. Since then, we have started to upgrade our workflows and data pipelines to build in-house proprietary ML models, using the data we gathered in our training process. Amazon SageMaker has become an essential part of our solution. It’s a fabric that enables us to provide ML in a serverless model with unlimited scaling. The ease of use and flexibility to use any ML and deep learning framework of choice was an influencing factor. We’re using TensorFlow, Apache MXNet, and SageMaker notebooks.
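To make the managed AI services workflow described above concrete, the following is a minimal sketch (our illustration, not CLIPr's production code) of starting a transcription job with Amazon Transcribe and then extracting entities and key phrases from the finished transcript with Amazon Comprehend. The job name, media URI, and sample transcript text are placeholders.

```python
import boto3

transcribe = boto3.client("transcribe")
comprehend = boto3.client("comprehend")

# Placeholder S3 location for the source video's audio track.
MEDIA_URI = "s3://my-bucket/videos/meeting.mp4"

# Start an asynchronous job that turns the audio into searchable text.
transcribe.start_transcription_job(
    TranscriptionJobName="meeting-transcript-demo",
    Media={"MediaFileUri": MEDIA_URI},
    MediaFormat="mp4",
    LanguageCode="en-US",
)

# In practice you would poll get_transcription_job until it reports
# COMPLETED, then download and parse the transcript JSON. For brevity,
# assume the transcript text ends up in the variable below.
transcript_text = "Welcome to re:Invent. Andy discussed Amazon SageMaker..."

# Extract entities (people, organizations, and so on) and key phrases,
# which can then be indexed as topics and moments.
entities = comprehend.detect_entities(Text=transcript_text, LanguageCode="en")
phrases = comprehend.detect_key_phrases(Text=transcript_text, LanguageCode="en")

print([e["Text"] for e in entities["Entities"]])
print([p["Text"] for p in phrases["KeyPhrases"]])
```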
Because we used open-source frameworks, we were able to attract and onboard data scientists to our team who are familiar with these platforms and quickly scale in a cost-effective way. In just a few months, we integrated our in-house ML algorithms and workflows with SageMaker to improve customer engagement. The following diagram shows our architecture of AWS services.

The more complex user experience is our Trainer UI, which allows human review of data collected via CLIPr’s AI processing engine in a timeline view. Humans can augment the AI-generated data and also fix potential issues. Human oversight helps us ensure accuracy and continuously improve and retrain models with updated predictions. An excellent example of this is speaker identification. We construct spectrographs from samples of the meeting speakers’ voices and video frames, and can identify and correlate the names and faces (if there is a video) of meeting participants. The Trainer UI also includes the ability to inspect the process workflow, and issues are flagged to help our data scientists understand what additional training may be required. A typical example of this is the visual clues that identify when speaker names differ in various meeting platforms.

Using CLIPr to create a personalized re:Invent video

We used CLIPr to process all the AWS re:Invent 2020 keynotes and leadership sessions to create a searchable video collection so you can easily find, interact, and share the moments you care about most across hundreds of re:Invent sessions. CLIPr became generally available in December 2020, and today we launched the ability for customers to upload their own content. The following is an example of a CLIPr-processed video of Andy’s keynote. You get to apply filters to the entire video to match topics that are auto-generated by CLIPr ML algorithms. CLIPr dynamically creates a custom video from the keynote by aggregating the topics and moments that you select. Upon choosing Watch now, you can view your video composed of the topics and moments you selected. In this way, CLIPr is a video enrichment platform. Our commenting and reaction features provide a co-viewing experience where you can see and interact with other users’ reactions and comments, adding collaborative value to the content.

Back in the early days of AWS, low-flying-hawk was a huge contributor to the AWS user forums. The AWS team often sought low-flying-hawk’s thoughts on new features, pricing, and issues we were experiencing. Low-flying-hawk was like having a customer in our meetings without actually being there. Imagine what it would be like to have customers, AWS service owners, and presenters chime in and add context to the re:Invent presentations at scale.

Our customers very much appreciate the Smart Skip feature, where CLIPr gives you the option to skip to the beginning of the next topic of interest. We built a natural language query and search capability so our customers can find moments easily and fast. For instance, you can search “SageMaker” in CLIPr search. We do a deep search across our entire media assets, ranging from keywords, video transcripts, topics, and moments, to present instant results. In a similar search (see the following screenshot), CLIPr highlights Andy’s keynote sessions, and also includes specific moments when SageMaker is mentioned in Swami Sivasubramanian and Matt Wood’s sessions.
CLIPr also enables advanced analytics capabilities using knowledge graphs, allowing you to understand the most important moments, including correlations across your entire video assets. The following is an example of the knowledge graph correlations from all the re:Invent 2020 videos filtered by topics, speakers, or specific organizations. We provide a content library of re:Invent sessions, with all the keynotes and leadership sessions, to save you time and make the most out of re:Invent. Try CLIPr in action with re:Invent videos, and see how CLIPr uses AWS to make it all happen.

Conclusion

Create an account at www.clipr.ai and create a personalized view of re:Invent content. You can also upload your own videos, so you can spend more time building and less time watching!

About the Authors

Humphrey Chen’s experience spans from product management at AWS and Microsoft to advisory roles with Noom, Dialpad, and GrayMeta. At AWS, he was Head of Product and then Key Initiatives for Amazon’s Computer Vision. Humphrey knows how to take an idea and make it real. His first startup was the equivalent of Shazam for FM radio and launched in 20 cities with AT&T and Sprint in 1999. Humphrey holds a Bachelor of Science degree from MIT and an MBA from Harvard.

Aaron Sloman is a Microsoft alum who launched several startups before joining CLIPr, with ventures including Nimble Software Systems, Inc., CrossFit Chalk, and speakTECH. Aaron was recently the architect and CTO for OWNZONES, a media supply chain and collaboration company, using advanced cloud and AI technologies for video processing."
Scaling Authentic Educational Games Using Amazon GameLift with Immersed Games _ Case Study _ AWS.txt,"By leaving the infrastructure to AWS, Immersed Games freed up time for higher-value work. Amazon GameLift automatically manages the scaling up of a collection of Amazon EC2 servers on the backend to take on player loads. “As a small team, using AWS means we don’t have to deal with all of the knowledge and capability to manage infrastructure servers manually,” says Kyle Trussell, technical director at Immersed Games. When the company still managed infrastructure manually, a developer had to wait in the office on Friday night until every student had signed off before updating the game. That manual effort has been automated along with server management using AWS.

About Immersed Games

Immersed Games is an educational video game studio that builds immersive learning experiences that are standards-aligned. The company’s game, Tyto Online, teaches students scientific problem solving.

Using Amazon GameLift, Immersed Games can scale to accommodate more students than it could when it was manually managing its servers. Now, any time students sign on, Amazon GameLift automatically spins up additional servers as needed, giving the company confidence that it can provide a seamless learning experience to students at any moment. “We’re no longer in panic mode, unsure if we can handle the load of several classes coming online at the same time.
Amazon GameLift is taking care of it all,” says Trussell.

Immersed Games used AWS Business Support, which offers technical support and architectural guidance, to hold an AWS Immersion Day that introduced many of its young developers to cloud fundamentals. In addition to enhanced technical support and architectural guidance, AWS Business Support provides access to third-party software support, documentation and forums, AWS Trusted Advisor, AWS Personal Health Dashboard, AWS Support API, and launch and event planning. Saving money on AWS is now a whole-company effort. “We all want to be aware of what’s happening on AWS and how much money that is saving,” says Tropf. The development team started counting dollars saved as “pizza points,” saving up for pizza parties when it delivers significant cost savings. In the last year, the company has seen a 70 percent decline in technology spending, even as it improves the gaming experience for students and achieves scalability. Most importantly, Immersed Games can offer thousands of students compelling and authentic problem-solving experiences that impart real-world thinking skills.

“Using AWS, we can spend more time on what makes us unique: creating immersive educational games.”
Lindsey Tropf, Founder and Chief Executive Officer, Immersed Games

With its original infrastructure, Tyto Online could host only a maximum of 150 concurrent players, and the team had to manually scale and balance the server loads. The company also struggled to develop an immersive, 3D game that could work in schools, which often were unwilling to install an app and wanted to run the game from a web browser. “It is a massive challenge to build a 3D game that runs in the web browsers of cheap, 4-year-old laptops in schools,” says Tropf. As school districts kept registering new students to play and learn, Immersed Games knew it needed to find a new way to manage, develop, and scale its game. Those challenges led Immersed Games to Amazon GameLift, a managed game server hosting solution, in October 2021. The service not only hosts game servers but also manages load balancing and networking.

As the company expands to new school districts, Immersed Games is also looking at implementing Amazon Cognito—a tool offering secure and frictionless customer identity and access management that scales—to meet security standards. With Amazon Cognito, you can add user sign-up and sign-in features and control access to your web and mobile applications. “We don’t want to spend all our time rebuilding the wheel with the same services,” says Tropf. “Using AWS, we can spend more time on what makes us unique: creating immersive educational games.”

Immersed Games, an education technology (EdTech) startup, needed scalable infrastructure to host its science education game, Tyto Online. The company faced the added challenge of running games seamlessly in schools that have limited equipment and strict protocols for internet access. After experimenting with many solutions, Immersed Games faced high and uncertain hosting costs, making it difficult for the company to build engaging games and constraining the scale of its operations.
Scaling Authentic Educational Games Using Amazon GameLift with Immersed Games (2022)

Results: 70% reduction in overall tech costs; reduced labor hours; improved game experience; achieved scalability.

Overview

Amazon GameLift is a dedicated game server hosting solution that deploys, operates, and scales cloud servers for multiplayer games. Learn how Immersed Games in EdTech delivered 70 percent cost savings using Amazon GameLift.

In 2019, Immersed Games chose to offload its infrastructure to Amazon Web Services (AWS) to help it develop and scale games effectively. “Using AWS means that we can spend more time on the things that are important to us: designing an amazing educational experience and focusing on students and teachers,” says Lindsey Tropf, founder and chief executive officer of Immersed Games. Now using AWS, Immersed Games can scale simply, reduce costs, and free developers to focus on developing features for the game instead of managing infrastructure.

Opportunity | Using Amazon GameLift to Scale Tyto Online for Immersed Games

Immersed Games is an educational video game studio headquartered in Buffalo, New York. The idea for the company came when Tropf was working on a PhD in education and saw the parallels between learning theory and the kind of authentic problem-solving scenarios that happen in gaming. Immersed Games launched in 2015, but with funding sparse, the company built its cloud infrastructure using free credits from a variety of hosting providers. It eventually settled on AWS because of the support the cloud provider offers EdTech companies. “The fact that I had dedicated AWS contacts who understood the education market meant a lot, especially because I couldn’t get a hold of a real person at the companies we used previously,” says Tropf. The company used Amazon Elastic Compute Cloud (Amazon EC2), a cloud solution offering secure and resizable compute capacity for virtually any workload, to host the game servers.

Outcome | Starting a Wave of Educational Game Development Using AWS

Seeing the success that it has had so far, Immersed Games has high ambitions. “Our goal is to let other people start building games on our solution in the future,” says Tropf. “We want companies to make other equally compelling game-based learning content.” By proving that it is possible to deliver an immersive educational gaming experience in a web browser, Immersed Games hopes to spearhead a new wave of innovation."
Scaling Data Pipeline from One to Five Satellites Seamlessly on AWS _ Axelspace Case Study _ AWS.txt,"Axelspace began building its custom, scalable data pipeline in 2019, with the intention of using fully managed services to automate as many steps in its process as possible and alleviate the operational burden on its development team. In general, the pipeline works as an intermediary between the satellites and AxelGlobe. First, the company downlinks data from its satellites. Then, the data proceeds through a series of modules, which represent different processing steps.
For storing processing metadata and capture information, Axelspace adopted Amazon Relational Database Service (Amazon RDS), which makes it simple to set up, operate, and scale a relational database in the cloud. As the company continued to grow, it looked to AWS for solutions that would facilitate innovation within its data-processing pipeline and free up time for its team of developers to focus on testing new algorithms. Axelspace was also searching for a cost-effective solution that would help it deliver data to its customers at the lowest possible cost. “One of our key differentiators is affordability,” says Jay Pena, senior product manager at Axelspace. “It’s our goal to provide satellite imagery to everyone.”

Throughout this project, Axelspace’s global team accessed multilingual documentation on AWS for technical support and cloud best practices. Using its custom-built data pipeline, the company can deliver data to its customers in under 5 hours. This speed is especially crucial in emergency cases, such as satellite imagery of natural disasters. These innovations have also given Axelspace’s development teams the ability to focus on improving the overall quality of its satellite imagery and operations. For instance, Axelspace has deployed additional custom tasking features that give its customers the ability to choose the capture frequency and term of any given satellite.

Axelspace Scales Data Pipeline from One to Five Satellites Seamlessly on AWS (2022)

“We love the fully managed solutions on AWS. They help our teams focus on algorithm development instead of infrastructure maintenance.”
Amber Fechko, Cloud Engineering Unit Leader, Axelspace

Axelspace specializes in manufacturing both satellite hardware and compatible software, such as AxelGlobe, a subscription-based platform that gives customers the ability to access satellite imagery from anywhere. Since the launch of its first GRUS microsatellite in 2018, the company has rapidly expanded its fleet of remote sensing satellites to five, which it uses to capture Earth-observation data. Its customers can use this data across a wide variety of different applications, including land monitoring, disaster prevention, city planning, and more.

While designing its custom scaling system, Axelspace also wanted to provide an environment for monitoring that would remain secure, so the company implemented Amazon CloudWatch, which provides companies with observability of their AWS resources and applications on AWS and on premises. Using Amazon CloudWatch, Axelspace receives near-immediate notifications of system anomalies through internal notification channels. “We can better sleep at night using AWS services, knowing that our data is in a controlled environment,” says Pena. Axelspace also focused on increasing its cost savings by innovating its use of Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve any amount of data from virtually anywhere.
Instead of storing its data in one Amazon S3 class, the company cycles its intermediary data for either removal or migration into lower Amazon S3 classes, helping it save tens of thousands of dollars on storage costs.

About Axelspace

Axelspace manufactures both satellite hardware and compatible software. The company has produced nine microsatellites, including five GRUS satellites, and it provides an Earth-observation platform, AxelGlobe, and a one-stop service for microsatellite missions, AxelLiner.

Solution | Building a Custom, Scalable Data Pipeline on AWS

Axelspace uses AWS Lambda to kick-start the processing and determine which AWS compute service is appropriate for the job. “Our workloads are variable but predictable,” says Fechko. “By building a custom scaling system, we can provision our resources on demand according to the processing requirements of our individual modules.” Depending on the size and type of module, Axelspace uses either Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload; Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service that makes it easy for companies to deploy, manage, and scale containerized applications; or AWS Fargate, a serverless compute service for containers. With its custom-built data pipeline in place, Axelspace can process data in a virtually unlimited number of modules simultaneously. “It doesn’t matter if we have 10 captures processing or 100,” says Fechko. “We’ve been able to scale from one satellite to five seamlessly.”

Results: under 5 hours to deliver data to customers; one to five satellites, seamlessly scaled data pipelines; a virtually unlimited number of modules simultaneously processing data; deploys resources according to demand; saves on storage costs by lifecycling data.

Opportunity | Expanding Its Fleet of Satellites

Space technology company Axelspace has made satellite imagery and data more accessible for its global customer base by using microsatellites. Because the company handles both the manufacturing and operation of these satellites, along with the processing and analysis of satellite data, it needed a robust compute infrastructure that could dynamically scale to support all its operations, especially as it began sending more microsatellites into space.

From the beginning, Axelspace chose Amazon Web Services (AWS) as the cloud service provider for its custom, event-based scaling system using a combination of AWS services, including AWS Lambda, which gives companies the ability to run code without thinking about servers or clusters.
By automating the provisioning of its infrastructure based on workloads, Axelspace instantly scaled its data-processing operations to support additional data from four new satellites while optimizing compute costs and running increasingly complex algorithms on its satellite data.

To process data from its satellites, Axelspace has built a data-processing pipeline on AWS, which runs advanced algorithms that produce clear, accurate images for its customers. Each satellite capture produces tens of gigabytes of data. As the company launched more satellites into space and increased its capture frequency, the demand on its data-processing pipeline increased tenfold. “Our data-processing pipeline is our heaviest usage of AWS,” says Fechko.

Outcome | Scaling Its Global Operations

Because Axelspace has built a scalable, event-based infrastructure, it’s now undertaking an expansion of its global operations. With a well-established customer base in Japan, the company is looking at building its portfolio overseas. Axelspace is also exploring the possibility of increasing the resiliency of its processing operations by deploying across multiple AWS Regions. “I have nothing but wonderful things to say about the AWS team,” says Fechko. “AWS is an incredible asset to us at Axelspace.”"
Scaling Sustainability Solutions for Buildings Using AWS with BrainBox AI _ Case Study _ AWS.txt,"Scaling Sustainability Solutions for Buildings Using AWS with BrainBox AI (2023)

BrainBox AI’s autonomous decarbonization technology connects to existing building management systems or cloud-connected thermostats, gathers data, and uses ML to determine optimal settings for the heating, ventilation, and air conditioning (HVAC) systems of the building. “It adds a brain to a building so that it can act preemptively rather than reactively,” says Rebecca Handfield, vice president of marketing and public relations at BrainBox AI.

When it launched in May 2019, the company had 12 staff members and managed 15 buildings. As it grew, it needed more flexibility and began using AWS in 2020. Using AWS, BrainBox AI could expand to new regions and quickly onboard new buildings to keep up with the demand for sustainable solutions. Now, in 2023, BrainBox AI has over 150 people and manages hundreds of buildings worldwide 24/7.

About BrainBox AI

Headquartered in Montreal, BrainBox AI is a decarbonization technology company that provides cloud-based AI/ML solutions to decrease the emissions and improve the energy efficiency of buildings in over 20 countries.
Solution | Reducing Carbon Emissions by Up to 40% Using ML
When BrainBox AI installs its solution in a new building, it must train a new ML model to control the building’s HVAC systems. The models are trained for 2–3 months using internal and external data streams, such as equipment data, utility patterns, and weather patterns. After installation, BrainBox AI models determine the optimal settings for running the building’s HVAC systems—the component that often consumes the most energy in a building—and control the system by adjusting boilers, pumps, fans, and other physical equipment. The ML models reassess the data every 5 minutes to optimize for comfort, cost, and energy efficiency.

Using AWS, BrainBox AI can replicate and redeploy its solution to new regions rapidly. It can also keep latency under 500 ms by using AWS servers that are located closer geographically to the buildings where it is expanding. Using services such as Amazon Elastic Compute Cloud (Amazon EC2)—which offers secure and resizable compute capacity—and Amazon Relational Database Service (Amazon RDS)—services to set up, operate, and scale databases in the cloud—BrainBox AI can scale flexibly. “All the tools, all of the monitoring, observability, and autoscaling capacity is already there on AWS,” says Jean-Simon Venne, chief technology officer and cofounder at BrainBox AI.

Outcome | Spreading Solutions to Reduce Carbon Impact
Using BrainBox AI, building owners reduce HVAC energy costs by up to 25 percent and reduce HVAC-related greenhouse gas emissions by up to 40 percent. The solution has been implemented in 20 countries, and by the end of 2022, BrainBox AI was onboarding 20 new buildings per week. Using AWS, the company hopes to increase its capacity to onboard up to 1,000 new buildings per week. BrainBox AI wants to accelerate emissions reductions to make a lasting, tangible impact on climate change for future generations. Multisite retailers and other commercial building owners are showing interest in using the solution to manage rising energy costs and comply with environmental legislation. Using AWS, BrainBox AI can scale to meet the demand. “We could never reproduce that scalability on our own,” says Venne. “AWS is part of our secret recipe.”

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) provides secure and resizable compute capacity for virtually any workload.
Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
" Scaling Text to Image to 100 Million Users Quickly Using Amazon SageMaker _ Canva Case Study _ AWS.txt,"
Canva Scales Text to Image to 100 Million Users Quickly Using Amazon SageMaker (2022)
Learn how Canva rolled out its image-generating app using Amazon SageMaker and Amazon Rekognition.

About Canva
Founded in 2013, Canva is a free online visual communications and collaboration platform with a mission to empower everyone in the world to design. Highlights: shipped the text-to-image feature to users in under 3 weeks; improved productivity; accelerated innovation in ML for users; added content moderation.

Opportunity | Using Amazon SageMaker to Accelerate Deployment for Canva
Global visual communications platform Canva wanted to use machine learning (ML) to bring an artificial intelligence (AI) image-generation feature to its 100 million monthly active users—and do so quickly. Since its founding in 2013, its goal has been to empower anyone to communicate visually, on any device, from anywhere in the world. Canva is an online platform for creating and editing everything from presentations to social media posts, videos, documents, and even websites. The company aims to democratize content creation so that everyone, from enterprises down to the smallest-scale bloggers, has access to advanced visual communication tools. With the development of programs that use ML and AI to create images based on text input, building a text-to-image function in Canva aligned with the organization’s goal of empowering creativity and making design as simple as possible. “There has been a huge explosion in generated content,” says Glen Pink, director of ML at Canva. “AI-generated images have only recently become more than a toy. It’s become something that can actually be used as part of the creative design process.”

Canva already used ML through Amazon Web Services (AWS) and Amazon SageMaker, a service to build, train, and deploy ML models for virtually any use case with fully managed infrastructure, tools, and workflows. The company wanted to introduce a feature that would let users enter a text prompt and get an AI-generated image, but doing so on its own would take at least 6 months of dedicated engineering work and a huge number of GPUs. By using Amazon SageMaker Real-Time Inference functionality, Canva could bring the new feature to users in less than 3 weeks.
Solution | Rapidly Bringing New Features to Users Using Amazon SageMaker
When an engineer at Canva built a text-to-image demo based on Stable Diffusion—an open-source, deep learning text-to-image ML model released in 2022—the company invested in integrating it with Canva. Pink’s first step in creating this tool was to turn to AWS, because Canva has been using services from AWS for nearly its entire existence. “It would have probably taken 6 months to implement on our own,” Pink says. “I wouldn’t even know how to approach the scaling from the hardware perspective.” Indeed, it would have been impossible for Canva to set up enough GPUs to make its text-to-image function a reality in time to meet business needs.

It wasn’t only speed to market that was a concern for Canva but, more importantly, user trust and safety. The advent of AI-generated art has brought about new ways for users to create problematic content. In some cases, these AIs might even create offensive images on their own. Manually moderating each image would have required Canva to hire hundreds of moderators working around the clock. Instead, it turned to Amazon Rekognition, which offers pretrained and customizable computer vision capabilities to extract information and insights from images and videos. “Amazon Rekognition was really useful,” says Pink. “We’re not allowing users to enter prompts that could potentially generate malicious content, and we are using Amazon Rekognition to identify not-safe-for-work images that the model generates.” If a user enters an offensive image prompt, Canva simply returns no results to the user. There is also an option for users to report generated images they deem offensive.

Canva sets its image-creation sequence up so that after a user enters a text prompt, it uses an Amazon SageMaker Real-Time Inference endpoint to generate an image. When the images are generated, the system filters them through the Amazon Rekognition model. At the end of the pipeline, Canva displays a selection of images to the end user. With this cutting-edge text-to-image technology, users can create unique, high-quality images in seconds rather than in hours or days. A simplified sketch of that sequence follows.
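Canva's production pipeline is not published; the sketch below only illustrates the two-step sequence described above, using boto3. The endpoint name and the response shape of the text-to-image model are assumptions.

import base64
import json
import boto3

smr = boto3.client("sagemaker-runtime")
rek = boto3.client("rekognition")

ENDPOINT = "text-to-image-endpoint"  # hypothetical endpoint name

def generate_image(prompt: str):
    """Generate an image for a prompt; return None if moderation flags it."""
    # 1. Call the real-time inference endpoint with the user's text prompt.
    resp = smr.invoke_endpoint(
        EndpointName=ENDPOINT,
        ContentType="application/json",
        Body=json.dumps({"prompt": prompt}),
    )
    payload = json.loads(resp["Body"].read())
    image_bytes = base64.b64decode(payload["image"])  # assumed response shape

    # 2. Screen the generated image with Rekognition's moderation model.
    labels = rek.detect_moderation_labels(Image={"Bytes": image_bytes})
    if labels["ModerationLabels"]:
        return None  # suppress not-safe-for-work output
    return image_bytes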
By using Amazon SageMaker, Canva could ship the new text-to-image feature to users in the space of 3 weeks. “That’s a normal turnaround time for some models,” Pink says, “but this is heavy lifting and cutting edge. Before AWS, Canva couldn’t ship big, modern, cutting-edge models quickly, and now we can.” Canva now uses Amazon SageMaker for over 60 ML models, affecting nearly every stage of image creation in the service. “Getting models into customers’ hands and then building momentum around that is very important. AWS has been absolutely essential for us to do any of this,” says Pink. Canva rolled out this innovative new feature to its users so quickly in large part due to the amount of employee time that the company saves using AWS. Using AWS also reduced costs by saving Canva a costly hardware investment up front. “AWS is a very good option for robust scaling in terms of return on investment because we can deploy effectively and quickly,” says Pink.

Outcome | Scaling Up for Future Growth
With over 100 million monthly active users, Canva is seeking to expand the intelligent services that it offers along with its global user base. The company plans to continue using AWS to build these tools at the scale that it needs to serve its growing Canva for Teams users. Using Amazon SageMaker makes it simple for Canva’s ML engineers to innovate rapidly and shape the future of team collaboration. “This is where AWS is actively involved in delivering the underlying environment to support the really heavy ML models,” Pink says. “Using AWS, the Canva ML environment does very well at scaling to large numbers of users,” he adds.
“We can be confident that whatever we build on top of AWS, it’s going to scale.”

AWS Services Used
Amazon SageMaker is built on Amazon’s two decades of experience developing real-world ML applications, including product recommendations, personalization, intelligent shopping, robotics, and voice-assisted devices.
Amazon Rekognition offers pre-trained and customizable computer vision (CV) capabilities to extract information and insights from your images and videos.
" Scaling to Ingest 250 TB from 1 TB Daily Using Amazon Kinesis Data Streams with LaunchDarkly _ LaunchDarkly Case Study _ AWS.txt,"
Scaling to Ingest 250 TB from 1 TB Daily Using Amazon Kinesis Data Streams with LaunchDarkly (2023)
Learn how LaunchDarkly built a scalable event-processing pipeline with 99.999 percent availability using Amazon Kinesis Data Streams.

About LaunchDarkly
Founded in 2014, LaunchDarkly provides scalable feature flag management software as a service that decouples feature rollout and code deployment, empowering customers’ development teams to manage risk as they safely deliver and control software releases. A feature flag is a kind of toggle that facilitates continuous delivery of software by decoupling feature rollout and deployment, concealing the code pathway. Customers’ software teams deploy new features “darkly”—meaning “off”—and control their releases rather than risk an all-or-nothing launch into production. For example, LaunchDarkly customers can release a feature to a small number of users to track performance, and then gradually increase the rollout. This reduces the risk profile for software teams that don’t need to scramble to repair errors in a widespread feature release. In short, feature flags help LaunchDarkly customers scale safe releases for real users. Highlights: scaled to ingest 250 TB of data and evaluate around 20 trillion feature flags daily; 99.999% availability; 99.999999% data durability; 1–7 days of data retention; doubled data analytics use cases.

Opportunity | Using Amazon Kinesis Data Streams to Optimize Availability for LaunchDarkly
LaunchDarkly provides a feature-management solution for development teams that seek to manage risk as they deploy new software features. The company had already built a scalable compute architecture on Amazon Web Services (AWS), and it needed a data streaming solution to handle proliferating volumes of event data. The solution also needed to provide high availability to critical workloads so that LaunchDarkly customers could better manage risk by minimizing disruption and by quickly identifying threats. The company turned to services from Amazon Kinesis, which makes it simple to collect, process, and analyze near-real-time streaming data so that companies can get timely insights and react quickly to new information. Using Amazon Kinesis services, LaunchDarkly has scaled to ingest 250 TB of data in near real time and evaluate around 20 trillion feature flags daily, double its data analytics use cases, and provide 99.999 percent availability for customers.

To run its servers, LaunchDarkly had been using Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload. It managed incoming requests by optimally routing traffic using Elastic Load Balancing (ELB), which automatically distributes incoming application traffic across one or more Availability Zones. At first, the company was using its servers both to ingest data and to run all its analytics processing, but the strain had begun to cause a rise in workload failures. “That was a solution that worked well when we were a really small company,” says Mike Zorn, software architect at LaunchDarkly. “But as our data volume increased, it showed that this system needed to be more reliable.” The cumulative volumes of data slowed the analytics workloads, and the company needed to scale up its data processing so that it could keep up with demand. With the idea of isolating workloads to optimize availability as the company continued to grow, LaunchDarkly adopted Amazon Kinesis Data Streams, a serverless streaming data service that makes it simple to capture, process, and store data streams at virtually any scale.

Solution | Building Robust Data Streaming Tools to Ingest, Process, and Analyze Data at Scale
Using Kinesis Data Streams, LaunchDarkly collects volumes of granular customer data concerning which users experience specific feature flags and whether certain feature flags are still in use. LaunchDarkly has scaled from ingesting a single terabyte a day to roughly 250 TB a day, while evaluating about 20 trillion flags daily. “Using Amazon Kinesis Data Streams helped us solve how to create a layer of indirect processing that protects our workloads from one another,” Zorn says. “What’s more, it’s helped us to safely reach the level of scale that we’re at now.”

LaunchDarkly streams event-data-processing records in real time into AWS Lambda, a serverless, event-driven compute service that lets companies run code for virtually any type of application or backend service without provisioning or managing servers. LaunchDarkly uses Lambda functions to process and transform data before sending it downstream to Amazon Kinesis Data Firehose, which reliably loads near-real-time streams into data lakes, warehouses, and analytics services.
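LaunchDarkly's transform code is not shown in the case study; the sketch below only illustrates the general stream-to-Lambda-to-Firehose pattern it describes, under stated assumptions. A Lambda handler decodes a batch of Kinesis records, trims each event to the fields analytics needs, and forwards the batch to a Kinesis Data Firehose delivery stream. The field names and stream name are hypothetical.

import base64
import json
import boto3

firehose = boto3.client("firehose")
STREAM = "event-export-stream"  # hypothetical delivery stream name

def handler(event, context):
    """Transform a batch of Kinesis records and forward them downstream."""
    out = []
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded inside the Lambda event.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Illustrative transform: keep only the fields analytics needs.
        slim = {
            "flagKey": payload.get("flagKey"),
            "context": payload.get("context"),
            "timestamp": payload.get("timestamp"),
        }
        out.append({"Data": (json.dumps(slim) + "\n").encode()})
    if out:
        # Note: put_record_batch accepts at most 500 records per call,
        # so a production handler would chunk larger batches.
        firehose.put_record_batch(DeliveryStreamName=STREAM, Records=out)
    return {"records": len(out)}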
LaunchDarkly has doubled its data analytics use cases using Amazon Kinesis Data Analytics, which lets companies interactively query and analyze data in real time and continuously produce insights for time-sensitive use cases. For example, customers can evaluate flags not just by user but also by context, a generalized way to refer to the people, services, machines, or other resources that encounter feature flags. Analytics workloads no longer fail due to a large influx of data, helping LaunchDarkly to scale to safely accommodate an increasing number of customer experiments. Instead of conventional processing methods that update data every 30 minutes, LaunchDarkly’s solution helps customers to analyze the effect of new feature releases in just a few minutes. “Using Amazon Kinesis Data Analytics, we have much more flexibility and can optimize our customers’ experiences,” Zorn says. For example, LaunchDarkly uses Kinesis Data Analytics to filter noise from user data and streamline pertinent information for customers. “We are able to realize the full value of our data,” says Zorn. “We don’t need to compromise analyses due to data volume issues.”

Since adopting Kinesis Data Streams, LaunchDarkly has solidified the reliability of the events API it provides to customers, with five nines of availability and eight nines of data durability. “If we still had our previous architecture, we’d probably have around 1 or 2 percent availability,” Zorn says. “The availability of our events API has been rock solid since we adopted Amazon Kinesis Data Streams.”

LaunchDarkly creates an additional layer of safety by using the configurable retention window of Kinesis Data Streams, which lets a company store data for 1–7 days. If a software misconfiguration or bug causes data to be processed incorrectly, LaunchDarkly engineers can use the added layer of safety to simply reingest historical data for customers. “That’s something I didn’t fully anticipate or appreciate when we first adopted Amazon Kinesis,” says Zorn. “It’s super simple to do, and it makes our customers very, very happy.”
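Extending the retention window is a single API call. A minimal sketch, assuming a stream named "events" (the real stream names are not given in the case study):

import boto3

kinesis = boto3.client("kinesis")

# Raise the stream's retention window from the 24-hour default to 7 days
# (168 hours), the upper bound of the 1-7 day range discussed above.
kinesis.increase_stream_retention_period(
    StreamName="events",          # hypothetical stream name
    RetentionPeriodHours=168,
)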
Outcome | Continuing to Support Customer Experimentation While Managing Risk
LaunchDarkly is using Kinesis Data Analytics to continue to enhance the functionality that its feature flags offer to customers. To process the ever-growing data volume, LaunchDarkly continues to use Kinesis services and other AWS services to enhance the reliability of the API it provides to customers, protecting customers from data loss and optimizing their ability to test new features. “It would have made it really hard to introduce an experimentation product that people would have any faith in if we were dropping data all the time,” Zorn says. “Using Amazon Kinesis Data Streams has removed the risk from our data system’s growth to a pretty large extent.”

[Architecture diagrams omitted: “Before Kinesis,” “With Kinesis,” and “Kinesis & KDA.”]

AWS Services Used
Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store data streams at any scale.
AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
Amazon Kinesis Data Firehose is an extract, transform, and load (ETL) service that reliably captures, transforms, and delivers streaming data to data lakes, data stores, and analytics services.
Amazon Kinesis Data Analytics is the easiest way to transform and analyze streaming data in real time using Apache Flink.
" Scaling Up to 30 While Reducing Costs by 20 Using AWS Graviton3 Processors with Instructure _ Case Study _ AWS.txt,"
Scaling Up to 30% While Reducing Costs by 20% Using AWS Graviton3 Processors with Instructure (2023)
Learn how education technology company Instructure improved throughput by up to 30 percent using AWS Graviton–based Amazon EC2 instances.

About Instructure
Established in 2008, Instructure, the maker of Canvas LMS, is a US-based education technology company with global operations. The Instructure Learning Platform includes learning solutions for higher education and K–12 schools to elevate student success, amplify the power of teaching, and inspire everyone to learn together. The company offers various digital tools for collaborating through videoconferencing and online discussions. Students can manage their calendars, read course content, and submit assignments; teachers can grade the work on the same platform and submit feedback. Highlights: up to 30% improvement in throughput performance; 15–20% increase in cost savings; response time reduced from 1.5 seconds to 500 ms in load testing; reduced error rates.

Opportunity | Adopting a Scalable Solution with Better Performance
Because much of education moved to online learning in 2020, Instructure adjusted its compute spend to scale its business efficiently, boosting performance and streamlining the online learning experience for millions of schools. Instructure faced a spike in user traffic due to the quick and sudden spread of the COVID-19 pandemic and had to invest significant time and resources to scale to meet learners’ and institutions’ online learning needs. “Our business is highly dynamic in its scaling requirements,” says Zach Pendleton, chief architect at Instructure. “We scale down to almost nothing on a weekend, and then during an exam period or the beginning of a semester, we have dramatic jumps in load.” To curb costs as it scaled, Instructure investigated ways to approach its compute needs efficiently without compromising performance.

Instructure is a cloud-native company, having chosen Amazon Web Services (AWS) for its reliability, global reach, and sustainability, says Pendleton. “We saw the value of the cloud from the beginning and moved in that direction.” Instructure runs on Amazon Elastic Compute Cloud (Amazon EC2) instances, which provide secure and elastic compute capacity for virtually any workload. When online learning increased during the COVID-19 pandemic, Instructure began to explore using AWS Graviton–based Amazon EC2 instances, powered by custom-built AWS Graviton processors, to deliver high performance at a lower price for cloud workloads.
Solution | Reducing Costs by Up to 20 Percent and Increasing Performance by Up to 30 Percent by Migrating to AWS Graviton Processors
Instructure first migrated its compute-intensive workloads to AWS Graviton2 processor–based Amazon EC2 C6g Instances, which optimize for both higher performance and lower cost per vCPU. The migration from Amazon EC2 C5 Instances was seamless: the primary programming languages used by Instructure, Ruby and Java, support Arm-based instances, so no source code changes were required. When AWS launched AWS Graviton3 processors in 2022, Instructure performed load tests on Amazon EC2 C7g Instances, which are based on AWS Graviton3 processors and offer up to 25 percent better performance over the sixth-generation Amazon EC2 C6g Instances. The load tests assessed the new instances’ cost and performance benefits, and the results compelled the company to migrate to AWS Graviton3–based instances.

After migrating to AWS Graviton3 processors, Instructure saw a 30 percent boost in throughput performance and improved load performance running on Amazon EC2 C7g Instances over Amazon EC2 instances not based on AWS Graviton3 processors. “Migrating to AWS Graviton3 processors has helped us save costs on scaling while empowering us to offer our users a smoother and faster experience,” says Pendleton. The company achieved up to 20 percent better performance from its application servers while running fewer instances at peak times. The organization also observed that the Amazon EC2 C7g Instances were delivering better results against their cost, which was reduced by 15–20 percent. “These cost savings mean that we can invest in more novel, interesting solutions, like new data services and machine learning. Our engineers can also spend less time doing mundane tasks and more time innovating to benefit customers,” says Pendleton.

Instructure could also manage more requests while reducing its response times from 1.5 seconds to 500 ms using the Amazon EC2 C7g Instance clusters. As a result, millions of concurrent users can complete tasks with less interruption. “We’re able to take that in-person student-teacher experience and either extend it or, where needed, replace it,” says Pendleton. Instructure uses AWS Graviton processors to scale its solution together with Amazon EC2 Auto Scaling, which makes it possible to add or remove compute capacity dynamically to meet changing demand.
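The case study does not show Instructure's configuration, but moving an Auto Scaling group to Graviton3 is mostly a matter of pointing its launch template at a c7g instance type and an arm64 AMI. A minimal, hypothetical sketch; all names, IDs, and capacity bounds are placeholders.

import boto3

ec2 = boto3.client("ec2")
asg = boto3.client("autoscaling")

# Launch template pointing at a Graviton3 (c7g) instance type. The AMI must
# be an arm64 build of the application image.
ec2.create_launch_template(
    LaunchTemplateName="app-c7g",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",  # arm64 AMI placeholder
        "InstanceType": "c7g.2xlarge",
    },
)

# Auto Scaling group that can grow during exam-period peaks and shrink on
# weekends; the capacity bounds here are illustrative.
asg.create_auto_scaling_group(
    AutoScalingGroupName="app-c7g-asg",
    LaunchTemplate={"LaunchTemplateName": "app-c7g", "Version": "$Latest"},
    MinSize=2,
    MaxSize=100,
    VPCZoneIdentifier="subnet-0123456789abcdef0",  # placeholder subnet
)

Because Ruby and Java run unmodified on Arm, swapping the instance type and AMI in the launch template is typically the only infrastructure change such a migration needs.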
Overall, Instructure observed up to 30 percent improved performance by migrating to AWS Graviton–based instances. “We saw better 99th percentile performance during load testing of the Amazon EC2 C7g instances, which led to lower error rates. That kind of consistency and reliability is meaningful to us and our customers,” says Pendleton.

On serverless AWS solutions, Instructure streamlines its infrastructure management to further optimize the way that it uses compute power. The company uses AWS Fargate, a serverless, pay-as-you-go compute engine for building applications. Instructure also uses AWS Lambda, a serverless, event-driven compute service, to run code for nearly any type of application or backend service.

Outcome | Spending More Time on Innovation Instead of Infrastructure Management
Instructure plans to migrate its remaining databases running on older instance types to AWS Graviton3 processors. The company is reinvesting its savings from Amazon EC2 into developing data services on AWS that give customers insight into at-risk students so that it can engage them proactively. To do so, Instructure expects to expand its use of Amazon Simple Storage Service (Amazon S3), which offers industry-leading scalability, and add additional AWS services, such as Amazon Redshift, which offers cloud data warehousing, and Amazon EMR, a cloud big data solution for petabyte-scale data processing, interactive analytics, and machine learning. “AWS has consistently been a fantastic vendor for us. It is flexible and responsive,” says Pendleton. “Working alongside AWS, we can build solutions that meet our customers’ needs.”
AWS Services Used
Amazon EC2 offers the broadest and deepest compute platform, with over 600 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
Amazon EC2 C7g Instances, powered by the latest generation AWS Graviton3 processors, provide the best price performance in Amazon EC2 for compute-intensive workloads.
AWS Graviton processors are designed by AWS to deliver the best price performance for your cloud workloads running in Amazon EC2.
Amazon EC2 Auto Scaling helps you maintain application availability and lets you automatically add or remove EC2 instances using scaling policies that you define.
" Securing Workforce Access at Scale Using AWS IAM Identity Center with Xylem _ Xylem Case Study _ AWS.txt,"
Securing Workforce Access at Scale Using AWS IAM Identity Center with Xylem (2023)
Learn how Xylem, a leading water technology company, applies access controls for its workforce users as it accelerates AWS adoption using AWS IAM Identity Center.

About Xylem
Xylem is a water technology company based in the United States that provides efficient, innovative, and sustainable technology solutions to businesses in more than 150 countries. Highlights: reduced workforce identity management and onboarding in AWS from days to hours; improved security posture across AWS accounts; achieved a comprehensive view of security and access across all AWS accounts.

Opportunity | Using AWS IAM Identity Center to Improve Workforce Identity and Access Management in AWS
Water technology company Xylem has adopted a multiaccount strategy to improve efficiency and security posture, using over 140 Amazon Web Services (AWS) accounts. Many of these accounts used native AWS Identity and Access Management (AWS IAM) to securely manage identities and access to AWS services and resources for individual accounts. As Xylem started to increase the number of AWS accounts to increase its business agility and innovation, the company was looking for a solution to consistently apply information security policies across these multiple accounts. Using AWS IAM Identity Center and AWS Organizations to centrally manage workforce access to multiple AWS accounts, Xylem could reduce employee onboarding time, improve its security posture, and achieve a comprehensive view of the security of its accounts.

Founded in 2011, Xylem provides smart water solutions—from water meters to leak detection services—to utility companies and other customers in 150 countries. When Xylem began to provide operational security controls across its cloud products, it discovered that identity credentials were not uniform across its 140 AWS accounts. When team members shifted roles, they needed to gain access to other accounts. To create a common identity and access framework enforceable across the company and its AWS accounts, Xylem decided to use AWS IAM Identity Center. “We have a consistent identity solution that we manage within any group, we’re able to audit access, and we can enforce consistent identity policies, multifactor authentication, password complexity and password rotation, and on and on,” says Josh Jacobs, senior manager for global security operations at Xylem. “We’re able to do a lot with limited resources.”
Solution | Benefiting from Multiaccount Identity and Access Management Using AWS
The company began migrating workforce identities to AWS IAM Identity Center in 2021. These identities include the company’s data lake team, one of its most security-conscious development teams. The migration is going smoothly, with no downtime for Xylem products. The company also uses AWS Security Hub to automate AWS security checks and centralize security alerts, monitoring data and security 24/7 and improving its security posture. Xylem has sped up the onboarding of new employees to AWS; their identities are set up before they begin working, instead of days later. “Everybody at Xylem has an identity, and if they shift into a role where they will be using AWS, it’s essentially zero time to get the identity piece of that added,” says Jacobs. This improvement in identity management and access controls helps employees develop products faster, resulting in better time to market.

By using AWS IAM Identity Center, Xylem can provide workforce access at scale as it continues to accelerate cloud adoption and innovate solutions for customers. New business acquisitions can be assimilated into workforce access while consistently applying policies across multiple AWS accounts.
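The case study doesn't include Xylem's automation, but centrally granting a workforce group access to a member account is a single Identity Center API call. A minimal sketch with placeholder ARNs and IDs; in practice a loop over the account list from AWS Organizations would apply the same permission set everywhere.

import boto3

sso = boto3.client("sso-admin")

# Grant one workforce group a permission set in one member account.
# All ARNs and IDs are placeholders, not Xylem's actual values.
sso.create_account_assignment(
    InstanceArn="arn:aws:sso:::instance/ssoins-EXAMPLE",
    TargetId="111122223333",            # member AWS account ID
    TargetType="AWS_ACCOUNT",
    PermissionSetArn="arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE",
    PrincipalType="GROUP",
    PrincipalId="g-0123456789",         # Identity Center group ID
)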
Outcome | Expanding the Security Approach to More AWS Services
Xylem has already migrated 15 products to the new solution and plans to have the process completed by early 2023. After that, the company plans to operationalize this approach to identities and use it for more AWS services. “The only way we’re going to keep building and growing as a company is to strengthen identity as our foundation, and that’s exactly what we did using AWS,” says Jacobs.

AWS Services Used
AWS IAM Identity Center (successor to AWS Single Sign-On) helps you securely create or connect your workforce identities and manage their access centrally across AWS accounts and applications.
With AWS Identity and Access Management (AWS IAM), you can specify who or what can access services and resources in AWS, centrally manage fine-grained permissions, and analyze access to refine permissions across AWS.
AWS Organizations lets you create new AWS accounts at no additional charge. With accounts in an organization, you can easily allocate resources, group accounts, and apply governance policies to accounts or groups.
AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and enables automated remediation.
" SecurionPay _ Amazon Redshift _ Amazon Quicksight _ Amazon Kinesis _ AWS.txt,"
SecurionPay Manages Complex Online Payments, Scales to 300% Growth Using AWS (2022)

About SecurionPay
SecurionPay runs an online credit card payment platform that handles 1 out of every 1,500 transactions worldwide for Mastercard and Visa. The company combines the latest technology with customer-centric user experience to create a product that is optimized to meet future needs. It facilitates complex payments for both low- and high-risk global merchants, serving global enterprises as well as mid-sized and small companies. The platform supports a total of 160 currencies and 23 languages, making it an ideal service for cross-border transactions.

Benefits of AWS
Scaled to handle 25 million monthly transactions and 300% growth
Released daily product updates while maintaining 99.995% uptime
Generated business analytics reports in seconds
Increased customer sales conversions by an average of 19%

A reliable, secure, and scalable platform ensures the flawless processing of millions of transactions and provides the flexibility to deploy new payment options so SecurionPay’s customers can increase their sales conversions. SecurionPay turned to Amazon Web Services (AWS) to build a secure platform that can reliably process millions of concurrent transactions and support the company’s 300 percent year-on-year growth. It also developed a flexible architecture that promotes innovation so that it can offer new payment options to merchants to boost their sales conversions. Backed by AWS, SecurionPay has scaled to meet 300 percent business growth, improved customers’ sales by 19 percent, and used data analytics to support smarter business decisions. SecurionPay built a payment platform relying on 60–70 AWS services. “Using AWS, we can scale to meet demand, and we were profitable within 6 months of launching the business,” says Lucas Jankowiak, CEO and co-founder at SecurionPay.
Better Business Intelligence Using Amazon QuickSight
Global merchants require scalable resources to ensure they have the capacity to meet variable buyer demand and process payments in a timely fashion. SecurionPay wanted to maintain its customer experience standards while rapidly growing its customer base. To do this, the company decided to draw from real-time insights based on its customer behavior. It used Amazon Redshift for its fast, easy, and secure cloud data warehousing. It also reached for Amazon QuickSight, a cloud-based business intelligence (BI) tool for creating dashboards. Within 3 months of migrating to AWS, employees from across the business—from operations to sales—were using data-driven insights to make decisions. For example, the risk team can now easily drill down into the details of suspicious events without involving the data analytics team, speeding up time to resolution. They can also quickly spin up dashboards on topics such as merchants, regions, or traffic-per-card issuer. Reports that previously took hours are now almost instantaneous, delivering timely insights. In turn, its merchant customers benefit from the comprehensive BI tool that allows them to easily group and filter transactions, giving them better information about their businesses.

Alerting Merchant Customers to Potential Fraud
Since chargebacks, fraud, and failed payments can hamper revenue growth if not managed properly, the company has a security-first approach. Maintaining the highest possible level of data security for fast, increasing volumes of transactions is central to its business, and highly effective anti-fraud features are essential to quickly spotting fraudulent charges and taking the necessary action. Close cooperation with the AWS account team facilitated the development of an alerting system based on Amazon QuickSight. The system spots every abnormal behavior in the traffic and immediately notifies customers about the event. “This is exactly what we needed. We had an idea of how to do it, but AWS suggested we build a custom engine, which we did,” says Szymon Święcki, DevOps engineer at SecurionPay. “Our workshops with AWS have been super helpful.” To implement the highest standards, SecurionPay has based its multi-layered approach to security on AWS. Using AWS Key Management Service (AWS KMS) made it easy to create and manage cryptographic keys, saving the IT team time on maintenance and backup tasks. The company also needs less time to complete the Payment Card Industry (PCI) compliance audit process: reducing it from 3 days to half a day has freed up the team to focus on innovation.

Time-effectiveness goes hand in hand with cost decrease: costs were reduced by up to 90 percent since SecurionPay began using AWS Lambda, a serverless, event-driven compute service, and Amazon Kinesis, which makes it easy to collect, process, and analyze real-time streaming data.
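The case study does not publish SecurionPay's pipeline code. As a rough sketch of the ingestion side of such a design, the snippet below pushes one payment event onto a Kinesis data stream feeding the downstream analytics and alerting layer, partitioned by merchant so each merchant's events stay ordered. All field and stream names are hypothetical.

import json
import boto3

kinesis = boto3.client("kinesis")

def publish_transaction_event(txn: dict) -> None:
    """Push one payment event onto the stream feeding the analytics layer."""
    kinesis.put_record(
        StreamName="payment-events",          # hypothetical stream name
        Data=json.dumps(txn).encode(),
        # Partitioning by merchant keeps each merchant's events ordered.
        PartitionKey=txn["merchant_id"],
    )

publish_transaction_event(
    {"merchant_id": "m-42", "amount": 1999, "currency": "EUR", "status": "ok"}
)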
Customer Sales Rise By 22% Using AWS
SecurionPay supports flexible payment options such as one-click upgrades, offers, cancellations, and upsales for customers, while providing secure authentication. Drawing upon AWS services, SecurionPay provides a checkout process that is 2–4 minutes faster than previously, because customers avoid redirecting payments to third-party websites and reduce the number of forms to fill out. This improvement in convenience contributed to increased customer sales conversions by an average of 19 percent, and overall sales by 22 percent. “Passing the benefits of our secure and scalable service to our customers gives us a competitive advantage,” says Jankowiak.

Encouraging Innovation with 99.995% Uptime
SecurionPay has improved the efficiency of product development using AWS DevOps tools. The team set up a continuous integration/continuous delivery (CI/CD) pipeline that delivers daily product updates, so customers always have access to the latest features. The development platform is consistent and designed for 99.995 percent reliability, helping developers to test and build new services and address bespoke merchant requirements. “Using AWS, we can adapt and change our services fast,” says Jankowiak. “If any errors occur, we can fix them and roll out improvements immediately. This flexibility adds to our competitive advantage—the sky is the limit.”

AWS Services Used
Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine learning to deliver the best price performance at any scale.
Amazon QuickSight allows everyone in your organization to understand your data by asking questions in natural language, exploring through interactive dashboards, or automatically looking for patterns and outliers powered by machine learning.
Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. It offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application.
" Security Posture Strengthened Using AWS Shield Advanced with OutSystems _ Case Study _ AWS.txt,"
Cost and Time Savings Achieved, Security Posture Strengthened Using AWS Shield Advanced with OutSystems (2022)
Learn how OutSystems managed thousands of web application firewalls using AWS Firewall Manager.

About OutSystems
Founded in 2001 in Portugal, OutSystems is a global software vendor that provides a high-performance, low-code application development platform that helps its customers develop applications quickly with minimal coding knowledge, supporting 13 AWS Regions with offices around the world. Highlights: 88% reduction in monthly costs; 4,000 application load balancers supported; less than 5-minute response times for issues; managed the complexity of security solution deployment without additional resources.

Overview
As software vendor OutSystems grew its business, it needed a scalable security solution for its cloud service to further protect customers from cyber issues and simultaneously reduce operational overhead. In 2020, OutSystems looked to Amazon Web Services (AWS) for centralized security management so that the company could offer protection at scale while limiting manual interventions. Using services like AWS Shield Advanced, a managed distributed denial-of-service protection service, OutSystems successfully scaled to manage the complexity of over 4,000 web application firewalls (WAFs) while improving the response time to security issues after finding a malicious indicator from approximately 2 hours to under 5 minutes. OutSystems paired Shield Advanced with AWS Firewall Manager, a security management service for centrally configuring and managing firewall rules across accounts and applications. Because Firewall Manager supports Shield Advanced policies, OutSystems used both services to accomplish its goal of managing the complexity of security solutions while improving response time.

Opportunity | Using AWS Shield Advanced to Manage the Complexity of Security Solutions for OutSystems
Recognized by Gartner in 2021 as a leader in enterprise low-code application development platforms, OutSystems supports customers in a variety of industries, including customers managing business-to-business applications, business-to-employee applications, and business-to-consumer applications. Its customers’ applications have different usages and traffic patterns depending on the use case, making it challenging for OutSystems to manage the wide range of behaviors and security postures. Prior to using AWS services for a security solution, OutSystems supported two customers with their own custom security protection solution. However, this solution required a significant amount of manual effort from the company and didn’t offer protection at scale. Starting in 2020, OutSystems implemented a security solution using Firewall Manager, Shield Advanced, and AWS WAF—which helps protect web applications from common web exploits—to meet the varying needs of its customers because it had already built its application development platform using AWS services. “It was a natural choice for us because our product runs natively on AWS, and we have experience with it internally, so we could implement the security solution with less overhead,” says Igor Antunes, head of security architecture at OutSystems.
Solution | Improving Response Times to Security Issues and Reducing Costs Using AWS Shield Advanced, AWS WAF, and AWS Firewall Manager
The security solution for OutSystems needed to support the complexity and large scale required by its customers. The company manages a large and growing number of application load balancers—over 4,000 as of 2022—and serves thousands of applications across all load balancers. To protect its customers across multiple geographic regions, OutSystems uses AWS WAF. “Using AWS services, we can manage the security posture of all customers from a central place by deploying rules that are specific to our technology and blocking malicious events,” says Antunes. “We also have the granularity to address very specific challenges.” Using Firewall Manager, OutSystems can define rules while leaving room for local configuration options based on a country’s regulations or a company’s policies. For example, OutSystems can support configurations related to geo-blocking for individual customers in a specific environment while relying on a basic rule set for configurations that don’t vary across customers.
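OutSystems' actual rule sets are not shown; the sketch below illustrates what a single customer-specific geo-blocking web ACL of the kind described above can look like when created directly through the AWS WAF API (in production, Firewall Manager would push such rules across accounts). Names, scope, and country codes are illustrative.

import boto3

waf = boto3.client("wafv2")

# One customer-specific web ACL whose only rule blocks selected countries.
waf.create_web_acl(
    Name="customer-x-acl",                 # hypothetical customer ACL
    Scope="REGIONAL",                      # attaches to an application load balancer
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "block-selected-countries",
            "Priority": 0,
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["KP", "IR"]}},
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "geoBlock",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "customerXAcl",
    },
)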
Using AWS services, OutSystems achieved significant time savings so that the company could reallocate resources to other projects. “Previously, an analyst and an operator would have to create the local WAF and deploy the rules with the solution when reacting to an event,” says Antunes. “Using AWS services, we reduced 2 hours of work to less than 5 minutes.” This saved time is particularly impactful with the company’s ever-growing number of WAFs because it would be unsustainable to change rules manually for all the WAFs or to adapt the rules to a set of customers. If a cyber issue occurs, OutSystems can resolve it quickly because AWS Shield Advanced also provides early detection of possible distributed denial of service attacks and tight collaboration with the AWS response team.

OutSystems reduced its costs by 88 percent per month by upgrading to Shield Advanced. The company gains these significant cost savings on an ongoing basis despite its scale because it no longer needs to pay for each WAF or rule. “Using AWS Shield Advanced and AWS Firewall Manager, we pay a fixed rate and get as much protection as we need,” says Antunes. When deploying the security solution, OutSystems also saved on implementation costs compared with the cost of a solution from another vendor because the company didn’t need to obtain additional resources or capacity above what it was already using internally. Additionally, by using the infrastructure of Firewall Manager for the deployment of its solution, OutSystems could focus on its own product instead of designing its security solution from scratch. Throughout the process, OutSystems received support from the teams at AWS to manage the complexity of the solution. For example, when OutSystems exceeded AWS limits of internal APIs because of the scale of its security solution, the AWS WAF and Firewall Manager teams worked alongside the company to troubleshoot. “The teams at AWS were always available to work with us and provide guidance on the best practices for deploying this solution,” says Antunes.

Outcome | Continuing to Fine-Tune the Security Solution Using AWS Firewall Manager
When deploying its security solution, OutSystems worked closely alongside AWS teams to address challenges and meet customer needs. OutSystems plans to continue implementing additional capabilities of AWS Firewall Manager to fine-tune its security solution and better protect its customers. “Throughout the full lifecycle, from the inception of an idea until the end, we always used AWS to get the right support at the right time,” says Antunes.

AWS Services Used
For higher levels of protection against attacks targeting your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 resources, you can subscribe to AWS Shield Advanced.
AWS WAF helps you protect against common web exploits and bots that can affect availability, compromise security, or consume excessive resources.
AWS Firewall Manager is a security management service that allows you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations.
Português" Selecting the right foundation model for your startup _ AWS Startups Blog.txt,"AWS Startups Blog Selecting the right foundation model for your startup by Aaron Melgar | on 22 JUN 2023 | in AWS for Startups , Generative AI , Thought Leadership | Permalink |  Share When startups build generative artificial intelligence (AI) into their products, selecting a foundation model (FM) is one of the first and most critical steps. A foundation model is a large machine learning (ML) model pre-trained on a vast quantity of data at scale resulting in a model that can be adapted to a wide range of downstream tasks. Model selection has strategic implications for how a startup gets built: Everything from user experience and go-to-market, to hiring and profitability, can be affected by selecting the right model for your use case. Models vary across a number of factors, including: Level of customization – The ability to change a model’s output with new data ranging from prompt-based approaches to full model re-training Model size – How much information the model has learned as defined by parameter count Inference options – From self-managed deployment to API calls Licensing agreements – Some agreements can restrict or prohibit commercial use Context windows – How much information can fit in a single prompt Latency – How long it takes for a model to generate an output Following are some of the most impactful aspects to consider when selecting a foundation model to meet your startup’s needs. Application-specific benchmarks As startups evaluate the performance of different models for their use case, a critical step in the process is establishing a benchmark strategy, which helps a startup quantify how well the content that a model generates matches expectations. “There are a large number of models out there, ranging from closed source players…to open-source models like Dolly, Alpaca, and Vicuna. Each of these models have their own tradeoffs — it’s critical that you choose the best model for the job,” explains Noa Flaherty, chief technology officer (CTO) and co-founder of Vellum . “We’ve helped businesses implement a wide variety of AI use cases and have seen first-hand that each use case has different requirements for cost, quality, latency, context window, and privacy.” Generalized benchmarks (such as Stanford’s Holistic Evaluation of Language Models ) are a great starting point for some startups because they help prioritize which foundation models to start experimenting with. However, generalized benchmarks may be insufficient for startups that are focused on building for a specific customer base. For example, if your model needs to summarize medical appointments or customer feedback, the model should be evaluated against how well it can perform these specific tasks. “To do custom benchmarking, you need a workflow for rapid experimentation – typically via trial and error across a wide variety of scenarios. It’s common to over-fit your model/prompt for a specific test case and think you have the right model, only for it to fall flat once in production,” Noa advises. Custom benchmarking may include techniques such as calculating BLEU and ROUGE scores ; these are two metrics that help startups quantify the number of corrections that are necessary to AI-generated text before giving it final approval for human-in-the-loop applications. Quality metrics and model evaluation are critical, which is why Noa founded Vellum in the first place. 
This Y-Combinator-backed startup focuses its product offerings on experimentation. Per Noa, “The more you can compare/contrast models across a variety of cases that resemble what you’ll see in production, the better off you’ll be once in production.”

Smaller, purpose-built models are on the rise
Once quality benchmarks have been established, startups can begin to experiment with using smaller models meant for specific tasks, like following instructions or summarization. These purpose-built models can significantly reduce a model’s parameter count while maintaining its ability to perform domain-specific tasks. For example, startup GoCharlie is partnered with SRI to develop a marketing-specific multi-modal model with 1B parameters. “One-size-fits-all models will never truly solve an end user’s needs, whereas models designed to serve those needs specifically will be the most effective,” explains Kostas Hatalis, the chief executive officer (CEO) and co-founder of GoCharlie. “We believe purpose-built models tailored to specific verticals, such as marketing, are crucial to understanding the genuine requirements of end users.”

The open-source research community is driving a lot of innovation around smaller, purpose-built models such as Stanford’s Alpaca or Technology Innovation Institute’s Falcon 40B. Hugging Face’s Open LLM Leaderboard helps rank these open-source models across a range of general benchmarks. These smaller models deliver comparable benchmark metrics on instruction-following tasks, with a fraction of the parameter count and training resources. As startups customize their models for domain-specific tasks, open-source foundation models empower them to further customize and fine-tune their systems with their own datasets. For example, Parameter-Efficient Fine-Tuning (PEFT) solutions from Hugging Face have shown how adjusting a small number of model parameters, while freezing most other parameters of the pre-trained LLMs, can greatly decrease computational and storage costs. Such domain-adaptation-based fine-tuning techniques are generally not possible with API-based proprietary foundation models, which can limit the depth to which a startup can build a differentiated product. Focusing usage on specific tasks also makes the foundation model’s pre-trained knowledge across domains like mathematics, history, or medicine generally useless to the startup.
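To make this concrete, here is a minimal PEFT sketch using Hugging Face's peft library to wrap an open model with LoRA adapters; the base model and target module names are examples and vary by architecture.

# Requires: pip install transformers peft
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-rw-1b")  # example

# LoRA freezes the pre-trained weights and trains small adapter matrices.
config = LoraConfig(
    r=8,                                 # adapter rank; sets trainable size
    lora_alpha=16,
    target_modules=["query_key_value"],  # module names vary by architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base model

The wrapped model then trains with a standard fine-tuning loop, but only the adapter weights accumulate gradients, which is what cuts the compute and storage cost the paragraph above describes.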
In a recent experiment, AWS measured up to 50% savings in inference cost when using ARM-based AWS Graviton3 instances for open-source models relative to similar Amazon Elastic Compute Cloud (Amazon EC2) instances. These AWS Graviton3 processors also use up to 60% less energy for the same performance than comparable Amazon EC2 instances, which helps startups that are weighing the environmental impact of power-hungry inference hardware. A study from the World Economic Forum detailed the energy consumption of data centers. Once considered an externality, environmental implications have risen to the top of mind for many, and AWS enables startups to quantify their environmental impact through offerings such as Carbon Footprint Reporting, which helps companies compare the energy efficiency of different hardware selections. Conclusion. Wherever your startup is in its generative AI journey, whether getting the infrastructure ready, selecting a model, or building and fine-tuning, AWS provides maximum flexibility for customers. Amazon Bedrock, a fully managed service, gives you access to foundation models from leading providers, including Amazon’s own Titan family of models, available via a fully managed API. Amazon SageMaker JumpStart is a self-service machine learning hub. It offers built-in algorithms, pre-trained foundation models, and easy-to-use solutions for common customer use cases like fine-tuning models or customizing infrastructure. Check out these generative AI resources for startups building and scaling on AWS 🚀. Need help deciding which model or solution to choose? Want to work with AWS to offer your own model or algorithm? Reach out to our team today!
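As an illustration of the fully managed API route mentioned in the conclusion, the following sketch calls a Titan text model through Amazon Bedrock using the AWS SDK for Python (boto3). The model ID, Region, and prompt are placeholders, and the request body follows the Titan-style schema, which differs for other model providers.

# A minimal Amazon Bedrock invocation sketch with boto3.
# Model ID, Region, and prompt are illustrative placeholders.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",  # example Titan model ID
    body=json.dumps({
        "inputText": "Summarize this customer review: great app, slow checkout.",
        "textGenerationConfig": {"maxTokenCount": 128, "temperature": 0.2},
    }),
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])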
TAGS: AIML. About the author: Aaron Melgar empowers the AI/ML Startups & Venture Capital ecosystem at AWS, focused on early-stage company growth. He is a former founder, series-A product manager, machine learning director, and strategy consultant. He is a second-generation Latin American who loves tennis, golf, travel, and exchanging audiobook recommendations about economics, psychology, or business." Shgardi Case Study.txt,"Shgardi Boosts Monthly Orders by 20%, Cuts Costs by 40%, and Prepares for Growth Using AWS. Shgardi is a Saudi Arabia-based delivery service that operates in 80 cities and arranges deliveries of food, pharmaceuticals, groceries, parcels, and other goods. The company migrated to Amazon Web Services (AWS) in 2021. Containerization on AWS made it easier for Shgardi to manage its underlying infrastructure and instead focus on innovation and business development. It also deployed microservices, so its delivery platform could automatically scale to match demand, and it used several AWS services to improve the performance, security, and reliability of its platform. Opportunity | Amazon EKS Auto-scaling Helps Shgardi Cut Infrastructure Costs by 40%. Tarek Dahab, chief technical officer (CTO) at Shgardi, explains that its previous infrastructure wasn’t designed to scale quickly. “During the COVID-19 pandemic, our traffic was increasing every day, so we kept ordering new servers,” he says. “But by the time they arrived and were set up, we needed more.” According to Dahab, in one year the infrastructure grew from a single dedicated server to 40 servers operating in clusters. “Our technical team was overwhelmed maintaining these servers, and they had no time for innovating or performing higher-value tasks,” he says. The move to microservices and an auto scaling infrastructure has freed up Shgardi’s developers so they can focus on coming up with new ways to improve efficiency and customer experience instead of maintaining servers. Previously, deploying updates to the platform, whether to fix bugs or add a new feature, would take up to a week. Dahab says that it now takes a few hours, a reduction of at least 70 percent. Dahab adds that the migration has made Shgardi more efficient and placed it in a much healthier position than it was pre-pandemic. “We are going to continue using more AWS services,” he says. “This is helping us improve our market share in the MENA region and will improve our ability to expand country by country.” Solution | Shgardi Increases Revenue and Conversion Rate Using Amazon Personalize. Shgardi was looking for a way to convert more of its website visitors to customers. It also wanted to increase the average value of its active customers’ shopping baskets. The company decided to use Amazon Personalize, which lets developers quickly build and deploy curated recommendations and intelligent user segmentation at scale using machine learning. Dahab says it took less than 2 weeks to fully deploy the service. So far, it has been a huge success. “In the 3 months since we used Amazon Personalize to build our recommendation engine, we have increased monthly orders by 20 percent and boosted the conversion of new visitors into customers by 30 percent,” he says. On the customer-facing side, Shgardi used Amazon CloudFront, a content delivery network (CDN) service built for high performance, security, and developer convenience, to cache images and objects so its customers have the best possible browsing experience.
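The case study does not include code, but a retrieval call against a trained Amazon Personalize campaign typically looks like the boto3 sketch below; the campaign ARN, Region, and user ID are hypothetical placeholders rather than Shgardi’s actual resources.

# A minimal Amazon Personalize retrieval sketch with boto3.
# The campaign ARN, Region, and user ID are hypothetical.
import boto3

personalize = boto3.client("personalize-runtime", region_name="me-south-1")

response = personalize.get_recommendations(
    campaignArn="arn:aws:personalize:me-south-1:123456789012:campaign/demo-campaign",
    userId="user-42",
    numResults=10,
)

# Each returned item ID can be mapped back to a product in the catalog.
for item in response["itemList"]:
    print(item["itemId"], item.get("score"))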
Outcome | Shgardi Attracts Millions in Investments and Eyes Expansion. Migrating to AWS has not only improved the reliability and performance of Shgardi’s platform, it has also reduced costs, increased revenue, and improved the productivity of the company’s developers. This has resulted in the delivery service attracting external investment and being recognized in the Forbes Middle East and North Africa (MENA) list of the most-funded startups, for having raised more than $37 million. The migration has also brought increased uptime and revenue, as well as savings of 40 percent on infrastructure costs, which were previously around $15,000 per month, and it reduced the time taken to deploy platform updates by 70 percent, which improved the productivity of Shgardi’s IT staff. Migrating to AWS has allowed Shgardi to be more flexible, agile, and reliable in serving its clients. The lockdowns during the COVID-19 pandemic caused demand for Shgardi’s delivery services to increase exponentially. As a result, the company struggled to keep its platform from being overwhelmed and potentially crashing. The company would regularly add new dedicated servers to its on-premises infrastructure to try to match demand. This was a manual and time-consuming process and caused its infrastructure costs to continually increase. Before it migrated to AWS, Shgardi was unhappy that its on-premises platform experienced an uptime rate of just 90 percent. Dahab explains that, on top of problems faced during traffic spikes, the company had to plan regular maintenance downtime. “We had servers going down, the interconnect between servers going down, and we had to schedule regular maintenance windows, when customers wouldn’t be able to access the platform,” says Dahab. “Since migrating to AWS, this is no longer an issue and we have had more than 99 percent uptime.” Shgardi has about 600 employees, 70 of whom have a technical background in coding and engineering skills. The company used its in-house expertise to containerize its platform using Amazon Elastic Kubernetes Service (Amazon EKS), a managed service to run Kubernetes in the AWS Cloud and on-premises data centers, for auto scaling, and it deployed hundreds of microservices. Shgardi also wanted to maximize the platform’s performance and minimize latency to ensure that visitors would have a good user experience. On the backend, the company used Amazon Relational Database Service (Amazon RDS), a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. It also used Amazon Aurora, a database designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. “Using Amazon RDS and Amazon Aurora was the simplest way to enable our existing databases to scale on demand while also reducing management overheads,” says Dahab. To aid connectivity in the backend and make the most efficient use of its new architecture, Shgardi used Amazon MQ, a managed message broker service for Apache ActiveMQ and RabbitMQ that streamlines setup, operation, and management of message brokers on AWS. Amazon MQ allows diverse applications on various platforms to communicate and exchange information. “These AWS services work together and allow us to easily manage our platform,” says Dahab. “When the traffic spikes, they auto scale. And compared to our previous infrastructure, we are spending about 40 percent less. This was the perfect solution for us.”
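To illustrate how microservices might exchange messages through Amazon MQ, here is a minimal sketch of a producer publishing to a RabbitMQ broker using the open-source pika client; the broker endpoint, credentials, queue name, and payload are hypothetical and not taken from Shgardi’s architecture.

# A minimal sketch of publishing an event to a RabbitMQ broker on Amazon MQ.
# Assumes the `pika` client library; endpoint and credentials are hypothetical.
import json
import ssl
import pika

params = pika.URLParameters(
    "amqps://app_user:app_password@b-1234-example.mq.me-south-1.amazonaws.com:5671"
)
params.ssl_options = pika.SSLOptions(ssl.create_default_context())

connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="order-events", durable=True)

# Downstream services (dispatch, notifications) consume from this queue,
# which decouples them from the ordering service.
channel.basic_publish(
    exchange="",
    routing_key="order-events",
    body=json.dumps({"orderId": "A-1001", "status": "created"}),
)
connection.close()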
Shgardi started out delivering food when it launched in 2019, but during the COVID-19 pandemic it diversified to include general deliveries, parcels, groceries, and pharmaceuticals. It now has over 3 million customers and has completed more than 5 million orders. Increased demand challenged Shgardi’s infrastructure capacity, however, which caused unplanned downtime and negatively impacted the customer experience." Showpad Accelerates Data Maturity to Unlock Innovation Using Amazon QuickSight _ Case Study _ AWS.txt,"In 2021, sales enablement solution company Showpad envisioned using the power of data to unlock innovations and drive business decisions across its organization. Showpad’s legacy solution was fragmented and expensive, with different tools providing conflicting insights and lengthening time to insight. The company decided to use Amazon Web Services (AWS) to unify its business intelligence (BI) and reporting strategy for both internal organization-wide use cases and in-product embedded analytics targeted at its customers. “Amazon QuickSight has become our go-to solution for any BI requirement at Showpad—both internally and externally, especially when it comes to correlating data across departments and business units,” says Jeroen Minnaert, head of data at Showpad. Amazon QuickSight powers data-driven organizations with unified business intelligence (BI) at hyperscale. With QuickSight, all users can meet varying analytic needs from the same source of truth through modern interactive dashboards, paginated reports, embedded analytics, and natural language queries. About Showpad: Founded in 2011 and with offices around the world, Showpad provides a single destination for sales representatives to access all sales content and information, along with coaching and training tools to create informed, upskilled, and trusted buying teams. The platform also provides analytics and insights to support successful information sharing and fuel continuous improvement. In 2021, Showpad decided to take the next step in its data evolution and set forth the vision to power innovation, product decisions, and customer engagement using data-driven insights. This required Showpad to accelerate its data maturity by mindfully using data and technology holistically for its customers. The company already used AWS in other aspects of its business and found that using Amazon QuickSight would meet all its BI and reporting needs with seamless incorporation into the AWS stack. “We chose Amazon QuickSight because of its embedded analytic capabilities, serverless architecture, and consumption-based pricing,” says Minnaert.
Opportunity | Using Amazon QuickSight to Streamline Data-Driven Decisions. Learn how Showpad used Amazon QuickSight to streamline data access and reduce insights turnaround time from months to weeks. Showpad built new customer embedded dashboards within Showpad eOS and migrated its legacy dashboards to Amazon QuickSight, which powers data-driven organizations with unified BI at hyperscale. Outcome | Unlocking Innovation with Self-Service BI and Rapid Prototyping. After determining an approach and building the foundation, the team wanted to scale. But with 70 dashboards containing over 1,000 visuals and over 1,000 tables ingesting data from more than 20 data sources, the team decided to prioritize the migration order. Showpad users can quickly prototype reports in a well-known environment—building reports using QuickSight and then testing them with customers—and have increased dashboard development activity by three times across the organization. “After we settle on reports or dashboards, it does not take much engineering effort to bring them to production,” says Minnaert. After a dashboard is agreed on, it can go through Showpad’s automated dashboard promotion process, which can take an idea from development to production in weeks, not months. Showpad’s users and customers also benefit from performance gains, with 10 times increased speed, when using SPICE (Super-fast, Parallel, In-memory Calculation Engine), the robust in-memory engine that QuickSight uses; it takes only seconds to load dashboards. Using the serverless QuickSight, Showpad expects to see a three-times increase in projected return on investment in 2023. It can deprecate custom reporting, infrastructure, and multiple tools with the new data architecture and QuickSight. “The serverless model was also compelling because we did not have to pay for server instances nor license fees per reader. On Amazon QuickSight, we pay for usage. This makes it easy for us to provide access to everyone by default,” says Minnaert. And by providing dashboard and report building across 600 employees, including analysts and nontechnical users, Showpad reduced the time to build and deliver insights from months to weeks.
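As a rough illustration of the embedded analytics pattern described above (not Showpad’s actual implementation), the following boto3 sketch generates an embed URL for a registered QuickSight user; the account ID, user ARN, and dashboard ID are hypothetical placeholders.

# A minimal QuickSight embedding sketch with boto3.
# Account ID, user ARN, and dashboard ID are hypothetical.
import boto3

quicksight = boto3.client("quicksight", region_name="us-east-1")

response = quicksight.generate_embed_url_for_registered_user(
    AwsAccountId="123456789012",
    UserArn="arn:aws:quicksight:us-east-1:123456789012:user/default/analyst",
    ExperienceConfiguration={
        "Dashboard": {"InitialDashboardId": "example-dashboard-id"}
    },
    SessionLifetimeInMinutes=60,
)

# The frontend loads this URL in an iframe (or via the QuickSight
# embedding SDK) to render the dashboard for the signed-in user.
print(response["EmbedUrl"])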
Solution | Architecting a Portable Data Layer and Migrating to Accelerate Time to Value. But the company’s legacy BI solution and data were fragmented across multiple tools. “If each tool tells a different story because it has different data, we won’t have alignment within the business on what this data means,” says Minnaert. Consistency, ownership, and insufficient data access were also challenges for Showpad across its targeted user base due to a complex BI access process, licensing issues, and insufficient education. Showpad wanted to bring all the data into a unified interface, democratize that data, and drive and unlock innovation through advanced insights. After choosing QuickSight as its solution in November 2021, Showpad took on two streams of development: migrating internal organization-wide BI reporting and building in-product reporting using embedded analytics. Showpad worked closely alongside the QuickSight team for a smooth rollout. On the internal reporting front, the data team took a “Working Backwards” approach to make sure it had the right process before going all in with its existing dashboards. The company also reimagined its data pipeline and architecture, creating a portable data layer by decoupling data transformation from visualization, machine learning, and one-time querying tools and centralizing its business logic. The portable data layer facilitated the creation of data products for varied use cases, made available within various tools based on the needs of the consumer. To prioritize the migration order, the company started with dashboards that had the fewest dependencies and worked up to customer success and marketing dashboards that combined product, engineering, and revenue operations data. Showpad launched the first dashboard set in April 2022 and completed its internal BI migration by the end of 2022. As of January 2023, Showpad’s QuickSight instance includes over 2,433 datasets and 199 dashboards. For the second work stream of in-product customer reporting, Showpad released its first version of QuickSight reporting to customers in June 2022. “We went through user research, development, and beta tests in a span of 6 months, which was a big win for us,” says Minnaert. With the foundational architecture in place, shipping to a customer can happen in a few sprints, focusing on iterating and fine-tuning insights instead of solution engineering. The company can then follow up with tailor-made reporting for each customer using the same data so that it tells a consistent story. Showpad continues to expand in-product reporting while optimizing performance for an improved customer experience. Showpad hopes to further reduce the time that it takes to load a dashboard and to make and ship a report to a customer. To make self-service even easier, Showpad will soon launch embedded Amazon QuickSight Q, which empowers anyone to ask questions in natural language and receive accurate answers with relevant visualizations that help them gain insights from the data. By helping business users and experts rapidly prototype dashboards and reports to meet user and customer needs, Showpad uses the power of data to innovate and drive growth across its organization." Sixth Force Solutions _ Amazon Web Services.txt,"Sixth Force Solutions Delivers Cloud Version of Enterprise Architect Software to Meet Customer Demand for Rapid Tool Deployment. To accommodate its customers’ requirements, Sixth Force decided to offer a cloud version of the Sparx Architecture Platform on Amazon Web Services (AWS). Sixth Force launched an AWS-powered cloud hosting service for the Sparx ecosystem, including Enterprise Architect, that runs on Amazon Elastic Compute Cloud (Amazon EC2). With this capability, Sixth Force customers can remotely access Prolaborate and Enterprise Architect on AWS. The cloud version of Enterprise Architect enhances security through AWS services such as the Amazon GuardDuty threat detection service and AWS Web Application Firewall (AWS WAF). Because of the agility and flexibility that comes with AWS, Sixth Force customers can deploy the new versions of Enterprise Architect and Prolaborate faster than before. Benefits: reduces deployment time from months to weeks; helps Enterprise Architect users collaborate reliably across the globe.
Nearly 1 million people across the globe use Sparx Enterprise Architect (Enterprise Architect) every day to design and create software systems and business processes. Enterprise Architect is an integrated visual modeling and design tool offered by Australia-based Sparx Systems, a leader in architecture modeling tools. “Previously, because of bureaucracy and multiple processes, it could take a large enterprise up to nine months to go live with the on-premises version of our software,” says Nabil Saleem, product manager for Sparx Systems Prolaborate. “Now, with AWS, that time is reduced to a few weeks at the most. The day a new version of the software is released, people can start streaming it immediately. This represents a real transformation for our customers.” “We chose AWS for its global scale, ease of use, strong support ecosystem, and compliance and security capabilities,” says Nizam Mohamed, founder of Prolaborate. “Additionally, by selecting AWS, we knew we could easily deploy and scale our solution across multiple geographies.” Customers in the Sparx Systems ecosystem who are adopting the cloud and streaming versions of Prolaborate and Enterprise Architect are experiencing increased reliability because of the underlying technology on AWS and are taking advantage of multiple Availability Zones. “Most of our customers are distributed across multiple cities, and they often struggled with latency and delays,” says Saleem. “Because of the high availability and reliability of AWS, those problems have become a thing of the past. Our solutions perform better and have decreased latency because of AWS, so we know Prolaborate users can collaborate easily, no matter where they are in the world.” By using AWS, Sixth Force has quickly grown its customer base for the new cloud and streaming versions of its software. “We grew our cloud-hosted and SaaS versions of Prolaborate and Enterprise Architect from zero to more than 60 in less than a year since using AWS,” says Nizam. “We have also generated 150 percent revenue growth in the past year and a half. Much of this is due to the scalability and flexibility we have by running on AWS.” Sixth Force also uses AWS to deliver a software as a service (SaaS) version of Prolaborate and Enterprise Architect, EA SaaS. “This is a 20-year-old application with a very strong user base, and we are now bringing it to more users through this AWS-powered EA SaaS solution,” says Nizam. Amazon AppStream 2.0 is a fully managed, non-persistent desktop and application service for remotely accessing your work. NICE DCV is a high-performance remote display protocol that provides customers with a secure way to deliver remote desktops and application streaming from any cloud or data center to any device, over varying network conditions. Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation. For several years, Sixth Force has sought to respond to customer demands for a cloud version of the Sparx Architecture Platform.
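To give a sense of how a desktop application can be streamed on demand with Amazon AppStream 2.0, here is a minimal boto3 sketch that generates a temporary streaming URL; the stack, fleet, user, and application names are hypothetical, not Sixth Force’s actual configuration.

# A minimal AppStream 2.0 streaming-URL sketch with boto3.
# Stack, fleet, user, and application identifiers are hypothetical.
import boto3

appstream = boto3.client("appstream", region_name="eu-west-1")

response = appstream.create_streaming_url(
    StackName="modeling-tools-stack",
    FleetName="modeling-tools-fleet",
    UserId="architect-001",
    ApplicationId="EnterpriseArchitect",
    Validity=3600,  # the URL stays valid for one hour
)

# Opening this URL in a browser streams the desktop application
# without any local installation.
print(response["StreamingURL"])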
“Many of our customers had the on-premises versions of Enterprise Architect and Prolaborate and used dedicated resources to maintain data centers, roll out applications, and change management,” Nizam says. “These customers wanted to take advantage of the agility, cost savings, and scalability of the cloud.” About Sixth Force Solutions: Sixth Force Solutions, based in India, provides Enterprise Architecture consulting for customers in a range of industries. A Sparx strategic partner, Sixth Force offers Prolaborate collaboration software and supports companies in deploying Sparx Enterprise Architect. Creating Cloud and Streaming Versions of Enterprise Architect and Prolaborate. Since 2018, Sparx strategic partner Sixth Force Solutions has complemented Enterprise Architect by offering Prolaborate, a sharing and collaboration software platform. Prolaborate integrates seamlessly with Enterprise Architect and gives software architects the ability to analyze, interact, and make key decisions based on Enterprise Architect model data. “Prolaborate and Enterprise Architect combined help architects create a digital architecture platform by leveraging model data to build dashboards and graphs,” says Nizam. “As a result, users can gain business insights and share and collaborate more easily, no matter where they are located.” Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload. Growing Revenue by 150% on AWS. Sixth Force customers are also lowering costs by implementing Prolaborate and Enterprise Architect on AWS. “A lot of enterprises spent a significant amount of money to manage data centers and infrastructure before our cloud offerings were available, but they no longer need to worry about those things,” Nizam says. The company has seen a specific increase in cloud deployments for large banks, telecommunications firms, and manufacturing organizations in Europe and the US, most of which have security and compliance requirements that Sixth Force can help meet on AWS. “Many of our SaaS customers in particular are returning for renewals and asking for more advanced features,” says Nizam. “This encourages us to continue working on innovative new offerings.” In the future, Sixth Force expects to roll out its Amazon AppStream–based solution to more enterprises in Europe and the US. Nizam concludes, “With the Amazon AppStream–based streaming solution, we can deliver greater scale while offering better collaboration capabilities. We look forward to expanding this solution to give our remote workers worldwide the best possible tools.” To learn more, visit aws.amazon.com/products/end-user-computing. In November 2020, Sixth Force launched a newer SaaS streaming version of Enterprise Architect that extends Amazon AppStream 2.0 capabilities for the Sparx Architecture Platform.
The solutions are streamed through web browsers powered by Amazon EC2 instances and NICE DCV, an AWS remote desktop and application streaming service. “We felt that offering a streaming solution on AWS would cater to our customers in a more customized way and give them more configuration capabilities,” Nizam says." SKODA Uses AWS to Predict and Prevent Production Line Breakdowns.txt,"When a single minute of lost production costs automotive manufacturers the revenue of one car, there’s no room for production downtime. To meet its production demands and avoid unnecessary revenue loss, ŠKODA AUTO (ŠKODA) knew it needed a way to prevent production line issues from occurring instead of just reacting to them. To address this need, ŠKODA turned to Amazon Web Services (AWS) and used AWS Internet of Things services to create MAGIC EYE, an innovative manufacturing solution that works to prevent issues and reduce costly and avoidable downtime. Benefits of AWS: reduces assembly line downtime; increases staff productivity; optimizes production costs; facilitates a predictive approach to production line maintenance. Adopting a Proactive Approach to Production Line Maintenance. With an eye toward improvement, ŠKODA assessed its existing production and maintenance processes and determined that its current reactive approach to assembly line disruptions was not meeting its needs. It needed a way to accurately predict potential problems to prevent breakdowns before they occur. Predictive maintenance leaves no room for failure and breakdown, making it a strong pillar for the ŠKODA maintenance strategy. Fortunately, using AWS, ŠKODA had the technology it needed to make—and scale—such a high-level process improvement. “ŠKODA is a big company with lots of processes and a very fragmented infrastructure, so we need to cooperate with a strong service provider,” says Milan Dědek, manager for predictive maintenance at ŠKODA. “AWS offers plenty of services not only for today but also for future projects.” Harnessing the Power of AI and Computer Vision. Using a combination of AWS services and in-house technology, ŠKODA got to work developing MAGIC EYE, a new way to manage auto production, in 2020. MAGIC EYE computer vision technology collects, monitors, and analyzes equipment data to identify vulnerabilities and calculate different breakdown scenarios before they create a problem. “The aim of our department is to identify these weak places and find a solution to limit or remove the breakdown,” says Dědek.
“MAGIC EYE is one of the most important parts of our approach because it’s directly on the main production line.” For flexible and scalable compute, MAGIC EYE uses Amazon Elastic Compute Cloud (Amazon EC2), which offers secure and resizable compute capacity for virtually any workload. The solution also uses additional AWS services, like Amazon Relational Database Service (Amazon RDS), which provides users with the ability to set up, operate, and scale a relational database in the cloud with just a few clicks. For visualization of the MAGIC EYE solution, the company uses Amazon QuickSight, which helps everyone in an organization to understand data by asking questions in natural language, exploring through interactive dashboards, or automatically looking for patterns and outliers powered by machine learning. The company’s strategic combination of cost-efficient AWS services and onsite expertise has set ŠKODA up for increased cost savings—whether from reduced downtime, faster maintenance, or overall increased efficiency per circuit—across the assembly line. In addition to optimizing costs, ŠKODA can scale with ease to meet fluctuating production needs using AWS. With this scalability, ŠKODA will be able to develop MAGIC EYE into an even more powerful standard solution that can eventually be rolled out to more of the Volkswagen Group’s factories. Envisioning the Future of MAGIC EYE and the ŠKODA Approach. For the Volkswagen Group as a whole, MAGIC EYE is one part of an ambitious long-term plan for improving production processes, increasing productivity, and optimizing cost savings. It’s the first stage in what will be an industry-wide shift to replace reactive production strategies with a more effective, proactive approach. “The flexibility and potential to further roll out MAGIC EYE beyond the production line is definitely important to us,” says Dědek. “There’s no place for failure or breakdown. I think this is the mantra for all production. In the long-term, this approach is a good investment.” About ŠKODA: ŠKODA is a Czech automobile manufacturer headquartered in Mladá Boleslav, Czech Republic, operating under the Volkswagen Group umbrella. Its automobiles are sold in over 100 countries, and the global demand for these vehicles leaves little room for production stalls. Every vehicle not produced costs auto manufacturers like ŠKODA thousands of dollars in lost revenue, so a continuous production line is key for keeping production moving quickly and efficiently and driving revenue generation. For storage, MAGIC EYE relies on Amazon Simple Storage Service (Amazon S3), an object storage service that offers industry-leading scalability, data availability, security, and performance.
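As a rough sketch of how an edge computer on the line could push an inspection photo into Amazon S3 (an assumption about the plumbing, not a detail from the case study), the following boto3 snippet uploads one captured frame with analysis metadata; the bucket name, key scheme, and metadata fields are hypothetical.

# A minimal sketch of storing an inspection photo in Amazon S3 with boto3.
# Bucket, key scheme, and metadata values are hypothetical.
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
key = f"magic-eye/line-1/camera-3/{timestamp}.jpg"

with open("frame.jpg", "rb") as image:
    s3.put_object(
        Bucket="example-inspection-captures",
        Key=key,
        Body=image,
        # Metadata lets downstream dashboards filter by defect class.
        Metadata={"defect-type": "loose-bolt", "confidence": "0.93"},
    )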
ŠKODA’s MAGIC EYE solution uses six cameras mounted by a conveyor frame to monitor equipment and reach places human operators can’t access with ease. In the process of manufacturing electric vehicles, the increased weight of the battery puts additional pressure on the belts, which also requires more monitoring. In the amount of time it takes a car to move through the ŠKODA production line, these cameras collect nearly 450,000 photos. The cameras connect to a powerful computer on the assembly line frame, where 10 artificial neural networks collect and analyze the photos. The results are sent directly to the cloud and stored using Amazon S3, object storage built to retrieve any amount of data from anywhere. If MAGIC EYE detects an irregularity, like dirt in the power line area, loose or cracked bolts, or aluminum track damage, it alerts the maintenance operator, who then decides the best approach to take, such as remedial action or scheduling future repair work during planned downtime. This process is a major shift from ŠKODA’s previous reactive approach, when equipment was only checked during scheduled inspections or when a malfunction became significant enough to impact the assembly line. By then, production could be stalled for minutes, hours, or even days, depending on the problem. Using MAGIC EYE, maintenance operators can see potential concerns in advance and create the best course of action. “With enough data, I’m able to predict when failures could come and the percentage of potential problems,” says Dědek. MAGIC EYE’s neural networks can now recognize a total of 14 defect types and 178 classes, including several subcategories, positioning it to detect hundreds of different scenarios and conditions." SmartSearch-case-study.txt,"SmartSearch Completes a Seamless Migration to the Cloud Using AWS Application Migration Service. Migrating to AWS to Support Growth and Improve System Performance. SmartSearch knew that hosting its system on AWS would both reduce the burden of data center maintenance and unlock key performance improvements. It chose to migrate its entire system to AWS. “We compete with the best in our industry, and we want our customers to have access to the most stable platform,” says L. J. Morris, president and chief technology officer of SmartSearch. “We evaluated everything and realized that AWS was the best choice for us. So, we made the decision to go all in.” To accelerate the migration, SmartSearch adopted AWS Application Migration Service (CloudEndure Migration), which minimizes time-intensive, error-prone manual migration processes by automatically converting source servers from physical, virtual, and cloud infrastructure to run natively on AWS. In only 6 months, the company replicated its servers on AWS without disruptions to its clients. Since completing the first phase of its migration, SmartSearch has achieved vastly improved performance.
As a provider of digital recruiting and staffing solutions, SmartSearch knows that performance and resilience are critical components for its software environment. Reliability and uptime are key for clients to perform at a high level, capturing lucrative commissions and valuable contracts. To continue to meet client expectations and improve system performance, the software company chose to migrate its self-managed data center to Amazon Web Services (AWS). About SmartSearch: SmartSearch is a software company that develops solutions for the staffing and recruiting industry. Global clients rely on SmartSearch’s comprehensive talent acquisition tool to centralize sourcing, hiring, and applicant tracking activities. Founded in 1986, SmartSearch provides talent acquisition software that centralizes sourcing, recruiting, applicant tracking, and hiring activities. To power its service, the company had previously self-managed an on-premises data center. Improving performance or increasing memory was a costly, time-consuming experience for SmartSearch. “We were successful hosting our own data center for years, but as we prepare for rapid growth and acceleration, we want to invest our time and resources in the product and customer needs,” says Morris. “By migrating to AWS, we can focus on building great products, which is what we do best.” On the advice of its parent company, SmartSearch engaged RedNight Consulting, an AWS Partner, to accelerate the migration. RedNight Consulting has significant technical expertise on AWS and worked with SmartSearch to create a comprehensive migration strategy. “RedNight Consulting recommended that we completely recreate our network on AWS first and then optimize it,” says Morris. In January 2021, the teams set out to duplicate SmartSearch’s environment on AWS, with a goal to complete the project in 6 months. Using AWS Application Migration Service to Minimize Downtime and Customer Disruptions. First, SmartSearch and RedNight Consulting completed a proof of concept that identified components that SmartSearch needed to adjust prior to the migration. Based on these findings, the teams performed a domain update, simplified the network architecture, and decommissioned servers that were no longer in use. Then, they began to replicate SmartSearch’s Microsoft SQL Servers on the cloud using AWS Application Migration Service. By June 2021, SmartSearch had completed the first phase of its migration with virtually no disruption to its clients or system downtime. In fact, the cutover window took only 9 hours and was scheduled over a weekend. “In the war on talent, staffing companies sometimes have minutes to submit a resume, satisfy their customer, and gain a huge commission,” says Morris. “Our primary goal was to minimize customer impact due to a migration. RedNight Consulting partnered with us to develop a plan, and we carried it out flawlessly. The migration to AWS was beautiful.” When the new system went live, SmartSearch saw immediate improvements. “The performance that we got on AWS from day one was breathtaking,” says Morris. “We didn’t realize how much more headroom AWS would provide out of the gate.” Since the migration, SmartSearch customers have expressed great satisfaction with the system’s performance and reliability. To learn more, visit aws.amazon.com/application-migration-service. SmartSearch also uses Amazon CloudWatch, a monitoring and observability service that collects and visualizes real-time logs, metrics, and event data in automated dashboards. Using this tool, the SmartSearch IT team can quickly identify and resolve potential performance issues before they affect the client experience.
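To illustrate the kind of proactive monitoring described here, the following boto3 sketch creates a CloudWatch alarm on EC2 CPU utilization; the instance ID, threshold, and SNS topic are hypothetical placeholders, not SmartSearch’s actual setup.

# A minimal CloudWatch alarm sketch with boto3.
# Instance ID, threshold, and SNS topic ARN are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

cloudwatch.put_metric_alarm(
    AlarmName="web-tier-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                # evaluate in 5-minute windows
    EvaluationPeriods=3,       # alert only on sustained load
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-west-2:123456789012:ops-alerts"],
)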
Continuing to Optimize and Improve Software Systems Using Amazon Aurora. Now that it has duplicated its entire environment on AWS, SmartSearch will continue to modernize its infrastructure. In particular, it is in the process of migrating from its SQL Servers to Amazon Aurora, a relational database management system built for the cloud with full MySQL and PostgreSQL compatibility. “We will see meaningful cost and performance improvements by migrating to Amazon Aurora,” says Morris. SmartSearch is also exploring serverless solutions like AWS Lambda, a serverless, event-driven compute service. SmartSearch now powers its software environment using virtual servers on Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. It can spin up new Amazon EC2 instances in seconds when it needs additional capacity and can scale back servers when they are no longer needed. This on-demand scalability saves time and opens opportunities for innovation among its IT team. “We promoted our IT director to director of operations, which would not have been possible previously,” says Morris. “This is a testament to the fact that he doesn’t have to focus solely on our network since the migration.” To comply with regulations for its global clients, SmartSearch can quickly launch its environment in new AWS Regions, which are physical locations where AWS clusters data centers. “For General Data Protection Regulation compliance, we were able to power up a new instance of our network in Germany,” says Morris. “This duplication took a matter of weeks on AWS but would have been a yearlong project on premises.” Now, SmartSearch can seamlessly grow alongside its customers and configure its system to meet their evolving technical requirements. SmartSearch will continue to use AWS to deliver high-performing services to its clients. “Using AWS Application Migration Service, we duplicated an aging system and completely recreated that network in the cloud,” says Morris. “We couldn’t have accomplished this without the support of AWS.”
" Snap optimizes cost savings with Amazon S3 Glacier Instant Retrieval _ Snap Case Study _ AWS.txt,"Explore Snap’s journey of innovation using AWS. As Snap’s storage needs increased, the company needed to optimize storage without diminishing performance or compromising user experience. To achieve this, Snap migrated its data from another cloud provider to Amazon Web Services (AWS) and used Amazon Simple Storage Service (Amazon S3), an object storage service that offers industry-leading scalability, data availability, security, and performance. Solution | Saving Tens of Millions on Infrastructure and Improving Visibility into Object Storage. Snap migrated more than 2 exabytes of data—roughly equivalent to 1.5 trillion media files—seamlessly to Amazon S3 Glacier Instant Retrieval from Amazon S3 Standard-IA. “The fact that no customer noticed this major migration to Amazon S3 Glacier Instant Retrieval was a big win for us,” says Vijay Manoharan, manager of the media delivery platform team at Snap. “It was a seamless experience for Snapchatters, and we had no production issues during the entire migration.” As a result of the migration, the company saved tens of millions of dollars on storage. Snap has configured Amazon S3 in 20 AWS Regions around the world so that customers anywhere can retrieve data in milliseconds. The global reach of AWS lets Snap store media closer to the place where Snapchatters are creating it for optimal performance. Snap is also able to deliver content efficiently using Amazon CloudFront, a content delivery network service built for high performance, security, and availability. Migrating Snap’s content to Amazon S3 has also improved operations and visibility. Using Amazon S3 Storage Lens, a feature that delivers organization-wide visibility into object storage usage, the company has better insight into what it’s storing so that it can make more informed, data-driven decisions. Snap also migrated to AWS to scale its infrastructure to support its growth: the amount of content that it stores has grown by 5–10 percent each year. Meanwhile, Snap transitioned other parts of its infrastructure from its previous monolithic architecture to one based on microservices to host many of the services that powered its app. To accomplish this, it turned to Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service to run and scale Kubernetes applications in the cloud or on premises. “We worked extensively with the AWS team to migrate some of our features and components to microservices on AWS,” says Manoharan. Each microservice can be deployed in multiple Regions, simplifying the management of its infrastructure. As a result, Snap saw a 20–30 percent reduction in download latency in certain Regions for refreshing feeds, downloading media, and doing near-real-time communications.
“We’ve been able to off-load all of the regionalization work and costs to AWS so that we can focus on developing new features,” says Manoharan. As a result, Snapchat continues to meet its quarterly cost-optimization goals. In 2016, Snap migrated its data to AWS. “We chose to migrate to AWS because of its global reach, excellent performance, and competitive pricing that, in turn, gave us the ability to reinvest in our business,” says Manoharan. In 2017, Snap migrated one of the app’s most central features—Snapchat Stories—to Amazon DynamoDB, a fully managed, serverless, NoSQL database designed to run high-performance applications at virtually any scale. Using Amazon DynamoDB, the company experienced greater than 99.99 percent availability and can better manage the metadata associated with customers’ photos and videos. The company estimates that it has added 200 million daily active users since 2016 and has dramatically improved its ability to grow and innovate on AWS. On AWS, Snap is ready to handle more growth and roll out innovative features in a way that’s both cost efficient and delivers a great user experience. “By gaining new insights on AWS,” Manoharan says, “we can strike the right balance between further reducing costs and maintaining performance.” About Snap Inc.: Snap Inc. is a camera company that aims to improve the way that people live and communicate through Snapchat, its photo- and video-sharing app, and through its hardware products designed to make capturing and sharing media easier. Snap Inc. (Snap) builds the popular visual messaging app Snapchat, which enhances relationships with friends, family, and the world. More than 363 million daily active users use Snapchat to share and save photos and videos. Snap Optimizes Cost Savings While Storing Over 1.5 Trillion Photos and Videos on Amazon S3 Glacier Instant Retrieval. To optimize the cost of storing permanent content, Snap adopted Amazon S3 Glacier Instant Retrieval, an archive storage class designed to deliver low-cost storage for long-lived data that is rarely accessed but requires retrieval in milliseconds. By using Amazon S3 Glacier Instant Retrieval for its long-term, rarely accessed media files, Snap is saving tens of millions of dollars while delivering the same performance and powering new business opportunities, such as innovative app features and new hardware products.
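The case study notes below that the migration used Amazon S3 Lifecycle policies. As a minimal sketch of that mechanism (with a hypothetical bucket, prefix, and 90-day cutoff rather than Snap’s actual policy), a rule transitioning objects to S3 Glacier Instant Retrieval can be configured with boto3:

# A minimal S3 Lifecycle sketch with boto3: transition older objects to
# the Glacier Instant Retrieval storage class. Names and the 90-day
# cutoff are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-memories-media",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-media",
                "Status": "Enabled",
                "Filter": {"Prefix": "memories/"},
                # Objects older than 90 days move to Glacier Instant
                # Retrieval but can still be fetched in milliseconds.
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER_IR"}
                ],
            }
        ]
    },
)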
Snapchat started with a focus on ephemeral content, such as photos that would disappear after a few seconds, but the app has become a place for Snapchatters—as Snapchat users are called—to store media and memories long term, if they choose. Opportunity | Optimizing Storage by Migrating to AWS. Snap’s needs accelerated in 2016 after the launch of Snapchat Memories, a feature that automatically archives media and resurfaces it over time. “Snapchat Memories is our predominant use case for storing media for long periods,” says Manoharan. Snapchatters might view this content for a few days and then not view it again for months or years, so the company wanted to optimize its storage on AWS for further cost savings. Snap had been storing saved media on Amazon S3 Standard-Infrequent Access (S3 Standard-IA), a storage class for data that is infrequently accessed (once every 1–2 months) but requires rapid access when needed. With the launch of Amazon S3 Glacier Instant Retrieval in November 2021, the company realized that it could save even more on costs with virtually no impact on performance. The Snap team even influenced the development of this archive storage class by providing feedback and collaborating with the Amazon S3 team as the storage class was being designed. To determine if Amazon S3 Glacier Instant Retrieval delivered a lower total cost than Amazon S3 Standard-IA, Snap began by analyzing the access patterns of its data. This analysis showed that using Amazon S3 Glacier Instant Retrieval would reduce costs because the storage class is ideal for data that needs immediate access but is only accessed about once per quarter. So, Snap began migrating to the storage class in March 2022 using Amazon S3 Lifecycle policies. By June 2022, Snap had migrated all existing content and was storing all new content in Amazon S3 Glacier Instant Retrieval. Snap plans to continue looking for opportunities to achieve further cost savings while focusing on innovation. “The AWS team provided us with tremendous support,” says Manoharan. “That commitment has really helped us prioritize our business needs.”" Software Colombia and AWS Team Up to Create Powerful Identity Verification Solution _ Software Colombia Case Study _ AWS.txt,"Learn how Software Colombia builds on Amazon Web Services (AWS) to transform the identity management landscape. “AWS and our new eLogic biometrical solution helps us reduce fraud and risk by 95%, while making our product more inclusive and accessible,” says Alex Chacón, CEO of Software Colombia. Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents. It goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables. Amazon Rekognition offers pre-trained and customizable computer vision (CV) capabilities to extract information and insights from your images and videos.
Benefits: improved security for its customers and digital processes; an identity verification solution designed and prototyped in 4 weeks; a 95% reduction in identity spoofing attacks and risk; a 92% reduction in the overall identification and onboarding process; and increased speed and accuracy to prove a person’s identity in minutes regardless of location. About Software Colombia: Software Colombia is a top-tier software development company based in Bogotá, Colombia, providing cutting-edge technology solutions globally. The company has a team of skilled experts in machine learning (ML), artificial intelligence (AI), software development, mobile app development, web development, cloud computing, and big data. It has completed over 300 successful projects for clients globally, including healthcare, finance, logistics, and education. The company’s focus on innovation, quality, and client satisfaction has earned it recognition as a top software development company in Colombia. Founded and headquartered in Bogotá, Software Colombia specializes in the virtualization of procedures, electronic invoices, digital signatures, chronological stamping, and applications of PKI technology for customers worldwide. It innovates in digital signatures, authentication, and e-commerce solutions with the highest quality standards for its customers, and its mission is to become the leading digital verification and authentication company in the region by 2025. Opportunity | Increasing Accuracy while Reducing Costs with Face Identity Verification. In the modern business environment, identity management has become a vital concern for enterprises that conduct digital transactions. With the proliferation of online platforms and the need to safeguard sensitive data from malicious actors, companies require robust solutions to manage user identity securely. Software Colombia needed an efficient, accurate, and robust biometric facial recognition solution capable of verifying user identity by using advanced algorithms to analyze facial features and match them against existing records. The solution would be used for the processes of issuing X.509 digital certificates and securing the signature of documents online, as well as protecting other important web transactions. Such a solution would help Software Colombia and its customers reduce the cost and risk of fraud on business-critical processes.
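One building block of such a face-verification flow is comparing the photo on an ID document against a live selfie. The boto3 sketch below shows this with Amazon Rekognition; the bucket and key names are hypothetical, and the snippet is an illustration rather than Software Colombia’s implementation.

# A minimal face-comparison sketch with Amazon Rekognition and boto3.
# Bucket and object keys are hypothetical placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-onboarding", "Key": "id-card.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-onboarding", "Key": "selfie.jpg"}},
    SimilarityThreshold=90,  # only return matches at or above 90%
)

for match in response["FaceMatches"]:
    print(f"Face matched with {match['Similarity']:.1f}% similarity")
# A production flow would pair this with liveness detection to block
# photo-of-a-photo spoofing attempts.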
Solution | Expanding Software Colombia Solutions with Machine Learning

Software Colombia's new solution, eLogic Biometrics, was designed and prototyped with the AWS Envision Engineering team in 4 weeks. It mitigates identity spoofing attacks and risk by 95 percent through a biometric face recognition and authentication mechanism, regardless of whether the user provides the image through a phone or another camera. eLogic Biometrics was developed with a serverless architecture, using AWS services such as Amazon Cognito, Amazon SQS, and Amazon Textract for document processing. Software Colombia deploys the solution with AWS Amplify, which supports the new Amazon Rekognition Face Liveness API.

Outcome | Enhancing Identity Verification with Face Liveness Detection

Software Colombia now mitigates identity spoofing attacks by 95 percent, and the time end users spend onboarding into systems and platforms has been reduced by 92 percent. This enhances the user experience in the authentication process and enables more secure electronic communication channels that organizations can use to quickly and safely distribute products and services.

AWS Services Used

Amazon Rekognition offers pre-trained and customizable computer vision (CV) capabilities to extract information and insights from your images and videos. Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents, going beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables. With Amazon Cognito, you can add user sign-up and sign-in features and control access to your web and mobile applications. AWS Amplify is a complete solution that lets frontend web and mobile developers easily build, ship, and host full-stack applications on AWS, with the flexibility to leverage the breadth of AWS services as use cases evolve.
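The case study does not publish eLogic Biometrics' code, but the server side of an Amazon Rekognition Face Liveness check generally follows a two-call pattern: create a session, then fetch its result after the client completes the challenge. A minimal sketch in Python with boto3; the audit bucket and confidence threshold are assumptions:

    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    # 1. Create a liveness session; the session ID is handed to the client
    #    app (for example via AWS Amplify), which runs the camera challenge.
    session = rekognition.create_face_liveness_session(
        Settings={
            "OutputConfig": {"S3Bucket": "example-liveness-audit"},  # hypothetical
            "AuditImagesLimit": 2,
        }
    )
    session_id = session["SessionId"]

    # 2. After the client finishes, fetch the verdict and decide whether a
    #    live person was present. The threshold of 90 is an assumption.
    result = rekognition.get_face_liveness_session_results(SessionId=session_id)
    if result["Status"] == "SUCCEEDED" and result["Confidence"] > 90:
        print("Live user verified, confidence", result["Confidence"])
    else:
        print("Liveness check failed or inconclusive")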
" Spacelift Case Study.txt,"Spacelift Reduces Time Spent on Cloud Management by 90% Using AWS

Almost half of IT recruiters worldwide report difficulties in finding qualified developer candidates. Fast-growing startup Spacelift addresses this shortage of technical staff by helping businesses do more with the DevOps and engineering talent they have. Based in Silicon Valley and Poland, Spacelift has created a platform that simplifies the management of complex cloud environments, so IT teams can focus on creating innovative products rather than maintaining infrastructure. The approach has proved popular and spurred the company's growth from 1 to 40 employees over 2 years. To ensure high levels of reliability, security, and compliance for its platform, Spacelift turned to Amazon Web Services (AWS). Using AWS, the startup has helped customers such as Checkout.com and Kin cut down on the time spent on repetitive infrastructure maintenance tasks by 90 percent. For example, by automating security and data privacy configurations, the company has reduced the time needed to handle these issues by a factor of 10 compared to doing the work manually.

Benefits of AWS
Reduced customers' repetitive development tasks by 90%
Cut down on security and compliance issues by a factor of 10
Sped up cloud environments' configurations by 300%
Built platform on AWS in 4 months—half the expected time

Configuring Cloud Environments 3x Faster

Spacelift's platform combines continuous integration and deployment (CI/CD) processes to manage infrastructure as code (IaC), so customers can easily and quickly set up and maintain cloud architectures. Using the Spacelift platform, customers can replicate code with common open-source IaC tools instead of configuring new cloud environments manually. This reduces the complexity of the infrastructure so it can be managed with fewer DevOps engineers. It also means that new environments needed for startups or a large company opening a new office, for instance, can be set up quickly, easing corporate expansions. Automation also reduces error rates compared to manual configurations, so customers' platforms are more reliable for their end customers.

Developers working for Spacelift's customers can set up cloud environments immediately, even if they have minimal cloud experience, because Spacelift provides an easy-to-use interface to the underlying AWS setup. This means customers require fewer senior-level IT staff or can increase the productivity of current developers, so they have more time to create innovative products and features. "When new developers join a company, they can spin up all the infrastructure they need in seconds with little product knowledge, and then quickly minimize error risks and correct any misconfigurations," says Marcin Wyszynski, founder and chief product officer at Spacelift. "Through automation, our customers' DevOps teams can configure cloud environments 3 times faster than doing the same work manually."

To ensure that its platform is flexible and able to scale rapidly, Spacelift uses AWS Lambda, which allows users to run code without thinking about servers or clusters. This helps the company deal with unpredictable workload demand from customers. "A single customer might launch a thousand tasks that need addressing, and then have nothing to process for the next hour," says Kuba Martin, software engineer at Spacelift. "Using AWS Lambda, we can quickly spin up compute capacity to deal with incoming requests, so they can be resolved quickly and tasks don't accumulate. This means our customers experience reliable performance—and they stay happy."
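The burst-then-idle pattern Martin describes is what Lambda's event-driven scaling handles natively. A minimal sketch, assuming tasks arrive through an Amazon SQS event source mapping (the wiring and handler below are illustrative; Spacelift's internal implementation is not public):

    import json

    def handler(event, context):
        """Process one batch of queued tasks per invocation."""
        processed = 0
        for record in event["Records"]:      # records delivered by SQS
            task = json.loads(record["body"])
            run_task(task)                   # hypothetical task runner
            processed += 1
        return {"processed": processed}

    def run_task(task):
        print(f"running task {task.get('id')}")

Because Lambda fans a thousand queued messages out across concurrent invocations and then scales back to zero, no capacity sits idle during the quiet hours the quote mentions.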
Easing Communication for Hybrid Environments Using AWS IoT Core

Spacelift supports customers working in pure cloud environments as well as those running hybrid models because they need to store certain data on premises to comply with security or privacy regulations. To facilitate information flow between the cloud and the on-premises system, Spacelift uses AWS IoT Core, which easily and securely connects devices to the cloud. "With a direct cloud connection to a customer's IT environment, we can easily route communications," says Wyszynski. "This helps to keep the technical complexity of the platform low and means the client doesn't have to worry about managing additional infrastructure."

Spacelift also cuts down on the time required from developer teams to fix code issues when replicating code. "Using AWS, we can simply roll back to a reset with just 3 clicks and minimize the engineers' involvement, if there are any code errors," says Wyszynski. "This is one of the biggest advantages of having a highly available system."

Getting Up and Running on AWS in Half the Expected Time

The company built its system on AWS from day one, and was up and running in just 4 months, twice as fast as it had estimated it would take. "We moved so quickly thanks to help from the AWS support teams and the AWS Activate program," says Wyszynski. "We were able to quickly verify product assumptions and the support team helped us to get key functionalities right." AWS Activate provides startups with a host of benefits, including AWS credits, AWS support plan credits, and architecture guidance to help grow your business.

Spacelift chose AWS to ensure ease of use for its customers, as the majority of them were already using AWS. "All of our customers use AWS in a sophisticated way, so the fact that we use the same technologies and tools means it's easy for them to get set up with our platform too," says Wyszynski.

Spacelift is now part of AWS ISV Accelerate, a co-sell program for organizations that provide software solutions that run on, or integrate with, AWS. Its solution is also available for businesses to download and deploy from AWS Marketplace. "We're always looking to deepen our use of AWS," says Wyszynski. "Working together closely helps us to build on our success and supports ongoing product development, meaning we can continually improve our services for customers."

About Spacelift

Spacelift offers a collaborative platform to manage cloud infrastructures and services. Its platform uses continuous integration and deployment (CI/CD) processes and supports infrastructure-as-code management tools to speed runtime configuration, version management, and state management. It has 40 employees and is based in Poland and the US.

AWS Services Used

AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. AWS IoT Core lets you connect billions of IoT devices and route trillions of messages to AWS services without managing infrastructure.
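As a rough illustration of the cloud-to-on-premises routing described above, a backend can publish a command over AWS IoT Core's message broker to an agent that keeps an outbound MQTT connection open. The topic and payload here are invented for the example; Spacelift's message schema is not public.

    import json
    import boto3

    iot_data = boto3.client("iot-data", region_name="eu-west-1")

    # Publish a command to a hypothetical on-premises worker's topic.
    iot_data.publish(
        topic="workers/site-42/commands",
        qos=1,  # at-least-once delivery
        payload=json.dumps({"action": "run_stack", "stack_id": "example-stack"}),
    )
    # The on-premises agent subscribes to this topic over its outbound MQTT
    # connection, so no inbound firewall openings are needed.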
Português" Sprout Social Reduces Costs and Improves Performance Using Amazon EMR _ Case Study _ AWS.txt,"Amazon Simple Storage Service Sprout Social’s migration to Amazon EMR meant a 40 percent reduction in costs and a 30–50 percent decrease in batch data processing time. It also meant that Sprout Social could focus less on technical issues and more on core business goals, like research and improving features for customers. Français As a company that provides social media management software for businesses, Sprout Social processes enormous amounts of data. But with its self-managed batch processing tech stack nearing its end of life, the company needed a new solution. Sprout Social was already using several Amazon Web Services (AWS); so, after evaluating a few other service providers, the company ultimately migrated to Amazon EMR, a cloud big data solution for running large-scale distributed data processing jobs, interactive SQL queries, and machine learning applications using open-source analytics frameworks such as Apache Spark, Hive, and Presto. Español Amazon EMR is a cloud big data platform for running large-scale distributed data processing jobs, interactive SQL queries, and machine learning (ML) applications using open-source analytics frameworks. Solution | Reducing Costs and Improving Operations improved batch job performance 日本語 2022 Sprout Social Reduces Costs by 40% and Improves Performance by 50% Using Amazon EMR AWS offerings make it possible for us to continue investing heavily in research and development and developing customer features as opposed to fighting a battle to keep costs under control.” Get Started 한국어 About Sprout Social Sprout Social saw the benefits of migrating to Amazon EMR almost immediately. The biggest benefit was that Sprout Social saw reduced costs using Amazon S3 storage over Amazon EBS volumes. “Amazon EMR is orders of magnitude cheaper for the large dataset we have,” says Johnson. “What that means is that we have more predictability around our cost as our company and our dataset expands.” Using Amazon EMR, scaling clusters is now significantly more straightforward than its self-managed solution, which saves many hours of Sprout Social engineers’ time. Also, the Sprout Social team estimates that it saved roughly 40 percent in total costs over its previous data storage solution. Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, scalable, high-performance block-storage service designed for Amazon Elastic Compute Cloud (Amazon EC2). Learn more » AWS Services Used With this self-managed batch processing solution nearing end of life, however, Sprout Social took the opportunity to investigate other solutions. The company had wrestled with long-standing pain points. Commonly, it had to scale its Apache Hadoop cluster multiple times per year. Doing so required a significant amount of guesswork and time from Sprout Social engineers. “There was this kind of low-grade babysitting that would reach a peak when we needed to scale,” says Matt Trumbell, director of engineering on the Listening team at Sprout Social. “We would try to always stay ahead of it but knowing when we needed to scale was kind of like reading the tea leaves.” Dan Johnson Principle Site Reliability Engineer 中文 (繁體) Bahasa Indonesia Because Amazon EMR is a managed service that works using Apache Hadoop, it was a natural fit for the needs of Sprout Social. As a result, the company had an almost-seamless migration to Amazon EMR. 
Solution | Reducing Costs and Improving Operations

Looking for a data solution that would scale with ease, the Sprout Social team tested Amazon EMR alongside Amazon S3 and EMRFS in June 2021. Using these services, Sprout Social engineers found that they could chart a very clear, smooth path to a successful migration. The Amazon S3 throughput of Amazon EMR was not only keeping up with Sprout Social's use of Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure, resizable compute capacity in the cloud, and Amazon Elastic Block Store (Amazon EBS), easy-to-use, high-performance block storage at any scale, but surpassing it. "We were able to continue running our services without needing to reinvent the wheel, all while hitting the triangle of faster, cheaper, and more reliable," says Dan Johnson, principal site reliability engineer at Sprout Social.

Because Amazon EMR is a managed service that works using Apache Hadoop, it was a natural fit for the needs of Sprout Social. As a result, the company had an almost-seamless migration to Amazon EMR. In fact, the Sprout Social team could quickly import a snapshot it had taken of its existing Apache Hadoop cluster, and the service was up and running in a matter of hours. After migrating its first cluster in August 2021, Sprout Social completed the migration of two additional clusters by January 2022. The AWS team provided support for Sprout Social through the migration process, both with technical issues, like specific cluster-level settings to maximize performance, and cost-related issues, like testing without going over budget. "Because Amazon EMR is very easy to stand up, it was trivial for us to test this process a few times in advance," says Johnson. "We had full confidence going into it that we knew what the actual migration window would be and could communicate that with the rest of engineering and support."
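Standing up a test cluster like the ones described above takes one API call. A minimal sketch with boto3 (instance types, counts, release label, and bucket names are illustrative, not Sprout Social's configuration); because the cluster reads and writes Amazon S3 through EMRFS, compute can be resized or recreated without touching the data:

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    response = emr.run_job_flow(
        Name="example-batch-analytics",
        ReleaseLabel="emr-6.5.0",
        Applications=[{"Name": "Hadoop"}, {"Name": "HBase"}],
        Instances={
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge",
                 "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.2xlarge",
                 "InstanceCount": 4},
            ],
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        LogUri="s3://example-logs/emr/",        # hypothetical log bucket
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    print("Started cluster:", response["JobFlowId"])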
Outcome | Optimizing Data Storage to Focus on Overall Company Performance

Sprout Social saw the benefits of migrating to Amazon EMR almost immediately. The biggest benefit was reduced costs from using Amazon S3 storage over Amazon EBS volumes. "Amazon EMR is orders of magnitude cheaper for the large dataset we have," says Johnson. "What that means is that we have more predictability around our cost as our company and our dataset expands." Using Amazon EMR, scaling clusters is now significantly more straightforward than with its self-managed solution, which saves many hours of Sprout Social engineers' time. The Sprout Social team estimates that it saved roughly 40 percent in total costs over its previous data storage solution.

Sprout Social has also seen improvements of 30–50 percent in overall batch job performance, traditionally its biggest bottleneck given how much data must be processed in any given job. "Amazon EMR has been an absolute game changer because of our ability to scale compute independently from storage," Johnson says. "And we've seen less instability due to disk input/output and overall better and more predictable job run times on Amazon EMR, as opposed to our old traditional Apache Hadoop stack."

Going forward, Sprout Social is planning to further optimize its use of Amazon EMR. Specifically, the team wants to explore how it could reduce the size of its main cluster and start using ephemeral clusters to handle batch jobs on a more as-needed basis. By doing so, it hopes to reduce costs associated with operational overhead and provide new features to its customers that wouldn't have been possible before it migrated to Amazon EMR. "Tools like Amazon EMR are critical to our ability to invest our money wisely and in areas other than data storage," says Johnson. "AWS offerings make it possible for us to continue investing heavily in research and development and developing customer features as opposed to fighting a battle to keep costs under control."

Benefits of AWS
40% cost reduction from previous solution
30–50% improved batch job performance
Improved focus on core business objectives
Decreased time-consuming data storage scaling

About Sprout Social

Sprout Social is a B2B SaaS company that provides integrated social media management. It offers a solution that provides tools for brand monitoring and social customer care, content planning and publishing, and other capabilities.
" Spryker Case Study _ Amazon Elastic Compute Cloud _ AWS.txt,"Spryker Brings Composable Commerce to Global Businesses in Days Using AWS

Spryker provides a cloud-based platform that global businesses rely on to run their B2B, marketplace, and direct-to-consumer (D2C) commerce businesses. When Spryker noticed customers were running complex backends to support its software, it turned to AWS. Using AWS, it improved the customer experience and shortened customer onboarding from months to days. Spryker has also expanded its operations worldwide, including launching in APAC, and improved its ability to innovate.

Spryker owes its success to staying close to its customers and solving their problems. When it noticed customers were spending significant resources running complex backends to support Spryker software, it built a cloud-based solution on Amazon Web Services (AWS) to make them more efficient. The Spryker Cloud Commerce OS solution, built on AWS, is developed and run by Spryker. Previously, customers ran the software on custom backends that were time-consuming to maintain. "Many of our customers were managing a range of different technologies," says Volodymyr Lunov, senior director of cloud engineering at Spryker. "Our vision is that customers should focus on their core business to create sophisticated solutions, and not worry about the infrastructure. Instead, Spryker takes care of that."

Growing to Meet Doubling Customer Numbers Using Amazon EC2

The COVID-19 pandemic proved a catalyst to Spryker's already fast-growing business, as many businesses shifted to online sales channels when physical shops were forced to close. Using Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity, Spryker can scale compute and storage resources to accommodate its expanding number of customers, which doubled over the past year. Spryker can also adjust resources with a few clicks to accommodate seasonal traffic spikes that retailers experience on busy shopping days such as Black Friday. This means Spryker's customers can rely on speedy performance when their own customers need them most. "We use auto-scaling configurations to deliver solid reliability and performance," says Lunov. "And because AWS provides both on-demand and reserved instances, we pay only for what we use, so we can achieve the right balance of cost versus performance."
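Spryker's scaling configuration isn't published, but the pattern Lunov describes maps to an EC2 Auto Scaling target-tracking policy: the fleet grows when load rises on peak shopping days and shrinks afterward. A minimal sketch with a hypothetical Auto Scaling group name:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="eu-central-1")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="example-shop-fleet",   # hypothetical group
        PolicyName="keep-cpu-at-50-percent",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            # Instances are added when average CPU exceeds 50 percent and
            # removed automatically once the traffic spike subsides.
            "TargetValue": 50.0,
        },
    )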
Providing reliable service to customers regardless of where they are in the world is essential to many of Spryker's customers, which are global businesses operating in dozens of countries. Spryker reduces latency issues and improves the customer experience using AWS Regions and Availability Zones, which provide discrete data centers in 84 locations. AWS Availability Zones also make it easy for Spryker customers to comply with local data protection regulations that require data to remain within a geographic region, because customers can specify where they want their information to be hosted.

Customer Onboarding Shortened from Months to Days

Getting customers up and running on the cloud solution is straightforward. Onboarding takes from as few as 4 hours to 1–2 days, compared to months previously. Because many Spryker customers already use AWS, the process is further simplified. Quick onboarding is a competitive advantage for Spryker. It won a major customer, Aldi, when the grocer needed to rapidly ramp up online sales during the COVID-19 pandemic, and Spryker is commissioned to migrate Aldi's digital commerce solutions to its cloud solution on a global scale. Spryker has gained a competitive advantage by shortening customer onboarding and has launched across the globe in EMEA, North America, and APAC, including China. Spryker offers its services in any region where AWS is available, including mainland China; it used AWS support to help it solve the legal and technical complexities of developing a solution for China, so its customers can reach shoppers in that growing market.

Increasing Innovation and Collaborating to Meet Business Goals

Spryker now has a flexible infrastructure and the tools it needs to innovate. It spends less time on maintenance and uses AWS services, such as Amazon ECS for managing containerized workloads, instead of developing its own. This means its IT team can focus on innovation and adapting to meet customer needs. To speed development of core features, engineers can spin up development environments running on AWS as needed, to test out new ideas. "Being able to validate hypotheses by combining our core product with AWS results in a better experience for our customers," says Lunov. Spryker customers benefit from the use of AWS to increase their efficiency, enabling them to rapidly expand their operations around the world.

Throughout its journey from startup to global company, Spryker has appreciated a close relationship with AWS. "We value our collaboration with AWS," says Lunov. "It has helped us to find the right technology to deliver composable commerce solutions to some of the world's biggest brands."

Benefits of AWS
Shortens customer onboarding time from months to 1–2 days
Scales to meet rising demand as customers doubled in 1 year
Supports customers' global expansion
Reduces maintenance overheads so Spryker can focus on innovation

AWS Services Used

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. Amazon ECS is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications. Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud.

About Spryker

Spryker provides businesses of all sizes with cloud-native solutions for B2B and marketplace commerce. Founded in 2014 in Berlin, it has over 600 global employees, with offices in Germany, the Netherlands, Ukraine, the UK, and the US. Its customers include major brands such as Aldi, Hilti, Ricoh, Siemens, and Metro, and sales have doubled over the past 3 years.
" Staffordshire University Uses AWS Academy to Help Students Meet Business Demand for Cloud Skills _ Case Study _ AWS.txt,"Staffordshire University Uses AWS Academy to Help Students Meet Business Demand for Cloud Skills

Staffordshire University added cloud computing skills training to its curriculum using AWS Education Programs, helping 93 percent of participants find employment within 6 months of graduation.

Opportunity | Building In-Demand Cloud Skills with AWS Academy

As a public research university that helps students connect their studies to real-world needs, England's Staffordshire University was ready to expand its curriculum to include cloud computing skills. IT-related roles make up 13 percent of all job vacancies in the United Kingdom, and businesses want to hire candidates with digital skills—especially cloud computing, which employers identified as the top skill they look for in job candidates. Amazon Web Services (AWS) Education Programs collaborate with education institutions and the public sector to provide access for individuals to develop cloud computing and digital skills. Founded in 1914, Staffordshire University serves over 15,000 students across three schools and four campuses. Maintaining a focus on solving wide-reaching challenges, the university reports that 78 percent of its research is world-leading or of international importance, according to the Research Excellence Framework 2014. So it was only natural that Staffordshire University's School of Digital, Technologies, and Arts became one of the first educational institutions in the United Kingdom to offer cloud computing skills training using AWS Academy, which empowers higher education institutions to prepare students for industry-recognized certifications and careers in the cloud.
To help graduates boost their employability, Staffordshire University worked with the AWS team to introduce cloud computing skills training and add cloud courses to its credit-bearing computer science modules. Since the university added AWS Academy courses to its curriculum in 2017, several hundred students have participated. Of those students, 93 percent have achieved employment within 6 months of graduation.

In late 2020, Staffordshire University participated in the AWS Educate University Challenge, an interuniversity competition where students learned essential cloud computing skills while competing to earn points, badges, and prizes for their universities. Many students from across the United Kingdom and Ireland participated—including three students from Staffordshire University, who placed in the top 10 by the end of the challenge. "The performance raised the profile of our students among potential employers," says Dr. Carolin Bauer, senior lecturer at the School of Digital, Technologies, and Arts at Staffordshire University. "Many companies have been in touch regarding placements for our students and graduates as well as other projects. It's been a great success."

Solution | Learning by Doing Using AWS Learner Labs

The United Kingdom has experienced a technology boom in recent years, with technology funding tripling in the first 6 months of 2021 compared to the same period in 2020. In particular, employers need professionals with cloud computing skills ranging from cloud development to machine learning and data analytics. To meet demand, Staffordshire University offers students their choice of six AWS courses covering these key skills and more. Facilitated by two AWS educators using a ready-to-teach curriculum and resources provided by AWS Academy, the program can easily scale up as interest grows. Students enjoy a hands-on approach to their studies and get the chance to use AWS services. "With AWS Academy, our students love that they're not just taking theory lessons," says Dr. Justin Champion, senior lecturer at the School of Digital, Technologies, and Arts at Staffordshire University. "They get to work in actual environments with real AWS tools."

Along with adding AWS Academy courses to its curriculum, Staffordshire University also became one of the first adopters of AWS Academy Learner Labs, hands-on lab environments where educators can bring their own assignments and invite their students to gain experience with the AWS Cloud. "AWS Academy Learner Labs let students make mistakes and still learn about cloud computing along the way, and that's invaluable," says Dr. Champion.

Because learning with AWS Academy takes place in the cloud, Staffordshire University can offer remote learning regardless of students' computer hardware. Learners enjoy the flexibility of practicing cloud computing skills from anywhere. They can also prepare to earn AWS Certifications, which validate technical skills and cloud expertise. As a result, Staffordshire University students get the opportunity to prepare for the workforce and boost their employability long before they graduate.
Outcome | Developing New Cloud Coursework

Next up, Staffordshire University is expanding on the success of its cloud courses by launching additional programs of study developed in collaboration with the AWS team. Staffordshire University and the AWS team designed these programs by "Working Backwards" — an Amazon process that encourages companies to brainstorm solutions by using a customer challenge as the starting point — from the cloud skills employers are currently seeking in the United Kingdom and across the global labor market. One of these programs, which launches in September 2022, is a cloud computing course that features both cloud computing and cybersecurity modules and will offer students more opportunities to discover what's possible with the AWS Cloud. "What we want to encourage is for students to play with AWS services as well as build confidence with the tools," says Dr. Champion.

Benefits of AWS
93% of graduates find jobs within 6 months
Top 10 placement during the AWS Educate University Challenge
6 AWS courses covering cloud skills
Empowered students to learn remotely using any hardware and earn AWS Certifications

AWS Education Programs Used

AWS Academy empowers higher education institutions to prepare students for industry-recognized certifications and careers in the cloud. AWS Academy Learner Labs are long-running hands-on lab environments where educators can bring their own assignments and invite their students to get experience using select AWS services. With AWS Educate, you can build your cloud skills at your own pace, on your own time, and completely for free. AWS Certifications validate technical skills and cloud expertise to grow your career and business.

About Staffordshire University

Staffordshire University is a public research university in Staffordshire, England. Founded in 1914, the university serves over 15,000 students across three schools and four campuses.
" Stanford Multimodal Data Case Study _ Life Sciences _ AWS.txt,"DDRCC at Stanford University Uses AWS for Research in Precision Medicine Leveraging Multimodal Data

The Deep Data Research Computing Center (DDRCC) at Stanford University, one of the many initiatives originating out of Stanford Snyder Labs, is part of the Department of Genetics at Stanford Medicine in Palo Alto, California. Its goal is to create tools that bridge the gap between biology and computer science, and help researchers in precision medicine deliver tangible medical solutions. On AWS, the DDRCC team designed its MyPHD and SDO solutions to import, query, and analyze large medical databases securely, at high speeds, and at a low cost.
"Each of our tools have unique needs, especially as they move outside of the research environment and are deployed for clinical use," says Dr. Philip Tsao, associate chief of staff for precision medicine for the VA Palo Alto Health Care System and professor of medicine at Stanford University. "To design scalable and secure medical applications, it is critical to form cross-functional teams of experts and facilitate effective collaboration."

Designing Solutions for Precision Medicine Research Using Multimodal Data

Precision medicine research relies on an individualized understanding of multimodal data (like genomic, microbiomic, and proteomic data) so that clinicians and researchers can personalize therapy for patients. The large amount of data derived from wearable sensors, electronic medical records, and molecular profiles adds another dimension. This increased scale and complexity raises new challenges around data availability, acquisition, storage, integration, and analysis. Therefore, it is imperative for researchers to have an agile and elastic data strategy. "Deep data is the future of medicine. We need it for monitoring health and for diagnostics, prognostics, and treatments, all at a personal level," says Dr. Michael Snyder, chair and professor of genetics at Stanford University.

To facilitate precision medicine research, DDRCC created the My Personal Health Dashboard (MyPHD), a secure, scalable, and interoperable health management system for consumers. MyPHD provides efficient data acquisition, storage, and near-real-time analysis capabilities for researchers using Amazon Web Services (AWS). The team also developed the Stanford Data Ocean (SDO), the first serverless precision medicine educational solution for researchers to educate, innovate, and collaborate over code and data. By building on AWS, DDRCC is using the elasticity, scalability, and security of the cloud to benefit both consumers and biologists and improve the field of precision medicine.

Precision medicine depends on integrating disparate, multimodal datasets to draw inferences. Typically, these datasets are large and siloed across disparate sources. For researchers, it is important to determine the right compute and storage configurations that are needed to apply complex computational algorithms to these large datasets.
Building Innovative Solutions on AWS for Multimodal Data Analysis

The DDRCC team developed SDO to help researchers efficiently allocate resources to experiment with code. Using SDO, researchers can explore important questions around precision medicine and scale innovative solutions. By running SDO workloads on AWS, DDRCC has achieved high scalability while meeting stringent security requirements.

To improve biologists' ability to complete vital health research, DDRCC uses Amazon SageMaker and Service Workbench on AWS. Using SageMaker, bioinformaticians can build, train, and deploy machine learning models for virtually any use case with fully managed infrastructure, tools, and workflows. The team uses Service Workbench on AWS to facilitate the secure, repeatable, and federated control of access to data, tooling, and compute power that researchers need. Researchers can securely access large datasets on Amazon Simple Storage Service (Amazon S3), an object storage service with industry-leading scalability, data availability, security, and performance.

DDRCC requires high scalability to process data from MyPHD and SDO and relies on Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure, resizable compute capacity in the cloud. "Not only can we scale MyPHD and support different numbers of users, but we can also scale our algorithms based on the number of workloads," says Dr. Arash Alavi, research and development lead of the DDRCC at Stanford University. To run preprocessing pipelines for large-scale genomics and transcriptomics applications, the team also uses Amazon Genomics CLI, an open-source tool for genomics and life science customers, and AWS Batch, a service for fully managed batch processing at virtually any scale. Amazon Genomics CLI simplifies and automates cloud infrastructure deployments, while AWS Batch makes it simple to run hundreds of thousands of batch computing jobs on AWS.

Security is a major requirement for applications that handle medical data. DDRCC's solutions do not use, store, or process protected health information, and all data in transit and at rest is completely encrypted and anonymized. To maintain a high level of security, DDRCC has adopted AWS services like Amazon Cognito, a service that lets teams add user sign-up, sign-in, and access control to web and mobile apps. "The security features that AWS provides include out-of-the-box logging, auditing, and monitoring, which we use to protect our data," says Dr. Amir Bahmani, director of the DDRCC at Stanford University.

DDRCC also uses Amazon Athena, an interactive query service, to facilitate the analysis of data stored in Amazon S3 using standard SQL. Because this service is highly elastic, researchers can query data collected by SDO and MyPHD on demand and move more quickly in their projects. Additionally, Athena is serverless, so there is no infrastructure for DDRCC to manage. The team pays for only the queries it runs, reducing costs. "The ability to scale resources dynamically based on the size of the workload—this pay-as-you-go model—is astonishing," says Bahmani.
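The pay-per-query pattern described above takes only a couple of API calls. A minimal sketch in Python with boto3; the database, table, and result bucket are hypothetical stand-ins for DDRCC's actual datasets:

    import time
    import boto3

    athena = boto3.client("athena", region_name="us-west-2")

    # Run standard SQL directly against data in Amazon S3; there is no
    # cluster to manage, and cost scales with the data each query scans.
    query = athena.start_query_execution(
        QueryString=(
            "SELECT participant_id, AVG(heart_rate) AS avg_hr "
            "FROM wearables.readings GROUP BY participant_id"
        ),
        QueryExecutionContext={"Database": "wearables"},
        ResultConfiguration={"OutputLocation": "s3://example-results/athena/"},
    )
    qid = query["QueryExecutionId"]

    # Poll until the query completes, then fetch the first page of results.
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
        print(len(rows) - 1, "participants returned")  # first row is the header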
DDRCC's MyPHD provides a secure, comprehensive environment for biomedical data analytics at a massive scale. It can store, organize, and process complex health datasets and support near-real-time data analysis and visualization at the individual and cohort levels. This is designed to refine the accuracy of diagnoses and medical prescriptions, and improve precision medicine. To support the large-scale analysis of participants' data for individual health management, DDRCC can scale resources for MyPHD based on the number of workloads. It also uses AWS security services as the foundation for its medical applications, which deal with large volumes of highly sensitive personal data.

Collaborating on Precision Medicine

The support from AWS was incredibly valuable to DDRCC, and the team plans to continue using AWS services to design innovative and creative solutions for precision medicine on the cloud. "You can be anywhere in the world, and still access these large medical datasets," says Bahmani. "We've achieved this by running our infrastructure on AWS."

About Stanford Deep Data Research Computing Center

Stanford Deep Data Research Computing Center is in the Department of Genetics at Stanford Medicine in Palo Alto, California. The team works on the design and development of systematic and intelligent solutions for large-scale biomedical applications.

Benefits of AWS
Achieves scalability of MyPHD for virtually any number of users
Improves elasticity of the SDO for educational use
Improves adaptability for collaborative research
Improves security of precision medicine solutions
Reduces costs with the pay-as-you-use model
" Sterling Auxiliaries Case Study _ Amazon Web Services.txt,"Sterling Auxiliaries Resolves SAP Downtime and Boosts Productivity to Fuel Business Expansion on AWS

SAP customers can fully realize all the benefits of SAP S/4HANA in the AWS Cloud for systems of all sizes. Sterling Auxiliaries is an international manufacturer of surfactants and industrial chemicals based in India. To meet its go-live timeline and improve system performance, the company upgraded its on-premises SAP R/3 system to SAP S/4HANA on AWS. Sterling Auxiliaries worked with partner Inteliwaves to implement SAP S/4HANA on AWS, automating backups and saving time with improved system performance. The company is now saving time and human resources formerly dedicated to backing up SAP data manually on premises.

Opportunity | Seeking Fast Deployment with No Downtime

The company has been using SAP software since 2006, with SAP as the foundation for operations at its headquarters and its main factory in the state of Gujarat. The business began migrating from SAP R/3 to SAP S/4HANA at the start of 2022 with the help of Inteliwaves Technologies, but encountered critical issues with its on-premises data center vendors. Infrastructure and provisioning were delayed, threatening to disrupt Sterling Auxiliaries' project timeline.
The business needed to go live with SAP S/4HANA by the beginning of April, the start of the new financial year. Furthermore, with the older SAP R/3 setup on premises, the business experienced one or two days of downtime each month, and regular connectivity challenges slowed down or prevented employees from carrying out their work. For example, when there was a power outage in Mumbai—where Sterling Auxiliaries' SAP R/3 servers were located—factory workers in Gujarat couldn't access the system, leading to delays in dispatching materials. Backups were also a cumbersome, time-consuming process on premises.

Solution | Improving Productivity with Automated Processes

On the recommendation of AWS Partner Inteliwaves, Sterling Auxiliaries deployed SAP S/4HANA on AWS. Within two weeks, Inteliwaves helped migrate Sterling Auxiliaries' SAP S/4HANA development, quality, and production environments from its data center servers to AWS, allowing the company to go live by the start of the new financial year. Vishal Shah, director at Sterling Auxiliaries, says, "The infrastructure setup and onboarding to SAP S/4HANA on AWS was a smooth process, and we've had a great experience with Inteliwaves. Virtual servers in the production and development environments were available when we needed them, which allowed us to meet our deadline."

The company is using Amazon Elastic Block Store (Amazon EBS) snapshots to automate daily backups and the SAP-certified AWS Backint Agent to back up and restore SAP HANA workloads running on Amazon Elastic Compute Cloud (Amazon EC2) instances. Two staff members, each of whom spent 4–5 hours daily on backups (roughly 10 hours of manual effort per day), have been redeployed to the infrastructure team because backups are now automated on AWS. Anil Chavan, accounts head at Sterling Auxiliaries, says, "Our headaches due to the need to constantly monitor SAP are gone, which is a big relief."
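The EBS snapshot side of that backup automation can be sketched in a few lines; run on a daily schedule, it replaces the manual routine the two staff members used to perform. The volume tag is hypothetical, and the AWS Backint Agent for the SAP HANA database itself is configured separately:

    import boto3
    from datetime import datetime, timezone

    ec2 = boto3.client("ec2", region_name="ap-south-1")

    # Find the EBS volumes backing the SAP servers (hypothetical tag).
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Workload", "Values": ["sap-s4hana"]}]
    )["Volumes"]

    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    for vol in volumes:
        ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"Automated SAP backup {stamp}",
            TagSpecifications=[{
                "ResourceType": "snapshot",
                "Tags": [{"Key": "Workload", "Value": "sap-s4hana"}],
            }],
        )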
Outcome | Saving Time while Boosting Employee Satisfaction

Since launching SAP S/4HANA on AWS, Sterling Auxiliaries reports significant time savings, faster performance, and improved employee satisfaction with a highly available SAP environment. "We're saving a lot of time now that we don't need to wait around during server lags," says Chavan. "We can accomplish the work quicker and use our time for other activities such as SAP system audits and application planning." The company has improved productivity and employee satisfaction with seamless system performance and has achieved 100 percent uptime since migration. Sterling Auxiliaries has also eliminated server downtime and delays due to connectivity issues; improved connectivity has driven a 25–30 percent rise in productivity.

Performance improvements, plus the new implementation of the SAP DMS module on AWS, have also facilitated document transfer among teams. "Overall, our factory and back-office employees are much happier with SAP S/4HANA on AWS. They can retrieve documents, save data, and generate reports faster," Chavan adds. When asked about the company's plans, Shah says, "The success of this project has prompted us to evaluate cloud-based solutions for other legacy systems in 2023. We're also planning to implement SAP S/4HANA on AWS for other business divisions that have been running SAP on premises." Sterling Auxiliaries' export business is growing annually, so having a low-latency SAP backbone will be key as the business expands.

Benefits of AWS
Migrated SAP environments to AWS in 2 weeks
25–30% rise in productivity
100% uptime since migration
About 10 hours of daily manual backup work eliminated
Eliminated hardware upgrades and data center maintenance costs
Deployed resources in minutes versus weeks
Receives real-time responses to issues

AWS Services Used

AWS Backint Agent for SAP HANA is an SAP-certified backup and restore solution for SAP HANA workloads running on Amazon EC2 instances. AWS Backint Agent backs up your SAP HANA database to Amazon S3 and restores it using SAP management tools, such as SAP HANA Cockpit, SAP HANA Studio, or SQL commands. Amazon Elastic Block Store (Amazon EBS) is an easy-to-use, scalable, high-performance block-storage service designed for Amazon EC2.

About Sterling Auxiliaries

Sterling Auxiliaries Pvt. Ltd. launched in 1984 in India as a manufacturer of surfactants and other industrial chemicals. Headquartered in Mumbai, the company specializes in the production of industrial surfactants and chemicals and exports to customers in 65 countries. The business began exporting in 2000, and to date international customers account for 60 percent of total sales; domestic and international sales are increasing yearly.
" Storengy Case Study.txt,"Storengy Moves HPC to AWS, Runs Geoscientific Simulations 2.5 Times Faster

Storengy, a subsidiary of the ENGIE Group, is a leading supplier of natural gas. The company offers gas storage, geothermal solutions, carbon-free energy production, and storage technologies to enterprises worldwide. To ensure its products are properly stored, Storengy uses high-tech simulators to evaluate underground gas storage, a process that requires extensive use of high-performance computing (HPC) workloads. The company also uses HPC technology to run natural gas discovery and exploration jobs.
Working with UCit to Deploy HPC on AWS

For many years, Storengy ran its HPC workloads in an on-premises IT environment, but it struggled to manage an increase in jobs. "Our HPC environment was not designed to scale easily. We had to do larger simulations in a very short time as our business grew, and we lacked the ability to support the gas exploration workloads," says Jean-Frederic Thebault, engineer at Storengy. Storengy also sought to accelerate the deployment of HPC clusters for its engineers. "It typically took weeks or sometimes months to provision server clusters for a new project," says Thebault. "We wanted our engineers spending their time on research, not provisioning."

Storengy addressed its limitations by choosing to move its HPC environment to Amazon Web Services (AWS). "We knew the cloud would give us the scalability and flexibility we were looking for, and AWS offers more services than any other cloud provider we evaluated," says Thebault. The company collaborated with AWS Partner UCit to implement the UCit Cloud Cluster Made Easy (CCME) solution, which enables Storengy researchers to quickly build customizable HPC clusters and create multiple cluster profiles that match workload type to the number of compute resources. CCME runs on AWS ParallelCluster and Amazon Elastic Compute Cloud (Amazon EC2) instances, and it stores HPC data in Amazon Simple Storage Service (Amazon S3) buckets. "We evaluated each HPC workload and collaborated with Storengy engineers to determine which workloads were right for AWS," says Philippe Bricard, chief executive officer and founder of UCit. "We also used one of our internal cost optimization tools to help Storengy budget for the cost of running workloads on AWS."

Because Storengy now pays for HPC workloads as a service rather than per month, the company expects to save thousands of dollars each month. In addition, to reduce costs, Storengy plans to use Amazon FSx for Lustre as a managed service for HPC workloads, replacing its previous BeeGFS parallel file system; once it begins taking advantage of Amazon FSx for Lustre, Storengy will spend considerably less than it pays for BeeGFS. "Using Amazon FSx for Lustre, we will have more flexibility in terms of cost and performance, depending on the application requirements," says Thebault.
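Replacing a self-managed parallel file system with Amazon FSx for Lustre can start with a single API call that links the new file system to an existing S3 bucket of simulation data. A minimal sketch; the subnet, bucket, and sizing below are hypothetical, not Storengy's configuration:

    import boto3

    fsx = boto3.client("fsx", region_name="eu-west-1")

    # Create a scratch Lustre file system that lazily loads objects from S3
    # on first access and can export results back to the bucket.
    fs = fsx.create_file_system(
        FileSystemType="LUSTRE",
        StorageCapacity=1200,  # GiB; the minimum size for this type
        SubnetIds=["subnet-0123456789abcdef0"],  # hypothetical subnet
        LustreConfiguration={
            "DeploymentType": "SCRATCH_2",  # low-cost option for short-lived runs
            "ImportPath": "s3://example-simulation-data",
            "ExportPath": "s3://example-simulation-data/results",
        },
    )
    print("File system:", fs["FileSystem"]["FileSystemId"])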
Running Simulations 2.5 Times Faster

The company has increased its HPC cluster performance by 2.5 times since moving to AWS. With faster performance, Storengy can more quickly validate experiments before moving them into production. "We always use the latest AWS CPU to run our HPC clusters, which ensures we always have the best performance," says Thebault. "This is a major improvement over our on-premises environment, and it helps us perform geoscientific studies faster than before so we can more quickly determine the location of underground natural gas."

Accelerating Time-to-Market for Geoscientific Studies

By leveraging CCME, Storengy engineers can use a simple portal for submitting jobs, and they can run complex simulations using MATLAB and other scientific applications. As a result, Storengy researchers can deploy new HPC environments faster than before. "Using the CCME tool on AWS, we can deploy HPC resources in 30 minutes, compared to the weeks or months it would take to procure servers and provision compute in our on-premises environment," says Thebault. "That means we can speed time-to-market for our scientific studies."

Scaling on Demand to Meet Business Growth

Storengy can now scale its HPC clusters on demand, making it simpler and faster to explore the company's 10 trillion cubic meters of natural gas underground. "Whenever we want to initiate a new gas exploration project, we can add the capacity we need to support it without limitations," says Thebault. "Because of AWS, we have the scalability and high availability to perform hundreds of simulations at a time. Additionally, the CCME solution scales automatically up or down to support our peak workload periods, which means we don't have any surprises with our HPC environment." Overall, the AWS solution gives Storengy engineers the flexibility to spend more time on research. "Using AWS, we can give our researchers a new way of working, and this is only the beginning. We are not limited by our technology tools anymore—the tools have adapted to our research, which frees us to focus entirely on innovation," Thebault concludes.

Benefits of AWS
Runs simulations 2.5 times faster than before
Deploys HPC environments in 30 minutes instead of weeks or months
Scales on demand to meet business growth
Expects to save thousands of dollars monthly

AWS Services Used

AWS ParallelCluster is an AWS-supported open-source cluster management tool that makes it easy for you to deploy and manage HPC clusters on AWS. Amazon Elastic Compute Cloud (Amazon EC2) provides secure, resizable compute capacity in the cloud. Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

About Storengy

Storengy, a subsidiary of ENGIE, is a global leader in underground natural gas storage. The company owns 21 natural gas storage sites and offers innovative products to customers across the globe.
" Streamline and Standardize the Complete ML Lifecycle Using Amazon SageMaker with Thomson Reuters _ Thomson Reuters Case Study _ AWS.txt,"Streamline and Standardize the Complete ML Lifecycle Using Amazon SageMaker with Thomson Reuters

Learn how Thomson Reuters (TR) streamlined ML development using its Enterprise AI Platform powered by Amazon SageMaker.

"Using AWS services like Amazon SageMaker, we can create our own customized solutions while tapping into core ML functionalities."
Maria Apazoglou, Vice President of AI/ML and Business Intelligence Platforms, Thomson Reuters

TR built its Enterprise AI Platform on Amazon Web Services (AWS) to provide its ML practitioners with a simple-to-use, secure, and compliant environment that is embedded with services that address the complete ML lifecycle. This solution is based on Amazon SageMaker, a service that makes it simple to build, train, and deploy ML models for various use cases. Now, TR can deliver advanced AI services to end users at a faster pace.

Opportunity | Using Amazon SageMaker to Streamline Collaboration and Accelerate Innovation

With roots dating back to 1851, TR formed when Thomson Corporation acquired Reuters Group. In addition to its global news service, TR provides its customers with products that include highly specialized software and tools for legal, tax, accounting, and compliance professionals. TR first incorporated AI in the 1990s to streamline and automate manual processes for its customers, and it later established TR Labs to embed AI/ML into its products. A series of significant acquisitions accompanied TR's organic AI growth. To improve collaboration, trust, and transparency in ML development, TR chose to unify AI use across its business units and acquired data science teams. When TR Labs used AWS services to develop a promising experimentation solution, TR chose to extend this effort and build an enterprise-wide solution on top of it. "Using AWS services like Amazon SageMaker, we can create our own customized solutions while tapping into core ML functionalities," says Maria Apazoglou, vice president of AI/ML and business intelligence platforms at TR. TR architected and built its Enterprise AI Platform with support from the Amazon Machine Learning Solutions Lab (Amazon ML Solutions Lab), which pairs teams with ML experts to help identify and build ML solutions, and the Data Lab Resident Architect (RA) program.

For experimentation and training, TR needed secure access to data in the cloud to accelerate the development of AI solutions. Using the Enterprise AI Platform, it can quickly spin up ML workspaces based on AWS CloudFormation infrastructure, which speeds up cloud provisioning with infrastructure as code. These workspaces can handle heavy computational workloads and provide access to tools such as Amazon SageMaker Notebooks, which offer fully managed notebooks for exploring data and building ML models. By incorporating purpose-built ML tools into data scientists' workflows, TR can efficiently run experiments, work on advanced ML projects, and deal with large volumes of data. For example, it analyzed over two million audio files to identify common customer complaints and helped an 11-person team securely and efficiently collaborate on a document-analysis project. "We've now streamlined the process for how we create and set up ML resources," says Dave Hendricksen, senior architect at TR. "In the past, creating an account would take 2–3 months. Now, we can provision one in 2 or 3 days."
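Workspace provisioning of the kind described above is typically one CloudFormation call against a reusable template. A minimal sketch; the stack name, template URL, and parameters are hypothetical, since TR's actual templates are not public:

    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    # Spin up a per-team ML workspace from a shared template.
    cfn.create_stack(
        StackName="ml-workspace-team-alpha",
        TemplateURL="https://example-templates.s3.amazonaws.com/ml-workspace.yaml",
        Parameters=[
            {"ParameterKey": "TeamName", "ParameterValue": "team-alpha"},
            {"ParameterKey": "NotebookInstanceType", "ParameterValue": "ml.t3.medium"},
        ],
        Capabilities=["CAPABILITY_NAMED_IAM"],  # the template creates IAM roles
    )

Turning account setup into a parameterized stack is what can compress provisioning from months of manual work to the 2–3 days Hendricksen describes.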
Streamline and Standardize the Complete ML Lifecycle Using Amazon SageMaker with Thomson Reuters _ Thomson Reuters Case Study _ AWS.txt

Streamline and Standardize the Complete ML Lifecycle Using Amazon SageMaker with Thomson Reuters (2023)
Customer Stories / Media & Entertainment

Learn how Thomson Reuters streamlined ML development using its Enterprise AI Platform powered by Amazon SageMaker.

About Thomson Reuters
Thomson Reuters is a leading provider of business information services. Its products include highly specialized software and tools for legal, tax, accounting, and compliance professionals as well as its global news service, Reuters.

Opportunity | Using Amazon SageMaker to Streamline Collaboration and Accelerate Innovation
Thomson Reuters (TR) is on a mission to facilitate innovative projects through the increased use of machine learning (ML) and artificial intelligence (AI). AI and ML technologies are at the core of the content-driven technology company's solutions, but development processes varied across TR's business units and data science teams. To facilitate cross-team collaboration and speed up the development of creative solutions, TR set out to build an agile environment that standardizes AI/ML workflows.

TR formed when Thomson Corporation acquired Reuters Group. In addition to its global news service, TR provides its customers with products that include highly specialized software and tools for legal, tax, accounting, and compliance professionals. With roots dating back to 1851, TR first incorporated AI in the 1990s to streamline and automate manual processes for its customers. It later established TR Labs to embed AI/ML into its products. "Over time, we have seen an increase in the use of AI both within our products and within our company for deriving better insights from our data," says Maria Apazoglou, vice president of AI/ML and business intelligence platforms at TR.

A series of significant acquisitions accompanied TR's organic AI growth. To improve collaboration, trust, and transparency in ML development, it chose to unify AI use across its business units and acquired data science teams. To create a customized Enterprise AI Platform, TR needed to accommodate a variety of AI use cases, solutions, and AI practitioner personas. It also needed to consider scalability, flexibility, governance, and security throughout the ML lifecycle, from model training and deployment to monitoring and explainability. When TR Labs used AWS services to develop a promising experimentation solution, TR chose to extend this effort and build an enterprise-wide solution on top of it. "Using AWS services like Amazon SageMaker, we can create our own customized solutions while tapping into core ML functionalities," says Apazoglou. TR architected and built its Enterprise AI Platform with support from the Amazon Machine Learning Solutions Lab (Amazon ML Solutions Lab), which pairs teams with ML experts to help identify and build ML solutions, and the Data Lab Resident Architect (RA) program.

Solution | Scaling the Enterprise AI Platform across TR Using Amazon SageMaker
TR built its Enterprise AI Platform on Amazon Web Services (AWS) to provide its ML practitioners with a simple-to-use, secure, and compliant environment embedded with services that address the complete ML lifecycle. The solution is based on Amazon SageMaker, a service that makes it simple to build, train, and deploy ML models for various use cases. Now, TR can deliver advanced AI services to end users at a faster pace.

When ML models are ready for deployment, TR uses multiple services based on whether a model is deployed in TR's products or is destined for internal use. "To deploy models that are going into our products, our product engineering team often uses Amazon SageMaker endpoints," says Apazoglou. "For teams that are creating AI for internal consumption, we have developed a deployment service that codes Amazon SageMaker bots to run inferences for the models on a periodic schedule." To monitor its ML models for drift or potential bias and to provide explainability of generated insights, TR uses Amazon SageMaker Model Monitor, a service that keeps ML models accurate over time. It also relies on Amazon SageMaker Clarify, which detects bias in ML data and explains model predictions. By extending these solutions, the company can schedule and evaluate AI models' performance according to predefined metrics and receive notifications whenever bias or drift is detected.
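As a generic illustration of the endpoint-based deployment pattern described above (not TR's actual automation), the following boto3 sketch hosts an already-trained model behind a real-time SageMaker endpoint. The image URI, model artifact path, role ARN, and resource names are invented placeholders.

# Hypothetical sketch: host a trained model on a real-time SageMaker endpoint.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Register the container image and model artifact as a SageMaker model.
sm.create_model(
    ModelName="demo-model",
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo:latest",
        "ModelDataUrl": "s3://demo-bucket/model/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/DemoSageMakerRole",
)

# Describe the fleet that will serve predictions.
sm.create_endpoint_config(
    EndpointConfigName="demo-endpoint-config",
    ProductionVariants=[{
        "VariantName": "primary",
        "ModelName": "demo-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# Create the endpoint; applications then call InvokeEndpoint for inference.
sm.create_endpoint(
    EndpointName="demo-endpoint",
    EndpointConfigName="demo-endpoint-config",
)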
For experimentation and training, TR needed secure access to data in the cloud to accelerate the development of AI solutions. Using the Enterprise AI Platform, it can quickly spin up ML workspaces based on AWS CloudFormation infrastructure, which speeds up cloud provisioning with infrastructure as code. These workspaces can handle heavy computational workloads and provide access to tools such as Amazon SageMaker Notebooks, which offer fully managed notebooks for exploring data and building ML models. By incorporating purpose-built ML tools into data scientists' workflows, TR can efficiently run experiments, work on advanced ML projects, and handle large volumes of data. For example, it analyzed over two million audio files to identify common customer complaints and helped an 11-person team securely and efficiently collaborate on a document-analysis project. "We've now streamlined the process for how we create and set up ML resources," says Dave Hendricksen, senior architect at TR. "In the past, creating an account would take 2–3 months. Now, we can provision one in 2 or 3 days."
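The workspace provisioning just described relies on infrastructure as code. A minimal sketch of that pattern follows, assuming a hypothetical CloudFormation template stored in S3; it illustrates the general mechanism, not TR's actual stack.

# Hypothetical sketch: provision an ML workspace stack from a CloudFormation
# template. Stack name, template URL, and parameters are placeholders.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="ml-workspace-team-a",
    TemplateURL="https://demo-bucket.s3.amazonaws.com/templates/ml-workspace.yaml",
    Parameters=[
        {"ParameterKey": "TeamName", "ParameterValue": "team-a"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],  # the template creates IAM roles
)

# Block until the workspace is ready before handing it to data scientists.
cfn.get_waiter("stack_create_complete").wait(StackName="ml-workspace-team-a")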
Finally, the Enterprise AI Platform's Model Registry provides a central repository for all TR AI/ML models. This component is partly based on Amazon SageMaker Model Registry, which companies use to catalog models for production, manage model versions, and associate metadata, such as training metrics, with a model. Using this service, the company makes ML models that are developed across multiple AWS accounts and owned by different business units available to view and potentially reuse, making it simple for teams to collaborate. TR also gains transparency and orchestration of model workflows as well as a centralized view of models' metadata and health metrics.

Outcome | Improving Trust and Transparency throughout the ML Lifecycle
With the Enterprise AI Platform, TR has improved governance and reduced the time to market of complete AI solutions built across business units in a secure environment. Using AWS services, the company has effectively solved the challenge of adhering to standards regarding ethics, monitoring, explainability, and risk assessment across a range of AI use cases while streamlining collaboration. Now, TR's data scientists and stakeholders have access to a centralized environment where they can collectively view and manage metadata and health metrics.

On AWS, TR can better meet its ML model governance standards and empower its data scientists to build innovative, secure, and powerful AI services for end users. The company uses the solution at scale across its entire enterprise and has seen widespread adoption across its data science teams; more than 150 AI professionals are using the solution. Using the Enterprise AI Platform, TR has effectively unified its multiaccount, multipersona ML landscape. In the future, it will continue to build out the solution using Amazon SageMaker and will explore ways to run its more than 100 legacy ML models on the solution. "We have definitely increased the transparency and improved the governance of our ML models on AWS," says Apazoglou. "TR operates on trust, so these capabilities are really fundamental."

Benefits of AWS
Frees time for data scientists to focus on ML model building
Shortens access to AWS AI services from months to days
Accelerates innovation
Facilitates large-scale collaborative projects
Embeds governance standards in the ML lifecycle

AWS Services Used
Amazon SageMaker is built on Amazon's two decades of experience developing real-world ML applications, including product recommendations, personalization, intelligent shopping, robotics, and voice-assisted devices.
With Amazon SageMaker Model Monitor, you can select the data you would like to monitor and analyze without the need to write any code.
With the SageMaker Model Registry, you can catalog models for production, manage model versions, associate metadata, such as training metrics, with a model, manage the approval status of a model, deploy models to production, and automate model deployment with CI/CD.
Amazon SageMaker Clarify provides machine learning (ML) developers with purpose-built tools to gain greater insights into their ML training data and models.
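To make the Model Registry component described in the solution above concrete, here is a minimal boto3 sketch of cataloging one model version. Group and package names, the container image, and artifact paths are invented for illustration and are not TR's resources.

# Hypothetical sketch: catalog a trained model version in the SageMaker
# Model Registry so other teams can discover, review, and reuse it.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# A package group collects all versions of one logical model.
sm.create_model_package_group(
    ModelPackageGroupName="document-classifier",
    ModelPackageGroupDescription="Versions of the document classification model",
)

# Register one version; the approval status can gate deployment via CI/CD.
sm.create_model_package(
    ModelPackageGroupName="document-classifier",
    ModelPackageDescription="v1 trained on a January snapshot",
    ModelApprovalStatus="PendingManualApproval",
    InferenceSpecification={
        "Containers": [{
            "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/classifier:1",
            "ModelDataUrl": "s3://demo-bucket/classifier/model.tar.gz",
        }],
        "SupportedContentTypes": ["application/json"],
        "SupportedResponseMIMETypes": ["application/json"],
        "SupportedRealtimeInferenceInstanceTypes": ["ml.m5.large"],
        "SupportedTransformInstanceTypes": ["ml.m5.large"],
    },
)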
Streamline Workflows Using the AWS Support App in Slack with Okta _ Okta Case Study _ AWS.txt

Streamline Workflows Using the AWS Support App in Slack with Okta (2023)
Customer Stories / Software & Internet

Learn how Okta is empowering engineers to get the most out of AWS resources using the AWS Support App in Slack.

About Okta
Okta creates products that use identity information to grant people access to applications on multiple devices at any time, while still enforcing strong security protections. Since 2009, the company has used AWS services to develop its software solutions and protect customer infrastructure. Okta uses AWS Enterprise Support, which provides concierge-like service for companies with business or mission-critical workloads on AWS, focused on helping them achieve business outcomes and find success in the cloud.

Opportunity | Using the AWS Support App in Slack to Resolve Questions Faster for Okta
For Okta, the ability to innovate quickly and maintain high levels of security is the key to its success. The identity and access management company creates cloud-based identity platform software, built on Amazon Web Services (AWS), that helps companies protect access to their assets and technologies. Many of its business units rely on AWS Support, which offers expert guidance and assistance, to aid in the development of new features and resolve mission-critical issues. However, it could take a week or longer for engineers to open a support ticket and receive a response. Engineers could engage with the AWS Support team only through the AWS Management Console, a web application where businesses can access everything they need to manage their AWS resources, and Okta maintains strict controls on which employees can access the console.

Previously, Okta managed all of its support requests through the AWS Management Console. For security purposes, only a few of Okta's team members were authorized to access the console, so the process of requesting support was lengthy and cumbersome. "The only way for an engineer to open an AWS Support ticket was to ask someone with access to our production accounts to log a ticket for them," says Calvin Austin, senior director of site reliability engineering at Okta. "It was very slow and very onerous." It would take 1 week to open a new ticket, and resolving a query could involve weeks of inefficient back-and-forth communication. If a team member filed a support case, only that person had access to that case for security purposes unless they granted access to others. Managing these controls added work for Okta engineers, and the company knew that it needed a faster and more efficient workflow. After engaging their AWS Technical Account Managers for advice, Okta's Workforce Identity and Customer Identity business units chose to adopt the AWS Support App in Slack.

Solution | Opening Tickets for AWS Support in Minutes Instead of Weeks
To streamline its internal workflows, Okta adopted the AWS Support App in Slack, an application that makes it simple to create, update, search for, and resolve support cases in Slack channels. Now, the company can create and manage AWS Support cases at a faster pace, collaborate on tickets, and even request live support directly from Slack, empowering its engineers to get the most out of AWS Support resources.

The Workforce Identity unit at Okta, which develops the company's core identity and access management software, was a beta tester for the AWS Support App in Slack. It took an afternoon for the team to install Slack and implement the application. Twenty-four engineers on the Workforce Identity team use the application to ask questions about AWS documentation and receive support for development questions. Following this successful implementation, Okta's Customer Identity team, which protects customers' infrastructure, adopted the application. "It took maybe an hour for us to set up the AWS Support App in Slack," says Jarret Peterson, manager of site reliability engineering at Okta. "There wasn't too much on our side that we had to do. The AWS Support team took care of most of the implementation." Four of the managers on the Customer Identity team rely on the AWS Support App in Slack to quickly resolve critical, time-sensitive questions.
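The Slack app itself is configured from the AWS console, but the same case workflow is also exposed programmatically through the AWS Support API. As a hedged illustration of opening a case in code (the subject, body, and instance ID are invented; the API requires a Business or Enterprise Support plan and is served from us-east-1):

# Hypothetical sketch: open an AWS Support case programmatically.
import boto3

support = boto3.client("support", region_name="us-east-1")

case = support.create_case(
    subject="EC2 instance fails health checks after resize",  # invented example
    serviceCode="amazon-elastic-compute-cloud-linux",
    severityCode="high",
    categoryCode="general-guidance",
    communicationBody=(
        "Instance i-0123456789abcdef0 fails status checks after changing "
        "instance type. Steps to reproduce: ..."
    ),
    issueType="technical",
)
print("Created case:", case["caseId"])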
Outcome | Empowering Okta Engineers to Get the Most Out of AWS Support
Using the application, Okta's engineers can open AWS Support tickets in Slack in a few minutes instead of 1 week and resolve their queries at a much faster pace. "Before, it would take at least 1 or 2 weeks to get an answer because access to the AWS Management Console was required to create support cases. Now, it's gone down to 1 day because any engineer can create a case," says Austin. "That's a huge, huge time saving." Multiple engineers can collaborate on the same ticket, giving teams full visibility into AWS Support requests. Because the Workforce Identity team can quickly receive answers about AWS documentation and services, its engineers can focus their time on developing new features, resulting in a faster speed of innovation.

Using the AWS Support App in Slack, Okta can obtain AWS Support with fewer steps and fewer people involved. Additionally, more people on Okta's engineering team can request live support on demand without having to sign in to the AWS Management Console. By streamlining these internal workflows and expanding engineers' access to AWS Support, Okta can resolve issues at a much faster pace. This speed is crucial for the Customer Identity team, which often submits time-sensitive requests. "During a critical event, literally every minute counts. Before, we would have had to arrange a phone call or reach out to our AWS Technical Account Manager," says Peterson. "On the AWS Support App in Slack, the ability to create the support ticket and initiate contact with a live person right away directly from our Slack channel is invaluable for us."

The AWS Support App in Slack is now a key communication and collaboration tool for Okta, and the company plans to implement the application in more of its business units in the future. "The AWS Support App in Slack accelerates the support process drastically," says Peterson. "The response times have been good, and the information has been valuable." Using the AWS Support App in Slack, Okta's engineers are empowered to ask questions and find answers to critical issues and development roadblocks. The company has accelerated its speed of innovation and improved efficiency and productivity across the Customer Identity and Workforce Identity teams.

Benefits of AWS
1 week to minutes to open support tickets
Several weeks to 1 day to resolve issues
Requests live support sessions on demand
Accelerates innovation

AWS Services Used
AWS Support App in Slack enables you and your team members to manage cases, collaborate, and chat with AWS support agents directly from your Slack channel.
AWS Support provides a mix of tools and technology, people, and programs designed to proactively help you optimize performance, lower costs, and innovate faster.
AWS Management Console provides everything you need to access and manage the AWS Cloud, all in one web interface.
SUPINFO Creates 5-Year Master of Engineering Degree Implementing AWS Education Programs _ Case Study _ AWS.txt

SUPINFO Creates 5-Year Master of Engineering Degree Implementing AWS Education Programs (2022)
Customer Stories / Education

SUPINFO International University increased employability for its students and gave them hands-on cloud experience by implementing AWS Academy courses into its master of engineering curriculum.

About SUPINFO
Based in France, SUPINFO International University (SUPINFO) is a private higher education institution with a specialty in computer science. Founded in 1965, it is a member of IONIS Education Group, which serves more than 30,000 students worldwide.

Opportunity | Helping Future Engineers Develop Cloud Computing Skills
Cloud computing is an integral part of SUPINFO's master of engineering degree program, and for good reason. Driven by digital transformation across various industries, the United States' $35 billion cloud computing market is expected to grow at a compound annual growth rate of 15 percent between 2020 and 2028. Additionally, the French government announced a €1.8 billion support plan for the nation's cloud computing sector to keep the country competitive on a global scale. For SUPINFO, cloud knowledge is a critical part of a higher education curriculum, and the institution understands the potential of a cloud computing market that has grown and expanded at a steady pace. To equip its engineering students with in-demand skills for careers in the cloud, SUPINFO turned to Amazon Web Services (AWS).

While creating the master of engineering program, SUPINFO wanted to tap into the potential of the cloud and equip its students with skills that are in high demand among potential employers. Seeking a hands-on solution, it engaged AWS Education Programs and implemented several AWS Academy courses into its master of engineering curriculum. Additionally, SUPINFO provided opportunities for students to earn AWS Certifications, which validate technical skills and cloud expertise. To facilitate the launch of the degree program, the AWS team assigned a technical program manager to train SUPINFO educators quickly, helping them prepare to teach students by the program's start date.

Solution | Implementing AWS Education Programs for Students
Working with AWS Education Programs, which prepare diverse learners for in-demand, entry-level cloud roles around the world, SUPINFO implemented multiple courses from AWS Academy into its 5-year master of engineering curriculum. AWS Academy provides higher education institutions with a free, ready-to-teach cloud computing curriculum that prepares students to pursue industry-recognized certifications and in-demand cloud jobs. SUPINFO's master of engineering degree program officially launched in 2020, with approximately 200 students enrolled per year. During the program, students take several courses through AWS Academy. Second-year students take AWS Academy Cloud Foundations, an introductory course intended for students who seek an overall understanding of cloud computing concepts. In their fourth year, students take AWS Academy Cloud Architecting, an intermediate-level course that covers the fundamentals of building IT infrastructure on AWS.
As part of this program, students also gain real-world work experience because SUPINFO requires students to participate in internships with employer partners. "After their first year, students will begin their internships by working with our employer partners for 3 days a week, then another 2 days at school," says Paul-Antoine Kempf, an educator at SUPINFO. "This approach is central to the education at SUPINFO. By giving students opportunities in development-related jobs, it trains them and helps develop confidence with the latest cloud platforms and tools."

To provide practical, hands-on experiences for students, SUPINFO uses tools from AWS Academy Learner Labs. These lab environments provide opportunities for educators to bring their own assignments and invite their students to get experience using select AWS services, supporting remote learning in the cloud regardless of students' home computing setup. "Being able to manipulate and experiment with tools on AWS is the most constructive approach to learning," says Kempf. "All the classes have AWS Academy Learner Labs built in, and it is the reason why the program has been so successful."

In addition to teaching valuable cloud skills through these mandatory courses, SUPINFO's program provides students with the option to earn industry-recognized AWS Certifications. For example, AWS Academy Cloud Architecting teaches students the skills that they need to pursue AWS Certified Solutions Architect–Associate. This AWS Certification validates the ability to design and implement distributed systems on AWS, with a focus on cost- and performance-optimized solutions that demonstrate a strong understanding of the AWS Well-Architected Framework. Students have also earned AWS Certified Cloud Practitioner, which validates cloud fluency and foundational AWS knowledge; this credential helps organizations identify and develop talent with critical knowledge related to implementing cloud initiatives.

Outcome | Continuing to Build Cloud Skills for the Future
SUPINFO students have significantly improved their employability and are better prepared for the cloud workforce by participating in the master of engineering program. By earning AWS Certifications, students can demonstrate their cloud expertise to future employers. They can also apply the skills that they learned through their internships and AWS Academy Learner Labs activities to their future roles, streamlining their transitions from school to the workforce. "SUPINFO's master of engineering curriculum offers a comprehensive approach to cloud education," says Lounes Behloul, a student at SUPINFO. "Aside from connecting students to potential employers and helping us gain work experience, I appreciate the hands-on nature of the activities. The ability to take the tools that I learn at school then immediately apply them to my job in the future is invaluable."
By implementing AWS Education Programs into its master of engineering curriculum, SUPINFO is preparing future cloud talent with the skills they need to succeed in the growing industry. The school currently has six AWS-accredited educators on its board and plans to upskill more instructors as the program expands. SUPINFO will also increase specializations within the curriculum, especially for fourth- and fifth-year students, and will expand its adoption of AWS Education Programs to complement these specializations, further demonstrating its commitment to building a highly skilled, well-trained cloud workforce of the future.

Benefits of AWS
Increased employability by preparing students for industry-recognized AWS Certifications
Hands-on experience with AWS services
In-demand cloud skills for careers in the cloud
Remote learning in the cloud regardless of students' home computing setup

AWS Services Used
AWS Academy: Empowering higher education institutions to prepare students for industry-recognized certifications and careers in the cloud.
AWS Certification: Validate technical skills and cloud expertise to grow your career and business.
SURF Drives Ground-Breaking Research Accelerates Time to Insight Using AWS.txt

SURF Drives Ground-Breaking Research and Accelerates Time to Insight Using AWS (2022)

About SURF
SURF is the National Research and Education Network (NREN) in the Netherlands, a collaborative organization for IT in Dutch education and research. It is one of the most active and innovative NRENs in GÉANT, the pan-European data network for the research and education community. Headquartered in Utrecht, SURF facilitates collaboration on projects ranging from biological science to earth observation. It is a membership organization comprising more than 100 institutions, including research universities, universities of applied sciences, secondary vocational educational institutions, and university medical centers. SURF has 350 employees, 113 connected institutions, and 1 million users.

SURF Facilitates Collaboration on Projects Ranging from Biological Science to Earth Observation
SURF has a publicly funded mission to bring the latest IT capabilities to education and research communities. In late 2020, SURF called for proposals to support research projects using Amazon Web Services (AWS) across the Netherlands, supporting each project with 160 hours of consultancy and €5,000 to spend on AWS cloud consumption. It has since used AWS for projects focused on motor neurone disease, machine learning, and geodata for ecological insights. Bringing research workloads to the cloud is shortening the journey from research to scientific discovery and making data more shareable and accessible.

Getting the Most Value from Data
Using AWS, SURF supports researchers by bringing the power of the cloud to their research and helping make data easier to replicate. SURF uses Terraform to deploy infrastructure as code and Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service, to run and scale Kubernetes applications in the cloud. Containers can be deployed in the cloud and on premises, so that data and research are far more portable. "Doing the best research today is not only about the work itself," says Robert Griffioen, program coordinator of the scalable data analytics team at SURF, "but also about how easily and securely data can be moved, shared, and reproduced."

Powering Cutting-Edge Research
SURF is supporting a number of ground-breaking research projects using AWS.

Project MinE Shifts DNA Sequencing Data Using AWS Fargate Spot and AWS Batch
Project MinE from University Medical Center (UMC) Utrecht is using the TOPMed genomics dataset in a project involving the movement of DNA sequencing data relating to amyotrophic lateral sclerosis (ALS), a form of motor neurone disease, from the US to Europe. The initial size of this dataset was 6 petabytes, and it could already be partially processed using AWS, reducing its size. The research team has combined the dataset with its own data to improve the accuracy of analysis. It uses AWS Fargate Spot, a purchase option for AWS Fargate that enables developers to launch tasks on spare capacity at a steep discount, and AWS Batch to run multiple computing tasks relating to the data.

Project Phenology Achieves Resolution of Time and Space Using Amazon EMR
The University of Twente phenology project looks at the impact of climate change on plants by using geodata such as the timing of the start of the spring season over many years. The challenge was to design an architecture that made it possible to scale the analysis in resolution of time or space, as well as use AWS to integrate satellite data. The research team deployed Amazon EMR, a managed cluster platform that simplifies running big data frameworks. The ability to scale analysis as required was achieved by using infrastructure as code, which makes it easy to configure new architecture and pay only for what is used.
Project Crunchbase Scrapes Data from 30,000 Companies Using AWS Lambda and Amazon SQS
Project Crunchbase involves scraping the text data of 30,000 start-ups to identify which are developing products or services to limit CO2 emissions. The research team deployed automated compute infrastructure, which adjusts compute resources as needed, to perform the data analysis. The previous setup consisted of an Amazon EC2 solution that ran on 60 servers. Now, the researchers use AWS Lambda, a serverless, event-driven compute service for running code, while Amazon Simple Queue Service (Amazon SQS) sequences workflows. The results are saved in Amazon Simple Storage Service (Amazon S3), which can retrieve any amount of data from anywhere. Using this infrastructure, the research team can scrape data from the websites in a controlled manner, improving monitoring, cutting costs, and making the tools available for future scraping projects.
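A minimal sketch of the queue-driven pattern just described, under invented queue and bucket names: a producer enqueues scrape targets in SQS, and a worker (whose logic could equally run inside a Lambda function triggered by the queue) stores results in S3.

# Hypothetical sketch of a queue-driven scraping pipeline.
import json
import boto3

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

queue_url = sqs.get_queue_url(QueueName="scrape-targets")["QueueUrl"]

# Producer: enqueue one scrape job per company website.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"company_id": 42, "url": "https://example.com"}),
)

# Worker: receive a job, scrape, persist the result, then acknowledge.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in messages.get("Messages", []):
    job = json.loads(msg["Body"])
    text = f"scraped text for {job['url']}"  # placeholder for real scraping
    s3.put_object(
        Bucket="scrape-results",
        Key=f"company-{job['company_id']}.txt",
        Body=text.encode("utf-8"),
    )
    # Delete the message so the job is not processed twice.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])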
Project AutoML Accelerates Experiments Using an Amazon Machine Image
Project AutoML is helping to tune machine learning algorithms in a data-driven way. The process of benchmarking machine learning models requires a complex orchestration of hundreds of compute tasks on a large infrastructure stack. In the AutoML project, AWS co-developed a more cost-effective deployment of the AutoML benchmark framework in the cloud, reducing benchmark runtime and cutting infrastructure costs. The research group was already using Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for workloads, but it wanted to look further into machine learning capabilities and cost-saving opportunities. It created an Amazon Machine Image (AMI), which helps experiments run faster. And, using Amazon EC2 Spot Instances, the research team has been able to access all the compute resources it needs while containing costs.

SURF has a long history of IT and data expertise, but using AWS presents a new learning curve for the organization. The SURF team regularly consults with AWS to find innovative solutions for particular use cases, tailored specifically to unique research needs. "Using AWS, and cloud generally, you need to keep on top of the art of the possible," says Griffioen. "New products and services are going live every week. We need to learn how to knit all these things together, so that researchers get the best from our services." SURF and AWS worked closely together to support these projects, which will continue through 2022, when SURF plans to publish another open call for proposals. With these initiatives, AWS is supporting SURF on its mission to bring cloud power to research communities, shortening the time from research to scientific discovery.

The combination of SURF and AWS has helped accelerate the development of services for research projects and opens new opportunities for researchers. "As datasets continue to grow, they become more expensive to store and move," says Griffioen. "Using AWS, we can mix services and find the best solutions for researchers to not only manage their data, but also store, stage, and share it—as well as analyze it in different ways."

Benefits of AWS
Brings optimal IT services to research projects
Speeds time to scientific discovery
Improves accessibility and analysis of research data
Gives researchers access to tailored solutions

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. You can trigger Lambda from over 200 AWS services and software as a service (SaaS) applications, and only pay for what you use.
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware, and empowers developers to focus on differentiating work.
Amazon EMR is a cloud big data platform for running large-scale distributed data processing jobs, interactive SQL queries, and machine learning (ML) applications using open-source analytics frameworks such as Apache Spark, Apache Hive, and Presto.
Syngenta Case Study _ Amazon Web Services.txt

Syngenta Improves Application Performance and Reduces Costs with SAP on AWS (2022)

About Syngenta
Syngenta, based in Basel, Switzerland, is a global, science-based agricultural technology company with a presence in over 90 countries. The company has more than 30,000 employees working to transform how crops are grown and protected, and it reported 2021 global sales of $16.7 billion. Syngenta innovates with world-class science to protect crops and improve seeds. Its two core businesses support farmers with technologies, knowledge, and services so they can sustainably provide the world with better food.

Migrating SAP to AWS for Scalability and High Availability
Around 70–80 percent of Syngenta's core business runs entirely on an SAP environment, using business-critical applications such as SAP ECC, SAP PO, SAP BW, SAP SLT, and SAP S/4HANA. For years, the company hosted these business-critical SAP applications in a traditional data center, incurring high costs and technological constraints around hardware capacity, server sizes, network bandwidth, and hosting next-generation applications. Hardware was refreshed once every four years, leading to technology debt.

The Syngenta IT team decided that moving its SAP applications to the public cloud was the right solution for the company's challenges. After an initial assessment period and discussions with different cloud providers, the organization chose Amazon Web Services (AWS) as its cloud provider because AWS offered the most flexibility and the right technical features.

The migration kicked off in July 2020 and included more than 45 SAP applications, over 2,500 interfaces, and a new implementation of the SAP S/4HANA Central Finance application. After migrating to AWS, the Syngenta SAP environment now has over 450 virtual machines running on Amazon Elastic Compute Cloud (Amazon EC2) instances, with over 600 TB of data stored in Amazon Elastic Block Store (Amazon EBS). For this migration, Syngenta collaborated with AWS Partner DXC Technology for technical migration assistance and Infosys for application testing support.

Syngenta adopted a Multi-Availability Zone SAP on AWS High Availability Setup, which comprises SAP load balancers and Elastic Load Balancing to automatically distribute incoming application traffic across multiple targets to improve scalability. Additionally, Syngenta adopted Amazon CloudWatch to monitor application and platform performance and optimize compute resource usage. By adopting AWS best practices, Syngenta also followed the principle of having one application per server, which significantly increases application availability and minimizes business downtime.
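As a simplified illustration of the CloudWatch monitoring just described (the alarm name, threshold, instance ID, and notification topic are invented, not Syngenta's configuration), an alarm on an SAP application server's CPU might be set up like this:

# Hypothetical sketch: alarm on sustained high CPU for an SAP application
# server instance and notify an SNS topic. All identifiers are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-central-1")

cloudwatch.put_metric_alarm(
    AlarmName="sap-app-server-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                      # evaluate 5-minute averages
    EvaluationPeriods=3,             # sustained for 15 minutes
    Threshold=85.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-central-1:123456789012:sap-ops-alerts"],
    AlarmDescription="Sustained high CPU on an SAP application server",
)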
Improving SAP Performance by up to 20%
By migrating to AWS, Syngenta has improved its SAP application performance by up to 20 percent. As a result of this performance improvement, end user productivity also increased. Now, Syngenta can scale its SAP environment based on demand, which helps the company better support its yearly business growth. Furthermore, as a highly seasonal business, Syngenta can upscale or downscale compute capacity on demand, with no limitation. "Scaling our SAP applications seamlessly on AWS not only helps us meet rapid growth but also helps us manage seasonal demand. This was not feasible in our previous on-premises environment," says Sohil Laad, SAP Operations & Technology Lead at Syngenta.

Reducing Operating Expenses by 28%
The SAP on AWS migration was a success for the Syngenta Global IT department. Aside from improvements in system availability and performance, operations costs have gone down by 28 percent since the migration, and Syngenta will now be able to proactively forecast cost savings. By gaining scalability, flexibility, and cost savings on AWS, Syngenta can allocate more resources toward innovation. The company has started focusing on platform modernization by leveraging new technologies like Auto Scaling and AWS Backint Agent. Additionally, Syngenta is exploring the adoption of AWS Launch Wizard for SAP to easily provision and configure SAP S/4HANA on AWS. By implementing AWS Launch Wizard and other AWS services, Syngenta will continue to focus on innovation and modernizing its SAP landscape.

Benefits of AWS
Improves average response time by up to 20%
Optimizes costs with an overall reduction of 28% in SAP TCO
Scales infrastructure capacity to support high seasonal business demand
Reduces business downtime for key maintenance activities
Eliminates the dependency on hardware refresh
Improves the user experience
Becomes future ready to support Syngenta's SAP S/4HANA roadmap

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.
Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets and virtual appliances in one or more Availability Zones (AZs).
Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), IT managers, and product owners.
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

To learn more, visit aws.amazon.com/sap/.
Taggle Systems Case Study _ Amazon Web Services.txt

Taggle IoT Platform Tracks Thousands of Smart Water Sensors to Help Utilities Cut Costs (2022)

About Taggle Systems
Taggle is Australia's leading supplier of smart water solutions for local and regional councils and water utilities. The company provides a complete smart water solution that's open, interoperable, and scalable. Taggle has more than 270,000 meters and sensors deployed across Australia.

More than 50 councils and water utilities across Australia rely on Taggle smart water solutions to gather data from Internet of Things (IoT) sensors and meters. These provide insights on leak detection, demand management, network optimization, customer engagement, and billing. Taggle's meters and sensors read over 2 billion data points annually, accumulating data on water flow for metering, water levels for floodplains, water catchment and wastewater, water pressure for network and pipeline management, and rainfall. Taggle's network delivers more than 5 million readings to councils and water utilities daily. Although Taggle considered several IoT technologies to support its platform, Amazon Web Services (AWS) best met its business requirements for scalability. "We chose AWS because it offered the technology stack and production environments to meet our needs now and into the future," says Geoff Bowker, cloud solutions director at Taggle.

As Taggle grew, it needed an IT environment that could scale easily to support high volumes of IoT data as well as analytical and visualization applications. "We're looking to add about 80,000 more sensors in the next 12–18 months, with each one reporting data hourly at a minimum, and in some cases every 15 minutes where there are alarming conditions such as rapidly rising flood water or sewer blockage," says Bowker. "While our load is generally predictable, we do experience sudden spikes which can lead to rapid increases in IoT platform demand at critical times." Processing this data and meeting service level agreements for its customers is why Taggle required a platform capable of scaling responsively. Taggle also sought a technology solution to support its growing ecosystem of third-party devices and radio networks that help deliver data to asset management, emergency management, and supervisory control and data acquisition (SCADA) applications.

Streaming and Ingesting IoT Data on an AWS-Based Platform
The Taggle IoT platform runs on AWS, using Amazon Kinesis Data Streams to ingest and store streaming data in real time from sensors and meters in the field. The platform also uses AWS Lambda functions to process ingested sensor and meter data for consumption through the company's visualization and analytics packages, or for export to external analytic or management systems. Taggle relies on Amazon Relational Database Service (Amazon RDS) to store live data, and Amazon Simple Storage Service (Amazon S3) to store archived data for querying. The Taggle solution database currently holds over one billion rows of data, all encrypted using AWS security components to meet customers' stringent data privacy requirements. Additionally, the Taggle engineering team runs its development and test environments on AWS.
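To make the ingestion path concrete, here is a minimal sketch of writing one sensor reading to a Kinesis data stream; the stream name and record schema are invented for illustration and are not Taggle's actual formats.

# Hypothetical sketch: publish one water-meter reading to a Kinesis
# data stream.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="ap-southeast-2")

reading = {
    "sensor_id": "meter-000123",
    "metric": "water_flow_litres",
    "value": 14.2,
    "timestamp": "2022-06-01T10:15:00Z",
}

# Partitioning by sensor ID keeps each sensor's readings ordered
# within a shard; Lambda consumers can then process them in sequence.
kinesis.put_record(
    StreamName="sensor-readings",
    Data=json.dumps(reading).encode("utf-8"),
    PartitionKey=reading["sensor_id"],
)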
Scaling to Reliably Ingest Data from 80,000 New Sensors
By running its IoT platform on AWS, Taggle can scale on demand to support high volumes of IoT data as the company grows. "Using AWS, we know we can scale as necessary to accommodate the 80,000 additional sensors we're rolling out this year," says Bowker. "We're confident we can continue our fast pace of growth with AWS." Taggle is also taking advantage of the high availability and reliability of AWS services to meet its customers' requirements for data continuity. "Our IoT platform has a range of redundancy features built into it. So, if we lose transmission from a tag or have an extended outage, we can restore data continuity quickly," Bowker says. "This is critical in helping our customers avoid data loss. It also ensures they can identify water leaks or loss within their network, as that can only happen with continuity of data to read."

Simplifying Integration and Accelerating Time to Market
One of Taggle's challenges has been finding ways to integrate with other systems cost-effectively and quickly; for example, there are thousands of billing system vendors Taggle would need to work with if it expands to the US. With AWS, Taggle can integrate seamlessly with third-party devices, applications, and radio networks. "We provide an end-to-end IoT solution, and AWS helps us support our own proprietary radio network to collect data from devices, as well as third-party devices and networks," says Bowker. The company's developers also rely on AWS to reduce development time, decreasing time to market by 15 percent for new features and solution enhancements. For example, Taggle recently developed a range of new tag types that integrate with the IoT platform. "AWS has helped us optimize performance and throughput for the tags on our existing system as our sales volumes have increased," says Bowker.

Helping Councils and Water Utilities Cut Costs
With Taggle's AWS-based IoT platform, councils and water utilities across Australia are reducing their operating costs. "Our solution helps customers defer some of their capital expenses by saving water through identifying leaks, which is money they're losing," Bowker says. According to Taggle, industry benchmarks indicate that non-revenue water, water that has been produced and is "lost" before it reaches the customer, can make up to 25 percent of water flows. "By reducing consumption on the consumer side of the network, our customers can defer capital expenditures on additional storage, water treatment, and distribution capacity." Taggle is looking to enhance its relationship with AWS by joining the AWS Partner Program. "We want to take advantage of the AWS Partner Network to leverage the AWS brand and scalability," says Bowker. "We already have a dominant market share in Australia but have more room for growth in the smart water space, both locally and internationally. Partnering with AWS will certainly make a difference."

Benefits of AWS
Scales to ingest data from 80,000 new sensors across Australia in 2022
Helps utilities and councils cut costs
Reduces time to market for new features by 15%
Integrates seamlessly with third-party devices, applications, and networks

AWS Services Used
Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store data streams at any scale.
AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.
Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.

To learn more, visit aws.amazon.com/iot/.
Takeda Accelerates Digital Transformation by Migrating to AWS _ Takeda Case Study _ AWS.txt

Takeda Accelerates Digital Transformation by Migrating to AWS (2023)
Customer Stories / Life Sciences

Learn how Takeda, a 240-year-old company, uses AWS to increase operational agility, reduce technical debt, and modernize its business.

About Takeda
Takeda is a global, values-based, research and development–driven biopharmaceutical company headquartered in Japan. It strives to discover and deliver life-transforming treatments, guided by its commitment to patients, people, and the planet.

Opportunity | Using AWS Services to Modernize the Digital Landscape for Takeda
Founded in 1781, Takeda Pharmaceutical Company Limited (Takeda) is committed to discovering and delivering life-transforming treatment options. With a deep focus on patients, trust, and reputation, it aims to become the world's most trusted digital global biopharmaceutical company. When Takeda's aging technology landscape hindered its pace of innovation, the company knew that it was time to embark on a cloud transformation journey.

With a mission to improve health and create a brighter future for the world, Takeda wanted to respond to patients' needs with greater speed and agility and to be at the intersection of human health, technology, and business growth. But, having grown through acquisitions over the years, it needed to deal with the weight of the past. Despite a significant application rationalization initiative, Takeda still had thousands of business applications and significant technology debt, and its IT infrastructure needed to be modernized. "Our Data, Digital, and Technology team's energy was spent mostly on maintaining the old, not building the new," says Ryan Pehrson, head of DevOps and cloud enablement at Takeda. "We could not support or build the latest technology in our data centers. Though we had and have great technology professionals on staff, we neither had the skills nor the funding to keep up with the leading edge of innovation."

From a data perspective, Takeda did not have a centralized catalog, pipeline, or data lake. As a result, its teams were purchasing commercial datasets repeatedly without making them accessible internally, and there was no mechanism to share the data it produced with its partners. "There was no single source of truth," says Pehrson. "This was a difficult situation for the Data, Digital, and Technology team, and we needed to disrupt ourselves."

Takeda turned to Amazon Web Services (AWS) for Project Fuji, an initiative to empower self-service, on-demand access to cloud technologies across the organization. With this project, it aimed to migrate 80 percent of its business applications in core data centers to AWS and other software-as-a-service solutions, and to rationalize its technology estate. Takeda chose AWS because of its wide range of cloud offerings and high adoption among life science companies. The team also felt that AWS had a strong compliance and security posture with its shared responsibility model, further validated by certifications and third-party audited artifacts. Additionally, Takeda appreciated the contributions from AWS to the open-science community and global alliances to progress scientific innovations. "We believed we could reset expectations of what the Data, Digital, and Technology team can do and become the trusted innovation partner that our business always wanted," says Pehrson. "We needed a catalyst and more capabilities than we had within Takeda to get it done."
Solution | Accelerating Digital Transformation by Migrating over 600 Applications to AWS
In 2019, Takeda chose to migrate to AWS. It embarked on an intense 2-year journey toward cloud modernization to create an innovation engine that could drive better patient outcomes. After analyzing each of its applications, the company used an agile migration factory approach to shift what was necessary to the cloud and close 10 of its 13 data centers, and Project Fuji was born.

With the support of the AWS team and Takeda's technology partners, Takeda followed a rinse-and-repeat model to migrate its applications to AWS. In 8 months, the company migrated 80 percent of its applications to six AWS Regions, which are locations around the world where AWS clusters data centers. The 615 migrated applications amount to over 10 PB of data; the company also runs over 8,000 average daily virtual machines on AWS. By migrating to AWS, Takeda could retire 7 out of its 13 data centers, with 2 more to close soon, improving its operational agility. "We certainly wouldn't have access to the advanced technologies that we have today if we had stayed in our data centers," says Pehrson. "We wouldn't have any of the cloud-native innovations available, and we would still be stuck in the ongoing overhead and administration of all that technology." By getting out of its data centers, Takeda has also reduced its carbon footprint by 1,918.5 metric tons, improving its environmental impact. The business transformation resulting from modernized solutions and accelerated data services established an internal engine for innovation and equipped employees with new skills and ways of working.

Now, Takeda is better equipped to engage in powerful digital initiatives and respond to the world's challenges with agility. At the beginning of the COVID-19 pandemic, pharmaceutical companies came together to form the CoVIg-19 Plasma Alliance, which aimed to use immunoglobulin therapy to treat COVID-19 patients and help them recover faster. In 1 weekend, Takeda could spin up a secure, collaborative environment using AWS Control Tower, which is used to set up and govern a secure, multi-account AWS environment, and AWS Lake Formation, which creates secure data lakes, making data available for wide-ranging analytics. As a result, the alliance proceeded rapidly to a phase III clinical trial for hyperimmune therapy.
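As a simplified illustration of the governed data sharing that Lake Formation enables in a setup like the one above (the principal, database, and table names are invented and are not Takeda's configuration), granting a collaborating role read access to one table might look like this:

# Hypothetical sketch: grant a partner role SELECT access to one table
# in a Lake Formation-governed data lake. All names are placeholders.
import boto3

lf = boto3.client("lakeformation", region_name="us-east-1")

lf.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/PartnerAnalyst"
    },
    Resource={
        "Table": {
            "DatabaseName": "clinical_trials",
            "Name": "plasma_donations",
        }
    },
    Permissions=["SELECT"],  # read-only access for analytics
)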
In 8 months, the company migrated 80 percent of its applications to six AWS Regions, which are locations around the world where AWS clusters data centers. The 615 migrated applications amount to over 10 PB of data; the company also runs over 8,000 average daily virtual machines on AWS. By migrating to AWS, Takeda could retire 7 out of its 13 data centers, with 2 more to close soon, improving its operational agility. “We certainly wouldn’t have access to the advanced technologies that we have today if we had stayed in our data centers,” says Pehrson. “We wouldn’t have any of the cloud-native innovations available, and we would still be stuck in the ongoing overhead and administration of all that technology.” By getting out of its data centers, Takeda has also reduced its carbon footprint by 1,918.5 metric tons, improving its environmental impact.

Founded in 1781, Takeda Pharmaceutical Company Limited (Takeda) is a global, values-based, research and development–driven company committed to discovering and delivering life-transforming treatment options. With a deep focus on patients, trust, and reputation, it aims to become the world’s most trusted digital global biopharmaceutical company. When Takeda’s aging technology landscape hindered its pace of innovation, the company knew that it was time to embark on a cloud transformation journey.

1,918.5 metric tons of carbon removed"
Tally Solutions _ Amazon Web Services.txt,"Achieves 42% cost savings

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Working remotely with TallyPrime on AWS is simple for Tally’s customers, who only need a Tally license and a TallyPrime on AWS pack from Elcom to begin working through NICE DCV. “With TallyPrime powered by AWS, customers are onboarded by Elcom in 5–10 minutes. It’s a seamless process,” Joyce says. “There’s no need for training or excessive time spent in learning the solution.”

Using NICE DCV to Stream TallyPrime on AWS

Joyce Ray, Head of India Business, Tally Solutions

To learn more, visit aws.amazon.com/smart-business.

With TallyPrime powered by AWS, Tally is set to scale its platform seamlessly as user traffic increases. AWS offers the necessary scalability and reliability to ensure the best experience for Tally’s global customers. Tally and AWS collaborated to architect a scalable, reliable, and cost-effective solution that can serve the unique needs of the Indian SMB market. AWS solution architects and prototyping engineers worked with Tally engineers to design, prototype, build, and test innovative features for a seamless user experience. The AWS team helped Tally rapidly iterate by testing multiple solutions and selecting the best techno-commercial fit.

80% of small and medium businesses (SMBs) in India rely on Tally’s business management software to manage their accounting, inventory, taxation compliance, and overall finances.
Using NICE DCV, Tally provides its customers with anytime, anywhere access to TallyPrime regardless of location. “Elcom now has over 15,000 TallyPrime users empowered by AWS, thanks to NICE DCV enabling them to work from anywhere on any device at any time,” says Joyce. “Many of these customers are growing enterprises with multiple locations, and this greatly simplifies things for them. Whether there are travel restrictions or other interruptions, users have more flexibility now.”

Benefits

Onboards new users in 5–10 minutes

The company’s flagship business management software—TallyPrime—provides a modern experience and features for SMBs to run their businesses seamlessly. Joyce Ray, head of India Business at Tally, says, “Over the past 36 years, we’ve been able to simplify the lives of millions of entrepreneurs across India by providing everything SMBs need to run their businesses smoothly.”

About Tally Solutions Private Ltd

Tally Solutions, headquartered in India, is a technology company that delivers business software for small and medium businesses. Founded more than three decades ago in 1986, Tally Solutions caters to millions of users across a range of industries in more than 120 countries. In the last 36 years, Tally Solutions Private Ltd. has provided enterprise resource planning (ERP) software to more than 2 million businesses and over 7 million users across the globe.

AWS Key Management Service (AWS KMS) lets you create, manage, and control cryptographic keys across your applications and AWS services.

Tally enhanced security through AWS Key Management Service (AWS KMS) by creating and controlling cryptographic keys, and automated application scalability with AWS Auto Scaling.

Ensuring Secure Streaming while Reducing Costs

Gives users reliable remote access to the ERP application anytime, anywhere

Using NICE DCV, TallyPrime runs remotely on Amazon Elastic Compute Cloud (Amazon EC2) instances and streams the application to on-premises client machines. It leverages AWS Auto Scaling, which adjusts capacity based on demand. Application-level two-factor authentication based on state-of-the-art asymmetric cryptography adds an additional layer of mandatory authentication for every user accessing the system.

Tally appointed Elcom Digital as its national distributor for marketing and sales of TallyPrime through Tally Partners. Elcom implements NICE DCV, an AWS high-performance remote display protocol, to securely stream the hosted Tally application. “We chose NICE DCV because of flexibility and cost optimization, alongside the experience and support of AWS,” says Joyce.

Scaling the User Base

Rapyder Solutions also assisted in co-developing and testing the software, while AWS Enterprise Support ensured successful deployment and rapid on-demand support.

Tally has achieved cost optimization and affordability by migrating from a Windows to a Linux environment, resulting in approximately 42% cost savings. Furthermore, NICE DCV is offered as a complimentary service running on Amazon EC2 with no additional charges, allowing Tally to offer TallyPrime at a competitive price to its customers. With the onset of the pandemic in early 2020, businesses were forced to adapt quickly to new ways of working.

Tally Solutions Securely Streams Its ERP Solution with NICE DCV, Providing Remote Access Anytime, Anywhere

Securely streams software to thousands of customers

AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost.
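As an illustration of the two services just described—automatic capacity adjustment and customer-controlled cryptographic keys—here is a minimal boto3 sketch. It is an assumption-based example, not Tally's implementation: the Auto Scaling group name and KMS key alias are hypothetical.

    import boto3

    autoscaling = boto3.client("autoscaling")
    kms = boto3.client("kms")

    # Target-tracking policy: keep average CPU across the streaming
    # fleet near 60% so capacity follows user demand.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="tallyprime-dcv-fleet",   # hypothetical group name
        PolicyName="track-cpu-60",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 60.0,
        },
    )

    # Encrypt a per-session secret under a customer-managed KMS key
    # before persisting it, so key control stays in AWS KMS.
    ciphertext = kms.encrypt(
        KeyId="alias/tallyprime-sessions",             # hypothetical key alias
        Plaintext=b"session-token-for-one-user",
    )["CiphertextBlob"]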
To access TallyPrime remotely, the main system on which it was installed needed to be switched on and connected. However, with offices shut down during the pandemic, maintaining these systems and connections became more challenging. As a result, there was an increasing demand for anytime, anywhere access to TallyPrime, which was previously managed through remote access.

NICE DCV is a high-performance remote display protocol that provides customers with a secure way to deliver remote desktops and application streaming from any cloud or data center to any device, over varying network conditions.

Tally sought a cloud-based application streaming solution that would serve the growing demand for anytime, anywhere access. “We considered various remote display protocol solutions for high performance and opted for a multi-modal solution supported by AWS,” Joyce says.

Giving Users Remote Application Access from Anywhere

Tally is securely streaming its ERP software to thousands of customers by running on NICE DCV, which integrates with the company’s two-factor authentication process. NICE DCV provides custom security layers, which, alongside AWS KMS encryption, help enhance the security of TallyPrime. The solution uses Amazon Elastic Container Service (Amazon ECS) to run a containerized application environment, with each container associated with a user; a unique instance of the NICE DCV server, assigned on a per-user basis, handles the streaming end-user session setup and rendering."
Tangent Works Case Study.txt,"Elke Van Santvliet, Machine Learning Expert, Tangent Works

Many companies struggle to realize benefits from the information they hold about their operations and customers. The shortage of data scientists, who have the skills to analyze data to get useful insights, makes this problem even more difficult to solve.

Speeds up customer onboarding

In competitive markets, making good decisions based on insights derived from machine learning can be the difference between success and failure. Tangent Works brings these advanced analytics capabilities within the reach of every organization. “Using AWS, we help businesses realize the benefits of machine learning. And they don’t need a dedicated data science team to do it,” says Van Santvliet.

Tangent Works helps customers across a wide range of sectors use TIM to improve their operations. For example, retailers more accurately forecast consumer demand based on historical sales data combined with weather forecasts. Utility companies plan their maintenance schedules taking into account seasonal changes. Energy providers predict consumer usage to keep equipment running at peak efficiency. And financial services firms employ TIM’s anomaly detection to automate credit card fraud detection and rapidly build models in response to new threat types.

To manage customer workloads, Tangent Works uses Amazon Elastic Kubernetes Service (Amazon EKS), a fully managed container service for Kubernetes applications, and AWS Fargate, a serverless, pay-as-you-go compute engine that automates server management. It also uses Amazon RDS for PostgreSQL, which makes it easy for Tangent Works staff to set up, operate, and scale PostgreSQL deployments in the cloud.
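The paragraph above pairs Amazon EKS with AWS Fargate for customer workloads. As a rough sketch of how that pairing is wired together (not Tangent Works' actual configuration), the boto3 call below creates a Fargate profile so that pods in a given namespace run on Fargate instead of managed nodes; the cluster, role, subnet, and namespace names are all hypothetical.

    import boto3

    eks = boto3.client("eks")

    # Run all pods in the "modeling" namespace on Fargate, so each
    # customer model-build gets serverless capacity on demand.
    eks.create_fargate_profile(
        fargateProfileName="tim-modeling",                    # hypothetical name
        clusterName="tangent-prod",                           # hypothetical cluster
        podExecutionRoleArn="arn:aws:iam::123456789012:role/TimPodExecRole",
        subnets=["subnet-0abc1234", "subnet-0def5678"],       # private subnets
        selectors=[{"namespace": "modeling",
                    "labels": {"app": "tim-worker"}}],
    )

With a profile like this in place, scaling to variable customer demand becomes a scheduling concern rather than a server-management one, which matches the small-team constraint described below.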
As a young company with a small IT team, Tangent Works turned to Amazon Web Services (AWS) to provide an efficient way to launch its services, manage clients’ compute and storage needs, and support its rapid growth. Using AWS, staff are able to focus on product development rather than infrastructure maintenance, and Tangent Works can dynamically and cost-effectively scale resources to meet variable customer demand.

About Tangent Works

Tangent Works provides technology that automates the machine learning modeling process so companies can make better use of their data. Founded in 2014, Tangent Works has offices across Europe and in the US.

Creating AI Models in Seconds Instead of Weeks

AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS).

Powering IoT with Siemens Digital Industries Software

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service to run and scale Kubernetes applications in the cloud or on premises.

Cuts staff time spent on infrastructure management

The large quantities of data generated by Internet of Things (IoT) systems make this an ideal use case for Tangent Works technology. TIM provides specialized capabilities for businesses using IoT, including sensor monitoring and anomaly detection. The system is also capable of analyzing failures to improve its predictive abilities.

Belgium-based Tangent Works provides businesses with a fast, affordable way to derive value from their data. Its technology helps customers automate the machine learning modeling process so they can easily perform complex analysis to drive smarter decision-making.

Scaling to Support Rapid Growth

Machine Learning on AWS: Build with powerful services and platforms, and the broadest machine learning framework support anywhere.

Builds AI models in seconds, not weeks

Making Good Decisions Based on Machine Learning

Tangent Works partnered with Siemens Digital Industries Software to integrate TIM technology into Siemens’ MindSphere product, an industrial IoT-as-a-service solution. MindSphere customers now have a single dashboard through which they can analyze IoT data, and business users can develop their own data models. This gives them a better understanding of their operations and helps them make smarter decisions as a result. “Thanks to Tangent Works and the ability to use AI and machine learning to automate predictive analytics, even citizen data scientists can easily analyze data and get immediate insights at scale,” says Raymond Kok, senior vice president of cloud application solutions at Siemens Digital Industries Software. “This puts the power of IoT data in the hands of every user.”

The company’s Tangent Information Modeler (TIM) technology—which provides customers with bespoke artificial intelligence (AI) capabilities—delivers the accuracy of manual modeling in a fraction of the time. This means organizations save money both on the staff resources required to create the models and the compute resources needed to run them. “Using AWS, we can instantly scale compute capacity when clients build new models or apply existing models,” says Elke Van Santvliet, machine learning expert at Tangent Works.
“Our customers can create models in a few seconds or minutes—this would have taken weeks before. And it’s cost-effective because we pay only for the resources we use.” Because models are easy to update and run, customers can adjust them regularly to keep business insights fresh. For instance, retailers can amend the predicted performance and stock requirements of each shop every week, or manufacturers can instantly build a new model when a shop-floor process changes.

Helping Customers Innovate

The team at Tangent Works is now developing ways to add automated modeling functions to existing AWS machine learning and visualization tools. It is also refining the company’s time series anomaly detection technology so it can be applied to the entire lifecycle of machine learning development and operations. This means the system would be able to monitor itself and automatically improve its own modeling, providing customers with faster access to data models.

Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

The beauty of Tangent Works’ tools is that they’re easy to use. Not only do they put powerful machine learning technology into the hands of business users, but they also help data scientists improve their productivity by reducing repetitive modeling tasks.

Tangent Works Puts Machine Learning Modeling into Business Users’ Hands Using AWS

Tangent Works is a fast-growing firm that’s always adding customers. And because many of these businesses already use AWS, it simplifies onboarding and collaboration. “Getting our clients up and running quickly means we’re able to grow rapidly,” says Van Santvliet. The company can easily scale its resources to run these increased workloads. Offering its services directly to customers on AWS Marketplace has also supported growth. Tangent Works has shortened its sales cycles and won 5 enterprise customers in the last 2 months through the marketplace.

Tangent Works helps companies get more value from their time-series data by automating the machine learning modeling process. Its Tangent Information Modeler (TIM) technology makes artificial intelligence technology accessible and affordable to businesses that don’t have dedicated data science teams. Tangent Works used AWS to launch its services, manage customers’ compute and storage needs, and support its rapid growth.

Amazon RDS for PostgreSQL gives you access to the capabilities of the familiar PostgreSQL database engine. This means that the code, applications, and tools you already use today with your existing databases can be used with Amazon RDS."
TC Energy Builds an Operations Data Platform for 60000 Miles of Pipeline Using AWS Data Analytics _ TC Energy Case Study _ AWS.txt,"Uses data as an asset

Amazon Athena is a serverless, interactive analytics service built on open-source frameworks, supporting open-table and file formats.
AWS Glue is a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development.

TC Energy is seeking more than just a financial return for its efforts with O360. “Our number-one value is safety,” says Shane Taylor, business analyst for the US Natural Gas Pipelines business unit at TC Energy. By improving information access and flow, the company expects to improve safety. It also aims to bolster the sustainability of its operations. As the energy industry advances energy transition initiatives, TC Energy is using innovative solutions like O360 to overcome challenges.

TC Energy was founded in 1951, and since then, the conglomerate has come to own and operate established pipelines across North America through a series of acquisitions. Its pipes traverse thousands of miles, crossing plains, deserts, and mountains to provide vital energy to consumers across the continent. The sheer scale of its operations creates an urgent need for data insights. For example, when data silos prevented project planning teams from getting a unified overview of planned work, it was difficult to plan and implement projects efficiently. At the same time, TC Energy’s commitment to responsible stewardship challenged employees to minimize the disturbances that work crews can cause for local communities when it is necessary to take multiple trips to pipeline locations.

About TC Energy

TC Energy Corporation (TC Energy) is a team of 7,000+ energy problem solvers with 60,000 miles of pipeline across Mexico, the United States, and Canada. Its infrastructure transports 25 percent of all the natural gas consumed in North America.

Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

The company has a broad vision for O360. “We know that we have a multi-year journey with this data foundation on AWS,” says Taylor. “But this is a great first step.”

Learn how TC Energy in the energy industry is targeting 70 data sources using AWS Glue and Amazon Athena.

$1.6M annual estimated cost savings

Outcome | Achieving Holistic Business Success Using AWS Services

To break down its ambitious goals into more manageable targets, TC Energy identified 11 use cases for the initiative, which it calls Operations 360 (O360). The team followed an iterative approach to achieve results quickly. “By consolidating data on AWS and bundling pipeline work, we conservatively estimate that we’ll save $1.6 million annually,” says Irfan Ali, director of pipeline integrity data engineering at the US Natural Gas Pipelines business unit of TC Energy.

A unique aspect of the requirements for TC Energy’s operations data lake was the need for advanced geospatial processing. The team evaluated vendor offerings, but most were not yet compatible with the patterns and scalability of a data lake environment. Using Apache Sedona, the team could build a set of libraries for running geospatial transformations within the AWS Glue environment, including geo-hashing, linear referencing, and dynamic segmentation. “The end result is a unified source of data that can be consumed in a number of different ways,” says Derrick Bowen, principal consultant for Pariveda.
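The case study names the Sedona-on-Glue combination without showing it. A minimal sketch of a Glue PySpark job using Apache Sedona for the geo-hashing mentioned above might look like the following; it assumes the Sedona jars and Python package are attached to the job (they are not part of Glue by default), and the bucket, table, and column names are illustrative only.

    from awsglue.context import GlueContext
    from pyspark.context import SparkContext
    from sedona.spark import SedonaContext

    glue_context = GlueContext(SparkContext.getOrCreate())
    sedona = SedonaContext.create(glue_context.spark_session)

    # Hypothetical source table of pipeline assets with lon/lat columns.
    assets = sedona.read.parquet("s3://example-bucket/o360/assets/")
    assets.createOrReplaceTempView("assets")

    # Geo-hash each asset location so nearby work can be bundled together.
    bucketed = sedona.sql("""
        SELECT asset_id,
               ST_GeoHash(ST_Point(longitude, latitude), 7) AS geohash7
        FROM assets
    """)
    bucketed.write.mode("overwrite").parquet(
        "s3://example-bucket/o360/geohashed/")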
“We’re also establishing patterns that can be reused across the organization to tackle TC Energy’s diverse use cases.” The cost savings generated by these initial steps are expected to fund the rest of the initiative, which will extend across multiple years.

The complexity of TC Energy’s O360 implementation necessitated an incremental approach that used a fleet of AWS services. First, the data is ingested and sent to Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance. Then, TC Energy processes data using a pair of solutions: Amazon EventBridge, a serverless event bus, and AWS Glue. Pariveda used the Modern Data Enterprise Framework to transform and enhance the data, which is then queried using Amazon Athena. Finally, TC Energy uses AWS Glue, Amazon EventBridge, and AWS Step Functions, a visual workflow service, to transform the data.

Derrick Bowen, Principal Consultant, Pariveda

TC Energy has already had significant success migrating existing systems to AWS. In March 2021, the company decided to look into what benefits could be realized from a centralized operations data lake. It knew that this would be a highly complex task because discovery alone involved over 140 internal stakeholders who connected data, processes, and people. The first step was to bring all these sponsors together to lay out and prioritize 11 unique use cases related to improving operational excellence. Then, TC Energy selected Pariveda as an implementation partner. “Pariveda has been very important in implementing a data lake that is based on industry best practices and architecture,” says Ali. On Pariveda’s recommendation, TC Energy opted to develop its new data lake with serverless architecture using AWS Glue, a serverless data integration service that makes it simpler to discover, prepare, and integrate data from multiple sources into secure data lakes. It also used Amazon Athena, a service for analyzing petabyte-scale data where it lives.

Opportunity | Using AWS Glue to Consolidate Diverse Data Sources for TC Energy

With 60,000 miles of pipeline in Mexico, Canada, and the United States, TC Energy invests heavily in operations and reliability to accommodate constantly changing demand and regulations. The company wanted to create a centralized data repository known as a data lake to improve the management of its US natural gas infrastructure by consolidating information from over 70 data sources. TC Energy turned to Amazon Web Services (AWS) and Pariveda Solutions Inc. (Pariveda), an AWS Partner, to facilitate this challenging project.

The results of the early stages of O360 implementation look promising for TC Energy’s future growth plans. “Using AWS serverless services, we achieved better scale and performance with reduced costs,” Bowen says. More than a third of the company’s data sources were added to the new data lake within the first 8 months, and the company now has a solid data foundation for future enhancements.
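Once data lands in the lake, Amazon Athena queries it in place. As a hedged illustration (the database, table, and result-bucket names are hypothetical, not TC Energy's), a boto3 client can submit a query and poll for its completion like this:

    import time
    import boto3

    athena = boto3.client("athena")

    # Ask how many records each source system contributed.
    query_id = athena.start_query_execution(
        QueryString="""
            SELECT source_system, COUNT(*) AS records
            FROM operations_lake.pipeline_events
            GROUP BY source_system
        """,
        QueryExecutionContext={"Database": "operations_lake"},   # hypothetical
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    # Poll until the query finishes, then fetch the result rows.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=query_id)[
            "ResultSet"]["Rows"]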
Additionally, as more data sources are ingested, a use case may evolve to focus on identifying emission reduction opportunities in pursuit of TC Energy’s sustainability goals. The team has also identified ways to use the ingested data with advanced machine learning to drive continual improvement and find efficiencies in the way that TC Energy operates.

Solution | Building a Data Foundation for Decades of Innovation Using Amazon Athena

70 distinct data sources targeted

TC Energy Builds an Operations Data Platform for 60,000 Miles of Pipeline Using AWS Data Analytics

For a complex, high-value use case, the team developed a rich, interactive web application for planners to collaborate on finding opportunities to bundle capital projects to save costs and reduce environmental and community impacts. The web front end is backed by AWS Lambda, a serverless, event-driven compute service, as well as Amazon DynamoDB, a fast, flexible NoSQL database service for single-digit millisecond performance at virtually any scale. The company selected Amazon API Gateway—a fully managed service that makes it simple for developers to create, publish, maintain, monitor, and secure APIs at virtually any scale—to create and maintain secure APIs for the interactive offering. “Using these AWS services, we’ve created a solution that makes it simple for users to see the data, drill into it, and work together to get value for the business,” says Bowen.

Automated program planning

Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale."
TCSG Works with AWS Academy to Offer Digital Cloud Computing Credential to 22 Colleges _ Case Study _ AWS.txt,"Hands-on learning provided to students using AWS services

Meanwhile, the Career Day event that TCSG hosted provided valuable experience to students in the Cloud Academy. “Events like Career Day help employers connect with students for internships, apprenticeships, or even direct hire before they graduate,” says Ferguson. “The education-to-workforce pipeline that we have built is invaluable.”

+200% growth in less than a year, from 7 to 200 students across 10 colleges

Learn from AWS experts. Advance your skills and knowledge. Build your future in the AWS Cloud.

TCSG built the TCSG Cloud Academy so that it could deliver virtual courses to potentially thousands of students from all 22 colleges. It engaged Amazon Web Services (AWS) to include valuable cloud learning content that would help students gain cloud computing expertise. In its first year, close to 200 students took cloud courses, and more than half of them earned a cloud-specific credential. The Technical College System of Georgia (TCSG) worked with AWS Education Programs to launch its virtual Cloud Academy, making cloud learning accessible regardless of location or IT setup.

Solution | Delivering Cloud Education in Diverse Forms

To promote the learning of in-demand skills and help students land jobs in a competitive workforce, the Technical College System of Georgia (TCSG) created the TCSG Cloud Academy, which helps address the state’s growing need for workers with cloud computing expertise.
As the global cloud computing market continues to grow rapidly, Atlanta has become one of the most tech-forward cities in the United States, with over 125,000 cloud computing jobs available.

AWS Academy: Empowering higher education institutions to prepare students for industry-recognized certifications and careers in the cloud.

Accessible education regardless of students’ IT setups

To connect those hands-on cloud experiences and the rest of the Cloud Academy curriculum to real-world jobs, TCSG collaborated with the AWS team in December 2021 to host a Career Day event that was attended by several major cloud computing employers. Students of the TCSG Cloud Academy could meet with prospective employers and gain practical insights into the in-demand cloud computing skills that these organizations want and that they would learn.

AWS Educate: Build your cloud skills at your own pace, on your own time, and completely for free.

Outcome | Expanding Cloud Learning to Even More Students

Accelerated time-to-market via career days with employers

Opportunity | Building Cloud Expertise for a Competitive Workforce

One of the most important benefits of using AWS Academy is its ready-to-teach curriculum, which simplifies training educators, making it easy to scale the degree program. In fact, TCSG used this feature to scale its program across its colleges. In 1 year, TCSG grew its Cloud Academy from 7 students across 3 colleges to almost 200 students across 10 colleges. And because all the necessary course materials were included in AWS Academy, just 12 AWS educators were able to teach all 200 students. “AWS Academy’s ready-to-teach curriculum makes it easy for institutions to adopt and start teaching cloud computing right away,” says Steven Ferguson, chief information officer at TCSG. “The time to market is the main advantage for us.”

TCSG Works with AWS Academy to Offer Digital Cloud Computing Credential to 22 Colleges

TCSG, in partnership with the Georgia Department of Education, plans to expand the TCSG Cloud Academy to 5,500 learners across Georgia by 2024. The agency also wants to include high schools that don’t already offer AWS courses to further promote cloud learning among the state’s future workforce.

Ready-to-teach curriculum from AWS Academy

In addition, TCSG’s Cloud Academy included AWS Academy Learner Labs—long-running hands-on lab environments where educators can bring their own assignments and invite their students to get experience using select AWS services. These labs cater to beginners as well as more experienced learners, and they make it simple for educators to assign projects, view students’ workspaces, and monitor course analytics in the cloud.

With its new online program, TCSG can equip its students with in-demand skills for future careers in the cloud, such as cloud support associate, network technician, web development engineer, and cloud support engineer, among others. It has also improved the accessibility of its courses and cloud expertise in general by removing barriers like location, IT setup, and hardware access.
TCSG is helping students enter a competitive workforce as educated cloud professionals and providing opportunities for success.

TCSG built its Cloud Academy using AWS Academy, which provides higher education institutions with a free, ready-to-teach cloud computing curriculum that prepares students to pursue industry-recognized certifications and in-demand cloud jobs. TCSG launched the TCSG Cloud Academy in two forms: one as a specialization within an existing associate’s degree and the second as a stand-alone technical certificate of credit. For the technical certificate of credit, students who have existing degrees can enter the curriculum to focus on cloud computing and participate in hands-on cloud experiences using AWS services.

About Technical College System of Georgia

The Technical College System of Georgia is the state government agency that supervises workforce development of more than 294,000 students across 22 technical colleges, 88 campuses, and more than 600 programs.

Using the AWS curriculum and technology as the foundation for its courses, TCSG is preparing students to earn industry-recognized AWS Certifications to increase employability while improving accessibility to cloud education by offering the academy virtually and remotely.

TCSG is the state of Georgia government agency that supervises workforce development of hundreds of thousands of students across 22 technical colleges, 88 campuses, and more than 600 programs. The agency aims to run a system of technical education using the latest technology that’s accessible to all adults and corporate citizens in the state. To develop and deploy its new cloud-focused curriculum, it worked with AWS Education Programs, which helps TCSG institutions develop initiatives that align education to careers in the cloud and promote student employability, preparing diverse learners for in-demand cloud roles around the world. In 2020, the organization officially launched the TCSG Cloud Academy—a virtual program for students pursuing expertise and certifications in cloud computing—on its eCampus virtual learning system."
Technology that delivers_ iFood and Appoena gain agility with AWS Marketplace _ iFood Case Study _ AWS.txt,"“We are a highly innovative and agile company, committed to maintaining our efficiency as we continue to grow. To ensure our efficiency at scale, we are actively exploring SaaS contracts available on the AWS Marketplace. They offer tremendous potential to make a significant impact toward meeting our goals.”

Carla Lemos, IT Governance Manager, iFood

Despite its unrivalled position as a market leader, however, iFood had a challenge. Exceptional growth led to silos of data and fragmented tools—and visibility suffered. Tech teams found application troubleshooting difficult; performance metrics and traceability functions were located across nine tools. The unwieldy system was time consuming to troubleshoot, threatening to impact system performance, which could result in a poor customer experience and potentially damage reputation. The company needed an observability solution to help it visualize data, explore metrics, and perform other reporting functions in a single tool.
It also wanted to procure and deploy the company’s chosen solution efficiently, which proved to be challenging to manage across nine tools. “As our customer base grows, we have to be more efficient and agile to deliver solutions,” says Carla Lemos, IT governance manager at iFood.

Increased efficiency for IT development and operation teams

About iFood

iFood is a Brazil-based online food ordering and delivery platform and financial services company serving the food ecosystem.

Technology that Delivers: iFood and Appoena Gain Agility Using AWS Marketplace

Brazil-based online food delivery service iFood is challenged to meet changing market demands while managing growth. Critical to that effort was the consolidation of its observability toolkit. iFood asked Appoena, an Amazon Web Services (AWS) Partner with expertise in Datadog implementations and migrations, to use AWS Marketplace to procure and deploy the software on its AWS environment. Migrating to the new solution would help the company consolidate multiple observability tools into one, giving it seamless visibility across the data infrastructure. Datadog is one of several independent software vendor (ISV) solutions iFood has transacted through AWS Marketplace. That list continues to grow as the benefits of a simplified approach to procurement, management, and reporting have begun to pay dividends.

Streamlined SaaS procurement process, leading to agility and an improved customer experience

“We have a huge number of SaaS vendors to select and manage,” says Lemos. “AWS Marketplace has helped us to track and manage all those expenses. It’s given us more confidence in our providers because we can always do a SaaS free trial and POC.”

Established better governance and control over current and future spend

AWS Marketplace has also become an important vehicle for discovery. The iFood team values using AWS Marketplace to review and benchmark vendors, gaining confidence in buying decisions as the organization’s needs grow and change. Saved from a burdensome procurement process, IT teams gain agility and can focus on high-value activities like building a better customer experience.

After multiple proofs of concept (POCs), iFood chose to overcome those functional challenges using the Datadog observability solution. iFood wanted to take advantage of the benefits of AWS Marketplace to implement the solution, asking its preferred partner, Appoena, to procure, implement, and service the solution. To facilitate that deal, iFood took advantage of the Channel Partner private offer (CPPO) process, which Appoena was qualified to provide on AWS Marketplace.

Increased visibility for improved troubleshooting with a single observability tool

Outcome | Governance and the Importance of SaaS

That flexibility helped Appoena manage the POC process of the engagement with iFood and streamlined the procurement and deployment. “We were so excited by the opportunities we found on AWS Marketplace,” says Willian Valerio, observability engineer and co-founder at Appoena. “All the processes on the platform gave us a way to close deals easily and more quickly. With AWS Marketplace, we were able to reduce procurement time for the project from 2 to 3 months to 2 or 3 days.” After the initial migration, Appoena continues to support iFood by transacting its professional services on AWS Marketplace.
As iFood embraced the technical advantages of its Datadog deployment with Appoena, it also gained efficiencies in procurement and reporting.

Solution | Flexible, Simplified Fulfillment with Private Offer

Opportunity | Innovation at Scale with SaaS Observability Solution

“Our experience with AWS Marketplace has been so positive that we’re making it a best practice to start there—for research, vetting, SaaS free trials. The support we get is incredible, and at every software renewal, I ask vendors about working through AWS Marketplace,” says Lemos. “Our continued partnership is so important to helping us to be agile.”

Founded in 2011, iFood is a food delivery hub for small to medium grocery, restaurant, and convenience stores. Innovation paved the way to manage soaring pandemic demand, and by Q2 of 2021, iFood held 80 percent market share in food delivery. The volume marshalled by the iFood application continues to grow unabated. During one weekend of the FIFA World Cup in 2022, iFood shattered sales records with more than 8 million orders placed on the app. Today, iFood continues to lead the food delivery ecosystem—from exploring drone delivery models to its pledge to go plastic free and carbon neutral by 2025.

AWS Marketplace is a curated digital catalog enabling customers to quickly find, test, buy, deploy, and manage the third-party software, data, and professional services necessary to build solutions and run their business. Procurement teams leverage AWS Marketplace to accelerate innovation and enable cloud users to deploy solutions rapidly and securely, while reducing total cost of ownership and improving operational oversight.

With benefits such as simplified procurement and fast deployment, Appoena quickly understood—and reaped—the value of being an AWS Marketplace Channel Partner. In the initial engagement, Appoena was able to help iFood consolidate nine tools into one on AWS. Integration with existing AWS infrastructure supports the technical teams as Appoena continues its close partnership with iFood. Leveraging professional services offered by Appoena helped iFood continue to optimize the performance of its software purchase—from workload migration, to operations, to reporting. Appoena has also discovered the benefits of transacting on AWS Marketplace and has been able to provide services to multiple new customers in the months after its own onboarding.

Optimized costs with flexible payment terms and schedules

AWS Marketplace standardized contracts and simplified billing have also accelerated other purchases. Using SaaS vendors such as Confluent and Databricks on AWS Marketplace is a key strategy to support innovation at scale.

For iFood, transacting with an AWS Marketplace private offer helped it step into its contract with flexible payment schedules. Staggered payments—at close and at the time of migration—better aligned with organization budget and forecasting needs. With all spend residing in a single location, iFood gains greater visibility and control over spending as well. Providing details on each software-as-a-service (SaaS) contract puts all the information the company needs to budget and forecast more accurately. “With AWS Marketplace, we are able to save around 2 to 3 weeks of internal process,” says Lemos.
“This gives us time for efficiency studies, negotiation, and improves the satisfaction of the technical teams.”

“Obviously, we must prioritize the technology—but now, we are looking to accelerate both the technology side of those solutions, as well as the administration of those systems,” says Lemos. “The flexibility of private offers has been amazing.”"
TEG on using Machine Learning and Amazon Personalize to boost user engagement and ticket sales _ Ticketek Video _ AWS.txt,"TEG on Using Machine Learning and Amazon Personalize to Boost User Engagement and Ticket Sales

Amazon Personalize allows developers to quickly build and deploy curated recommendations and intelligent user segmentation at scale using machine learning (ML).

“We’re actually determining the right way of communicating, starting to preempt that journey. We found an uplift of a conversion rate of well in excess of 200% on purchases. The number of tickets that are part of those purchases has gone up by close to 50%.”

Tane Oakes, Chief Technology Officer (CTO), TEG

In this video, Tane Oakes, Chief Technology Officer (CTO) at TEG, discusses how the company personalized its weekly email newsletter that goes to 4 million subscribers using Amazon Personalize.

Ticketek is owned by TEG and is a global leader in ticketing and technology with more than 40 years of experience ticketing major international events and partnering with the world’s premier venues. Based in Australia, TEG operates more than 30 brands in 40 countries on six continents. Prior to building on Amazon Web Services (AWS), newsletters were sent based on “state” parameters only, with no other personalization. Using Amazon Personalize, the company can now provide customers with a greater diversity of shows and events that suit their unique interests. Purchase rates improved by more than 200 percent, with the volume of tickets sold per newsletter open increasing by 49 percent."
Tempus Ex Case Study _ Amazon ECS _ AWS.txt,"Amazon ECS is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications.

Facilitates on-premises hardware deployments with no added complexity

In May 2021, AWS announced the general availability of Amazon ECS Anywhere. “We immediately saw the value and started working toward using it in our on-premises infrastructure,” says Brown. Using Amazon ECS Anywhere, Tempus Ex can use the same infrastructure-as-code templates that it was already running in the cloud to run its on-premises deployment. The on-premises Amazon ECS Anywhere clusters use AWS Direct Connect—a service that creates a dedicated network connection to AWS—to create a fast, reliable connection to the company’s cloud clusters, which delegate work to the on-premises clusters.
Saves time and improves workflow with lean team

AWS Training and Certification delivers over 200% ROI, as quantified by Forrester, by upskilling your existing workforce. Our content is created by experts at AWS and updated regularly so you can keep your cloud skills fresh.

As Tempus Ex expands its customer base, the company also plans to offer training to employees through AWS Training and Certification, which facilitates building and validating skills to get more out of the cloud.

Improving Efficiency without Increasing Complexity Using Amazon ECS Anywhere

Achieves 40x faster processing speeds

Simplifies deployments

Sports technology company Tempus Ex Machina (Tempus Ex) manages live video on game days for sports league and broadcasting customers such as American professional football. The company needed a simple way to deploy its solutions on specialized on-premises hardware without increasing the complexity of its workflows. Tempus Ex was already using Amazon Web Services (AWS) for its cloud deployments and wanted to find a hybrid solution to implement in a similar way on premises. Using Amazon Elastic Container Service (Amazon ECS) Anywhere, which lets users run and manage container workloads on customer-managed infrastructure, Tempus Ex deployed its workloads to specialized on-premises hardware to process and transcode high-resolution video 40 times faster without additional complexity and at a lower cost.

The AWS Direct Connect cloud service is the shortest path to your AWS resources. While in transit, your network traffic remains on the AWS global network and never touches the public internet.

Using Amazon ECS Anywhere, Tempus Ex can manage both hybrid and cloud deployments with a small team. For its cloud deployments, Tempus Ex uses Amazon ECS, a fully managed container orchestration service that makes it simple for users to deploy, manage, and scale containerized applications. The company can deploy the same infrastructure on its on-premises hardware using Amazon ECS Anywhere without adding manual processes or other complexities. “We’re a startup,” says Brown. “We wanted a solution that we could use while staying lean.”

Tempus Ex Processes Live Video for Professional Football at 40x Speed in Hybrid Solution Using Amazon ECS Anywhere

Using AWS as Building Blocks to Innovate

Since implementing Amazon ECS Anywhere to facilitate deployments on its specialized hardware, Tempus Ex’s workloads are transcoded 40 times faster than the previous processing speeds. The hardware also provides increases in the compatible codecs and resolutions that Tempus Ex can process. “Using Amazon ECS Anywhere saves us time and improves our workflow because we can use the same hardware in the cloud or on our local machines,” says Brown.

Staying Lean while Managing Ultra-High-Resolution Video Data

To achieve high-speed video processing and transcoding capabilities for customers, Tempus Ex needed robust hardware. “We work with ultra-high-resolution video—8K and above—and there’s not a lot of hardware that can handle that very well,” says Chris Brown, chief technology officer (CTO) at Tempus Ex. Tempus Ex purchased specialized hardware to manage video transcoding on premises, and the company looked into a hybrid infrastructure to incorporate this hardware while continuing to run most of its solution on AWS.
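The "same infrastructure-as-code templates" idea above comes down to one detail: an ECS task definition that declares EXTERNAL compatibility can be placed on registered on-premises hosts as well as in the cloud. The boto3 sketch below is illustrative only—the family, image, and cluster names are hypothetical, and it assumes the on-premises machines were already registered to the cluster through an AWS Systems Manager activation.

    import boto3

    ecs = boto3.client("ecs")

    # Task definition usable both in the cloud and on registered
    # on-premises hosts; the image URI and names are hypothetical.
    ecs.register_task_definition(
        family="transcode-8k",
        requiresCompatibilities=["EXTERNAL"],
        containerDefinitions=[{
            "name": "transcoder",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/transcoder:latest",
            "memory": 8192,
            "essential": True,
        }],
    )

    # Launch type EXTERNAL places the task on the registered
    # on-premises instances instead of cloud capacity.
    ecs.run_task(
        cluster="hybrid-video",          # hypothetical cluster name
        launchType="EXTERNAL",
        taskDefinition="transcode-8k",
    )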
“We’re trying a lot of innovative things here at Tempus Ex,” says Brown. “We rely on AWS for all sorts of new products, and we continue to find solutions on AWS to fill needs created by our new projects.”

About Tempus Ex

Tempus Ex is a sports data company providing solutions to gather, process, and consume data in new, innovative ways. Tempus Ex has partnered with premier global sports leagues, broadcasters, and world-class athletes to deliver cutting-edge solutions."
Teva Case Study _ Biopharma _ AWS.txt,"Excessive Use and Use with Other Long-acting Beta2-Agonists: AirDuo Digihaler should not be used more often than recommended, at higher doses than recommended, or in conjunction with other medicines containing LABA, as an overdose may result. Clinically significant cardiovascular effects and fatalities have been reported in association with excessive use of inhaled sympathomimetic drugs.

Please see full Prescribing Information for AirDuo Digihaler and ArmonAir Digihaler. To learn more, visit www.Digihaler.com. Please read the full Prescribing Information.

INDICATIONS FOR PROAIR® DIGIHALER®

ProAir® Digihaler® (albuterol sulfate) Inhalation Powder is a prescription medicine used in people ≥4 years of age for the treatment or prevention of bronchospasm in people who have reversible obstructive airway disease and for the prevention of exercise-induced bronchospasm.

INDICATIONS FOR ARMONAIR® DIGIHALER® AND AIRDUO® DIGIHALER®

AirDuo® Digihaler® (fluticasone propionate and salmeterol) inhalation powder is indicated for the treatment of asthma in patients aged 12 years and older. AirDuo Digihaler is only for patients uncontrolled on an inhaled corticosteroid (ICS) or whose disease severity clearly warrants an ICS/long-acting beta2-agonist (LABA). Limitation of Use: AirDuo Digihaler is not indicated for the relief of acute bronchospasm.

Serious Asthma-Related Events: Use of a LABA as monotherapy (without an ICS) for asthma is associated with an increased risk of asthma-related death. Available data from controlled clinical trials also suggest that use of LABA as monotherapy increases the risk of asthma-related hospitalization in pediatric and adolescent patients. These findings are considered a class effect of LABA monotherapy. When LABA are used in fixed-dose combination with ICS (such as AirDuo Digihaler), data from large clinical trials do not show a significant increase in the risk of serious asthma-related events (hospitalizations, intubations, death) compared with ICS alone.

Mark Maalouf, Vice President, Global Digital Health, Teva

Immunosuppression and Risks of Infections: Patients who use corticosteroids, such as those found in AirDuo Digihaler and ArmonAir Digihaler, are at risk for potential worsening of existing tuberculosis; fungal, bacterial, viral, or parasitic infections; or ocular herpes simplex. A more serious or even fatal course of chickenpox or measles may occur in susceptible patients.
Use with caution in patients with the above because of the potential for worsening of these infections.

Deterioration of Disease and Acute Episodes: AirDuo Digihaler should not be initiated in patients during rapidly deteriorating or potentially life-threatening episodes of asthma. ArmonAir Digihaler and AirDuo Digihaler are not indicated for the relief of acute bronchospasm. An inhaled, short-acting beta2-agonist, not ArmonAir Digihaler or AirDuo Digihaler, should be used to relieve acute symptoms such as shortness of breath.

Allows for analysis of patient-specific and aggregated data

Teva is now commercializing the Digihaler® family of inhalers in the United States. “In less than a year,” Nir says, “we were able to develop and deploy a digital health platform.” Teva has big plans for the franchise and its potential. In the long term, it is evaluating ways to expand digital devices like the Digihaler® family of inhalers and data analysis tools like those Teva has built on AWS, so patients, caregivers, and healthcare providers can have more informed conversations thanks to an increased understanding of patients’ inhaler use. “By continuing to make strides in the digital health space, backed by innovative AWS technology, we are able to develop and build a digital offering in-house—from the device to the software—allowing us to expand our capabilities beyond respiratory in the future,” Maalouf says.

Contraindications: ArmonAir Digihaler and AirDuo Digihaler are contraindicated in:
- Primary treatment of status asthmaticus or other acute episodes of asthma requiring intensive measures
- Patients with known severe hypersensitivity to milk proteins or any ingredients of ArmonAir Digihaler or AirDuo Digihaler

IMPORTANT SAFETY INFORMATION FOR AIRDUO® DIGIHALER® AND ARMONAIR® DIGIHALER®

Effect on Growth: ICS may cause a reduction in growth velocity. Patients should be maintained on the lowest dose of inhaled corticosteroid that effectively controls their asthma. Monitor growth of pediatric patients receiving ArmonAir Digihaler and AirDuo Digihaler.

Coexisting Conditions: Use AirDuo Digihaler with caution in patients with convulsive disorders, thyrotoxicosis, diabetes mellitus, ketoacidosis, and in patients who are unusually responsive to sympathomimetic amines.

1. World Health Organization. Asthma. www.who.int/news-room/fact-sheets/detail/asthma. Accessed October 7, 2021.

Using AWS services, Teva and Onica were able to build the DHP on a tight schedule without sacrificing security, speed, or analytical capability. Teva and Onica worked together to automate a once long, tedious, manual verification and validation process in every deployment. Automating this process extends compliance early into the development cycle, bridging the gap between engineering and quality control. This process automation is part of Teva’s long-term success, which depends on the ability to stay compliant while developing at a rapid pace. “Teva’s speed of innovation and its ability to quickly ship products to its platform demonstrates the company’s commitment to the end user, because the company prioritized operating in such an automated and controlled way,” says Puccio.

Gives patients the ability to view and share data if desired

To build the DHP, Onica used AWS Lambda so that it could run code without having to provision or manage servers. “AWS Lambda lets you use the service as needed,” says Nir.
“It’s not always alive and kicking. Once it’s needed, it wakes up and starts to engage.”

The database component of the DHP would be crucial to allow for both analysis of an individual patient’s usage of the inhaler and big data analytics across all users. The system relies on Amazon S3—an object storage service that offers industry-leading scalability, data availability, security, and performance—as a long-term database. It pairs Amazon S3 with Amazon DynamoDB—a key-value and document database that delivers single-digit millisecond performance at any scale—using the latter as a mobile snapshot database. For the system to comply with regulations, Teva needed to be able to both store all records indefinitely and access quick snapshots of its data. “That’s one of the things we found beneficial about AWS,” Nir explains. “Using Amazon DynamoDB allowed us to create a very fast-paced and simple-to-scale database.”

Once Onica and Teva got to work, the main challenge was time. Implementing a serverless architecture using AWS solutions empowered Teva to scale, control costs, and experience fast development cycles. “We needed to put it into production about 7.5 months after our project kickoff,” says Matt Puccio, practice manager of cloud-native development with Onica. “So we used a lot of cloud-native technologies to build in an expedited fashion. Using AWS was the major reason why we could meet this timeline.”

AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes.

IMPORTANT SAFETY INFORMATION FOR AIRDUO® DIGIHALER® AND ARMONAIR® DIGIHALER® (Continued)

Glaucoma and Cataracts: Long-term use of ICS, including fluticasone propionate, a component of ArmonAir Digihaler and AirDuo Digihaler, may increase the risk for cataracts or glaucoma. Regular eye exams should be considered.

Cardiovascular and Central Nervous System Effects: The salmeterol component of AirDuo Digihaler can be associated with excessive beta-adrenergic stimulation, which could present as the following symptoms: seizures, angina, hypertension or hypotension, tachycardia with rates up to 200 beats/min, arrhythmias, nervousness, headache, tremor, palpitation, nausea, dizziness, fatigue, malaise, and insomnia. Use with caution in patients with cardiac arrhythmias, hypertension, or coronary insufficiency. The drug may need to be discontinued in certain patients.

Adverse Reactions with AirDuo Digihaler: Most common adverse reactions (greater than or equal to 3%) include nasopharyngitis, oral candidiasis, headache, cough, and back pain.

IMPORTANT SAFETY INFORMATION FOR PROAIR® DIGIHALER® (Continued)

Use of Anti-Inflammatory Agents: ProAir Digihaler alone may not be adequate to control asthma in many patients. Early consideration should be given to adding anti-inflammatory agents, e.g., corticosteroids.

Drug Interactions: Other short-acting sympathomimetic bronchodilators should not be used concomitantly with ProAir Digihaler.

The development of this system faced an extremely tight deadline: Teva had set an internal target of 1 year to develop the production version of the DHP, and it needed a tested, trustworthy cloud provider as a foundation. “The maturity of AWS infrastructure and the level of security audits that AWS performs on its data centers and services gave us peace of mind,” says Maalouf.
The development of this system faced an extremely tight deadline: Teva had set an internal target of 1 year to develop the production version of the DHP, and it needed a tested, trustworthy cloud provider as a foundation. “The maturity of AWS infrastructure and the level of security audits that AWS performs on its data centers and services gave us peace of mind,” says Maalouf. “We knew that the privacy and security of patient and customer data would be the top priority.” Teva decided to engage AWS Partner Onica to develop a custom system that Teva would be able to adapt and expand as needed.

IMPORTANT SAFETY INFORMATION FOR AIRDUO® DIGIHALER® AND ARMONAIR® DIGIHALER® (Continued)

INDICATIONS FOR ARMONAIR® DIGIHALER® AND AIRDUO® DIGIHALER®: ArmonAir® Digihaler® (fluticasone propionate) inhalation powder is indicated for the maintenance treatment of asthma as prophylactic therapy in patients 12 years of age and older. Limitation of Use: ArmonAir Digihaler is not indicated for the relief of acute bronchospasm. ArmonAir Digihaler and AirDuo Digihaler contain a built-in electronic module which detects, records, and stores data on inhaler events for transmission to the mobile app. Use of the App is not required for administration of medication to the patient.

Transferring Patients from Systemic Corticosteroid Therapy: Particular care is needed for patients who have been transferred from systemically active corticosteroids to ICS because deaths due to adrenal insufficiency have occurred in patients with asthma during and after transfer from systemic corticosteroids to less systemically available ICS. Taper patients slowly from systemic corticosteroids if transferring to ArmonAir Digihaler or AirDuo Digihaler.

Hypersensitivity Reactions, Including Anaphylaxis: Immediate hypersensitivity reactions (e.g., urticaria, angioedema, rash, bronchospasm, hypotension), including anaphylaxis, may occur after administration of ArmonAir Digihaler or AirDuo Digihaler. Discontinue ArmonAir Digihaler or AirDuo Digihaler if such reactions occur.

Drug Interactions with Strong Cytochrome P450 3A4 Inhibitors: The use of strong cytochrome P450 3A4 (CYP3A4) inhibitors (e.g., ritonavir, ketoconazole) with ArmonAir Digihaler or AirDuo Digihaler is not recommended because increased systemic corticosteroid adverse effects may occur; increased cardiovascular adverse effects may also occur with AirDuo Digihaler.

Glaucoma and Cataracts: Long-term use of ICS, including fluticasone propionate, a component of ArmonAir Digihaler and AirDuo Digihaler, may increase the risk for cataracts or glaucoma. Regular eye exams should be considered.

Cardiovascular and Central Nervous System Effects: The salmeterol component of AirDuo Digihaler can be associated with excessive beta-adrenergic stimulation, which could present as the following symptoms: seizures, angina, hypertension or hypotension, tachycardia with rates up to 200 beats/min, arrhythmias, nervousness, headache, tremor, palpitation, nausea, dizziness, fatigue, malaise, and insomnia. Use with caution in patients with cardiac arrhythmias, hypertension, or coronary insufficiency. The drug may need to be discontinued in certain patients.

Adverse Reactions with AirDuo Digihaler: Most common adverse reactions (greater than or equal to 3%) include nasopharyngitis, oral candidiasis, headache, cough, and back pain.

Adverse Reactions with ArmonAir Digihaler: Most common adverse reactions (greater than or equal to 3%) are: upper respiratory tract infection, nasopharyngitis, oral candidiasis, headache, and cough.

IMPORTANT SAFETY INFORMATION FOR PROAIR® DIGIHALER® (Continued)

Use of Anti-Inflammatory Agents: ProAir Digihaler alone may not be adequate to control asthma in many patients. Early consideration should be given to adding anti-inflammatory agents, e.g., corticosteroids.

Drug Interactions: Other short-acting sympathomimetic bronchodilators should not be used concomitantly with ProAir Digihaler.

Hypersensitivity Reactions including Anaphylaxis: Immediate hypersensitivity reactions may occur after administration of albuterol sulfate, as demonstrated by rare cases of urticaria, angioedema, rash, bronchospasm, anaphylaxis, and oropharyngeal edema. Hypersensitivity reactions including anaphylaxis, angioedema, pruritus, and rash have been reported with the use of therapies containing lactose, an inactive ingredient in ProAir Digihaler.

PLEASE SEE ADDITIONAL SAFETY INFORMATION BELOW

To learn more, visit aws.amazon.com/health/biotech-pharma.

Building a Secure Digital Inhaler

The company envisioned an inhaler that would connect to a mobile app so that patients and healthcare providers could track inhaler events, the digital health analytics team could access anonymized information in a database, and permissioned healthcare providers could review inhaler use data shared by their own patients (upon consent) through a dashboard.

To address the security needs that were a critical part of Teva’s requirements for the system, Teva harnessed Amazon Cognito, which lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Nir appreciates that Amazon Cognito is “very, very secure,” and it helped Teva build a robust authentication mechanism that would help the company safeguard user data.
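As a rough illustration of what Cognito handles here, the sketch below uses the real Amazon Cognito Identity Provider API via boto3 with a hypothetical app client ID; production mobile apps would typically use Cognito's client SDKs (for example, AWS Amplify) rather than raw server-side calls.

import boto3

cognito = boto3.client("cognito-idp")

# Hypothetical ID; each app client belongs to a Cognito user pool.
USER_POOL_CLIENT_ID = "example-app-client-id"

def sign_up(email: str, password: str) -> str:
    """Register a user; Cognito stores and verifies credentials, not our code."""
    resp = cognito.sign_up(
        ClientId=USER_POOL_CLIENT_ID,
        Username=email,
        Password=password,
        UserAttributes=[{"Name": "email", "Value": email}],
    )
    return resp["UserSub"]  # stable user identifier

def sign_in(email: str, password: str) -> str:
    """Exchange credentials for a JWT that the backend can verify on every call."""
    resp = cognito.initiate_auth(
        ClientId=USER_POOL_CLIENT_ID,
        AuthFlow="USER_PASSWORD_AUTH",  # the user pool must enable this flow
        AuthParameters={"USERNAME": email, "PASSWORD": password},
    )
    return resp["AuthenticationResult"]["IdToken"]

The design benefit is that passwords, token signing, and MFA live inside the managed service, so the DHP backend only ever sees verifiable tokens.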
However, to make this system work smoothly and to develop a truly secure backend that could protect patient data, Teva needed to innovate, so it decided early on that it wanted to construct a serverless architecture to reap the benefits of scalability as well as time and operational cost efficiency.

Teva Respiratory, LLC (Teva), an affiliate of Teva Pharmaceutical Industries Ltd., a major worldwide pharmaceutical company, saw an opportunity to use digital device technology to provide objective inhaler data to patients and caregivers, helping to support a more informed dialogue between healthcare providers and patients in the management of their respiratory conditions. Asthma affects roughly 339 million people worldwide according to the World Health Organization.1 Teva wanted to help facilitate more informed discussions between these patients and their healthcare providers.

With a portfolio of over 3,500 products, Teva supplies thousands of drugs around the world. It’s also one of the largest manufacturers of inhalers. In 2013 the company imagined a way to enhance its product portfolio for patients around the world. “We saw a huge opportunity to go into the digital health space and progress existing inhaler technology into a digital inhaler,” says Mark Maalouf, vice president of Global Digital Health at Teva. “We imagined an inhaler that could track how often the inhaler is used and also measure inspiratory flow, which may help HCPs assess if inhaler technique needs improvement.” Research reveals that one of the biggest challenges in respiratory care is healthcare providers’ ability to accurately assess how often patients are using their inhalers,5 especially once they leave the healthcare provider’s office. This is where objective inhaler-use data could be beneficial, allowing for more informed discussions around the disease and treatment.

IMPORTANT SAFETY INFORMATION FOR PROAIR® DIGIHALER®

Most common adverse reactions (≥1% and >placebo) are back pain, pain, gastroenteritis viral, sinus headache, urinary tract infection, nasopharyngitis, oropharyngeal pain, and vomiting.

Monoamine Oxidase Inhibitors or Tricyclic Antidepressants: ProAir Digihaler should be administered with extreme caution to patients being treated with these agents, or within 2 weeks of discontinuation of these agents, because the action of albuterol on the cardiovascular system may be potentiated. Consider alternative therapy.

Hypokalemia and Hyperglycemia: Beta-adrenergic agonist medicines may produce significant hypokalemia in some patients, possibly through intracellular shunting, which has the potential to produce adverse cardiovascular effects. Decreases in serum potassium are usually transient, not requiring supplementation.
Be alert to hypokalemia and hyperglycemia in patients using AirDuo Digihaler.

Deterioration of Asthma: Need for more doses of ProAir Digihaler than usual may be a marker of acute or chronic deterioration of asthma and requires reevaluation of treatment, such as possible need for anti-inflammatory treatment, e.g., corticosteroids.

Digoxin: Carefully evaluate the serum digoxin levels in patients who are currently receiving digoxin and ProAir Digihaler.

Beta-Blockers: Beta-adrenergic-receptor blocking agents not only block the pulmonary effect of beta-agonists, such as ProAir Digihaler, but may produce severe bronchospasm in asthmatic patients. Therefore, patients with asthma should not normally be treated with beta-blockers.

Paradoxical Bronchospasm: ProAir Digihaler can produce paradoxical bronchospasm that may be life-threatening. Discontinue ProAir Digihaler and institute alternative therapy if paradoxical bronchospasm occurs.

Eosinophilic Conditions and Churg-Strauss Syndrome: Systemic eosinophilic conditions, such as Churg-Strauss syndrome, may occur when using ArmonAir Digihaler or AirDuo Digihaler. Be alert to eosinophilia, vasculitic rash, worsening pulmonary symptoms, cardiac complications, and/or neuropathy.

Reduction in Bone Mineral Density (BMD): Decreases in BMD have been observed with long-term administration of products containing ICS. Patients with major risk factors for decreased bone mineral content, such as prolonged immobilization, family history of osteoporosis, or chronic use of drugs that can reduce bone mass (e.g., anticonvulsants, oral corticosteroids), should be monitored and treated with established standards of care when using ArmonAir Digihaler or AirDuo Digihaler.

Oropharyngeal Candidiasis has occurred in patients treated with ArmonAir Digihaler or AirDuo Digihaler. Advise patients to rinse the mouth with water without swallowing following inhalation.

ADH-40715 December 2021

Teva Uses AWS to Manage Digital Inhaler Data for Patients and Healthcare Providers

About Teva

Teva Pharmaceuticals is a global leader in generic and specialty medicines in nearly every therapeutic area with a portfolio of over 3,500 products. Around 200 million people around the world use a Teva medicine product every day.

Finding Opportunities to Help Inform Treatment Decisions

The Digihaler® family of inhalers, however, presented a range of technical challenges. Teva needed to add Bluetooth technology and built-in sensors to measure inspiratory flow rates and save data into the firmware. The company also needed to develop mobile applications that could securely communicate with the inhaler, store data, and give patients the ability to view and share that data. The data would travel to a cloud database—the DHP—which would permit both patient-specific data analysis and aggregated analysis of the information coming from the inhalers and applications. The resulting platform meets regulatory, privacy, and security requirements for its service.
Using AWS to Quickly Create a Robust System

Teva used a group of Amazon Web Services (AWS) services—including AWS Lambda and Amazon Simple Storage Service (Amazon S3)—and worked alongside AWS Premier Consulting Partner Onica, a Rackspace Technology company, to construct its serverless architecture for the Digihaler® family of inhalers. This FDA-approved family of digital, breath-actuated inhalers uses built-in sensors to track how often and how well the inhaler is used as measured by inspiratory flow rates. Inhaler use is recorded as an event when the cap is opened or the patient inhales.2–4 The inhaler records events and sends data directly to the Digihaler® app through Bluetooth wireless technology, and the app can then display data and reports that patients can choose to share with their healthcare providers. Using AWS services, Teva was able to establish a digital health platform (DHP) for the Digihaler® family of inhalers in less than a year—a cloud system that meets regulatory, privacy, and security requirements.

IMPORTANT SAFETY INFORMATION FOR PROAIR® DIGIHALER® (Continued)

Contraindications: ProAir Digihaler (albuterol sulfate) Inhalation Powder is contraindicated in patients with hypersensitivity to albuterol or patients with a severe hypersensitivity to milk proteins. Rare cases of hypersensitivity reactions, including urticaria, angioedema, and rash have been reported after the use of albuterol sulfate. There have been reports of anaphylactic reactions in patients using inhalation therapies containing lactose.

Cardiovascular Effects: ProAir Digihaler, like other beta-adrenergic agonists, can produce clinically significant cardiovascular effects in some patients, as measured by heart rate, blood pressure, and/or symptoms. If such effects occur, the drug may need to be discontinued. ProAir Digihaler, like all sympathomimetic amines, should be used with caution in patients with cardiovascular disorders, especially coronary insufficiency, cardiac arrhythmias, and hypertension.

Do Not Exceed Recommended Dose: Fatalities have been reported in association with excessive use of inhaled sympathomimetic drugs in patients with asthma.

Hypokalemia: As with other beta-agonists, ProAir Digihaler may produce significant hypokalemia in some patients. The decrease is usually transient, not requiring supplementation.

Diuretics: Caution is advised in the coadministration of beta-agonists with non-potassium-sparing diuretics (such as loop or thiazide diuretics). Consider monitoring potassium levels.

Patients using AirDuo Digihaler should not use another medicine containing a LABA (e.g., salmeterol, formoterol fumarate, arformoterol tartrate, indacaterol) for any reason.

Hypercorticism and Adrenal Suppression may occur with high doses of ICS, including fluticasone propionate, or at the recommended dose in susceptible individuals. If such changes occur, discontinue ArmonAir Digihaler or AirDuo Digihaler slowly.

Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
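To make the "patient-specific data analysis" half of that design concrete, here is a sketch of a DynamoDB Query against the hypothetical InhalerEvents table from the earlier example; the key schema (patient_id partition key, event_time sort key) is an assumption for illustration, not Teva's actual layout.

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("InhalerEvents")  # hypothetical table

def events_for_patient(patient_id: str, since_iso: str) -> list[dict]:
    """Patient-specific view: all events for one patient since a given timestamp.

    With patient_id as partition key and event_time as sort key, this is a
    single indexed Query, which is what makes single-digit-millisecond
    snapshot reads possible at any scale.
    """
    resp = table.query(
        KeyConditionExpression=(
            Key("patient_id").eq(patient_id) & Key("event_time").gte(since_iso)
        )
    )
    return resp["Items"]

Aggregated, population-level analysis would instead run over the S3 archive copies, keeping heavy analytics traffic off the low-latency mobile path.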
IMPORTANT SAFETY INFORMATION FOR AIRDUO® DIGIHALER® AND ARMONAIR® DIGIHALER® (Continued)

Paradoxical Bronchospasm and Upper Airway Symptoms: Paradoxical bronchospasm may occur. If bronchospasm occurs, treat immediately with an inhaled, short-acting bronchodilator, discontinue AirDuo Digihaler or ArmonAir Digihaler, and institute alternative therapy.

Coexisting Conditions: ProAir Digihaler, like all sympathomimetic amines, should be used with caution in patients with convulsive disorders, hyperthyroidism, or diabetes mellitus, and in patients who are unusually responsive to sympathomimetic amines.

One major consideration was security and privacy. “Because the system holds both protected health information and personally identifiable information, we needed to give the patient control over who can access their data, such as their physician,” says Yaron Nir, head of the DHP. These privacy issues were also important from a legal and regulatory standpoint: Teva wanted the digital inhaler system to be compliant with HIPAA and global health regulations. In the process, Teva developed an FDA-approved family of digital inhalers with built-in sensors.

Footnotes
1. World Health Organization. Asthma. www.who.int/news-room/fact-sheets/detail/asthma. Accessed October 7, 2021.
2. ProAir Digihaler Prescribing Information. Parsippany, NJ. Teva Respiratory, LLC.
3. AirDuo Digihaler Prescribing Information. Parsippany, NJ. Teva Respiratory, LLC.
4. ArmonAir Digihaler Prescribing Information. Parsippany, NJ. Teva Respiratory, LLC.
5. George M. Adherence in Asthma and COPD: New Strategies for an Old Problem. Respir Care. 2018 Jun;63(6):818-831. doi: 10.4187/respcare.05905."

The Mill Adventure Case Study.txt,"The Mill Adventure Delivers Secure, Compliant, and Personalized iGaming Solutions Using AWS

About The Mill Adventure

The Mill Adventure is a challenger in the iGaming space, providing groundbreaking turnkey solutions. Its offering includes licenses and operations to support rapid deployment for companies that want to offer iGaming websites. Founded in Malta in 2019, the company helps its customers meet industry and regional compliance and security requirements, as well as make better use of customer data to enhance and tailor the player experience.

Content overload is a real challenge in the iGaming world. With so much choice, selecting the best game to play can be difficult for end users. The Mill Adventure’s platform helps its customers deliver targeted content to increase engagement and improve the user experience.

The Mill Adventure’s platform is certified to comply with the requirements set by a number of regulating bodies, including the Swedish Gambling Authority (SGA), the Malta Gaming Authority (MGA), the Dutch Kansspelautoriteit (KSA), the German Glücksspielbehörde (GGL), and the Romanian National Gambling Office (ONJN). The platform implements specific features to control player verification, authentication, checks in central registries, and activity limits.

From its beginnings in Malta, The Mill Adventure is now active in four continents, with more international targets planned in Europe and North America. Choosing AWS as the backbone of its platform means The Mill Adventure can move fast to secure clients. “We can onboard customers in less than 6 weeks, whereas it usually takes our competitors on traditional, on-premises infrastructures a number of months,” says Arruda. The Mill Adventure is on track to double its customer base in 1 year.

Solution | A Serverless Infrastructure to Support Rapid Growth

Its serverless environment has helped its customers accommodate growing demand, too. AWS services such as AWS Lambda and Amazon DynamoDB form the foundation of The Mill Adventure’s iGaming platform, meaning it can scale instantly to changing volumes and simplify the operational burden of maintaining the platform. The Mill Adventure also uses Amazon Kinesis Data Streams to publish all changes, which other components can react to.
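A minimal sketch of what publishing such a change event to Amazon Kinesis Data Streams can look like in Python; the stream name and event shape are illustrative, not The Mill Adventure's actual schema.

import json
import boto3

kinesis = boto3.client("kinesis")

def publish_change(entity: str, entity_id: str, payload: dict) -> None:
    """Publish a platform change event for downstream components to react to.

    Partitioning by entity id keeps each entity's events ordered within a
    shard, so a consumer sees a given player's or game's changes in sequence.
    """
    kinesis.put_record(
        StreamName="platform-changes",  # hypothetical stream
        PartitionKey=f"{entity}:{entity_id}",
        Data=json.dumps({"entity": entity, "id": entity_id, "payload": payload}),
    )

# e.g. publish_change("player", "p-123", {"status": "self_excluded"})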
The Mill Adventure recognized that with data-driven insights, iGaming businesses can engage with customers more effectively, adapt more quickly to changing regulations, and streamline operations across the board. Critically, they can also improve the user experience through personalization and protect player safety by identifying unsafe or potentially illegal practices. But processing huge volumes of player-generated data is a challenge. To help deliver its vision of a personalized and responsible customer experience, The Mill Adventure turned to Amazon Web Services (AWS) to build its serverless platform.

Based in Malta, The Mill Adventure provides comprehensive and customizable turnkey iGaming solutions. Its functionality includes everything from licenses and support for meeting compliance demands, to operations and business intelligence. Using AWS, the company built a serverless, business-to-business iGaming platform that can deliver a personalized and responsible experience for players. Since its inception, The Mill Adventure is on track to double its customer base year-on-year and has reduced the time it usually takes to onboard customers from months to weeks. Innovation happens quickly at the company, with over 10,000 product updates released in just 3 years.

Opportunity | Complying with Regulations to Help Customers Expand Securely

iGaming companies need to comply with regulatory and data privacy regulations in the territories in which they offer services. Businesses need to ensure that when they enter and operate in a new regulated market, they do so safely. When the company first launched, The Mill Adventure’s team recognized its platform needed to help customers meet evolving regulatory requirements. “The high level of regulation can be a barrier to market entry,” says Dario Arruda, chief executive officer (CEO) at The Mill Adventure. “We wanted to help our customers by simplifying the process.”

Customers can choose to work in its white-label setup with The Mill Adventure’s licenses. “Regulatory compliance of our serverless environment is simple,” says Arruda. “Testing and iterating compliance updates happens quickly within development lifecycles, and ensures the team are ready to roll out any changes required by new regulations.” That means the company can respond to changes and integrate new compliance services, helping its customers to enter new jurisdictions faster and with lower overheads. “Keeping updated with emerging regulations requires continuous work,” says Arruda.

Amazon Athena is a serverless, interactive analytics service built on open-source frameworks, supporting open-table and file formats.

Using AWS, The Mill Adventure can offer its customers peace of mind that security is covered. “Our platform upholds the highest service level agreements (SLAs) in the industry,” says Arruda. “Being ISO 27001– and ISO 17065–certified as well as Payment Card Industry Data Security Standard (PCI DSS) compliant demonstrates our commitment to customer data security.
In addition, we have easily accessible and cost-effective AWS offsite backups running concurrently.”

Solution | A State-of-the-Art Analytics and Data Platform

The company’s data lake integrates with Amazon QuickSight, a cloud-native, serverless business intelligence (BI) service. The Mill Adventure provides its customers with industry-standard reports that give granular insight into iGaming operations. This includes operational reports (such as finance and payments), marketing analytics, and automated regulatory reports. The latter offloads the burden of reporting duties required by the different authorities regulating iGaming. In addition, using Amazon QuickSight, The Mill Adventure makes it easy for customers to explore their data in real time and to easily author reports on demand, something which the company views as a competitive advantage. “Amazon QuickSight intelligence is important because we can provide our customers with the business analytics they need to fine-tune their often complex organization to stand out and grow in a very competitive industry,” says Dario Arruda, chief executive officer, The Mill Adventure.

AWS CloudFormation lets you model, provision, and manage AWS and third-party resources by treating infrastructure as code. Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.

Opportunity | Personalizing While Protecting Players Using Amazon Personalize

The Mill Adventure’s platform also prioritizes player welfare. The iGaming industry complies with standards relating to player verification and authentication. It must also run checks on central registries, monitor player activity limits, and preempt addictive behaviours. These demands put a duty of care on iGaming providers to make sure players don’t develop unhealthy habits, or take part in criminal activity such as money laundering. The Mill Adventure’s platform is built from the ground up with this in mind. Using machine learning technology, players are profiled according to their activity and gaming behaviors. Players at risk are automatically tagged for investigation, making the task of identifying and keeping track of cases requiring intervention as straightforward as possible.

The Mill Adventure manages content targeting with its own SmartLobbies service—a fully managed machine learning service for personalized recommendations. The service was built using Amazon Personalize to create real-time, personalized user experiences faster at scale. The result is that iGaming customers can maximize the value of their player data to curate game content targeted to each user, without requiring dedicated teams to select content manually. This helps lower customers’ operational costs and means that human resources are freed up to focus on product development.
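As a sketch of the kind of call a service like SmartLobbies would make at serving time, here is the standard Amazon Personalize runtime request in Python; the campaign ARN is a placeholder, and the surrounding solution (dataset import and training) is assumed to already exist.

import boto3

personalize_rt = boto3.client("personalize-runtime")

# Placeholder ARN; a campaign fronts a trained Personalize solution version.
CAMPAIGN_ARN = "arn:aws:personalize:eu-west-1:123456789012:campaign/smart-lobby"

def recommended_games(player_id: str, count: int = 10) -> list[str]:
    """Return game item ids ranked for one player in real time."""
    resp = personalize_rt.get_recommendations(
        campaignArn=CAMPAIGN_ARN,
        userId=player_id,
        numResults=count,
    )
    return [item["itemId"] for item in resp["itemList"]]

Because the ranking comes back per request, a lobby can be re-personalized on every page load without any manually curated content lists.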
This analytics foundation also supports player-welfare intervention. “To help encourage responsible gaming, intelligence from Amazon QuickSight is important because it helps us analyze if a player’s behavior is hitting a threshold,” says Arruda. “The next steps can include guiding them to a self-assessment, or to proactively make efforts to stop addictive habits as soon as possible. Our players’ welfare matters.”

To succeed and grow in a competitive international market, iGaming businesses must deliver compelling gaming experiences, while also meeting numerous industry regulations that vary from territory to territory.

Outcome | Innovating at Pace with Thousands of New Features

Since it launched, The Mill Adventure has wasted no time bringing new products and services to market. Its development team can easily create test environments on AWS that replicate production environments in minutes, with no upfront costs. There have been more than 10,000 new features in around 530 releases over 3 years, with no reported interruptions to service. “We make broad use of AWS services,” says Arruda. “This is critical in helping us deliver the value our customers need.” Using AWS, The Mill Adventure wants to innovate even further. “There’s an old Chinese proverb that continues to inspire us,” says Arruda. “When the winds of change blow, some people build walls and others build windmills.”"

The Next Frontier_ Generative AI for Financial Services _ AWS for Industries.txt,"AWS for Industries

The Next Frontier: Generative AI for Financial Services

by Ruben Falk | on 22 JUN 2023

Generative artificial intelligence (AI) applications like ChatGPT have captured the headlines and imagination of the public. Generative AI is a type of AI that can create new content and ideas, including conversations, stories, images, videos, and music. Like all AI, generative AI is powered by machine learning (ML) models—very large models (known as Large Language Models or LLMs) that are pre-trained on vast amounts of data and commonly referred to as foundation models (FMs). In the financial services industry, leaders and developers are eager to understand generative AI’s potential and put it to work. For example, Banco Bilbao Vizcaya Argentaria, S.A. (BBVA), a global banking leader, announced plans to explore the potential of advanced technologies, like Amazon Bedrock, a new service that makes FMs from Amazon and leading AI startups accessible via an API, to create innovative financial solutions. Earlier this year, Goldman Sachs started experimenting with generative AI use cases, like classification and categorization for millions of documents, including legal contracts. While traditional AI tools can help solve for these use cases, the organization sees an opportunity to use LLMs to take these processes to the next level. JPMorgan also recently announced that it is developing a ChatGPT-like software service that helps select the right investment plans for customers. Bloomberg released training results for BloombergGPT™, a new large-scale generative AI model trained on a wide range of financial domain data.
As a financial data company, Bloomberg has data analysts who have collected and maintained financial language documents spanning 40 years. To improve existing natural language processing (NLP) tasks like sentiment analysis, and to extend the power of AI in financial services, Bloomberg created a 50-billion-parameter LLM—a form of generative AI—purpose-built for finance.

We are truly at an exciting inflection point in the widespread adoption of ML, but as leaders in the financial services industry move forward, they will need to define the problems they want to solve using generative AI and establish a cloud strategy to enable generative AI opportunities. In this blog, we focus on a handful of generative AI use cases for the financial services industry, how AWS enables customers to quickly build and deploy generative AI applications at scale, and how to get started with generative AI at AWS.

Use cases for financial services

Across banking, capital markets, insurance, and payments, executives are eager to understand generative AI and applicable use cases, and developers want to experiment with generative AI tools that are easy to use, secure, and scalable. Below we explore four use case categories where generative AI can be applied in the financial services industry.

1. Improve customer experience

LLMs can improve employee productivity through more intuitive and human-like accurate responses to employee queries, for example an HR bot that can answer HR-related questions. They can also create more capable and compelling conversational AI experiences for external customer service applications, such as call center assist functionality that provides agents with automated assistance, contextual recommendations, and next best actions. Without LLMs, questions would typically have to be anticipated and a fixed set of answers would have to be created in advance by human authors. With LLMs, answers can be generated on the fly, and as new information becomes available, it can be incorporated automatically into the answers provided.

Today, financial services institutions leverage ML in the form of computer vision, optical character recognition, and NLP to streamline the customer onboarding and know-your-customer (KYC) processes. Generative AI can help firms deliver flexible and relevant conversations that improve the overall customer experience, like adapting the conversational style to match that of the customer (for example, casual conversation mode or formal conversation mode). With LLMs, firms can automatically translate complex questions from internal users and external customers into their semantic meaning, analyze for context, and then generate highly accurate and conversational responses. Specifically, LLMs enable long-form answers to open-ended questions (e.g., search thousands of pages of legal or technical documentation and summarize the key points that answer the question). Data captured from customer interactions, such as call transcriptions and chat logs, can also be summarized and analyzed for sentiment to more easily understand the themes associated with positive or negative customer experiences. Similarly, themes of interest to individual customers and the context of prior conversations can be summarized and incorporated to enhance an omni-channel approach and deliver a unified brand experience for customers.
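Sentiment analysis over transcripts does not require an LLM at all; AWS's pre-trained NLP service Amazon Comprehend exposes it directly. A minimal sketch using the real batch API with illustrative data:

import boto3

comprehend = boto3.client("comprehend")

def transcript_sentiment(utterances: list[str]) -> dict[str, int]:
    """Tally sentiment across call-center utterances.

    BatchDetectSentiment accepts up to 25 documents per call, so we chunk.
    """
    counts = {"POSITIVE": 0, "NEGATIVE": 0, "NEUTRAL": 0, "MIXED": 0}
    for start in range(0, len(utterances), 25):
        batch = utterances[start:start + 25]
        resp = comprehend.batch_detect_sentiment(TextList=batch, LanguageCode="en")
        for result in resp["ResultList"]:
            counts[result["Sentiment"]] += 1
    return counts

Where an LLM adds value on top of this is in summarizing the themes behind the negative utterances, not in the sentiment scoring itself.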
2. Increase productivity of knowledge workers

Generative AI tools can help knowledge workers, such as financial or legal analysts, product innovators, and consultative sales professionals, become more efficient and effective in their roles. Knowledge workers will evolve their focus from searching for, aggregating, and summarizing key sections of text and images to checking the accuracy and completeness of answers provided by generative AI models. This use case has application for many job roles, including financial advisors and analysts preparing investment recommendations, compliance analysts responding to the impact of new regulations, loan officers drafting loan documentation, underwriters crafting insurance policies, and salespeople preparing RFI responses. In all these cases, the human professional can retain edit rights and final say, and be able to shift focus to other more value-add activities.

3. Understand market and customer sentiment

The ability to track event-driven news exists today, and many hedge funds and quants have developed ways to trade the markets based on signals from news and social media sentiment, confidence, and story counts. However, traditional event-driven investment strategies and surveillance methodologies rely on mining for known behavior and patterns. Generative AI has the potential to surface new themes and associated sentiment without direction. For instance, LLMs can identify new trends in consumer behavior from social media content by clustering posts with similar meaning and assigning the clusters an aggregate measure of sentiment. Similarly, negative sentiment associated with specific content, such as a new advertising campaign, can quickly be identified and summarized. Investors and enterprises can then respond promptly to this information.
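One concrete way to implement "clustering posts with similar meaning" is to embed each post with a foundation model and cluster the vectors. A sketch using Amazon Bedrock embeddings together with scikit-learn's k-means; the Titan model ID shown is illustrative and must be enabled in your account and region.

import json
import boto3
import numpy as np
from sklearn.cluster import KMeans

bedrock = boto3.client("bedrock-runtime")

def embed(text: str) -> np.ndarray:
    # Any text-embedding FM available via Bedrock works the same way.
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(resp["body"].read())["embedding"])

def cluster_posts(posts: list[str], k: int = 8) -> list[int]:
    """Group posts by semantic similarity; each cluster can then be scored
    for aggregate sentiment to surface emerging themes."""
    vectors = np.stack([embed(p) for p in posts])
    return KMeans(n_clusters=k, n_init=10).fit_predict(vectors).tolist()

The novelty relative to keyword-based surveillance is that clusters form around meaning, so a theme nobody thought to search for can still emerge as its own group.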
4. Drive product innovation and automate business processes

Generative AI has the potential to help financial advisors and investors leverage conversational text to automatically create highly tailored investment strategies and portfolios. For example, a financial advisor or investor could speak or type into a wealth management platform: “I want to invest in clean energy companies that don’t rely on mining of raw materials in countries with poor human rights.” A generative AI-enabled platform could then provide a list of companies with supporting commentary on why those companies were selected. Similarly, investors could access and read auto-generated summarized commentary on their investments and portfolios. The initial implementations of these solutions are likely to be aimed internally at financial advisors given that, today, generative AI has limitations with respect to accuracy. Such limitations would have to be overcome for these solutions to be truly scalable; i.e., if the daily commentary tailored to each retail customer’s portfolio had to be checked by a human, it might defeat the purpose of such generative AI-created commentaries, at least for the mass affluent.

Generative AI can also rapidly and efficiently produce data products from textual data sources that are only lightly used today. For instance, annual reports and filings (such as 10-Ks filed with the SEC in the United States) are primarily used as a source for financial statements. Buried in the text of these documents is data that could power a product catalog or a customer and supply-chain relationship map across all or most public companies globally. Generative AI can create these types of data products at a fraction of the cost that it would take to extract this information manually or with traditional NLP processes. In past blogs, we have described how LLMs can be fine-tuned for optimal performance on specific document types, such as SEC filings. Annual reports are just one, albeit an important, source that can feed data products. Unstructured data (mostly text) is estimated to account for 80%–90% of all data in existence. Generative AI is well suited to transform these large repositories of written and spoken word into on-demand structured or semi-structured information that can power investment processes and retail investor interactions. Investment research, investor presentations, earnings call transcripts, broadcast news and interviews, newspapers, trade journals, and websites are examples of content sources which, when searched comprehensively and appropriately summarized, can provide targeted intelligence of value to investors, such as pricing trends or consumer preferences for particular products or product areas.

Building on over 20 years of experience

AI and ML have been a focus for Amazon for over 20 years, and many aspects of the Amazon customer experience are informed or driven by ML, including our eCommerce recommendations engine; the paths that optimize robotic picking routes in our fulfillment centers; and our supply chain, forecasting, and capacity planning. Amazon Web Services (AWS) leverages Amazon’s experience and the experiences of our customers with the goal of democratizing ML and making it accessible to anyone who wants to use it. This includes more than 100,000 customers of all sizes and industries, who we have helped innovate using AI and ML with industry-leading capabilities, including financial services. Today, we have the broadest and deepest portfolio of AI and ML services. For example, we developed Amazon SageMaker, an easy way for all developers to build, train, and deploy models. We also offer access to a wide range of artificial intelligence (AI) and ML services that enable the financial services industry to add AI capabilities like image recognition, forecasting, and intelligent search to applications with a simple API call. Today, financial services leaders like NatWest, Vanguard, and PennyMac, as well as thousands of startups and government agencies around the world, use our tools to help them leverage AI and ML to transform and advance their organizations, industries, and missions. We take the same democratizing approach to generative AI in financial services, making it easy, practical, and cost-effective for customers to use in their business across all three layers of the ML stack: infrastructure, tools, and purpose-built AI services. Our approach to generative AI is to invest and innovate across the ML stack to take this technology out of the realm of research and make it available to customers of any size and developers of all skill levels.

Powering generative AI opportunities

With AWS, financial services customers get the flexibility to choose the way they want to build with generative AI: build their own FMs with purpose-built ML infrastructure, leverage pre-trained FMs as base models to build their applications, or use services with built-in generative AI without requiring any specific expertise in FMs. To enable this flexibility, we have identified four important considerations so that you can quickly build and deploy generative AI applications at scale.
1. Make AWS the easiest place to build with FMs. Amazon Bedrock is a new service that makes FMs from Amazon and leading AI startups, including AI21 Labs, Anthropic, and Stability AI, accessible via an API. It is the easiest way for customers to build and scale generative AI-based applications using FMs, democratizing access for all builders.

2. Invest in the most price-performant infrastructure for machine learning. Harnessing the power of generative AI requires a large amount of computational resources and data, which can be costly and time-consuming to acquire and manage. Using our AWS Trainium and AWS Inferentia chips, we offer the lowest cost for training models and running inference in the cloud.

3. Deploy game-changing generative AI applications like Amazon CodeWhisperer. Generative AI can take the heavy lifting out of time-consuming coding tasks and accelerate building with unfamiliar APIs. Amazon CodeWhisperer is an AI coding companion that uses an FM to radically improve developer productivity by generating code suggestions in real time based on developers’ comments in natural language and prior code in their Integrated Development Environment (IDE).

4. Provide flexibility to work with open-source models or build your own FMs. In addition to models in Bedrock, Amazon SageMaker JumpStart is an ML hub offering algorithms, models, and ML solutions. With SageMaker JumpStart, customers can discover, explore, and deploy open-source FMs that are not available in Bedrock, such as OpenLLaMA, RedPajama, Mosaic MPT-7B, FLAN-T5/UL2, GPT-J-6B/NeoX-20B, and Bloom/BloomZ.
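As a sketch of that fourth path, the SageMaker Python SDK can deploy a JumpStart FM in a few lines. The model ID below is illustrative (available IDs vary by region and SDK version), an execution role is assumed to be available (for example, when run from SageMaker Studio), and the endpoint incurs charges until deleted.

# Deploy an open-source FM from SageMaker JumpStart and query it.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")
predictor = model.deploy()  # provisions a real-time inference endpoint

response = predictor.predict({
    "inputs": "Summarize the key risks in this earnings call excerpt: ...",
    "parameters": {"max_new_tokens": 256},
})
print(response)

predictor.delete_endpoint()  # endpoints bill until deleted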
Ready to start reimagining your business for today and tomorrow? As financial services institutions move forward, they will need a good understanding of generative AI technology, the ability to compare and contrast the efficacy of different FMs for specific tasks, and the opportunity to experiment with different approaches to domain adaptation and model customization. At AWS, we aim to make it easy and practical for our customers to explore and use generative AI in their businesses. Join the Generative AI and the future of financial services webinar on July 13th, 11:00 am EDT. Learn more about AWS AI and ML and Generative AI for financial services customers. Get started with Amazon SageMaker JumpStart to solve common use cases for financial services.

Ruben Falk
Ruben is a Capital Markets Specialist with focus on Data Architecture, Analytics, Machine Learning & AI. Ruben joined AWS from S&P Global Market Intelligence, where he was Global Head of Investment Management Solutions and ran product strategy and market development for S&P’s fundamental and quantitative investment management products, including desktop, data feeds, NLP, and the ClariFI quant platform. Previously Ruben was a Director with UBS Investment Bank and also spent time as a management consultant. Ruben has a Computer Science degree from Brandeis University and an MBA from UC Berkeley."

The positive impact Generative AI could have for Retail _ AWS for Industries.txt,"AWS for Industries

The positive impact Generative AI could have for Retail

by David Dorf | on 24 MAY 2023

Since it was released back in November 2022, the internet has been buzzing about ChatGPT. Since then, retailers have been asking two main questions: what is it, and how will it impact my business? Let’s dive into both, staying high-level, and see if we can make sense of all the hype.

What is Generative AI?

Most people were introduced to Generative AI (GenAI) when they heard about ChatGPT. ChatGPT is a chatbot application that’s using a large language model (LLM) called GPT. There are other LLMs available, but GPT seems to be the most advanced to date. An LLM is a type of Foundation Model (FM) that is focused on language. FMs are neural networks that are trained with vast amounts of data so they can pick out patterns and formulate rules without explicitly being told the rules. In the English language there are lots of rules, and lots of exceptions to those rules. The model learns the rules, and the exceptions, by examining the vast amounts of writing on the internet and in books. What’s unique about the advancement of LLMs is the model’s ability to keep track of context, meaning, and relevance. Its size allows it to quickly reference many, many facts so it can converse on almost any topic. Of course, it doesn’t really know what it’s saying—it’s merely repeating back what it learned when it was trained. This brings up a shortcoming—it can occasionally “hallucinate.” That is, it sometimes may repeat untruths or draw incorrect conclusions. FMs can be trained on language, images, mathematics, and more. This forms a base model upon which users can add additional specific training to tune the model for a given purpose. Generative AI (GenAI) uses FMs to generate new things based on its training. This includes content creation and natural language interactions. Let’s look at each.

There are three main areas where content creation shines:
Create textual artifacts like product descriptions, blogs, and marketing content. That’s not to say it’s ready to print, but it certainly provides an excellent starting point for a human to refine.
Create custom images without the need for expensive photography. Imagine being able to populate your website with images that are generated.
Create code for programming—programming languages are just another type of language, so FMs can be trained to be good at writing and debugging code. That’s not to say programmers are going away; rather, it’s a tool to boost programmer productivity.

There are four main areas for leveraging natural language interactions:
Enhanced chatbots—customers can ask more complex questions about their orders and product recommendations.
Summarization could take bulk data like weekly sales, inventory reports, and more, and provide a summary.
Real-time language translations, which could bring international users to your website (a minimal sketch follows this list).
Potential to enhance search by allowing complex requests then providing detailed results.
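The translation bullet above is one of the easiest to try today, because Amazon Translate, a pre-trained neural translation service rather than a generative FM, handles it with a single call. A minimal sketch:

import boto3

translate = boto3.client("translate")

def localize(text: str, target_lang: str) -> str:
    """Translate storefront copy on the fly; source language is auto-detected."""
    resp = translate.translate_text(
        Text=text,
        SourceLanguageCode="auto",
        TargetLanguageCode=target_lang,
    )
    return resp["TranslatedText"]

# e.g. localize("Free shipping on orders over $50", "de")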
How will this impact retailers?

So now that we have an understanding of this new technology, we can look at applications for the retail industry. First and foremost, FMs (and LLMs and GenAI) can make existing artificial intelligence and machine learning (AI/ML) applications better. For example, you may already be using machine learning for personalized recommendations, but adding FMs might open up a conversational aspect that allows customers to discuss recommendations. The following figure showcases some ideas classified by retailer solution area.

Figure 1 – Retail Use Cases by Solution Area

GenAI could be used to improve chatbot engagement, generate interesting product descriptions, provide training content for employees, and detect potential supply chain bottlenecks. These are just a few of the many use cases that could benefit retailers leveraging FMs. Retailers should be open to experimentation and continue to watch as this technology matures further. Keep a backlog of possible use cases, and start to learn about the FMs available today (for example, DALL-E, Stable Diffusion, Midjourney, and Amazon Titan).

How can AWS help?

For years AWS has been helping retailers use AI/ML to automate processes, enhance the customer experience, and optimize decisions. We continue to be on the forefront of research and ways to increase access to AI/ML tools. AWS is previewing Amazon Bedrock, a fully managed service that makes FMs from leading AI startups and Amazon available through an API. You can choose from a wide range of FMs to find the model that is best suited for your use case:
Search, find, and synthesize information to answer questions from a large corpus of data.
Create realistic and artistic images of various subjects, environments, and scenes from language prompts.
Help customers find what they’re looking for with more relevant and contextual product recommendations than word matching.

Also available is Amazon CodeWhisperer, a developer tool that can generate code suggestions ranging from snippets to full functions in real time based on your comments and existing code. Enhance code security by scanning your code to detect hard-to-find vulnerabilities, and get code suggestions to remediate them immediately. Align to best practices for tackling security vulnerabilities, such as those outlined by the Open Worldwide Application Security Project (OWASP), or those that don’t meet crypto library best practices and other similar security best practices. And as always, retailers will find Amazon Personalize, Amazon Forecast, and Amazon SageMaker available to address retailers’ AI/ML requirements.

Conclusion

The advancements in GenAI and the capabilities demonstrated are nothing short of amazing, but we are still in the early days of this technology. Retailers should certainly be adopting proven AI/ML solutions like personalization, forecasting, and chatbots while monitoring the GenAI space and looking for use cases that directly impact their business. Contact an AWS Representative to learn how we can help accelerate your business.

Further Reading
• Announcing New Tools for Building with Generative AI on AWS
• Generative AI on AWS
• AWS Machine Learning Blog for Retail

David Dorf
David Dorf is a Worldwide Retail Specialist at AWS, where he focuses on providing solutions for retailers. David previously held positions at Infor Retail, Oracle Retail, 360Commerce, Circuit City, AMF Bowling, and Schlumberger’s Retail & Banking division, developing retail systems using various technologies.
David spent several years working with NRF-ARTS on technology standards and continues to support the Retail Orphan Initiative charity. He holds degrees from Virginia Tech and Penn State."

The Retail Race_ A Roadmap for Implementing a Smart Store Strategy _ AWS for Industries.txt,"AWS for Industries

The Retail Race: A Roadmap for Implementing a Smart Store Strategy

by Justin Swagler | on 31 MAY 2023

Retailers have always been in a race to deliver an exceptional customer experience, and in the digital age, that race has only become more intense—particularly in physical stores. With consumers shopping again at brick-and-mortar stores, some key trends have emerged:
Half of retail customers intend to still use digital, mobile, self-service, and contactless technologies adopted during the pandemic.
60% of customers will become repeat buyers after a personalized experience with a retailer.
By 2027, 70% of retail store sales will be “digitally influenced”.
By 2025, 60% of retail customers expect retail space to be focused on experience rather than product.

This raises customer expectations of convenient, omnichannel digital experiences in physical stores. Delivering an exceptional customer experience has become more critical than ever. However, doing so poses many challenges for retailers today:
Legacy technology infrastructure: Many retailers still rely on legacy technology infrastructure, which can be expensive to maintain and can make it difficult to quickly implement new solutions.
Data fragmentation: Retailers often have fragmented data sources, which can make it difficult to gain a comprehensive view of customer behavior, inventory levels, and other key metrics.
Lack of real-time insights: Traditional retail analytics solutions often rely on batch processing, which can lead to delays in getting actionable insights.
Inability to personalize customer experiences: Without a comprehensive view of customer data, it can be difficult for retailers to provide personalized recommendations and offers.

Ultimately, by embracing new technologies, such as Amazon Web Services (AWS) Smart Store Solutions, retailers can stay competitive, enhance customer experience, and drive revenue growth. The key is to remain agile and adaptable, constantly seeking new ways to improve the customer experience and optimize their operations. In my recent blog Embrace Retail’s Future: Bringing Smart Store Solutions to Life, I shared a few approaches for how retailers can assess and prioritize implementation of Smart Store Solutions. However, doing so is a journey and requires a strategic roadmap focused on the right investments and steps. If you’re looking to embark on the journey of implementing AWS Smart Store Solutions in your physical retail environment, this blog post offers a roadmap to guide you. It’s important to note that every retailer’s digital maturity level and priorities may vary. Therefore, you can customize this roadmap to align with your specific needs and focus on your highest priorities.
Accelerating Towards Smart Store Success: A Strategic Roadmap

Figure 1: Key steps along the strategic Smart Store roadmap

To help retailers succeed in the retail race, a strategic Smart Store roadmap is crucial. This roadmap outlines key steps retailers can take to leverage AWS Smart Store Solutions and transform their physical retail stores into modern, customer-centric environments. By following this roadmap, retailers can embrace innovative technologies, gain valuable insights, and create seamless experiences that keep them ahead of the competition.

Step 1: Modernize Commerce Architecture

The first crucial step for retailers is to transition to a MACH or composable architecture. This modern architecture approach, which stands for Microservices-based, API-first, Cloud-native, and Headless, enables retailers to rapidly assemble and disassemble different components of their technology stack. By embracing this architecture, retailers can swiftly implement new solutions and respond to evolving customer demands. It promotes agility and adaptability, empowering retailers to stay ahead in the race. To underscore the significance of MACH in driving growth and innovation, a June 2020 Gartner report highlights its impact: “By 2023, organizations that have adopted a composable approach will outpace competition by 80% in the speed of new feature implementation.” Gartner emphasizes the strategic advantage businesses can gain by embracing a composable architecture, underscoring the importance of this step in the strategic roadmap to smart stores.

Step 2: Establish and Streamline Data & Analytics Foundation

To fully harness the potential of AWS Smart Store Solutions, retailers must establish robust data, analytics, and insights platforms. This entails capturing data from various sources and leveraging analytics to extract valuable insights. These insights can be utilized to optimize the customer experience, streamline operations, and drive sales. By making data-driven decisions and taking real-time actions, retailers can gain a competitive edge, as highlighted by these retailer examples:
Tapestry – Tapestry, the parent company of Coach, Kate Spade, and other luxury brands, unified inventory and sales data and deployed an ML-based inventory optimization solution.
Under Armour – By consolidating and harnessing its enterprise data, this renowned athletic brand has enabled rapid design and deployment of new experiences for its vast customer base of over 180 million.

Step 3: Monitor and Optimize with AI/ML and IoT Based Solutions

The implementation of artificial intelligence and machine learning (AI/ML) and/or Internet of Things (IoT)-based solutions unlocks a realm of advanced capabilities for retailers. Leveraging AI/ML services such as Amazon SageMaker, AWS Panorama, and Amazon Personalize, retailers can offer personalized recommendations, optimize pricing, and automate processes. Retailers can also optimize staff scheduling, productivity, or communications by utilizing Workforce Management solutions. AWS IoT solutions, such as Amazon Kinesis, Amazon Location Service, AWS IoT Core, and AWS IoT TwinMaker, enable near real-time inventory tracking, store condition monitoring, and customer behavior analysis.
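As a sketch of the IoT side of Step 3, here is a sensor reading published to the AWS IoT Core message broker with boto3; the topic layout and payload are illustrative, and real in-store devices would more commonly connect over MQTT with the AWS IoT Device SDK.

import json
import boto3

# The "iot-data" client publishes to the AWS IoT Core message broker over
# HTTPS; downstream rules can route matching topics to Kinesis, DynamoDB, etc.
iot = boto3.client("iot-data")

def report_shelf_weight(store: str, shelf: str, grams: float) -> None:
    """Publish a shelf-sensor reading for near real-time inventory tracking."""
    iot.publish(
        topic=f"stores/{store}/shelves/{shelf}/weight",  # hypothetical topic scheme
        qos=1,
        payload=json.dumps({"grams": grams}),
    )

# e.g. report_shelf_weight("chicago-001", "aisle4-shelf2", 1824.5)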
With these cutting-edge technologies and the power of AWS, retailers gain valuable insights and achieve heightened operational efficiency, enabling them to deliver unparalleled customer experiences.

Step 4: Reduce Friction with Checkout Solutions

Retailers need to implement omnichannel and checkout solutions that ensure a seamless experience across all customer touchpoints, be it online, mobile, in-store, or social media. By offering consistent and convenient interactions, retailers can meet customer expectations and preferences. Additionally, optimizing the checkout process is essential for enhancing the overall customer journey, reducing friction, and driving conversions.
US pet store chain Petco fulfills customer needs quickly and conveniently by providing a curbside pickup service that it deployed in just six weeks alongside AWS Retail Competency Partner JBS Solutions Inc. (JBS).
UK grocer Sainsbury’s uses AWS technologies to reduce friction in checkout through its SmartShop application (Mobile Scan & Go) and Just Walk Out technologies.

Figure 2: The Benefits of Implementing a Smart Store Strategy

Stay Ahead in the Race

In conclusion, adopting a strategic roadmap to smart stores empowers retailers to thrive in the dynamic landscape of modern retail. By following the outlined steps, retailers can unlock a multitude of benefits that revolutionize their operations and customer experiences. A comprehensive view of customer behavior, achieved by capturing data from all channels and sources, enables retailers to provide personalized recommendations and offers, fostering stronger customer relationships. With near real-time insights provided by AWS Smart Store Solutions, retailers can swiftly respond to inventory levels, customer behavior, and other key metrics, enabling proactive decision-making and optimized operations. The result is an improved customer experience, where personalized recommendations, streamlined checkout processes, and convenient fulfillment options create a frictionless journey that cultivates loyalty and repeat business. The strategic roadmap empowers retailers to reduce costs and improve profitability by automating processes, optimizing operations, and minimizing inventory levels. By embracing these benefits, retailers can position themselves at the forefront of innovation, delivering exceptional experiences, and securing their success in the competitive retail landscape. Find out how AWS and AWS Retail Competency Partners can support your retail transformation with Smart Store solutions. Learn more at aws.amazon.com/retail/.

Further Reading
AWS shows why physical retail matters more than ever at NRF 2023
Making the Smart Store a Reality: How Retailers Can Elevate Experiences, Operate Efficiently, and Achieve IT Agility
AWS Retail Solution Library
Great MACH runs on AWS

Justin Swagler
Justin Swagler is worldwide head of Physical Retail at AWS, where he leads the global strategy and thought leadership for physical retailing. Justin has 15+ years of consumer packaged goods, retail, and strategy experience spanning innovation strategy, retail operations, product development, and executive leadership. He is passionate about shepherding organizations to strategically innovate and reinvent consumer experiences.
He holds an undergraduate degree from the University of Illinois at Urbana-Champaign and an MBA from the Kellogg School of Management."

Thomson Reuters Uses Amazon DMA to Accelerate Database Modernization _ Thomson Reuters Case Study _ AWS.txt,"2023

As a global provider of business information services with over 20,000 employees in 100 countries, Thomson Reuters offers products such as highly specialized tools for legal, tax, accounting, and compliance professionals.

“Thomson Reuters made use of Amazon DMA experts and AWS tools and proved that we could integrate them into our process and convert and migrate our legacy database estate.”

Amazon Aurora provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services.

Thomson Reuters has been on a cloud journey since 2015, but that journey accelerated when the company decided to exit its largest data center, which housed 800 applications and 40,000 workloads. The migration is just one component of a $600 million change program that aims to transform the Thomson Reuters business from a holding company to an operating company with more shared systems and capabilities. “We needed more consistency and efficiency,” says Bart Matzek, senior director of platform engineering at Thomson Reuters. “It was an opportunity for us to take advantage of the automation, the scale, and the power of cloud databases.”

AWS Schema Conversion Tool (AWS SCT) performs cloud-native code optimization by converting legacy Oracle and SQL Server functions to their equivalent AWS services, helping to modernize applications at the same time as the database migration.

Thomson Reuters’s Solution

Thomson Reuters Uses Amazon DMA to Accelerate Database Modernization

Bart Matzek, Senior Director of Platform Engineering, Thomson Reuters

Industry Challenge

AWS Services Used

Amazon Database Migration Accelerator (Amazon DMA) is a solution that brings together AWS Database Migration Service (DMS), AWS Schema Conversion Tool (AWS SCT), and AWS database experts to help customers migrate away from traditional commercial databases at fixed prices.

Following the pilot, Thomson Reuters started scaling the migration to other databases. The company identified 31 other databases that were ready for Amazon DMA within the first year. “All this keeps our app teams focused on features,” says Matzek. “It simplifies our operations and reduces our expenses. It allows us to take a step toward modernizing. And then it keeps our engineers learning and progressing in the cloud.”

Learn how Thomson Reuters accelerated the migration of its largest data center, which housed 800 applications and 40,000 workloads, using Amazon DMA.

According to Matzek, some key lessons learned are to define and share your process early, understand your infrastructure limitations, and engage the right experts.
“Including Amazon DMA in our conversations early was a key thing that we did,” he says. Thomson Reuters engineers were excited to tackle the project’s inherent challenges. One team had 10,000 schemas with 200 TB of data that needed to be converted in 45 days to hit scale-testing timelines, according to Matzek. They hit that ambitious timeline by proactively working alongside the Amazon DMA team—which brought expertise in architecture, tooling, performance, and tuning—to develop a plan, move through operational roadblocks, and automate key tasks. That team provided hands-on training workshops and attended daily scale-testing meetings to troubleshoot issues, greatly flattening the learning curve.

After establishing goals and plans, Thomson Reuters used Amazon DMA to launch a pilot on a category-one database. Using AWS tools such as AWS Database Migration Service (AWS DMS) and AWS Schema Conversion Tool (AWS SCT), the pilot took approximately 8 weeks, and it concluded with a successful database migration and modernization to Amazon Aurora, a relational database designed for high performance and availability at global scale with full MySQL and PostgreSQL compatibility. “This was a huge win for us,” says Matzek. “It provided a basis to start exploring this further.”

AWS Database Migration Service (AWS DMS) is a managed migration and replication service that helps move your database and analytics workloads to AWS quickly, securely, and with minimal downtime and zero data loss.

Looking ahead, Thomson Reuters is paving the way forward using Amazon DMA. “Thomson Reuters made use of Amazon DMA experts and AWS tools and proved that we could integrate them into our process and convert and migrate our legacy database estate,” says Matzek. “We’ve made a ton of progress.”

About Thomson Reuters

Thomson Reuters did not just want to migrate to the cloud. Incremental modernization was an important goal, with key objectives that included optimizing its product portfolio, simplifying operations, reducing expenses, and creating an inclusive culture of world-class talent. The company turned to Amazon Web Services (AWS) and ultimately chose Amazon Database Migration Accelerator (Amazon DMA), a solution that brings together AWS services and AWS database experts to help customers migrate away from traditional commercial databases. Thomson Reuters took advantage of a methodical approach to migration that weighed database complexity along with technical, financial, and human resources to meet its needs.

Benefits of Using Amazon DMA"

THREAD _ Life Sciences _ AWS.txt,"“Using AWS, we’ve built a faster system and supporting tools that help our customers conduct more modern, global, and patient-centric clinical trials.”
John Reites, Chief Executive Officer, THREAD

Benefits of AWS

Using AWS also helps THREAD deliver secure and scalable access to applications and nonpersistent desktops from virtually any location.

Performing an AWS Well-Architected Review

AWS Professional Services

The AWS Professional Services organization is a global team of experts that can help you realize your desired business outcomes when using the AWS Cloud.
Drives recruitment and retention in clinical research

Innovating on AWS

AWS Services Used

Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.

To conduct remote studies using its platform, THREAD offered a simple, intuitive interface to capture data from patients, caregivers, and remote nurses. THREAD built a customer-facing portal that gives researchers the ability to review the data from their clinical trials and studies all in one place, on demand. This portal also provides all the digital tools that pharmaceutical companies, nurses, and caregivers need to connect with participants remotely, including videoconferencing capabilities, onboarding modules, e-consent forms, surveys, and more. “Capturing data directly from people in their homes was new in our industry and required a patient-focused approach, including our simple, user-friendly interface,” says Reites. “Through this portal, researchers can interact with their data and conduct visits with their participants from anywhere in the world, not only in a clinic.”

As THREAD continues to expand, it plans to keep working alongside AWS to optimize its architecture. “Working on AWS means that we can customize our services and build out more tools on top of our infrastructure,” says Pearson. “We can operate at scale and give our customers peace of mind that they have a reliable, secure environment where they can run clinical trials and test new medications.”

Scaled to support studies across 60 countries and over 100,000 participants

THREAD uses Amazon Relational Database Service (Amazon RDS), which provides customers with the ability to set up, operate, and scale a relational database in the cloud with just a few clicks. With this scalability, THREAD’s customers have conducted studies that include over 100,000 different participants across 60 countries. “Using AWS, we’ve built a faster system and supporting tools that help our customers conduct more modern, global, and patient-centric clinical trials,” says Reites. “In turn, these research studies are more inclusive and convenient for both the researchers and participants.”

Reduces manual work and quality checks

AWS Professional Services is a team of AWS experts that helps companies achieve their desired business outcomes on AWS.

Modernizing Clinical Trials for All Stakeholders

Launched in 2016, THREAD provides a fully configurable platform for running decentralized clinical trials (DCTs) and patient-centric eCOA solutions, with consulting services that offer global scalability and rapid flexibility from trial design to close out.

As THREAD expanded into new regions, it wanted to validate that its infrastructure and data storage would comply with local regulations. “The regulatory environment is constantly changing and is different in every country,” says Scott Pearson, chief product officer at THREAD.
“We had to think of innovative ways to store, secure, and access data for our customers while maintaining alignment with regulatory requirements.” THREAD turned to AWS Professional Services for guidance.

Industry Innovators 2022: THREAD

About THREAD

Delivers up to 30% cost savings to customers at scaled use

As a cloud-first company, THREAD adopted several AWS services at the time of its launch, including Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3).

Through its engagement with AWS Professional Services, THREAD learned about the AWS Well-Architected Tool, which helps companies review their architecture and adopt AWS best practices. Using this resource, THREAD focused on future-proofing its environment and optimizing its architecture with data security and regulatory compliance in mind. “The AWS Well-Architected Tool helped us better understand our current infrastructure and how we could achieve our goals in the fastest, most efficient way possible by aligning with AWS best practices,” says Pearson.

The AWS Well-Architected Tool is designed to help you review the state of your applications and workloads, and it provides a central place for architectural best practices and guidance.

Clinical research technology company THREAD provides a fully configurable platform for running decentralized clinical trials (DCTs) and patient-centric electronic clinical outcome assessment (eCOA) solutions. It also provides consulting services that offer global scalability and rapid flexibility from clinical research design to close out. THREAD’s solution helps pharmaceutical companies and clinical research organizations (CROs) design studies that are modern, more inclusive, and retain more participants, thus improving sponsors’ ability to achieve research objectives. To deliver on that promise, THREAD needed a scalable, reliable hosting infrastructure that could also meet the country-specific regulatory requirements across its global operations.

Helps its customers accelerate their times to market

Achieved high availability for its solution

THREAD Scales Decentralized Clinical Trials Across 60 Countries Using AWS

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.

2022

Clinical trial recruitment remains one of the biggest challenges in the pharmaceutical industry due to time, costs, and physical constraints. Launched in 2016, THREAD provides a unified, proprietary platform and consulting services for pharmaceutical companies and CROs to design, operate, and scale decentralized research studies and eCOA programs, which help trial sponsors increase clinical trial engagement and retention and accelerate their times to market. Because THREAD’s customers can conduct their research in the home, on the go, and in clinics, they can recruit people from anywhere, which increases trial diversity, engagement, and retention. By addressing these challenges in recruitment and study design, THREAD can support up to 30 percent time and cost savings for its customers and studies that are five times more inclusive than industry benchmarks. Using Amazon Web Services (AWS), THREAD has scaled its platform across 60 countries to meet the heightened demand for patient-centric DCTs and eCOA solutions during the COVID-19 pandemic. By making an early investment in cloud technologies, THREAD can innovate and offer more inclusive and cost-effective solutions compared with traditional clinical trials, which are more complex and expensive.
Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 475 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

THREAD adopted Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. THREAD chose AWS as its cloud service provider because of its global availability, scalability, dedicated support, custom-built solutions for regulated industries, and data security service-level agreements. Over the years, THREAD has used Amazon EC2 to scale its infrastructure and expand to new countries.

Part of THREAD’s unique approach is unifying disparate trial datasets into one central hub that offers streamlined access and powers advanced analytics. THREAD stores its data using Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve any amount of data from anywhere. The company set up its platform to automatically store multimodal data collected from trial participants, ranging from telehealth visit summaries to data from wearables, in Amazon S3. “We’re reducing the time our customers need to spend on data collection and data quality,” says John Reites, chief executive officer (CEO) of THREAD. “We’ve changed where the source data comes from so that we capture data directly from participants’ devices or from the person who inputs it. This allows THREAD’s customers to direct more resources toward high-level tasks that accelerate time to market, versus focusing on low-quality, manual efforts.”"

Tokenize Builds A Scalable Cost-Effective Digital Exchange Platform On AWS _ Case Study _ AWS.txt,"1 DevOps engineer instead of four to maintain and operate the entire IT infrastructure

80% cost reduction by deploying on the AWS Cloud

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. Route 53 connects user requests to internet applications running on AWS or on premises.

Amazon Virtual Private Cloud (Amazon VPC) gives you full control over your virtual networking environment, including resource placement, connectivity, and security.

AWS Graviton processors are designed by AWS to deliver the best price performance for your cloud workloads running in Amazon EC2. AWS Graviton2 processors deliver a major leap in performance and capabilities over first-generation AWS Graviton processors.

2022

Solution | Facilitating Business Growth on the Digital Exchange Platform

Tokenize Xchange (Tokenize) is a Singapore-based digital exchange platform that facilitates the trading of over 100 cryptocurrencies. Launched in 2017, Tokenize was one of the first three digital asset exchanges (DAX) to receive approval from the Securities Commission Malaysia and is now the second-largest operator in Malaysia by traded market share, at 40 percent. To date, the platform has amassed over 250,000 users from across 8 countries.
Helps Tokenize meet full regulatory compliance across 27 regions

Reinvestments: redirected cost savings to marketing, hiring, and customer acquisition

AWS Services Used

Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.

Tokenize was able to save up to 80 percent in hardware costs by deploying its platform entirely on the AWS Cloud. The startup has redirected these savings toward marketing, hiring, and customer acquisition.

Tokenize is a digital exchange platform that facilitates the trading of over 100 cryptocurrencies, including Bitcoin, Ethereum, and its own token, Tokenize Emblem (TKX). As a startup, Tokenize sought a scalable, cost-effective IT infrastructure that could support the performance and storage needs of cryptocurrency trading while keeping initial costs low.

The scalability and cost-efficiency of the AWS Cloud have given Tokenize a secure hosting environment without hefty upfront capital investments. AWS’s global reach has also helped Tokenize grow the number of users on its digital exchange platform. As of September 2022, the platform had grown to over 250,000 users within 48 months, 12 months faster than planned.

AWS also helps Tokenize meet security and compliance requirements globally. Amazon Virtual Private Cloud (Amazon VPC) secures the company’s virtual networking environment, including resource placement, connectivity, and security. Amazon CloudWatch monitors all of Tokenize’s applications, making sure that the platform achieves the 99.99 percent uptime required by regulatory compliance. Furthermore, because AWS is responsible for protecting the infrastructure that runs all of the services offered on the AWS Cloud, Tokenize is able to meet regulatory compliance in multiple countries, such as Malaysia and Singapore, without engaging in additional administrative and approval processes. To learn more, visit aws.amazon.com/financial-services.

Looking ahead, Tokenize plans to integrate Amazon CloudWatch with Slack, its primary messaging platform. Once integrated, Amazon CloudWatch will be able to monitor its entire blockchain infrastructure and automatically notify engineers when issues arise.

About Tokenize Xchange

Outcome | Helping Tokenize Reach Global Cryptocurrency Traders

“As a startup, we are particularly cost sensitive. AWS has a comprehensive suite of end-to-end services that not only allows Tokenize to grow its digital exchange platform, but also helps us achieve the desired performance while keeping cost firmly in check.”

With the AWS Cloud, Tokenize reduced the time it takes for the platform to be set up in a production-ready environment from up to 5 days to 24 hours.

Discover how Tokenize’s digital exchange platform saved up to 80 percent in upfront deployment costs by using Amazon Web Services.
Tokenize decided against an on-premises infrastructure because it would have cost US$12,800 upfront; required up to four DevOps engineers to monitor and operate its platform, adding to manpower costs; and delayed its digital exchange platform by 12 months while it complied with local laws and regulations.

Tokenize Builds A Scalable, Cost-Effective Digital Exchange Platform On AWS

Tokenize uses Amazon Route 53 in conjunction with Amazon EC2 across multiple Regions to ensure single-digit-millisecond latencies for high-speed transactions on the platform. It also uses Amazon Simple Storage Service (Amazon S3) to store and retrieve 8 TB of data, such as Know Your Customer (KYC) documents and financial reports. “As a startup, we are particularly cost sensitive. AWS has a comprehensive suite of end-to-end services that not only allows Tokenize to grow its digital exchange platform, but also helps us achieve the desired performance while keeping cost firmly in check,” shares Hong Qi Yu, founder and CEO of Tokenize.

Tokenize is fully built and deployed on Amazon Web Services (AWS) and automatically scales storage and compute capacities based on transaction volumes and active concurrent users. Tokenize also leveraged AWS’s global reach to amass over 250,000 users across 8 countries.

Tokenize uses Amazon Elastic Compute Cloud (Amazon EC2) to automatically scale and handle an average of 13.1 transactions per second (TPS) and 10,280 active concurrent users. During peak periods, such as in 2021, it was able to easily scale up within minutes to handle 18.1 TPS and 16,800 active concurrent users. Tokenize also deployed Amazon Relational Database Service (Amazon RDS) to automate provisioning, patching, and backups of its database on the cloud. As a managed service, Amazon RDS allows the company to operate with just one DevOps engineer, thereby reducing overall manpower costs.

Opportunity | Balancing Cost and Time to Market

Hong Qi Yu, Founder & CEO, Tokenize Xchange"

Toppan Case Study.txt,"“Working with AWS Training and Certification has proven to be an extremely effective means to achieve our digital transformation goals.”

Toppan has been evolving printing technologies for over 120 years. With the recent decline in conventional printing demand, it is promoting printing technologies for digital businesses. “Because the cloud is essential to the speedy creation and expansion of our DX business, we needed to develop HR who can use AWS at an early stage,” says Makoto Murata, general manager of the HR Development Center at Toppan’s Personnel and Labor Relations Division.

Using AWS Training for DX

Toppan Inc. (Toppan) is in the process of a new business transformation. Founded in 1900 as a printing company, Toppan has evolved into a global digital business with over 54,000 employees. In 2017, it embarked on a digital transformation (DX) for further growth and prioritized DX human resources (HR) in its 2021 midterm management plan. To accelerate its business transformation, Toppan wanted to foster DX HR and raise the DX business’s operating income to 30 percent of the total by 2025.
Accelerating Growth with AWS Training

Built in-house cloud skills to drive solution development

In 2021, the Personnel and Labor Relations Division, DX Design Division, and other stakeholders chose to conduct a training program with AWS Training and Certification, led by the HR Development Center. “In a survey asking what skills our employees wanted to acquire, we found that the demand for AWS skills was overwhelmingly high,” says Kensuke Yanagida, general manager of Development Strategy, ICT Development Center, DX Design Division. “Another success factor was having AWS Training and Certification customize the curriculum,” Murata adds. “Employees could acquire systematic knowledge in a short period based on the needs that we identified from the survey.”

Architecting on AWS

Using AWS Training, Toppan has improved cross-functional communication within the company. “For beginner-level students, acquiring basic cloud knowledge has made it easier to discuss AWS services with customers and in-house engineers,” says Yanagida. “Those who attended the Architecting on AWS classes are now using their training in actual system development projects, leading to speedy environment building and widespread cloud adoption.”

We offer both digital and classroom training that allows you to learn online at your own pace and learn best practices from an expert instructor. Whether you are just starting out, building on existing IT skills, or sharpening your cloud knowledge, AWS Training and Certification can help you be more effective and do more in the cloud.

Trained over 1,600 employees on AWS

To develop DX HR, Toppan turned to Amazon Web Services (AWS) and worked with AWS Training and Certification, which helps companies build and validate skills so they can get more out of the cloud. More than 1,600 employees took part in AWS Training, learning basic cloud skills that are essential to its DX journey and expanding Toppan’s digital solutions portfolio. “Training is a means to an end. It is what the trained individual accomplishes that is important,” says Shinichi Ohkubo, executive vice president and representative director at Toppan. “This collaborative initiative between the HR Development Center and the DX Design Division has helped drive our company transformation.”

Learn the foundations of cloud computing, storage, and networking on AWS in this one-day course. You will learn about AWS products, services, and common solutions so that you can make informed decisions about IT solutions based on your business requirements.

Toppan Inc. is a global printing company headquartered in Tokyo, Japan. Since 1900, it has continued to develop advanced applications in printing technology, supporting industries such as manufacturing, retail, and consumer goods.

Makoto Murata, General Manager of the HR Development Center at Toppan’s Personnel & Labor Relations Division

Streamlined cross-functional communication between teams

Accelerated DX journey through internal program promotion and company-wide training

Toppan has successfully reskilled its employees on AWS and is on track to achieve its goals for 2025. Its employees have a deeper knowledge of AWS services, empowering them to use AWS in new ways. In 2022, Toppan will offer the same training to drive HR development and further scale its DX business.
AWS Training and Certification

Helped 1,050 employees achieve AWS Certifications in 1 year

Toppan Reskills on AWS to Lift Its Digital Transformation to a New Stage

Benefits of AWS

Toppan has used AWS since 2015, starting with smaller projects and now powering critical internal workloads, like smart factory capabilities, Internet of Things data lakes, and an SAP environment used by 30 of its global group companies. Toppan releases more than 100 new services and functional improvements for end users annually, and most of them use AWS. As it expanded, it needed to grow employees’ cloud skills so they could better reflect customer needs in solution development.

Improved brand perception by actively promoting DX and business transformation

AWS Cloud Practitioner Essentials

From May to August 2021, Toppan trained over 1,600 employees, including those from technical and nontechnical sales, planning, and administrative teams. When Toppan announced the program to each department, many employees voluntarily applied, leading to overwhelming participation. The HR Development Center worked with AWS Training and Certification to design a unique program in line with the company’s DX HR development policy, which offered 3 training courses. “When offering the training, the AWS Training and Certification team took the skill level of the students into consideration, adjusted the pace of the lectures, and devised a simple explanation method, which helped the students,” says Rena Sasaki, project member at the HR Development Center at Toppan’s Personnel & Labor Relations Division.

AWS Technical Essentials

“Working with AWS Training and Certification has proven to be an extremely effective means to achieve our DX goals,” says Yanagida. “Through our efforts, we were able to send a message to both inside and outside the company that we are actively promoting DX. We will continue to use AWS services to meet the high expectations of our customers.”

About Toppan Inc.

Through a series of use case scenarios and practical learning, you’ll learn to identify services and features to build resilient, secure, and highly available IT solutions in the AWS Cloud. Expert AWS Instructors emphasize best practices using the AWS Well-Architected Framework and guide you through the process of designing optimal IT solutions, based on real-life scenarios.

For course placement, Toppan made decisions based on job type, cloud knowledge level, and the goals of individuals and teams. Nontechnical, beginner-level employees took AWS Cloud Practitioner Essentials, a basic course for individuals who want to develop a fundamental understanding of the AWS Cloud, held through a web conferencing system. Those who wanted a deeper understanding also took AWS Technical Essentials 2 (now renamed AWS Practical Startup Workshop). This course teaches the foundations of cloud computing, storage, and networking on AWS. Experienced AWS users took Architecting on AWS, a hands-on course on building a secure and highly available system environment using AWS services. Toppan also offered the opportunity to earn AWS Certifications, which validate technical skills and cloud expertise.
“With the support of AWS Training and Certification, we encouraged all applicants to take AWS Certification exams,” says Murata. “We helped 1,050 people achieve AWS Certifications in just 1 year.” Employees were satisfied with the program; an internal survey showed that 90 percent of Architecting on AWS participants were happy with the course.

Continuing to Improve Cloud Skills

2022

This updated digital course is for individuals who want to develop a fundamental understanding of the AWS Cloud, independent of any specific technical role. You’ll learn about AWS Cloud concepts, core AWS services, security, architecture, pricing, and support to build your AWS Cloud knowledge.

Through these efforts, Toppan has motivated employees to develop new solutions on AWS. For example, participants who took Architecting on AWS accelerated the development of review-it!, an automated proofreading service. It is common for multiple people to check for errors in the text when producing printed matter and product packages, which can be inefficient and burdensome. The review-it! service uses artificial intelligence and machine learning to automate this process, reducing workloads and preventing human errors. The teams adopted a Scrum methodology during development, which is unique to Toppan and helped the team derive solutions from both printing and digital perspectives. Using this approach and its new cloud skills, the team swiftly built the solution while properly reflecting users’ requirements. “Because of the extensive functions available on AWS, such as authorization settings and security controls, our team was able to build in a secure environment and save management time,” says Yanagida. “Using AWS services maximized the effectiveness of Scrum development.” The team also used the AWS Well-Architected Tool to review the state of applications and workloads on AWS, protecting the quality of the system environment and optimizing development costs."

Toyota Motor North America Case Study _ AWS.txt,"Increased visibility into application development processes and carbon footprint

Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications.

AWS Outposts

Solution | Reducing Costs by More than $10 Million Overall and Improving Governance with Chofer

Projects ship weekly instead of quarterly

Opportunity | Using AWS to Standardize Application Deployment for TMNA

Overview

TMNA, the operating subsidiary for the Toyota Motor Corporation in the United States, Canada, and Mexico, began using AWS in 2015. With over 1,600 cloud applications supporting its operations and more than 100 application teams, the company wanted to simplify and standardize the development of new applications on AWS. “We wanted to make adoption simple while bringing rigorous security standards and best practices into the design automatically,” says Kishore Jonnalagedda, director of engineering at TMNA.
$5 million saved annually, totaling over $10 million as of 2022

6 weeks of effort saved for one team, equivalent to $250,000

As Toyota Motor North America (TMNA) matured its cloud development strategy, its manual application deployment process was creating bottlenecks. To achieve a more cohesive strategy, the company wanted to facilitate the adoption of Amazon Web Services (AWS) for its developers by providing tools to build new applications in keeping with best practices and to reduce cognitive load.

Onboarding new developers and contractors happens more quickly as well. On AWS, TMNA can set up a sandbox account in less than a day. TMNA has also adopted Amazon CloudWatch, a solution that companies use to observe and monitor AWS resources and applications in the cloud and on premises. The company maintains observability by ingesting data from the AWS services it uses into Datadog, a software-as-a-service monitoring and analytics platform and an AWS Partner. Now, TMNA can develop applications with more transparency by tracking metrics and logs using Datadog dashboards, helping the team troubleshoot problems faster than before. “We know exactly what the architecture of an application looks like, and we can check the configuration,” says Jonnalagedda. “In a few minutes, the troubleshooting conversation is over, whereas previously, customers might have been unable to work for as long as 4 hours.”

Toyota Motor North America is the operating subsidiary for the Toyota Motor Corporation in the United States, Canada, and Mexico. Toyota works to create high-quality vehicles and to find innovative ways to advance society with cutting-edge technology.

By building Chofer and using Backstage on AWS, TMNA has accelerated the deployment of new applications, established best practices, and reduced time to market and costs. TMNA has built an internal cloud community, and its employees have become internal developer advocates. “The fact that AWS was able to set up a team to support us and build plug-ins for Backstage really helped us out,” says Jonnalagedda.

Since building this platform, TMNA has expanded its self-service catalog to involve other AWS services, including AWS Outposts, which gives companies the ability to run AWS infrastructure and services on premises for a truly consistent hybrid experience. TMNA creates these templates in conjunction with its security team, improving the overall governance of the company’s resources and validating that it has a 100 percent, A-plus security rating. “Security patterns and scalability are already taken into account,” says Jonnalagedda. “All these things—capacity, operations, cost optimization, cost transparency, and security—are being built into the application without the application team having to think of them in detail.”

As of 2022, TMNA has experienced a total cost reduction of more than $10 million overall and of around $5 million in annual cloud infrastructure costs, saving up to $96,000 in infrastructure costs per team.
Furthermore, TMNA can track its go-green initiatives by using Chofer’s Cloud Carbon Footprint tool, which shows the various TMNA teams’ carbon footprints, their cloud spend, and their cost optimization recommendations.

About Toyota Motor North America

This self-service catalog lets developers save time on deploying applications by avoiding going through the engineering and security teams. Now, the TMNA team can spin up a new environment in only 6 hours; it used to take months. One TMNA team saved 6 weeks’ worth of effort, which would have cost around $250,000 if it had chosen to build the application from scratch. With these time savings, TMNA can ship projects in weeks instead of quarterly. TMNA’s cloud team can also make sure deployments are backward compatible and upgraded along with the DevOps continuous integration and delivery pipelines, saving its application teams an additional estimated 4–6 weeks on these tasks.

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed Kubernetes service to run Kubernetes in the AWS Cloud and on-premises data centers.

Building a Development Platform to Support Secure Application Deployment Using Backstage and AWS with Toyota Motor North America

The company spent 6 months in late 2020 putting together a team and developing the required skill sets to build Chofer. TMNA developers enrolled in AWS Training and Certification, where AWS experts provide training to improve cloud skills, and the company took part in AWS Partner Network (APN) Immersion Days, custom workshops that are delivered by AWS Partners. Participating in training has given TMNA team members a shared vocabulary and understanding, which facilitates more efficient communication about projects.

AWS Outposts is a family of fully managed solutions delivering AWS infrastructure and services to virtually any on-premises or edge location for a truly consistent hybrid experience.

40 approved templates for deploying applications

TMNA began building its internal development platform using AWS and Backstage as a developer portal to facilitate the front end of the build in February 2021. By May 2021, the company released the minimum viable product of Chofer, which unifies infrastructure tooling, services, training, observability, cost tracking, infrastructure scaffolding, and documentation into a single streamlined development interface. This interface makes it simple for internal teams to create and manage applications on AWS. Using Chofer, TMNA has implemented backend parameters, including automatic firewall rules and network routing access across AWS Regions, IP address authorization and authentication, and dashboards for cost and sustainability transparency.
TMNA does this by using Backstage’s scaffolder and self-service catalog components to provide over 40 approved templates to its developers that include the necessary compute resources. For instance, TMNA developers can use Chofer to deploy containerized applications using Amazon ECS or Amazon Elastic Kubernetes Service (Amazon EKS), a managed container service to run and scale Kubernetes.

TMNA built a new internal development platform on AWS called Chofer using Backstage, an open-source framework for building developer portals, originally open sourced by Spotify. Chofer gives developers the tools to deploy modern applications that use services such as Amazon Elastic Container Service (Amazon ECS), a fully managed container orchestration service, across the organization. Since TMNA built Chofer, its team of developers has adopted over 40 different AWS services, helping the company modernize its applications and save money and developer time while facilitating faster, more secure application deployments at scale.

Kishore Jonnalagedda, Director of Engineering, Toyota Motor North America

Learn how TMNA in the automotive industry deploys applications faster using AWS.

“The fact that AWS was able to set up a team to support us and build plug-ins for Backstage really helped us out.”

TMNA application teams are working to further their cloud competencies and to showcase their cloud expertise through blogs and training, which will help other teams adopt Chofer and expand its cloud use to other areas of the business. “We’re in the initial stages of adopting cloud solutions in our factory and manufacturing area,” says Jonnalagedda. “We’re working to achieve the same cost-efficiency benefits and holistic cloud architecture across more of the company on AWS.”

Learn from AWS experts. Advance your skills and knowledge. Build your future in the AWS Cloud.

2022

Outcome | Scaling a Holistic Cloud Architecture to Additional Business Areas"

Track customer traffic in aisles and cash counters using Computer Vision _ AWS for Industries.txt,"AWS for Industries

Track customer traffic in aisles and cash counters using Computer Vision

by Sandeep Mehta and Rafael Koike | on 16 JUN 2023 | in Amazon API Gateway, Amazon CloudFront, Amazon Cognito, Amazon DynamoDB, Amazon QuickSight, Amazon Simple Storage Service (S3), AWS Lambda, AWS Panorama, Industries, Kinesis Data Streams, Kinesis Video Streams, Retail

The retail industry has changed dramatically over the last couple of decades, from small shops to large retail chains. The rise of ecommerce and the use of digital promotions and targeted marketing are just some examples of how technology has contributed to the immense growth of the market. From personalized customer experiences to sustainability, the field is ever evolving. As customers become more aware of different brands and their choices of products and services, retailers face increased pressure to thrive—with a definite need for a digital presence.
However, 72% of retail shopping is still done in brick-and-mortar stores (according to research from Forrester), as this gives consumers the physical experience of seeing, trying, and holding products in their hands. Forrester Research predicts that total retail sales will reach $5.5 trillion by 2027, and that 70% of that will be in-store sales. With customer traffic returning to stores post pandemic, there is a need to track and plan for customers’ preferences. Utilizing computer vision technology, customer traffic in stores can be recorded and used for the following use cases:

Better store planning
Efficient seasonal and holiday planning and reporting
Adjusting and taking action when there is increased customer traffic
Identifying safety issues and potential threats

Now, we will walk through each of the use cases to understand how computer vision technology can help retail sales and improve customer experiences.

Better store planning

Typically, the store manager is responsible for ensuring all products are available on the shelves, in the right place, and in sufficient quantities. By tracking aisle traffic, store managers can understand which aisles receive the most customer visits. This can help the store manager place popular products in popular aisles. Revenue is often lost when customers are unable to find the product they’re looking for and decide not to ask a store associate. Such revenue losses can be minimized by better product placement. A store’s workforce can also be planned more efficiently depending on customer volume. For example, store traffic data can be analyzed for a given time of day or for particular days of the week. Understanding daily, and even hourly, patterns can help better optimize a store’s workforce.

Efficient seasonal and holiday planning and reporting

Seasonal and holiday sales can bring higher customer traffic to the store. Being able to evaluate and compare data across stores and time periods can be of great value to retail stores and their management. For example, analyzing customer traffic (on a monthly, quarterly, and yearly basis) before, during, and after a peak season or holiday can indicate which stores are in demand based on location and service.

Adjusting and taking action when there is increased customer traffic

One of the most inconvenient things for customers is waiting in the checkout queue. The retail industry has provided various ways to solve this problem, such as self-checkout, digital checkout on mobile phones, and Just Walk Out technology. These solutions are efficient and beneficial depending on parameters such as store type, store location, customer volume, and more. If we track and report to the store manager when there are long lines at the cashier and/or self-checkout counters, the store manager can take corrective action, such as opening more checkout counters or addressing customer checkout issues. Learning the average time at the cash counter and other performance indicators can also help the store manager with workforce training.

Identifying safety issues and potential threats

There could be a safety issue, for example, if liquid is spilled on the floor. This would require the store manager’s immediate attention so cleaning can be done. Similarly, suspicious customer behavior could indicate potential theft or the carrying of a weapon. If a potential suspect is identified in time, it can prevent challenges in dealing with such threats.
Retailers, on average, saw a 26.5% increase in organized retail crime incidents in 2021, and retail theft cost retailers $95 billion in 2021, according to the National Retail Federation. Being able to identify, as early as possible, any customer carrying a weapon or potentially threatening other customers is paramount so that store security or police can properly intervene. Of course, before activating any potential threat response, all available information should be identified and carefully reviewed to prevent false alarms.

How can AWS help address these use cases?

Amazon Web Services (AWS) computer vision (CV), artificial intelligence and machine learning (AI/ML) technology, and cloud solutions can support and accelerate learning for the described use cases, both on premises and in the cloud. AWS Panorama devices support connecting to multiple camera streams at a given time and support running multiple ML models per stream. Once installed and connected to your network, AWS Panorama devices connect to the AWS Management Console. Register your AWS Panorama device and add video feeds from onsite cameras, deploy trained machine learning models, and run applications in minutes. AWS Panorama allows you to deploy CV applications to the edge, letting you run cloud-based machine learning where low latency, data privacy, and limited internet bandwidth are concerns. AWS Panorama offers a flexible option for adding CV to automate tasks that traditionally require human inspection and monitoring. This data can be further processed by AWS services to send notifications, take corrective actions, and build insights. This hybrid solution can bring value to efficient store management, loss prevention, and revenue improvement. The following is the reference solution architecture diagram, along with how each group of services helps bring intelligence to store management.

Figure 1. Reference solution architecture for tracking and analyzing customer traffic

Solution Walkthrough

We can walk through the reference architecture per each section shown in the diagram:

1. AWS Panorama and PoE Cameras: PoE (Power-over-Ethernet) cameras are mounted in the store to capture each aisle and the checkout area. These cameras are connected to an AWS Panorama device at the store. With computer vision technologies like AWS Panorama that apply AI/ML to video cameras positioned throughout the store, retailers can access shopper traffic, customer movements, shelf and product interactions, checkout queues, and loss prevention activities and patterns. The code is deployed to the AWS Panorama device on premises. The AWS Panorama device delivers detected behaviors or patterns to your cloud-based analytics data framework.

2. Data Ingestion, Storage and AI/ML: Video streams from the cameras are captured by Amazon Kinesis Video Streams, which collects and processes them as near real-time streaming data. The video streams are stored in an Amazon Simple Storage Service (Amazon S3) bucket for playback. Amazon S3 provides scalable cloud storage with high durability and security. The videos can be stored for a span of a day, weeks, or months. Amazon Kinesis Data Streams captures the inferences derived by the AWS Panorama AI model and application code. An inference could be the number of customers, a safety issue, or the detection of a weapon carried by a customer. Amazon Kinesis Video Streams also feeds the stream to Amazon SageMaker, which can further train the AI model to produce more accurate findings.
SageMaker allows developers to build, train, and deploy machine learning models for various use cases.

3. Data Processing, storing inference results and Business Intelligence: The findings received from Kinesis Data Streams are fed to AWS Lambda, a serverless compute service that can process thousands of events per second (a minimal sketch of this step follows below). The inference results are stored in Amazon DynamoDB, a NoSQL database that stores key-value pairs with single-digit-millisecond performance at scale. Upon updating this data in Amazon DynamoDB, we can configure the data inserts to invoke an Amazon DynamoDB stream, which can invoke an AWS Lambda function, the “Inference Evaluator.” The new data will be stored in an S3 bucket, which we can use as an inference data lake. It is a common use case for Amazon S3 to be used as a data lake solution for holding large amounts of historical data, which can be further massaged, curated, or used for analysis. This data can be fed directly to Amazon QuickSight, a business intelligence (BI) tool for reporting and analysis. Amazon QuickSight powers data-driven organizations with unified BI at hyperscale. With Amazon QuickSight, all users can meet varying analytic needs from the same source of truth through modern interactive dashboards, paginated reports, embedded analytics, and natural language queries. These dashboards can be presented to the store manager and/or corporate management for further analysis, and can be embedded in a store monitoring application. The following table shows a simple example of inference data.

Figure 2. Inference results for each aisle can be stored in Amazon DynamoDB

4. Application, User authentication and access: The store manager logs in to the application, which is made available through Amazon CloudFront, a content delivery network (CDN) service. This application can be hosted as a static website using an S3 bucket as the origin. Amazon Cognito is used for managing the application’s user pool and provides user authentication for accessing the application. Amazon API Gateway is a fully managed service that enables developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the “front door” for applications to access data and business logic. Amazon API Gateway can integrate with an Amazon Cognito authorizer for validating access when calling the APIs. A “business facade” AWS Lambda function is used to gain access to the video streams stored in the S3 bucket.

5. Notifications: The inference results can also invoke an AWS Lambda function to update the notification table. The notification table receives an entry whenever any configured limit is crossed—for example, the number of customers in the cashier’s aisle. Whenever a notification is entered in the notification table, a notification-generating Lambda function is invoked. This Lambda function invokes Amazon Pinpoint, which in turn can send an email or a text (depending on the configuration) to the store manager’s mobile device describing the event. The store manager can then take action as needed. The entire architecture is serverless and requires minimal infrastructure beyond mounting new, or using existing, PoE cameras. Retailers can use these services to enhance the store’s visibility and gather value-added data points for the mentioned use cases. AWS Panorama can even power use cases such as shelf inventory management, misplaced items, and more.
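To make step 3 concrete, here is a minimal sketch of a Lambda handler that consumes inference records from Kinesis Data Streams and writes them to DynamoDB. The table name, record fields, and alert threshold are illustrative assumptions, not part of the original architecture.

import base64
import json
import boto3

# Hypothetical table name and record schema -- illustrative only.
TABLE = boto3.resource("dynamodb").Table("AisleTraffic")
QUEUE_ALERT_THRESHOLD = 6  # example limit for customers waiting at a counter

def handler(event, context):
    """Process a batch of Kinesis records containing Panorama inferences."""
    for record in event["Records"]:
        # Kinesis delivers each payload base64-encoded inside the event.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Persist the inference result, e.g. {"aisle": "A7", "ts": "...", "people": 4}.
        TABLE.put_item(Item=payload)
        # Flag long checkout queues so the notification flow (step 5) can react.
        if payload.get("people", 0) >= QUEUE_ALERT_THRESHOLD:
            print(f"ALERT: {payload['aisle']} has {payload['people']} people waiting")

The same pattern extends to the notification table in step 5: a second Lambda function, triggered by the table's DynamoDB stream, would call Amazon Pinpoint to message the store manager.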
This approach can also be applied to banks, public buildings, and other industries where human presence needs to be tracked and analyzed.

Conclusion

This blog explains how computer vision technology can be leveraged for in-store automation with actionable insights for the retail industry. Helping customers in-store is one of the areas where customers can take away a positive experience, which ultimately drives higher revenue and increased customer loyalty. Contact an AWS Representative to learn how we can help accelerate your business.

Further Reading

Customers using AWS Panorama
Automation at Tyson Foods with computer vision
Building and deploying an object detection computer vision application
Nordcloud’s Automated Solution for Computer Vision Applications

Sandeep Mehta

Sandeep is a Senior Solutions Architect and is part of the Analytics TFC at AWS. Sandeep is passionate about helping customers design modern cloud architectures and recommending the right services for their requirements. He understands business use cases and translates them into secure, scalable, and resilient IT solutions.

Rafael Koike

Rafael M. Koike is a Principal Solutions Architect supporting Enterprise customers in the Southeast and is part of the Storage TFC. Rafael has a passion for building, and his expertise in security, storage, networking, and application development has been instrumental in helping customers move to the cloud securely and quickly."

Train a Large Language Model on a single Amazon SageMaker GPU with Hugging Face and LoRA _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog

Train a Large Language Model on a single Amazon SageMaker GPU with Hugging Face and LoRA

by Philipp Schmid, Doug Kelly, and Robert Fisher | on 05 JUN 2023 | in Advanced (300), Amazon SageMaker, Artificial Intelligence, Generative AI, Technical How-to

This post is co-written with Philipp Schmid from Hugging Face.

We have all heard about the progress being made in the field of large language models (LLMs) and the ever-growing number of problem sets where LLMs are providing valuable insights. Large models, when trained over massive datasets and several tasks, are also able to generalize well over tasks that they aren’t trained specifically for. Such models are called foundation models, a term first popularized by the Stanford Institute for Human-Centered Artificial Intelligence. Even though these foundation models are able to generalize well, especially with the help of prompt engineering techniques, often the use case is so domain specific, or the task is so different, that the model needs further customization. One approach to improve the performance of a large model for a specific domain or task is to further train the model with a smaller, task-specific dataset. Although this approach, known as fine-tuning, successfully improves the accuracy of LLMs, it requires modifying all of the model weights. Fine-tuning is much faster than the pre-training of a model thanks to the much smaller dataset size, but it still requires significant computing power and memory. Fine-tuning modifies all the parameter weights of the original model, which makes it expensive and results in a model that is the same size as the original. To address these challenges, Hugging Face introduced the Parameter-Efficient Fine-Tuning library (PEFT).
This library allows you to freeze most of the original model weights and replace or extend model layers by training an additional, much smaller, set of parameters. This makes training much less expensive in terms of required compute and memory. In this post, we show you how to train the 7-billion-parameter BloomZ model using just a single graphics processing unit (GPU) on Amazon SageMaker , Amazon’s machine learning (ML) platform for preparing, building, training, and deploying high-quality ML models. BloomZ is a general-purpose natural language processing (NLP) model. We use PEFT to optimize this model for the specific task of summarizing messenger-like conversations. The single-GPU instance that we use is a low-cost example of the many instance types AWS provides. Training this model on a single GPU highlights AWS’s commitment to being the most cost-effective provider of AI/ML services. The code for this walkthrough can be found on the Hugging Face notebooks GitHub repository under the sagemaker/24_train_bloom_peft_lora folder. Prerequisites In order to follow along, you should have the following prerequisites: An AWS account. A Jupyter notebook within Amazon SageMaker Studio or SageMaker notebook instances. You will need access to the SageMaker ml.g5.2xlarge instance type, containing a single NVIDIA A10G GPU. On the AWS Management Console , navigate to Service Quotas for SageMaker and request a 1-instance increase for the following quotas: ml.g5.2xlarge for training job usage and ml.g5.2xlarge for endpoint usage . After your requested quotas are applied to your account, you can use the default Studio Python 3 (Data Science) image with an ml.t3.medium instance to run the notebook code snippets. For the full list of available kernels, refer to Available Amazon SageMaker Kernels . Set up a SageMaker session Use the following code to set up your SageMaker session:

import sagemaker
import boto3

sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket = None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is not given
    sagemaker_session_bucket = sess.default_bucket()

try:
    role = sagemaker.get_execution_role()
except ValueError:
    iam = boto3.client('iam')
    role = iam.get_role(RoleName='sagemaker_execution_role')['Role']['Arn']

sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)

print(f""sagemaker role arn: {role}"")
print(f""sagemaker bucket: {sess.default_bucket()}"")
print(f""sagemaker session region: {sess.boto_region_name}"")

Load and prepare the dataset We use the samsum dataset, a collection of 16,000 messenger-like conversations with summaries. The conversations were created and written down by linguists fluent in English. The following is an example of the dataset:

{
  ""id"": ""13818513"",
  ""summary"": ""Amanda baked cookies and will bring Jerry some tomorrow."",
  ""dialogue"": ""Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)""
}

To train the model, you need to convert the inputs (text) to token IDs. This is done by a Hugging Face Transformers tokenizer. For more information, refer to Chapter 6 of the Hugging Face NLP Course. 
Convert the inputs with the following code:

from transformers import AutoTokenizer

model_id = ""bigscience/bloomz-7b1""

# Load tokenizer of BLOOMZ
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.model_max_length = 2048  # overwrite wrong value

Before starting training, you need to process the data. Once it’s trained, the model will take a set of text messages as the input and generate a summary as the output. You need to format the data as a prompt (the messages) with a correct response (the summary). You also need to chunk examples into longer input sequences to optimize the model training. See the following code:

from random import randint
from itertools import chain
from functools import partial
from datasets import load_dataset

# Load the samsum training split introduced above
dataset = load_dataset(""samsum"", split=""train"")

# custom instruct prompt start
prompt_template = f""Summarize the chat dialogue:\n{{dialogue}}\n---\nSummary:\n{{summary}}{{eos_token}}""

# template dataset to add prompt to each sample
def template_dataset(sample):
    sample[""text""] = prompt_template.format(dialogue=sample[""dialogue""],
                                            summary=sample[""summary""],
                                            eos_token=tokenizer.eos_token)
    return sample

# apply prompt template per sample
dataset = dataset.map(template_dataset, remove_columns=list(dataset.features))
print(dataset[randint(0, len(dataset))][""text""])

# empty lists to save remainder from batches to use in next batch
remainder = {""input_ids"": [], ""attention_mask"": []}

def chunk(sample, chunk_length=2048):
    # define global remainder variable to save remainder from batches to use in next batch
    global remainder
    # Concatenate all texts and add remainder from previous batch
    concatenated_examples = {k: list(chain(*sample[k])) for k in sample.keys()}
    concatenated_examples = {k: remainder[k] + concatenated_examples[k] for k in concatenated_examples.keys()}
    # get total number of tokens for batch
    batch_total_length = len(concatenated_examples[list(sample.keys())[0]])
    # get max number of chunks for batch
    if batch_total_length >= chunk_length:
        batch_chunk_length = (batch_total_length // chunk_length) * chunk_length
    # Split by chunks of max_len.
    result = {
        k: [t[i : i + chunk_length] for i in range(0, batch_chunk_length, chunk_length)]
        for k, t in concatenated_examples.items()
    }
    # add remainder to global variable for next batch
    remainder = {k: concatenated_examples[k][batch_chunk_length:] for k in concatenated_examples.keys()}
    # prepare labels
    result[""labels""] = result[""input_ids""].copy()
    return result

# tokenize and chunk dataset
lm_dataset = dataset.map(
    lambda sample: tokenizer(sample[""text""]),
    batched=True,
    remove_columns=list(dataset.features),
).map(
    partial(chunk, chunk_length=2048),
    batched=True,
)

# Print total number of samples
print(f""Total number of samples: {len(lm_dataset)}"")

Now you can use the FileSystem integration to upload the dataset to Amazon Simple Storage Service (Amazon S3):

# save train_dataset to s3
training_input_path = f's3://{sess.default_bucket()}/processed/samsum-sagemaker/train'
lm_dataset.save_to_disk(training_input_path)

print(""uploaded data to:"")
print(f""training dataset to: {training_input_path}"")

Fine-tune BLOOMZ-7B with LoRA and bitsandbytes int-8 on SageMaker The Hugging Face BLOOMZ-7B model card indicates its initial training was distributed over 8 nodes, each with 8 A100 80 GB GPUs and 512 GB of CPU memory. This computing configuration is not readily accessible, is cost-prohibitive to consumers, and requires expertise in distributed training performance optimization. 
SageMaker lowers the barriers to replication of this setup through its distributed training libraries; however, the cost of eight comparable on-demand ml.p4de.24xlarge instances would be $376.88 per hour. Furthermore, the fully trained model consumes about 40 GB of memory, which exceeds the available memory of many individual consumer GPUs and requires strategies for large-model inferencing. As a result, full fine-tuning of the model for your task over multiple model runs and deployment would require significant compute, memory, and storage costs on hardware that isn’t readily accessible to consumers. Our goal is to find a way to adapt BLOOMZ-7B to our chat summarization use case in a more accessible and cost-effective way while maintaining accuracy. To enable our model to be fine-tuned on a SageMaker ml.g5.2xlarge instance with a single consumer-grade NVIDIA A10G GPU, we employ two techniques to reduce the compute and memory requirements for fine-tuning: LoRA and quantization. LoRA (Low-Rank Adaptation) is a technique that significantly reduces the number of model parameters and associated compute needed for fine-tuning to a new task without a loss in predictive performance. First, it freezes your original model weights and instead optimizes smaller rank-decomposition weight matrices for your new task rather than updating the full weights, and then injects these adapted weights back into the original model. Consequently, fewer weight gradient updates mean less compute and GPU memory during fine-tuning. The intuition behind this approach is that LoRA allows LLMs to focus on the most important input and output tokens while ignoring redundant and less important tokens. To deepen your understanding of the LoRA technique, refer to the original paper LoRA: Low-Rank Adaptation of Large Language Models . In addition to the LoRA technique, you use the bitsandbytes Hugging Face integration LLM.int8() method to quantize the frozen BloomZ model, that is, to reduce the precision of the weight and bias values by rounding them from float16 to int8. Quantization reduces the needed memory for BloomZ by about four times, which enables you to fit the model on the A10G GPU instance without a significant loss in predictive performance. To deepen your understanding of how int8 quantization works, its implementation in the bitsandbytes library, and its integration with the Hugging Face Transformers library, see A Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes . Hugging Face has made LoRA and quantization accessible across a broad range of transformer models through the PEFT library and its integration with the bitsandbytes library. The create_peft_config() function in the prepared script run_clm.py illustrates their usage in preparing your model for training:

def create_peft_config(model):
    from peft import (
        get_peft_model,
        LoraConfig,
        TaskType,
        prepare_model_for_int8_training,
    )
    peft_config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        inference_mode=False,
        r=8,  # LoRA attention dimension (the rank of the update matrices)
        lora_alpha=32,  # the alpha parameter for LoRA scaling
        lora_dropout=0.05,  # the dropout probability for LoRA layers
        target_modules=[""query_key_value""],
    )
    # prepare int-8 model for training
    model = prepare_model_for_int8_training(model)
    model = get_peft_model(model, peft_config)
    model.print_trainable_parameters()
    return model

With LoRA, the output from print_trainable_parameters() indicates we were able to reduce the number of trainable parameters from 7 billion to 3.9 million. This means that only about 0.056% of the original model parameters need to be updated. This significant reduction in compute and memory requirements allows us to fit and train our model on the GPU without issues. To create a SageMaker training job, you will need a Hugging Face estimator. The estimator handles end-to-end SageMaker training and deployment tasks. SageMaker takes care of starting and managing all the required Amazon Elastic Compute Cloud (Amazon EC2) instances for you. Additionally, it provides the correct Hugging Face training container, uploads the provided scripts, and downloads the data from our S3 bucket into the container at the path /opt/ml/input/data . Then, it starts the training job. See the following code:

import time

# define Training Job Name
job_name = f'huggingface-peft-{time.strftime(""%Y-%m-%d-%H-%M-%S"", time.localtime())}'

from sagemaker.huggingface import HuggingFace

# hyperparameters, which are passed into the training job
hyperparameters = {
    'model_id': model_id,                           # pre-trained model
    'dataset_path': '/opt/ml/input/data/training',  # path where sagemaker will save training dataset
    'epochs': 3,                                    # number of training epochs
    'per_device_train_batch_size': 1,               # batch size for training
    'lr': 2e-4,                                     # learning rate used during training
}

# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point          = 'run_clm.py',     # train script
    source_dir           = 'scripts',        # directory which includes all the files needed for training
    instance_type        = 'ml.g5.2xlarge',  # instance type used for the training job
    instance_count       = 1,                # the number of instances used for training
    base_job_name        = job_name,         # the name of the training job
    role                 = role,             # IAM role used in training job to access AWS resources, e.g. S3
    volume_size          = 300,              # the size of the EBS volume in GB
    transformers_version = '4.26',           # the transformers version used in the training job
    pytorch_version      = '1.13',           # the pytorch version used in the training job
    py_version           = 'py39',           # the python version used in the training job
    hyperparameters      = hyperparameters,
)

You can now start your training job using the .fit() method, passing the S3 path to the training data:

# define a data input dictionary with our uploaded s3 uris
data = {'training': training_input_path}

# starting the train job with our uploaded datasets as inputs
huggingface_estimator.fit(data, wait=True)

Using LoRA and quantization makes fine-tuning BLOOMZ-7B to our task affordable and efficient with SageMaker. When using SageMaker training jobs, you only pay for GPUs for the duration of model training. In our example, the SageMaker training job took 20,632 seconds, which is about 5.7 hours. The ml.g5.2xlarge instance we used costs $1.515 per hour for on-demand usage. As a result, the total cost for training our fine-tuned BLOOMZ-7B model was only $8.63. Comparatively, full fine-tuning of the model’s 7 billion weights would cost an estimated $600, or 6,900% more per training run, assuming linear GPU scaling on the original computing configuration outlined in the Hugging Face model card. In practice, this would further vary depending upon your training strategy, instance selection, and instance pricing. 
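As a quick sanity check on these numbers, the arithmetic from the two preceding paragraphs can be restated in a few lines of Python. This is only the back-of-the-envelope calculation, not output from the training job itself:

# Trainable-parameter fraction reported by print_trainable_parameters()
trainable_params = 3.9e6
total_params = 7e9
print(f'trainable fraction: {trainable_params / total_params:.3%}')  # -> 0.056%

# Cost of the single-GPU LoRA fine-tuning run
duration_seconds = 20_632
price_per_hour = 1.515  # ml.g5.2xlarge on-demand rate quoted above
cost = duration_seconds / 3600 * price_per_hour
print(f'training cost: ${cost:.2f}')  # -> $8.68; the quoted $8.63 rounds the duration to 5.7 hours first

Either way, the LoRA run comes in roughly 70 times cheaper than the estimated $600 for a full fine-tuning run.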
We could also further reduce our training costs by using SageMaker managed Spot Instances . However, there is a possibility this would result in the total training time increasing due to Spot Instance interruptions. See Amazon SageMaker Pricing for instance pricing details. Deploy the model to a SageMaker endpoint for inference With LoRA, you previously adapted a smaller set of weights to your new task. You need a way to combine these task-specific weights with the pre-trained weights of the original model. In the run_clm.py script, the PEFT library merge_and_unload() method takes care of merging the base BLOOMZ-7B model with the updated adapter weights fine-tuned to your task to make them easier to deploy without introducing any inference latency compared to the original model. In this section, we go through the steps to create a SageMaker model from the fine-tuned model artifact and deploy it to a SageMaker endpoint for inference. First, you can create a Hugging Face model using your new fine-tuned model artifact for deployment to a SageMaker endpoint. Because you previously trained the model with a SageMaker Hugging Face estimator, you can deploy the model immediately. You could instead upload the trained model to an S3 bucket and use it to create a model package later. See the following code:

from sagemaker.huggingface import HuggingFaceModel

# 1. create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    model_data=huggingface_estimator.model_data,
    # model_data=""s3://hf-sagemaker-inference/model.tar.gz"",  # Change to your model path
    role=role,
    transformers_version=""4.26"",
    pytorch_version=""1.13"",
    py_version=""py39"",
    model_server_workers=1,
)

As with any SageMaker model, you can deploy it using the deploy() method, passing in the desired number and type of instances. In this example, we use a G5 instance type equipped with a single NVIDIA A10G GPU, the same GPU type the model was fine-tuned on in the previous step:

# 2. deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type=""ml.g5.4xlarge""
)

It may take 5–10 minutes for the SageMaker endpoint to bring your instance online and download your model in order to be ready to accept inference requests. When the endpoint is running, you can test it by sending a sample dialog from the dataset test split. First load the test split using the Hugging Face Datasets library. Next, select a random integer for index slicing a single test sample from the dataset array. Using string formatting, combine the test sample with a prompt template into a structured input to guide our model’s response. This structured input can then be combined with additional model input parameters into a formatted sample JSON payload. Finally, invoke the SageMaker endpoint with the formatted sample and print the model’s output summarizing the sample dialog. See the following code:

from random import randint
from datasets import load_dataset

# 1. Load dataset from the hub
test_dataset = load_dataset(""samsum"", split=""test"")

# 2. select a random test sample
sample = test_dataset[randint(0, len(test_dataset))]

# 3. format the sample
prompt_template = f""Summarize the chat dialogue:\n{{dialogue}}\n---\nSummary:\n""
formatted_sample = {
    ""inputs"": prompt_template.format(dialogue=sample[""dialogue""]),
    ""parameters"": {
        ""do_sample"": True,  # sample the output from predicted probabilities
        ""top_p"": 0.9,       # nucleus sampling technique, Fan et al. (2018)
        ""temperature"": 0.1,     # increase the likelihood of high-probability words and decrease the likelihood of low-probability words
        ""max_new_tokens"": 100,  # maximum number of tokens to generate
    }
}

# 4. Invoke the SageMaker endpoint with the formatted sample
res = predictor.predict(formatted_sample)

# 5. Print the model output
print(res[0][""generated_text""].split(""Summary:"")[-1])
# Sample model output: Kirsten and Alex are going bowling this Friday at 7 pm. They will meet up and then go together.

Now let’s compare the model-generated summary to the reference summary from the test sample:

print(sample[""summary""])
# Reference summary: Kirsten reminds Alex that the youth group meets this Friday at 7 pm to go bowling.

Clean up Now that you’ve tested your model, make sure that you clean up the associated SageMaker resources to prevent continued charges:

predictor.delete_model()
predictor.delete_endpoint()

Summary In this post, you used the Hugging Face Transformers, PEFT, and bitsandbytes libraries with SageMaker to fine-tune a BloomZ large language model on a single GPU for $8.63 and then deployed the model to a SageMaker endpoint for inference on a test sample. SageMaker offers multiple ways to use Hugging Face models; for more examples, check out the AWS Samples GitHub . To continue using SageMaker to fine-tune foundation models, try out some of the techniques in the post Architect personalized generative AI SaaS applications on Amazon SageMaker . We also encourage you to learn more about Amazon Generative AI capabilities by exploring  JumpStart ,  Amazon Titan  models, and Amazon Bedrock . About the Authors Philipp Schmid is a Technical Lead at Hugging Face with the mission to democratize good machine learning through open source and open science. Philipp is passionate about productionizing cutting-edge and generative AI machine learning models. He loves to share his knowledge on AI and NLP at various meetups such as Data Science on AWS, and on his technical blog . Robert Fisher is a Sr. Solutions Architect for Healthcare and Life Sciences customers. He works closely with customers to understand how AWS can help them solve problems, especially in the AI/ML space. Robert has many years of experience in software engineering across a range of industry verticals including medical devices, fintech, and consumer-facing applications. Doug Kelly is an AWS Sr. Solutions Architect who serves as a trusted technical advisor to top machine learning startups in verticals ranging from machine learning platforms and autonomous vehicles to precision agriculture. He is a member of the AWS ML technical field community, where he specializes in supporting customers with MLOps and ML inference workloads." Transform analyze and discover insights from unstructured healthcare data using Amazon HealthLake _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog Transform, analyze, and discover insights from unstructured healthcare data using Amazon HealthLake by Shravan Vurputoor, Rafael Koike, and Randheer Gehlot | on 09 MAY 2023 | in Amazon Athena, Amazon HealthLake, Amazon QuickSight, Amazon SageMaker, Amazon Simple Storage Service (S3), Amazon Textract, AWS Lambda, Healthcare Healthcare data is complex and siloed, and exists in various formats. 
An estimated 80% of data within organizations is considered to be unstructured or “dark” data that is locked inside text, emails, PDFs, and scanned documents. This data is difficult to interpret or analyze programmatically and limits how organizations can derive insights from it and serve their customers more effectively. The rapid rate of data generation means that organizations that aren’t investing in document automation risk getting stuck with legacy processes that are manual, slow, error prone, and difficult to scale. In this post, we propose a solution that automates ingestion and transformation of previously untapped PDFs and handwritten clinical notes and data. We explain how to extract information from customer clinical data charts using Amazon Textract , then use the raw extracted text to identify discrete data elements using Amazon Comprehend Medical . We store the final output in Fast Healthcare Interoperability Resources (FHIR) compatible format in Amazon HealthLake , making it available for downstream analytics. Solution overview AWS provides a variety of services and solutions for healthcare providers to unlock the value of their data. For our solution, we process a small sample of documents through Amazon Textract and load that extracted data as appropriate FHIR resources in Amazon HealthLake. We create a custom process for FHIR conversion and test it end to end. The data is first loaded into DocumentReference . Amazon HealthLake then creates system-generated resources after processing this unstructured text in DocumentReference and loads it into Condition , MedicationStatement , and Observation resources. We identify a few data fields within FHIR resources like patient ID, date of service, provider type, and name of medical facility. A MedicationStatement is a record of a medication that is being consumed by a patient. It may indicate that the patient is taking the medication now, has taken the medication in the past, or will be taking the medication in the future. A common scenario where this information is captured is during the history-taking process in the course of a patient visit or stay. The source of medication information could be the patient’s memory, a prescription bottle, or from a list of medications the patient, clinician, or other party maintains. Observations are a central element in healthcare, used to support diagnosis, monitor progress, determine baselines and patterns, and even capture demographic characteristics. Most observations are simple name/value pair assertions with some metadata, but some observations group other observations together logically, or could even be multi-component observations. The Condition resource is used to record detailed information about a condition, problem, diagnosis, or other event, situation, issue, or clinical concept that has risen to a level of concern. The condition could be a point-in-time diagnosis in the context of an encounter, an item on the practitioner’s problem list, or a concern that doesn’t exist on the practitioner’s problem list. The following diagram shows the workflow to migrate unstructured data into FHIR for AI and machine learning (ML) analysis in Amazon HealthLake. The workflow steps are as follows: A document is uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. The document upload in Amazon S3 triggers an AWS Lambda function. The Lambda function sends the image to Amazon Textract. 
Amazon Textract extracts text from the image and stores the output in a separate Amazon Textract output S3 bucket. The final result is stored as specific FHIR resources (the extracted text is loaded in DocumentReference as base64 encoded text) in Amazon HealthLake to extract meaning from the unstructured data with integrated Amazon Comprehend Medical for easy search and querying. Users can create meaningful analyses and run interactive analytics using Amazon Athena . Users can build visualizations, perform ad hoc analysis, and quickly get business insights using Amazon QuickSight . Users can make predictions with health data using Amazon SageMaker ML models. Prerequisites This post assumes familiarity with the following services: Amazon Athena AWS Cloud Development Kit (AWS CDK) Amazon CloudWatch AWS Lambda AWS Lake Formation Amazon QuickSight Amazon SageMaker Amazon S3 By default, the integrated Amazon Comprehend Medical natural language processing (NLP) capability within Amazon HealthLake is disabled in your AWS account. To enable it, submit a support case with your account ID, AWS Region, and Amazon HealthLake data store ARN. For more information, refer to How do I turn on HealthLake’s integrated natural language processing feature . Refer to the GitHub repo for more deployment details. Deploy the solution architecture To set up the solution, complete the following steps: Clone the  GitHub repo , run  cdk deploy PdfMapperToFhirWorkflow  from your command prompt or terminal and follow the README file. Deployment will complete in approximately 30 minutes.  On the Amazon S3 console, navigate to the bucket starting with pdfmappertofhirworkflow -, which was created as part of cdk deploy .  Inside the bucket, create a folder called uploads and upload the sample PDF ( SampleMedicalRecord.pdf ). As soon as the document upload is successful, it will trigger the pipeline, and you can start seeing data in Amazon HealthLake, which you can query using several AWS tools. Query the data To explore your data, complete the following steps: On the CloudWatch console, search for the HealthlakeTextract log group. In the log group details, note down the unique ID of the document you processed. On the Amazon HealthLake console, choose Data Stores in the navigation pane. Select your data store and choose Run query . For Query type , choose Search with GET . For Resource type , choose DocumentReference . For Search parameters , enter the parameter as relates to and the value as DocumentReference/ Unique ID. Choose Run query . In the Response body section, minimize the resource sections to just view the six resources that were created for the six-page PDF document. The following screenshot shows the integrated analysis with Amazon Comprehend Medical and NLP enabled. The screenshot on the left is the source PDF; the screenshot on the right is the NLP result from Amazon HealthLake. You can also run a query with Query type set as Read and Resource type set as Condition using the appropriate resource ID. The following screenshot shows the query results. On the Athena console, run the following query: SELECT * FROM ""healthlakestore"".""documentreference""; Similarly, you can query MedicationStatement , Condition , and Observation resources. Clean up After you’re done using this solution, run cdk destroy PdfMapperToFhirWorkflow to ensure you don’t incur additional charges. For more information, refer to AWS CDK Toolkit (cdk command) . 
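Before moving on, it may help to see what the ingestion trigger (steps 2–4 in the workflow above) looks like in code. The following is a minimal illustrative sketch, not the Lambda function deployed by the CDK stack; the output bucket name is a placeholder, and a production pipeline handling multi-page PDFs would use the asynchronous start_document_text_detection API instead:

import json
import urllib.parse

import boto3

textract = boto3.client('textract')
s3 = boto3.client('s3')

OUTPUT_BUCKET = 'example-textract-output-bucket'  # placeholder; the stack provisions its own

def handler(event, context):
    # The S3 event notification carries the bucket and key of the uploaded document.
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(record['s3']['object']['key'])

    # Synchronous text detection, suitable for single-page images.
    response = textract.detect_document_text(
        Document={'S3Object': {'Bucket': bucket, 'Name': key}}
    )

    # Keep only the detected lines of text.
    lines = [block['Text'] for block in response['Blocks'] if block['BlockType'] == 'LINE']

    # Store the raw extraction for the downstream FHIR conversion step.
    s3.put_object(
        Bucket=OUTPUT_BUCKET,
        Key=key + '.json',
        Body=json.dumps({'source': key, 'lines': lines}).encode('utf-8'),
    )
    return {'lines_detected': len(lines)}

In the actual solution, the extracted text is then loaded as base64-encoded text into a DocumentReference resource in Amazon HealthLake, where the integrated Amazon Comprehend Medical processing creates the Condition, MedicationStatement, and Observation resources described earlier.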
Conclusion AWS AI services and Amazon HealthLake can help store, transform, query, and analyze insights from unstructured healthcare data. Although this post only covered a PDF clinical chart, you could extend the solution to other types of healthcare PDFs, images, and handwritten notes. After the data is extracted into text form, parsed into discrete data elements using Amazon Comprehend Medical, and stored in Amazon HealthLake, it could be further enriched by downstream systems to drive meaningful and actionable healthcare information and ultimately improve patient health outcomes. The proposed solution doesn’t require the deployment and maintenance of server infrastructure. All services are either managed by AWS or serverless. With AWS’s pay-as-you-go billing model and its depth and breadth of services, the cost and effort of initial setup and experimentation are significantly lower than traditional on-premises alternatives. Additional resources For more information about Amazon HealthLake, refer to the following: Amazon Textract IDP CDK Constructs and Samples How to modernize legacy HL7 data in Amazon HealthLake Addressing Health Equity through Remote Patient Monitoring and Continuity of Care Advance pediatric care using Amazon HealthLake for scalable FHIR-based data analytics Unlock patient data insights using Amazon HealthLake Build a cognitive search and a health knowledge graph using AWS AI services About the Authors Shravan Vurputoor is a Senior Solutions Architect at AWS. As a trusted customer advocate, he helps organizations understand best practices for advanced cloud-based architectures and advises on strategies that drive successful business outcomes across a broad set of enterprise customers, drawing on his passion for educating, training, designing, and building cloud solutions. In his spare time, he enjoys reading, spending time with his family, and cooking. Rafael M. Koike is a Principal Solutions Architect at AWS supporting Enterprise customers in the South East, and is part of the Storage and Security Technical Field Community. Rafael has a passion for building, and his expertise in security, storage, networking, and application development has been instrumental in helping customers move to the cloud securely and quickly. Randheer Gehlot is a Principal Customer Solutions Manager at AWS. Randheer is passionate about AI/ML and its application within the healthcare and life sciences (HCLS) industry. As an AWS builder, he works with large enterprises to design and rapidly implement strategic migrations to the cloud and build modern, cloud-native solutions." Transforming fleet telematics into predictive analytics with Capgeminis Trusted Vehicle and AWS IoT FleetWise _ The Internet of Things on AWS Official Blog.txt,"The Internet of Things on AWS – Official Blog Transforming fleet telematics into predictive analytics with Capgemini’s Trusted Vehicle and AWS IoT FleetWise by Cher Simon | on 14 JUL 2023 | in Amazon Athena, Amazon Machine Learning, Amazon Managed Grafana, Amazon QuickSight, Amazon SageMaker, Amazon Simple Storage Service (S3), Amazon Timestream, Analytics, AWS Glue, AWS IoT Core, AWS IoT FleetWise, Internet of Things Introduction Building a resilient path to post-pandemic recovery requires adaptability to dynamic trends. 
Therefore, many logistics leaders use predictive analytics to drive supply chain decisions, improve internal operational processes, meet regulatory compliance, and reduce transportation maintenance costs. Use cases for advanced predictive analytics in logistics include transportation management, fleet management, last-mile delivery, and visibility into fleet operations. Through 2024, Gartner predicts 50% of enterprises and supply chain organizations will invest in real-time transportation visibility platforms to measure business performance, make informed decisions, and achieve digital maturity. While vehicle data is becoming more accessible, organizations face challenges managing the continuous variability of massive data generation driven by connected vehicles. According to McKinsey , 95% of new vehicles sold globally will be connected, generating terabytes of sensor data hourly. However, collecting proprietary data formats across vehicle models leads to data fragmentation resulting in noisy data and delayed fleet-wide insights. With software-defined vehicles driving the next evolution of the automotive industry, data becomes a critical component enabling new functionalities and digital services entirely through software. Hence, fleet telematics is crucial in driving quality decision-making and identifying a sustainable business strategy in a volatile market. AWS IoT FleetWise is a fully managed service that simplifies collecting, transforming, and transferring vehicle data to the cloud. Automakers, fleet operators, and automotive suppliers can access standardized fleet-wide vehicle data without developing custom data collection systems. With intelligent data collection capabilities, AWS IoT FleetWise allows customers to collect and send only high-value vehicle data to the cloud for proactive fleet health analytics and feature enhancements. Furthermore, customers can train machine learning (ML) models using collected data to improve autonomous driving and advanced driver assistance systems (ADAS). With over 40 years of automotive industry experience and a close partnership with Amazon Web Services (AWS), Capgemini expanded its Trusted Vehicle connected mobility solution with AWS IoT FleetWise capabilities. In this post, we will show how Capgemini’s Trusted Vehicle and its integration with AWS IoT FleetWise provides end-to-end transportation visibility into vehicle health and campaign management. How AWS IoT FleetWise works AWS IoT FleetWise enables secure data ingestion from vehicles to the cloud through a vehicle modeling framework. The following architecture diagram shows AWS IoT FleetWise service components and how they interact. Figure 1: AWS IoT FleetWise user flow Here is the user flow of AWS IoT FleetWise: Users develop and install their Edge Agent for AWS IoT FleetWise based on a reference implementation . The Edge Agent allows users to test simulated vehicle data before integration or runs as an application to connect remotely to a fleet of vehicles. Next, users can create a semantic digital twin of the vehicle in AWS IoT FleetWise by defining a vehicle model consisting of vehicle attributes such as model year and engine type. Standardizing vehicle data format and defining relationships between signals in AWS IoT FleetWise provides a foundational vehicle data structure for creating data collection campaigns. Users can create campaigns with condition-based or time-based collection schemes. 
AWS IoT FleetWise deploys active campaigns to target vehicles to acquire sensor data from the vehicle network based on defined data collection schemes. The Edge Agent applies inspection rules to upload vehicle data back to the AWS IoT FleetWise data plane through AWS IoT Core , a fully managed service that connects IoT devices to the cloud. The data plane persists the collected data in Amazon Timestream or Amazon Simple Storage Service (Amazon S3) for further analysis. Users can analyze trends and patterns to generate actionable insights with AWS analytics services, including Amazon QuickSight for business intelligence, Amazon Managed Grafana for data visualization, Amazon Athena for interactive queries, and AWS Glue for data integration. You can also build ML models using Amazon SageMaker . Enhance fleet analytics with Capgemini’s Trusted Vehicle Built on AWS Connected Mobility Solution (CMS) , Capgemini’s Trusted Vehicle helps customers harness the power of data by gathering and operationalizing vehicle telemetry data in the cloud. Trusted Vehicle provides accelerators such as reusable templates and campaign management tools, enabling customers to develop intelligent and personalized features with connected vehicle solutions. Benefits of Trusted Vehicle and AWS IoT FleetWise Trusted Vehicle now integrates with AWS IoT FleetWise, providing an aggregated view of vehicle, driver, and trip data to accelerate time-to-value with fleet telematics. Extending the core AWS capabilities and AWS IoT FleetWise, Trusted Vehicle enables automakers and fleet operators to drive mobility and digital transformation. Now, let’s review how Trusted Vehicle integrates with AWS IoT FleetWise. The following diagram illustrates how customers can use a wide range of vehicle capabilities provided by Trusted Vehicle and integrate with AWS IoT FleetWise to accelerate vehicle data collection, transformation, and analysis in the cloud. Figure 2: Capgemini’s Trusted Vehicle integration with AWS IoT FleetWise Here is the user flow of Trusted Vehicle with AWS IoT FleetWise integration: Select business process – Trusted Vehicle provides a library of standard automotive business processes allowing automakers to develop vehicle capabilities with advanced analytics. Users can select a vehicle business process from Trusted Vehicle’s library, including Vehicle Onboarding, Telematics, Value-Enabled Services, Vehicle Subscription Services, Vehicle Security, Electric Vehicle (EV) Services, Fleet Reliability and Monitoring, and Remote Vehicle Management Systems. Choose business function – Each business process contains a set of business functions for various vehicle capabilities. For example, the Telematics business process provides business functions for activating or deactivating telematics data ingestion, custom anomaly alerts, software-over-the-air (SOTA), and trouble code diagnosis through various telematic control unit (TCU) or electronic control unit (ECU) of vehicles. Configure EV function – Users can configure business functions via Trusted Vehicle’s console or invoke vehicle capabilities programmatically via APIs. For example, the EV Services business function API allows users to register and update EV accounts, authorize EV sessions, pay overage fees, and retrieve EV fleet status. Users can extend these standard EV capabilities to create personalized customer experiences. 
Select data collection campaign template – Trusted Vehicle provides ready-to-use and customizable templates for business functions requiring vehicle data collection. These templates contain standard configurations and best practices to diagnose issues or improve the quality of service remotely. Update campaign parameters – Creating AWS IoT FleetWise campaigns for data collection is easy with prebuilt campaign templates provided by Trusted Vehicle. For example, users can select the EV-Battery-Monitoring campaign template to gather battery monitoring data. You can enter a logical expression to configure what data your Edge Agent collects. For instance, $variable.`EVBatterySample.Drivetrain.ActualVehicleSpeed`>50.0 tells the Edge Agent to collect battery metrics when a vehicle speed exceeds 50 kilometers per hour (km/h). Users can choose between Always or On first trigger mode for data collection rules. Default trigger mode is Always where the Edge Agent collects data based on specified conditions, whereas On first trigger mode only collects data upon the first occurrence. Users can also set a trigger interval between data collection events. Deploy data collection campaigns to vehicles – Trusted Vehicle deploys the configured campaign to remote vehicles through the customer’s Edge Agent. With the end-to-end campaign implementation, Trusted Vehicle simplifies vehicle data processing and analysis with pre-configured analytic capabilities and visual interfaces. Edge Agents collect data from vehicles – Edge Agents begin collecting vehicle signals upon campaign activation. Users can remotely monitor and control vehicle data processing via Trusted Vehicle’s console, including suspending or resuming a campaign to optimize data collecting costs. Near real-time visibility allows automakers to diagnose vehicle issues, implement over-the-air (OTA) updates, and enhance remote vehicle management services through Trusted Vehicle. Visualize and analyze vehicle metrics – Once vehicle data is available in the cloud, users can build interactive Grafana dashboards to analyze and visualize fleet telematics. The following image shows the visualization comparing an electric vehicle’s speed and battery temperature metrics from Trusted Vehicle. Automakers can make timely decisions based on near real-time insights and visibility into vehicle health. Figure 3: Capgemini’s Trusted Vehicle Fleet Telematics Conclusion We covered how Capgemini’s Trusted Vehicle integrates with AWS IoT FleetWise to simplify fleet management implementation and accelerates time to value. Customers can collect high-value vehicle data with AWS IoT FleetWise and build connected vehicle solutions using various reusable templates provided by Trusted Vehicle. Consequently, fleet operators can diagnose potential vehicle issues with timely insights for impactful fleet decisions throughout the vehicle lifecycle. About the Authors Cher Simon Cher Simon is a Principal Partner Solutions Architect specializing in machine learning and data analytics at AWS. Cher has 20 years of experience architecting enterprise-scale, data-driven, and AI-powered industry solutions. Besides building cloud-native solutions in her day-to-day role with customers, Cher is also an author and a frequent speaker at AWS conferences. Rahul Khandelwal Rahul Khandelwal is a Chief Architect at Capgemini, specializing in cloud-native enterprise transformation and digital enablement. 
Rahul has broad geographic experience in IT consulting, leading large-scale digital transformation programs across the automotive and retail industries. As a trusted industry advisor and speaker with multiple publications, Rahul is passionate about how technology can transform business. Daniel Davenport Daniel Davenport is a Principal Analyst on the Capgemini North America Automotive team. Daniel enjoys building innovative mobility solutions in a rapidly changing transportation sector. Primarily working with AWS services, Daniel helps customers deliver business results with cloud-native connected mobility industry solutions." Translate redact and analyze text using SQL functions with Amazon Athena Amazon Translate and Amazon Comprehend _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog Translate, redact, and analyze text using SQL functions with Amazon Athena, Amazon Translate, and Amazon Comprehend by Bob Strahan | on 26 FEB 2021 | in Amazon Athena, Amazon Comprehend, Amazon Comprehend Medical, Amazon Machine Learning, Amazon Translate, Analytics, Artificial Intelligence October 2021 Update (v0.3.0): Added support for Amazon Comprehend DetectKeyPhrases You have Amazon Simple Storage Service (Amazon S3) buckets full of files containing incoming customer chats, product reviews, and social media feeds, in many languages. Your task is to identify the products that people are talking about, determine if they’re expressing happy thoughts or sad thoughts, translate their comments into a single common language, and create copies of the data for your business analysts with this new information added to each record. Additionally, you need to remove any personally identifiable information (PII), such as names, addresses, and credit card numbers. You already know how to use Amazon Athena to transform data in Amazon S3 using simple SQL commands and the built-in functions in Athena. Now you can also use Athena to translate and analyze text fields, thanks to Amazon Translate , Amazon Comprehend , and the power of Athena User Defined Functions (UDFs). Athena is an interactive query service that makes it easy to analyze data stored in Amazon S3 using SQL. Amazon Comprehend is a Natural Language Processing (NLP) service that makes it easy to uncover insights from text. Amazon Translate is a neural machine translation service that delivers fast, high-quality, affordable, and customizable language translation. In this post, I show you how you can now use them together to perform the following actions: Detect the dominant language of a text field Detect the prevailing sentiment expressed—positive, negative, neither, or both Detect key phrases Detect or redact entities (such as items, places, or quantities) Detect or redact PII Translate text from one language to another This post accomplishes the following goals: Show you how to quickly set up the text analytics functions in your own AWS account (it’s fast and easy!) 
Briefly explain how the functions work Discuss performance and cost Provide a tutorial where we do some text analytics on Amazon product reviews Describe all the available functions We include a list of all the available functions at the end of the post; the following code shows a few example queries and results:

USING EXTERNAL FUNCTION detect_sentiment(text_col VARCHAR, lang VARCHAR)
RETURNS VARCHAR LAMBDA 'textanalytics-udf'
SELECT detect_sentiment('I am very happy', 'en') as sentiment

Result: sentiment = POSITIVE

USING EXTERNAL FUNCTION detect_pii_entities(text_col VARCHAR, lang VARCHAR)
RETURNS VARCHAR LAMBDA 'textanalytics-udf'
SELECT detect_pii_entities('I am Bob, I live in Herndon VA, and I love cars', 'en') as pii

Result: pii = [[""NAME"",""Bob""],[""ADDRESS"",""Herndon VA""]]

USING EXTERNAL FUNCTION redact_pii_entities(text_col VARCHAR, lang VARCHAR, type VARCHAR)
RETURNS VARCHAR LAMBDA 'textanalytics-udf'
SELECT redact_pii_entities('I am Bob, I live in Herndon VA, and I love cars', 'en', 'NAME,ADDRESS') as pii_redacted

Result: pii_redacted = I am [NAME], I live in [ADDRESS], and I love cars

USING EXTERNAL FUNCTION translate_text(text_col VARCHAR, sourcelang VARCHAR, targetlang VARCHAR, terminologyname VARCHAR)
RETURNS VARCHAR LAMBDA 'textanalytics-udf'
SELECT translate_text('It is a beautiful day in the neighborhood', 'auto', 'fr', NULL) as translated_text

Result: translated_text = C'est une belle journée dans le quartier

Install the text analytics UDF An Athena UDF uses AWS Lambda to implement the function capability. I discuss more details later in this post, but you don’t need to understand the inner workings to use the text analytics UDF, so let’s get started. Install the prebuilt Lambda function with the following steps: Navigate to the TextAnalyticsUDFHandler application in the AWS Serverless Application Repository . In the Application settings section, keep the settings at their defaults. Select I acknowledge that this app creates custom IAM roles . Choose Deploy . And that’s it! Now you have a new Lambda function called textanalytics-udf . You’re ready to try some text analytics queries in Athena. If you prefer to build and deploy from the source code instead, see the directions at the end of the GitHub repository README . Run your first text analytics query If you’re new to Athena, you may want to review the Getting Started guide. Your Athena Workgroup must use Athena engine version 2 . Enter the following query into the SQL editor:

USING EXTERNAL FUNCTION detect_sentiment(text_col VARCHAR, lang VARCHAR)
RETURNS VARCHAR LAMBDA 'textanalytics-udf'
SELECT detect_sentiment('I am very happy', 'en') as sentiment

You get a simple POSITIVE result. Now try again, varying the input text—try something less positive to see how the returned sentiment value changes. To get the sentiment along with confidence scores for each potential sentiment value, use the following query instead:

USING EXTERNAL FUNCTION detect_sentiment_all(text_col VARCHAR, lang VARCHAR)
RETURNS VARCHAR LAMBDA 'textanalytics-udf'
SELECT detect_sentiment_all('I am very happy', 'en') as sentiment

Now you get a JSON string containing the sentiment and all the sentiment scores:

{""sentiment"":""POSITIVE"",""sentimentScore"":{""positive"":0.999519,""negative"":7.407639E-5,""neutral"":2.7478999E-4,""mixed"":1.3210243E-4}}

You can use the built-in JSON extraction functions in Athena on this result to extract the fields for further analysis. How the UDF works For more information about the Athena UDF framework, see Querying with User Defined Functions . 
The Java class TextAnalyticsUDFHandler implements our UDF Lambda function handler. Each text analytics function has a corresponding public method in this class. Athena invokes our UDF Lambda function with batches of input records. The TextAnalyticsUDFHandler subdivides these batches into smaller batches of up to 25 rows to take advantage of the Amazon Comprehend synchronous multi-document batch APIs where they are available (for example, for detecting language, entities, and sentiment). When there is no synchronous multi-document API available (such as for DetectPiiEntities and TranslateText ), we use the single-document API instead. Amazon Comprehend API service quotas provide guardrails to limit your cost exposure from unintentional high usage (we discuss this more in the following section). By default, the multi-document batch APIs process up to 250 records per second, and the single-document APIs process up to 20 records per second. Our UDFs use exponential backoff and retry to throttle the request rate to stay within these limits. You can request increases to the transactions per second quota for APIs using the Quota Request Template on the AWS Management Console . Amazon Comprehend and Amazon Translate each enforce a maximum input string length of 5,000 UTF-8 bytes. Text fields that are longer than 5,000 UTF-8 bytes are truncated to 5,000 bytes for language and sentiment detection, and split on sentence boundaries into multiple text blocks of under 5,000 bytes for translation and entity or PII detection and redaction. The results are then combined. Optimizing cost In addition to Athena query costs, the text analytics UDF incurs usage costs from Lambda, Amazon Comprehend, and Amazon Translate. The amount you pay is a function of the total number of records and characters that you process with the UDF. For more information, see AWS Lambda pricing , Amazon Comprehend pricing , and Amazon Translate pricing . To minimize the costs, I recommend that you avoid processing the same records multiple times. Instead, materialize the results of the text analytics UDF by using CREATE TABLE AS SELECT (CTAS) queries to capture the results in a separate table that you can then cost-effectively query as often as needed without incurring additional UDF charges. Process newly arriving records incrementally using INSERT INTO…SELECT queries to analyze and enrich only the new records and add them to the target table. Avoid calling the text analytics functions needlessly on records that you will subsequently discard. Write your queries to filter the dataset first using temporary tables, views, or nested queries, and then apply the text analytics functions to the resulting filtered records. Always assess the potential cost before you run text analytics queries on tables with very large numbers of records. In this section, we provide two example cost assessments. Example 1: Analyze the language and sentiment of tweets Let’s assume you have 10,000 tweet records, with average length 100 characters per tweet. Your SQL query detects the dominant language and sentiment for each tweet. You’re in your second year of service (the Free Tier no longer applies). 
The cost details are as follows: Size of each tweet = 100 characters Number of units (100 character) per record (minimum is 3 units) = 3 Total Units: 10,000 (records) x 3 (units per record) x 2 (Amazon Comprehend requests per record) = 60,000 Price per unit = $0.0001 Total cost for Amazon Comprehend = [number of units] x [cost per unit] = 60,000 x $0.0001 = $6.00   Example 2: Translate tweets Let’s assume that 2,000 of your tweets aren’t in your local language, so you run a second SQL query to translate them. The cost details are as follows: Size of each tweet = 100 characters Total characters: 2,000 (records) * 100 (characters per record) x 1 (Translate requests per record) = 200,000 Price per character = $0.000015 Total cost for Amazon Translate = [number of characters] x [cost per character] = 200,000 x $0.000015 = $3.00 Analyze insights from customer reviews It’s time to put our new text analytics queries to use. For a tutorial on getting actionable insights from customer reviews, see Tutorial: Analyzing Insights from Customer Reviews with Amazon Comprehend . This post provides an alternate approach to the same challenge: using SQL queries powered by Athena and Amazon Comprehend. The tutorial takes approximately 10 minutes to complete, and costs up to $6 for Amazon Comprehend—there is no cost if you’re eligible for the Free Tier. Create a new database in Athena Run the following query in the Athena query editor: CREATE DATABASE IF NOT EXISTS comprehendresults; When connecting your data source, choose your new database. Create a source table containing customer review data We use the Amazon Customer Reviews Dataset , conveniently hosted for public access in Amazon S3. Run the following query in the Athena query editor: CREATE EXTERNAL TABLE amazon_reviews_parquet( marketplace string, customer_id string, review_id string, product_id string, product_parent string, product_title string, star_rating int, helpful_votes int, total_votes int, vine string, verified_purchase string, review_headline string, review_body string, review_date bigint, year int) PARTITIONED BY (product_category string) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat' LOCATION 's3://amazon-reviews-pds/parquet/' Under Tables , find the new table amazon_reviews_parquet. From the options menu, choose Load partitions . Preview the new table, amazon_reviews_parquet . Run the following query to assess the average review length: SELECT AVG(LENGTH(review_body)) AS average_review_length FROM amazon_reviews_parquet The average review length is around 365 characters. This equates to 4 Amazon Comprehend units per record (1 unit = 100 characters). Detect the language for each review To detect the language of each review, run the following query in the Athena query editor—it takes just over 1 minute to run and costs $2: CREATE TABLE amazon_reviews_with_language WITH (format='parquet') AS USING EXTERNAL FUNCTION detect_dominant_language(col1 VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf' SELECT *, detect_dominant_language(review_body) AS language FROM amazon_reviews_parquet LIMIT 5000 This query creates a new table, amazon_reviews_with_language , with one new column added: language . The LIMIT clause limits the number of records to 5,000. 
Cost is calculated as: 5,000 (records) x 4 (units per record) x 1 (requests per record) x $0.0001 (Amazon Comprehend price per unit) = $2.   Run the following query to see the detected language codes, with the corresponding count of reviews for each language: SELECT language, count(*) AS count FROM amazon_reviews_with_language GROUP BY language ORDER BY count DESC Detect sentiment and entities for each review To detect sentiment, run the following query in the Athena query editor—it uses two text analytics functions, takes around 1 minute to run, and costs $4: CREATE TABLE amazon_reviews_with_text_analysis WITH (format='parquet') AS USING EXTERNAL FUNCTION detect_sentiment_all(col1 VARCHAR, lang VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf', EXTERNAL FUNCTION detect_entities_all(col1 VARCHAR, lang VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf' SELECT *, detect_sentiment_all(review_body, language) AS sentiment, detect_entities_all(review_body, language) AS entities FROM amazon_reviews_with_language WHERE language IN ('ar', 'hi', 'ko', 'zh-TW', 'ja', 'zh', 'de', 'pt', 'en', 'it', 'fr', 'es') This query creates a new table, amazon_reviews_with_text_analysis , with two additional columns added: sentiment and entities . The WHERE clause restricts the result set to the list of languages supported by Amazon Comprehend sentiment and entity detection. Cost is calculated as: 5,000 (records) x 4 (units per record) x 2 (requests per record) x $0.0001 (Amazon Comprehend price per unit) = $4. Preview the new table and inspect some of the values for the new sentiment and entities columns. They contain JSON strings with nested structures and fields. The following screenshot shows the sentiment column details. The following screenshot shows the entities column details. Next, we use the JSON functions in Athena to prepare these columns for analysis. Prepare sentiment for analysis Run the following SQL query to create a new table containing sentiment and sentiment scores expanded into separate columns: CREATE TABLE sentiment_results_final WITH (format='parquet') AS SELECT review_date, year, product_title, star_rating, language, CAST(JSON_EXTRACT(sentiment,'$.sentiment') AS VARCHAR) AS sentiment, CAST(JSON_EXTRACT(sentiment,'$.sentimentScore.positive') AS DOUBLE ) AS positive_score, CAST(JSON_EXTRACT(sentiment,'$.sentimentScore.negative') AS DOUBLE ) AS negative_score, CAST(JSON_EXTRACT(sentiment,'$.sentimentScore.neutral') AS DOUBLE ) AS neutral_score, CAST(JSON_EXTRACT(sentiment,'$.sentimentScore.mixed') AS DOUBLE ) AS mixed_score, review_headline, review_body FROM amazon_reviews_with_text_analysis Preview the new sentiment_results_final table (see the following screenshot). Does the sentiment generally align with the text of the review_body field? How does it correlate with the star_rating ? If you spot any dubious sentiment assignments, check the confidence scores to see if the sentiment was assigned with a low confidence. 
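One way to explore the star-rating question above is to aggregate the new table directly. The following is a small illustrative sketch that submits such a query with the AWS SDK for Python (Boto3); the S3 output location is a placeholder you would replace with your own bucket, and the same SELECT can simply be pasted into the Athena console instead:

import time

import boto3

athena = boto3.client('athena')

QUERY = '''
SELECT sentiment,
       COUNT(*) AS reviews,
       AVG(star_rating) AS avg_star_rating
FROM sentiment_results_final
GROUP BY sentiment
ORDER BY avg_star_rating DESC
'''

# Start the query; results land in the specified S3 output location.
execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={'Database': 'comprehendresults'},
    ResultConfiguration={'OutputLocation': 's3://my-athena-results-bucket/'},  # placeholder bucket
)
query_id = execution['QueryExecutionId']

# Poll until the query finishes.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)['QueryExecution']['Status']['State']
    if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
        break
    time.sleep(1)

# Print the result rows (the first row contains the column headers).
if state == 'SUCCEEDED':
    rows = athena.get_query_results(QueryExecutionId=query_id)['ResultSet']['Rows']
    for row in rows:
        print([col.get('VarCharValue') for col in row['Data']])

Because this query reads the already materialized sentiment_results_final table, it incurs only standard Athena charges and no additional UDF costs; grouping by sentiment and averaging star_rating gives a quick consistency check, since POSITIVE reviews should cluster near 5 stars and NEGATIVE ones near 1.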
Prepare entities for analysis Run the following SQL query to create a new table containing detected entities unnested into separate rows (inner subquery), with each field in a separate column (outer query): CREATE TABLE entities_results_final WITH (format='parquet') AS SELECT review_date, year, product_title, star_rating, language, CAST(JSON_EXTRACT(entity_element, '$.text') AS VARCHAR ) AS entity, CAST(JSON_EXTRACT(entity_element, '$.type') AS VARCHAR ) AS category, CAST(JSON_EXTRACT(entity_element, '$.score') AS DOUBLE ) AS score, CAST(JSON_EXTRACT(entity_element, '$.beginOffset') AS INTEGER ) AS beginoffset, CAST(JSON_EXTRACT(entity_element, '$.endOffset') AS INTEGER ) AS endoffset, review_headline, review_body FROM ( SELECT * FROM ( SELECT *, CAST(JSON_PARSE(entities) AS ARRAY(json)) AS entities_array FROM amazon_reviews_with_text_analysis ) CROSS JOIN UNNEST(entities_array) AS t(entity_element) ) Preview the contents of the new table, entities_results_final (see the following screenshot) . Visualize in Amazon QuickSight (optional) As an optional step, you can visualize your results with Amazon QuickSight . For instructions, see Step 5: Visualizing Amazon Comprehend Output in Amazon QuickSight . You can use the new word cloud visual type for entities, instead of tree map. In the word cloud chart menu, select Hide “other” categories . You now have a dashboard with sentiment and entities visualizations that looks similar to the following screenshot. Troubleshooting If your query fails, check the Amazon CloudWatch metrics and logs generated by the UDF Lambda function. On the Lambda console, find the textanalytics-udf function. Choose Monitoring . You can view the CloudWatch metrics showing how often the function ran, how long it runs for, how often it failed, and more. Choose View logs in CloudWatch to open the function log streams for additional troubleshooting insights. For more information about viewing CloudWatch metrics via Lambda, see Using the Lambda console . Additional use cases There are many use cases for SQL text analytics functions. In addition to the example shown in this post, consider the following: Simplify ETL pipelines by using incremental SQL queries to enrich text data with sentiment and entities, such as streaming social media streams ingested by Amazon Kinesis Data Firehose Use SQL queries to explore sentiment and entities in your customer support texts, emails, and support cases Prepare research-ready datasets by redacting PII from customer or patient interactions Standardize many languages to a single common language You may have additional use cases for these functions, or additional capabilities you want to see added, such as the following: SQL functions to call custom entity recognition and custom classification models in Amazon Comprehend SQL functions for de-identification—extending the entity and PII redaction functions to replace entities with alternate unique identifiers Additionally, the implementation is open source, which means that you can clone the repo, modify and extend the functions as you see fit, and (hopefully) send us pull requests so we can merge your improvements back into the project and make it better for everyone. Cleaning up After you complete this tutorial, you might want to clean up any AWS resources you no longer want to use. Active AWS resources can continue to incur charges in your account. 
In Athena, run the following query to drop the database and all the tables: DROP DATABASE comprehendresults CASCADE In AWS CloudFormation, delete the stack serverlessrepo-TextAnalyticsUDFHandler . Cancel your QuickSight subscription . Conclusion I have shown you how to install the sample text analytics UDF Lambda function for Athena, so that you can use simple SQL queries to translate text using Amazon Translate, generate insights from text using Amazon Comprehend, and redact sensitive information. I hope you find this useful, and share examples of how you can use it to simplify your architectures and implement new capabilities for your business. The SQL functions described here are also available for Amazon Redshift. For more information, see Translate and analyze text using SQL functions with Amazon Redshift, Amazon Translate, and Amazon Comprehend . Please also watch my overview video , and share your thoughts with us in the comments section, or in the issues section of the project’s GitHub repository . Appendix: Available function reference This section summarizes the functions currently provided. The README file provides additional details. Detect language This function uses the Amazon Comprehend BatchDetectDominantLanguage API to identify the dominant language based on the first 5,000 bytes of input text. The following code returns a language code, such as fr for French or en for English: USING EXTERNAL FUNCTION detect_dominant_language(text_col VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf' SELECT detect_dominant_language('il fait beau à Orlando') as language The following code returns a JSON formatted array of language codes and corresponding confidence scores: USING EXTERNAL FUNCTION detect_dominant_language_all(text_col VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf' SELECT detect_dominant_language_all('il fait beau à Orlando') as language_all Detect sentiment This function uses the Amazon Comprehend BatchDetectSentiment API to identify the sentiment based on the first 5,000 bytes of input text. The following code returns a sentiment as POSITIVE, NEGATIVE, NEUTRAL, or MIXED: USING EXTERNAL FUNCTION detect_sentiment(text_col VARCHAR, lang VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf' SELECT detect_sentiment('Joe is very happy', 'en') as sentiment The following code returns a JSON formatted object containing detected sentiment and confidence scores for each sentiment value: USING EXTERNAL FUNCTION detect_sentiment_all(text_col VARCHAR, lang VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf' SELECT detect_sentiment_all('Joe is very happy', 'en') as sentiment_all Detect Key Phrases This function uses the Amazon Comprehend DetectKeyPhrases API to identify key phrases. Input text longer than 5,000 bytes results in multiple Amazon Comprehend API calls. 
The following code returns a JSON formatted object containing an array of key phrase values: USING EXTERNAL FUNCTION detect_key_phrases(text_col VARCHAR, lang VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf' SELECT detect_key_phrases('His name is Joe, he lives in Richmond VA, he bought an Amazon Echo Show on January 5th, and he loves it', 'en') as key_phrases The following code returns a JSON formatted object containing an array of key phrases, with their scores, and character offsets: USING EXTERNAL FUNCTION detect_key_phrases_all(text_col VARCHAR, lang VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf' SELECT detect_key_phrases_all('His name is Joe, he lives in Richmond VA, he bought an Amazon Echo Show on January 5th, and he loves it', 'en') as key_phrases_all Detect entities This function uses the Amazon Comprehend DetectEntities API to identify entities. Input text longer than 5,000 bytes results in multiple Amazon Comprehend API calls. The following code returns a JSON formatted object containing an array of entity types and values: USING EXTERNAL FUNCTION detect_entities(text_col VARCHAR, lang VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf' SELECT detect_entities('His name is Joe, he lives in Richmond VA, he bought an Amazon Echo Show on January 5th, and he loves it', 'en') as entities The following code returns a JSON formatted object containing an array of entity types, with their values, scores, and character offsets: USING EXTERNAL FUNCTION detect_entities_all(text_col VARCHAR, lang VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf' SELECT detect_entities_all('His name is Joe, he lives in Richmond VA, he bought an Amazon Echo Show on January 5th, and he loves it', 'en') as entities_all Redact entities This function replaces entity values for the specified entity types with “ [ENTITY_TYPE] ”. Input text longer than 5,000 bytes results in multiple Amazon Comprehend API calls. See the following code: USING EXTERNAL FUNCTION redact_entities(text_col VARCHAR, lang VARCHAR, types VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf' SELECT redact_entities('His name is Joe, he lives in Richmond VA, he bought an Amazon Echo Show on January 5th, and he loves it', 'en', 'ALL') as entities_redacted The command returns a redacted version on the input string. Specify one or more entity types to redact by providing a comma-separated list of valid types in the types string parameter, or ALL to redact all types. Detect PII This function uses the DetectPiiEntities API to identify PII. Input text longer than 5,000 bytes results in multiple Amazon Comprehend API calls. The following code returns a JSON formatted object containing an array of PII entity types and values: USING EXTERNAL FUNCTION detect_pii_entities(text_col VARCHAR, lang VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf' SELECT detect_pii_entities('His name is Joe, his username is joe123 and he lives in Richmond VA', 'en') as pii The following code returns a JSON formatted object containing an array of PII entity types, with their scores and character offsets: USING EXTERNAL FUNCTION detect_pii_entities_all(text_col VARCHAR, lang VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf' SELECT detect_pii_entities_all('His name is Joe, his username is joe123 and he lives in Richmond VA', 'en') as pii_all Redact PII This function replaces the PII values for the specified PII entity types with “ [PII_ENTITY_TYPE] ”. Input text longer than 5,000 bytes results in multiple Amazon Comprehend API calls. 
See the following code: USING EXTERNAL FUNCTION redact_pii_entities(text_col VARCHAR, lang VARCHAR, types VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf' SELECT redact_pii_entities('His name is Joe, his username is joe123 and he lives in Richmond VA', 'en', 'ALL') as pii_redacted The function returns a redacted version on the input string. Specify one or more PII entity types to redact by providing a comma-separated list of valid types in the type string parameter, or ALL to redact all type. Translate text This function translates text from the source language to target language. Input text longer than 5,000 bytes results in multiple Amazon Translate API calls. See the following code: USING EXTERNAL FUNCTION translate_text(text_col VARCHAR, sourcelang VARCHAR, targetlang VARCHAR, customterminologyname VARCHAR) RETURNS VARCHAR LAMBDA 'textanalytics-udf' SELECT translate_text('It is a beautiful day in the neighborhood', 'auto', 'fr', NULL) as translated_text The function returns the translated string. Optionally, auto-detect the source language (use auto as the language code, which uses Amazon Comprehend), and optionally specify a custom terminology (otherwise use NULL for customTerminologyName ). About the Author Bob Strahan  is a Principal Solutions Architect in the AWS Language AI Services team. Comments View Comments Resources Getting Started What's New Blog Topics Amazon Comprehend Amazon Kendra Amazon Lex Amazon Polly Amazon Rekognition Amazon SageMaker Amazon Textract Follow  Twitter  Facebook  LinkedIn  Twitch  Email Updates" Tyler Technologies Recovers Mission-Critical Workloads 12x Faster Using AWS Elastic Disaster Recovery _ Tyler Technologies Case Study _ AWS.txt,"We are confident in our recoverability. Using AWS Elastic Disaster Recovery helps us to sleep better at night.” As a provider of integrated software and technology services to the public sector, Tyler Technologies (Tyler) required a disaster recovery (DR) solution that could quickly restore large, complex systems involving thousands of servers. The on-premises infrastructure of Tyler’s DR solution was reaching a tipping point that would make it difficult to fulfill its service-level agreements (SLAs) for clients, which specified recovery within 4 hours. To overhaul its DR solution and fulfill the company’s larger goal of migration to the cloud, Tyler turned to Amazon Web Services (AWS). Français 2023 Español 日本語 AWS Services Used AWS Professional Services Adopting the AWS Cloud can provide you with sustainable business advantages. Supplementing your team with specialized skills and experience can help you achieve those results. The AWS Professional Services organization is a global team of experts that can help you realize your desired business outcomes when using the AWS Cloud. Learn more » 한국어 Learn how Tyler Technologies used AWS to improve recovery time objectives (RTOs) and recovery point objectives (RPOs) of more than 4,300 virtual machines. Opportunity | Finding a Solution to Accelerate Infrastructure Recovery at Scale AWS Elastic Disaster Recovery (AWS DRS) minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery. Get Started AWS Elastic Disaster Recovery As a software development company, Tyler had experience in validating software solutions, and it performed rigorous tests on all the DR solutions that it evaluated based on several criteria. 
First, the solution had to meet or exceed its recovery SLAs. Second, it had to be simple to set up, operate, test, and maintain at scale over the long term. Third, it had to facilitate automated deployment with a push-of-a-button failover process. Finally, it had to maintain operations in the recovery environment while the primary environment was repaired. “When we evaluated tools, we needed to go deep and check how a solution meets our systems’ requirements at scale and measure how quickly we could go from pushing the recovery button to having users back online, with fingers on keyboards using our applications,” Gainford says. 中文 (繁體) Bahasa Indonesia Contact Sales Ρусский Customer Stories / Software & Internet عربي 中文 (简体) Christopher Armstrong Director of Information Security, Tyler Technologies With an exclusive focus on public sector software since 1997, Tyler helps local, state, and federal government entities to operate more efficiently and connect more transparently with their constituents and with each other. Tyler boasts more than 37,000 successful installations across 12,000 sites in all 50 states and offers complete solutions that support a wide range of government services. Its DR plan includes preparation for data center failures and IT disruptions caused by attacks, natural disasters, or other outages. The strategy includes protecting more than 4,300 virtual machines running Windows Server operating systems, which host a wide array of client environments. Complex, mission-critical workloads include Microsoft SQL Server databases and database clusters as well as customized Tyler software. “We found that we were unique in that we have hundreds of software versions that run in different ways, and each virtual machine is a little bit different,” says Russell Gainford, Tyler’s vice president of cloud strategy and operations. “There’s no templated approach to easily set up recovery.” Overview Türkçe Solution | Achieving 12x Faster Recovery Time Using AWS Elastic Disaster Recovery To build its new DR solution, Tyler worked with the AWS Elastic Disaster Recovery service team and AWS Professional Services, a global team of experts that can help organizations realize their desired business outcomes when using AWS. Together, the teams were able to design a DR solution, map applications and networks, and build and test a DR runbook. “There was an incredible amount of listening and responsiveness,” says Gainford. “AWS hit every deliverable on time, which is difficult for any software company to do, and shows a lot of dedication.” As the project proceeded according to set timelines, the collaboration grew, and Tyler decided to expand AWS Elastic Disaster Recovery to additional virtual machines. “AWS Elastic Disaster Recovery was proven to be faster, hands down, than every other solution,” Gainford says. “We also developed a connection during this project that goes beyond a typical vendor relationship. We’re working with people that we trust.” English Using AWS Elastic Disaster Recovery, Tyler achieved recovery time objectives—a measure of business disruption—of minutes, even for the most complex systems with specific boot order sequences. This meant that Tyler could bring users fully back online within 20 minutes, 12 times faster than its 4-hour-recovery SLAs and previous DR solution. In fact, AWS Elastic Disaster Recovery was the only solution Tyler evaluated that featured a recovery time below 4 hours. 
“The other solutions that we had in the running could achieve full recovery at a minimum of 4 hours, so AWS Elastic Disaster Recovery was vastly faster,” says Christopher Armstrong, director of information security at Tyler. Tyler also achieved recovery point objectives—the measure of the frequency of backups and an important parameter of data recovery—of seconds. “One of the things that I really liked about the way AWS Elastic Disaster Recovery worked compared to other solutions was the continuous replication to keep machines up to date in the cloud,” says Armstrong. “It was exponentially faster than every other solution that we looked at, which would build and then restore data elements after an event. Although those solutions were an option, the trade-off was definitely speed.” Tyler implemented its DR solution in the cloud using AWS Elastic Disaster Recovery (CloudEndure Disaster Recovery), which minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery. Using AWS Elastic Disaster Recovery, Tyler achieved a 12-times-faster recovery time than with its legacy DR solution. The company was able to exceed the recovery time objectives of its SLAs and achieve recovery point objectives of seconds. Deutsch About Tyler Technologies Using AWS Elastic Disaster Recovery, Tyler maintains disaster readiness as part of its daily operations, validating its recovery time objectives quarterly with automation that verifies all systems while paying only for resources when it needs them. Its IT staff is trained to set up AWS Elastic Disaster Recovery when servers are added or changed. Operators perform regular DR tests and drills and are prepared for failover to AWS in the event of a disaster or IT disruption. Tyler also avoided a large, planned capital expenditure to refresh its DR data center hardware and instead reallocated these funds toward the operating expense of running DR in the cloud. “When—not if—disaster strikes, it will be a nonevent for us,” says Armstrong. “We’ll just push a button and move on. We tested it and really kicked the tires, so we are confident in our recoverability. Using AWS Elastic Disaster Recovery helps us to sleep better at night.” Tiếng Việt Italiano ไทย With an exclusive focus on public sector software since 1998, Tyler helps local, state, and federal government entities to operate more efficiently and connect more transparently with their constituents and with each other. Tyler’s DR testing and drills indicated that its existing solution could no longer keep up with the company’s growth. Plus, Tyler was interested in the pay-as-you-go cloud model for DR infrastructure, which converts DR capital expenses into operating expenses. Moving DR to the cloud would also avoid an upcoming infrastructure lifecycle refresh and address resource constraints in its on-premises DR data centers. Tyler Technologies Recovers Mission-Critical Workloads 12x Faster Using AWS Elastic Disaster Recovery Learn more » Outcome | Maintaining Cost-Effective Disaster Readiness on AWS Overview | Opportunity | Solution | Outcome | AWS Services Used Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. 
Português" Ultra Commerce Case Study.txt,"“AWS is constantly innovating, and we get the benefit of that innovation without having to invest millions of dollars,” shares Hyland. He continues, “With the pace of innovation at AWS, we can consume more services to address challenges we would traditionally have to solve ourselves. This helps reduce our operations and management overheads while facilitating innovation.” AWS Lambda Français Ultra Commerce offers a complete SaaS platform to solve complex ecommerce challenges for growing businesses. Its headless commerce solution provides a flexible feature set that marketers and developers can use to enhance the customer experience. Ultra Commerce recently introduced Vesta, an advanced integration feature that automates product data catalogue operations. Solution | Solving Complex Ecommerce Challenges with an Agile Platform 2023 When choosing a cloud provider, Ultra Commerce, an AWS Partner, selected AWS for its global footprint and constantly evolving portfolio of modern cloud services. On AWS, its customers have the flexibility to provision workloads in different markets to ensure compliance with data sovereignty regulations. Paul McClure, chief product officer at Ultra Commerce, says, “We can give our customers the option of multiple or single deployments and a consistent rollout in any AWS Region globally. Our SaaS leverages the content delivery network [CDN] and local point-of-presence capabilities of AWS to cache and optimize ecommerce performance at the edge.” Español Some leading-edge features Ultra Commerce recently launched include subscription commerce and deliveries to drive repeat business without requiring customers to manually place orders. It also launched an advanced promotion engine to create targeted promotion configurations, ensuring that its clients push the right incentive to the right group of consumers at the right time. In addition, Ultra Commerce introduced a new service integration called Vesta to help growing businesses automate product catalogue operations, such as cleansing of product data from vendors. Ultra Commerce Transforms Online Selling with Next Generation, End-to-End Ecommerce Platform on AWS Ultra Commerce uses Amazon Elastic Container Service (Amazon ECS) for fully managed container orchestration and AWS Fargate as a serverless compute engine for containers. It relies on AWS Lambda to run serverless code across its entire application suite, along with Amazon Aurora for MySQL and Amazon Aurora Serverless for database administration. “Whenever possible, our preference is to leverage AWS managed services rather than building something from scratch. This practice allows us to control provisioning and scaling much more dynamically while ensuring data security,” explains McClure. 日本語 4.2 million Contact Sales Seasonal promotions and end-of-month sales are typical among ecommerce vendors, which leads to extreme spikes in application traffic. With Ultra Commerce on the cloud, companies are assured of a consistent ecommerce performance and customer experience at scale. One of the platform’s largest customers averages about 4.2 million orders per month, 85 percent of which happen in the last three days of the month. Its customers benefit from a service level agreement of 99.999 percent uptime. 99.999% Growing, multi-faceted businesses have fluid requirements when it comes to ecommerce. 
As they expand, companies often discover selling gaps and missing features in their ecommerce platform, or encounter roadblocks that hinder digital storefront expansion and business growth. Furthermore, companies are often eager to modernize their ecommerce functionality without intense development or maintenance effort, while remaining secure and compliant. Amazon Aurora Serverless 한국어 Overview | Opportunity | Solution | Outcome | AWS Services Used Next, Ultra Commerce is considering how to integrate more artificial intelligence and machine learning into its services to develop new value-added features. It’s also evaluating a data lake and new analytics capabilities using services such as AWS Glue. Get Started AWS Services Used By building its SaaS on AWS, Ultra Commerce can offer its customers a highly scalable, secure, and flexible end-to-end commerce solution anywhere in the world. “Our customers don’t have to worry about performance, scalability, or their security posture when choosing Ultra Commerce. They have the flexibility to deploy wherever they like and can grow their business with confidence,” Hyland says. 中文 (繁體) Bahasa Indonesia On the backend, technical teams have total flexibility to leverage a full commerce suite without restrictive templates or styles. The Ultra Commerce framework is elastic and developer-friendly, so developers spend less time adding plugins or other services to their storefronts. Features and capabilities are modular, built to integrate into existing business services and back-office functions. Low-Cost Innovation Ultra Commerce leverages AWS Lambda and Amazon Aurora to build a feature-rich, flexible, headless ecommerce platform that reduces time to market for B2B customers. Ρусский Customer Stories / Software & Internet عربي 中文 (简体) Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. Protects sensitive personal and financial data Learn more » Secure Learn more » Overview Amazon Elastic Container Service (Amazon ECS) AWS is constantly innovating, and we get the benefit of that innovation without having to invest millions of dollars.” In a similar innovative vein, the business continues to leverage the evolving portfolio of modern technology available on AWS. Ultra Commerce recently ported its self-managed search capability to the fully managed Amazon OpenSearch Service, for example, to improve search efficiency on its customers’ sites. The business currently holds two AWS Competencies, in Digital Customer Experience and Retail, and has also began selling its solution on AWS Marketplace. Opportunity | Resolving Technology Gaps that Hinder Storefront Scaling Currently, Ultra Commerce uses AWS Lambda, Amazon Elastic Container Services (Amazon ECS), and Amazon Aurora Serverless. Ultra Commerce transforms customer experiences via an elastic, API-driven ecommerce platform that can be deployed securely across the globe. Türkçe Amazon Aurora Serverless is an on-demand, autoscaling configuration for Amazon Aurora. It automatically starts up, shuts down, and scales capacity up or down based on your application's needs.  English Amazon ECS is a fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications. 
Ultra Commerce launched in 2012 to solve these growing pains by offering a complete headless ecommerce platform that stores, manages, and delivers content without a front-end delivery layer. From click to ship, Ultra Commerce is a fully managed software as a service (SaaS) running on the Amazon Web Services (AWS) Cloud. Matthew Hyland, CEO of Ultra Commerce, says, “We solve complex commerce challenges with an agile platform that’s connected to enterprise-ready selling and marketing tools.” AWS Fargate Keeps overheads low while continuously launching new features For customers using their own CDNs at the edge, Ultra Commerce adds services such as AWS Shield for managed distributed denial of service (DDoS) protection. The business follows the best practices prescribed by the AWS Well-Architected framework and is fully compliant with the Payment Card Industry Data Security Standard (PCI DSS). Scales to support large monthly orders Deutsch AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers.  Tiếng Việt AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Ultra Commerce offers companies a complete headless platform for ecommerce that stores, manages, and delivers digital commerce capabilities without a front-end delivery layer. It launched on AWS to leverage modern technologies that comply with data residency regulations. Italiano ไทย Flexible Deployment About Ultra Commerce Gives businesses a choice of where and how to deploy Matthew Hyland CEO, Ultra Commerce uptime SLA with customers Outcome | Leveraging Modern Technology for Continuous Innovation Português" Ultrasound Business Area Improves Customer Experience Using AWS Systems Manager _ Siemens Healthineers Case Study _ AWS.txt,"The solution uses AWS Systems Manager to connect the instances that are in the cloud to those that aren’t by using an AWS Systems Manager Agent (SSM Agent) to update, manage, and configure resources. The SSM Agent is installed on the devices. The cloud instances use Amazon Elastic Compute Cloud (Amazon EC2), a web service that provides secure and resizable compute capacity to support virtually any workload. The devices can then communicate with the remote services solution with speed and at scale. Solution | Using Amazon DynamoDB for Secure and Quick Connections  Scott Kumono Cloud Product Manager for Remote Services, Siemens Healthineers Français Increased Siemens Healthineers is a multinational medical technology company that aims to improve access to healthcare for everyone across the globe. Español Opportunity | Enhancing Remote Servicing Capability and Convenience for Hospital IT Staff  The new solution also helps maintain the connectivity of ultrasound devices as they are moved around the facilities. “With this AWS solution, we can seamlessly transition across different environments and maintain connectivity over time,” says Kumono. All data is securely stored on the Amazon Simple Storage Service (Amazon S3), object storage built to retrieve any amount of data from anywhere. This makes remote software distribution and device log collection possible for the Siemens Healthineers Ultrasound Business Area. 
Siemens Healthineers Ultrasound Business Area Improves Customer Experience Using AWS 日本語 AWS Services Used Contact Sales 2022 AWS Systems Manager Using AWS Systems Manager, a secure and complete management solution for hybrid cloud environments, the business area reduced the time it took to register its devices to the remote services infrastructure from 2 hours to 5 minutes. This offers greater device availability, more time for ultrasound labs to focus on patient care, and improved productivity for its customers while adhering to security and compliance guidelines. 한국어 Manages a fleet Overview | Opportunity | Solution | Outcome | AWS Services Used The team chose AWS to build this innovation because it checked the boxes for the scalability, connectivity, security, and compliance that were needed for its global operations. “Because we’re a global organization, we need to meet the challenges of different regions,” says Scott Kumono, cloud product manager for remote services at Siemens Healthineers. “For example, a new data security law in China made it difficult to connect devices in that region. AWS had the know-how, and their team provided us with the steps that we needed to take to connect to the ultrasound system in that region.” scalability Improved Get Started of ultrasound devices remotely and efficiently Amazon DynamoDB, a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at virtually any scale, provides consistency of workflows. “Using Amazon DynamoDB database tables and AWS Systems Manager documents, we consistently perform service workflows, like log collection, with the click of a button, the same way as on the client app,” says Kumono. “Our client app interacts with the cloud, and regardless of the differences on the device end, the experience for the service user is the same.” AWS global Availability Zones support Siemens Healthineers’ global footprint and address redundancy. A range of AWS tools provides the needed security and access management, achieving higher security for the solution. Before, we felt that we had a limit. Now, if we want to try or offer something new, our ideas can more easily become reality.” Siemens Healthineers is a key global medical technology company, with its Ultrasound Business Area headquartered in Issaquah, Washington. Using its custom-built infrastructure, the company can remotely monitor the condition of its ultrasound equipment in near real time and offer its customers a range of services to keep their operations up and running. This includes fast error identification, remote repair and software updates, proactive maintenance, and technical and clinical collaboration services. 中文 (繁體) Bahasa Indonesia ไทย Ρусский setup time from 2 hours to 5 minutes عربي 中文 (简体) Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. AWS Systems Manager is a secure end-to-end management solution for hybrid cloud environments. Learn more » Outcome | Designing without Limits Using AWS  Overview requirements with ease customer experience The new solution built on AWS is scalable and can be deployed quickly. 
A streamlined workflow reduces the installation costs, minimizes the number of visits required to customer sites, and reduces the time and effort required of hospital IT administrators—improving their experience so that the healthcare facility can better focus on their patients’ care. It also helps the team minimize inefficiencies and improves the service level of its onsite visits, because technicians now have a complete device diagnosis and necessary components before an onsite visit. Türkçe English Meets regional The team turned to Amazon Web Services (AWS) to build an efficient and comprehensive infrastructure to remotely connect its ultrasound devices to Siemens Healthineers Ultrasound service experts. This allows the business area to track device performance in near real time and deliver a full spectrum of remote support, including troubleshooting and streamlined technician workflows. The team also uses AWS Lambda, a serverless, event-driven compute service used to run code for virtually any type of application or backend service without provisioning or managing servers, reducing the need to add data centers to run the remote connectivity solution. Because AWS Lambda is serverless, the team can run the code whenever it needs to increase the speed, which helps reduce costs. The team started developing the new solution powered by AWS in July 2021. Working with Stanford Medicine, the company developed and deployed a proof of concept in just 3 months using the AWS Management Console, which has everything that a business needs to access and manage the AWS Cloud in one web interface. “The proof of concept was really successful,” says Kumono. “We could remotely connect to the system, distribute a 16 GB, full build, as a patch, and install it remotely. We monitored the transfer progress and all the activity using the AWS Management Console.” Using AWS, the business area collects device logs automatically without additional configuration. The solution also provides continual software updates and live technical support. By proactively detecting potential issues and remotely addressing maintenance needs, the team helps providers consistently operate at peak performance. The Ultrasound Business Area of medical technology company Siemens Healthineers wanted to proactively monitor—with customer consent—its ultrasound devices at healthcare facilities around the world. It also wanted to provide remote maintenance, along with applications and technical support, to help its healthcare customers sustain peak device performance 24/7. The idea was to minimize disruptions for healthcare providers by promptly resolving their issues through timely access to support so that healthcare providers could focus more time on delivering care to their patients. AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. Learn more » Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Deutsch Amazon DynamoDB Tiếng Việt Amazon S3 Reduced connectivity Italiano Customer Stories / Life Sciences Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. Learn more » Learn more » One of Siemens Healthineers Ultrasound Business Area’s goals is to improve healthcare through digitizing service delivery for healthcare providers. 
Connecting to ultrasound systems using AWS helps the business area achieve this goal in a more efficient, secure, and faster way, giving time back to busy healthcare staff. The next step for the company is to connect the systems used for internal use and for testing of the solution and to provide internal training to familiarize its engineers with the new system processes. The company will continue to push for innovations using AWS in the future. “Before, we felt that we had a limit. Now, if we want to try or offer something new, and there’s good business value, our ideas can more easily become reality,” says Kumono. About Siemens Healthineers AWS Lambda Learn how Siemens Healthineers Ultrasound Business Area uses AWS to remotely manage ultrasound devices to reduce downtime and improve customer experience. Português" Upskilling Over 2K Employees with AWS Training and Certification and Creating a Culture of Innovation _ Techcombank Case Study _ AWS.txt,"As part of the Guild framework, Techcombank also created a Cloud Champion network consisting of 28 trained Cloud Champions who facilitate study groups and provide one-on-one coaching to offer additional training to employees. “Our change management team relies on our Cloud Champions, technical experts who can help our employees become trained and certified,” says An Nguyen, change management expert, IT at Techcombank. Français Español Build in-demand cloud skills—your way—with our online learning center. Learn more » 日本語 AWS Skill Builder By working with AWS Training and Certification and leveraging the AWS Skills Guild training framework, Techcombank migrates 21 critical applications and key non-production environments in 15 months and accelerates development and lowers costs, all while increasing innovation to meet customer needs. 90% AWS Classroom Training 한국어 After initial deployment success, demand was rising for additional cloud capabilities across the company. Techcombank needed more skilled resources to scale its new environment. The company turned to AWS Training and Certification, which helps organizations and learners build AWS skills through digital and classroom training. After conducting an initial assessment and the AWS Learning Needs Analysis, AWS Training and Certification created a role-based training plan focused on workloads that were specific to Techcombank. The bank also sought to accelerate cloud adoption and leveraged AWS Skills Guild, a comprehensive skills enablement program helping companies increase employee engagement and drive cloud fluency at scale. Overview | Opportunity | Solution | Outcome | AWS Services Used Techcombank initially refactored its complex treasury system and deployed it on AWS. In just 15 months from the start of 2021, the bank migrated 21 applications to AWS including PayLink, a payment hub built in-house. Techcombank uses a range of AWS services for compute resources, data storage, security management, and on-demand scalability and high availability. Since migrating to AWS, the bank has reduced average monthly application costs for PayLink by more than 30 percent. AWS Training & Certification Techcombank chose to move its business applications from an on-premises data center to the public cloud. In 2021, it began migrating business applications to AWS. “AWS is at the forefront of cloud technology, and we needed to ramp up quickly and move fast. 
The AWS team provided the strong support we required and advised us on regulatory compliance requirements, which is key for our business,” says Elizabeth. The enablement program included formal AWS Classroom Training, which covered foundational to advanced cloud topics such as cloud architecture, application development, machine learning, and security. It also included informal hands-on training modules, such as AWS Jam, AWS GameDay, and Cloud Saturdays, helping employees put their skills into practice and solve real business problems. Outcome | Closing the AWS Skills Gap and Fostering a Culture of Innovation With a focus on continuous learning, Techcombank established learning targets for its employees, which includes training available on AWS Skill Builder, an online learning center from AWS featuring hundreds of self-paced learning resources. Every technical role within the bank’s IT and Data Analytics team has an AWS recommended learning path, often referred to as the guiding principle for an employee’s learning needs. An Nguyen Change Management Expert, Techcombank AWS Services Used In addition to upskilling and certifying, Techcombank is attracting talent by providing AWS training courses. “People are aware that when they join our company, their learning and development is a priority,” An Nguyen says. “They recognize that the bank invests in our people, and that’s attractive to current and prospective employees.” By leveraging AWS Training and Certification, we’re driving a culture where employees are fully responsible for their learning and development, funded by Techcombank.” To date, Techcombank has engaged in 105 training classes and upskilled more than 2,800 employees and executives, while certifying 249 employees on AWS. The company has completed 21 instructor-led classroom training sessions, with an overall employee satisfaction score of 4.71 out of 5. In addition, 659 employees have completed 4,269 free digital courses in AWS Skill Builder. 中文 (繁體) Bahasa Indonesia Vietnam Technological and Commercial Joint Stock Bank, commonly known as Techcombank, was founded in Vietnam in 1993 and currently has 12,000 employees. The organization offers financial services products and services such as checking and savings accounts, loans, and money transfers. employees certified on AWS technologies A training framework that accelerates innovation. Solution | Empowering Employees to Innovate with AWS Training and Certification Ρусский decrease in provisioning time عربي reduction in average monthly application costs 2,800+ 中文 (简体) Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. Learn with expert AWS instructors who teach you in-demand cloud skills and best practices. Learn more » Techcombank Upskills Over 2,800 Employees with AWS Training and Certification and Creates a Culture of Innovation 2022 Techcombank provides banking services to more than 5.4 million customers in Vietnam. To meet customer demand for digital products, security, and high availability, the bank migrated its key business applications to Amazon Web Services (AWS). Overview upskilled staff Get Started Customer Stories / Financial Services Türkçe English AWS Skills Guild Tuan Nguyen, chief information officer at Techcombank says, “Our cloud migration journey is defined by our culture as much as our technology choice. 
Without specific AWS knowledge, experience, and qualifications among existing employees, we knew our migration would be challenging.” As Techcombank embarked on its cloud journey, it faced a roadblock: its traditional workforce lacked cloud skills and competencies. In addition to hiring employees knowledgeable in AWS, Techcombank realized it needed to invest in its existing team to develop new AWS skills. It also needed to change its culture by educating technology leaders and the entire organization to fully embrace the company’s digital and cloud transformation ambitions. Techcombank’s mission is to “change banking, change lives.” To meet this mission, the bank needed to quickly launch new products for customers. “We’re obsessed with customer experience and satisfaction. That means meeting customer demands for digital products, convenience, and security, while providing highly available applications and services. Our customers are at the core of everything we do,” says Elizabeth Nguyen, senior expert, IT at Techcombank. 30% Build your teams’ skills and confidence so they can deliver great solutions for your customers. Learn more » Deutsch Using AWS, Techcombank delivers a reliable online banking experience for its customers across Vietnam while enhancing security and ensuring compliance. Techcombank reduced provisioning time by 90 percent, and this helped the company roll out more frequent application updates and new product prototypes faster than before. It used to take over a month on average to provision a new environment, and the bank expects to reduce that to five days on AWS. “We can go to market faster on AWS, which allows us to focus on delivering value to our customers through innovative initiatives like our upcoming customer loyalty program,” says Elizabeth.  Tiếng Việt Italiano ไทย To support service adoption and make the best use of cloud resources at a lower cost, Techcombank collaborated with AWS Training and Certification and leveraged AWS Skills Guild framework to train and upskill more than 2,800 employees on AWS. As a result, the bank accelerated its cloud adoption and transformation journey, reducing provisioning time by 90 percent and average monthly application costs by more than 30 percent, while fostering a culture of innovation. Contact Sales By using AWS Training and Certification to certify and upskill its employees, the bank is closing its digital skills gap and building a culture of innovation. “By leveraging AWS Training and Certification, our employees possess the relevant cloud knowledge where they can think and work smartly. They’re now having discussions about the cloud confidently, whether it’s in a meeting or in the kitchen over lunchtime. We’ve been able to gain a common language,” says An Nguyen. The bank’s goal is to certify 340 employees on AWS technologies by the end of 2022, and it’s currently only about 100 people away from that goal. Learn more » Since its founding in 1993, Techcombank has grown into one of the largest joint-stock commercial banks in Vietnam. Today, the bank provides a range of banking products and financial services to more than 5.4 million retail and corporate customers in Vietnam through a network of 315 branches. 
249 About Techcombank Português Opportunity | Innovating to Meet Demand for Secure, Digital Banking Products" Upstox Saves 1 Million Annually Using Amazon S3 Storage Lens _ Upstox Case Study _ AWS.txt,"Amazon S3 Glacier Instant Retrieval is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. Français 2023 English Español The second phase of optimization focused on using Amazon S3 Storage Lens to get organization-wide visibility into object storage usage and activity trends. Using the Amazon S3 Storage Lens advanced metrics, Upstox had access to 35 additional metrics and 15 months of historical data. Upstox gained extensive insights into which buckets and prefixes were growing and at what rate, the health of data operations, and how to identify cost savings and performance improvements by using the most optimal Amazon S3 storage class. Upstox used Amazon S3 Storage Lens advanced metrics to identify existing buckets with insufficient lifecycle rules in place. It then set up lifecycle rules to migrate infrequently accessed data to Amazon S3 Glacier Instant Retrieval, which provides low-cost archive storage with milliseconds retrieval for rarely accessed data. Upstox uses Amazon S3 Glacier Instant Retrieval to store previous snapshots of its data lake for debugging, compliance, regulatory, and audit activities. “Amazon S3 Storage Lens advanced metrics are extremely important in our daily operations because we use them to drill down into our data costs so that we can make quick, well-informed, and impactful business decisions,” says Chandra. “We can now understand and visualize our storage usage and analyze it to detect outliers and anomalies, which consistently helps us optimize our storage at scale.” Within 2 months, Upstox had reduced its daily cost for Amazon S3 by 93 percent. 日本語 Upstox, a leading tech-first discount broker based in Mumbai, India, chose Amazon Web Services (AWS) as its preferred cloud provider and saves $1 million annually by optimizing its data stored on Amazon Simple Storage Service (Amazon S3)—an object storage service that offers industry-leading scalability, durability, data availability, security, and performance—using Amazon S3 Storage Lens, a cloud-storage analytics solution that gives organization-wide visibility into object storage usage. The company’s stock-trading service provides financial services to more than 11 million customers in India and aims to have over 20 million customers by the end of 2023. Opportunity | Optimizing Data Storage Costs Using Amazon S3 Contact Sales Get Started 한국어 in Amazon S3 storage costs Amazon S3 Glacier Instant Retrieval Organizations of all sizes across all industries are transforming their businesses and delivering on their missions every day using AWS. Contact our experts and start your own AWS journey today. Amazon Athena Over 96% reduction Upstox is a discount broker firm in Mumbai, India, that helps more than 11 million customers trade in public markets by providing a digital platform for education and trading. from days to hours AWS Services Used on Amazon S3 storage costs using Amazon S3 Storage Lens 中文 (繁體) Bahasa Indonesia Solution | Reducing Costs by Over 96% Using Amazon S3 Storage Lens Ρусский عربي Amazon S3 Storage Lens delivers organization-wide visibility into object storage usage, activity trends, and makes actionable recommendations to optimize costs and apply data protection best practices. 
中文 (简体) Upstox focused on ingesting data from many data sources, including relational database management systems and APIs, into its data lake to create a workflow for queries to use the Upstox Data Platform. In early 2022, Upstox ingested a few petabytes of data into Amazon S3. Data is ingested into Amazon S3 buckets, then queried for various use cases, like getting insights on adoption rates of new features and uptime trends of services using Amazon Athena, which analyzes petabyte-scale data where it lives with ease and flexibility. Upstox uses Amazon Athena as an in-house data-querying tool across the entire firm to answer business queries and give visibility into daily business and product-feature performance. About Upstox Outcome | Continue Using Amazon S3 Storage Lens to Observe, Analyze, and Optimize Storage Footprint Overview Amazon Athena is a serverless, interactive analytics service built on open-source frameworks, supporting open-table and file formats. Reduced time to insight $1 million saved annually Customer Stories / Financial Services Learn how Upstox, an India-based leading online stock-trading platform, saves $1 million in data storage costs annually using Amazon S3 Storage Lens. Indranil Chandra Principal Machine Learning and Data Engineer, Upstox Türkçe Upstox Saves $1 Million Annually Using Amazon S3 Storage Lens As Upstox’s business grew, so did the amount of data it stored in Amazon S3. Upstox needed to find a way to optimize data storage as part of an initiative to reduce cloud infrastructure costs across the company. It began to improve data storage efficiency by following Amazon S3 best practices and lowering costs despite increases in its storage usage. Upstox started using AWS in 2016 for its scalability and elasticity. It was fast, easy, and more cost effective for Upstox to migrate existing applications to AWS compared to maintaining an on-premises data center. Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Deutsch In June 2021, the company used Amazon S3 to build its secure and scalable data lake solution—Upstox Data Platform, a solution to facilitate comprehensive, custom, and secure data delivery to business users. “We chose Amazon S3 as the foundation of our data lake architecture because of its meaningful storage innovation, performance, scalability, availability, security, and affordability. We could break down data silos, unlock the value of our data, and increase innovation across our organization,” says Indranil Chandra, principal machine learning and data engineer at Upstox. This migration away from its on-premises data lake helped to reduce time to insight from days to hours. By using Amazon S3 Storage Lens, we’re saving $1 million annually for the company. AWS exceeded our expectations in every way.” Tiếng Việt Amazon S3 Overview | Opportunity | Solution | Outcome | AWS Services Used  Italiano ไทย Prior to using the cloud, Upstox stored all its data in an on-premises data center. Data was stored on multiple systems across various locations, and it was time consuming to provide the right access to end users. Scalability and hardware management also limited their on-premises data lake. Upstox is now all-in on AWS. By moving from on premises to AWS, Upstox lowered costs, became more agile, and innovated faster. From 2020 to 2023 while being on AWS, Upstox grew 10 times over. 
With such significant growth, Upstox needed to make its Amazon S3 storage usage more efficient, save on costs, and further maximize AWS benefits. The third phase of Upstox’s optimization involved upgrading its data pipelines to ingest data from high-throughput source databases, like NoSQL databases, into the data lake in near real time. Within 5 months, Upstox achieved a 96 percent cost reduction. “The path forward is to continue using Amazon S3 Storage Lens to maintain a closer pulse on our storage usage and further improve upon cost efficiencies,” says Chandra. “With the Amazon S3 Storage Lens interactive dashboard, we can easily locate the Amazon S3 prefix hot spots where increases in cost happen and optimize them with the right retention policies and Amazon S3 storage class to further improve cost efficiencies.” Amazon S3 Storage Lens Learn more » Upstox is committed to consistently using Amazon S3 Storage Lens for storage analytics and insights, along with additional Amazon S3 optimization features to gain the most value from its storage spend. By building Amazon S3 Storage Lens into its daily operations and taking advantage of the Amazon S3 Storage Lens advanced metrics, Upstox has visibility into storage analytics across various dimensions and can pinpoint areas in need of swift action. Upstox can unlock more value and opportunities with its data, save costs, focus more attention on projects that differentiate its business, and be well prepared for future growth. The company also plans on evaluating Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering), which automates storage cost savings by migrating data when access patterns change. Upstox can continuously optimize spending while building modern, scalable applications to meet its needs. “By using Amazon S3 Storage Lens, we’re saving $1 million annually for the company. AWS exceeded our expectations in every way. We look forward to working alongside AWS as we help millions of customers do better with their wealth,” says Chandra. The significant rise in the volume of stored data in Amazon S3 also meant a rise in Amazon S3 storage costs. To lower the Amazon S3 storage costs and optimize its economic and operational advantages, Upstox started a three-phase plan. The first phase involved optimizing the data in its Amazon S3 buckets where Amazon Athena query results are stored. Starting in May 2022, the company implemented a retention policy for Amazon Athena data queries to warrant query results for a finite time. In just 2 days, it had reduced daily Amazon S3 storage costs by 62 percent. By using Amazon S3 best practices, Upstox built a highly available and durable data lake architecture. Português" Use proprietary foundation models from Amazon SageMaker JumpStart in Amazon SageMaker Studio _ AWS Machine Learning Blog.txt,"AWS Machine Learning Blog Use proprietary foundation models from Amazon SageMaker JumpStart in Amazon SageMaker Studio by June Won , Nitin Eusebius , and Mani Khanuja | on 27 JUN 2023 | in Amazon SageMaker , Amazon SageMaker JumpStart , Artificial Intelligence , Generative AI | Permalink | Comments |  Share Amazon SageMaker JumpStart is a machine learning (ML) hub that can help you accelerate your ML journey. With SageMaker JumpStart, you can discover and deploy publicly available and proprietary foundation models to dedicated Amazon SageMaker instances for your generative AI applications. 
SageMaker JumpStart lets you deploy foundation models from a network-isolated environment, and it doesn't share customer training and inference data with model providers. In this post, we walk through how to get started with proprietary models from model providers such as AI21, Cohere, and LightOn from Amazon SageMaker Studio. SageMaker Studio is a notebook environment where enterprise data scientists evaluate and build models for their next generative AI applications.

Foundation models in SageMaker

Foundation models are large-scale ML models that contain billions of parameters and are pre-trained on terabytes of text and image data, so you can perform a wide range of tasks, such as article summarization and text, image, or video generation. Because foundation models are pre-trained, they can help lower training and infrastructure costs and enable customization for your use case. SageMaker JumpStart provides two types of foundation models:

Proprietary models – These models come from providers such as AI21 with Jurassic-2 models, Cohere with Cohere Command, and LightOn with Mini, trained on proprietary algorithms and data. You can't view model artifacts such as weights and scripts, but you can still deploy the models to SageMaker instances for inference.

Publicly available models – These come from popular model hubs such as Hugging Face, with Stable Diffusion, Falcon, and FLAN, trained on publicly available algorithms and data. For these models, users have access to the model artifacts and can fine-tune them with their own data before deploying for inference.

Discover models

You can access the foundation models through SageMaker JumpStart in the SageMaker Studio UI and through the SageMaker Python SDK. In this section, we go over how to discover the models in the SageMaker Studio UI. SageMaker Studio is a web-based integrated development environment (IDE) for ML that lets you build, train, debug, deploy, and monitor your ML models. For details on how to get started and set up SageMaker Studio, refer to Amazon SageMaker Studio.

Once you're in the SageMaker Studio UI, you can access SageMaker JumpStart, which contains pre-trained models, notebooks, and prebuilt solutions, under Prebuilt and automated solutions. From the SageMaker JumpStart landing page, you can browse for solutions, models, notebooks, and other resources. Each model has a model card that shows the model name, whether it is fine-tunable, the provider name, and a short description of the model. You can open the model card to learn more about the model and to start training or deploying.

Subscribe in AWS Marketplace

Proprietary models in SageMaker JumpStart are published by model providers such as AI21, Cohere, and LightOn. You can identify proprietary models by the "Proprietary" tag on their model cards. You can choose View notebook on the model card to open the notebook in read-only mode and read it for important information regarding prerequisites and other usage instructions. After importing the notebook, you need to select the appropriate notebook environment (image, kernel, instance type, and so on) before running the code, and you should follow the subscription and usage instructions for the selected notebook.
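Alongside the Studio UI, models can also be discovered programmatically through the SageMaker Python SDK, as mentioned above. A small sketch follows; the filter expression shown is illustrative, so consult the SDK documentation for the supported filter keys and operators.

```python
from sagemaker.jumpstart.notebook_utils import list_jumpstart_models

# List every model ID currently published in SageMaker JumpStart.
all_models = list_jumpstart_models()
print(f"{len(all_models)} JumpStart models available")

# Narrow the listing with a filter expression (illustrative value).
filtered = list_jumpstart_models(filter="task == summarization")
print(filtered)
```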
Before using a proprietary model, you need to subscribe to it in AWS Marketplace:

1. Open the model listing page in AWS Marketplace. The URL is provided in the Important section of the notebook, or you can access it from the SageMaker JumpStart service page. The listing page shows the overview, pricing, usage, and support information for the model.
2. On the AWS Marketplace listing, choose Continue to subscribe. If you don't have the necessary permissions to view or subscribe to the model, reach out to your IT admin or procurement point of contact to subscribe on your behalf. Many enterprises limit AWS Marketplace permissions to control the actions that someone with those permissions can take in the AWS Marketplace Management Portal.
3. On the Subscribe to this software page, review the details and choose Accept offer if you and your organization agree with the EULA, pricing, and support terms. If you have questions or want to request a volume discount, contact the model provider directly through the support email on the detail page, or reach out to your AWS account team.
4. Choose Continue to configuration and choose a Region. A product ARN is displayed; this is the model package ARN that you need to specify when creating a deployable model using Boto3. Copy the ARN corresponding to your Region and specify it in the notebook's cell instruction.

Sample inferencing with sample prompts

Let's look at some of the sample foundation models from AI21 Labs, Cohere, and LightOn that are discoverable from SageMaker JumpStart in SageMaker Studio. All of them follow the same instructions: subscribe in AWS Marketplace, then import and configure the notebook.

AI21 Summarize

The Summarize model by AI21 Labs condenses lengthy texts into short, easy-to-read bites that remain factually consistent with the source. The model is trained to generate summaries that capture key ideas from a body of text, and it doesn't require any prompting; you simply input the text that needs to be summarized. Your source text can contain up to 50,000 characters, translating to roughly 10,000 words, or about 40 pages.

The sample notebook for the AI21 Summarize model lists important prerequisites that need to be met: for example, the model must be subscribed to in AWS Marketplace, appropriate IAM role permissions must be in place, and the required Boto3 version must be installed. The notebook walks you through how to select the model package, create endpoints for real-time inference, and then clean up. The selected model package contains a mapping of ARNs to Regions; this is the information you captured after choosing Continue to configuration on the AWS Marketplace subscription page and selecting a Region, for which the corresponding product ARN is shown. The notebook may already have the ARN prepopulated. You then import the libraries required to run the notebook and install wikipedia, a Python library that makes it easy to access and parse data from Wikipedia; the notebook uses it later to showcase summarizing a long text. The notebook also installs the ai21 Python SDK, a wrapper around SageMaker APIs such as deploy and invoke endpoint.
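In Boto3 terms, the deployment flow that these notebooks automate looks roughly like the sketch below. The ARNs, names, and instance type are placeholders; use the model package ARN you copied for your Region and an instance type the model supports.

```python
import boto3

sm = boto3.client("sagemaker")

# Placeholder: the model package ARN copied from the AWS Marketplace
# "Continue to configuration" page for your Region.
model_package_arn = (
    "arn:aws:sagemaker:us-east-1:123456789012:model-package/example-model"
)

# Create a deployable model from the model package. Network isolation is
# what keeps inference data from being shared with the model provider.
sm.create_model(
    ModelName="proprietary-fm",
    PrimaryContainer={"ModelPackageName": model_package_arn},
    ExecutionRoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    EnableNetworkIsolation=True,
)

sm.create_endpoint_config(
    EndpointConfigName="proprietary-fm-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "proprietary-fm",
            "InstanceType": "ml.g5.12xlarge",  # must be supported by the model
            "InitialInstanceCount": 1,
        }
    ],
)

sm.create_endpoint(
    EndpointName="proprietary-fm-endpoint",
    EndpointConfigName="proprietary-fm-config",
)
```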
The next few cells of the notebook walk through the following steps:

1. Select the Region and fetch the model package ARN from the model package map.
2. Create your inference endpoint by selecting an instance type to run the model on (depending on your use case and the instances the model supports; see Task-specific models for more details).
3. Create a deployable model from the model package.

Let's run the inference to generate a summary of a single paragraph taken from a news article. As you can see in the output, the model returns the summarized text. Because AI21 Summarize can handle inputs of up to 50,000 characters, as a demonstration of the model's behavior on longer inputs we load a page from Wikipedia and summarize it. Once you have performed a real-time inference for testing, you may not need the endpoint anymore; you can delete it to avoid being charged.

Cohere Command

Cohere Command is a generative model that responds well to instruction-like prompts. It aims to provide businesses and enterprises with high quality, performance, and accuracy across generative tasks. You can use Cohere's Command model to invigorate your copywriting, named entity recognition, paraphrasing, or summarization efforts and take them to the next level.

The sample notebook for the Cohere Command model lists the same kinds of prerequisites: the model must be subscribed to in AWS Marketplace, appropriate IAM role permissions must be in place, and the required Boto3 version must be installed. It walks you through how to select the model package, create endpoints for real-time inference, and then clean up. Some of the tasks are similar to those in the previous notebook, such as installing Boto3, installing cohere-sagemaker (a package that simplifies interfacing with the Cohere model), and getting the session and Region. To create the endpoint, you provide the model package ARN, the endpoint name, the instance type to use, and the number of instances. Once created, the endpoint appears in the endpoint section of SageMaker.

Now let's run inference to see some outputs from the Command model. One sample prompt generates a job post, and the model produces a post from the given prompt. Further examples generate a product description, a body paragraph of a blog post, and an outreach email; the Cohere Command model generates text for each of these generative tasks. Once you have performed real-time inference for testing, you can delete the endpoint to avoid being charged.
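For illustration, invoking a deployed endpoint with Boto3, and deleting it afterward, might look like the sketch below. The request and response schema is provider-specific, so the payload fields here are only indicative; consult the model's sample notebook for the exact format.

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# Payload shape varies by provider; this field name is only indicative.
payload = {"prompt": "Write a short product description for a travel mug."}

response = runtime.invoke_endpoint(
    EndpointName="proprietary-fm-endpoint",
    ContentType="application/json",
    Accept="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))

# Delete the endpoint and model when testing is done to stop charges.
sm = boto3.client("sagemaker")
sm.delete_endpoint(EndpointName="proprietary-fm-endpoint")
sm.delete_endpoint_config(EndpointConfigName="proprietary-fm-config")
sm.delete_model(ModelName="proprietary-fm")
```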
LightOn Mini-instruct

Mini-instruct, an AI model with 40 billion parameters created by LightOn, is a powerful multilingual AI system trained on high-quality data from numerous sources. It is built to understand natural language and react to commands specific to your needs. It performs admirably in consumer products such as voice assistants, chatbots, and smart appliances, and it also has a wide range of business applications, including agent assistance and natural language generation for automated customer care.

The sample notebook for the LightOn Mini-instruct model lists the same prerequisites: the model must be subscribed to in AWS Marketplace, appropriate IAM role permissions must be in place, and the required Boto3 version must be installed. It walks you through how to select the model package, create endpoints for real-time inference, and then clean up. Some of the tasks are similar to those in the previous notebooks, such as installing Boto3 and getting the session Region. To create the endpoint, you again provide the model package ARN, endpoint name, instance type, and number of instances; once created, the endpoint appears in the endpoint section of SageMaker. As an example, you can ask the model to generate a list of article ideas for a topic, in this case watercolor, and the LightOn Mini-instruct model returns generated text based on the given prompt.

Clean up

After you have tested the models and created endpoints for the example proprietary foundation models, delete the SageMaker inference endpoints and delete the models to avoid incurring charges.

Conclusion

In this post, we showed you how to get started with proprietary models from model providers such as AI21, Cohere, and LightOn in SageMaker Studio. Customers can discover and use proprietary foundation models in SageMaker JumpStart from Studio, the SageMaker SDK, and the SageMaker console. With this, they have access to large-scale ML models that contain billions of parameters and are pre-trained on terabytes of text and image data, so they can perform a wide range of tasks such as article summarization and text, image, or video generation. Because foundation models are pre-trained, they can also help lower training and infrastructure costs and enable customization for your use case.

Resources: SageMaker JumpStart documentation; SageMaker JumpStart Foundation Models documentation; SageMaker JumpStart product detail page; SageMaker JumpStart model catalog.

About the authors: June Won is a product manager with SageMaker JumpStart. He focuses on making foundation models easily discoverable and usable to help customers build generative AI applications. Mani Khanuja is an Artificial Intelligence and Machine Learning Specialist SA at Amazon Web Services (AWS). She helps customers use machine learning to solve their business challenges on AWS, and she spends most of her time diving deep and teaching customers about AI/ML projects related to computer vision, natural language processing, forecasting, ML at the edge, and more. She is passionate about ML at the edge and has built her own lab with a self-driving kit and a prototype manufacturing production line, where she spends a lot of her free time. Nitin Eusebius is a Sr. Enterprise Solutions Architect at AWS with experience in software engineering, enterprise architecture, and AI/ML. He works with customers to help them build well-architected applications on the AWS platform, and he is passionate about solving technology challenges and helping customers with their cloud journey." Using Amazon EC2 Spot Instances and Karpenter to Simplify and Optimize Kubernetes Infrastructure _ Neeva Case Study _ AWS.txt,"In late 2021, Neeva worked alongside the Karpenter team to experiment with and contribute fixes to an early version of Karpenter. The team also connected Karpenter to its Kubernetes dashboard to gather metrics on usage.
Neeva experimented with different instance types until it found a combination of Amazon EC2 Spot Instances and Amazon EC2 On-Demand Instances, which make it possible to pay for compute capacity by the hour or second, that helped the company control costs while meeting its performance requirements. Neeva runs its jobs at large scale, and costs can add up quickly, so the company uses Spot Instances to stay within budget. "We can more effectively use Amazon EC2 Spot Instances because Karpenter adopts some of the best practices of Spot Instances, including flexibility and instance diversification," says Mohit Agarwal, infrastructure engineering lead at Neeva. "We can also take advantage of the purchasing option of On-Demand Instances as needed for critical pipelines."

In the past, changing from one instance type to another would have required a team member to create a new node group, set it up with the right instances, confirm that the group was deployed through Terraform, an open-source infrastructure-as-code tool, and then make a corresponding change to Neeva's Kubernetes configuration. "Now, any of our engineers can make that change on the Kubernetes side," says Asim Shankar, chief technology officer at Neeva. "It's just one Karpenter provisioner file where we can specify what instance type we want, and Karpenter handles the rest."

Now that it can spin up new instances quickly without spending as much time on infrastructure management, Neeva can iterate at a higher velocity and run more experiments in less time, improving the company's search engine, delivering a better customer experience, and driving adoption. For example, Neeva reduced its indexing jobs from 18 hours to just 3 hours for nearly the same cost, letting it refresh its web index faster. Neeva can also more efficiently run its large language models, which the company uses to summarize the web and provide a richer search experience. In October 2022, Neeva launched in Europe, which required building indexes containing French and German documents. "Iteration time was lower on those because we could run our experiments faster," says Shankar.
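To make the "one provisioner file" concrete: a Karpenter provisioner from the v1alpha5 era, matching the timeframe of this case study, declares acceptable capacity types and candidate instance types, and Karpenter picks the best fit per workload. The sketch below is illustrative, not Neeva's actual configuration; the instance types, limits, and cluster discovery tags are placeholders. It creates the equivalent object with the Kubernetes Python client, although the same spec is more commonly applied as a YAML manifest.

```python
from kubernetes import client, config

# Illustrative Karpenter v1alpha5 Provisioner; all values are placeholders.
provisioner = {
    "apiVersion": "karpenter.sh/v1alpha5",
    "kind": "Provisioner",
    "metadata": {"name": "default"},
    "spec": {
        "requirements": [
            # Prefer Spot capacity with an On-Demand fallback.
            {"key": "karpenter.sh/capacity-type", "operator": "In",
             "values": ["spot", "on-demand"]},
            # Diversify across several instance types, a Spot best practice.
            {"key": "node.kubernetes.io/instance-type", "operator": "In",
             "values": ["c5.2xlarge", "c5a.2xlarge", "m5.2xlarge"]},
        ],
        "limits": {"resources": {"cpu": "1000"}},  # cap total provisioned CPU
        "ttlSecondsAfterEmpty": 30,  # reclaim empty nodes quickly
        "provider": {
            "subnetSelector": {"karpenter.sh/discovery": "example-cluster"},
            "securityGroupSelector": {"karpenter.sh/discovery": "example-cluster"},
        },
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_cluster_custom_object(
    group="karpenter.sh",
    version="v1alpha5",
    plural="provisioners",
    body=provisioner,
)
```

Changing the instance-type mix then becomes an edit to this one object rather than a new node group plus a Terraform change, which is the simplification the quote above describes.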
How Neeva Uses Amazon EC2 Spot Instances and Karpenter to Simplify and Optimize Kubernetes Infrastructure

Learn how Neeva, an AI-powered, ad-free search engine, balanced scalability and cost optimization using Karpenter and Amazon EC2 Spot Instances. Key results: achieved cost optimization; cut indexing jobs from 18 to 3 hours; reduced time spent waiting on infrastructure management by 10–100 hours per week; increased speed of iteration and shortened development cycles; improved visibility into compute resource usage.

Neeva, an ad-free private search engine powered by artificial intelligence (AI), needed a cost-efficient, scalable way to crawl, process, and index billions of web pages daily. The company, which uses a subscription-based business model, sought a solution that would maintain cost optimization while scaling its compute resources and would empower its small team to manage those resources on its own.

About Neeva: Founded in 2019 with the mission of providing a user-first search experience, Neeva delivers high-quality search results without any ads, gives answers powered by AI, and protects user privacy by blocking trackers. "Our customer and our user are the same person," says Asim Shankar, chief technology officer at Neeva. "We have built a better product because we have no competing incentives."

Opportunity | Using Karpenter to Reduce Time Spent Waiting on Infrastructure Management by 100 Hours per Week

Since its inception, cloud-native Neeva has built its infrastructure using Amazon Web Services (AWS). The company containerized its workloads using Amazon Elastic Kubernetes Service (Amazon EKS), a managed service for running Kubernetes in the AWS Cloud and in on-premises data centers, and runs its clusters on Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload. Specifically, Neeva uses Amazon EC2 Spot Instances, which let businesses take advantage of unused Amazon EC2 capacity in the AWS Cloud at savings of up to 90 percent. However, provisioning new instance types in Amazon EKS required manual configuration and expertise in cloud resource management that few engineers had, creating a bottleneck that slowed the team's development cycles. When the Neeva team learned about Karpenter, an open-source project for fast and simple compute provisioning, autoscaling, and lifecycle management for Kubernetes, it recognized a solution for simplifying its infrastructure and balancing scalability with cost optimization. Since adopting Karpenter alongside other AWS solutions, Neeva has improved its scalability, its agility, and the speed of its development cycles, and it has saved its team up to 100 hours per week of wait time on systems administration.

Outcome | Using Karpenter to Scale Neeva's Innovative Search Engine

Now that Neeva uses Karpenter to provision infrastructure resources for its Amazon EKS clusters, it can iterate quickly by democratizing its infrastructure changes. The company is ready to keep innovating, launching in new regions, and improving its search engine at a rapid pace, all while staying within its budget using Spot Instances. As a result, it is prepared to deliver even better ad-free search experiences for its customers. "The bulk of our compute is or will be managed using Karpenter going forward," says Shankar. "We are very confident in the ability of our systems to scale using AWS solutions."

Solution | Increasing Iteration Speed Using Karpenter and Amazon EC2 Spot Instances
When Karpenter became available in November 2021, Neeva knew its team could use it to self-manage its three compute clusters, which involved up to 1,000 machines at peak. Typically, compute in Amazon EKS is managed by creating autoscaling groups for different workloads. With Karpenter, actively managing autoscaling groups or managed node groups is unnecessary, and instances tailored to a workload can be provisioned and de-provisioned on demand. "The complexity of understanding different compute instances to standardize for the workload used to slow our developers down," says Shankar. "Using Karpenter, we no longer have to worry about fitting our workloads to compute instances, and we have simplified our overall system. Our developers only need to understand Kubernetes and don't have to think about autoscaling group configurations or matching to precise instance types."

By using Karpenter, Neeva has improved its visibility and can track its compute resource usage more closely. The company has also improved productivity, which has led to more cost savings. Having self-managing infrastructure saves the Neeva team anywhere from 10 to 100 hours every week, because developers no longer experience delays when a managed node group or a particular instance type doesn't have enough capacity. "Sometimes, someone's job would get stuck in the pipeline over the weekend, and it was hard to get someone with Terraform or Amazon EKS privileges to debug," says Avinash Parchuri, infrastructure engineering lead at Neeva. "We'd then pay the price in terms of delayed experimentation. Now that all engineers can modify their workloads through Kubernetes configurations, those issues are resolved."" Using Amazon SageMaker to accelerate and deploy predictive editorial analytics solutions _ Smartocto Case Study _ AWS.txt,"Smartocto Deploys Editorial Analytics Solution in 3 Months Using Amazon SageMaker

In 2021, smartocto worked alongside the Amazon Web Services (AWS) team to build a proof of concept to test Amazon SageMaker, which gives users the ability to build, train, and deploy ML models for virtually any use case with fully managed infrastructure, tools, and workflows. A few months later, smartocto decided to migrate all its ML models to Amazon SageMaker, freeing its team to focus on innovation. In less than 3 months, the company developed and deployed a predictive editorial analytics solution called Smartify, which helps its customers create relevant, engaging content and grow their audiences. With its previous architecture, it had been difficult for smartocto to quickly onboard new customers and deploy new ML models, because its teams had to build a new secure environment for each customer and complete several manual tasks during onboarding, making the overall process prone to human error.

Customers have been very happy with the insights they can glean from using Smartify to improve their content and grow their audience. "Smartify is the future of analytics," says Rutger Verhoeven, chief marketing officer at smartocto. "It's been very well received by news and media companies, and it supports them in their decision-making and marketing strategies." The company has also cut its compute costs using Amazon SageMaker, saving hundreds of dollars each month. With these savings, smartocto plans to iterate new versions of Smartify that will include more analytics and features for its customers.
Founded in 2015, smartocto provides content analytics to 350 newsrooms and media companies around the world through its smartocto system, which features both near-real-time and historical data. To drive its analytics, the company uses ML, which it supports with a combination of open-source products and AWS services, including Amazon Elastic Compute Cloud (Amazon EC2), which provides secure and resizable compute capacity for virtually any workload.

In 2021, smartocto learned about Amazon SageMaker and engaged the AWS team to build a proof of concept to test the solution on one of the company's existing ML models. "We were looking for an ML solution that would help us lower our compute costs, reduce the time spent on managing our infrastructure, and free our teams to focus on fine-tuning the accuracy of our algorithms," says Ilija Susa, cofounder and chief data officer at smartocto. "We realized that we could save a lot of time and support near-real-time predictions using Amazon SageMaker." After completing the proof of concept, the company migrated several of its existing ML models to different Amazon SageMaker endpoints, which it completed in a few months. Because smartocto could now support predictions in near real time, the company decided to develop Smartify, a predictive editorial analytics solution that uses ML to forecast the expected engagement (such as click rates, likes, and shares) of a news post on a particular channel.

One of the features smartocto uses is Amazon SageMaker Studio, the first fully integrated development environment for ML. Using this feature, smartocto's teams can quickly share and save ML notebooks from anywhere, which helped its data science and data engineering teams collaborate across divisions and fast-track the development of Smartify. "Our data science team focused on developing our algorithms to generate accurate predictions, and our data engineering team led the automation and management of our infrastructure," says Susa.
"We didn't have to engage our systems engineering team, which saved us a lot of time and resources."

Solution | Developing Smartify in 3 Months Using Amazon SageMaker

Smartocto began developing Smartify in February 2022, and to upskill its staff and accelerate its time to market, the company engaged AWS Training and Certification, which helps participants learn from AWS experts, advance their skills and knowledge, and build their future in the cloud on AWS. The company also relied on the AWS team for technical support. "It was a great experience working alongside the AWS team," says Đorđe Marjanović, senior data engineer at smartocto. "The AWS team provided us with additional resources and examples of how to use various features on Amazon SageMaker." The company also learned how to set up Amazon SageMaker to run with its existing programming language, Python, using the Amazon SageMaker Python SDK, which supports managed training of models with ML frameworks such as TensorFlow and PyTorch. In less than 3 months, smartocto finished developing Smartify, and the company quickly deployed the solution to production, an estimated 6 months ahead of schedule. "It was amazing how fast we were able to release Smartify using Amazon SageMaker," says Susa.

Using Amazon SageMaker, smartocto has achieved 10 times lower resource usage per ML model while delivering better predictions. Smartocto also automated the process of onboarding new customers to Smartify. Previously, it could take smartocto a few weeks to set up one of its solutions for a new customer; now the company can complete onboarding in a matter of days by using multimodel endpoints and hosting a unique ML model for each of its customers. "Adding new customers is much faster and simpler for us to do," says Susa. "We can spend our time focusing on training our ML models for accuracy instead." Since releasing Smartify, smartocto has deployed the solution for 10 of its customers.

Key results: less than 3 months to develop Smartify; 10x lower resource usage per ML model while delivering better predictions; days instead of weeks to onboard new customers; monthly cost savings of hundreds of dollars; simplified deployment of its ML models.

About smartocto: Headquartered in the Netherlands, smartocto BV provides content analytics driven by ML to 350 newsrooms and media companies around the world.

AWS Services Used: Amazon SageMaker, built on Amazon's two decades of experience developing real-world ML applications, including product recommendations, personalization, intelligent shopping, robotics, and voice-assisted devices; Amazon SageMaker Studio, which provides a single, web-based visual interface for all ML development steps and can improve data science team productivity by up to 10x; Amazon EC2; and AWS Training and Certification.

Content analytics provider smartocto BV (smartocto) wanted to simplify the deployment of its machine learning (ML) models so that it could deliver richer editorial analytics and improve its customer satisfaction.
The company had been using a combination of open-source and cloud solutions to self-host its ML workloads, but that combination was becoming increasingly time consuming to manage." Using Amazon SageMaker to improve response time of its demand forecast service by 200 percent _ Visualfabriq Case Study _ AWS.txt,"For Visualfabriq and its customers, the biggest impact of the migration to SageMaker has been a significant performance improvement for its solution. By moving inference from the web servers to SageMaker, the solution is more efficient, and the costs are consistent and transparent. The company improved the response time of its demand forecast service, which predicts the impact that a promotional action will have on a retailer's sales volume, by 200 percent. "This type of performance improvement is really important for our customers because predicting a promotion happens a lot," says Christos Tselas, senior machine learning engineer at Visualfabriq. "Moving the bar from 2 seconds to 1 second, or from 10 seconds down to 5 seconds, for the response time is a significant time savings and makes the user experience better when people are predicting hundreds of promotions per day."

Visualfabriq offers a revenue management solution with applied artificial intelligence capabilities to customers in the consumer packaged goods industry. Its founders had experience in the industry and launched the company with a vision of using data to offer insights for optimal decision making. Since its founding in 2013, the company has grown from an international startup to a global vendor that supports customers worldwide from offices in seven countries on four continents.

Visualfabriq currently has 50 ML models in production, and the company plans to scale up to onboard additional customers. It is also working on additional features, such as model self-service infrastructure so that customer data scientists can build and manage their own models, and a model pipeline so that the company can translate input data into a model in a standardized way. A future goal is to use an environment from Amazon SageMaker Studio, a fully integrated development environment for ML, to facilitate collaboration between Visualfabriq's data science team and the customer's team during model creation. "Using Amazon SageMaker, we can deliver value to customers, and customers are happier now than they were before," says Jelle Verstraaten, team lead for forecast prediction, artificial intelligence, and revenue growth management at Visualfabriq. "You can't put a price on that."

For Visualfabriq's customers, the migration to SageMaker was smooth and went without any major outages, but the change didn't go unnoticed: one customer reached out to express its satisfaction when a process that previously took around 30 seconds was reduced to 7 seconds. Visualfabriq strives to stay innovative and continues to make significant improvements for its customers. "It's so important that we have conversations about what's next so that we can be in front of the technology curve, outperforming our competitors and delivering consistent and sustainable value to our clients," says Jaco Brussé, chief executive officer (CEO) and cofounder at Visualfabriq.
"We always have constructive dialogue with the teams at AWS about how we can get better every day."

Increased Performance and Standardization Using Amazon SageMaker with Visualfabriq

Learn how Visualfabriq, in the consumer packaged goods industry, improved performance and standardized the deployment of ML models using Amazon SageMaker. Key results: 200 percent improvement in response time for its demand forecast service; improved customer satisfaction with faster response times; standardized deployment for easier customer onboarding; hours per month saved on maintenance and troubleshooting because of increased visibility.

Opportunity | Using Amazon SageMaker to Implement Improved Revenue Management Solution for Visualfabriq in Under 2 Months

As software-as-a-service company Visualfabriq grew its customer base and expanded its machine learning (ML) capabilities, the company needed to adapt its technology stack to improve performance and make models easier to manage. Visualfabriq was already all in on Amazon Web Services (AWS) and decided to migrate its ML models to Amazon SageMaker, which provides the fully managed infrastructure, tools, and workflows needed to build, train, and deploy ML models for virtually any use case. Using SageMaker, Visualfabriq improved model response times by 200 percent and deployed a scalable solution that requires less manual intervention and facilitates faster onboarding of new customers.

Solution | Improving Performance and Standardizing Deployment of ML Models Using Amazon SageMaker

In 2021, Visualfabriq started implementing SageMaker with the goal of creating a scalable, reproducible infrastructure for its ML models. With its previous infrastructure, Visualfabriq saved all its customers' models in Amazon Simple Storage Service (Amazon S3), an object storage service built to retrieve any amount of data from anywhere. Models were loaded onto the web server while the user waited for the output, which led to inefficiencies and made issues difficult to diagnose. Visualfabriq had used AWS services from the beginning and chose to migrate its models to SageMaker to reduce manual development work, facilitate automation for model deployment and usage, and gain the ability to monitor data about its models. By using multi-model inference endpoints, the company could also make its solution more cost effective and efficient, because SageMaker can skip the downloading and loading steps by keeping models in memory. Visualfabriq implemented its solution using multi-model inference endpoints from SageMaker in 1–2 months.
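As an illustration of this multi-model endpoint pattern, the sketch below uses assumed names throughout; the container image, S3 prefix, role, and per-customer artifact keys are placeholders, not Visualfabriq's implementation. All model artifacts share one S3 prefix, a single endpoint serves them, and each request names its target model, which lets SageMaker keep hot models cached in memory.

```python
import json
import boto3
from sagemaker import Session
from sagemaker.multidatamodel import MultiDataModel

session = Session()

# One S3 prefix holds every customer's model artifact; SageMaker loads
# artifacts on demand and keeps recently used models in memory.
mdm = MultiDataModel(
    name="demand-forecast-models",
    model_data_prefix="s3://example-bucket/forecast-models/",
    image_uri="<inference-container-image>",  # placeholder container
    role="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    sagemaker_session=session,
)
mdm.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    endpoint_name="demand-forecast-multi-model",
)

# Route a request to one customer's model via TargetModel.
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="demand-forecast-multi-model",
    TargetModel="customer-a.tar.gz",  # key relative to model_data_prefix
    ContentType="application/json",
    Body=json.dumps({"promotion_discount": 0.25, "duration_weeks": 2}),
)
print(response["Body"].read())
```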
Outcome | Rolling Out Models Using Amazon SageMaker to Additional Customers

Using SageMaker, Visualfabriq has developed a standard for deploying its artificial intelligence to customers in a scalable way that supports future growth. Model deployment is more consistent and faster because the company can initiate a specific endpoint and automatically upload a file to make it available right away on the inference endpoint. Visualfabriq can also use SageMaker to see whether a model is deployed and running effectively. This increased visibility saves the company about 2 hours per month by eliminating maintenance and troubleshooting time. "We can streamline our processes because developers aren't distracted by issues, like determining if a model is deployed," says Verstraaten. Additionally, Visualfabriq can onboard customers and deploy models faster because the process is standardized and transparent. "Because we have less manual effort and can see that everything is working correctly using Amazon SageMaker, we are more confident in onboarding additional customers and creating more models," says Tselas.

About Visualfabriq: Visualfabriq offers a revenue management solution with applied artificial intelligence capabilities to customers in the consumer packaged goods industry. Founded in 2013, the company has grown from an international startup to a global vendor.

AWS Services Used: Amazon SageMaker, built on Amazon's two decades of experience developing real-world ML applications; Amazon Simple Storage Service (Amazon S3), an object storage service offering industry-leading scalability, data availability, security, and performance." Using Amazon SageMaker to Personalize Sleep Therapy for Millions of Patients _ ResMed Case Study _ AWS.txt,"ResMed Uses Amazon SageMaker to Personalize Sleep Therapy for Millions of Patients

ResMed rapidly built an AI/ML platform proof of concept on Amazon Web Services (AWS), using as its backbone Amazon SageMaker, which supports companies in building, training, and deploying ML models for any use case with fully managed infrastructure, tools, and workflows. Using AWS, ResMed built the Intelligent Health Signals (IHS) platform. This automated AI/ML platform has greatly expanded ResMed's AI/ML capabilities so that it can simplify ML model development and deployment for data scientists, accelerate time to market, and scale globally, facilitating personalized therapy for ResMed users with chronic sleep disorders.

In 2021, ResMed didn't have the automated, unified AI/ML self-service solution needed to securely run inference over the very large volumes of patient sleep data required to meet its 2025 goal. The first version of IHS was built alongside Manifold, an AWS Partner with which ResMed had a strong track record of joint innovation. Although successful as a proof of concept, the container-based framework was developed by data scientists who each used different tools, which forced them to take responsibility for that infrastructure in perpetuity. "Leaving it to an individual developer to build their own toolbox isn't scalable, nor will it lead to the rigorous quality we want in an end product," says Badri Raghavan, ResMed's vice president for AI and ML.
Solution | Building an AI/ML Platform on Amazon SageMaker in 1 Year

Alongside Manifold, ResMed began building a second version of IHS, its next-generation ML solution, in early 2022. For guidance, the team took part in AWS Data Lab, which offers accelerated, joint engineering engagements between customers and AWS technical resources to create tangible deliverables that accelerate data, analytics, AI/ML, serverless, and containers modernization initiatives. "The AWS Data Lab was great," says Philomena Lamoureaux, senior manager of ML and AI at ResMed. "We had the time blocked out for our developers to focus only on the development and the education for this proof of concept." After the AWS Data Lab, Amazon SageMaker adoption at ResMed more than doubled in 3 months. The prototype solution rolled out in April 2022, just 2 months after ResMed worked alongside the AWS Data Lab team, and the foundational AI/ML capabilities of the IHS solution on Amazon SageMaker were deployed within 6 months.

ResMed's AI/ML solution uses Amazon SageMaker Processing to run preprocessing, postprocessing, and model evaluation workloads on fully managed infrastructure. ResMed takes advantage of many Amazon SageMaker features to train models and pipelines and to choose deployment types, including near-real-time and batch inference. (See the architecture figure for more details on ResMed's solution.) These ML models deliver near-real-time predictions to the myAir application, which then tailors and delivers content to myAir users; each ML model creates up to 2 million predictions per day. In addition to in-app notifications, myAir sends personalized email campaigns to customers using Amazon Pinpoint, a flexible and scalable outbound and inbound marketing communications service. "Previously, all myAir users would receive similar messages from the app," says Urvashi Tyagi, chief technology officer at ResMed. "IHS has facilitated personalized interactions with patients through myAir based on which ResMed device they use, their waking hours, and additional contextual data." Now over 18.5 million patients enjoy tailored content and a personalized experience. "Our team can make sure patients get the benefit of all the data we have," says Prakhar Shukla, director of data engineering at ResMed.
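A minimal sketch of a SageMaker Processing job of the kind described above follows; the script name, S3 paths, and role are placeholders, not ResMed's pipeline.

```python
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor

# A Processing job runs a script on managed infrastructure for
# preprocessing, postprocessing, or model evaluation. All names here
# are placeholders.
processor = SKLearnProcessor(
    framework_version="1.0-1",
    role="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

processor.run(
    code="preprocess.py",  # e.g., feature engineering over device telemetry
    inputs=[
        ProcessingInput(
            source="s3://example-bucket/raw-telemetry/",
            destination="/opt/ml/processing/input",
        )
    ],
    outputs=[
        ProcessingOutput(
            source="/opt/ml/processing/output",
            destination="s3://example-bucket/features/",
        )
    ],
)
```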
Opportunity | Searching for an AI/ML Solution to Scale Globally

Digital health technology company ResMed is one of the leading global providers of cloud-connected solutions for people with sleep apnea, chronic obstructive pulmonary disease, asthma, and other chronic conditions. From July 2021 through June 2022, ResMed helped improve the lives of over 140 million people in over 140 countries, and its goal is to improve 250 million lives per year by 2025. However, its previous artificial intelligence (AI) and machine learning (ML) capabilities couldn't process enough data to deliver personalized sleep recommendations at that scale. It needed a way to streamline ML development and scale its operations quickly.

Outcome | Using AWS to Personalize Treatment for Millions of Sleep Patients

ResMed used Amazon SageMaker to rapidly build the AI/ML IHS solution that supports personalized sleep therapy for over 18.5 million patients worldwide. "Prior to adopting Amazon SageMaker, all myAir users would receive the same messages from the app at the same time, regardless of their condition," says Raghavan. "Amazon SageMaker has helped facilitate more personalized therapy for ResMed users. We took advantage of Amazon SageMaker features to train model pipelines and to choose deployment types, including near-real-time and batch inferences, to deliver tailored content to myAir users." In addition, says Raghavan, "Amazon SageMaker has helped us to achieve our key goal of embedding ML capabilities across our global organization by deploying ML models in days or weeks compared with months."

ResMed data scientists now have more time and flexibility. "The deployment, serving, and monitoring are streamlined and automated as much as possible so that data scientists can create a model without being tied to the infrastructure they build," says Lamoureaux. "They can move on and have the space to be creative." Using Amazon SageMaker, ResMed data scientists accelerate time to market by deploying ML models in days or weeks instead of months and by cutting AI/ML pipeline processing time by several hours.

Learn how ResMed, in digital health technology, built a streamlined AI/ML solution in less than 1 year using Amazon SageMaker. Key results: less than 1 year to create a fully operating AI/ML solution; ML model deployment cut from months to days or weeks; AI/ML pipeline processing time cut by several hours; personalized sleep therapy delivered to over 18.5 million patients; up to 2 million predictions processed per day per ML model.

About ResMed: ResMed provides digital health technologies and cloud-connected medical devices that transform care for people with sleep apnea, chronic obstructive pulmonary disease, and other chronic diseases, as well as out-of-hospital software platforms that support caregivers. These solutions improve quality of life, reduce the impact of chronic disease, and lower costs for consumers and healthcare systems in more than 140 countries.

AWS Services Used: Amazon SageMaker; AWS Glue; AWS Data Lab; and Amazon Pinpoint, which offers marketers and developers one customizable tool to deliver customer communications across channels, segments, and campaigns at scale.
ResMed provides continuous positive airway pressure devices and masks for people with sleep apnea, chronic obstructive pulmonary disease, and other sleep disorders. This cloud-connected equipment collects data on patients' sleep patterns and shares it with patients through ResMed's myAir patient engagement application. myAir's Smart Coaching feature then uses AI/ML to send customized recommendations to each patient to improve their outcomes.

ResMed chose Amazon SageMaker to build a centralized, standardized AI/ML solution because it scaled globally and connected well with the solutions the company was already using for data storage. In 2018, ResMed had built a data lake on AWS that was compliant with regional data regulations. Amazon SageMaker connects seamlessly with this data lake through AWS Glue, a serverless data integration service that makes it simple to discover, prepare, and combine data for analytics, ML, and application development.

Figure: ResMed AI/ML Intelligent Health Signals platform flow diagram." Using Computer Vision to Enable Digital Building Twins with NavVis and AWS _ AWS Partner Network (APN) Blog.txt,"AWS Partner Network (APN) Blog

Using Computer Vision to Enable Digital Building Twins with NavVis and AWS

by David Sauerwein, Markus Winterholer, Simon Boehmer, and Ignacio Perez Hallerbach | on 20 JUN 2023 | in Amazon SageMaker, Analytics, Artificial Intelligence, AWS Partner Network, Case Study, Customer Solutions, Industries, Thought Leadership

By David Sauerwein, Sr. Data Scientist – AWS; Markus Winterholer, Delivery Practice Manager – AWS; Simon Boehmer, Cloud Application Architect – AWS; and Ignacio Perez Hallerbach, VP Global Head of Partners & Platform – NavVis

Managing existing brownfield buildings is a challenging task because teams usually lack accurate ground-truth data. Object detection algorithms are a key technology for automating and scaling the creation of a digital building twin, providing a solution to this challenge. To detect objects in indoor environments with machine learning, NavVis and Amazon Web Services (AWS) collaborated to build a digital building twin for a large industry customer. This post covers the requirements for the customer's application, the main challenges in training, evaluating, and deploying custom object detection models on Amazon SageMaker, the management of multiple models, and the integration into an existing digital twin initiative using a fully serverless web application.

NavVis is an AWS Partner that supplies fast, reliable spatial data to service providers and enterprises seeking to capture photorealistic digital twins of the built environment. Its digital factory solutions enable greater organizational operability, productivity, agility, and profitability.
Digital Twins

Digital twins have become an essential tool for data-driven modeling and process optimization along the entire value chain. AWS has defined a 4-level framework to help customers categorize their digital twin use cases and understand the data, models, and business processes needed to enable them. Here, we have built an L2 Informative digital building twin. The web application shown in Figure 1 enables users to see detected points of interest on a floor plan.

Figure 1 – Object detection web application.

The digital twins presented here use accurate 3D point cloud data and panoramic images provided by the NavVis Reality Capture Solution. To further populate the digital twins with object-level information, AWS Professional Services automated the object detection in the cloud. A digital replica of a building that serves as a single source of truth for infrastructure data enables a large variety of use cases for modern facility management: monitoring of infrastructure, building auditing, maintenance, performance and safety improvements, and compliance checks can all be automated and supported with an accurate digital building twin.

Problem and Business Value

For newly constructed buildings, a range of digital plans and inventory lists are available; usually, the data is extracted from the construction plans as a planned model. To validate that the assets are installed in the right place and quantity, a manual assessment is required to create an as-is building model. For existing brownfield buildings, asset data and accurate plans are scarce: object-level information in the form of inventory lists is missing or outdated, and often entire plans of the site don't reflect the latest state of the building. To fill this data gap, the customer conducted yearly building reviews manually. However, these manual processes are error-prone, produce low-quality data that can only be used for a limited number of use cases, and are hard to scale.

The new solution creates high-density, high-quality data for a variety of stakeholders in an automated way, proving that these manual building reviews can be substituted by a digital, computer-vision-powered process. Automated data collection, inventory generation, and data review build a solid base for rich digital building twins. This generates efficiencies in facility management and enables buildings to be adapted quickly to users' needs. It also makes it possible to evaluate the data quality of automatically created building inventory lists and to assess a range of business cases designed around automated building reviews.

Goals and Requirements

A full machine learning (ML) pipeline is required to enable automated detection of objects. The initial prototype focuses on detecting and reviewing 13 object classes, including smoke detectors, desks, lights, exit signs, and fire extinguishers. These classes allow for the validation of a range of compelling business cases, such as:

- Validating that exit signs point to an emergency exit.
- Validating that fire extinguishers are present under adequate signage.
- Validating that a smoke detector is in every room.
- Planning maintenance activities, such as counting lights, counting desks for cleaning, or counting plants for watering.

The figure below shows detected objects of interest in a building hallway. Fire extinguishers are detected below the signs, which is compliant with the security requirements.
Figure 2 – Web application showing objects detected in a hallway.

In addition to that initial set of object classes, a further requirement is a low cost for adding new classes from different locations. The flexibility and scalability of the solution are important because of the high variability of buildings, both in layout and in equipment. A cost-optimized approach to data labeling and model training is emphasized by incorporating few-shot learning and pre-labeling. To guarantee a level of modularity, the solution enables users to include multiple models specialized in different object types. The user also has to deal with numerous large image datasets, so streamlined management of new datasets and buildings is another key requirement. Eventually, a substantial number of objects from different sources is detected. For usability, the application has to be accessible through a web browser, allow the user to validate the results, integrate with the existing digital twin project, and ideally only incur costs when it's actually used. Next, we walk through the high-level components of the solution.

Data Acquisition

NavVis' mobile mapping solutions played a key role in capturing highly accurate 3D scans of the relevant indoor environments. The data was captured using the NavVis M6 and NavVis VLX devices while continuously moving through the building. The scanning devices are equipped with a set of high-resolution cameras and LiDAR sensors, and a single scan package contains 3D point cloud data, raw camera images, and panoramic images. For training a 2D object detection model, the panoramic images were selected due to their smaller size, which reduces the effort during the labeling phase. The following image shows an example of the scanning process using NavVis VLX.

Figure 3 – Wearable mobile mapping system scanning process using NavVis VLX.

Image Tiling

The panoramic image resolution is 8192x4096 pixels, and file size varies from 5-10 MB. Modern object detection frameworks expect smaller images. To address this, image tiling is introduced: images are resized to 2048x1024 pixels, and a sliding window sized at 1024x1024 pixels then extracts smaller tiles with 50% overlap, resulting in three images with the annotations split accordingly. This is a sweet spot for available object detection methods that still avoids aggressive resizing, in which small objects like smoke detectors or exit signs could disappear.

Figure 4 – Example of image tiling.

Pre-Labeling

To reduce the time and cost of training a custom object detection model, pre-labeling was introduced. Pre-labeling adds a feedback loop to the labeling phase: an initial model is trained on a small number of labeled images (approximately 100). To achieve better generalization of a model trained on such a small dataset, new training examples are created from the existing training dataset using image augmentation (such as rotating, cropping, shifting, and color modification). The model is then used to populate the labeling tool with initial bounding boxes, converting the labeling task into refinement, where a user adjusts existing boxes and labels rather than starting from scratch. Pre-labeling is based on periodically retraining the model after each new portion of labeled data is ready. The pre-labeling phase reduces the time needed to label a single panoramic image from approximately 20 minutes to five minutes.
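As a rough sketch of the tiling scheme described above (library choice and panorama path are assumptions): resize the 8192x4096 panorama to 2048x1024, then slide a 1024x1024 window across the width with 50 percent overlap, which yields exactly three tiles.

```python
import numpy as np
from PIL import Image

def tile_panorama(path: str, tile: int = 1024, overlap: float = 0.5):
    """Resize a panorama to 2048x1024, then cut full-height 1024x1024
    tiles with 50% horizontal overlap (windows start at x = 0, 512, 1024)."""
    img = Image.open(path).resize((2048, 1024))
    arr = np.asarray(img)
    step = int(tile * (1 - overlap))  # 512-pixel stride
    return [arr[:, x:x + tile] for x in range(0, arr.shape[1] - tile + 1, step)]

tiles = tile_panorama("panorama.jpg")  # three 1024x1024 tiles
```

Bounding-box annotations would be clipped to each window's x-range in the same pass, which is the "annotations split accordingly" step mentioned above.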
Pre-Labeling

To reduce the time and cost of training a custom object detection model, pre-labeling was introduced. Pre-labeling adds a feedback loop to the labeling phase: an initial model is trained using a small number of labeled images (approximately 100). To achieve better generalization of a model trained on such a small dataset, new training examples are created out of the existing training dataset using image augmentation (such as rotating, cropping, shifting, and color modification). The model is then used to pre-populate the labeling tool with bounding boxes, converting the labeling task into refinement: a user adjusts existing boxes and labels rather than starting from scratch. The model is periodically retrained as each new portion of labeled data becomes ready. The pre-labeling phase reduces the time needed to label a single panoramic image from approximately 20 minutes to 5 minutes.

Image Pre-Selection

Images taken from a building’s interior contain thousands of different objects, so labeling objects of interest means finding a few instances spread across hundreds of images. To improve efficiency, few-shot learning is introduced. The method uses a pre-trained network and freezes all but the classification layer, so the network can be fine-tuned using only a few samples of a novel class. Detection accuracy is not the priority here, since the model’s sole purpose is to determine whether an image contains a particular object. The confidence threshold is set to a low value, because some false positives are acceptable and the goal is to find as many objects as possible. The trained model scans a large dataset and selects the images containing detections; only those images, with their accompanying bounding boxes, are passed on for further refinement. The image pre-selection phase reduces the number of images taken into further processing by up to 80 percent. Figure 5 shows results for the fire extinguisher class using the SSD ResNet50 FPN 1024×1024 model (TensorFlow 2 implementation) trained with only eight images.

Figure 5 – Example of image pre-selection.

Object Detection

The core of the solution is object detection, which determines what is present in a picture and finds its location. The model has to deal with a large variety of object sizes (from very big to very tiny) and an imbalanced dataset (such as an over-represented light class and only a few examples of defibrillator), all within a reasonable inference time. YOLO (You Only Look Once) is a family of object detection models known for being highly performant yet compact. Trained on the custom dataset, YOLOv5 outperformed the EfficientDet and SSD families (implemented in TensorFlow 2) and was selected as the main object detector.

Figure 6 – Comparison of YOLOv5 with other frameworks.

Amazon SageMaker multi-GPU instances speed up the computationally costly model training process. With an ml.p3.8xlarge instance, training on over 2,200 images completes in under one hour, allowing for multiple training and evaluation sessions in a single day. Mean average precision (mAP) across all classes is 81.6 percent, which is high, but single-image results aren’t very representative in this case. Due to the iterative nature of the data acquisition (the mapping device moves through the building, taking pictures at fixed time intervals), there is usually more than one chance to spot a single object. The model is therefore optimized for high precision, and multiple detections of a single instance are clustered in the postprocessing phase. To enrich the detection capabilities of the system, Amazon Rekognition, a cloud-based software-as-a-service (SaaS) computer vision platform, was included as an additional model, allowing the system to find objects such as doors or staircases in the images.
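A minimal boto3 sketch of such an Amazon Rekognition call; the bucket and object names are placeholders:

import boto3

rekognition = boto3.client("rekognition", region_name="eu-central-1")

# Detect generic labels (e.g., "Door", "Staircase") in one panorama tile.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "building-scans", "Name": "tiles/tile_001.jpg"}},  # placeholders
    MaxLabels=10,
    MinConfidence=70,
)

for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))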
Web Application

To let users easily apply this solution for building inventory creation and visualization of the object detection pipeline results, a fully AWS-powered web application was built. The application is used to upload new scans, start a detection run with specific object detection models, evaluate and refine results, and export selected objects of interest to adjacent tools. The serverless application uses the following AWS services:

• Deployment and hosting: AWS Amplify, AWS CloudFormation, and AWS CodePipeline
• User management: Amazon Cognito
• Backend: Amazon API Gateway, AWS Lambda, and AWS Step Functions
• Data persistence: Amazon DynamoDB

Figure 7 – Overall solution architecture.

Conclusion

In this post, we discussed how NavVis and AWS used object detection algorithms to create a digital building twin with object-level information. Cost-intensive and slow manual building inspections are replaced by highly scalable machine learning solutions in the cloud. The modular design of the solution makes it easy to onboard new datasets and models and to extend its capabilities over time. This shows that ML can play a significant role in driving efficiencies in facility management and in helping adapt buildings to customers’ needs.

NavVis – AWS Partner Spotlight

NavVis is an AWS Partner that supplies fast, reliable spatial data to service providers and enterprises seeking to capture photorealistic digital twins of the built environment. Its digital factory solutions enable greater organizational operability, productivity, agility, and profitability."

Valant Uses AWS Communication Developer Services to Help Behavioral Health Practices Drive Better Patient Engagement _ Valant Case Study _ AWS.txt,"Valant Uses AWS Communication Developer Services to Help Behavioral Health Practices Drive Better Patient Engagement

Valant Medical Solutions, Inc. provides electronic health record software to behavioral health providers and practices. To add enhanced telehealth capabilities and improve patient communication, the company turned to Amazon Web Services (AWS) to add voice, video, messaging, and email capabilities through AWS Communication Developer Services (CDS) and build a new telehealth solution for more than 2,500 behavioral health practices. AWS CDS are cloud-based APIs and SDKs that help builders add communication capabilities into their apps or websites with minimal coding.

About Valant Medical Solutions, Inc.

Valant Medical Solutions, Inc., based in Seattle, Washington, designs and develops web-based electronic health record (EHR) software that helps behavioral health providers and practices streamline administration tasks and improve patient outcomes. More than 20,000 behavioral health professionals in group and solo private practices across the United States use the Valant platform to treat individuals seeking behavioral healthcare, and the Valant IO system has extensive capabilities that enable providers to deliver value-based care through measurement-based assessment and ongoing outcome assessments.

Opportunity | Looking to Add More Features to a Telehealth Solution

Shortly after the onset of the pandemic in early 2020, Valant began offering a telehealth solution to provide virtual capabilities to practices and their patients. The solution was based on a digital communications platform that lacked a multi-user experience and many other requested features. “The platform we used offered peer-to-peer video only, and we needed group capabilities, chat, screen and file sharing, and a whiteboard,” says James Jay, chief technology officer at Valant Medical Solutions. “In behavioral health, it’s common to have parents, spouses, or other guests attend sessions, and we saw a significant demand from practices for multi-user functionality, as well as other features critical to engaging effectively with patients. We also had strong demand to integrate co-payment collection into telehealth check-in workflows in advance of sessions.”

As Valant explored new telehealth capabilities, the organization also saw an opportunity to better engage with patients by offering personalized and recurring reminders as part of its solution. “Our solution only allowed us to provide a single static appointment reminder, and many practices were buying third-party solutions with multiple reminders,” Jay says. “We needed to address this problem as part of our strategy to offer a fully integrated platform.”
Solution | Integrating Voice, Video, and Messaging and Automating Patient Reminders

Valant was already running much of its core IT environment on AWS, and an overall positive experience led the company to expand its use of AWS services. The company chose the Amazon Chime SDK, which helps builders add real-time voice, video, and messaging capabilities into their communications applications. “We already had a big investment with AWS, and Amazon Chime SDK offered the features we wanted in addition to easy implementation,” says Jay.

Using the Amazon Chime SDK’s real-time video capabilities, Valant created a new desktop and mobile telehealth solution that integrates with the company’s EHR and practice management software. Through the portal, patients can schedule and initiate video visits with practices directly from their MYIO patient portal.
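For orientation, here is a minimal boto3 sketch of creating a meeting and one attendee with the Amazon Chime SDK meetings API; the region and IDs are placeholder values, not Valant’s implementation:

import uuid
import boto3

chime = boto3.client("chime-sdk-meetings", region_name="us-east-1")

# Create a meeting that clients can join for a group telehealth session.
meeting = chime.create_meeting(
    ClientRequestToken=str(uuid.uuid4()),   # idempotency token
    MediaRegion="us-east-1",                # placeholder media region
    ExternalMeetingId="session-12345",      # placeholder session ID
)

# Add one participant; the browser or mobile client uses the returned
# join credentials with the Chime SDK client library to connect.
attendee = chime.create_attendee(
    MeetingId=meeting["Meeting"]["MeetingId"],
    ExternalUserId="patient-67890",         # placeholder user ID
)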
Valant then used Amazon Pinpoint and Amazon Simple Email Service (Amazon SES) to build an appointment reminder system that allows practices to communicate with patients over email, SMS, and voice. Amazon Pinpoint offers marketers and developers one customizable tool to deliver customer communications across channels, segments, and campaigns at scale, while Amazon SES lets companies reach customers confidently without an on-premises Simple Mail Transfer Protocol (SMTP) system. The portal and reminder system uses rule-based reminders that automatically send patients multiple appointment reminders with customized timing before a scheduled appointment. The system also sends communications for canceled appointments, overdue balances, portal onboarding, insurance and credit card expiration, group therapy appointments, patient management, facility closures, no-show follow-up scheduling, intake and signature assignments, and assigned clinical outcome measures that are reviewed by practices.

Valant also integrated the CDS services with Amazon Polly to convert text to speech for customized automated robocall reminders. “The tools we built around Amazon Pinpoint, Amazon SES, and Amazon Polly give practices a wide range of tools for communicating better with their patients,” says Jay.
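A minimal boto3 sketch of that text-to-speech step; the message, voice, and file handling are illustrative, and the telephony side that actually places the call is not shown:

import boto3

polly = boto3.client("polly", region_name="us-east-1")

# Synthesize the reminder text to an MP3 audio stream that a
# telephony integration can play back on an outbound call.
result = polly.synthesize_speech(
    Text="Hello, this is a reminder of your appointment tomorrow at 10 AM.",
    OutputFormat="mp3",
    VoiceId="Joanna",  # placeholder voice
)

with open("reminder.mp3", "wb") as f:
    f.write(result["AudioStream"].read())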
Outcome | Improving Patient Engagement and Driving Business Growth

• 21% business growth
• 2x add-on revenue growth
• 5% practice growth
• Increased engagement by using voice, video, messaging, and automated reminders

With the new Valant solution, practices can better engage their patients and communicate with them more frequently through automated reminders for appointments, insurance, no-show follow-up, and more, with each practice able to deliver communications via SMS, voice, and email. As a result of key features built over the last 12 months, Valant has grown its overall business by 21 percent and increased add-on revenue by more than 100 percent. “Because of our new telehealth and automated reminders, which offer more robust features such as group meetings, our clients have seen a revenue increase,” says Jay. “We’ve had an incredible adoption of these new tools, which is also helping us grow our market share and customer satisfaction.”

By using the new AWS-based telehealth solution, behavioral health practices have improved their communication and engagement with their patients through multi-feature video conferencing with group meeting and screen sharing capabilities. “By using AWS Communication Developer Services, we’ve given tools to providers that help them communicate more easily and stay in touch more frequently with patients in the channel of their choice,” says Jay. “Our goal was to improve patient engagement, and we’ve done that. The response from our practices has been overwhelmingly positive. Behavioral health patients are going through challenges, and getting multiple appointment reminders is very valuable; it’s an important part of helping them ensure they don’t miss valuable time with their behavioral health provider.”

As Valant continues to enhance its telehealth and patient portal solutions, it plans to take advantage of the cross-utilization of AWS services by standardizing on a single communications platform. “The fact that many of these services work with each other makes everything easier with both implementation and support, because we’re working with one team or person at AWS,” says Jay. The organization is currently working to implement Amazon Cognito as a single sign-on solution for practices. “We will be able to map a single identity through Amazon Cognito and also connect it to Amazon Pinpoint to do some unique things,” Jay concludes. “These services integrate well together, and that will help us add new features and capabilities as we grow.”"

Veolia Australia and New Zealand Case Study - Amazon Web Services (AWS).txt,"Veolia Migrates 34 Business-Critical Applications to AWS for Improved Scalability and Faster Data Access

Climate change discussions often touch on the importance of building circular economies, where sustainable manufacturing, consumption, and waste are taken into consideration. The benefits of this model go beyond the environment; when implemented wisely, circular economies can yield lasting financial gains. One report examining the impact of a circular economy in Australia, for example, estimates that it could deliver a $23 billion GDP boost for the country by 2025. Veolia is a global group dedicated to ecological transformation through sustainable waste, water, and energy management initiatives. Veolia Group employs 179,000 people worldwide, with a large operation in Australia and New Zealand, and has been pursuing digital transformation for several years, underpinned by three pillars: move to cloud, data for business, and security anytime, anywhere, from any device.

Veolia Australia and New Zealand (Veolia) began its cloud journey in 2019 with the migration of its Citrix environment to Amazon Web Services (AWS). At the time, AWS was the only provider meeting the data sovereignty, classification, and other solution requirements put forth by some of Veolia’s larger customers.

Benefits of AWS

• Reduces SQL database spend by 67%
• Stands up a Citrix environment in 1 hour instead of multiple days
• Speeds up deployment with infrastructure as code
• Enhances security with cloud-native tools
• Improves visibility into data with self-service reporting dashboards
• Gains flexibility to experiment with new services and fail fast

Building the Business Case for Cloud Migration

The initial support Veolia received also swayed its decision. “AWS was there from day one as we started the move-to-cloud conversation and engaged with us proactively,” says Pradeep Nandavaram, head of infrastructure and cloud at Veolia Australia and New Zealand. AWS introduced AWS Partner CMD Solutions to help with migration and implementation, and CMD guided Veolia through a detailed discovery and planning phase to understand the existing technology landscape and identify gaps, dependencies, and constraints.
CMD and Veolia then prepared a tailored, phased cloud migration roadmap and outlined the anticipated benefits. The CMD team performed an AWS Optimization and Licensing Assessment (AWS OLA) using the Cloudamize tool to support the business case alongside execution details. “The data collected in this discovery phase helped us rationalize the cloud migration business case to our leadership,” Nandavaram adds.

Lowering SQL Licensing Costs with Increased Flexibility

Some of Veolia’s technology stack runs on the Microsoft operating system, with SQL database servers, .NET applications, and Active Directory. As part of the migration process, Veolia worked with CMD to retire some databases and consolidate its remaining SQL databases running on Amazon Elastic Compute Cloud (Amazon EC2), reducing licensing costs and management overhead. “We had a wide variety of enterprise SQL licenses in our technology mix. Leveraging the skills of our internal teams and CMD, we were able to rationalize our SQL spend by optimizing licensing versions in some cases, to save on database costs,” says Nandavaram. With CMD’s help to re-platform, retire, or refactor SQL applications, Veolia is saving 67 percent on its database spend since migrating to AWS.

Enhancing Security and Resilience with Partner Support

Security was top of mind during Veolia’s migration planning. CMD worked with Veolia’s internal cloud team to first set up an AWS Landing Zone with multi-Availability Zone (multi-AZ) architecture to strengthen the business’s disaster recovery capabilities. Veolia also implemented Amazon GuardDuty for intelligent threat protection and AWS Backup to centralize and automate backup across AWS services. Veolia’s cloud team worked closely with CMD, following the structured learnCMD training path alongside frequent ad hoc discussions on building in the cloud using the landing zone concept. “CMD brought their expertise and experience in leading the cloud migration, and our team members benefited from a lot of hands-on learning along the way,” says Nandavaram.

Gaining a Mature CI/CD Pipeline with Infrastructure as Code

During the six-month implementation period with CMD, Veolia also accelerated infrastructure deployment by enhancing its continuous integration/continuous deployment (CI/CD) pipeline and relying more heavily on infrastructure as code. “We’re now able to spin up instances on the fly using automation we can copy and adapt to each deployment. CMD played an important role in bringing more maturity to our CI/CD approach,” Nandavaram says. Standing up a Citrix environment now takes Veolia about an hour using infrastructure as code; previously this was a tedious, multi-day exercise. “Velocity is key. We no longer follow the software development life cycle approach, which increases our agility,” adds Nandavaram.
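As a toy illustration of that kind of copy-and-adapt provisioning automation, here is a minimal boto3 sketch; the AMI, instance type, and tag are placeholders rather than Veolia’s actual templates:

import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Spin up one instance from a reusable automation script.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5.large",           # placeholder type
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "project", "Value": "citrix-env"}],  # placeholder tag
    }],
)
print(response["Instances"][0]["InstanceId"])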
CMD used AWS Application Migration Service (CloudEndure Migration) to support the migration of 34 business-critical Veolia applications to AWS over a period of 4–6 months. When the scope of the CMD-led project was completed, Veolia’s internal team was able to independently migrate several additional, complex enterprise applications to AWS over the subsequent 6–8 months, and Veolia has continued to evolve its AWS Landing Zone environment since implementation.

Facilitating Updated, Self-Service Reporting with a Data Lake

To support its second pillar of digital transformation, data for business, Veolia built a data lake using Amazon Simple Storage Service (Amazon S3), AWS Glue, Amazon Athena, and AWS Lake Formation. The data initiative ran in parallel with the migration of core systems to AWS, and data from the company’s various sources now powers reporting dashboards used across the organization. “Our systems are integrated on the AWS Cloud to transfer data daily across business units, whereas data syncs previously took weeks to extract and ingest on premises,” Nandavaram says. Furthermore, employees can visualize continually updated project data and customize dashboards to suit their reporting requirements without involving Veolia’s IT team. This self-service visualization and analytics capability empowers teams to make data-driven decisions faster.

Veolia also adopted Amazon Relational Database Service (Amazon RDS) for enterprise workloads, covering both in-house and commercial off-the-shelf applications. “The business requirements evolve very quickly for each application. AWS gives us the flexibility to try different approaches until we find the optimum configuration, something that would’ve taken weeks or months on premises,” Nandavaram explains.

Utilizing Edge Computing and IoT

Veolia currently uses more than 50 AWS services to support its digital initiatives. Looking ahead, Veolia plans to utilize edge computing and Internet of Things technology to monitor its digital solutions, facilities, and fleet. “This was not possible before due to the size of the data or the need for extensive capacity planning. Our enterprise solutions on AWS will be integrated with different systems, which will allow us to manage a range of digital assets efficiently,” Nandavaram says.

About Veolia Australia and New Zealand

Veolia Australia and New Zealand is part of the global Veolia Group, which is committed to ecological transformation through sustainable waste, water, and energy initiatives. Veolia has 179,000 employees worldwide who help develop access to natural resources and preserve and replenish available resources.
Português" Vocareum Offers Amazon Lightsail to Help over 50000 Cloud Learners Build Cloud Skills _ Vocareum Case Study _ AWS.txt,"to deploy sandbox environments quickly virtually limitless scaling Serves over 50,000 students Français AWS re/Start 2023 By adding Amazon Lightsail to its service offerings, Vocareum has made it simple and cost efficient for its customers to support students as they learn, experiment, and prepare for new opportunities. As demand for employees with cloud skills continues to accelerate around the globe, Vocareum plans to continue adding AWS solutions to its repertoire to help institutions meet changing needs. For example, as interest in ML grows, Amazon SageMaker—a solution for building, training, and deploying ML models—is becoming more popular among Vocareum’s customers. Vocareum is a cloud-native education technology company that provides digital learning and training solutions to educational institutions. Vocareum was founded in 2012 and today serves more than one thousand institutions and one million learners. Español Learn how Vocareum in the education industry offers Amazon Lightsail to help its customers spin up new test environments quickly and cost efficiently. using Amazon Lightsail Makes it simple About Vocareum Founded in 2012, Vocareum provides digital training solutions to over one thousand institutions—from colleges to corporations to online training services—and more than one million learners. Instructors use Vocareum’s learning platform to provide students with the environment and tools that they need to practice cloud skills, programming, and other data science concepts, including artificial intelligence and machine learning (ML). To help promote the best learning outcomes, the company offers an array of resources and services—including various AWS solutions—that instructors can make available to their students. “Our business is built on AWS,” says David Lin, vice president of business development at Vocareum. “We’ve provisioned over four million AWS accounts for students who are learning by using AWS solutions.” 日本語 AWS Services Used AWS re/Start is a cohort-based workforce development training program that prepares individuals for careers in the cloud and connects them to potential employers. Get Started 한국어 Outcome | Adding More AWS Services to Teach New Skills As part of its growing offering, the company wanted to provide instructors with a way to create sandbox environments where students could practice their skills. As a result, learners can now use Amazon Lightsail to set up instances quickly and simply—perfect for educational purposes. “As a platform provider, we want to make sure that we’re serving the community’s needs,” says Lin. “The fact that so many learners are using Amazon Lightsail tells us that we have accomplished this goal.” Overview | Opportunity | Solution | Outcome | AWS Services Used AWS Educate AWS Academy provides higher education institutions with a free, ready-to-teach cloud computing curriculum that prepares students to pursue industry-recognized certifications and in-demand cloud jobs. Customer Stories / Education to control cloud costs and usage Vocareum, like many other education technology companies around the globe, had used AWS in the past. However, they decided to offer Amazon Lightsail as an option to build simpler and smaller applications with low-cost, preconfigured cloud resources. 
Solution | Scaling in the AWS Cloud to Support Global Learners

Educators on Vocareum’s learning platform can now use Amazon Lightsail to spin up new test environments, scale their resources with predictable pricing, and better serve their students with hands-on learning experiences. Amazon Lightsail offers easy-to-use virtual private server (VPS) instances, containers, storage, databases, and more at a cost-effective monthly price. Since Vocareum began offering Amazon Lightsail in 2021, over 50,000 users have adopted the service. Instructors and students find it simple to launch Amazon Lightsail and get started quickly, making it a popular choice for both guided assignments and independent projects. Students use Amazon Lightsail when engaging in hands-on learning about cloud computing, ML, artificial intelligence, web development, and more.

Using AWS, Vocareum can also scale as needed to make its hands-on labs accessible to a growing number of learners. Students can access the labs through their web browsers using a laptop or desktop computer without having to install any special software. Educational institutions don’t need to use their own data centers to support learning, and neither does Vocareum. “We can scale virtually infinitely in the AWS Cloud and support learning globally,” says Lin.

Vocareum’s customers also find monitoring and optimizing their cloud costs to be simple when using Amazon Lightsail and other AWS solutions. Instructors can set a budget and restrict the resources that students use for each assignment. “For instance, we can say that inside a given lab, we want to give learners access to $20 worth of AWS resources,” says Lin. “We can also restrict how much time students are allowed to spend in an AWS account. This way, we can provide a sandbox environment constrained by budget and policy so that as students learn, they are using resources efficiently.”

Vocareum’s cloud lab solution is actively deployed in, and is an integral part of, many AWS Education Programs, including AWS Academy, AWS Educate, and AWS re/Start, through which learners gain practical, hands-on technical skills training. AWS Academy provides higher education institutions with a free, ready-to-teach cloud computing curriculum that prepares students to pursue industry-recognized certifications and in-demand cloud jobs. AWS Educate is open to any individual, regardless of where they are in their education, technical experience, or career journey. AWS re/Start is a cohort-based workforce development training program that prepares individuals for careers in the cloud and connects them to potential employers.

“Amazon Lightsail is a complete experience,” says Lin. “Learners get access to databases and scaling as part of the Amazon Lightsail feature set, which offers them flexibility and control.”
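A minimal boto3 sketch of launching one such preconfigured sandbox instance; the name, zone, blueprint, and bundle IDs are placeholder values:

import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

# Launch a small, preconfigured VPS for a student sandbox.
lightsail.create_instances(
    instanceNames=["student-sandbox-001"],   # placeholder name
    availabilityZone="us-east-1a",
    blueprintId="ubuntu_22_04",              # placeholder OS blueprint
    bundleId="nano_2_0",                     # placeholder instance bundle
)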
Outcome | Adding More AWS Services to Teach New Skills

By adding Amazon Lightsail to its service offerings, Vocareum has made it simple and cost efficient for its customers to support students as they learn, experiment, and prepare for new opportunities. Learners can now use Amazon Lightsail to set up instances quickly and simply, which is ideal for educational purposes. “As a platform provider, we want to make sure that we’re serving the community’s needs,” says Lin. “The fact that so many learners are using Amazon Lightsail tells us that we have accomplished this goal.” As demand for employees with cloud skills continues to accelerate around the globe, Vocareum plans to continue adding AWS solutions to its repertoire to help institutions meet changing needs. For example, as interest in ML grows, Amazon SageMaker, a solution for building, training, and deploying ML models, is becoming more popular among Vocareum’s customers. “What’s great about being built on AWS is that our company can offer a broad and growing range of labs to support the learning community,” says Lin."

Volkswagen Passenger Cars Case Study.txt,"Volkswagen Passenger Cars Uses NICE DCV for High-Performance 3D Remote Visualization

About Volkswagen Passenger Cars

Based in Germany, Volkswagen Passenger Cars is one of the leading carmakers in the world and Europe’s largest car manufacturer. The company has been one of the world’s largest car manufacturers for over 70 years and delivers more than 6 million automobiles to global customers every year from more than 50 production locations on five continents. It employs over 200,000 people and, with 70 different models, has a presence in all major market segments and almost every country.

Benefits of AWS

• Delivers reliable remote streaming of 3D applications to 1,000 engineers
• Provides near-real-time responsiveness with high image quality
• Enables flexibility for offsite employees
• Enhances security and protects critical data
More than 1,000 automotive engineers in the Volkswagen Passenger Cars division rely on multiple computer-aided engineering (CAE) applications, running on high-end Linux workstations, for crash safety and noise vibration harshness simulations. “Our engineers need computers with strong performance to do their work effectively,” says Gunther Mayer, IT specialist for research and development at Volkswagen Passenger Cars. For example, engineers create large simulations that show the noise created by air flowing over cars; these simulations often contain a terabyte of data. “For many years, supporting simulations that large was only possible sitting in front of high-end graphics workstations in our offices,” says Mayer.

Enabling Flexibility for Offsite Engineers

Over the past several years, Volkswagen Passenger Cars has increasingly needed to provide remote access to CAE applications. “Recently, all our engineers began working from home,” says Mayer. “We knew we needed to ensure reliability and high performance so they could have the same experience at home as they did in the office.”

Delivering Remote Streaming with NICE DCV

To meet its needs, Volkswagen Passenger Cars implemented NICE DCV, a technology from Amazon Web Services (AWS). NICE DCV is a high-performance remote display protocol for securely delivering remote desktops and application streaming from any cloud or data center to any device, over varying network conditions. Taking advantage of NICE DCV, the CAE engineers can run 3D CAE software remotely and stream the user interface to client machines, eliminating the need for dedicated office-based workstations. NI SP, a NICE DCV distributor, provides the first level of technical support to Volkswagen for its NICE DCV implementation.

The engineers use NICE DCV to remotely access CAE applications running in the company’s on-premises high-performance computing (HPC) cluster. Relying on NICE DCV, Volkswagen engineers have the flexibility to work either onsite or from home, easily completing simulations for new passenger car designs. “Our automotive engineers can reliably access their high-end Linux workstations and complete 3D simulations from home or other remote locations using NICE DCV,” Mayer says. “Using our enterprise VPN and a smart card, our users can connect from anywhere across the globe and perform their work using the same tools they would have in our offices. This gives us a level of flexibility we never had before.”

Streaming 3D Applications Remotely at a High Frame Rate

The company’s engineers are taking advantage of the improved NICE DCV streaming performance to experience smooth, responsive interaction with their remote CAE applications. The solution’s streaming protocol enables near-real-time responsiveness for the Volkswagen Passenger Cars 3D software while continuing to deliver accurate images. Using the new version of NICE DCV, engineers can now run applications remotely at the same frame rate as office workstations. “We expect our remote workers to experience 60 frames per second as well, and this will help increase our engineers’ productivity,” says Mayer.

Improving Security and Protecting Critical Data

Volkswagen Passenger Cars is enhancing its security capabilities by using NICE DCV encryption features along with its internal enterprise VPN solution. Because NICE DCV streams pixels instead of geometries, customer data stays private, and the solution secures both pixels and end-user inputs using the TLS protocol. “NICE DCV gives us encryption capabilities without sacrificing performance,” says Mayer. “Because the solution streams pixels, our engineers don’t have to physically download project data to their computers. If someone loses a laptop or has a hardware problem, the data is still saved.”

The company is also starting to run some NICE DCV–powered applications inside containers. “Our vision is to ultimately run all 100 applications in containers, so we can simplify IT management,” says Mayer. “Instead of changing each workstation image when we make updates, our IT team will be able to save time by centrally managing the lifecycle of all applications.” When Volkswagen Passenger Cars begins using NICE DCV for its containerized applications, the company’s engineers will be able to further boost productivity. “We plan to eventually move our application management to the cloud,” says Mayer. “Running in the cloud will enable our engineers to access as much compute capacity as they need, whenever they need it. We expect NICE DCV to give us more flexibility and scalability as we keep growing our business.”
"

Voucherify Case Study.txt,"Voucherify Lets Businesses Build Smarter Brand Loyalty Programs

We all love a bargain or a special offer, and we appreciate it when our loyalty to a brand is rewarded, even if we don’t think about the technology and enterprise working behind the scenes of loyalty programs. Poland-based Voucherify provides an API-first engine for such programs, giving marketers and developers a set of flexible building blocks to implement, manage, and track targeted promotional campaigns at any scale.

The business was started as a software company in 2013 by three engineers who wanted to provide companies with the tools to build their own outbound promotion programs and brand loyalty products. The developers had witnessed firsthand the trouble clients had with existing, non-developer-friendly services and saw an opportunity. The founders wanted to maintain control of the company and decided not to accept external funding; Voucherify has remained independent since its early days. Today, Voucherify’s customers number 350 businesses in Europe, APAC, and the US, and it has become a member of the MACH Alliance, a not-for-profit body that advocates for open, best-of-breed enterprise technology ecosystems.

As the company grew fast and picked up larger clients, it stopped to reconsider how it sourced and managed technology; if it wanted to serve its clients effectively, it needed to expand its technology setup. Voucherify understood that having a single, recognized cloud vendor was a big advantage in terms of regulation and compliance, as well as giving clients assurances over security. So, it chose to work with Amazon Web Services (AWS) and began to migrate its operations to the cloud.

Moving to AWS has opened Voucherify’s products to ISO 27001 certification and enterprise-grade features without spikes in IT costs, and the migration allowed for better resource utilization, reducing IT costs by 30 percent. Alongside the benefits of security and of EU General Data Protection Regulation (GDPR) and ISO 27001 compliance, having a single cloud provider like AWS means fewer vendor checks, more connectivity, faster access to new products, and simpler strategic planning, says Tomasz Pindel, chief executive officer and co-founder of Voucherify. “Using AWS, it’s easy to link everything together; you spend less time and effort on interoperability,” says Pindel. “None of these benefits are small things in business terms. They all make a real difference.”

Voucherify was pleasantly surprised and pleased at how hard the AWS account management team worked to optimize costs, even before signing an enterprise agreement, where discounts would have been part of the deal. “We felt we had friends and partners on our journey, not just suppliers,” says Pindel.

AWS is helping Voucherify follow its roadmap toward campaign automation and tighter targeting capabilities based on consumer behavior. Voucherify can now offer its clients increasingly sophisticated, rules-based loyalty benefits. For example, Voucherify clients can select customers who have been members of a particular program for a certain number of years, or promote special offers to those who have bought related products in the past, at a certain time of year, or in a specific region.
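As a purely illustrative sketch of such a rules-based selection (the member record, field names, and rule thresholds are invented for this example and do not reflect Voucherify’s API):

from datetime import date

# Invented member record for illustration only.
member = {
    "joined": date(2018, 5, 1),
    "region": "EU",
    "purchases": ["espresso_machine"],
}

def eligible(m, min_years=3, region="EU",
             related=frozenset({"espresso_machine", "grinder"})):
    """Select members with long tenure, in a region, who bought related goods."""
    years = (date.today() - m["joined"]).days / 365.25
    return (years >= min_years
            and m["region"] == region
            and bool(related & set(m["purchases"])))

print(eligible(member))  # True for this record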
AWS also ran graph database workshops to educate and train the Voucherify team. Graph databases are purpose-built to store and navigate relationships: relationships are first-class citizens, nodes store data entities, and edges store the relationships between them. New possibilities and ideas opened up following the training, and the company learned how to generate new insights and build a 360-degree view of end users to support future service improvements. “We’re looking at the future,” says Pindel. “More A/B testing, personalization, sophistication, and automation. We’re excited by the possibilities.”

AWS Services Used: Amazon EC2, Amazon MSK, Amazon RDS

About Voucherify

Founded by three IT professionals in 2013, Poland-based Voucherify provides API-first promotion software that gives marketers and developers the tools to implement, manage, and track targeted promotional campaigns at any scale. With AWS, Voucherify has built a business that runs personalized offers and loyalty campaigns at any scale, and AWS is supporting its move into the next phase of the digital economy: offering ever more sophisticated campaigns to its clients globally."

WaFd Bank Transforms Contact Centers Using Conversational AI on AWS _ Case Study _ AWS.txt,"WaFd Bank Transforms Contact Centers Using Conversational AI on AWS

WaFd Bank (WaFd) wanted to improve the customer experience in its contact center by innovating with conversational artificial intelligence (AI). Over the past decade, the banking industry has been disrupted by new embedded finance applications and digital-only banks. WaFd, like other traditional banks, needed to compete digitally to meet changing customer expectations.
Learn how WaFd Bank offered digital-first banking and improved customer and agent satisfaction using Amazon Lex.

After WaFd redesigned its online banking solution, the next step in its digital upgrade was its contact center. WaFd used Amazon Web Services (AWS) and Talkdesk, an AWS Contact Center Intelligence (CCI) Solutions Partner, to build a new contact center solution that implements conversational AI and voice identification technology. AWS CCI solutions bring machine learning into contact centers through intelligent chat and voice bots, voice sentiment analysis, live-call analytics and agent assist, and post-call analytics. Using the new AI-powered solution, WaFd has improved both the agent and customer experiences.

Benefits

• 90% reduction in the time to make an account balance inquiry
• 25% of call volume expected to be offset by using self-service bots
• Improved customer and agent experience
• Unified agent experience for managing voice and chat interactions in the contact center

About WaFd Bank

WaFd Bank is a US retail and commercial bank based in Seattle, Washington, with over 200 branches across eight states.

Opportunity | Using Amazon Lex to Implement an AI-Powered Contact Center Solution
In 2019, WaFd founded subsidiary Pike Street Labs, a fintech startup, to drive client-facing digital innovation for the bank. “Banks need to meet customers’ digital expectations,” says Dustin Hubbard, chief technology officer at WaFd Bank and Pike Street Labs. “Every year, customers expect more innovation because that’s what they see from new entrants or in other markets.” Pike Street Labs redesigned WaFd’s online banking solution to provide personalized customer experiences and then began tackling the bank’s customer care center. The company’s previous contact center solution used dated technology with limited features spread across disparate systems. This led to long wait times for customers and frustration for agents, who had to answer incoming calls without prior knowledge of what the customer needed. Agents also bore the burden of identifying fraudulent calls. WaFd needed a solution that improved both the customer and agent experiences.

Solution | Improving Customer Experience and Reducing Call Times by Up to 90%

The company began using Amazon Lex, a fully managed AI service with advanced natural language models to design, build, test, and deploy conversational interfaces in applications, to build chatbots and voice bots, and it wanted that same technology to power its contact center so that everything would run on one technology stack. In January 2022, WaFd selected Talkdesk as its new cloud contact center platform. The Talkdesk cloud-based contact center solution offers voice authentication and traditional call center routing, and Talkdesk integrated its platform with Amazon Lex for WaFd. “The Talkdesk cloud platform combined with conversational AI from AWS offered a comprehensive stack of contact center technologies that I wanted to use,” says Hubbard. “AWS does conversational AI really well, and its AI can understand a lot of different accents and speaking styles correctly.”

Previously, WaFd used two different systems in its customer care center to manage voice and chat-based customer interactions, with no way for one system to recognize that an agent was busy on the other; chat messages went unanswered because agents would forget to sign in to the chat system. With chatbots and voice bots powered by Amazon Lex, the call and chat systems are now interoperable, and chats can be escalated to agent-assisted calls when needed. When a call is passed to an agent, the system also passes the full chat record and an analysis of the customer’s tone, so the agent is prepared to address the client’s needs and be empathetic toward the caller’s sentiment.

WaFd worked with the AWS and Talkdesk teams, all three companies being customer obsessed, to launch its new contact center solution in July 2022. The AI-powered system uses voice biometrics to authenticate customers, who can choose to use their voices as identity confirmation, enabling voice-based banking and preventing fraudulent calls. The solution also auto-populates the caller’s account information so that agents have the information they need to resolve calls quickly, which has helped increase the volume of calls handled and improve the agent experience. Additionally, the system offers self-service virtual agents that handle certain client requests without involving a live agent. For example, customers calling to check their account balances, which represents 20 percent of WaFd’s calls, can do so with voice authentication and without waiting for a live agent. Customers can now check their account balances by phone in 28 seconds, a 90 percent reduction from the original 4.5 minutes.
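A minimal boto3 sketch of the kind of text interaction a Lex-powered bot handles; the bot IDs and utterance are placeholders, and WaFd’s production traffic flows through Talkdesk telephony rather than this direct API call:

import boto3

lex = boto3.client("lexv2-runtime", region_name="us-west-2")

# Send one customer utterance to a deployed Lex V2 bot.
response = lex.recognize_text(
    botId="ABCDEFGHIJ",          # placeholder bot ID
    botAliasId="TSTALIASID",     # placeholder alias ID
    localeId="en_US",
    sessionId="caller-12345",    # one session per caller
    text="What is my checking account balance?",
)

# The bot's reply and the matched intent come back in the response.
for message in response.get("messages", []):
    print(message["content"])
print(response["sessionState"]["intent"]["name"])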
Outcome | Innovating to Further Enhance Customer Experience

WaFd uses a data lake on AWS to store and analyze data from phone and chatbot conversations. “We’re getting incredible data from AWS through the conversational logs,” says Hubbard. “That has given us insights into what our customers are asking for so that we can add more self-service functionality.” The data also gives WaFd more insight into call volumes, so the call center can better manage staff schedules.

“I love being first to market with friction-reducing innovations,” says Hubbard. “WaFd is creating a digital-first banking experience with advanced AI capabilities. We’re showing the type of innovations that you can bring to banking if you put on your creative-thinking hat and apply existing technologies in new ways.” Because WaFd can handle more customer interactions using the AWS contact center solution powered by conversational AI, the bank expects its new solution to reduce agent call volumes by 30 percent. The goal is not to make it harder to reach a live agent but to increase self-service opportunities. In addition, the bank expects customer interactions to transition to lower-cost, higher-efficiency channels such as chatbots, potentially offsetting about 25 percent of its call volume over time. The bigger impact, however, is on customer satisfaction: as the bank has innovated over the past few years, WaFd’s net promoter score, a measure of how willing customers are to recommend the bank to others, has risen from 12 to over 50, which is considered excellent in the banking industry. This improvement is a testament to WaFd’s commitment to its customers.

WaFd is also linking its chatbots to voice using Amazon Polly, an AWS AI service that deploys high-quality, natural-sounding human voices, so that customers can talk with virtual agents using voice instead of text. Longer term, WaFd expects the combined conversational AI and Talkdesk solutions to save costs and decrease voicemail volumes and call-abandonment rates. These improvements will help increase customer satisfaction and elevate the employee experience by providing fast resolutions for many common banking transactions through self-service. WaFd sees this as just the beginning of its innovation journey with conversational AI and voice biometrics."

Wave Commerce case study.txt,"Wave Commerce Delivers Customized Ecommerce Solutions in Weeks with AWS Serverless

Wave Commerce is a Hong Kong-based ecommerce agency and solutions company helping businesses customize their direct-to-consumer (DTC) and omnichannel retail experiences with Shopify, a global ecommerce platform and ecosystem that helps businesses sell worldwide and direct to consumers. Wave Commerce speeds up development of bespoke ecommerce solutions on AWS Lambda, increasing developer productivity, automating server administration, and reducing infrastructure management.

As a Shopify Plus Partner and Shopify Expert, Wave Commerce applies its software development expertise to create advanced Shopify solutions and bespoke user experiences for its clients. Rolland Yip, cofounder and director of Wave Commerce, says, “Our clients need customized and tailored ecommerce experiences to meet their customers’ expectations, and to deliver these experiences quickly and cost-effectively. That’s where we come in.”
Opportunity | Speeding Up the Development of Bespoke Ecommerce Solutions

Wave Commerce has used Amazon Web Services (AWS) for over 10 years, running Amazon Elastic Compute Cloud (Amazon EC2) instances. However, as the business grew from a few customers to thousands worldwide, it sought to speed up development and streamline the provisioning and scaling of Amazon EC2 instances. “It could take months to build an application or set up a new integration to tailor an online store. We needed to get things done faster, so it was critical that we accelerate the management of compute resources for increased efficiency,” says Yip.

Solution | Adopting Serverless Technologies to Automate Management

To increase the speed of development, Wave Commerce adopted AWS serverless solutions, moving from Amazon EC2 to AWS Lambda, a serverless, event-driven compute service that runs code for virtually any type of application or backend service without provisioning or managing servers. Messages in Amazon Simple Queue Service (Amazon SQS) trigger AWS Lambda functions to adjust compute resources in line with developer requirements, so developers no longer need to wait for administrators to provision and scale servers. Wave Commerce moved 60 percent of the AWS environments supporting customers’ Shopify applications to AWS Lambda, successfully scaling during events like Black Friday and flash sales, where workloads can increase a hundredfold. William Chan, cofounder and director of Wave Commerce, says, “With AWS Lambda, we know compute resources will automatically scale when workloads increase, ensuring our applications don’t lose performance.”
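A minimal sketch of an SQS-triggered AWS Lambda handler of the kind this architecture relies on; the message format and processing logic are placeholders:

import json

def process(job):
    # Placeholder job processor for a queued ecommerce task.
    print(f"Processing {job.get('type')} for store {job.get('store_id')}")

def handler(event, context):
    # Lambda receives a batch of SQS messages in event["Records"].
    for record in event["Records"]:
        job = json.loads(record["body"])  # placeholder message format
        process(job)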
The company is exploring machine learning (ML) services, such as Amazon Personalize, which makes it easier to integrate personalized experiences into websites. Chan says, “AWS offers a wide range of ML offerings that can add value to our customers’ ecommerce strategies, and we plan to integrate these technologies into our services in the future.”

About Company
Wave Commerce is a Hong Kong–based ecommerce company helping emerging brands and enterprises build, launch, and deliver ecommerce transformation on Shopify—a global ecommerce platform and ecosystem that helps businesses sell worldwide and direct to consumers. It delivers a range of digital and ecommerce services including custom development, consulting, and marketing campaign execution, and is a leading Shopify Plus partner assisting a diverse range of clients in establishing ecommerce.

Opportunity
Wave Commerce has been using Amazon Web Services (AWS) for over 10 years, running Amazon Elastic Compute Cloud (Amazon EC2) instances. However, as the business grew from a few customers to thousands worldwide, it sought to speed up development and to streamline the provisioning and scaling of Amazon EC2 instances. “It could take months to build an application or set up a new integration to tailor an online store. We needed to get things done faster, and so it was critical that we accelerate the management of compute resources for increased efficiency,” says Yip.

Solution Overview
Wave Commerce uses AWS Lambda and Amazon SQS to reduce management, provisioning, and scaling tasks. As a result, the business reduced development time from months to weeks, offering faster, cost-effective customizations for its customers and delivering a more customized or localized ecommerce shopping experience. The company moved 60 percent of its AWS environments supporting customers’ Shopify applications to AWS Lambda, successfully scaling during events like Black Friday and flash sales, where workloads can increase a hundredfold. William Chan, cofounder and director of Wave Commerce, says, “With AWS Lambda, we know compute resources will automatically scale when workloads increase, ensuring our applications don’t lose performance.”

AWS Services Used
AWS Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. Amazon Simple Queue Service (SQS) lets you send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. Amazon Aurora provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services."

WebBeds uses Amazon EC2 Spot Instances to save its business amid a reduction in travel worldwide and reduce costs up to 64 percent. _ WebBeds Case Study _ AWS.txt,"WebBeds’ use of Spot Instances was the first part of its strategy for optimizing costs. Because its Reserved Instances were expiring at the end of October 2020, WebBeds decided that migrating entirely to Spot Instances made the most sense financially. “We needed to reduce our spending, so we went all in on Spot Instances. We had the same discount as with Reserved Instances but with zero cash out,” says Perez Salazar.
The second part of the company’s cost-reduction strategy was optimizing the interzone traffic in its search engine, and the third was using AWS Graviton2 processors, which are designed by AWS to deliver optimal price performance for cloud workloads running in Amazon EC2. WebBeds reduced costs by 55 percent on its Windows-based instances and 64 percent on its Linux-based instances, and it expects lower maintenance costs overall with these changes. Another benefit of using AWS Graviton2 processors was a 40 percent improvement in CPU performance. “If we have a lower cost per search, it means that we are wasting less energy to process a request using AWS Graviton2 processors,” says Perez Salazar. This reduction in energy use has improved sustainability for WebBeds: using its fault-tolerant search engine and the capabilities of Spot Instances, the company can reduce the sustainability impact of unused resources while lowering costs. With cost savings and scalability on AWS, WebBeds continues to grow beyond where it was before the COVID-19 pandemic, processing more searches per day than ever before. As of August 2022, WebBeds is the top user of Spot Instances and AWS Graviton instances in the Iberian Peninsula, and the company is using these services to drive its infrastructure in a sustainable way.

WebBeds Reduces Costs up to 64% Using Amazon EC2 Spot Instances and AWS Graviton

At the beginning of the COVID-19 pandemic, WebBeds’ searches fell by 95 percent, and it needed to scale down to match the lower customer usage and to cut costs to cope with the significant loss in business. Amid these changes, WebBeds needed a change in mentality to develop innovative solutions in a brief period. “When the COVID-19 pandemic struck and travel stopped, it was critical to scale down our platforms rapidly, which we were able to do being a 100 percent cloud-hosted company,” says Malik. Working alongside the AWS team, WebBeds determined it could make its servers more available if it diversified its instance types, and it created its own solution to determine which instance type to use for each workload. This flexibility and agility helped WebBeds begin growing again quickly, and the company had zero wasted compute when using Spot Instances. Following AWS advice, WebBeds started using Spot Instances to save costs and adapt to new scalability requirements; Spot Instances have saved the company around 64 percent compared with On-Demand prices.

Outcome | Building for a Future of Innovation
“The partnership that the AWS team provides is critical to help us adopt and implement innovative technology and solve complex problems with simple solutions,” says Malik. The company’s goal is to optimize its search engines using machine learning technologies, like Amazon SageMaker, which a business can use to build, train, and deploy machine learning models for virtually any use case with fully managed infrastructure, tools, and workflows. By using AWS services, WebBeds plans to innovate and improve its search engines to provide a better product for its consumers. “Because we can delegate day-to-day maintenance of services to AWS, we release resources on our side to work on innovation,” says Perez Salazar. “In doing so, we can come up with new products that have a positive impact on the business.”
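The case study does not detail WebBeds’ in-house instance-selection solution, but a common way to diversify Spot capacity across instance types is an Amazon EC2 Auto Scaling group with a mixed instances policy. The boto3 sketch below is illustrative only; the group name, launch template, subnets, and instance types are assumptions, not WebBeds’ configuration.

```python
import boto3

# Auto Scaling client; the region is an assumption for this sketch.
autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Create a group that draws all capacity from the Spot market and
# diversifies across several instance types, including Graviton2-based
# ones, so capacity is easier to obtain and interruptions are spread out.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="search-engine-spot",        # hypothetical name
    MinSize=2,
    MaxSize=50,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",  # placeholder subnets
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "search-engine",  # hypothetical template
                "Version": "$Latest",
            },
            "Overrides": [
                {"InstanceType": "c6g.2xlarge"},  # AWS Graviton2-based
                {"InstanceType": "c5.2xlarge"},
                {"InstanceType": "m5.2xlarge"},
            ],
        },
        "InstancesDistribution": {
            # 0 percent On-Demand above the baseline: everything runs on Spot.
            "OnDemandPercentageAboveBaseCapacity": 0,
            # Prefer the Spot pools with the deepest spare capacity.
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```

Fault-tolerant workloads like a stateless search tier suit this pattern because interrupted instances can be replaced from another pool without losing work.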
Benefits
• 64% reduction in costs
• 40% improvement in CPU performance
• Increased overall stability
• Reduces energy waste
• Supports sustainability

“We needed to reduce our spending, so we went all in on Amazon EC2 Spot Instances. We had the same discount as with Reserved Instances but with zero cash out.” — Gabriel Perez Salazar, Engineering Director, WebBeds

About WebBeds
WebBeds is one of the world’s leading providers of accommodation distribution to the travel industry. It has offices in over 30 countries and sells to customers all over the world. The company launched in 2013 and has since built a significant global distribution network. WebBeds is a division of Webjet Limited and sources content from different travel suppliers. The company connects, aggregates, and merchandises that content within the WebBeds Marketplace, and it distributes the content to its global network of travel-trade clients, who then sell to the traveling public.

Overview
Global travel intermediary WebBeds needed to make drastic changes in the wake of instability in the travel industry brought on by the COVID-19 pandemic. With a 95 percent loss in online traffic due to the onset of the pandemic and of travel restrictions, WebBeds needed to save money and react swiftly to the changing needs of its business-to-business users. Already a customer of Amazon Web Services (AWS), WebBeds had been using Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instances, which provide a significant discount compared with Amazon EC2 On-Demand Pricing. The company had a minimum commitment of spend and time on instances that it was not using during the pandemic. To adapt to shifts in the industry, and following AWS advice, WebBeds decided to use Amazon EC2 Spot Instances to run fault-tolerant workloads for up to 90 percent savings. WebBeds used this purchasing option to make use of unused Amazon EC2 capacity, providing deeper savings than Reserved Instances along with pay-as-you-go pricing, so that the company was consuming only what it needed. The company also decided to explore AWS Graviton instances, with ARM-based architecture, for the best price performance in Amazon EC2. Using both Spot Instances and AWS Graviton instances, WebBeds saved significantly on costs, improved its ability to scale up and down with the unpredictable nature of the travel industry, and increased its sustainability.

AWS Services Used
AWS Graviton processors are designed by AWS to deliver the best price performance for your cloud workloads running in Amazon EC2. Amazon EC2 Reserved Instances (RIs) provide a significant discount (up to 72%) compared to On-Demand pricing and provide a capacity reservation when used in a specific Availability Zone. Amazon Elastic Compute Cloud (Amazon EC2) offers the broadest and deepest compute platform, with over 500 instances and choice of the latest processor, storage, networking, operating system, and purchase model to help you best match the needs of your workload.

Opportunity | Adapting to the Unpredictable Travel Industry
Prior to the COVID-19 pandemic, WebBeds had been using Amazon EC2 Reserved Instances.
Solution | Going All In on Amazon EC2 Spot Instances
WebBeds pivoted to running its search engine entirely on Spot Instances to optimize its use of Amazon EC2 instances in the wake of the reduction in business during the COVID-19 pandemic. “There is a large consensus of industry experts who agree that the travel industry was one of the hardest hit by the COVID-19 pandemic, with cost management being a key for survival,” says Mohammed Malik, chief information officer at WebBeds. WebBeds had used Spot Instances in 2017 to help scale with the business’s growth at the time, and with the support of AWS, it went live with the new changes after only a few weeks of work. Additionally, WebBeds faced the challenge of adjusting to changes in consumer behavior. The company needed to adapt to meet these unpredictable business demands, and it decided Spot Instances were more elastic for its business purpose than Reserved Instances. “The AWS team is working alongside us all the time, sharing new services and facilitating cost savings in different places,” says Gabriel Perez Salazar, engineering director at WebBeds. WebBeds, a division of Webjet Limited, is a large and fast-growing business-to-business travel intermediary that sells to customers all over the world. Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud."

What Will Generative AI Mean for Your Business_ _ AWS Cloud Enterprise Strategy Blog.txt,"AWS Cloud Enterprise Strategy Blog
What Will Generative AI Mean for Your Business?
by Mark Schwartz | on 30 JUN 2023 | in Amazon SageMaker, Artificial Intelligence, Best Practices, Thought Leadership
It won’t surprise you to hear that there’s been lots of excitement and speculation about generative AI in our meetings with AWS customer executives lately. The question on their minds is: “What does this mean for my business?” That’s a good way to frame the question; it’s not about what generative AI can do, but what it can do for your business. And the seeds of the answer are there in that framing as well. How generative AI will affect your business depends on how you and your competitors will use it to innovate new business models and derive new competitive advantages. It’s not about what the technology itself does—exciting as that is—but about how you will combine it with other technologies, your people’s skills, your values and competencies, and your distinctive vision. It is a question of how to manage innovation in your company, which is not a new question. Generative AI, which has powers we haven’t even conceived of, joins other technology-influenced ways of solving business and mission challenges, ways of imagining the future, and technological tools like IoT, analytics, and the many services AWS offers for innovating new products and operating with excellence. The IT world has often made the mistake of confusing technologies with business models. What you will gain (or lose) from generative AI depends on the innovative uses you and your competitors find for it. The important questions—and ones that require some thought—are how to innovate with generative AI, scale with it, incorporate it into business models, and manage its risks. With that in mind, the AWS approach to generative AI becomes clearer.
As with other AWS services, our emphasis has always been on helping our customers drive their businesses forward—not just producing technical capabilities but helping our customers use those capabilities to be more successful. That’s what we mean by being “customer obsessed.” We speak of democratizing AI: making it so easily available that it can become part of an enterprise’s normal cycles of experimentation, learning, understanding customers’ needs, and building business capabilities. Let’s look at generative AI from the standpoint of business innovation and excellence.

This Is Exciting
Generative AI, along with whatever grows out of it, appears to be the next big thing to transform how we do business. In the big-picture view, recent advances in generative AI show us that extremely large foundation models are both practical and powerful and can be fine-tuned rather easily to accomplish important tasks. This is somewhat surprising. Even those at the cutting edge of AI research weren’t sure until recently how convincing the natural language content generated by even an extremely large model could be, let alone how large such a model would have to be. And there are emergent behaviors of large language models that are surprising and whose implications aren’t yet clear. Language is not the only field that might be amenable to foundation modeling—foundation models of amino acid sequences can be used to engineer new proteins for use in healthcare, models based on financial markets can inform financial applications, and stable diffusion models can create images. The unexpected emergent behaviors of very large language models go well beyond language manipulation. Generative AI will change how we think about solving a broad range of business and mission challenges. Innovating with generative AI is more than just finding uses for chatbots!

Sustainable Competitive Advantages
Businesses using generative AI will want to build sustainable competitive advantages. To do so, they must combine generative AI with resources that are unique and proprietary (or defensible). The large language models used by text-based services like ChatGPT are a type of foundation model (FM), a pretrained model that—in GPT-4’s case—contains hundreds of billions of parameters. Most companies will be unable to create their own FMs, as doing so requires tremendous resources and expertise. They will therefore need to use FMs from external providers, the same FMs available to their competitors and future disruptors. Sustainable competitive advantage can’t come just from using generative AI—if you can add a chatbot to the front end of your application, your competitors can as well. Your long-term advantages will come from how you fine-tune the FM, what proprietary data you add or use to train the model, or how you integrate the generative AI into business processes that are truly unique to your company. While the FM itself might not be unique to your company, you do have plenty of data that is unique: data about your customers, their prior transactions, sensors you own or control, and your research. Some of that data can be used to fine-tune the FM, derive prompts for your generative AI applications, build your own models, or simply create applications in conjunction with the FM. Amazon Bedrock allows you to use your proprietary data with an FM in a secure way that keeps your proprietary data private. This allows you to focus on managing the quality of your data and finding unique ways to use it to build differentiated services and competitive capabilities.
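As a purely illustrative sketch of what calling a foundation model through Amazon Bedrock can look like, here is a minimal boto3 example that sends a prompt enriched with proprietary context to a model. The region, model ID, prompt, and request-body format are assumptions for this sketch (the body schema varies by model provider); a production application would manage its proprietary data far more carefully.

```python
import json
import boto3

# Bedrock runtime client; the region is an assumption for this sketch.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# A hypothetical prompt that injects proprietary context the base FM
# has never seen—one simple way to differentiate on your own data.
prompt = (
    "Given the customer's last three orders (hiking boots, trail socks, "
    "a rain shell), suggest one complementary product and explain why."
)

# The model ID is illustrative; Bedrock offers FMs from several providers,
# and each provider defines its own JSON request and response schema.
response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=json.dumps({"inputText": prompt}),
)

result = json.loads(response["body"].read())
print(result)  # the generated text is inside the provider-specific payload
```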
Incorporating generative AI into your company’s distinctive ways of providing value to your customers is an integration task; generative AI must be continuous with your everyday IT applications. With your other business applications running in it, the cloud can provide integration capabilities through tools like Amazon API Gateway, analytics services, data lakes, and asynchronous movement of data. And you’ll want your authentication and authorization policies to be consistent across all your IT capabilities, including generative AI. The AWS approach to generative AI is to support our customers in building sustainable competitive differentiators, not just implementing new and exciting technology.

Management of Innovation
Enterprise leaders often mistakenly assume that becoming more innovative is a matter of getting employees to have more ideas. In truth, employees usually have plenty of ideas, especially those who work closely with customers. The challenge of innovation is to execute those ideas, to give them a chance to show that they can be effective. By definition, innovative ideas are necessarily risky because they are new and unproven. The key to managing innovation is to reduce the risk of innovation and then adjust governance processes to allow more freedom given the lower risk. It is here that the cloud has always excelled. An employee can quickly spin up infrastructure to test an idea, then discard the infrastructure and stop paying for it if the idea doesn’t work, or quickly change the infrastructure if needed. An employee can inexpensively and quickly build functionality by combining AWS’s many high-level services as building blocks and integrating them through serverless functions—or stop using them if they discover a better way. For example: instead of spending years building image recognition capabilities, they can obtain them off the shelf with Amazon Rekognition and stop using and paying for those services if the new ideas don’t prove themselves. Because the cloud dramatically reduces the cost and risk of trying innovative ideas, it allows companies to consider ideas they would have previously rejected. With generative AI in the cloud, companies can combine it with other building-block services to test the new ideas stimulated by generative AI’s capabilities—at lowered risk and cost. Again, it’s not just a matter of testing generative AI’s capabilities but of embedding them in business processes that must be tested. Critically, Amazon Bedrock allows employees to innovate with different FMs. The initial release of Bedrock supports models from AI21 Labs, Anthropic, Stability AI, and two Amazon Titan models. Each of these is designed to specialize in certain types of applications. Employees testing new ideas can choose the FM that best supports their intentions or try several and compare. The AWS approach to generative AI is amenable to good practices for managing innovation and stimulating innovation in business processes.
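To make the off-the-shelf Amazon Rekognition example above concrete, here is a minimal boto3 sketch of image-label detection. The bucket and object names are placeholders; point the call at an image you own to try it.

```python
import boto3

# Rekognition client; the region is an assumption for this sketch.
rekognition = boto3.client("rekognition", region_name="us-east-1")

# Detect labels in an image stored in Amazon S3—image recognition
# obtained off the shelf rather than built over years.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-experiment-bucket", "Name": "shelf.jpg"}},
    MaxLabels=5,
    MinConfidence=80.0,
)

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```

If the experiment doesn’t pan out, there is nothing to decommission: stop making the calls and the cost stops with them, which is exactly the low-risk property described above.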
Responsive Agility: Keeping Pace
Although sustainable competitive advantage is a critical goal, companies can also use generative AI simply to improve how they serve their customers. When its customers’ needs change, a company that has learned the techniques of agility in the cloud can respond nimbly. And as generative AI evolves—and it surely will—companies can use that agility to incorporate new features and build new applications. As competitors release new capabilities, enterprises need to respond quickly to match them. As with other IT capabilities, companies must learn agility with respect to generative AI. Companies have been learning agility over the last few decades, and the same considerations will apply as they begin to incorporate generative AI. How can they sense the need for change? Deliver incrementally and quickly? Govern investments to move quickly into execution and juggle requirements with shifting priorities? The cloud (and contemporary practices like DevOps) are the keys to building agility and speed.

Operationalizing Generative AI
IT leaders will quickly recognize that using generative AI is not simply a matter of coming up with an idea and rolling it out. Like other technologies, it must be operationalized effectively, and the challenges of doing so are well known to IT practitioners. Off the top of my head: AI applications and models must have reliable deployment processes, be version-controlled, be tested, and meet compliance requirements. Users must be authorized, interfaces to other systems must be built, and helpdesk services must be available. Applications must be secured. There are ethical issues to address, and guardrails must be implemented. Generative AI must become part of a business’s overall technical operations. The cloud excels in streamlining IT operations; AWS’s broad selection of services and the automation the cloud supports will be critical to making generative AI applications reliable, resilient, secure, and efficient. In particular, Amazon SageMaker is designed to make operationalizing AI applications easier. Among other features, it supports and automates governance processes, provides a centralized catalog for machine learning artifacts, integrates machine learning applications into automated testing and deployment (CI/CD) pipelines, and monitors data and models as they’re being used to ensure their quality. Speaking of efficiency, when generative AI applications become part of a company’s core business processes, cost becomes an important factor. While AWS Inferentia and AWS Trainium chips are specially designed to cost-effectively train and deploy AI models, the entire suite of cloud services and the cloud’s ability to scale up and down seamlessly will likely play a critical role in managing the costs of whatever innovations companies develop.

Expressing Values
With AI, addressing ethical concerns and ensuring compliance with applicable frameworks is critical. Because Amazon Bedrock is based on a choice of FMs, AWS customers can choose the FMs that best fit their compliance needs and corporate values—even as those needs evolve. They can take advantage of AWS AI Service Cards, which provide transparency into how individual AWS services address and influence fairness and bias, explainability, privacy and security, robustness, governance, and transparency. Responsible AI, like the responsible use of other digital techniques, involves cultural change as well as governance processes. Governance processes establish guardrails and are critical. But the everyday activities of employees are guided by corporate culture, and building a culture of responsible AI use is a new frontier in the leadership of transformation.
In my upcoming book, I suggest that ethics in digital transformation is not just a matter of rules and compliance; it’s better thought of as a way that companies express their values, and it can even be a business advantage. Consumers today make spending decisions based on the values their vendors demonstrate; employees choose where to work based on prospective employers’ values. There is room for enterprises to go beyond compliance and industry frameworks to formulate an ethical vision and build it into their culture and operations. Generative AI, and AI in general, is a place where the company’s ethical vision comes to the surface—compared to, say, ERP systems and logistics.

Conclusion
Generative AI is a powerful new technology. But for AWS customers, it is more than that—it is a way to achieve business objectives and formulate new business goals. It is less a question of what the technology can do and more a question of how businesses will innovate to make it part of the value delivery to their consumers in ways that give them a competitive edge. This is the lens through which AWS’s approach to generative AI should be viewed.

TAGS: Agility, Artificial Intelligence, Best Practices, Business Value, Innovation

Mark Schwartz is an Enterprise Strategist at Amazon Web Services and the author of The Art of Business Value and A Seat at the Table: IT Leadership in the Age of Agility. Before joining AWS he was the CIO of U.S. Citizenship and Immigration Services (part of the Department of Homeland Security), CIO of Intrax, and CEO of Auctiva. He has an MBA from Wharton, a BS in Computer Science from Yale, and an MA in Philosophy from Yale."

Which Recurring Business Processes Can Small and Medium Businesses Automate_ _ AWS Smart Business Blog.txt,"AWS Smart Business Blog
Which Recurring Business Processes Can Small and Medium Businesses Automate?
by Chintan Patel, Arindam Chatterji, and Pratik Kaneriya | on 23 JUN 2023 | in Amazon Personalize, Amazon QuickSight, Amazon Timestream, Customer Solutions, Thought Leadership
What are the major challenges facing your company’s latest digitization project? Perhaps you’re operating on a patchwork of systems and software that pre-date your time. Maybe your teams are concerned that any updates might jeopardize the existing IT system and that small inefficiencies are worth the price of not modernizing. But the outdated, resource-intensive business procedures that leave little time for strategy or innovation have a far bigger cost: they hold back the growth of your business. Utilizing game-changing technologies to automate current company operations helps free up resources and improves efficiency. Automation is transforming industries globally, and it is bringing substantial benefits to businesses and economies worldwide. Realizing automation’s full potential requires people and technology to work hand in hand. These technologies can bring a range of benefits to small and medium businesses (SMBs) such as yours, which have more growth ambition than existing IT can support. According to Statista, global digital transformation spending is projected to reach $1.8 trillion USD within three years. Why?
Because automation simplifies business processes by taking repetitive tasks and automating them for you, so you can focus on what truly matters.

What is business process automation and how can it help SMBs?
Business process automation (BPA) is simplifying and performing repetitive tasks through the use of advanced technology, with limited human intervention. BPA isn’t about replacing the human element in your business; it’s about enhancing it. The technology is great at various computational and daily tasks, and automating them frees SMB employees to use their talents to innovate in other areas of the business. SMBs are adopting automation in various domains by replacing repetitive, rule-based processes associated with inventory management, employee management, customer engagement, sales, and marketing. If one of your business apps sends you scheduled reports, you’re already using a basic example of automation. BPA can greatly benefit businesses by reducing the need for manual effort and increasing employee efficiency. With the surge in inflation and unpredictable market conditions, SMBs have to manage resources efficiently. Automating recurring tasks can significantly reduce operational costs and improve productivity and accuracy, resulting in an extraordinary customer experience. At Amazon Web Services, we know SMBs are so unique that we cannot apply a one-size-fits-all approach to automation. The key is to improve efficiency periodically so that you can allocate resources to modernize step by step. Incremental improvement is a goal and should be celebrated as much as bigger milestones. With the variety of automation available, businesses can select what best suits their needs and goals.

Benefits of Automation for SMBs
When you work for a small company, it’s common to have many roles. We’re hearing there are continued issues hiring qualified talent, or difficulty providing growth opportunities for existing employees. Automation offers your company several helpful benefits:
• Improve productivity, accuracy, and efficiency by limiting the need for human intervention
• Improve end-to-end process visibility across teams
• Facilitate business growth by enabling delivery of a seamless customer experience
• Scale business operations by assigning teammates to more strategic business initiatives
• Improve the ability to respond to changes in the competitive market
• Streamline critical business processes to improve consistency and security
• Reduce operational costs

Five business tasks SMBs can begin to automate
Marketing customization: Sheer creativity and other nuances are best left to humans. However, one of the major challenges in promoting products or services for SMBs is the ability to analyze huge amounts of customer data. SMBs have limited resources to interpret that data and use it for their own benefit, so AI can help them analyze it to make data-driven marketing decisions and create personalized promotions. Marketing automation can play a pivotal role in expanding business outside of local markets and in extending the lifetime value of the customer base. AI services like Amazon Personalize can use your data to understand customers and provide curated recommendations.
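As a small, illustrative example of the curated-recommendation call mentioned above, here is a boto3 sketch against the Amazon Personalize runtime. The campaign ARN, user ID, and result handling are placeholders; a real deployment first trains a campaign on your own interaction data.

```python
import boto3

# Personalize runtime client; the region is an assumption for this sketch.
personalize = boto3.client("personalize-runtime", region_name="us-east-1")

# Fetch curated product recommendations for one shopper. The campaign ARN
# and user ID below are hypothetical placeholders.
response = personalize.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/storefront",
    userId="user-42",
    numResults=5,
)

# Each item ID maps back to a product in your own catalog.
for item in response["itemList"]:
    print(item["itemId"])
```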
Attendance tracking: SMBs such as staffing companies and manufacturers rely on people to generate revenue for their products, yet most still rely on paper-based time cards, which are prone to mistakes, inaccuracies, and deliberate manipulation. AI-powered solutions, like biometric systems or facial recognition (powered by Amazon Rekognition), provide automated and reliable attendance tracking; a minimal face-comparison sketch appears after this list of tasks. BROJ is an SMB customer based in South Korea that develops fitness management solutions. During the height of COVID-19, BROJ developed “BROJ No Touch,” a service for managing access control and attendance with Rekognition.

Safety monitoring: Automation has given industrial businesses an essential tool for increasing productivity. However, safety risk becomes a major concern as industries push machines to test their potential. While pushing these boundaries, safety should never be overlooked and must always be the foremost concern: people are essential and must be protected. Innovation in technology is helping to improve safety in working environments so that productivity and quality do not deteriorate, and implementing safety measures in automated systems ensures that potential risks to employees are prevented. Using AWS, Solaris built a real-time monitoring application for oilfield equipment. Solaris automated and integrated data ingestion by using Amazon QuickSight and Amazon Timestream to create a safer, more efficient, and more cost-effective solution for customers.

Customer support: All businesses value their customers and are committed to offering the best service and support possible, but SMBs operate with limited technical resources, which can result in delayed responses. These constraints can negatively impact business and customer satisfaction. Automation can help manage customer issues through self-service at lower cost compared with engaging customer service representatives. With Amazon Connect, you can set up a contact center in minutes that can easily scale up or down to meet customer demand, and you can improve agent productivity and customer experience across voice and digital channels with the all-in-one, AI- and ML-powered contact center. For example, chatbots are becoming popular in customer support: the chatbots on ecommerce websites can quickly and inexpensively handle lower-tier customer queries, saving human service representatives for more complex problems. Another example is ticket assignment. Regardless of the problem category, support tickets can be routed to the correct team promptly; this automation saves employees from having to sort through emails, messages, and chats to find and answer high-priority questions.

Reporting important business metrics: Automation can help businesses increase efficiency by automating reporting on critical business metrics. Automated reports reduce data entry time and provide real-time data on all important key performance indicators (KPIs). Businesses can design automated reports with revenue data, marketing effectiveness, customer success rates, and more. Real-time reporting allows all KPIs to be tracked effortlessly and can be especially helpful when it comes time for SMBs to prepare their annual reports. In addition, automated reports allow SMBs to more easily make data-driven decisions about how to optimize their product offerings, marketing campaigns, and customer satisfaction moving forward. Amazon QuickSight powers data-driven organizations with unified business intelligence. All users can work from a single source of truth through modern interactive dashboards, paginated reports, embedded analytics, and natural language queries.
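Following up on the attendance-tracking item above, here is a minimal, illustrative boto3 sketch of face comparison with Amazon Rekognition. The bucket, object keys, and threshold are hypothetical, and this is not BROJ’s implementation—just a sketch of the underlying API call.

```python
import boto3

# Rekognition client; the region is an assumption for this sketch.
rekognition = boto3.client("rekognition", region_name="ap-northeast-2")

# Compare a check-in photo against an employee's enrolled reference photo.
# Bucket and key names are placeholders for this sketch.
response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "hr-photos", "Name": "enrolled/kim.jpg"}},
    TargetImage={"S3Object": {"Bucket": "hr-photos", "Name": "checkins/today.jpg"}},
    SimilarityThreshold=90.0,
)

if response["FaceMatches"]:
    similarity = response["FaceMatches"][0]["Similarity"]
    print(f"Match at {similarity:.1f}% similarity - attendance recorded")
else:
    print("No match - flag for manual review")
```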
Best practices for implementing business process automation
BPA isn’t about removing the human element from your business; it’s about enhancing it. When done correctly, BPA is more of a partnership: the machines get to do what they’re great at, freeing up your employees to use their talents to innovate in other areas of the business. One common misconception about BPA is that it takes weeks, or even months, to implement. Here are a few guiding principles to use as you embark on your BPA journey:
• Understand the opportunity and plan ahead: Start taking advantage of automation and AI by assessing the opportunity, identifying the high-impact use cases, and laying out the capability (and governance) groundwork.
• Identify short- and long-term automation goals: Identify quick tactical wins to automate activities with the highest potential and radiate out. In parallel, lay out a long-term vision.
• Determine total cost of ownership (TCO): Review whether the TCO for the selected automation solution is justified against budget constraints. Evaluate the need for business process outsourcing and service integration partners.
• Redefine processes and manage organizational change: Reevaluating processes and taking an end-to-end view are necessary to capture the value of automation.
• Integrate technology into business functions: Integrate AI and other advanced technologies into the functional model to create transformative impact and long-lasting value. Create a culture of collecting and analyzing data to make informed decisions, and build the muscle for continuous improvement.

Next steps
Emerging technologies are transforming how SMBs operate, enabling them to keep up with larger organizations while maintaining the agility that makes them unique. Businesses that embrace these technologies now will be well placed for long-term success and growth. By leveraging automation with business intelligence and AI, SMBs will gain a competitive edge and secure their future in the global economy. Connect with an AWS expert to figure out what you need to do to make BPA successful. Unlock innovation by discovering AWS Marketplace and the AWS Partner Network (APN) to leverage trusted software vendors and AWS Certified partners who can help add automation solutions to your business.

Chintan Patel is a Solutions Architect at AWS with over 16 years of IT experience in various roles, including AI/ML Consultant, Software Development Engineer, Technical Lead, and Project Manager. He has extensive experience working with customers from multiple domains to design and implement technical solutions that accelerate business growth. Chintan is based in Virginia (US). Arindam Chatterji is a Senior Solutions Architect at AWS. Before joining, he worked in technical roles at large corporations such as Wipro Limited and IBM. He is passionate about helping SMBs transform their companies in the cloud. Arindam is based in Georgia (US). Pratik Kaneriya is an AWS Solutions Architect with over five years of database administration experience. He is driven by solving SMB challenges and committed to achieving best practices within a multifaceted environment. Before joining AWS, he was an Advanced Database Administrator at 3M. Pratik is based in Massachusetts (US).
"
Windsor.txt,"Windsor Brokers Scales Its Real-Time Trading Platform and Improves User Experience Using AWS

Windsor Brokers provides an online trading platform that allows real-time trades in more than 200 financial instruments across nine asset classes. The company’s mobile and desktop apps are used by customers in 80 countries. Windsor Brokers planned a migration to AWS to improve its platform performance and the user experience of its services. The company consolidated its business applications and more than 170 servers onto AWS using the AWS Migration Acceleration Program (MAP). The company’s platform is now responsive at all times, supports near-real-time trading, and automatically scales to meet demand. In addition, its developers are more productive because they can focus on product development and innovation instead of infrastructure maintenance.

Benefits
• 170+ servers in controlled environments migrated to AWS
• 8 months to complete the migration using AWS MAP
• Near-real-time, low-latency access enabled for traders
• Innovation unlocked for the IT and development teams

Solution | Windsor Brokers Uses AWS MAP to Ease Migration of 170+ Servers
Founded in 1988 and licensed by the Cyprus Securities and Exchange Commission (CySEC), Windsor Brokers wanted to make sure its customers could trade in near real time on its platform. Traffic spikes are unpredictable in the sector and can affect a trader’s ability to place an order. Spikes generally occur during times of high volatility, which are often caused by breaking news that could affect the value of a company or currency, or by new developments in global politics. Windsor Brokers worked with Cloud Nomads to migrate its business applications and more than 170 servers to AWS. The company chose to use AWS MAP—a cloud migration program that AWS developed using its experience of migrating thousands of enterprise customers to the cloud. “Between the MAP program and our technical partner, we got the automation tools, documentation, training, and financial incentives that made it easy to build a successful business case to migrate to AWS,” says Petsas. The project took about 8 months. Petsas notes that these kinds of projects are usually very time-consuming and come with high risk. “The migration went smoothly as planned. Using AWS, we’ve hit our targets,” he says. In addition to providing an improved customer experience through better performance, migrating to AWS has helped the Windsor Brokers IT team be more productive. Today, the company’s IT and development teams no longer spend time managing multiple infrastructures and instead focus on product development. Part of the project also included organizing comprehensive training packages for the company’s DevOps teams, cloud architects, IT security professionals, project managers, and quality assurance testers.

Outcome | Migration Adds Value and Creates New Opportunities
The migration went as planned and the project achieved its goals. The company is now looking at how it can make better use of its data. As a next step, Windsor Brokers plans to migrate its data warehouse and use analytics to generate business insights. “The future looks bright,” says Petsas. “It’s great to have worked with AWS and a local technical partner. They have shown us commitment and they feel like part of our team for the long term.”
Opportunity | Consolidating Infrastructure on AWS Simplifies Scaling and Reduces Latency
Online trading is an extremely competitive market, with users demanding absolute reliability and trust from their chosen trading partner. Windsor Brokers operates a number of platforms that allow real-time trades in nine asset classes and more than 200 financial instruments. The company’s customers can use apps that run on both mobile and desktop. Windsor Brokers is based in Cyprus and serves customers in 80 countries. The global financial markets function in near real time, and for Windsor Brokers’ clients to take part, they also need to execute transactions and exchange information at the same speed. That means low latency is critical for the company. It’s vital that the trading platform is always available and responsive. “Our customers need to be able to manage their accounts and trade efficiently when they want—from anywhere they want—with their platform of choice,” says Leonidas Petsas, IT operations manager at Windsor Brokers. “That is our core competency and a competitive advantage.” Windsor Brokers’ infrastructure was spread across several cloud providers, colocation sites, and an on-premises data center. In 2020, it decided to consolidate its infrastructure and migrate to AWS. The company wanted to simplify its framework and remain competitive. Windsor Brokers chose to consolidate its workloads on Amazon Web Services (AWS) and worked with AWS Partner Cloud Nomads to run the AWS Migration Acceleration Program (MAP). The company completed its migration in 8 months, and its applications and trading platforms now scale automatically to match customer demand. Using AWS services, Windsor Brokers’ developers can focus on innovation and emerging technologies rather than managing infrastructure and in-house computing systems. Using AWS, Windsor Brokers’ infrastructure now scales with demand. The company uses Amazon Elastic Compute Cloud (Amazon EC2) to provide secure and resizable compute capacity for virtually any workload. It also uses AWS Elastic Beanstalk to deploy and scale web applications, and Elastic Load Balancing (ELB) to distribute network traffic and improve application scalability. The company also uses AWS Regions and Availability Zones, which are data center clusters spread globally. “Regions and Availability Zones strengthen our system’s availability and save us time,” says Petsas. “We can enter new markets quickly and get the performance we need.”

“The migration went smoothly as planned. Using AWS, we’ve hit our targets.” — Leonidas Petsas, IT Operations Manager, Windsor Brokers

AWS Services Used
Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud. Amazon EKS is a managed Kubernetes service to run Kubernetes in the AWS cloud and on-premises data centers. Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud.
Access reliable, scalable infrastructure on demand, and scale capacity within minutes with an SLA commitment of 99.99% availability. Windsor Brokers was determined to provide the best possible user experience for its trading customers. To achieve that, the company needed to minimize latency and maximize the performance of its platform. Its customers needed to access information and execute transactions in near real time, regardless of their geographical location. “In finance, platform availability is critical and every millisecond counts,” says Petsas. “That’s why we now use a cloud provider like AWS—it means our customers can depend on the reliability of our products and services and help keep us competitive in this dynamic market.”"

Wireless Car Case Study _ AWS IoT Core _ AWS.txt,"WirelessCar Connects Millions of Vehicles to the Cloud Using AWS IoT Core

WirelessCar provides digital services for connected cars, including in the areas of connectivity, journey intelligence, safety and security, electric vehicles, and shared mobility. To support a global fleet of vehicles, which rely on Internet of Things (IoT) technology, the company needed to manually manage and configure several backend services. As WirelessCar’s service footprint grew, so did the number of processes that it needed to manage. Seeking a way to reduce manual labor, WirelessCar engaged Amazon Web Services (AWS) and adopted AWS IoT Core, a service that connects billions of IoT devices and routes trillions of messages to AWS services without managing infrastructure. By adopting this fully managed service, WirelessCar has reduced manual labor and increased its speed to market while scaling its platform to support millions of vehicles.

Adopting AWS IoT Core to Connect Vehicles to the Cloud
WirelessCar sought a fully managed solution to connect its fleet of vehicles to the cloud. Knowing that it wanted to adopt AWS, the company engaged the AWS team and began implementing the solution into its infrastructure. “We have a strategic objective as a company to try to use as many cloud services as we can that do not require any development or management by us, especially services that greatly speed up and make our development processes much simpler,” says Strömberg. “That’s why we looked into AWS IoT Core.” WirelessCar has significantly reduced manual labor by adopting AWS IoT Core. The company can also quickly scale the infrastructure for its backend services up and down as needed, without having to provision additional resources. “Using AWS IoT Core, we can save time and reduce the number of resources needed to deliver a solution,” says Strömberg. By saving time on undifferentiated development work, its developers can focus on innovative, value-generating tasks, such as developing new applications and services. As a result, WirelessCar can develop new solutions at a faster pace, significantly improving its speed to market. A key component of WirelessCar’s backend system is reliability. If the system is unavailable, digital services are disconnected from vehicles, significantly impacting the customer experience. Using AWS IoT Core, the company is able to achieve high availability, delivering a reliable service to its fleet of vehicles. This is especially important for time-sensitive services. “To keep a stable and reliable backend connection, we are striving for zero downtime,” says Strömberg. “Using AWS services helps us achieve this target 24/7 globally.”
Benefits of AWS
• Reduces manual labor with fully managed services
• Supports high availability
• Connects millions of vehicles to the cloud
• Sends thousands of messages per second
• Improved speed to market

Managing a Global Network of Millions of Vehicles
Founded in 1999, WirelessCar has connected more than nine million vehicles in over 100 countries. To deliver its digital services, the company requires granular control over the messages that are sent between each vehicle and its infrastructure for cloud-based business applications. “We have a constantly growing number of clients and need to establish a constant connection with all of these vehicles,” says Henrik Strömberg, solutions architect at WirelessCar. “The number of connected vehicles is ever growing, and so is the number of services offered to these vehicles.” Using AWS IoT Core, WirelessCar facilitates communications with its globally distributed network of customers. “AWS IoT Core solves the connection points for all of our vehicles,” says Strömberg. “We add certificates and set up policies that provide a specific vehicle access to a certain set of topics. This supports high-throughput, bidirectional communication. This is really important to establish a fast, responsive set of services between the vehicle and the backend system.” On AWS, WirelessCar can seamlessly scale to send thousands of messages per second, a number that is increasing exponentially as more vehicles are added to its fleet.

Continuing to Build Innovative Solutions for Vehicles on AWS
WirelessCar has implemented AWS IoT Core into its infrastructure and has used it to support live traffic since early 2020. Working alongside the AWS team, the company configured the solution and met its requirements for security, scalability, and compliance. By taking advantage of the fully managed nature of AWS IoT Core, along with the support from the AWS team, WirelessCar was able to simplify the implementation process. “AWS IoT Core is basically an out-of-the-box solution,” says Strömberg. “There is not a lot of technical groundwork that needs to be done, apart from setting up some processes and establishing naming conventions.” On AWS, WirelessCar successfully connected millions of vehicles to the cloud and established the infrastructure necessary to support an ever-growing number of backend services. The company is planning to develop more solutions alongside the AWS team. “The communication with AWS has been very smooth,” says Strömberg. “I would say that AWS has supported us in the continuous fulfillment of our mission to connect vehicles around the world.”

About WirelessCar
WirelessCar is a provider of digital vehicle services headquartered in Sweden. Founded in 1999, the company has connected more than nine million vehicles in over 100 countries.
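To illustrate the per-vehicle topic pattern described above, here is a minimal boto3 sketch that publishes a message to an AWS IoT Core MQTT topic. In a real fleet, each vehicle would connect over MQTT using its own X.509 certificate and a topic-scoped policy; this backend-side publish, plus the topic name and payload fields, are simplifying assumptions for the sketch, not WirelessCar’s schema.

```python
import json
import boto3

# IoT data-plane client; the region is an assumption for this sketch.
iot_data = boto3.client("iot-data", region_name="eu-north-1")

# Publish a hypothetical command to a per-vehicle topic. A policy attached
# to the vehicle's certificate would restrict it to topics like these.
iot_data.publish(
    topic="vehicles/VIN123/commands",
    qos=1,  # at-least-once delivery
    payload=json.dumps({"command": "preheatCabin", "targetTempC": 21}),
)
```

The same topic hierarchy supports the bidirectional flow the passage mentions: the vehicle subscribes to its command topic and publishes telemetry to a sibling topic that backend services consume.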
"
Yamato Logistics (HK) case study.txt,"Developing a Data Pipeline on AWS
Yamato Logistics (HK) Ltd. is a member of the Yamato Group. The company is a leading logistics solution provider of ecommerce, electronics parts, and cold chain operations in the APAC region. Its main business includes international freight forwarding, door-to-door delivery services, logistics services, and local/international moving services. Yamato Logistics (HK) was experiencing rapid growth in data, and to manage its data more efficiently, it migrated to Amazon Web Services (AWS). However, extracting and processing data had been a slow, manual, monthly exercise: the finance and accounting teams would take several days to extract and transform data in spreadsheets before distributing it to different business units for report generation. The amount of manual effort involved meant this exercise could only be carried out once per month. Yamato Logistics (HK) now uses Amazon Simple Storage Service (Amazon S3) as a data lake, which stores about 5 GB of raw data daily. Data is then processed and cataloged using AWS Glue and becomes immediately available for search and query on Amazon Athena without complex installation or setup. In addition, the company uses Amazon QuickSight to create serverless dashboards in minutes via native integrations with Amazon S3 and Amazon Athena. Samuel Lai, IT project manager at Yamato Logistics (HK), explains, “Previously, our management teams had to wait until the end of the month to receive sales data for report generation. Now, they have access to automated sales reports and can explore data through interactive dashboards on demand.” The company’s data is now automatically updated and available daily instead of monthly. This automated process saves around three working days per month, freeing multiple teams from laborious work on spreadsheets. AWS Glue is a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development. Amazon S3 is an object storage service offering industry-leading scalability, data availability, security, and performance; customers of all sizes and industries can store and protect any amount of data for virtually any use case, such as data lakes, cloud-native applications, and mobile apps. Yamato Logistics (HK) plans to take advantage of the user-friendly, interactive dashboards in Amazon QuickSight to cater to the distinct needs of each of its business units. “We want to provide a more business-centric dashboard, including details like process visibility and key performance indicators, for different business units to view and use on a daily basis,” says Samson. “For example, our freight forwarding business could visualize container shipment information for each of its customers, while the local delivery team could visualize courier routing information on a map of Hong Kong.”
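Once AWS Glue has cataloged the data, querying it from Amazon Athena is a single API call. The boto3 sketch below is illustrative; the database, table, and results bucket are hypothetical names, not Yamato Logistics (HK)’s actual schema.

```python
import boto3

# Athena client; the region is an assumption for this sketch.
athena = boto3.client("athena", region_name="ap-east-1")

# Query data that AWS Glue cataloged from the Amazon S3 data lake.
# Database, table, and the results bucket are placeholders.
query = athena.start_query_execution(
    QueryString="""
        SELECT shipment_date, COUNT(*) AS shipments
        FROM daily_shipments
        GROUP BY shipment_date
        ORDER BY shipment_date DESC
        LIMIT 7
    """,
    QueryExecutionContext={"Database": "fms"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

# Execution is asynchronous; poll get_query_execution for completion,
# then read results with get_query_results or from the output location.
print("Started query:", query["QueryExecutionId"])
```

Because Athena is serverless, a query like this needs no cluster to be provisioned, which is what makes daily, on-demand reporting practical.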
Yamato Logistics (HK) Streamlines Its Data for Increased Business Visibility, Analytics, and Insights to Innovate Its Logistics Operations

Managing the Logistics of Data
Every day, the iconic black cat (kuro-neko) logo of the Yamato Group can be seen all over Hong Kong’s bustling streets and ports as Yamato Logistics (HK) goes about its business of international freight forwarding, door-to-door delivery, and other logistics services. The scale and complexity of these services generates vast amounts of data from customers, transactions, business processes, and supply chains. Furthermore, the company’s growth in areas like ecommerce means the volume of data it manages is increasing rapidly. To handle its data as efficiently as it handles its packages, Yamato Logistics (HK) decided to migrate its applications from an on-premises infrastructure to the AWS Cloud. Samson Yu, executive officer and IT general manager at Yamato Logistics (HK), says, “We had many limitations running our servers on premises, which was weighing down our processes and negatively impacting our service speed and quality. Plus, we wanted to improve our data storage and management so we could leverage data for faster analysis.”

Solution Overview
Yamato Logistics (HK) developed a data pipeline on AWS and uses Amazon Simple Storage Service (Amazon S3) as a data lake, AWS Glue as a serverless data integration service, Amazon Athena as an interactive query service, and Amazon QuickSight for serverless dashboards. As a result, the company has improved its business intelligence, generating valuable insights to improve innovation. The company is also exploring how it can further leverage its data with AWS machine learning to predict future trends, shipping volumes, and sales, so it can remain ahead of changing business needs. Samson concludes, “Our goal is to predict logistics trends so we can analyze and find patterns to create new business opportunities. We want to create a data-driven culture in our company, and using AWS helps us achieve that.”

AWS Services Used
Amazon QuickSight allows everyone in your organization to understand your data by asking questions in natural language, exploring through interactive dashboards, or automatically looking for patterns and outliers powered by machine learning. Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
Improving Operations and Providing Opportunities for Innovation
Yamato Logistics (HK)’s data is now automatically updated and available daily instead of monthly. This automated process saves around three working days per month, freeing multiple teams from laborious spreadsheet work. Samuel Lai, IT project manager at Yamato Logistics (HK), explains, “Previously, our management teams had to wait until the end of the month to receive sales data for report generation. Now, they have access to automated sales reports and can explore data through interactive dashboards on demand.”

With AWS, the company’s data is streamlined, available, and transparent, improving business visibility and generating valuable insights for innovation. That visibility also creates opportunities to revamp existing business processes: Yamato Logistics (HK) can now forecast upcoming demand and allocate its resources — such as containers, trucks, and operators — accordingly.

The company plans to use the interactive dashboards in Amazon QuickSight to cater to the distinct needs of each business unit. “We want to provide a more business-centric dashboard, including details like process visibility and key performance indicators, for different business units to view and use on a daily basis,” says Samson. “For example, our freight forwarding business could visualize container shipment information for each of its customers, while the local delivery team could visualize courier routing information on a map of Hong Kong.”

Yamato Logistics (HK) is also exploring how it can further leverage its data with AWS machine learning to predict future trends, shipping volumes, and sales, so it can stay ahead of changing business needs. Samson concludes, “Our goal is to predict logistics trends so we can analyze and find patterns to create new business opportunities. We want to create a data-driven culture in our company, and using AWS helps us achieve that.”

AWS Services Used
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can store and protect any amount of data for virtually any use case, such as data lakes, cloud-native applications, and mobile apps.
AWS Glue is a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development.
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
Amazon QuickSight allows everyone in your organization to understand your data by asking questions in natural language, exploring through interactive dashboards, or automatically looking for patterns and outliers powered by machine learning."
Zomato Saves Big by Using AWS Graviton2 to Power Data-Driven Business Insights.txt,"Zomato Saves Big by Using AWS Graviton2 to Power Data-Driven Business Insights
2022

About Zomato
Zomato’s mission is better food for more people. Started in 2010, Zomato is a tech-first, India-based restaurant aggregator, food delivery, and dining-out company with over 350,000 listed restaurants across more than 1,000 cities in India. It offers services like restaurant search and discovery, reviews, ordering and home delivery of food, online table reservation, and digital payments when dining out. It also works with restaurant partners to provide tools to engage and acquire more customers, while empowering them with a last-mile delivery service and a one-stop procurement solution, Hyperpure, for ingredients and kitchen products. Zomato also focuses on providing transparent and flexible earning opportunities to its delivery fleet and contributing toward a more sustainable society through its collaboration with the not-for-profit organization Feeding India.

Opportunity | Balancing Backend Performance with Cost
As a restaurant aggregator and food delivery platform, Zomato relies heavily on data-driven insights to enrich the customer experience and improve cost efficiency. Its engineering and product teams need continuous real-time visibility into how customers interact with the platform so they can improve restaurant and cuisine recommendations, sharpen the accuracy of estimated delivery arrival times, and speed up the overall delivery process.
Zomato uses Apache Druid, a real-time database, and Trino, a SQL query engine, to provide fast queries across heterogeneous data sources. In a week, Apache Druid ingests over 20 billion events and serves 8 million queries, while Trino serves over 250,000 queries; the two engines are therefore major cost contributors to the company’s data platform.

Solution | Achieving Faster, More Cost-Efficient Data Processing
To improve the performance of these query engines without increasing costs, Zomato worked with AWS in February 2022 to migrate its Apache Druid and Trino workloads onto AWS Graviton2-based instances.

“AWS Graviton2 has helped us improve the price performance of our data platform by 25 percent,” shares Rajat Taya, senior software engineer at Zomato. “We were looking to tune our clusters for performance and came across the AWS Graviton2-based instances, which are more CPU performant. Moving to AWS Graviton2-based instances was the fastest and easiest way to achieve our goals with little tweaks. The entire process, including testing, took us two weeks.”

The migration reduced the query runtime of both engines by 25 percent, enabling teams to make important decisions based on platform metrics while planning automated interventions to improve the customer experience. The Graviton2-based instances also reduced CPU utilization by 10 percent, helping Zomato maintain the performance of its data platform clusters on fewer instances: the company reduced the peak capacity of its Apache Druid and Trino clusters by 25 percent and 20 percent, respectively. Overall, moving to Graviton2-based instances cut infrastructure costs by up to 30 percent, allowing the engineering and product teams to derive insights at a faster pace.

Outcome | Setting the Standard for Improvements Across the Board
After the successful migration of its Apache Druid and Trino workloads, Zomato intends to migrate its Spark and Flink clusters to Graviton2 for similar performance and cost benefits.

AWS Services Used
AWS Graviton2 processors deliver a major leap in performance and capabilities over first-generation AWS Graviton processors. Graviton2-based instances provide the best price performance for workloads in Amazon EC2 and support a wide range of general-purpose, burstable, compute-optimized, memory-optimized, storage-optimized, and accelerated computing workloads, including application servers, microservices, high-performance computing (HPC), CPU-based machine learning (ML) inference, video encoding, electronic design automation, gaming, open-source databases, and in-memory caches. To learn more, visit aws.amazon.com/ec2/graviton.
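The case study does not name the instance types Zomato selected. As a general illustration of the kind of change involved, Graviton2 migration starts with identifying arm64 instance types available in the target Region (and rebuilding any native dependencies for ARM). A hedged boto3 sketch — the Region and the filtering criteria are assumptions, not Zomato's configuration:

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # region assumed

# Graviton-based instances are identified by the arm64 architecture.
paginator = ec2.get_paginator("describe_instance_types")
pages = paginator.paginate(
    Filters=[
        {"Name": "processor-info.supported-architecture", "Values": ["arm64"]},
        {"Name": "current-generation", "Values": ["true"]},
    ]
)

# Print each arm64 instance type with its vCPU and memory footprint,
# a starting point for matching cluster nodes to candidate replacements.
for page in pages:
    for itype in page["InstanceTypes"]:
        vcpus = itype["VCpuInfo"]["DefaultVCpus"]
        mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
        print(f'{itype["InstanceType"]}: {vcpus} vCPUs, {mem_gib:.0f} GiB')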
"
Zoox Case Study _ Automotive _ AWS.txt,"Zoox Uses AWS for Scalable High Performance Computing to Rapidly Test Autonomous Vehicles
2021

About Zoox
Founded in 2014, Zoox is an independent subsidiary of Amazon and an autonomous vehicle company building a fleet of autonomous, symmetrical, bidirectional, battery-electric vehicles for its ride-hailing service, which is designed to reduce congestion and pollution in urban environments. The vehicles prioritize the rider’s experience over the driver’s: carriage seating promotes social interaction because riders face each other, and each bidirectional vehicle can drive up to a parking space, drop off its riders, and then back out of the space as if it were driving forward. Simulating vast and varied driving scenarios is crucial to the development and production of these vehicles to verify their safety.

Expanding Computing Power Efficiently
Zoox has an on-premises cluster that delivers much of the required computing power for various workloads — mostly simulation, but also machine learning to improve perception ability, as well as data ingestion and processing. As the company has grown, however, its workloads have fluctuated dramatically; its bursty simulation jobs sometimes exceeded the capacity of the on-premises cluster, which is difficult to scale efficiently. Needing to expand its number of machines to handle the volume of computation, Zoox created a hybrid infrastructure model, turning to AWS for high performance computing to supplement its in-house supercomputer cluster. The company chose AWS for the scalability and flexibility to use, and pay for, computing power only when it is needed, letting it redirect resources toward innovative projects that solve complex technical challenges. “We use AWS to handle specialized workloads that need to be close to the data,” says Conrad Herrmann, staff software engineer at Zoox.

Using a Hybrid Model to Increase Speed, Collaboration, and Savings
Zoox runs the open-source workload manager Slurm from AWS Partner SchedMD, which optimizes the speed, throughput, and resource consumption of mission-critical workloads for high performance computing and artificial intelligence. “There are only a handful of job controllers that people use in the high performance computing world, and Slurm is an old standby,” says Herrmann. “We felt very confident that it would work for us.” Slurm uses virtual private clouds containing Amazon EC2 instances that are dynamically allocated based on demand: when someone submits a job to the Slurm controller, the controller can choose to run it in the cloud and select how many instances to use. “We can spin up 1,000 nodes within a single AWS Region and run a job in hours to quickly get results on critical research and development experiments — without waiting for those nodes to become available in our on-premises data center or building another data center,” says Herrmann. To manage Amazon EC2 instances for long-running services and occasional jobs, Zoox uses Amazon Elastic Kubernetes Service (Amazon EKS).
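Zoox's actual Slurm cloud configuration is not shown in the case study. In general, Slurm's elastic-computing hooks (the ResumeProgram and SuspendProgram settings in slurm.conf) shell out to scripts that create and terminate cloud nodes on demand. The following is a minimal boto3 sketch of what such scripts might do; the AMI, subnet, and instance type are placeholders, and a real ResumeProgram would also register the booted nodes with the controller.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # region assumed

def launch_burst_nodes(count: int) -> list[str]:
    """Provision EC2 compute nodes for a queued job (ResumeProgram side)."""
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # cluster node image (placeholder)
        InstanceType="c5.24xlarge",        # compute-optimized; workload-dependent
        MinCount=count,
        MaxCount=count,
        SubnetId="subnet-0123456789abcdef0",
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "slurm-compute"}],
        }],
    )
    return [i["InstanceId"] for i in response["Instances"]]

def terminate_burst_nodes(instance_ids: list[str]) -> None:
    """Scale back down when the job finishes (SuspendProgram side)."""
    ec2.terminate_instances(InstanceIds=instance_ids)

SchedMD documents this power-saving pattern for cloud bursting; the sketch omits node registration, error handling, and retry logic.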
By relying on AWS for computing power, Zoox can select the Amazon EC2 instances that fit its pricing, reliability, and availability needs, with different scales of machines, memory, and network access. “We have to figure out the best architecture of the environment for costs and results,” says Herrmann. “If you reduce all other costs but then have to wait for your results, that increases the total cost to the company. On AWS, we can come up with an effective way of developing the vehicle without delay.” That flexibility also helps Zoox’s teams collaborate more effectively. “There’s a complicated set of interactions between costs, the architecture, and the jobs,” says Herrmann. “We have to work very closely across a lot of disciplines to balance everything. Using AWS helps us put all these pieces of the puzzle together to run these jobs efficiently.”

Zoox stores tens of petabytes of data in Amazon Simple Storage Service (Amazon S3). “Our storage has to scale very quickly to petabytes of data as we increase the number of vehicles and the computations and simulations that we do,” says Herrmann. Slurm launches Amazon EC2 instances that can access that data quickly and perform computations efficiently, and Zoox monitors the data in Amazon S3 using Amazon CloudWatch, which collects monitoring and operational data and provides a unified view of AWS resources, applications, and services running on AWS and on-premises servers. “Using Amazon CloudWatch helps us understand what’s going on and what’s working,” says Herrmann.
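As an illustration of this kind of storage monitoring — not Zoox's actual setup — S3 publishes daily storage metrics to CloudWatch that can be read with boto3. The bucket name below is hypothetical.

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")  # region assumed

# S3 reports BucketSizeBytes to CloudWatch once per day, per storage class.
now = datetime.datetime.utcnow()
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/S3",
    MetricName="BucketSizeBytes",
    Dimensions=[
        {"Name": "BucketName", "Value": "example-simulation-data"},  # placeholder
        {"Name": "StorageType", "Value": "StandardStorage"},
    ],
    StartTime=now - datetime.timedelta(days=14),
    EndTime=now,
    Period=86400,          # one datapoint per day
    Statistics=["Average"],
)

# Print a two-week growth trend for the bucket.
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    petabytes = point["Average"] / 1e15
    print(f'{point["Timestamp"]:%Y-%m-%d}: {petabytes:.3f} PB')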
To start, Zoox tested a single workload on AWS that pulls data from Amazon S3 and indexes it to detect issues that might arise. It then built experimental versions of its software, such as a machine learning task designed to run on AWS, matching each to an Amazon EC2 instance type to measure how well it performed. Next, Zoox ran production workloads on AWS to verify that they would finish in a set amount of time. “The reason we use AWS for these situations is to get results faster so that we can accelerate development,” says Herrmann. “If the vehicle doesn’t do what it has to in safety simulations, we change the behavior of the driving system and try again until we get the right behavior across millions of different situations.”

AWS also helps Zoox manage compute-intensive periods. “When vehicle design engineers make a change to the driving control system, those changes must be validated using hundreds of hours of CPU and GPU time,” says Herrmann. “Using Slurm and AWS, our cluster is able to more than double the number of CPUs and GPUs available for compute tasks. This burst capability accelerates the sensor perception, machine learning, and simulated driving scenarios that are key ingredients to making an autonomous driving system that is comfortable and safe.”

Scaling to Store and Simulate with Hundreds of Petabytes of Data on AWS
Over the next few years, Zoox will push its workloads from the experimental stage to the production stage, which it expects will use hundreds of petabytes of data by the end of 2024. On AWS, Zoox has created a hybrid infrastructure that rapidly and cost-effectively ingests a massive amount of data and runs large simulations, accelerating the testing and development of its autonomous vehicles and improving its speed to market. “Using managed AWS services, we can create complex systems that let us focus on our mission, without worrying about all the other systems,” says Herrmann. “If we find a problem, AWS resolves it for us.”

Benefits of AWS
Facilitates a hybrid infrastructure
Stores and processes tens of petabytes of data; expects to use hundreds of petabytes in the next few years
Spins up 1,000 nodes quickly
Optimizes workloads using Amazon EC2 instances
Increases collaboration across teams

AWS Services Used
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance.
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service to run and scale Kubernetes applications in the cloud or on premises.
Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers."